Sunday, April 26, 2009

Blender's GameLogic - Quick Comments

Several hours down the drain trying to find the connection between Python scripting in Blender and the GameLogic module. Blender has two different Python environments. There are Python scripts that can be run from the command line and from the Scripts window; the GameLogic module is not visible from this environment. The other environment is inside the game engine. Scripts are embedded in this environment through the buttons window by attaching a script composed in the text window. Any Python script intended for use in the game engine must be referenced by its text Datablock name; saving the file to disk does not matter. An example of everything set up and working:
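For reference, here is a minimal sketch of a script attached that way (Blender 2.4x game engine API; the printed text is illustrative). With the script's Datablock name entered in a Python controller, it runs when the controller fires and GameLogic is available:

import GameLogic

# the controller that triggered this script and the object that owns it
cont = GameLogic.getCurrentController()
own = cont.getOwner()

print 'GameLogic is visible here:', own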

Wednesday, April 22, 2009

Another interesting data set: US Residential Energy Survey

I don't know what to do with this one yet, but it looks like another interesting data set to visualize: the Residential Energy Consumption Survey
http://www.eia.doe.gov/emeu/recs/

Sunday, April 19, 2009

A site for geographic data

Here is a nice site with data useful for visualizations that combine city names, zip codes, longitude, and latitude. The site is handy because the data can be downloaded and embedded directly into a visualization.
http://www.geonames.org/

Saturday, April 18, 2009

How to use an IDE (Notepad++) with Python and Blender

After some searching for existing solutions, the result I like best is using Notepad++. There is a feature in Notepad++ that allows execution of a command line program, passing it the name of the script being edited. I'm working on a Windows platform mostly, so here is how to set up Notepad++ to work with Blender.
Once you have a working version of Notepad++ installed, type up a test script. Here is a 'hello world' script that will verify the installation:

Here is the code a little easier to copy:

import Blender
import sys

print 'Hello World'

# test blender functionality
obj = Blender.Object.Get()
print str(obj)

sys.stdout.flush()

In Notepad++, use the Run menu option to execute the script. From Run, specify:

"C:\Program Files\Blender Foundation\Blender\blender.exe" -P "$(FULL_CURRENT_PATH)"
Press "Run!" and Blender should start and be left running. If you find the command window is that launched Blender, you should see the following.
With this, a tool to build and test more complex scripts is now ready for use.

Sunday, April 12, 2009

How to build a scene from scratch in Blender

As a test of what it takes to build something from scratch, here is a very simple (and ugly) hello world program.
import Blender
from Blender import Scene, Object, Camera, Text3d
from Blender import Lamp
from Blender.Scene import Render
from Blender import Material
from Blender import Window
#
scene = Scene.New()
camdata = Camera.New()
lampdata = Lamp.New()
lampdata.setEnergy(10.0)
lampdata.setSpotSize(180)
txt = Text3d.New()
txt.setText('Hello World')
cam = scene.objects.new(camdata)
cam.setLocation(0.0, -7.0, 1.0)
cam.setEuler(1.5, 0.0, -0.35)
lamp = scene.objects.new(lampdata)
lamp.setLocation(0.0, -1, 5)
msg = scene.objects.new(txt)
msg.setEuler(1.8, 0.0, -0.05)
#
mat = Material.New('newMat')        # create a new Material called 'newMat'
mat.rgbCol = [0.8, 0.2, 0.2]        # change its color
mat.setAlpha(0.2)                   # mat.alpha = 0.2 -- almost transparent
mat.emit = 0.7                      # equivalent to mat.setEmit(0.7)
mat.mode = Material.Modes.ZTRANSP   # turn on Z-Buffer transparency
mat.setName('RedBansheeSkin')       # change its name
mat.setAdd(0.8)                     # make it glow
mat.setMode('Halo')
#
msg.setMaterials([mat])
#
Window.RedrawAll()
#
After executing this program and using the renderer to create an image, the following image is created.

How to generate an image from Blender

Blender can be scripted to generate an image using Python. Once all of the objects are in place, the rendering and saving of the image can be scripted. The following code snippet shows how to generate an image from a script. Animation is handled differently.
import Blender
from Blender import *
from Blender.Scene import Render

scn = Scene.GetCurrent()
context = scn.getRenderingContext()

# enable separate window for rendering
Render.EnableDispWin()

# draw the image
context.render()

# save the image to disk
# to the location specified by RenderPath
# by default this will be a jpg file
context.saveRenderedImage('test')
If this is done right after starting Blender, the following image should be saved to '\tmp\test.jpg'. It should look like this.

How to delete an object from Blender

Objects in Blender are linked to Scenes. There may be more than one scene in Blender at any given moment. The following code snippet demonstrates how to remove the cube that is in the default scene in Blender using Python.

Preparation:

  1. Open Blender
  2. Split the windows to see an outline view, a script view, and a 3-D view. This will help see what happens when the object is deleted.
  3. Manually enter the python script. When the last line is entered, the views will be updated and the default cube will disappear.
import Blender
from Blender import Scene, Window

# get a list of all scenes in Blender
scns = Scene.Get()

# get the first scene in the list
scn = scns[0]

# get the first object in the scene,
# this should be the cube
obs = list(scn.objects)

# unlink the object from the scene
scn.unlink(obs[0])

# update all of the views in the user interface
Window.RedrawAll()

Saturday, April 11, 2009

How to build a list of all objects in Blender

Sample code to get a list of objects in a scene in Blender. Access the Python scripting by first selecting the Scripts window, then selecting Scripts>System>Interactive Python Console.
import Blender
from Blender import Scene

# get a list of scenes in Blender
scn = Scene.Get()

# build a list of objects in the first scene
obs = [ob for ob in scn[0].objects]
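As a quick sanity check (a sketch, not part of the original console session), the list can then be inspected from the same console:

# print the name and type of each object found
for ob in obs:
    print ob.getName(), ob.getType()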

Tuesday, April 7, 2009

Loading data from the NHTS

The data in the NHTS is available in a couple of forms. My preference is to work with the CSV format. In Python, it is easy to load a CSV table into an object in memory. For the CSV format, the NHTS data is organized into 4 files, each containing the data for one table. There are common columns in each file that allow information to be correlated between the tables. A CSV file is typically organized into a header row followed by data rows, and the NHTS follows this format.
HOUSEID,VHCASEID,VEHID, ... ,DRVRCNT,MSAPOP
010000018,01000001801,01, ... ,2,7608070
.
.
.
915637259,91563725904,04, ... ,2,-1
One way to get this data into Python is using the file object. This allows a file to be opened for read access, then each line in the file to be loaded individually. To organize the data, there are several options. For this data I used a Python dictionary that associates each header with a column from the file. The simplest implementation of this function is only a few lines:
def loadTable(filename, maxRows=1e9, keepList=[], ignoreList=[]):
    f = file(filename, 'r')
    count = 0
    table = {}
    for line in f:
        if count == 0:
            # first row holds the column headers
            headers = line.split(',')
            for header in headers:
                table[header] = []
        else:
            # remaining rows hold the data
            line = line.strip()
            line = line.strip('\n')
            row = line.split(',')
            for idx in range(0, len(row)):
                table[headers[idx]].append(row[idx])
        count += 1
    return table
While this code will work, it is not robust to a host of problems. The wrong file name can be supplied, the system may not have enough memory, or a row may be ill-formed. Additionally, no docstring is provided, so dir() and help() cannot offer any guidance on the function. Also, you might not want all of the data in the file to be loaded; certain columns can safely be omitted. To correct these issues, the following function will be used to load the data tables:
def loadTable(filename, maxRows=1e9, keepList=[], ignoreList=[]):
    '''
    This function will load up to maxRows from the CSV file,
    with the first row specifying the names of the columns,
    into a dictionary where each key specifies a column.
    To minimize memory use, there are two optional lists of
    strings which limit the columns loaded from the file.
    If keepList==[] and ignoreList==[] then all columns from
    the file will be loaded.  If keepList!=[], then only those
    columns in keepList and not in the ignoreList are loaded.
    If keepList==[] then all columns which match names in
    ignoreList are omitted from the returned table.
    '''
    try:
        f = file(filename, 'r')
        count = 0
        table = {}
        for line in f:
            if count == 0:
                headers = line.split(',')
                for header in headers:
                    table[header] = []
            else:
                line = line.strip()
                line = line.strip('\n')
                row = line.split(',')
                for idx in range(0, len(row)):
                    if ((headers[idx] in keepList or len(keepList) == 0) and
                        (len(ignoreList) == 0 or not (headers[idx] in ignoreList))):
                        table[headers[idx]].append(row[idx])
            count += 1
            if count == maxRows:
                print 'Terminated load - maximum number of rows exceeded'
                return table
        return table
    except IOError:
        print 'IOError: File can not be opened...'
        return {}
    except MemoryError:
        print 'MemoryError: ran out of working memory...'
        return table
    except:
        print 'Unknown error when loading table ....'
        return {}
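A quick usage sketch follows. The file name here is just an illustration, not the actual name of one of the NHTS CSV files; the column names come from the header row shown above:

# load only a few columns from one of the CSV files (file name is hypothetical)
vehicles = loadTable('vehicles.csv', maxRows=100000,
                     keepList=['HOUSEID', 'VHCASEID', 'VEHID'])
if vehicles:
    print 'Loaded %d rows' % len(vehicles['HOUSEID'])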

Monday, April 6, 2009

Build the working directory

To replicate the work I'm going to share, you will need to build a directory that will hold both the source code and the data. Follow these steps to build that directory (a quick check script is sketched after the list).
  1. Create a new directory for testing.
  2. Download the 2001 NHTS ASCII.CSV data set.
  3. Unzip the contents of that file into the testing directory. You should have four CSV files.
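Something like the following can confirm the unzip worked; the directory name is illustrative, so use whatever testing directory you created:

import glob, os

# path to the testing directory (hypothetical -- adjust to your own)
testDir = 'C:\\nhts_test'

# list the CSV files that the unzip produced; there should be four
csvFiles = glob.glob(os.path.join(testDir, '*.csv'))
print 'Found %d CSV files:' % len(csvFiles)
for name in csvFiles:
    print ' ', name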

If you do not have Python already available on your machine, I'd recommend starting with Portable Python. I'm using Portable Python 1.1, which implements Python 2.5.4 as a portable program (try saying that 3 times fast!). Everything initially will work in this implementation.

Using Python to Visualize the 2001 NHTS - Project Scope

This project will work through using Python for visualizing a complex data set to gain insight and understanding. One of my favorite data sets is the US National Household Travel Survey. This survey summarizes demographics and a day's worth of driving for 70,000 people over a one-year period. The data set is so complex that after about two years of looking at it, I've only just started to find the interesting information contained in it. My initial goals are as follows:
  1. Write pure Python libraries for inputting the data sets and preprocessing the results. I'm going to write everything in pure Python from scratch, rather than use existing tools, for two reasons. First, I need the exercise to learn Python better. Second, I want my tools for importing the data to be reusable under Jython and for use with Blender out of the box.
  2. Use the following tools to visualize information in the NHTS:
  3. I'll be using the following Python distributions:
  4. Create both static and animated visualizations in each tool.
  5. Gain some insight into travel patterns that I did not have before starting this project.
  6. Build some tools for exploring the 2008 release of the NHTS.