The essence of my project was to move an object on a table in the physical world and have an object in a digital environment move and rotate to match the position and orientation of the physical object. To do this, I first constructed a table with a clear acrylic surface, so that a webcam placed underneath it, looking up, could see the bottom of any object on the table. That idea was taken from Kollision’s Tangible 3D Tabletop. The object was marked with two symbols: one to track position, and a second to track rotation.
When the user starts the program, a window displaying the video feed from the webcam pops up. Overlaid on the video is text instructing the user to align the symbols on the bottom of the object inside preset frames. The user aligns the symbols within the frames and presses ‘Enter’, and the program takes a snapshot of each symbol separately. The program then finds the area in the video image that most closely matches each captured symbol.
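The symbol-matching step can be sketched as a brute-force template search. This is only an illustrative sketch, not the project's actual code (which used OpenCV); it scans every position in a grayscale frame for the patch with the lowest sum-of-squared-differences against the captured template:

```python
import numpy as np

def find_symbol(frame, template):
    """Return the (row, col) top-left corner where `template` best
    matches `frame`, scored by sum of squared differences (SSD)."""
    fh, fw = frame.shape
    th, tw = template.shape
    best_score, best_pos = None, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            patch = frame[r:r + th, c:c + tw].astype(float)
            score = np.sum((patch - template.astype(float)) ** 2)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Tiny demo: embed a 2x2 template at a known spot in a blank 6x6 frame.
frame = np.zeros((6, 6), dtype=np.uint8)
template = np.array([[255, 0], [0, 255]], dtype=np.uint8)
frame[3:5, 2:4] = template
print(find_symbol(frame, template))  # → (3, 2)
```

In practice OpenCV's built-in `cv2.matchTemplate` does this search far faster than a Python loop, which is presumably why the project used OpenCV for tracking.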
To calculate position, the program finds the center point of the first symbol. To calculate angle, it draws a line from the center of the first symbol to the center of the second and takes the inverse tangent of dy/dx.

After the user presses ‘Enter’, the program sends this position and angle data to a separate program running Panda3D. This was necessary because, at the time, Panda3D ran only on Python 2.6 while OpenCV ran only on Python 2.7. The Panda3D program loads an environment and an ‘Actor’ (3D model); I chose a bamboo forest environment and a panda model. When the user moves the object on the table, Panda3D updates the position and orientation of the model based on the position and orientation of the object on the table.
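The pose calculation above can be sketched in a few lines. This is a hypothetical helper, not the original code; it uses `atan2` rather than a raw `atan(dy/dx)`, which avoids a division-by-zero when the two symbol centers share an x-coordinate:

```python
import math

def pose_from_centers(c1, c2):
    """Given the pixel centers of the two tracked symbols, return the
    object's position (center of symbol 1) and its heading in degrees
    (angle of the line from symbol 1 to symbol 2)."""
    x1, y1 = c1
    x2, y2 = c2
    # atan2 handles all quadrants and a vertical line (dx == 0) safely.
    heading = math.degrees(math.atan2(y2 - y1, x2 - x1))
    return (x1, y1), heading

pos, heading = pose_from_centers((100, 100), (100, 140))
print(pos, heading)  # → (100, 100) 90.0
```

The position and angle returned here are what would then be handed off to the Panda3D process to drive the model.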
Class: Fundamentals of Programming
Date: Summer 2012, 3rd Year
Duration: 1 Week
Instructor: David Kosbie
Awards: 1st Place, term project
Tangible Input Table