Video Clips of Augmented Reality System in Operation

The links below point to video files showing the Augmented Reality system in operation. The clips are in MPEG format and were acquired at a frame rate of 15 frames/second.

A note about the videos available here: our method relies on tracking feature points in the scene and using those points to establish an affine coordinate system in which the virtual objects are represented. The clips come from two implementations of the system. The first used the corners of two black rectangles as the tracked feature points; the second used green colored markers. If green markers appear in the image, you are watching the second implementation. High-contrast rectangular areas still appear in those images, but they were not used as tracking features.
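
To make the affine representation concrete, here is a minimal sketch of the reprojection step, assuming a simple affine camera model. Because an affine camera preserves affine combinations, once the four basis points are tracked in an image, any point with known affine coordinates can be placed without recovering camera pose. The function name and coordinates below are illustrative, not taken from the original system.

```python
import numpy as np

def project_affine(basis_2d, affine_coords):
    """Project a point with known affine coordinates into the image.

    basis_2d      -- 4x2 array: tracked image locations of the four
                     affine basis points (rectangle corners or markers)
    affine_coords -- (a, b, c): the point's coordinates in that basis

    An affine camera preserves affine combinations, so the image of
    p = p0 + a*(p1 - p0) + b*(p2 - p0) + c*(p3 - p0) is the same
    combination of the tracked basis projections.
    """
    p0, p1, p2, p3 = np.asarray(basis_2d, dtype=float)
    a, b, c = affine_coords
    return p0 + a * (p1 - p0) + b * (p2 - p0) + c * (p3 - p0)

# Illustrative basis locations tracked in the current frame (pixels)
basis = [(120.0, 200.0), (320.0, 195.0), (125.0, 60.0), (180.0, 170.0)]
print(project_affine(basis, (0.5, 0.5, 0.25)))  # one virtual vertex
```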

Overall view 1 (640 kbytes), Overall view 2 (954 kbytes)
These clips show an overall view of the augmented reality system. The frame with the two black rectangles defines the affine reference frame, and the monitor shows the augmented view of the scene. Initially no object is shown; then a globe appears within the frame. The image on the video monitor may not be very clear. For video clips of just the augmented view, follow the links below.
 
Basic operation
This shows the basic operation of the augmented reality system. A virtual object is positioned on the frame. It appears to stay fixed to the real object as that object is moved around in front of the video camera. Slight movements of the virtual object are due to inaccuracies in feature tracking and delays in the system.

The arrangement of the feature points in the previous video segments was chosen purely for convenience: it allows the object to be located on the L-frame automatically. It is not a requirement of the method. These two segments illustrate augmenting a scene in which the feature points are placed in a more arbitrary arrangement.

Construction example (839 kbytes)
This is a two-dimensional example from construction. A blueprint is overlaid on an area of a wall to give an augmented view of the wall's interior. Distortions of the blueprint due to our affine approximation can be seen, and the viewer can also see improper occlusions of foreground objects by the virtual blueprint.
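
For the planar case, the overlay can be sketched as a standard image warp. The snippet below fits a 2D affine map from three tracked wall points and pastes the blueprint into the frame; the file names and pixel coordinates are hypothetical. Three correspondences fully determine a 2D affine map, and the residual error at any additional tracked point is exactly the affine-approximation distortion mentioned above (an exact planar overlay would need a full homography).

```python
import cv2
import numpy as np

# Hypothetical inputs: the blueprint image and one video frame
blueprint = cv2.imread("blueprint.png")
frame = cv2.imread("wall_frame.png")

h, w = blueprint.shape[:2]
src = np.float32([[0, 0], [w, 0], [0, h]])             # blueprint corners
dst = np.float32([[140, 90], [430, 110], [150, 300]])  # tracked on wall

# Fit the affine map and warp the blueprint into the video frame
M = cv2.getAffineTransform(src, dst)
warped = cv2.warpAffine(blueprint, M, (frame.shape[1], frame.shape[0]))

# Overlay wherever the warped blueprint has content
mask = warped.sum(axis=2) > 0
frame[mask] = warped[mask]
cv2.imwrite("augmented.png", frame)
```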

Animation in affine space (585 kbytes)
An important feature of an augmented reality system is the ability to animate the virtual objects. This clip shows a virtual cube whose translation is animated in the affine coordinate frame.
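
One appeal of the representation is that animation needs no extra machinery: because the object lives in the affine frame, translating it is just a per-frame offset of its affine coordinates, and the tracked basis accounts for camera and object motion. A tiny sketch with an illustrative velocity (the vertices would then be reprojected with project_affine from the earlier sketch):

```python
def translate_affine(vertices, t, velocity=(0.02, 0.0, 0.0)):
    """Offset each vertex's affine coordinates by t * velocity."""
    return [tuple(x + t * v for x, v in zip(vertex, velocity))
            for vertex in vertices]

# Unit cube in affine coordinates; its position after 30 frames
cube = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
frame_30 = translate_affine(cube, t=30)
```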
 
Handling occlusions
This method of augmenting reality uses the computer graphics system to resolve hidden surfaces within the virtual objects and to properly handle virtual objects occluding other virtual objects. Because of how the virtual scene is merged with the live video, a virtual object drawn at a particular pixel location will always occlude the live video at that pixel. By defining real objects in the affine coordinate system, real objects that are closer to the viewer in 3D space can correctly occlude a virtual object.
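
A minimal sketch of that compositing rule, assuming per-pixel depth is available both for the rendered virtual scene and for real-object proxies modeled in the same affine frame (all names here are illustrative):

```python
import numpy as np

def composite(live_rgb, virtual_rgb, virtual_mask, virtual_depth,
              real_depth):
    """Merge live video with the rendered virtual scene.

    virtual_mask  -- True where a virtual object was drawn
    virtual_depth -- depth of the virtual surface at each pixel
    real_depth    -- depth of real-object proxies defined in the
                     affine frame (np.inf where nothing is modeled)

    A virtual pixel survives only where no modeled real surface is
    closer to the viewer; everywhere else the live video shows through.
    """
    show_virtual = virtual_mask & (virtual_depth < real_depth)
    out = live_rgb.copy()
    out[show_virtual] = virtual_rgb[show_virtual]
    return out
```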
 
Dealing with latency
Latency is as much of a problem in augmented reality systems as it is in virtual reality systems. Beyond simply using faster equipment, some researchers are investigating predictive methods to help mitigate latency effects. Most of these efforts use models of the human operator together with position measurements to predict forward in time. Our system has no position measurements available, so instead we experimented with simple forward prediction on the locations of the feature points the system tracks. We assumed constant-velocity motion in image space and performed a simple first-order forward prediction. To filter some of the jitter introduced by noisy feature trackers, we added Kalman filtering to the output of our color feature trackers.
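
A sketch of this predict-and-filter scheme for a single tracked feature is below: a standard constant-velocity Kalman filter on the noisy tracker output, with first-order extrapolation across the system latency. The class name and noise parameters are illustrative, not values from the original system.

```python
import numpy as np

class FeaturePredictor:
    """Constant-velocity Kalman filter for one 2D feature point."""

    def __init__(self, dt=1 / 15, q=1e-2, r=1.0):
        self.x = np.zeros(4)                  # state: [u, v, du, dv]
        self.P = np.eye(4) * 1e3              # large initial uncertainty
        self.F = np.eye(4)                    # constant-velocity model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                 # we measure only (u, v)
        self.Q = np.eye(4) * q                # process noise
        self.R = np.eye(2) * r                # tracker measurement noise

    def update(self, measurement):
        # Predict, then correct with the noisy tracker output
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        y = np.asarray(measurement, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def predict_ahead(self, latency):
        # First-order extrapolation to where the feature should be
        # when the rendered frame finally reaches the display
        u, v, du, dv = self.x
        return np.array([u + du * latency, v + dv * latency])

kf = FeaturePredictor()
for uv in [(100, 100), (102, 101), (104, 102)]:  # tracker output
    kf.update(uv)
print(kf.predict_ahead(latency=0.1))  # compensate ~1.5 frames of lag
```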
 
Video see-through operation (831 kbytes)
Because our system uses only the input from video cameras to define its common coordinate system, switching to a video see-through head-mounted display (HMD) was as simple as placing two cameras on the HMD. These cameras view the real scene in stereo, and the augmented view is presented to the user on the display of the HMD. In this sequence the monitor shows what the user is viewing.
 
Haptics in Augmented Reality
One of the areas that has not been investigated in augmented reality systems is interaction with the virtual objects. We added haptic interaction using a PHANToM™ haptic interface device manufactured by SensAble Technologies. This interface allowed the user to operate the system in WYSIWYF (What You See Is What You Feel) mode.
 
Handling occlusions (again)
In the examples of haptic interaction with the virtual objects, it is easy to see that the user's hand and the PHANToM do not properly occlude the virtual objects when they are in front of them. We do not model the hand or the PHANToM in the virtual scene. (Barely visible in these sequences is a red marker that was added to help the user identify where the active point of the PHANToM was located.)

It might be possible to define a model of the PHANToM for the system, but the user's hand and arm would still be a problem. Instead, we decided to explore foreground detection using color statistics. At runtime, any area of the video image whose color is statistically different from a background scene analyzed before operation begins is assumed to represent foreground motion at the 3D depth of the PHANToM.
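
A minimal sketch of such a color-statistics test: build per-pixel means and standard deviations from frames of the empty background scene, then flag live pixels that deviate too far. The threshold, function names, and the per-channel test are illustrative assumptions, not the original system's exact statistics.

```python
import numpy as np

def build_background_model(frames):
    """Per-pixel color mean and spread from frames of the empty scene,
    gathered before operation starts."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0), stack.std(axis=0) + 1e-3

def foreground_mask(frame, mean, std, k=3.0):
    """True where the live color differs from the background by more
    than k standard deviations in any channel.  Those pixels are then
    treated as foreground at the PHANToM's depth so that the hand and
    arm occlude the virtual objects."""
    dev = np.abs(frame.astype(np.float32) - mean) / std
    return (dev > k).any(axis=2)
```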

