FreeVR Library Programming Tutorial (w/ OpenGL)
The FreeVR Library will be used to exemplify many VR interface and
programming techniques.
FreeVR will work in most local VR facilities (e.g. CAVE™ or head-based display),
as well as on the available desktop machines in a simulated-VR mode.
It can be used on PCs with Linux and OpenGL installed, or
in a (currently very) limited way on PCs with Cygwin installed.
The FreeVR webpage is www.freevr.org.
This tutorial takes the approach of starting with an extremely simple initial
example and then advancing in very small increments (baby steps) until we can
perform some interesting tasks in virtual reality.
Another point to make about this tutorial is that it is not intended as a
means to learn advanced OpenGL programming.
The computer graphics in these examples are all very simple.
The goal of the tutorial is to highlight programming tasks that are unique
to programming for a virtual reality system.
On the other hand, some basic OpenGL techniques can be learned by following
this tutorial, as it will show how to handle texture maps, billboards,
cutting planes, and moveable lights.
Other tutorials are under development for interfacing FreeVR to additional
rendering and world simulation systems.
Presently there is a tutorial available for the SGI Performer scene-graph
library, but as Performer has greatly decreased in usage, new tutorials
will cover similar but more popular libraries.
This tutorial has been most recently tested with FreeVR version 0.6a.
Components of a VR system
The FreeVR integration library — a crucial component of a VR system
- The FreeVR Library began development in 1998 to provide an open-source
alternative to other VR libraries that were then in widespread use but
were not open systems.
- FreeVR was designed from the outset to handle both projection-style
and head-based VR systems.
- FreeVR has been designed to work on a variety of different operating
systems (though at present most are Unix-based).
- FreeVR works with basic OpenGL, OpenSceneGraph and Performer rendering
libraries.
- FreeVR has an API designed for simplified porting from certain other
non-open-source VR integration libraries.
- FreeVR has an extensive tutorial demonstrating many common programming
interface features available to the VR programmer.
Features of a typical VR-integration library
- It handles the I/O necessary for creating a VR experience.
- User tracking
- User input controls
- Visual rendering process
- It handles the graphics windowing interface by:
- doing the window placement (and deletion)
- calculating the proper projection matrix for each window
- calling the render routine each frame for each viewpoint
- It is configurable.
- Can be configured to run almost any type of VR display.
- Can be interfaced to a wide variety of tracker and other input systems.
- Can be adjusted for any placement of screens and trackers.
- Many other features can be controlled.
- How to render the scene is left to the application developer.
- It has a built-in simulator display allowing any VR application
to be run on any compatible workstation desktop screen.
This is generally used for development and testing, but also
allows additional people to join in a multi-person VR
experience.
(NOTE: many libraries work under most types of UN*X
operating systems, and occasionally non-UN*X OSes.)
- It allows VR users and developers to share applications and code.
Example 0: A simple virtual world — in Xwindows (ex0)
- The common method of rendering real-time visual scenes of a
virtual world is to represent the world as a collection of
polygons, specify a view ("camera") position, pass this
information to the rendering hardware, update the world based
on any pending input events, and do it again.
- A typical (e.g. SGI or Linux) desktop graphical application uses OpenGL
in combination with some Xwindows API.
- Example 0 (ex0_xwin.c) is such
an application.
- This lecture does not cover how to program with OpenGL,
but we need to have something in our virtual world to look at.
A simple shape rendering API is used by all the examples
(shapes.c).
- This lecture also does not cover the use of Makefiles, but
a Makefile is provided to compile
this and all the example programs.

- Perhaps a fairer point of comparison for a VR integration library is
not a simple OpenGL rendering program, but the GLUT or SDL desktop
integration libraries.
Basic structure of FreeVR Library application
The FreeVR Library runs many tasks in parallel.
This helps reduce tracker lag, as well as maintain better frame rates.
- Simulation process
- Rendering processes (1 - N)
- Input processes (1 - N)
- Telnet communication process
Function callbacks are used to inform the library how to render, etc.
Shared memory is used by the library to maintain consistency across
processes, and by the application to pass data from the simulation to
the rendering.
Example 1: The bare essentials FreeVR application (ex1)
- The FreeVR Library handles all the viewpoint calculations, so these
can (must) be left out of the FreeVR application.
- The FreeVR Library monitors many types of input, and allows the
application to consume this information.
- The basic functions needed for a FreeVR application are:
- vrStart() — forks the processes, etc.
- vrFunctionSetCallback() — sets the render function
- vrExit() — cleans house
- vrCallbackCreate() — make the render function a callback
(a sketch combining these four calls appears at the end of this example)
- Example 1 (ex1_bare.c) is such
an application using the same simple world as example 0.
- There are some significant
differences between ex0 and ex1.
- This example is the rudimentary way to program a FreeVR application.
- Keystrokes (on the numeric keypad) can be used to alter the
simulated view into the world.
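As a rough sketch, the bare structure looks something like this (the
argument lists of vrCallbackCreate() and vrFunctionSetCallback() are
assumptions here; ex1_bare.c is the authoritative version):

	#include <unistd.h>	/* for sleep() */
	#include "freevr.h"

	void draw_world(void);	/* the application's render routine */

	int main(int argc, char *argv[])
	{
		vrStart();	/* fork the FreeVR processes */

		/* wrap the render routine in a callback and hand it to the */
		/* library (the argument style is an assumption)            */
		vrFunctionSetCallback(VRFUNC_DISPLAY, vrCallbackCreate(draw_world, 0));

		sleep(10);	/* let the world run for a while */

		vrExit();	/* clean house */
		return 0;
	}

	void draw_world(void)
	{
		/* OpenGL calls render one frame of the simple world here */
		/* (e.g. using the routines from shapes.c)                */
	}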

Example 2: A more civilized FreeVR application (ex2)
- Same world as before, but this time a little more civilized programming.
- The additional FreeVR functions in this application are:
- vrConfigure() — pass arguments to override the default configuration
- VRFUNC_DISPLAY_INIT — render process initializations
- vrFrame() — VR system upkeep
- vrGet2switchValue() — read a keyboard input
- Self-documentation — a collection of functions that give users a quick reference for using the application:
- vrSystemSetName() — specify the name of the application
- vrSystemSetAuthors() — specify the name(s) of application's authors
- vrSystemSetExtraInfo() — specify some extra information about the application
- vrSystemSetStatusDescription() — provide a status string about the state of the application
- vrInputSet2switchDescription() — specify the action generated by a particular physical button input
- The application programmer provides three functions to create the
virtual world, in addition to the simple main() (a sketch of this
structure appears at the end of this example):
- init_gfx() — one-time graphics initialization
- draw_world() — render the world once
- update_world() — make any changes to the world
("World Simulation")
- Example 2 also adds some signal handling routines to ensure that an
interrupt signal received during the VR system initialization will not
interfere with that operation — which could result in only a
partial termination of the program, leaving some orphaned processes behind.
- Example 2 (ex2_static.c) makes use of
these more advanced features, while using the same simple world as before.
- There are some minor
differences between ex1 and ex2.
- For the user the only difference is how the application terminates
(pressing the 'Escape' key vs. waiting 10 seconds).
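As a rough sketch, the more civilized structure looks something like this
(the argument lists are again assumptions, as is the use of 2switch
input 0 for the 'Escape' key; ex2_static.c is the authoritative version):

	#include "freevr.h"

	void init_gfx(void);		/* one-time graphics initialization */
	void draw_world(void);		/* render the world once            */
	void update_world(void);	/* "World Simulation"               */

	int main(int argc, char *argv[])
	{
		vrConfigure(&argc, argv, NULL);	/* override the defaults (assumed form) */
		vrStart();

		vrFunctionSetCallback(VRFUNC_DISPLAY_INIT, vrCallbackCreate(init_gfx, 0));
		vrFunctionSetCallback(VRFUNC_DISPLAY, vrCallbackCreate(draw_world, 0));

		/* loop until the 'Escape' input fires */
		while (!vrGet2switchValue(0)) {
			update_world();	/* make any changes to the world */
			vrFrame();	/* VR system upkeep              */
		}

		vrExit();
		return 0;
	}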
Example 3: A world with some action (ex3)
The two objects of the simple virtual world are now dynamic.
They move over a simple path.
One new (static) object has been added to the world.
The user still cannot interact with the world other than moving to view
it from another perspective.
- World dynamics are done in the Simulation (main) process.
(Because there may be many rendering processes.)
- Data is passed from the world-simulation process to the rendering
processes via shared memory.
- Collecting shared data into a class or structure is wise.
- New function in the main process to initialize the world — init_world().
- The rendering callback is now passed an argument pointing to the data describing the virtual world.
- The additional FreeVR functions in this application are:
- vrShmemAlloc() — allocate some shared memory
- vrCallbackCreate() (revisited) — passing arguments
- vrCurrentSimTime() — how long have we been doing this?
By incorporating the time within the simulation into the circular path
calculation, we ensure that the rate of movement is constant regardless
of the speed of the rendering computer.
It is good programming practice to make use of the passage of real time
when calculating movements in a virtual world, to guarantee consistency
across systems.
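For instance, a circular path driven by elapsed simulation time might look
like this (a sketch; the object fields and constants are illustrative):

	#include <math.h>

	#define PATH_RADIUS 2.0		/* illustrative: units              */
	#define PATH_RATE   0.5		/* illustrative: radians per second */

	void update_world(WorldDataType *wd)
	{
		double now = vrCurrentSimTime();	/* seconds of simulation time */

		/* position depends on elapsed time, not frame count, so the */
		/* rate of movement is the same on fast and slow machines    */
		wd->obj1_x = PATH_RADIUS * cos(now * PATH_RATE);
		wd->obj1_z = PATH_RADIUS * sin(now * PATH_RATE);
	}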
- Now all of our application-programmer-supplied operations (init_world(),
update_world(), and draw_world()) are passed the wd
pointer to the world database.
This is how the state of the virtual world is initialized/simulated and then
used for rendering.
- All changes to the state of the virtual world should happen in the simulation
routine (here called update_world()). The display (aka rendering) process
just reads information about the state of the virtual world and converts it to
graphics primitives.
- By their nature, many virtual reality displays render drastically
different perspectives on different screens.
As a result, the OpenGL lighting mechanism can produce mismatched
colorization across screens if the proper rendering order is not followed.
The solution is to set the light location/direction value after the
perspective matrix has been calculated and placed on the rendering stack.
In other words, the location/direction of the light must be set at the
beginning of the rendering ("draw_world()") routine, as sketched at the
end of this example.
- Example 3 (ex3_dynamic.c) is
such an application — with a slightly more complex world.
- There are minor
differences between ex2 and ex3.
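The light-placement ordering described above might look like this (a
sketch using standard OpenGL calls; the light position itself is
illustrative):

	void draw_world(WorldDataType *wd)
	{
		GLfloat light_position[] = { 0.0f, 10.0f, 0.0f, 1.0f };	/* illustrative */

		/* set the light location FIRST -- FreeVR has already placed this */
		/* window's perspective matrix on the stack, so every screen will */
		/* illuminate the world consistently                              */
		glLightfv(GL_LIGHT0, GL_POSITION, light_position);

		/* ... now render the objects of the world ... */
	}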
Aside: Simulating VR on the Desktop
Frequently, the availability of a virtual reality facility might be limited.
This might be the result of high usage of the facility, or for some
developers, the facility might be located at a distance that precludes
frequent visits.
In any case, there is a general need for developers of VR applications
to be able to do some amount of testing at their desktop, on systems
that do not have the input or output capabilities of a true VR display.
Therefore most VR integration libraries include the ability to simulate
running VR applications with more mundane interfaces.
- Simulator mode enables developers to work at their desktop for much
of the programming effort required to create a new virtual reality
application.
- It is important to note however that the user interface and other
aspects of a good VR application must be tested on actual VR hardware,
so the simulator cannot fully replace a virtual reality system.
- Applications running on a computer not configured as a VR display
will run in "simulator mode."
- Complete instructions on using the simulator mode are in the
FreeVR User's Guide.
- FreeVR Simulator mode translates keystrokes and mouse movements into
VR inputs:
- Controller Inputs
- mouse BUTTONS — controller (aka wand) buttons
- SPACE + mouse — controller (aka wand) joystick
(hold down the spacebar and move the mouse in X & Y)
- Head and Wand Tracking
- ARROW keys — move active 6-dof tracker left, right, forward, back
- SHIFT + ARROW keys — move active 6-dof tracker left, right, up, down
- ALT + ARROW keys — rotate active 6-dof tracker left, right, up, down
- 'w' key — select the wand as the currently active 6-dof tracker
- 's' key — select the head (skull) as the currently active 6-dof tracker
- 'n' key — select the next 6-dof tracker to be the active tracker
- 'r' key — reset the active 6-dof tracker to its initial position
- '*' key — reset all 6-dof trackers to their initial position
- Simulator View Controls
- Keypad ARROW keys ('2','4','6','8') — rotate the simulated view
- Keypad '-' & '+' keys — zoom simulated view in and out
- Keypad '5' key — reset the simulated view
- Keypad 'Enter' key — set a new home view
- Keypad 'Home' key — jump to the home view
- Simulator Rendering Controls
- Keypad '*' key — toggle the FPS display (not limited to simulator views)
- Keypad '/' key — toggle the simulator representations
- Keypad 'PgUp' key — toggle the self help info (not limited to simulator views)
- Keypad 'PgDn' key — toggle input tracking
- Keypad 'Del' key — toggle the timing statistics (not limited to simulator views)
- Keypad 'End' key — toggle world rendering
- Misc
- '?' — print simulator usage information to the tty
Aside: FreeVR naming conventions
To facilitate the writing and interpretation of FreeVR application code,
the FreeVR library adheres to a set of naming conventions.
The conventions are:
- All functions & types begin with "vr";
- All Performer related functions begin with "vrPf";
- All features of the function are contained within the function's name;
- Each word that identifies a feature begins with an upper case letter;
- Acronyms are upper case throughout (e.g. "RW");
- Function groups begin with the group name followed by the operation.
E.g.:
- vrRender...()
- vrUserTravel...()
- vrGet...()
- vrShmem...()
- Functions that return a complex type value begin with the type name.
[NOTE: the result is also placed in the first argument of
such functions, and the return value is a pointer to that result.]
E.g.:
- vrMatrixGet...()
- vrVectorGet...()
- vrMatrixMult...()
- vrVectorSubtract()
- Function groups with similar operations but taking different arguments
have the argument list as part of the name.
E.g.:
- vrMatrixSetTranslation3d() — takes 3 doubles
- vrMatrixSetTranslationAd() — takes an array of doubles
Aside: FreeVR complex types
FreeVR provides a small set of type definitions for operations on
mathematical types that contain more than a single scalar value.
In most cases, the internal data representation is a single dimensional
array of values.
The types are:
- vrMatrix — internally represented as 16 doubles stored as array field "v"
- vrPoint — internally represented as 3 doubles stored as array field "v"
- vrVector — internally represented as 3 doubles stored as array field "v"
- vrEuler — internally represented as 2 arrays of 3 doubles stored as fields "t" & "r"
The types vrPoint and vrVector have identical internal representations.
Therefore they can be easily converted through a simple type-cast.
However, mathematically points and vectors are different, therefore it is strongly advised
that the appropriate type be used for any given circumstance.
NOTE: one can think of points and vectors as actually being 4 element arrays, with the
fourth element of a point being '1' and the fourth element of a vector being '0'.
The use of the vrEuler type is strongly discouraged.
There are two occasions in which vrEuler is not discouraged:
- when porting code of an existing VR application that was written with a VR integration library in which the use of Euler angles was encouraged,
- when using the billboard orientation calculation routines provided by FreeVR.
For functions returning values as a complex type, the first argument is always a
pointer to the location into which the result will be copied.
The function then returns a copy of that pointer as the return value.
The reason for doing this is to eliminate the need for the library to
continually allocate and deallocate memory when performing these operations.
By returning a copy of the pointer as the return value, it is possible to
nest operations.
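For example, using function names from the naming conventions above (a
sketch; vrVectorNormalize() is used purely as an illustration, and the
exact signatures should be checked against the function reference):

	vrVector diff;
	vrVector a, b;		/* assume these were filled in elsewhere */

	/* the result is copied into the first argument ...                 */
	vrVectorSubtract(&diff, &a, &b);

	/* ... and a pointer to it is also returned, so calls can be nested */
	vrVectorNormalize(&diff, vrVectorSubtract(&diff, &a, &b));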
Aside: FreeVR inputs
Having the ability to render worlds that provide the proper perspective
for a VR user is a good start to creating a virtual reality experience.
But a world in which all the user can do is move their head about to
see from different perspectives can become boring rather quickly.
Therefore, the next two examples (ex4 & ex5) begin to demonstrate
the basic means by which physical inputs from a hand-held controller
can affect the virtual world.
Later examples will introduce more complex means of interaction,
including direct input interactions, virtual input interactions,
as well as a means by which agent input interactions can be
accomplished.
There are three types of FreeVR inputs that are most commonly used:
- 2switch (i.e. a button)
- returns a single integer value
- valuator
- returns a double precision floating point value in [-1.0, 1.0]
- a joystick is implemented as a pair of valuators
- 6sensor (i.e. a 6-degree-of-freedom position)
- returns a vrMatrix value
- is used for returning the positions of 6-dof tracker devices (e.g. the position of the head or a controller)
- there are many functions for extracting particular aspects of a 6sensor rather than the complete matrix
(e.g. the vector pointing forward from the hand controller)
Each type of input can be queried in one of two ways:
- vrGet...Value() returns the current value
- vrGet...Delta() returns the difference in the value between now and the last time a query was made
Using the "Delta" form of the input queries can be especially useful when determining
whether a button was just pressed or just released.
NOTE: In general, input queries should take place only in the simulation routine,
and not in any of the rendering routines.
The effects of the inputs should then be calculated as part of the simulation,
and passed to the rendering routines as part of the state of the world.
Example 4: Very simple inputs — buttons (ex4)
The use of buttons to affect the virtual world is the most basic form of input
possible.
In this example, buttons 1 and 3 (which frequently correspond to
left and right buttons on a VR controller) move the green pyramid up and down
by one unit.
Button 2 resets the green pyramid back to the center of the working volume.

This type of interaction where the manipulation of a physical input device
causes an immediate response to an object in the virtual world is known
as the "physical control" input method.
Here is a sample of the new code:
	void update_world(WorldDataType *wd)
	{
		...
		if (vrGet2switchDelta(1) == 1)
			wd->obj3_y += 1.0;
		if (vrGet2switchDelta(2) == 1)
			wd->obj3_y = 5.0;
		if (vrGet2switchDelta(3) == 1)
			wd->obj3_y -= 1.0;
		...
	}
NOTE that by using the "Delta" version of the input query we can easily specify
that the world database should be altered precisely when a particular button has
just been pressed (vs. all the time that it is pressed).
There are minor
differences between ex3 and ex4.
Example 5: Very simple inputs — valuators/joysticks (ex5)
Another typical controller input for virtual reality (as well as other
hardware such as game controllers) is the joystick.
A joystick is really just two valuators connected to allow one to
easily move in both the X and Y directions at the same time.

- Here is a code snippet:
	#define JS_EPSILON 0.125
	#define MOVE_FACTOR 0.25

	void update_world(WorldDataType *wd)
	{
		double joy_x, joy_y;
		...
		joy_x = vrGetValuatorValue(0);
		joy_y = vrGetValuatorValue(1);
		/* fabs() and copysign() come from <math.h>; delta_time holds */
		/* the seconds elapsed since the last simulation frame        */
		if (fabs(joy_x) > JS_EPSILON)
			wd->obj3_x += joy_x * delta_time * MOVE_FACTOR;
		if (fabs(joy_y) > JS_EPSILON)
			wd->obj3_z += (joy_y - copysign(JS_EPSILON, joy_y))
				* delta_time * MOVE_FACTOR;
		...
	}
NOTES:
It is generally a good idea to have a "dead zone" around the zero value
due to the nature of physical inputs — often there is no precise zero
value so a small positive or negative number will be returned instead.
Also, light presses on sensitive joysticks may cause unwanted movement.
The JS_EPSILON factor represents the size of the dead zone.
This example shows two ways of handling the "dead zone".
In the case of the joy_x value, as soon as the X dimension of
the joystick moves past the epsilon value, the movement will immediately
jump to that value, causing a discontinuity in the movement.
For the joy_y value, the epsilon value is subtracted from
the input (in a sign-neutral way) such that when the input moves slightly
outside the epsilon range, it will respond with slight movements
reflecting the small delta between the actual value and the epsilon.
The previous button inputs (from example 4) are still active, so they
too can control the green pyramid, and in particular button 2
will reset the location of the pyramid — though the code changed
a bit to reset all three location parameters.
Example 6: Very simple output — 2D text (ex6)
Text rendering is often useful as part of the user interface,
or for debugging applications.
This example introduces a very basic way of adding text into the
rendering of the virtual world.
It is not the best method for most circumstances, but will serve
to get us started.

- The FreeVR Library provides:
- vrRenderText() — Render a string as 2D text in
the virtual world.
- vrRenderInfo — an internal FreeVR structure that
provides a conduit for information through the rendering
routine.
- This example provides our first exposure to the vrRenderInfo type.
This FreeVR internal structure contains information about the current
rendering state.
This state information is not directly useful for the application
programmer, but it is required by some of the functions that will
be called by the render routine — such as the vrRenderText()
function used in this example.
Thus, the new argument to the render routine, which is then passed
directly to the FreeVR render helper functions, can be thought of as
a conduit for the render state information (see the sketch at the end
of this example).
- NOTE: 2D raster text is sub-optimal for rendering 3D worlds,
especially when the user can change their relative view of
the text. It is always displayed parallel to the bottom of
the screen, and always at a constant size.
The result is that as the text moves relative to the user
it will have the appearance of changing in size.
This effect is the result of the text remaining a constant
physical size, while the rest of the virtual world will be
changing in response to user movement.
The relative changes between the text and the rest of the
virtual world are what produce the appearance of the text
growing and shrinking.
One way to circumvent this issue is to affix the text to the
physical location of one of the screens.
Since the relationship between the screen and the user
is always physically matched between the virtual and real
worlds, the text will always be rendered correctly.
- NOTE also: the OpenGL lighting model is disabled during the rendering
of the text. This is to keep the text rendered as a bright white,
rather than the dimmer color that would occur due to only being
partially illuminated by the lights in the scene.
- Example 6 (ex6_text.c) code
shows how to render simple 2D text.
- There are minor
differences between ex5 and ex6.
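A sketch of how the conduit and vrRenderText() fit together (the exact
vrRenderText() argument list and the use of glRasterPos3f() for placement
are assumptions; ex6_text.c is the authoritative version):

	void draw_world(vrRenderInfo *renderinfo, WorldDataType *wd)
	{
		/* ... render the objects of the world ... */

		glDisable(GL_LIGHTING);		/* keep the text a bright white */
		glColor3f(1.0f, 1.0f, 1.0f);
		glRasterPos3f(0.0f, 5.0f, -5.0f);	/* illustrative placement */
		vrRenderText(renderinfo, "Hello, virtual world!");
		glEnable(GL_LIGHTING);
	}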
Example 7: A virtual pointer & Coordinate Systems (ex7)

- One of the more difficult aspects of VR programming is
dealing with coordinate systems.
- There is a coordinate system (CS) for the real world,
the tracking system, each screen,
the virtual world, the head, the hand, each eye, etc.
- The FreeVR Library provides some relief:
- vrRenderTransform6sensor() — adds a matrix to the
OpenGL matrix stack that moves (transforms) from the
world coordinate system to the coordinate system of
the given sensor.
- This function is called in the application's drawing routine
(see the sketch at the end of this example).
- Example 7 (ex7_pointer.c) uses
vrRenderTransform6sensor() to render a (long, cyan) pointer
"attached" to the wand.
- There are minor
differences between ex6 and ex7.
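A sketch of the wand-pointer rendering (the sensor number and the
vrRenderTransform6sensor() argument list are assumptions, and
draw_pointer() stands in for the actual shape-drawing call;
ex7_pointer.c is the authoritative version):

	void draw_world(vrRenderInfo *renderinfo, WorldDataType *wd)
	{
		/* ... render the objects of the world ... */

		glPushMatrix();
		/* move from the world CS to the wand's CS (sensor 1: an assumption) */
		vrRenderTransform6sensor(renderinfo, 1);
		draw_pointer();		/* a long, cyan pointer along the wand's axis */
		glPopMatrix();
	}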

Example 8: Objects with behavior & using vectors (ex8)
Objects that move on their own can make for a more interesting virtual world.
In this example, we use the direction the wand is pointing (a 6sensor input)
to affect the direction of a small projectile.

The wand will be used to shoot a small yellow cube.
Once the cube is shot, it continues to move on its own.
This example introduces a new type and two new functions:
- vrVector — a structure with a three element array
storing a direction.
Field "v" is an array of doubles.
- vrVectorGetRWFrom6sensorDir() — gets the direction
a 6-dof sensor is pointing relative to the real world.
The result is a vrVector.
- vrGet2switchDelta() — gets the change in value
between now, and the last time this function was called
for the given input.
Thus, a 2-switch can return:
- -1: switch was just released,
- 0: switch has not changed, and
- 1: switch was just pressed.
Example 8 (ex8_shoot.c)
lets the user shoot a small yellow cube.
There are minor
differences between ex7 and ex8.
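A sketch of the shooting logic (the field names, sensor number, and
per-frame integration are illustrative; ex8_shoot.c is the
authoritative version):

	#define SHOT_SPEED 5.0		/* illustrative: units per second */

	void update_world(WorldDataType *wd)
	{
		vrVector wand_dir;
		double   now = vrCurrentSimTime();
		double   delta_time = now - wd->last_update;	/* illustrative field */

		if (vrGet2switchDelta(1) == 1) {	/* button just pressed */
			/* launch the cube along the direction the wand is pointing */
			vrVectorGetRWFrom6sensorDir(&wand_dir, 1);
			wd->shot_dir    = wand_dir;
			wd->shot_active = 1;
		}

		if (wd->shot_active) {
			/* once shot, the cube keeps moving on its own, at a time-based rate */
			wd->shot_x += wd->shot_dir.v[0] * SHOT_SPEED * delta_time;
			wd->shot_y += wd->shot_dir.v[1] * SHOT_SPEED * delta_time;
			wd->shot_z += wd->shot_dir.v[2] * SHOT_SPEED * delta_time;
		}

		wd->last_update = now;
	}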
Example 9: Object Selection (ex9)
- Before we can manipulate an object, we need to specify which one.
- By determining where the wand is, we can put it to use.
- The FreeVR Library helps with:
- vrPointGetRWFrom6sensor() — tells us where things are
in virtual space.
- vrPoint type — stores an X,Y,Z location as an array
in field "v".
Note: in the example code, a #define is used to treat the
vrPoint.v field as a simple array named "wand_location".
- Example 9 (ex9_contact.c)
lets the user select one of the objects in the virtual world.
- There are minor
differences between ex8 and ex9.
- This form of selection is called "selection by contact."
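A sketch of selection by contact (the object fields and selection radius
are illustrative; ex9_contact.c is the authoritative version):

	#define SELECT_RADIUS 0.5	/* illustrative: units */

	void update_world(WorldDataType *wd)
	{
		vrPoint wand_location;
		double  dx, dy, dz;

		/* where is the wand (sensor 1: an assumption) in real-world space? */
		vrPointGetRWFrom6sensor(&wand_location, 1);

		dx = wd->obj_x - wand_location.v[0];
		dy = wd->obj_y - wand_location.v[1];
		dz = wd->obj_z - wand_location.v[2];

		/* the object is selected while the wand is within contact range */
		wd->obj_selected = (dx*dx + dy*dy + dz*dz < SELECT_RADIUS*SELECT_RADIUS);
	}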

Example 10: Manipulating the world (ex10)
- Doing something with the selected object.
- We will once again use the FreeVR function:
- vrGet2switchValue() — a user controlled input variable
- With this we can tell the world simulation when we want the
object to mimic our movement.
- Example 10 (ex10_manip.c)
does just that.
- There are minor
differences between ex9 and ex10.

Example 11: Using Locks to Safeguard the Code (ex11)
- Locking code is added to remove the potential for bugs that can occur
when one process is writing to the same memory from which another
is simultaneously reading.
- The FreeVR Library provides:
- vrLockCreate() — Create a new vrLock
- vrLockWriteSet() — Prevent other code fragments
bound by this lock from reading or writing to the
associated memory.
- vrLockWriteRelease() — Allow other code fragments
bound by this lock to access the associated memory.
- vrLockReadSet() — Prevent other code fragments
bound by this lock from writing to the associated memory.
NOTE: other reading threads are permissible.
- vrLockReadRelease() — Allow other code fragments
bound by this lock free access to the associated memory
(see the sketch at the end of this example).
- With the use of locks, it is sometimes more efficient to store
information from shared memory in a local variable to reduce
the number of times locking is required.
- Example 11 (ex11_locks.c) code
shows how to do read and write locking.
- There are minor
differences between ex10 and ex11.
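The locking pattern sketched (assuming the world data contains a vrLock
field named "lock", created with vrLockCreate(); ex11_locks.c is the
authoritative version):

	/* simulation process: writing to shared memory */
	vrLockWriteSet(wd->lock);	/* no one may read or write during the update */
	wd->obj3_y += 1.0;
	vrLockWriteRelease(wd->lock);

	/* rendering process: reading from shared memory */
	double obj3_y;
	vrLockReadSet(wd->lock);	/* writers are blocked; other readers are not */
	obj3_y = wd->obj3_y;		/* copy to a local to keep the lock brief     */
	vrLockReadRelease(wd->lock);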
Example 12: Traveling through the world (ex12)
Another common way to "interact" with a virtual world is to move through it.

- The FreeVR Library provides a collection of utilities to accomplish this:
- vrGetValuatorValue() — user inputs from the "joystick"
on the wand (a sketch of joystick-driven travel appears at
the end of this example).
- vrVectorGetRWFrom6sensorDir() — a direction from
one of the tracked objects.
- vrPointGetVWFromUser6sensor() — get a sensor's
position in the virtual world's coordinates with
respect to a particular user.
- vrUserTravelTranslate3d(),
vrUserTravelRotateId(),
vrUserTravelReset() — functions to specify the
relationship between the real and virtual worlds.
- vrRenderTransformUserTravel() — a matrix that moves
from the real world position to a virtual world position.
- Notice that some objects (perhaps the user interface) can be
left in real-world coordinates, and other objects can be in
the coordinates of one of the tracked sensors (such as the
wand avatar).
- Note that we again must be concerned with how much time has
passed between simulation frames.
Otherwise, the speed of travel will depend on
the speed of the computation hardware.
- Another note: interaction with objects in the virtual world
now requires that we perform calculations with
respect to the virtual world coordinate system.
- Example 12 (ex12_travel.c)
lets the user fly through the virtual world.
- There are minor
differences between ex11 and ex12.
- There are many ways to fly.
- There is also a C++ version of example 12 (ex12_travel.cpp)
that demonstrates the small number of changes needed to compile FreeVR with a C++
application.
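A sketch of joystick-driven travel (the user argument and the exact
travel-function signatures are assumptions, and
draw_virtual_world()/draw_user_interface() stand in for the actual
drawing calls; ex12_travel.c is the authoritative version):

	#define TRAVEL_SPEED 2.0	/* illustrative: units per second */

	void update_world(WorldDataType *wd)
	{
		double joy_x = vrGetValuatorValue(0);
		double joy_y = vrGetValuatorValue(1);
		double dt    = wd->delta_time;	/* illustrative field */

		/* move the traveler through the virtual world at a time-based rate */
		vrUserTravelTranslate3d(VR_ALLUSERS,
			joy_x * TRAVEL_SPEED * dt,	/* lateral  */
			0.0,				/* vertical */
			joy_y * TRAVEL_SPEED * dt);	/* fore/aft */

		/* a button resets the traveler to the starting point */
		if (vrGet2switchDelta(2) == 1)
			vrUserTravelReset(VR_ALLUSERS);
	}

	void draw_world(vrRenderInfo *renderinfo, WorldDataType *wd)
	{
		glPushMatrix();
		vrRenderTransformUserTravel(renderinfo);	/* real world to virtual world */
		draw_virtual_world(wd);		/* objects the user travels through */
		glPopMatrix();

		draw_user_interface(wd);	/* objects left in real-world coordinates */
	}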
Example 13: Agent-style interaction with the world (ex13)
- There are many possible interface techniques.
- Some techniques can be seen in the CAVE demos (CRUMBS, Boiler, Crayoland).
- Many of the currently common methods are still very
reminiscent of the desktop metaphor interfaces.
- Text is also a possible interface, one that allows other
possible interfaces to specify functions to the world simulation.
- Pressing keys on the keyboard is the simplest example of this.
- The ubiquitous 'ESC' key is one example
- Some applications also read keyboard presses to change modes, etc.
(although to be useful this generally requires someone sitting by
the keyboard).
- Text commands can also be sent via a socket connection.
- Example 13 (ex13_socket.c) is
an example of this.
- There are some significant
differences between ex12 and ex13.
- The socket connection can take any message, and use a simple parser,
or something more complex.
- By setting up a listen-socket, the application is behaving
as the server half of a client/server relationship.
- The client can take many forms:
- keystrokes in a telnet session
- simple send-a-message unix program
- scripting language (e.g. Perl)
- and (building on the first method) voice
- A simple socket API is provided by the FreeVR library
(a generic sketch of the server half appears below).
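As an illustration of the server half, a minimal non-blocking
listen-socket setup using standard POSIX calls (shown in place of
FreeVR's own socket API, whose function names are not listed here):

	#include <string.h>
	#include <unistd.h>
	#include <fcntl.h>
	#include <sys/socket.h>
	#include <netinet/in.h>

	/* create a listen-socket on the given port; returns the socket fd */
	int open_listen_socket(int port)
	{
		struct sockaddr_in addr;
		int                sock = socket(AF_INET, SOCK_STREAM, 0);

		memset(&addr, 0, sizeof(addr));
		addr.sin_family      = AF_INET;
		addr.sin_addr.s_addr = htonl(INADDR_ANY);
		addr.sin_port        = htons(port);

		bind(sock, (struct sockaddr *)&addr, sizeof(addr));
		listen(sock, 1);

		/* non-blocking, so the simulation loop can poll accept() each frame */
		fcntl(sock, F_SETFL, O_NONBLOCK);
		return sock;
	}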
Example 14: User Interface: Virtual Controls (ex14a-14f)
The following six examples show how different forms of virtual controls
can be implemented and used to control actions within the virtual world.
These examples are implemented in a very rudimentary fashion, with
limits on how they can be positioned in the virtual world.
These examples now require some additional shapes not included in the
simpler version of the shape source code.
The new shapes code is in
(wrs_shapes.c).
Example 14a: Virtual Controls: Toggle Button (ex14a)
- Introducing the first virtual control: a toggle button.
- Interaction with the different types of objects (artifacts
in the world vs. user interface objects) must each be done
in the proper coordinate systems.
- When the virtual button is pressed, its state toggles between
on and off.
- Virtual controls affect the interaction with the world, just
like other types of controls.
Here, travel control is only enabled when the button is "on".
- Example 14a (ex14a_button.c)
lets the user toggle a virtual button to enable/disable travel.
- There are a few
differences between ex13 and ex14a.
Example 14b: Virtual Controls: Slider (ex14b)
- When the virtual slider is moved, a control value can be
moved through the range [-1, 1].
- The slider can be grabbed by the plate, or by the rod.
- The slider controls the speed of rotation (negative values reverse
the direction).
- Example 14b (ex14b_slider.c)
lets the user adjust the speed of travel with a slider-bar widget.
- There are a few
differences between ex14a and ex14b.
Example 14c: Virtual Controls: Lever (ex14c)
- The lever is also a single valued valuator (like the slider).
- The lever rotates about the base when grabbed by the ball-handle.
- Implementing the lever is a little more complicated than the slider-bar.
- In this example, the lever is used to control the travel rotation
of the virtual world.
- Example 14c (ex14c_lever.c)
lets the user control the direction of travel with a lever widget.
- There are a few
differences between ex14b and ex14c.
Example 14d: Virtual Controls: Radio Buttons (ex14d)
- Like the buttons on a car radio, the virtual radio button widget
allows the user to select one option from among many choices.
- This widget uses text strings to indicate the available choices,
but icons or 3D models could also be used.
- This example does not make use of the radio button selection.
- Example 14d (ex14d_radio.c)
lets the user make a radio button choice.
- There are a few
differences between ex14c and ex14d.
Example 14e: Virtual Controls: Joystick (ex14e)
- Like the real-world counterpart, a virtual joystick control is a
two-valuator input device.
- We can implement virtual controls such as the joystick to operate
as though they have springs that will return them to center, or
the control can remain where it was released.
The same is true for the lever.
A "spring-loaded" SNAPBACK feature can be enabled in this example
by setting a #define prior to compilation.
- In this example, the virtual joystick now operates the travel
controls instead of the physical joystick on the wand.
- Example 14e (ex14e_joyst.c)
lets the user control direction and speed of travel with a
virtual joystick widget.
- There are a few
differences between ex14d and ex14e.
Example 14f: Virtual Controls: Push Button (ex14f)
- The virtual push-button control operates like a typical spring-loaded
contact switch.
The button is on when the user touches it, but once the user
ceases to touch the button, it turns off.
- Here, the push-button resets travel back to the initial coordinates.
- Example 14f (ex14f_pushbut.c)
lets the user reset travel with the touch of a button.
- There are a few
differences between ex14e and ex14f.
Example 15: The four methods of manipulation (ex15)
This example is not yet fully implemented.
Example 16: 3D text in the world (ex16)
- By using the GLUT library, we can render (flat) 3D text in the world.
- Now text can be located in the virtual world (not just on a screen's
surface), and it will appear in the correct size and orientation.

- Example 16 (ex16_3dtext.c)
adds GLUT-based 3D text to the virtual world.
- There are minor
differences between ex13 and ex16.
NOTE that we are skipping back to example 13 for the code differences, since
this example (and future ones) does not include the virtual widgets.
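A sketch of GLUT-based 3D text using stroke fonts (standard GLUT calls;
the scale factor is illustrative):

	#include <GL/glut.h>

	/* draw a string as flat 3D geometry; because the text lives in the */
	/* world, it scales and rotates correctly as the user moves         */
	void draw_3d_string(const char *str)
	{
		glPushMatrix();
		/* GLUT stroke glyphs are roughly 100 units tall; scale them down */
		glScalef(0.0025f, 0.0025f, 0.0025f);
		for (; *str != '\0'; str++)
			glutStrokeCharacter(GLUT_STROKE_ROMAN, *str);
		glPopMatrix();
	}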
Example 17: Objects with textured surfaces (ex17)
Objects with textured surfaces make for a more interesting-looking virtual world.

Texture images can be rather large.
Therefore it becomes important to ensure that sufficient shared memory
is allocated to contain the texture images.
Standard OpenGL texture mapping techniques are used to generate
objects with textures.
In this example there is only a single texture, so it is declared once,
and then texturing is enabled/disabled as required.
Often textures are read from image files.
For this example a simple checkerboard texture is created algorithmically.
The function to create the texture (make_texture_checker_pattern())
will either create a white/gray pattern or a white/transparent pattern
depending on the third argument.
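A sketch of such a function (the body here is illustrative; the real
version is in ex17_texture.c):

	/* fill a size x size RGBA image with a checkerboard; when "transparent" */
	/* is set, the dark squares get alpha 0 instead of a gray color          */
	void make_texture_checker_pattern(unsigned char *image, int size, int transparent)
	{
		int s, t;

		for (t = 0; t < size; t++) {
			for (s = 0; s < size; s++) {
				int            dark  = ((s / 8) + (t / 8)) % 2;	/* 8x8 texel squares */
				unsigned char *texel = image + 4 * (t * size + s);

				texel[0] = texel[1] = texel[2] = (dark ? (transparent ? 0 : 128) : 255);
				texel[3] = ((dark && transparent) ? 0 : 255);
			}
		}
	}

The resulting image is then handed to OpenGL with the usual glTexImage2D() call.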
Example 17 (ex17_texture.c)
puts textures on particular objects in the world.
There are minor
differences between ex16 and ex17.
Example 18: Objects with billboarded textured surfaces (ex18)
A common technique in computer graphics to simplify rendering
(and thus reduce rendering time), is to use a textured plane that
rotates to continually face the viewer — known as a "billboard".
Common examples include trees and spheres.

In a virtual reality system, the location of the user is
affected by their physical movements.
Thus, additional calculations must be performed to properly
orient billboards toward the user.
FreeVR has a special function to perform these calculations:
vrRenderGetBillboardAngles3d().
Again we use the vrRenderInfo conduit to pass rendering
state information into the function.
The second argument (after the conduit) is a vrEuler.
This is one of the few places where the use of Euler angles
is not discouraged.
The final argument(s) provide the location of the billboarded object.
This information can be provided in one of two ways:
- vrRenderGetBillboardAngles3d() — three double precision values
- vrRenderGetBillboardAnglesAd() — an array of three double precision values
Depending on the symmetry of the billboarded object, there are two
common ways to perform the billboarding rotations:
- cylindrical — symmetry about the Y (up) axis; and
- spherical — symmetry about the X (lateral) and Y (up) axes.
Cylindrical billboarding requires one rotation to properly face the
user, while spherical requires two rotations.
An example of each is shown in the code.
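A sketch of the cylindrical case (indexing the vrEuler rotation array
with 1 for the Y axis, and treating the angles as degrees, are
assumptions; draw_textured_quad() stands in for the actual
textured-plane drawing):

	void draw_billboard(vrRenderInfo *renderinfo, double x, double y, double z)
	{
		vrEuler angles;

		/* ask FreeVR how to rotate a billboard at (x,y,z) to face the user */
		vrRenderGetBillboardAngles3d(renderinfo, &angles, x, y, z);

		glPushMatrix();
		glTranslated(x, y, z);
		glRotated(angles.r[1], 0.0, 1.0, 0.0);	/* one rotation about Y (up) */
		draw_textured_quad();
		glPopMatrix();
	}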
Example 18 (ex18_texture.c)
shows cylindrical and spherical billboards.
There are minor
differences between ex17 and ex18.
Example 19: The world in miniature (ex19)
The World-in-Miniature (WIM) technique is a common virtual world
interface tool.

By separating out the rendering of just the virtual world (as opposed
to the user interface objects), we can reuse the new virtual-world rendering
function to render the world twice — once at full scale, and again
as the miniature view.
In this example, a representation of the user's location is included
(a sphere).
In order to properly place the sphere representation, the travel
matrix must be removed from the matrix stack.
This is accomplished by multiplying by the inverse of the
travel matrix using the function vrRenderTransformUserTravelInv().
For this example, the method of travel has been altered to walk-through
(rather than fly-through).
This is to better demonstrate the usefulness of the WIM as a means
of navigating through the simple maze.
Example 19 (ex19_wim.c)
lets the user summon a WIM view.
There are minor
differences between ex18 and ex19.
Example 20: Clipping away part of the world (ex20)
Using standard OpenGL clipping routines, we can provide a means for
the user to see-through part of the world.

By clipping away part of the virtual world, we can peer inside and
through objects.
Through judicious use of enabling and disabling the clipping plane
operation, we can control which parts of the world are subject to
clipping.
In this example, the user interface objects cannot be clipped, but
the virtual world can be.
A simple rectangle shape is added to the user interface to represent
the orientation of the clipping plane.
Of course, planes are infinite in scope, so the rectangle is only
a plane-segment.
NOTE that the specification of the OpenGL clipping plane parameters
are done in real-world coordinates.
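A sketch of the clipping setup using standard OpenGL calls (the plane
equation is illustrative, and draw_user_interface()/draw_virtual_world()
stand in for the actual drawing calls):

	void draw_world(vrRenderInfo *renderinfo, WorldDataType *wd)
	{
		/* plane ax + by + cz + d = 0; clip away everything with y < 0 here */
		GLdouble plane_eqn[4] = { 0.0, 1.0, 0.0, 0.0 };

		draw_user_interface(wd);	/* never subject to clipping */

		glClipPlane(GL_CLIP_PLANE0, plane_eqn);	/* real-world coordinates */
		glEnable(GL_CLIP_PLANE0);
		draw_virtual_world(wd);		/* subject to clipping */
		glDisable(GL_CLIP_PLANE0);
	}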
Example 20 (ex20_clipplane.c)
lets the user remove half the virtual world.
There are minor
differences between ex19 and ex20.
Example 21: Using OpenGL lighting for interactive lights (ex21)
In addition to manipulating objects in the world, sometimes it is valuable to
manipulate other features of the world, such as the lighting parameters.
In this case, that gives us the effect of holding a flashlight
(a sketch of the OpenGL light setup appears at the end of this example).
- Again, we make use of standard OpenGL capabilities to affect the
lighting of the world.
- The standard fixed light (the "scene light") can now be toggled on and off.
- In addition to the scene light, a light can be held in the hand.
The type of hand-held light can be:
- none,
- a positional light,
- a directional light, and
- a directional spot light.
- Unlike all of the previous examples, this one makes use of two additional
button inputs (4 & 5).
Therefore, to allow this example to work on systems with fewer than five
buttons, two of the previous operations were moved to buttons 4 & 5,
allowing the control of the lighting parameters to be performed with
buttons 1 & 2.
- Example 21 (ex21_flashlight.c)
lets the user control the scene light and a hand-held light.
- There are minor
differences between ex20 and ex21.
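The three hand-held light types map onto standard OpenGL lighting
parameters; a sketch (the glLightfv() calls would be issued after the
wand's transform is on the matrix stack, so the positions are relative
to the wand; the light-type enumeration is hypothetical):

	GLfloat at_wand[4]  = { 0.0f, 0.0f,  0.0f, 1.0f };	/* w=1: positional  */
	GLfloat wand_dir[4] = { 0.0f, 0.0f, -1.0f, 0.0f };	/* w=0: directional */

	switch (wd->light_type) {
	case LIGHT_POSITIONAL:
		glLightfv(GL_LIGHT1, GL_POSITION, at_wand);
		glLightf(GL_LIGHT1, GL_SPOT_CUTOFF, 180.0f);	/* no spot cone */
		break;
	case LIGHT_DIRECTIONAL:
		glLightfv(GL_LIGHT1, GL_POSITION, wand_dir);	/* direction only */
		break;
	case LIGHT_SPOT:
		glLightfv(GL_LIGHT1, GL_POSITION, at_wand);
		glLightfv(GL_LIGHT1, GL_SPOT_DIRECTION, wand_dir);
		glLightf(GL_LIGHT1, GL_SPOT_CUTOFF, 30.0f);	/* illustrative cone */
		break;
	}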
Tutorial Summary:
This tutorial demonstrates most of the important and unique features of
the FreeVR virtual reality integration library.
Each function is introduced as part of a small progression toward
increasingly more capable demonstration applications.
The functions that are not presented here tend to be very similar
to ones that are presented, along with many of the mathematical operations,
which do not need to be fully enumerated.
The FreeVR webpage includes a
functions reference document
that fully lists all the functions needed for robust application
development.
Other tutorials are also under development to demonstrate the use of
FreeVR with scene-graph and physics libraries.
These will become available on the
FreeVR tutorials webpage as
they reach sufficiently documented states.
There is also a FreeVR-Performer tutorial available on the webpage,
though that is of limited value since the Performer library is not
as widely used as it has been in the past.
Programming Caveats to Remember:
As a reminder, there are a handful of things to watch out for when
writing virtual reality applications:
- Applications not tested with multi-process rendering may not
work with multi-process rendering.
- E.g. applications developed in conjunction with an ImmersaDesk
or other single-screen display system and/or simulator displays
may not work in a CAVE.
- This can be avoided by testing the application in a
multi-process rendering situation.
- A special configuration file will allow for multi-process rendering
even for a simulated VR system on multi-CPU systems.
Last modified 10 January 2010.
Bill Sherman, shermanw@indiana.edu
© Copyright William R. Sherman, 2010.
All rights reserved.
In particular, republishing any files associated with this tutorial
in whole or in part, in any form (including electronic forms)
is prohibited without written consent of the copyright holder.
Porting to other rendering systems is also prohibited without
written consent of the copyright holder.