Cave automatic virtual environment

From Wikipedia, the free encyclopedia

A cave automatic virtual environment (better known by the recursive acronym CAVE) is an immersive virtual reality environment where projectors are directed to between three and six of the walls of a room-sized cube. The name is also a reference to the Allegory of the Cave in Plato's Republic, in which a philosopher contemplates perception, reality, and illusion.

General characteristics

The first CAVE was invented by Carolina Cruz-Neira, Daniel J. Sandin, and Thomas A. DeFanti at the University of Illinois Chicago's Electronic Visualization Laboratory in 1992.[1] A CAVE is typically a video theater situated within a larger room. The walls of a CAVE are typically rear-projection screens, although flat-panel displays are becoming more common. The floor can be a downward-projection screen, a bottom-projected screen, or a flat-panel display. The projection systems have very high resolution because the short viewing distance demands very small pixels to retain the illusion of reality.

The user wears 3D glasses inside the CAVE to see the 3D graphics it generates. People using the CAVE can see objects apparently floating in the air and can walk around them, getting a proper view of what they would look like in reality. This was initially made possible by electromagnetic sensors, but tracking has since shifted to infrared cameras. The frames of early CAVEs had to be built from non-magnetic materials such as wood to minimize interference with the electromagnetic sensors; the change to infrared tracking removed that limitation. A CAVE user's movements are tracked by sensors typically attached to the 3D glasses, and the video continually adjusts to retain the viewer's perspective. Computers control both the video and the audio. There are typically multiple speakers placed at multiple angles in the CAVE, providing 3D sound to complement the 3D video.[citation needed]


A lifelike visual display is created by projectors positioned outside the CAVE and controlled by the physical movements of a user inside it. A motion capture system records the user's position in real time. Stereoscopic LCD shutter glasses convey a 3D image: the computers rapidly generate a pair of images, one for each of the user's eyes, based on the motion capture data, and the glasses are synchronized with the projectors so that each eye sees only the correct image. Since the projectors are positioned outside the cube, mirrors are often used to reduce the throw distance required from the projectors to the screens. One or more computers drive the projectors; clusters of commodity desktop PCs are a popular choice because they cost less than the specialized graphics workstations they replaced.
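The per-eye image generation described above can be sketched in a few lines: the tracked head position is split into two eye positions, and each eye gets its own off-axis viewing frustum for a given wall. This is a minimal illustrative sketch, not code from any actual CAVE library; the interpupillary distance, the coordinate convention (a wall perpendicular to the z axis), and all names are assumptions.

```python
# Illustrative sketch of head-tracked stereo for one CAVE wall.
# Assumptions: meters as units, a wall lying in the plane z = screen_z,
# and an average interpupillary distance of 6.5 cm.

IPD = 0.065  # interpupillary distance in meters (assumed average)

def eye_positions(head, right_dir):
    """Split a tracked head position into left/right eye positions,
    offset along the head's unit 'right' direction."""
    half = IPD / 2.0
    left = tuple(h - half * r for h, r in zip(head, right_dir))
    right = tuple(h + half * r for h, r in zip(head, right_dir))
    return left, right

def off_axis_frustum(eye, wall_left, wall_right, wall_bottom, wall_top,
                     wall_z, near):
    """Asymmetric (off-axis) frustum bounds at the near plane for a wall
    at depth wall_z, as seen from an arbitrary eye position (x, y, z).
    The wall edges are projected toward the eye onto the near plane."""
    ex, ey, ez = eye
    dist = ez - wall_z      # eye-to-wall distance along z
    scale = near / dist     # similar-triangles projection factor
    return ((wall_left - ex) * scale, (wall_right - ex) * scale,
            (wall_bottom - ey) * scale, (wall_top - ey) * scale)
```

Because each eye is at a different position, the two frusta differ slightly; rendering the scene once per frustum yields the stereo pair that the shutter glasses alternate between.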

Software and libraries designed specifically for CAVE applications are available. There are several techniques for rendering the scene; three scene graphs are in popular use today: OpenSG, OpenSceneGraph, and OpenGL Performer. OpenSG and OpenSceneGraph are open source, while OpenGL Performer is free but its source code is not included.


For an image to appear undistorted and correctly placed, the displays and sensors must be calibrated, and the calibration process depends on the motion capture technology in use. Optical and inertial-acoustic systems require only configuring the zero point and the axes used by the tracking system. Calibrating electromagnetic sensors (such as those used in the first CAVE) is more complex. In that case, a person puts on the special glasses needed to see the images in 3D, and the projectors fill the CAVE with many one-inch boxes set one foot apart. The person then takes an instrument called an ultrasonic measurement device, which has a cursor in the middle of it, and positions the device so that the cursor is visually in line with each projected box. This process continues until almost 400 different blocks have been measured. Each time the cursor is placed inside a block, a computer program records the block's location and sends it to another computer. If the points are calibrated accurately, the images projected in the CAVE show no distortion. The calibration also lets the CAVE correctly identify where the user is located and track their movements precisely, so the projectors can display images based on where the person is inside the CAVE.[2]
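The "zero point and axes" configuration used by optical and inertial-acoustic trackers amounts to a rigid change of coordinates: once the CAVE's origin and axis directions have been measured in the tracker's native frame, every subsequent reading can be re-expressed in CAVE coordinates. The sketch below illustrates that step only; the function names and frames are hypothetical, not drawn from any specific tracking SDK.

```python
# Illustrative sketch of the zero-and-axes calibration step: re-express
# raw tracker readings in CAVE coordinates given a measured origin and
# measured unit axis vectors (all expressed in the tracker's frame).

def dot(a, b):
    """Dot product of two 3-vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def make_calibration(origin, x_axis, y_axis, z_axis):
    """Return a function mapping raw tracker readings to CAVE coordinates.

    origin -- tracker-frame position chosen as the CAVE's zero point
    *_axis -- orthonormal unit vectors of the CAVE axes in the tracker frame
    """
    def to_cave(raw):
        # Translate to the CAVE origin, then project onto the CAVE axes.
        offset = tuple(r - o for r, o in zip(raw, origin))
        return (dot(offset, x_axis), dot(offset, y_axis), dot(offset, z_axis))
    return to_cave
```

Electromagnetic calibration with the projected grid of boxes needs more than this single rigid transform, since it corrects field distortion at hundreds of sampled points rather than just fixing an origin and orientation.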


The concept of the original CAVE has been reapplied and is currently used in a variety of fields, and many universities own CAVE systems. Many engineering companies use CAVEs to enhance product development.[3][4] Prototypes of parts can be created and tested, interfaces can be developed, and factory layouts can be simulated, all before spending any money on physical parts. This gives engineers a better idea of how a part will behave in the product in its entirety. CAVEs are also increasingly used for collaborative planning in the construction sector.[5] Researchers can use a CAVE system to make their work more accessible and effective; for example, a CAVE was used to investigate the training of subjects in landing an F-16 aircraft.[6]

The EVL team at UIC released the CAVE2 in October 2012.[7] Similar to the original CAVE, it is a 3D immersive environment but is based on LCD panels rather than projection.

References


  1. ^ Cruz-Neira, Carolina; Sandin, Daniel J.; DeFanti, Thomas A.; Kenyon, Robert V.; Hart, John C. (1 June 1992). "The CAVE: Audio Visual Experience Automatic Virtual Environment". Commun. ACM. 35 (6): 64–72. doi:10.1145/129888.129892. ISSN 0001-0782. Retrieved 6 April 2017. 
  2. ^ "Archived copy". Archived from the original on 2007-01-09. Retrieved 2006-06-27. 
  3. ^ "Virtual reality in the product development process". Journal of Engineering Design. 13 (2): 159–172. 2002. doi:10.1080/09544820210129823. Retrieved 2014-08-04. 
  4. ^ Product Engineering: Tools and Methods Based on Virtual Reality. 2007-06-06. Retrieved 2014-08-04. 
  5. ^ Nostrad (2014-06-13). "Collaborative Planning with Sweco Cave: State-of-the-art in Design and Design Management". Retrieved 2014-08-04. 
  6. ^ Repperger, D. W.; Gilkey, R. H.; Green, R.; Lafleur, T.; Haas, M. W. (2003). "Effects of Haptic Feedback and Turbulence on Landing Performance Using an Immersive Cave Automatic Virtual Environment (CAVE)". Perceptual and Motor Skills. 97 (3): 820. doi:10.2466/pms.2003.97.3.820. 
  7. ^ EVL (2009-05-01). "CAVE2: Next-Generation Virtual-Reality and Visualization Hybrid Environment for Immersive Simulation and Information Analysis". Retrieved 2014-08-07. 

External links

  • Carolina Cruz-Neira, Daniel J. Sandin and Thomas A. DeFanti. "Surround-Screen Projection-based Virtual Reality: The Design and Implementation of the CAVE", SIGGRAPH'93: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, pp. 135–142, DOI:10.1145/166117.166134
This page is based on the copyrighted Wikipedia article "Cave automatic virtual environment"; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA