
Immersive Authoring: What You eXperience Is What You Get (WYXIWYG)

Users experience and verify immersive content firsthand while creating it within the same virtual environment.
  1. Introduction
  2. Natural Way to Construct Virtual Scenes
  3. Creating AR Applications
  4. Tangible AR Applications
  5. Conclusion
  6. References
  7. Authors
  8. Figures

Augmented reality (AR) interfaces typically use handheld or head-worn displays to enable users to overlay virtual graphics on real-time views of the everyday physical world. Sophisticated tracking technologies help create the illusion that these virtual images are attached to real-world locations and objects. Notable potential application domains include medicine, entertainment, engineering, museum displays, and advertising.


Building AR applications involves many hours of low-level coding and content development. AR technology thus needs authoring tools that reduce the time and cost of that development. Here, we explore these tools, focusing on one called immersive authoring for Tangible Augmented Reality, or iaTAR [5], which we are building at Pohang University of Science and Technology (vr.postech.ac.kr).

The usability of high-level multimedia authoring tools (such as Macromedia Flash, Microsoft PowerPoint, and Max/MSP) has led to a surge in the amount of digital audio and 2D graphics content available today. Interactive 3D graphics applications can also be produced through interactive authoring tools with 2D graphical user interfaces (such as Alice [9] and Adobe Atmosphere). Similarly, general availability of AR authoring tools would make it possible for many more people to build AR applications on their own computers.

Building an interactive AR application involves two main steps: creating graphics and audio content and describing object behaviors and the interactions between users and that content. Because commercial tools are available for creating content, most research in AR authoring today involves specifying object behavior and user interaction. Work in AR authoring draws on the rich heritage of authoring environments in computer graphics and virtual reality. Previous work [8, 10] showed that visual programming techniques could be used to develop 3D interactive graphics applications. This approach is being commercialized through software from Virtools (www.virtools.com) and EON Reality (www.eonreality.com) for the visual programming of virtual environments; developers connect icons and specify object properties to create virtual reality applications.

Similarly, the Authoring MIxed Reality (AMIRE) system from the AMIRE consortium (www.amire.net) and its editing tool Component-based Authoring Tool for Mixed Reality (CATOMIR) [11] can be used to visually program AR applications. The AMIRE project uses a component-oriented framework for building AR applications whereby the developer selects and connects components. CATOMIR, which is a visual front end for AMIRE, consists of an authoring interface and a live AR workspace. Users specify object properties and behaviors, then switch to the workspace to review the effects of the modifications they’ve made. Users create components by selecting them from a list integrated into CATOMIR’s 2D graphical user interface and specifying their property values through the keyboard. Users specify interactions by visually dragging and linking the properties they want connected. After constructing a collection of components in the 2D desktop authoring environment, they test their application within the AR workspace.
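
To make this component-and-connection model concrete, here is a minimal Python sketch with entirely hypothetical names; it is not AMIRE's or CATOMIR's actual API, just an illustration of components whose properties can be linked so that setting one property pushes its value to another:

    class Component:
        def __init__(self, name):
            self.name = name
            self.properties = {}   # property name -> current value
            self.links = []        # (source property, target component, target property)

        def link(self, src_prop, target, dst_prop):
            # Record that src_prop of this component drives dst_prop of target.
            self.links.append((src_prop, target, dst_prop))

        def set(self, prop, value):
            # Update the property, then push the value along every outgoing link.
            self.properties[prop] = value
            for src, target, dst in self.links:
                if src == prop:
                    target.set(dst, value)

    # Usage: a slider component drives the opacity of a 3D model component.
    slider = Component("slider")
    model = Component("teapot")
    slider.link("value", model, "opacity")
    slider.set("value", 0.5)
    print(model.properties["opacity"])   # -> 0.5

Connecting and setting properties in this style is the textual equivalent of the drag-and-link gesture CATOMIR provides in its 2D interface.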

Instead of visual programming, many high-level authoring tools use scripting and time-based media. For example, Director from Macromedia (www.macromedia.com) enables users to build compelling 2D interactive multimedia content. The Designers Augmented Reality Toolkit (DART) project [6] at Georgia Tech applies these script- and time-based methods to authoring AR applications. DART is a tool for the rapid prototyping of AR applications; it works as plug-in software for Director, using its score, sprites, and scripted behaviors, along with AR-specific functions related to trackers, sensors, and live-camera input.

When building AR applications with Director, developers work with DART-supplied behavior components representing AR functions (such as virtual objects, video input, tracker data, audio playback, and control logic triggers). These behaviors are added to the Director score; by properly setting their attributes, users place virtual objects in the physical scene captured by the video input. To help create interactive applications, DART also provides event-trigger components. Like CATOMIR, DART uses computer vision tracking to associate virtual content with physical models and to track the user's point of view when looking at an AR scene.


Natural Way to Construct Virtual Scenes

Direct 3D manipulation of virtual objects is a natural way to construct virtual scenes [1, 7], and a number of immersive authoring systems are available for building virtual reality environments [4, 10]. In immersive authoring systems, developers build virtual reality applications from within the virtual environment itself, an approach similar to visual programming. Programming elements are represented as 3D virtual objects, and direct 3D manipulation is used to assemble virtual scenes and specify object behaviors and attributes. Developers place and rearrange objects in the scene from a first-person point of view.

The same approach can also be applied to describing the behaviors and interactions of virtual objects; for instance, developers might grab, move, and interact with them to specify their motion and intermittent behavior. Such user interaction is a variation of the “programming by demonstration” approach [2] used for building 2D graphical user interfaces. SmartScene from Digital ArtForms (www.digitalartforms.com) and DIVISION dVISE from PTC (www.ptc.com) are commercial examples of immersive authoring environments. For example, in SmartScene, users grab virtual objects and behaviors from a virtual palette to interactively build virtual reality applications from within the application.

Immersive authoring allows users to experience and verify immersive content firsthand, creating it through natural and direct interaction within the same environment. We coined the term "What You eXperience Is What You Get" (WYXIWYG) to describe the merits of immersive authoring: it puts content artists in the driver's seat of production and gives them a means of communicating with programmers to help realize their ideas.

We are researching programming metaphors that allow developers to interact with graphics directly, intuitively, and in 3D (where appropriate). This approach is similar to visual programming [8], although in our AR authoring work we try to represent programming elements as 3D virtual objects and manipulate them directly in 3D as much as possible. We use both 2D and 3D metaphorical objects and interfaces (within the 3D environment) to describe the content's nonspatial and logical features.


Creating AR Applications

These design principles have helped us implement the iaTAR system, one of the first AR authoring tools to allow developers to create AR scenes from within an AR application. Unlike DART and CATOMIR, iaTAR does not require the author to switch modes between authoring and viewing AR content. Tangible AR is iaTAR's basic interaction metaphor [3]. In tangible AR interfaces, each virtual object is registered and mapped to a physical object, and the user interacts with virtual objects by manipulating the corresponding physical objects. All interactions take place through physical props, and the AR content is represented by a number of physical/virtual objects and author-specified links among their properties. Thus, authoring amounts to creating objects, changing their property values, and making connections among objects and their properties.

An iaTAR user wears a lightweight head-mounted display with an attached video camera; video from the camera is analyzed to calculate the user's point of view relative to physical tracking markers. Using these tracking results, iaTAR combines virtual imagery with video of the real world, so virtual objects appear anchored to real objects. Like CATOMIR [11], iaTAR is a component-based system with three main component types: physical objects, virtual objects, and logic/behavior boxes. Each has its own set of properties that users browse, select, and change; users build AR applications by physically connecting components together. For example, a virtual object can be moved by changing the value of its position property, which can be connected to a real object; as a result, the virtual object appears to follow the real object to which it is connected.
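
The following minimal Python sketch illustrates this connection mechanism under our own assumptions (the names and data layout are hypothetical, not iaTAR's actual implementation): the tracked pose of a marker is a property of a physical-object component, and linking it to a virtual object's position property makes the virtual model follow the prop:

    from dataclasses import dataclass

    @dataclass
    class PhysicalObject:
        marker_id: int
        pose: tuple = (0.0, 0.0, 0.0)   # position reported by the vision tracker

    @dataclass
    class VirtualObject:
        model: str
        position: tuple = (0.0, 0.0, 0.0)

    links = []   # (physical object, virtual object) property connections

    def connect(phys, virt):
        links.append((phys, virt))

    def update_frame(tracker_results):
        # Called once per video frame with {marker_id: pose} from the tracker;
        # each link copies the tracked pose into the virtual object's position.
        for phys, virt in links:
            if phys.marker_id in tracker_results:
                phys.pose = tracker_results[phys.marker_id]
                virt.position = phys.pose

    cube = PhysicalObject(marker_id=7)
    tower = VirtualObject(model="tower.obj")
    connect(cube, tower)
    update_frame({7: (0.1, 0.0, 0.3)})
    print(tower.position)   # -> (0.1, 0.0, 0.3); the model follows the cube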

Users observe and interact with virtual objects by manipulating their physical counterparts (props with markers). These props are the simple pads and cubes commonly used in tangible AR applications. Figure 1 outlines the props used in iaTAR's authoring tasks, including a component browser, a cube manipulator, and inspector pads. The component browser (upper row) is a physical interface for browsing and selecting available 3D virtual objects. Users browse the models one by one by pressing the arrow buttons on either side of the browser. To create a new instance of a virtual object, they point the cube manipulator at the target 3D model for a moment, attaching the new object to the cube. The cube manipulator can then be used to position and orient the model within the AR scene; physically covering the cube with their hands attaches the model to the scene (middle row). While manipulating a virtual object, users can also record its motion, a useful feature for intuitively describing object animations.
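
A minimal sketch of how such motion recording could work, assuming per-frame pose samples (all names here are ours, not iaTAR's):

    class MotionRecorder:
        def __init__(self):
            self.keyframes = []   # list of (timestamp, pose) samples

        def record(self, t, pose):
            # Store the prop's pose for the current frame.
            self.keyframes.append((t, pose))

        def play(self, t):
            # Return the recorded pose nearest to time t (nearest-sample playback;
            # a real system would interpolate between keyframes).
            if not self.keyframes:
                return None
            return min(self.keyframes, key=lambda kf: abs(kf[0] - t))[1]

    rec = MotionRecorder()
    for frame, pose in enumerate([(0, 0, 0), (0, 1, 0), (0, 2, 0)]):
        rec.record(frame / 30.0, pose)    # a demonstration captured at 30 fps
    print(rec.play(0.04))                 # -> (0, 1, 0), the closest sample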

Users browse and change the values of component properties using an inspector pad and a keypad prop (lower row). Touching an object with the inspector pad prompts the system to present a virtual list of the object's properties. Tilting the inspector pad lets users browse these properties and select the one whose value they want to change. The inspector pad and the keypad are connected (virtually) when placed next to each other, and touching the keypad marker changes the value of the selected property. Alternatively, users can connect two inspector pads together so the properties of one object affect the properties of another.


Tangible AR Applications

We have demonstrated the efficiency of this immersive authoring method through several tangible AR applications developed with iaTAR. The first involves a simple scene with a windmill (see Figure 1, lower right) consisting of three virtual objects: the ground, a tower, and windmill blades. We used a logic box representing a rotation behavior to specify the spinning of the windmill. Connecting the logic box to the windmill blades continually updates the blades' orientation, producing an animated scene with a rotating windmill.
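
A rotation logic box of this kind might look like the following Python sketch; the class name, rate parameter, and 30-fps loop are our assumptions, not iaTAR's actual code:

    class RotationBox:
        def __init__(self, degrees_per_second, axis="z"):
            self.rate = degrees_per_second
            self.axis = axis
            self.targets = []   # virtual objects whose orientation this box drives

        def connect(self, virtual_object):
            self.targets.append(virtual_object)

        def update(self, dt):
            # Advance every connected object's rotation by rate * dt degrees.
            for obj in self.targets:
                obj["rotation"][self.axis] = (obj["rotation"][self.axis]
                                              + self.rate * dt) % 360.0

    blades = {"model": "blades.obj", "rotation": {"x": 0.0, "y": 0.0, "z": 0.0}}
    spin = RotationBox(degrees_per_second=90.0)
    spin.connect(blades)
    for _ in range(30):                     # simulate one second at 30 fps
        spin.update(1.0 / 30.0)
    print(round(blades["rotation"]["z"]))   # -> 90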

In addition to building passive animations of virtual objects, users can also build interactive applications. Figure 2 shows a simple application in which the appearance of models changes when they are put next to one another. For example, moving the tile with the virtual hare next to the tile with the virtual tortoise changes the models to show the hare and tortoise greeting each other. Building this application requires four virtual objects: the normal and greeting-pose models for both the hare and the tortoise. The virtual objects are first placed on two physical tiles, one for the hare and one for the tortoise. The visibility properties of the virtual objects are controlled by the proximity of the physical tiles. To check the distance and control the visibility properties, iaTAR employs a logic box with a proximity function. It has two position input properties and a Boolean output property that becomes true when the two input positions are close together (their distance is less than a predefined threshold). This output triggers the change in the appearance of the hare and tortoise models.
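
A minimal sketch of such a proximity logic box, assuming a Euclidean distance test and an illustrative 15-cm threshold (all names and values here are ours):

    import math

    class ProximityBox:
        def __init__(self, threshold):
            self.threshold = threshold   # distance below which the output is true

        def evaluate(self, pos_a, pos_b):
            # Boolean output: true when the two input positions are close together.
            return math.dist(pos_a, pos_b) < self.threshold

    # Usage: swap each character's model between its normal and greeting pose.
    box = ProximityBox(threshold=0.15)            # 15 cm, an assumed value
    hare_tile, tortoise_tile = (0.0, 0.0, 0.0), (0.1, 0.0, 0.0)
    greeting = box.evaluate(hare_tile, tortoise_tile)
    hare_model = "hare_greeting.obj" if greeting else "hare_normal.obj"
    print(greeting, hare_model)                   # -> True hare_greeting.obj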

Figure 3 shows an interactive AR hare-and-tortoise storybook built with iaTAR. We based the scenes on the well-known Aesop’s fable “The Tortoise and the Hare”; the motions of the characters were recorded with iaTAR’s recorder tool. We also added interactivity by connecting component properties to decide the winner in the final scene, depending on whether the user opts to have the hare stop running and take a nap during the race.

Subjects in our user studies have found iaTAR easy to use and faster than other 2D GUI-based authoring tools (such as CATOMIR) on similar tasks. They particularly enjoy iaTAR's support for physically based manipulation. However, while useful for rapidly building simple AR applications, iaTAR can be a challenge for users developing complex interactive interfaces; visual programming applications often do not scale well. Moreover, some object parameters (such as precise size or scale values) are better modified through traditional mouse and keyboard input; 2D GUI-based authoring tools handle such cases through pop-up dialogue windows for text entry. The ideal AR authoring tool would likely combine mouse and keyboard input with visual programming. For example, one third of the participants in our recent user study said they would prefer a combined immersive authoring and desktop interface.


Conclusion

The main challenge in immersive authoring is how to bring the WYSIWYG and direct-interaction methods of 2D graphical user interfaces to the creation of immersive content. Given the current limitations of its functionality and of the underlying hardware, more immersive authoring research is needed before the first commercial systems become available. We are collecting and evaluating more results from case studies and user studies to further understand the advantages and disadvantages of immersive authoring. We will also add more complex logic-box functionality and explore how to support text and keyboard input.

Immersive authoring with WYXIWYG features will play an important role in future virtual reality content production, helping popularize virtual and mixed-reality content by transferring more of the expressive power in content creation to artists, domain experts, and end users. We look forward to making it possible for even young children to create their own AR play spaces.


Figures

UF1 Figure. AR volcano book. Readers see 3D virtual images of volcanoes while looking at a real book through a handheld display. (Human Interface Technology Laboratory New Zealand, University of Canterbury, Christchurch, New Zealand)

F1 Figure 1. In the iaTAR system, users choose and place 3D models into a scene, record their motion, and construct data-flow networks to describe their behavior.

F2 Figure 2. An interactive AR application built with iaTAR.

F3 Figure 3. Interactive AR story book content created through iaTAR.


    1. Butterworth, J., Davidson, A., Hench, S., and Olano, T. 3DM: A three-dimensional modeler using a head-mounted display. In Proceedings of the 1992 Symposium on Interactive 3D Graphics (Cambridge, MA, Mar. 29–Apr. 1). ACM Press, New York, 1992, 135–138.

    2. Cypher, A., Ed. Watch What I Do: Programming By Demonstration. MIT Press, Cambridge, MA, 1993.

    3. Kato, H., Billinghurst, M., Poupyrev, I., Imamoto, K., and Tachibana, K. Virtual object manipulation on a tabletop AR environment. In Proceedings of the IEEE and ACM International Symposium on Augmented Reality (Munich, Germany, Oct. 5–6). IEEE Computer Society Press, Los Alamitos, CA, 2000, 111–119.

    4. Lee, G., Kim, G., and Park, C. Modeling virtual object behavior within a virtual environment. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (Hong Kong, Nov. 11–13). ACM Press, New York, 2002, 41–48.

    5. Lee, G., Nelles, C., Billinghurst, M., and Kim, G. Immersive authoring of tangible augmented reality applications. In Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality (Arlington, VA, Nov. 2–5). IEEE Computer Society Press, Los Alamitos, CA, 2004, 172–181.

    6. MacIntyre, B., Gandy, M., Dow, S., and Bolter, J. DART: A toolkit for rapid design exploration of augmented reality experiences. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (Santa Fe, NM, Oct. 24–27). ACM Press, New York, 2004, 197–206.

    7. Mine, M. ISAAC: A Virtual Environment Tool for the Interactive Construction of Virtual Worlds, Tech. Rep. CS TR95-020. Department of Computer Science, University of North Carolina, Chapel Hill, 1995.

    8. Najork, M. Programming in three dimensions. Journal of Visual Languages and Computing 7, 2 (June 1996), 219–242.

    9. Stage3 Research Group. Alice. Carnegie Mellon University, Pittsburgh, PA; www.alice.org.

    10. Steed, A. and Slater, M. Dataflow representation for defining behaviors within virtual environments. In Proceedings of the 1996 Virtual Reality Annual International Symposium (Santa Clara, CA, Mar. 30–Apr. 3). IEEE Computer Society Press, Los Alamitos, CA, 1996, 163–167.

    11. Zauner, J. and Haller, M. Authoring of mixed reality applications, including multi-marker calibration for mobile devices. In Proceedings of the 10th Eurographics Symposium on Virtual Environments (Grenoble, France, June 8–9). Eurographics Association, Aire-la-Ville, Switzerland, 2004, 87–90.
