Through eons of evolution, humans have developed sophisticated skills for sensing and manipulating the physical environment. However, most of these skills are not used when interacting with the digital world, where interaction is largely confined to graphical user interfaces (GUIs). With the commercial success of the Apple Macintosh and Microsoft Windows systems, the GUI has become the standard paradigm for human-computer interaction.
GUIs represent information (bits) in the form of pixels on bit-mapped displays. These graphical representations are manipulated with generic remote controllers (such as mice and keyboards). By decoupling representation (pixels) from control (input devices) this way, GUIs are malleable enough to graphically emulate a variety of media. However, when interacting with the GUI world, we cannot take advantage of our evolved dexterity or utilize our skills in manipulating physical objects (such as building blocks or clay models) (see Figure 1).
The Tangible Media Group at the MIT Media Laboratory moved from GUIs to tangible user interfaces (TUIs) in the mid-1990s. TUIs represented a new way to embody the vision of ubiquitous computing articulated by Mark Weiser, former chief scientist at Xerox PARC: weaving digital technology into the fabric of the physical environment and rendering the technology invisible [9]. Rather than making pixels melt into an interface, TUIs use physical forms that fit seamlessly into a user's physical environment. TUIs aim to take advantage of these haptic-interaction skills, an approach significantly different from that of GUIs. The key TUI idea remains: give physical form to digital information [3], letting physical artifacts serve as both representations of and controls for their digital counterparts. Through this physical embodiment, TUIs make digital information directly manipulable with our hands and perceptible through our peripheral senses (see Figure 1).
Urp: First-Generation TUI
To illustrate basic TUI concepts, I start with the Urban Planning Workbench, or Urp (developed by the Tangible Media Group in 1999), as an example of an early TUI [8]. Urp uses scaled physical models of architectural buildings to configure and control an underlying urban simulation of shadow, light reflection, wind flow, and traffic congestion (see Figure 2). In addition to the building models, whose position and rotation drive the simulation directly, Urp provides interactive tools for querying and controlling the simulation's parameters: a clock tool to change the position of the sun, a material wand to change a building's surface between brick and glass (with light reflection), a wind tool to change wind direction, and an anemometer to measure wind speed.
The physical building models in Urp cast digital shadows onto the workbench surface (via video projection) corresponding to solar shadows at a particular time of day. The time of day, which determines the position of the sun, is controlled by turning the physical hands of the clock tool, like the one in Figure 2. The building models can be moved and rotated, their shadows transforming according to position and time of day.
Moving the hands of the clock tool can cause Urp to simulate a day of shadow movement among the buildings. Urban planners can identify and isolate intershadowing problems (shadows cast on adjacent buildings) and reposition buildings to avoid areas that are needlessly dark; alternatively, they can maximize light among the buildings.
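To make the shadow computation concrete, the following minimal sketch computes a cast-shadow polygon from a building footprint, its height, and the time of day. The toy solar model and all names here are my own illustrative assumptions, not Urp's actual implementation.

```python
import math

def sun_position(hour: float) -> tuple[float, float]:
    """Toy solar model: the sun rises at 6:00 and sets at 18:00.

    Returns (altitude, azimuth) in radians; altitude peaks at noon,
    and azimuth sweeps clockwise from east (90 deg) to west (270 deg).
    """
    day_fraction = (hour - 6.0) / 12.0           # 0 at sunrise, 1 at sunset
    altitude = math.radians(80.0) * math.sin(math.pi * day_fraction)
    azimuth = math.radians(90.0 + 180.0 * day_fraction)
    return altitude, azimuth

def shadow_polygon(footprint, height, hour):
    """Project a building's roof corners onto the ground plane.

    footprint: list of (x, y) base corners; x points east, y north.
    The cast shadow is the hull of these projected points together
    with the footprint itself.
    """
    altitude, azimuth = sun_position(hour)
    if altitude <= 0:
        return []                                # sun below the horizon
    reach = height / math.tan(altitude)          # horizontal shadow length
    dx = -reach * math.sin(azimuth)              # shadow points away from sun
    dy = -reach * math.cos(azimuth)
    return [(x + dx, y + dy) for (x, y) in footprint]

# A 10 x 10 m footprint, 30 m tall, at 9:00: morning sun in the
# southeast casts the shadow toward the northwest.
print(shadow_polygon([(0, 0), (10, 0), (10, 10), (0, 10)], 30.0, 9.0))
```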
In Urp, the physical models of buildings are tangible representations of the digital building models. To change a building's location and orientation, users simply grab and move its physical model rather than using a mouse to point to and drag a graphical representation on a screen. The physical forms of Urp's building models, together with their position and orientation on the workbench, both represent and control the state of the urban simulation.
Although standard GUI interface devices (such as keyboards, mice, and screens) are also physical, the physical representation in a TUI provides an important distinction. The physical embodiment of the buildings (representing the computation in building dimensions and location) allows for the tight coupling of control of the object and manipulation of its parameters in the underlying digital simulation.
In Urp, the building models and interactive tools are physical representations of digital information (shadow dimensions and wind speed) and computational functions (shadow interplay). The physical artifacts also serve as controls for the underlying computational simulation (specifying the locations of objects). Each physical embodiment thus plays a dual role, representing the digital model while controlling the digital representation.
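A small sketch may help illustrate this dual role. In the hypothetical tracker callback below, the sensed pose of a physical model directly overwrites the digital state, so moving the object is itself the act of control; the class and simulation hooks are invented for illustration and are not Urp's code.

```python
from dataclasses import dataclass

@dataclass
class BuildingModel:
    """Digital counterpart of one tracked physical model on the workbench."""
    name: str
    height: float
    x: float = 0.0          # tracked position on the workbench surface
    y: float = 0.0
    rotation: float = 0.0   # tracked orientation in radians

def on_pose_update(model: BuildingModel, x: float, y: float,
                   rotation: float, simulation) -> None:
    """Tracker callback: moving the physical object *is* the input event.

    The sensed pose overwrites the digital state directly; there is no
    separate on-screen proxy to select and drag.
    """
    model.x, model.y, model.rotation = x, y, rotation
    simulation.recompute_shadows()  # hypothetical simulation hooks:
    simulation.reproject()          # redraw projection onto the workbench
```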
However, Urp cannot change the forms of its tangible representations during user interaction. Users work with a predefined, finite set of fixed-form objects (here, building models), changing only the spatial relationships among them, not the forms of the individual objects. All tangible objects in Urp must be predefined (physically and digitally) and cannot change their forms on the fly. This limitation is why the Tangible Media Group designed a second generation of "organic" TUIs.
SandScape: Second-Generation TUI
The advent of new sensing and display technologies made it possible to add dynamic form development into TUIs, suggesting movement toward new digital/physical materials that seamlessly couple sensing and display capabilities. Rather than using predefined discrete objects with fixed forms, the Tangible Media Group developed new types of organic TUIs that utilize continuous tangible material (such as clay and sand) for rapid form sculpting for landscape design; examples include Illuminating Clay [6] and SandScape [2]. With the advent of flexible materials that integrate fully flexible sensors and displays, this category of organic TUI shows great potential to express ideas in tangible form.
SandScape [2] is an organic tangible interface for designing and understanding landscapes through a variety of computational simulations based on physical sand (see Figure 3). Users view these simulations as they are projected onto the surface of a sand model representing the terrain. They choose from a variety of simulations highlighting the height, slope, contours, shadows, drainage, or aspect of the landscape model.
Users alter the form of the landscape model by manipulating sand with their hands, seeing the resultant effects of computational analysis generated and projected onto the surface of the sand in real time. The project demonstrates how TUIs take advantage of our natural ability to understand and manipulate physical forms while harnessing the power of computational simulation to help us understand model representations. SandScape, which uses optical techniques to capture the geometry of a landscape model, is less accurate than its predecessor, Illuminating Clay, which used laser range finders to capture the geometry of a physical clay model [6].
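As a rough illustration of the computational side, the sketch below derives slope and aspect from a captured height field using finite differences; the approach and names are my assumptions, not SandScape's actual analysis pipeline.

```python
import numpy as np

def slope_and_aspect(heights: np.ndarray, cell_size: float = 1.0):
    """Derive per-cell slope and aspect from a captured height field.

    heights: 2D array of surface elevations (axis 0 = north-south,
    axis 1 = east-west), as might be sampled from the sand surface.
    Returns slope in degrees and aspect as the compass bearing
    (0 = north, clockwise) of steepest descent; either quantity could
    be color-mapped and projected back onto the sand.
    """
    dz_dy, dz_dx = np.gradient(heights, cell_size)  # finite differences
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, -dz_dy)) % 360.0
    return slope, aspect

# A toy 4 x 4 ridge whose elevation rises toward the center columns.
terrain = np.array([[0, 1, 1, 0],
                    [0, 2, 2, 0],
                    [0, 2, 2, 0],
                    [0, 1, 1, 0]], dtype=float)
slope, aspect = slope_and_aspect(terrain)
print(slope.round(1))
```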
SandScape and Illuminating Clay both demonstrate the potential advantages of combining physical and digital representations for landscape modeling and analysis. The physical clay and sand models convey spatial relationships that can be intuitively and directly manipulated by hand, and users can also insert found physical objects directly under the camera. This approach allows them to quickly create and understand highly complex topographies that would be difficult and time-consuming to produce with conventional computer-aided design tools. This "continuous and organic" TUI approach makes better use of our natural ability to discover solutions through the direct manipulation of physical objects and materials.
Conclusion
TUIs give physical form to digital information and computation, facilitating the direct manipulation of bits. The goal is to empower collaboration, learning, and decision making through digital technology while taking advantage of our human ability to grasp and manipulate physical objects and materials. Here, I've traced the genesis and evolution of TUIs over the past 10 years, from rigid, discrete interfaces toward organic, malleable materials that enable dynamic sculpting and computational analysis using digitally augmented continuous physical materials. This new type of TUI delivers rapid form giving in combination with real-time computational feedback.
In addition to rapid form giving, actuation technology plays a critical role in making the interface more organic and dynamic. The Tangible Media Group is exploring a new genre of TUIs that incorporate actuation mechanisms to realize kinetic memory in educational toys like Curlybot [1] and Topobo [7]. It is also designing a new generation of tabletop TUIs that use actuation to make tangible objects behave more actively, dynamically representing internal computational state; examples include the Actuated Workbench [4] and physical intervention in computational optimization, or PICO [5].
I hope the TUI evolution I’ve explored here will contribute to the future discussion of malleable, dynamic, organic interfaces that seamlessly integrate sensing and display into soft and hard digital/physical material.
Figures
Figure 1. By giving tangible (physical) representation to digital information, tangible user interfaces make information directly graspable and manipulable through haptic feedback. Intangible representation (such as video projection) may complement tangible representation, synchronizing with it.
Figure 2. Urp and shadow simulation. Physical building models that cast digital shadows and a clock tool to control time of day (position of the sun).
Figure 3. SandScape. Users alter the form of the landscape model by manipulating sand while seeing the resultant effects of computational analysis generated and projected onto the surface of sand in real time.