
Spontaneous Marriages of Mobile Devices and Interactive Spaces

Configuring themselves through Elope middleware, tagged physical objects and rooms let users seamlessly integrate their content and invoke services.
  1. Introduction
  2. Related Work
  3. Objects As Intentions
  4. Prototype Implementation
  5. Interaction/Privacy Trade-Offs
  6. Conclusion
  7. References
  8. Authors
  9. Figures
  10. Sidebar: RFID's Place in Interactive Spaces

Mobile computing technology enables anyone to digitally work or play anytime, anywhere, but often requires an elaborate set-up ritual to connect and configure the various required devices. The Elope system we’ve been developing over the past two years simplifies this configuration by providing a means to quickly invoke a distributed wireless application using RFID-tagged objects to seamlessly marry mobile devices and interactive spaces. The system uses an RFID standard that operates within a few inches, thereby localizing the interaction; data transfer is quick, and tags can contain enough data to encode the information necessary to initiate connections between devices and invoke a specific service.

The primary motivation for marrying mobile devices with interactive spaces is that the devices typically possess significant storage and computation resources to manage our personal data, but are not optimally suited for all tasks due to their small displays and keyboards. This observation served as the inspiration for the Personal Server Project at Intel Research in 2001 [11], a model of computing in which mobile devices wirelessly use the superior user-interface capabilities of nearby infrastructure. Interactive spaces and even collections of PCs afford a rich user experience; wirelessly blending them together with mobile devices results in the best of both worlds. Fully realizing the synergy between mobile devices and interactive spaces requires a smooth integration process not encumbered by messy cables or elaborate connection and configuration menus.


Interactive spaces [5], like the one in Figure 1, include wall-size interactive surfaces, surround-sound speaker systems, and specialized input devices to facilitate group interaction. Users naturally want to configure them to work with the personal data they carry on their mobile devices (such as laptops, PDAs, and mobile phones). Ad hoc interoperation mechanisms (such as mobile code in Speakeasy [2] and intermediation in the Patch Panel [1]) let components work together without prior knowledge of one another. However, because two devices can meaningfully interoperate in a variety of ways, simply connecting them often leaves the intended behavior ambiguous. For example, specifying that a mobile phone should be connected to a wall-size interactive surface does not sufficiently specify a configuration. The display could be used to show pictures, movies, or presentations stored on the device. The number and variety of applications involving the combined use of the two computers are virtually unlimited.

The Elope approach uses tagged physical objects to embody these configurations, leveraging the affordances of the physical world to characterize the intended configuration (see Figure 2). For example, to show a personal photo album stored on a smart phone through a large-screen projector, a user need only scan a tagged “show photo album” object using the RFID reader embedded in his or her mobile phone. When the tag of the physical object is scanned, the mobile phone knows the user intends to show photographs in the room. The phone is then “married” to the interactive space, simplifying the integration tasks (such as network, device, and application configuration). As an analogy, consider that instead of a formal wedding involving a complicated ritual, the union between space and device is accomplished quickly, as if a couple had eloped. This streamlined process significantly eases the integration burden for users, allowing them to concentrate on higher-level tasks (such as giving their presentation or relating to the people nearby). A prototype implementation, consisting of a personal mobile device with an embedded RFID reader (such as a smart phone), an RFID-tagged physical object (such as a projector or room remote control), and supporting infrastructure, demonstrates how existing technology can be used to realize this vision.


Related Work

Several other projects utilize handheld devices to display information about physical objects or spaces. For example, E-tag [10] investigates how tagged objects can be used to present information on a wireless handheld device equipped with a tag reader; scanning the RFID tag of a book would bring up a relevant Web page on the handheld but would not make the handheld’s content available to nearby infrastructure. Similarly, Cooltown [6] supports multiple kinds of tags, allowing users to browse Web pages associated with objects and rooms through their mobile devices. Neither approach sets up bidirectional communication or enables services to access data stored on mobile devices. In contrast, Elope focuses on setting up the links between devices, allowing bidirectional data exchange. This capability supports the Personal Server model [11], enabling users to tap the I/O-rich infrastructure to access and manipulate content on their mobile devices.

Join and Capture [7] reflects a vision of opportunistic assemblies whereby input and output resources are quickly mapped to work together, associating URLs with a variety of objects and services to allow users to “join” collaborative tasks or “capture” interactive devices for use in the tasks. Instead of enabling devices to just join a space or collaborative task, Elope sets up direct connections between devices and applications, removing several steps that would otherwise be necessary to invoke even the most common applications.

Many other projects, including Synchronous Gestures [3], SyncTap [8], and Touch and Connect [4], also allow quick device associations and end-user configurations between components using physical gestures. However, the general case of setting up communication channels between two previously unknown devices is a much more difficult problem, requiring more information than is easily imparted by a simple gesture. Elope uses the data provided by RFID tags to accomplish wireless associations, connection formation, network configuration, and service invocation. Additionally, many of these mechanisms do not provide a way to disambiguate multiple services provided by devices and are instead limited to simply linking the two devices together.

The Speakeasy [2] project allows users to define task-oriented templates that are preconfigured to simplify the configuration of an interactive space. Users interact with a Web-based GUI to manually invoke services. The Elope approach further simplifies this process by physically representing these associations in an RFID tag and automatically configuring the wireless communication channels.


Objects As Intentions

The Elope system combines advanced mobile devices, interactive spaces, and tagged objects to enable the complete configuration of the space (based on simple user action), including launching the desired application and loading a user’s personal data. These technologies combine to support a model requiring “near zero” configuration—the minimum interaction necessary to perform a task. The system encapsulates an “intention” in a physical object; for example, “listen to your music on this room’s sound system” could be encapsulated in a stylized box with a musical note attached to it.
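
To make “objects as intentions” concrete, the following is a minimal sketch, not the Elope implementation itself, of how a mobile device might represent a scanned object as an intention and dispatch it to a local handler. The Intention fields, the handler names, and on_tag_scanned are hypothetical names introduced purely for illustration.

```python
# Hypothetical sketch: interpreting a scanned tagged object as an "intention"
# and dispatching it on the mobile device. Not Elope's actual code.
from dataclasses import dataclass

@dataclass
class Intention:
    object_id: str      # unique ID of the tagged physical object
    service: str        # the service the object stands for, e.g. "play_music"
    description: str    # human-readable label, e.g. "listen to music here"

def play_music(intent: Intention) -> None:
    print(f"Streaming the phone's music library to the room ({intent.object_id})")

def show_photos(intent: Intention) -> None:
    print(f"Showing the photo album on the room display ({intent.object_id})")

# One handler per kind of intention the device knows how to carry out.
HANDLERS = {"play_music": play_music, "show_photos": show_photos}

def on_tag_scanned(intent: Intention) -> None:
    """Invoked when the embedded RFID reader scans a tagged object."""
    handler = HANDLERS.get(intent.service)
    if handler is None:
        print(f"Unknown intention: {intent.description}")
    else:
        handler(intent)

if __name__ == "__main__":
    on_tag_scanned(Intention("music-box-01", "play_music", "listen to music here"))
```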

In a scenario involving two co-workers giving a business presentation (see the sidebar “RFID’s Place in Interactive Spaces”), both participants (Grünberg and Jones) have scanners embedded in their personal devices (phones and laptops). The environment also contains shared presentation surfaces and two physical objects that function as gateways, seamlessly engaging their associated services:

Meeting Minder. This object allows users to form a shared information space facilitated by the proximity of tags to the mobile computing components. When wireless communication channels are established through Elope, the components are logically joined automatically, allowing users to take advantage of the rich interaction capabilities now provided by the space to share information. Interaction with the Meeting Minder could grant easy association with the meeting space, including a list of email addresses provided to all participants.

Presentation Remote Control. This physical object serves as a token for giving a presentation in the room. After the user’s mobile computer scans the remote control’s tag, it can interact with the presentation service to show the user’s presentation. Tags automatically configure the initial presentation setup and transition between presenters fluidly and without encumbrance. The remote control can then be used to interact (such as to advance to the next slide) with the presentation.



Prototype Implementation

A prototype interactive space based on Elope, including a personal mobile device, a tagged object, and supporting infrastructure, demonstrates how existing technologies can be used to implement the presentation scenario. A tagged object invokes a Web browser showing a standard HTML-formatted presentation provided by the mobile device; the user then uses the tagged object to control the presentation. Although the prototype focuses on this particular scenario, the basic system implementation is capable of supporting a much richer set of applications after minor modifications to the end-user application software. The prototype system includes several technologies (see Figure 3):

  • Mobile device and RFID reader. A Stargate mobile platform (platformx.sourceforge.net/) serves as the user’s mobile device, processing messages from the handheld RFID reader and delivering them to the infrastructure over Bluetooth. The handheld reader is a small, keyfob-size, battery-powered unit built around the M1-Mini reader from SkyeTek (www.skyetek.com/readers_Mini.html). It scans for tags when the user presses a button, communicating the results to the Personal Server using a Mote radio (www.xbow.com/Products/Wireless_Sensor_Networks.htm). The “cell phone” then uses the Bluetooth radio to communicate with the room infrastructure. Although the RFID reader is physically separate from the user’s device in the prototype, the reader and device effectively function as a single device to the rest of the system and could easily be repackaged into a single compact unit, similar to the Nokia 3220 Near Field Communication cell phone.
  • Presentation Remote Control. This physical device, augmented with a Texas Instruments ISO 15693 RFID tag (www.ti.com/rfid/docs/products/transponders/1356mhz-encapsulated.shtml) and several buttons, provides the necessary connection information and triggers events in the infrastructure. The tag holds up to 256B of data and is programmed with basic information (such as the object’s unique ID, a description of the service provided, the Bluetooth address of a nearby access point, and the appropriate service for the mobile device to contact); a possible payload layout is sketched after this list. The buttons trigger another Mote radio to provide local wireless broadcast communications.
  • Infrastructure middleware. Patch Panel [1] software running on top of iRoom middleware [5] handles event transport and coordination among multiple services. In the business-presentation scenario outlined in the sidebar, the Patch Panel is used to route control messages to a program controlling the standard Web browser used by participants to view a presentation. The middleware runs on standard OS platforms, including computers integrated with large-screen projected displays. Additionally, the infrastructure runs a Bluetooth access point, providing a simple mechanism that enables mobile devices to connect to local services.
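
As a rough illustration of the payload described in the Presentation Remote Control item above, here is a minimal sketch of packing the four fields into the roughly 256B available on the tag. The field set follows the article; the delimiter-based byte layout and the example values are assumptions made only for illustration.

```python
# Hypothetical packing of the tag's connection information into <= 256 bytes.
def pack_tag(object_id: str, description: str, bt_address: str, service: str) -> bytes:
    fields = [object_id, description, bt_address, service]
    payload = "\x1f".join(fields).encode("utf-8")   # unit-separator-delimited fields
    assert len(payload) <= 256, "payload must fit in the tag's 256B"
    return payload

def unpack_tag(payload: bytes) -> dict:
    object_id, description, bt_address, service = payload.decode("utf-8").split("\x1f")
    return {"object_id": object_id, "description": description,
            "bt_address": bt_address, "service": service}

# Example: the Presentation Remote Control's tag (illustrative values).
tag = pack_tag("remote-42", "presentation control", "00:11:22:33:44:55", "presentation")
print(unpack_tag(tag))
```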

Once a mobile device scans the RFID tag embedded in the Presentation Remote Control, the device uses the Bluetooth address obtained through the interaction to form a local-area IP-capable network connection using the Bluetooth personal area network profile. Then, using the additional information in the tag, the mobile device supplies the room’s middleware with the remote control’s ID, the desired service, and a self-referencing URL. The middleware is then responsible for invoking the necessary services and routing the control packets. The self-referencing URL points to a Web server running on the mobile device itself, providing the data necessary to show the user’s personal presentation. The buttons attached to the Presentation Remote Control independently broadcast messages to the room that are then routed by the room’s middleware to the presentation application.
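
The invocation step just described can be summarized in code. The sketch below assumes a hypothetical HTTP endpoint on the room middleware purely for illustration; in the prototype these values are delivered to the Patch Panel middleware over the Bluetooth PAN link rather than via HTTP, and the endpoint name, JSON body, and device_ip parameter are invented here.

```python
# Hypothetical sketch of supplying the room middleware with the remote
# control's ID, the desired service, and a self-referencing content URL.
import json
from urllib import request

def invoke_presentation(middleware_url: str, tag: dict, device_ip: str) -> None:
    """Ask the room middleware to start the service named in the scanned tag."""
    body = json.dumps({
        "object_id": tag["object_id"],          # identifies the remote control
        "service": tag["service"],              # e.g. "presentation"
        # Self-referencing URL: a Web server on the mobile device serves the slides.
        "content_url": f"http://{device_ip}/presentation/index.html",
    }).encode("utf-8")
    req = request.Request(middleware_url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        print("middleware replied:", resp.status)

# Example call (requires a middleware endpoint to be listening):
# invoke_presentation("http://room-gateway.local/invoke", unpack_tag(tag), "10.0.0.7")
```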

This system, based on the Elope architecture, provides a complete mechanism for showing and controlling a user’s personal presentation while limiting the user’s interaction to a single device—the Presentation Remote Control. Including an RFID tag makes this interaction possible because it simplifies the connection process, instead of requiring the user to perform multiple integration steps. It is capable of showing a presentation served from a mobile device in approximately 13 seconds, a period representing the end-to-end delay from scanning to presenting. Without the detailed information provided by the RFID tag, the entire process would take 15 seconds longer [9], because it would include the laborious steps of manual device discovery and selection, increasing the total time to almost 30 seconds.



Interaction/Privacy Trade-Offs

We now describe why tagging objects to facilitate interaction is a design choice that allays the fears users may have about loss of privacy and about the trustworthiness of the system. This approach also builds on the affordances of physical objects to self-describe actions and highlights the power of the Elope model, along with some issues that need further resolution.

  • Privacy. Using a mobile device to scan tags embedded in the environment avoids many of the privacy concerns associated with carrying RFID technologies. In contrast, many workplace environments require employees to wear ID badges with RFID tags that can be used to track their movements (such as when they use them to gain access to restricted areas). The Elope tagged-object model protects users’ privacy by giving them control over when, if ever, they scan tagged objects and whether to invoke the specified service.
  • Affordance. How do people know what objects are available in a given space and what these objects will do when activated? For example, if they are alone in an unfamiliar room, how do they know what they could do with the room or what scanning the green disk in the center of the room might do? Solutions include: cultural convention (rooms with large screens allow presentations); social mediation (people tell them what it does); obvious objects (the object is labeled “presentation”); and a room-level directory (a printed placard near the room’s entrance). Elope provides a platform on which to implement all of these solutions.
  • Trust. How might users learn to trust an object to do what they expect it to do? Apart from minimizing annoyances (such as an object that starts playing music instead of showing a presentation), the main concern is how to prevent damage to users’ systems and personal reputations. One way to manage trust is to limit the content that is wirelessly available from a device to a set of data explicitly designated for sharing; a minimal sketch of this idea follows the list. In this way, the system would be protected from rogue infrastructure trying to access data beyond the intended content.
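
The sketch below illustrates the “designated for sharing” idea from the Trust item: the mobile device answers requests only for content it has explicitly marked shareable. SHARED_ROOTS, is_shareable, and serve are illustrative names, not part of Elope, and the directory paths are invented.

```python
# Hypothetical allowlist check: only content under explicitly shared
# directories is served to the room infrastructure.
from pathlib import Path

SHARED_ROOTS = [Path("/home/user/shared").resolve()]  # content offered to the room

def is_shareable(requested: Path) -> bool:
    """Allow a request only if it falls under an explicitly shared directory."""
    requested = requested.resolve()
    return any(root == requested or root in requested.parents
               for root in SHARED_ROOTS)

def serve(requested: str) -> bytes:
    path = Path(requested)
    if not is_shareable(path):
        raise PermissionError("content not designated for sharing")
    return path.read_bytes()

print(is_shareable(Path("/home/user/shared/presentation.html")))  # True
print(is_shareable(Path("/home/user/private/contacts.db")))       # False
```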


Conclusion

Elope aims to streamline the integration of mobile content and interactive spaces by using tagged objects to configure not only the connection between the space and a mobile device, but also between the user’s data and the intended application. The Elope prototype is capable of quickly showing a presentation served from a mobile device with the press of a single button. In contrast, other solutions typically require a complicated configuration process involving laborious and time-consuming manual interaction.

Scanning a tagged object to trigger a specific interaction is just about the simplest conceivable technique for invoking a desired service. A true zero-configuration alternative—having a user walk into a room where the intended interaction happens automatically—is simply not practical because the available information is inadequate for the system to deduce what the user might want to happen. For example, if a group of people were in a room with multiple screens, how would the room know that one of them wanted to show a presentation on a particular screen?

Elope configures the entire system and invokes the desired service when the user scans an object. This process is achieved by representing user intentions with objects physically tagged with RFID tags and using multiple tagged objects to represent a variety of configurations instead of using the menus and forms employed in traditional computer systems. RFID tags are able to store and rapidly transfer enough information in a scan to make this happen in real time.

Although it is capable of providing an end-to-end configuration, Elope does not address all aspects of system operation. For example, what would happen if two users simultaneously tried to invoke a service on the same display? Or how would these users disconnect from a space or know what space(s) and applications they are currently connected to? Solutions to these questions might be application-specific or involve social conventions outside of Elope’s domain. Elope was designed to automate the initial stages of the process and to bring these questions and their possible solutions to the forefront.

Before Elope could become widely incorporated in real-world business and consumer environments, a standard format would be needed for tagged objects compatible with the hardware and software being used on users’ mobile devices—most likely their mobile phones. The emerging Near Field Communication standard promises to address many such concerns (www.nfc-forum.org); however, standardization of the higher-level formats used to describe connection options and services is still necessary for implementing the complete Elope architecture.
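
To illustrate what such a higher-level, standardized description of connection options and services might need to carry, here is a purely hypothetical record; it is not an NFC Forum format or an Elope specification, and every field name and value is invented for this example.

```python
# Purely hypothetical tag record illustrating the kind of information a
# standardized higher-level format would have to describe.
HYPOTHETICAL_RECORD = {
    "format": "tagged-object/0.1",             # imagined format identifier
    "object_id": "room-12-presentation-remote",
    "connection": {"type": "bluetooth-pan", "address": "00:11:22:33:44:55"},
    "services": [
        {"name": "presentation", "events": "slide-control"},
        {"name": "meeting-minder", "events": "join-share"},
    ],
}

REQUIRED_KEYS = {"format", "object_id", "connection", "services"}

def is_well_formed(record: dict) -> bool:
    """Check that a scanned record carries the fields a device needs to proceed."""
    return REQUIRED_KEYS <= record.keys()

print(is_well_formed(HYPOTHETICAL_RECORD))  # True
```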

Elope accomplishes automatically what many such unions really need—an end result without too much pomp and circumstance. Sometimes, however, a more formal ceremony is necessary, either to showcase the event or fulfill some additional cultural or user need. In the wedding context, the ceremony might serve to include the extended family or observe some religious rite. In technology, a more formal set-up procedure might enable more complex interactions or provide added security. However, in many circumstances the goal is simple—perhaps making a presentation or adjusting a room’s air conditioning—where unnecessary ceremony gets in the way.


Figures

F1 Figure 1. A mobile device can be used to import personal data into an interactive space by simply scanning a tag embedded in the space.

F2 Figure 2. The Elope system’s major architectural components.

F3 Figure 3. Handheld prototype components, including tagged presentation remote control (upper left), prototype mobile device (right), and RFID reader (bottom left). The remote control is tagged with an RFID tag (black circle on end), and the RFID reader is exposed to show its inner circuitry. (A Euro, a British Pound, and a U.S. quarter are included for scale.)

References

    1. Ballagas, R., Szybalski, A., and Fox, A. Patch Panel: Enabling control-flow interoperability in ubicomp environments. In Proceedings of the Second IEEE International Conference on Pervasive Computing and Communications (Orlando, FL, Mar. 2004).

    2. Edwards, W., Newman, M., Sedivy, J., Smith, T., Balfanz, D., Smetters, D., Wong, H., and Izadi, S. Using Speakeasy for ad hoc peer-to-peer collaboration. In Proceedings of the ACM Conference on Computer Supported Cooperative Work (New Orleans, LA, Nov. 16–20, 2002).

    3. Hinckley, K., Ramos, G., Guimbretiere, F., Baudisch, P., and Smith, M. Stitching: Pen gestures that span multiple displays. In Proceedings of the Advanced Visual Interfaces conference (Gallipoli, Italy, May 2004), 23–31.

    4. Iwasaki, Y., Kawaguchi, N., and Inagaki, Y. Touch-and-Connect: A connection request framework for ad-hoc networks and the pervasive computing environment. In Proceedings of the Third IEEE International Conference on Pervasive Computing and Communications, 2003, 20–29.

    5. Johanson, B., Fox, A., and Winograd, T. The Interactive Workspaces project: Experiences with ubiquitous computing rooms. IEEE Pervasive Comput. Mag. 1, 2 (Apr.-June 2002).

    6. Kindberg, T. et al. People, places, things: Web presence for the real world. In Proceedings of the IEEE Workshop on Mobile Computing Systems and Applications (Oct. 2000).

    7. Olsen, D., Jr., Travis Nielsen, S., and Parslow, D. Join and capture: A model for nomadic interaction. In Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology (Orlando, FL, Nov. 11–14, 2001), 131–140.

    8. Rekimoto, J., Ayatsuka, Y., and Kohno, M. SyncTap: An interaction technique for mobile networking. In Proceedings of the Mobile Human Computer Interaction Conference (2003).

    9. Scott, D., Sharp, R., Madhavapeddy, A., and Upton, E. Using visual tags to bypass Bluetooth device discovery. In Proceedings of the ACM SIGMobile Conference (2005); also in Mobile Comput. Commun. Rev. 9, 1 (Jan. 2005), 41–53.

    10. Want, R., Fishkin, K., Gujar, A., and Harrison, B. Bridging physical and virtual worlds with electronic tags. In Proceedings of the Computer-Human Interaction Conference (1999), 370–377.

    11. Want, R., Pering, T., Danneels, G., Kumar, M., Sundar, M., and Light, J. The personal server: Changing the way we think about ubiquitous computing. In Proceedings of the International Conference on Ubiquitous Computing (2002).
