
Photosensing Wireless Tags For Geometric Procedures

In a self-describing, self-locating world sprinkled with RFIG tags, physical objects come alive through augmented reality labels and context-sensitive annotation.
  1. Introduction
  2. How RFIG Works
  3. Industrial Applications
  4. Conclusion
  5. References
  6. Authors
  7. Figures

Radio-frequency tags enable physical objects to be self-describing, communicating their identities to a nearby RF reader. Our goal is to build a radio frequency identity and geometry (RFIG) transponder that also communicates geometry, inter-tag location history, and context-sensitive user-defined annotation (www.merl.com/projects/rfig/).

We achieve this functionality by augmenting each tag (Figure 1, top) with a photosensor (Figure 1, bottom) and using modulated light for optical communication with the composite RF-photosensing tag. Here, we explore tag/reader communication using a projector paired with the RF reader. The projector performs a dual operation: sending optical data to the tag (like a TV IR remote control) and providing visual feedback by projecting instructions onto objects. Whereas conventional tag/reader combinations operate in broadcast mode with no regard for directional communication, RFIG tags can be localized to within a few millimeters of their physical positions, selected individually, and placed in a 2D or 3D coordinate frame. The combination of projector and photosensing tags supports a rich set of geometric operations, representing an entirely new medium for computer vision, with projector and tags replacing camera and image interest points.


Our experimental work is based on active, battery-powered radio-frequency tags, though our goal is to develop methods that can also be used with passive, unpowered RFID tags. A photosensor is one of the few types of sensor compatible with the size and power requirements of passive RFID. The key issue in evolving our active-tag system to passive tags is the limited power available on a tag. Here, we've restricted ourselves to computation and sensing consistent with the size and power levels we feel are achievable in a passive RFID system. For example, tags do not photosense or compute until activated and powered by the nearby RF reader, and we do not include a light-emitting diode on the tag as a visual beacon for humans or camera-based systems because it would be too power-hungry.

RF received-signal strength and time of arrival are the most popular methods for location tracking, but they require multiple readers, and their accuracy may be insufficient for complex geometric procedures [1]. Previous systems have also married RF tags with optical or ultrasound sensors to improve accuracy. Some such systems use active RF tags that respond to laser pointers. For example, the FindIT Flashlight uses one-way interaction and an indicator light on the tag to signal that the desired object has been found [2]. Other systems use two-way interaction, where the tag responds to a handheld device using a power-hungry protocol like 802.11 or X10 [3]. CoolTown uses beacons actively transmitting device references but without the ability to point or provide visual feedback (www.cooltown.com/research/). The Cricket project [6] computes the location and orientation of a handheld device using installed RF and ultrasound beacons and projects augmented reality labels.

Back to Top

How RFIG Works

Conventional tag communication broadcasts from an RF reader and accepts a response from all in-range tags. Limiting communication to a particular tag is conventionally achieved with a short-range reader placed physically close to that tag. In contrast, users of an RFIG system can select tags for long-range interaction through projected light while ignoring unwanted in-range tags. The handheld device in the RFIG system first transmits an RF broadcast, and each in-range tag is activated and powered by the signal. The tag's photosensor then takes a reading of ambient light, using it as a zero reference for subsequent illumination measurements. The projector then turns on its illumination. Each tag detecting an increase in incident illumination sends a response to the RF reader to indicate it is in the projector's beam, ready for interaction.
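To make the selection step concrete, the following minimal sketch models the tag-side handshake just described. It is not the actual tag firmware; the class, method names, and threshold value are illustrative assumptions.

```python
# A minimal sketch (not the actual tag firmware) of the tag-side selection
# handshake; class, method, and threshold names are illustrative assumptions.

class RFIGTag:
    def __init__(self, tag_id):
        self.tag_id = tag_id
        self.ambient_level = None

    def on_rf_wakeup(self, read_light):
        # Step 1: the RF broadcast activates and powers the tag; it records
        # ambient light as the zero reference for later comparison.
        self.ambient_level = read_light()

    def on_projector_on(self, read_light, threshold=0.1):
        # Step 2: once the projector's illumination is on, a tag inside the
        # beam sees a rise above its ambient baseline and replies over RF.
        if self.ambient_level is not None and \
                read_light() - self.ambient_level > threshold:
            return ("IN_BEAM", self.tag_id)  # response sent to the RF reader
        return None                          # out-of-beam tags stay silent
```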

The user aims the handheld device at a tagged surface. The device sends an RF signal to synchronize the tags, followed by illumination with a sequence of binary patterns, or binary structured light, where 0 means the absence of light and 1 means the presence of light. Over the sequence, each projector pixel emits a unique Gray-coded binary pattern, thereby encoding its (x,y) position. Each tag records the code incident on its photosensor, then transmits its identity plus the recorded Gray code back to the handheld RF reader, which decodes the code into the tag's (x,y) location in projector coordinates. The projector then uses the identity and the recovered (x,y) location to beam instructions, text, or images onto the tagged object as augmented reality labels, making it straightforward to display correctly positioned labels on the tagged surface.
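As an illustration of the decoding step, the sketch below converts the Gray-code bit sequences a tag might record into a projector pixel coordinate. The bit ordering (most significant bit first) and pattern lengths are assumptions for the example; the actual projector resolution and protocol would depend on the deployment.

```python
# Sketch of decoding the Gray-coded structured light a tag records; the bit
# ordering (MSB first) and pattern lengths are assumptions for illustration.

def gray_to_binary(bits):
    """Convert a Gray-code bit sequence (MSB first) to an integer."""
    value = bits[0]
    out = value
    for g in bits[1:]:
        value ^= g               # each binary bit is the XOR of the Gray bits so far
        out = (out << 1) | value
    return out

def decode_pixel(column_bits, row_bits):
    """Recover the (x, y) projector pixel from the bits a tag recorded."""
    return gray_to_binary(column_bits), gray_to_binary(row_bits)

# Example: a tag that recorded Gray codes 0110 (columns) and 0011 (rows)
# corresponds to projector pixel (4, 2).
print(decode_pixel([0, 1, 1, 0], [0, 0, 1, 1]))  # -> (4, 2)
```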

Back to Top

Industrial Applications

We described several aspects of RFIG in our previous work [4] and have shown interaction techniques using a handheld or pocket projector [5]. RFIG is motivated by the promise of commercially important applications in inventory control. But because photosensing tags may have many other innovative uses, our goal here is to present the new RFIG-related ideas in the context of several promising industrial applications, outlining broad modes of deployment for geometric analysis. (Note that these are speculative uses, not fully implemented commercial systems.)

Location feedback (such as warehouse management) (see Figure 2). Consider the task of locating boxes containing perishable items (such as crates of fish or boxes of produce) about to expire. Even with traditional RF tagging that records expiration dates in an indexed database, an employee would have to serially inspect every box in a warehouse full of boxes, marking those containing about-to-expire products. Using RFIG tags, a handheld or fixed projector first locates the queried tags, then illuminates them with symbols (such as "X" and "OK"), giving other employees visual feedback. Note that a second user can perform similar operations, without RF collisions with the first reader or its tags, as long as the two projector beams do not overlap.
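A hedged sketch of such a query might look like the following, assuming each responding tag reports its identity and recorded (x, y) projector pixel; lookup_expiry and project_label are hypothetical helpers standing in for the inventory database and the projector, and the two-day horizon is an arbitrary example.

```python
# Illustrative sketch of the warehouse query; lookup_expiry() and
# project_label() are hypothetical helpers, and the horizon is arbitrary.

from datetime import date, timedelta

def label_expiring_boxes(responding_tags, lookup_expiry, project_label,
                         horizon_days=2):
    cutoff = date.today() + timedelta(days=horizon_days)
    for tag_id, x, y in responding_tags:
        # Project "X" on soon-to-expire boxes and "OK" on the rest, at the
        # projector pixel each tag reported during the location phase.
        symbol = "X" if lookup_expiry(tag_id) <= cutoff else "OK"
        project_label(x, y, symbol)
```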

Obstruction detection (such as an object obstructing railroad tracks) (see Figure 3, left). A common task in computer vision with cameras is detecting abnormal conditions through image processing. An example is detecting obstructions on railroad tracks in order to trigger an alarm if a person is, say, on the tracks in a subway station or if suspicious material is on the tracks of a remote freight line. Processing images or video from camera-based systems to detect such incidents is difficult because ambient lighting conditions are unpredictable and challenging, and many ordinary activities can produce false positives. Alternatively, one can solve the vision problem by sprinkling RFIG tags along the tracks, then illuminating them with a fixed or steered beam of temporally modulated light (not necessarily a projector), possibly a 40 kHz infrared beam from a sparse array of light emitters.

The operation is similar to the "beam break" technique commonly used to detect intruders in home-security alarm systems. But wireless tag-based systems are ideally suited for applications where running wires to both ends is impractical. Using retro-reflective markers and detecting a return beam is another common strategy for avoiding wires, but sprinkling a large number of markers creates an authoring nightmare: a large index table must be created and maintained. In the case of RFIG, tag IDs and locations, along with the status of reception of the modulated light, are easy to report. Lack of reception indicates some kind of obstruction, which can be relayed to a central facility where a human observer monitors the scene, possibly through a pan-tilt-zoom surveillance camera.
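The monitoring logic itself can be very simple, as in the sketch below; tag_reports and notify_controller are hypothetical, standing in for the RF reader's polling results and the link to the central monitoring facility.

```python
# Sketch of the track-monitoring check; tag_reports and notify_controller()
# are hypothetical stand-ins for the reader's polling results and the link
# to the central monitoring facility.

def check_track(tag_reports, notify_controller):
    """tag_reports: dict mapping tag_id -> (location, beam_received)."""
    blocked = [(tag_id, location)
               for tag_id, (location, beam_received) in tag_reports.items()
               if not beam_received]
    if blocked:
        # Any tag that failed to receive the modulated beam suggests an
        # obstruction; relay the affected IDs and locations for review.
        notify_controller(blocked)
    return blocked
```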

Ordered placement and orientation (such as books in a library) (see Figure 3, middle). A common task in libraries and pharmacies, as well as in any storage facility, is maintaining a large number of objects in some predetermined order. In libraries with RF-tagged books, a list of books is easily retrieved (for those books within RF range). However, without location information it is difficult for librarians to determine which books are out of alphabetically sorted order; without book-orientation information, it is equally hard to tell whether books are placed upside down. With RFIG and a handheld projector, the system instantly identifies each book's title, as well as its location and orientation. The handheld device then sorts the books by title and by 2D geometric location; a mismatch between the two sorted lists indicates the corresponding book is out of alphabetical order. Because the system knows both the current and the intended shelf locations of the books within RF range, the projector display gives instant visual feedback and instructions, shown in Figure 3, middle, as red arrows from current positions to intended positions. A single book can also be tagged with two RFIG transponders, one at the top of the book's spine and one at the bottom; comparing the coordinates of these two tags lets the librarian determine whether a particular book is upside down.
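The out-of-order check reduces to comparing two sorted lists, as in the following sketch; the data layout (title plus a shelf coordinate per tag) and the downward-growing y axis are illustrative assumptions.

```python
# Sketch of the out-of-order and upside-down checks; the data layout and the
# downward-growing y axis are illustrative assumptions.

def find_misplaced(books):
    """books: list of (title, x) with x increasing along the shelf.
    Return the titles that break alphabetical order."""
    by_location = sorted(books, key=lambda b: b[1])  # physical order on shelf
    by_title = sorted(books, key=lambda b: b[0])     # intended alphabetical order
    # Positions where the two orderings disagree mark misplaced books.
    return [loc[0] for loc, alpha in zip(by_location, by_title)
            if loc[0] != alpha[0]]

def is_upside_down(top_tag_y, bottom_tag_y):
    # With two tags per spine, an inverted book reports the tag from the top
    # of its spine below the one from the bottom (assuming y grows downward,
    # so "below" means a larger y value).
    return top_tag_y > bottom_tag_y
```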

3D path planning/guiding (such as for a robot on an assembly line with arbitrarily oriented objects) (see Figure 3, right). RFIG tags can be used in factories for robot guidance. The idea is similar to more established laser-guided robot operations (such as welding in factories). But suppose a robot has been instructed to grab a certain object in a pile moving by on a conveyor belt. RFID simplifies the object-recognition problem in machine vision, though precisely locating the object is difficult. The idea is to use a fixed projector to locate the RFIG-tagged object, then illuminate the object with a steady, easily identifiable temporal pattern. A camera attached to the robot arm locks onto this pattern through pattern matching, enabling the robot to home in on this object.
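One way the robot's camera could lock onto the temporal pattern is by correlating per-pixel intensity over time with the known blink code, as in this sketch; the frame format (a T x H x W intensity array) and the code length are assumptions for illustration, not the system's actual tracking method.

```python
# Sketch of locking onto the temporal blink pattern; the frame format and
# the known on/off code are assumptions for illustration.

import numpy as np

def find_target(frames, code):
    """frames: array of shape (T, H, W) of camera intensities, one per frame.
    code: length-T sequence of 0s and 1s flashed at the tagged object.
    Return the (row, col) pixel whose intensity over time best matches the code."""
    code = np.asarray(code, dtype=float)
    code = code - code.mean()                        # zero-mean the reference code
    stack = frames - frames.mean(axis=0)             # remove the static background
    scores = np.tensordot(code, stack, axes=(0, 0))  # per-pixel correlation with code
    return np.unravel_index(np.argmax(scores), scores.shape)
```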

Note that in most of the applications we've outlined here, the projector's function is similar to that of devices everyone is familiar with, including TV and IR remote controls and laser pointers, but with some spatial or temporal modulation of the light. The projector acts as a glorified remote control, communicating with a photosensor in the location-sensing phase, and as a glorified laser pointer in the image-projection phase, displaying augmented reality labels.

Back to Top

Conclusion

Several problems can influence optical communication between the projector and a tag. Ambient light, for example, can interfere; wavelength-division-multiplexed communication (as used between TV remote controls and IR photosensors) is commonly used to address this problem. Optical communication also grows noisier as the projector-tag distance increases and as the photosensor gets dirtier. Within these limitations, however, the RFIG method supports intricate, multipurpose geometric operations with ambient intelligence via hybrid optical and RF communication and photosensing wireless tags. Our RFIG work points to some of the possibilities for blurring the boundary between the physical and digital worlds, turning the everyday environment into a self-describing wireless data source, a display surface, and a medium for interaction.

Back to Top

Figures

F1 Figure 1. (Top) Conventional RFID transponder communicates with an RF reader and responds with the ID number stored in the tag’s memory. (Bottom) RFIG transponder communicates with an RF reader, as well as with a spatio-temporal light modulator (such as a modulated IR light). With a full-fledged data projector, the system can, for example, find the stored ID, along with the (x,y) projector pixel location illuminating the tag.

F2 Figure 2. Warehouse scenario. An employee locates items about to expire and receives visual feedback. A second employee performs a similar operation without causing conflict in the interaction because the projector beams do not overlap.

F3 Figure 3. (Left) Detecting an obstruction (such as a person on the tracks near a platform, a disabled vehicle at a railroad intersection, or suspicious material on the tracks). Identifying an obstruction with a camera-based system is difficult, owing to the complex image analysis required under unknown lighting conditions. RFIG tags can be sprinkled along the tracks and illuminated with a fixed or steered beam of temporally modulated light (not necessarily a projector). Tags respond with the status of their reception of the modulated light. Lack of reception indicates an obstruction; a notice can then be sent to a central monitoring facility where a railroad traffic controller observes the scene, perhaps using a pan-tilt-zoom surveillance camera. (Middle) Books in a library. RF-tagged books make it easy to generate a list of titles within RF range. However, incomplete location information makes it difficult to determine which books are out of alphabetically sorted order, and inadequate information about book orientation makes it difficult to detect whether books are placed upside down. With RFIG and a handheld projector, the librarian can identify each book's title, as well as its physical location and orientation. Based on a mismatch between the title sort and the location sort, the system provides instant visual feedback and instructions (shown here as red arrows from current to intended positions). (Right) Laser-guided robot. To guide a robot to pick a certain object from a pile on a moving conveyor belt, the projector locates the RFIG-tagged object and illuminates it with an easily identifiable temporal pattern. A camera attached to the robot arm locks onto this pattern, enabling the robot to home in on the object.

Back to Top

References

    1. Hightower, J. and Borriello, G. Location systems for ubiquitous computing. IEEE Computer 34, 8 (Aug. 2001), 57–66.

    2. Ma, H. and Paradiso, J. The FindIT Flashlight: Responsive tagging based on optically triggered microprocessor wakeup. In Proceedings of the International Conference on Ubiquitous Computing (Ubicomp) (2002), 160–167.

    3. Patel, S. and Abowd, G. A two-way laser-assisted selection scheme for handhelds in a physical environment. In Proceedings of the International Conference on Ubiquitous Computing (Ubicomp) (2003), 200–207.

    4. Raskar, R., Beardsley, P., Van Baar, J., Wang, Y., Dietz, P., Lee, J., Leigh, D., and Willwacher, T. RFIG Lamps: Interacting with a self-describing world via photosensing wireless tags and projectors. ACM Trans. Graph. 23, 3 (Aug. 2004).

    5. Raskar, R., Van Baar, J., Beardsley, P., Willwacher, T., Rao, S., and Forlines, C. iLamps: Geometrically aware and self-configuring projectors. ACM Trans. Graph. 22, 3 (July 2003), 809–818.

    6. Teller, S., Chen, J., and Balakrishnan, H. Pervasive pose-aware applications and infrastructure. IEEE Comput. Graph. Applic. (July 2003).

    7. Want, R. RFID: A key to automating everything. Scientific American 290, 1 (Jan. 2004), 56–65.
