Architecture and Hardware News

3D Sensors Provide Security, Better Games

A variety of techniques allow sensors to locate and recognize objects in space.

Sensor technology is designed to allow machines to interact with real-world inputs, whether they are humans interacting with their smartphones, autonomous vehicles navigating on a busy street, or robots using sensors to aid in manufacturing. Not surprisingly, three-dimensional (3D) sensors, which allow a machine to understand the size, shape, and distance of an object or objects within its field of view, have attracted a lot of attention in recent months, thanks to their inclusion on Apple’s most-advanced (to date) smartphone, the iPhone X, which uses a single camera to measure distance.

Indeed, the TrueDepth system, which replaces the fingerprint-based Touch ID system on the Apple handset, shines approximately 30,000 dots outward onto the user’s face. Then, an infrared (IR) camera captures the image of the dots, which provides depth information based on the density of the dots (closer objects display a dot pattern that is spread out, whereas objects that are farther away create a denser pattern of dots). Altogether, the placement of these dots creates a depth map with 3D data that supplies the system with the information it needs to check for a facial identity match, which then unlocks the device. The key advantage to this single-camera system and others like it is the relatively low cost of implementation.
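Apple does not disclose how TrueDepth converts the dot pattern to depth; the toy function below only illustrates the inverse relationship described above (wider dot spacing implies a closer surface), using a made-up calibration constant.

```python
def depth_from_dot_spacing(spacing_px: float, k: float = 1000.0) -> float:
    """Illustrative structured-light depth estimate.

    Per the description above, closer surfaces spread the projected dots
    apart, so estimated depth falls as the observed dot spacing grows.
    The calibration constant k is hypothetical; a real system is
    calibrated per device and does dense correspondence matching.
    Returns an illustrative depth in millimeters.
    """
    if spacing_px <= 0:
        raise ValueError("dot spacing must be positive")
    return k / spacing_px
```

With k = 1000, a spacing of 10 pixels maps to 100 mm, and doubling the spacing halves the estimated depth, matching the qualitative behavior the article describes.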

For many consumers, their first exposure to 3D sensing technology came in 2010, in the form of Microsoft’s groundbreaking Kinect motion-sensing input device designed for the Xbox gaming consoles and Windows PCs. Utilizing a chipset from Israeli developer PrimeSense (which was purchased by Apple in November 2013), the 3D scanner system, called Light Coding, incorporated an IR emitter and an IR depth sensor. The emitter projected infrared light beams, and the depth sensor read those beams as they were reflected back. The reflected beams were converted into depth information, allowing the distance between an object and the sensor to be measured. The result was a gaming system that could accurately track a player’s motion, so long as the player stayed within an approximately six-square-meter zone in front of the Kinect bar.

In subsequent versions of the Xbox, Microsoft replaced the PrimeSense technology with its own time-of-flight sensor technology, which features wide-angle coverage and three times the fidelity of the previously used technology. Not surprisingly, as the technology has improved, so has the demand for devices with embedded 3D sensing technology, according to Anand Joshi, a principal analyst with technology research firm Tractica LLC.

Figure. The components in Apple’s iPhone X required for Face ID 3D-scanning technology to work.

“Cameras have become really cheap and are being integrated into a wide range of devices,” Joshi says, noting that the growth and availability of computer vision technology and the software to extract depth information has enabled a large application development ecosystem to thrive, thereby supporting even more innovations.

Another 3D imaging technique that may support more advanced functionality is called stereopsis, or stereovision. This technology uses a sensor that takes the feeds from two cameras, and then compares the delta between the horizontal placement of each object to determine how far the object is from the sensor. This technique is deployed by LinX, another technology company acquired by Apple, possibly because the technology is not subject to interference, portending greater potential for use in outdoor applications and over greater distances.
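The geometry behind stereopsis reduces to the standard pinhole relation: depth = focal length × baseline / disparity, where disparity is the horizontal offset of the same object between the two camera images. A minimal sketch, with hypothetical focal-length and baseline values:

```python
def stereo_depth(disparity_px: float,
                 focal_px: float = 800.0,
                 baseline_m: float = 0.06) -> float:
    """Depth from stereo disparity under the pinhole camera model.

    disparity_px: horizontal shift of the object between the two views.
    focal_px and baseline_m are hypothetical example values; real
    systems obtain them through camera calibration.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

An object shifted 24 pixels between the views would sit about 2 m away under these example parameters; as the disparity shrinks toward zero, estimated depth grows, which is why stereo accuracy degrades at long range.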

Though unconfirmed, this stereo vision technology may be used within the iPhone’s forthcoming rear 3D sensor package, which could add 3D depth-sensing features to the handset’s rear camera, enabling virtual and augmented reality applications, along with new types of videoconferencing and gaming applications, to be accessed via the iPhone.

Other 3D sensors use beams of light to measure distance directly, which also lets them ascertain the size, shape, and orientation of an object. In “time of flight” (TOF) technology, a laser emits infrared light pulses and the sensor measures the amount of time it takes for the pulses to bounce back, establishing where an object is located. A key advantage of this technology is that an object’s proximity and size can be captured very quickly with a high degree of accuracy (within a few centimeters), making it suitable for industrial, automotive, or other applications where speed and accuracy are required.
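At its core, the TOF calculation is just distance = (speed of light × round-trip time) / 2, since the pulse travels to the object and back. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_s: float) -> float:
    """One-way distance from a time-of-flight round-trip measurement.

    The pulse travels out and back, so the one-way distance is half
    the total path length.
    """
    if round_trip_s < 0:
        raise ValueError("round-trip time cannot be negative")
    return SPEED_OF_LIGHT * round_trip_s / 2.0
```

A 10-nanosecond round trip corresponds to roughly 1.5 m, which illustrates why TOF sensors need picosecond-scale timing precision to resolve distances to within a few centimeters.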

Examples of sensors that use time-of-flight technology include light detection and ranging (LiDAR) sensors used in advanced driver assistance systems (ADAS) to determine the distance between the vehicle and other objects (for systems such as forward collision warning, lane-departure warning, and adaptive cruise control), as well as in gaming systems like the second-generation Kinect sensor, which is used in Microsoft’s popular Xbox One console. LiDAR technology is being incorporated across the board by automotive OEMs, as they seek to incorporate sensing technology that can augment and enhance drivers’ awareness and reaction times. The advanced sensing technology is also being incorporated into forthcoming autonomous vehicles as part of a comprehensive and integrated network of sensing technology.

Intel Corp.’s RealSense technology functions similarly to the Kinect’s cameras, projecting a laser and measuring how it bounces or otherwise interacts with the environment. This type of technology is well suited to indoor applications, and is well supported by an existing base of software designed specifically to manage this technique.

The choice of the type of sensor to utilize is largely based around the application, and how the technology will be deployed, according to Joshi.

“One can use single, stereo, or a multiple-camera based system,” he explains. “You can also supplement the visible light image with additional sensors such as infrared or radar. Mobile phones are based on a single or stereo camera, and use TOF or triangulation. However, automotive [applications] use radar or other sensors to supplement camera systems. [Microsoft’s game-focused] Kinect has used structured light. The choice of algorithm is up to the developer, depending on the sensors and compute capacity available.”

Joshi notes that while single-camera 3D systems, such as those used in mobile phones, are the least expensive, they are also the most limited. Says Joshi, “The sophistication starts when you add stereo cameras, which use two sensors, and keeps going on up when IR and other sensors are added.”



The growth of the personal mobile handset market, as well as other verticals, including automotive, healthcare, and retail, has led to significant activity among makers of 3D sensors and chipsets. Indeed, Sony Corp., which currently has 49% of the market for image sensors, is developing new TOF sensors that are smaller than today’s sensors, and can calculate depth at greater distances.

Meanwhile, STMicroelectronics, which currently supplies Apple and a number of other smartphone vendors with its FlightSense sensors, has shipped more than 300 million TOF chips, according to the company. These sensors are suited for a number of applications beyond smartphones, including drones, augmented-reality applications, and industrial applications. A single-module design integrating a laser and sensor array allows reliable proximity detection, ambient light detection, and low-power operation to be easily incorporated into a number of device types.

Not to be outdone, Apple announced it has invested $390 million from its $1-billion Advanced Manufacturing Fund in Sunnyvale, CA-based Finisar Corp., a supplier of vertical-cavity surface-emitting lasers (VCSELs), fueling speculation that Finisar will ultimately be a supplier for a potential rear-facing 3D sensor system on the next iPhone.

Beyond basic scanning, the real value of 3D sensors lies in combining them with other sensor types. Industry participants and analysts say the availability of lower-cost, higher-powered processors and software has enabled sensor data fusion, which combines input from multiple sensors and processes the data in real time, allowing devices to be built with near-humanlike sensing abilities.

“Computer vision has also allowed sensor fusion to incorporate additional sensors such as laser and radar,” Joshi says, noting that the market has been driven by “a combination of falling price [and] better technology” availability.
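A common way to fuse redundant range readings from sensors of differing accuracy, such as a camera-derived depth estimate and a radar return, is inverse-variance weighting: more-trusted sensors contribute more to the fused estimate. The sketch below is illustrative only; the measurement values and variances are hypothetical, and real fusion pipelines typically use Kalman filters or similar estimators.

```python
def fuse(measurements):
    """Inverse-variance weighted average of (value, variance) pairs.

    Each measurement is weighted by 1/variance, so a more accurate
    sensor (smaller variance) pulls the fused estimate toward its
    reading. A standard building block for combining sensor data.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    return sum(v * w for (v, _), w in zip(measurements, weights)) / total

# Hypothetical example: a lidar reading (accurate) and a camera depth
# estimate (noisier) of the same object.
fused = fuse([(2.1, 0.01), (2.5, 0.25)])
```

Here the fused distance lands much closer to the low-variance lidar reading of 2.1 m than to the noisier camera estimate, which is exactly the behavior sensor fusion is meant to deliver.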

According to Joshi, Intel, Google, Apple, and Microsoft are leading the way on the consumer side, while Natick, MA-based Cognex Corp. is big on the industrial side. Within the automotive sector, Nvidia is the leader in AI chipsets that are used to power computer vision technology, and many automotive companies use them along with a wide variety of other sensors to enable autonomous driving and advanced driver-assistance systems.

However, applications for full-sized 3D sensors are not limited to smartphones or other consumer devices; the technology can be used for any type of 3D scanning in which an object must be mapped with precision. Applications exist or are in the works involving 3D printing, design, mapping, object recognition, facial recognition, gesture-based control, and other industrial or commercial applications.

Jae-Yong Lee, a venture capital investment manager with ReWired, a $100-million robotic-focused venture studio based in London, says he sees great potential for the technology in retail analytics. “If you could install affordable [3D] sensors within a store so you could really measure the traffic—where the people are, where they’re walking toward, where they are hesitating—you could really optimize how you display your items and how you can redesign the store for more sales.”

In the near term, the potential for 3D sensors beyond consumer devices is likely to develop within the burgeoning autonomous vehicle market, says Pier-Olivier Hamel, a product manager with LeddarTech, a Quebec, Canada-based company that manufactures LiDAR technology for the automotive market. The company currently focuses on using LiDAR to accurately capture the 3D profile of objects perceived by the sensor, and on making this technology available to automobile OEMs and Tier-One suppliers to classify objects for training autonomous and semi-autonomous driving systems. While the company is not creating LiDAR technology for smaller devices now, that certainly is a possibility for the future.

“We can see with the miniaturization of everything,” Hamel explains. “I’m sure that there will be applications for personal devices and smartphones or augmented reality; those are all possibilities.”



With all this activity, it is no surprise the market for 3D sensors and components is likely to rise. Tractica, which released its 3D Imaging Hardware and Software market study in early 2016, projected the market for 3D imaging technology would reach $24.9 billion globally by 2024, up from $3.2 billion in 2014, which reflects a compound annual growth rate (CAGR) of 23%.

Joshi, the study’s author, says that while the $24.9-billion market value may be overly bullish, as the technology did not take off quite as quickly as originally predicted, the consumer and mobile market will continue to be the largest sectors for 3D imaging technology, accounting for about $10.1 billion of the overall market by 2024.

* Further Reading

3D Imaging Hardware and Software Market to Reach $24.9 Billion by 2024, Tractica LLC, January 14, 2016 http://bit.ly/2BDJIrF

Graham, L.A., Chen, H., Cruel, J., Guenter, J., Hawkins, B., Hawthorne, B., Kelly, D.Q., Melgar, A., Martinez, M., Shaw, E., and Tatum, J.A.
High-power VCSEL arrays for consumer electronics, http://bit.ly/2BECC69

What Is Time-of-Flight? – Vision Campus, Basler AG, May 31, 2016 http://bit.ly/2BDKLrB
