
A Camera the Size of a Grain of Salt Could Change Imaging as We Know It

The “meta-optics” camera is 500,000 times smaller than comparable imaging devices.


When it comes to cameras, size matters, but not in the way you think.

Any time a new smartphone is released, it is easy to drool over the latest, greatest, and biggest features that allow you to take even more stunning selfies composed of even more megapixels. However, in the world of cameras, smaller cameras could end up having a far greater impact on the world at large—and enable a ton of positive applications in society—than the next iPhone camera. Work from researchers at Princeton University and the University of Washington is pointing the way.

A team of researchers from both institutions has published work that uses innovative methods and materials to create a “meta-optics” camera that is the size of a single grain of salt.

The ultracompact camera system developed by researchers at Princeton University and the University of Washington relies on a technology called a metasurface, which is studded with 1.6 million cylindrical posts and can be produced much like a computer chip.

The meta-optics camera is the first device of its kind to produce full-color images that are equal in quality to those produced by conventional cameras, which are an order of magnitude larger. In fact, the meta-optics camera is 500,000 times smaller than conventional cameras that capture the same level of image quality.

The approach the researchers used to create this meta-optics camera’s small form factor is a huge deal.

They used nanostructures called “metasurfaces” and novel approaches to hardware design to build a meta-optics camera far superior to past efforts, and they paired it with AI-powered image post-processing to produce high-quality images from the camera’s raw output.

Their work is impressive on its own for breaking through past limitations of meta-optics imaging devices. Yet it is also notable because it opens the door to the creation of extremely small cameras that can create high-fidelity images for a range of industries and use-cases (for instance, by enabling the use of less-invasive medical imaging without compromising image quality).

This work also unlocks the science-fiction-like possibilities of turning entire surfaces into cameras made up of thousands of such devices, and launching high-quality, ultra-light telescopes into space.

Here’s how they did it—and why it could change the world of imaging as we know it.

From conventional lenses to metasurfaces

All camera designers and engineers, no matter the type(s) of cameras they design, share the same challenge: they want to make their cameras as compact as possible while still allowing them to capture as much light as possible.

Smartphone cameras present a great example of the trade-offs inherent in solving this challenge. Each new smartphone packs more computational firepower into smaller and thinner frames, to the point where the newest generations of smartphones look positively futuristic. However, smartphone cameras are still obviously large and obtrusive on otherwise sleek smartphone frames because camera designers are packing more and more lenses into them so they can take higher-quality pictures.

This means researchers are always on the hunt for ways to compress more optical power into smaller form factors, said Ethan Tseng, a researcher at Princeton who was part of the team that produced the salt-grain-sized meta-optics camera.

“Metasurfaces have emerged as a promising candidate for performing this task,” Tseng said.

A metasurface, Tseng explained, is an engineered, artificial material that manipulates light in unique ways. It is an ultrathin, flat surface just half a millimeter wide, studded with 1.6 million cylindrical posts called “nano-antennas.” Each nano-antenna can be individually tuned by researchers to shape light in a specific way so that, together, they are capable of producing images just like standard refractive glass lenses, but in a device that is much, much smaller.

“Using metasurfaces enables us to open a large design space of optics that we only hardly were able to access before with conventional refractive optics,” said Felix Heide, a Princeton professor who is the senior author of the study that produced the salt-grain-sized meta-optics camera.

With a standard refractive lens, you can only really shape the surface of the lens and vary the material to get better results. However, with metasurfaces, researchers are able to modulate light at the sub-wavelength level, said Heide.

In the salt-grain-sized camera, the research team was able to create a single metasurface with more light-steering power than a traditional lens, dramatically reducing the overall size of the camera while still achieving similar results. The meta-optic itself is 0.5 millimeters across and the sensor 1 millimeter, making the entire camera far smaller than a conventional lens assembly on its own.

The researchers did not invent the concept of using metasurfaces for cameras, but they did determine how to make the approach work in a way that is actually useful in the real world. Meta-optics cameras have been designed before, but none of them could produce images of sufficient quality for real-world imaging use cases.

“Existing approaches have been unable to design a meta-optics camera that can capture crisp, wide-field-of-view full-color images,” said Tseng.

The research team’s work changed that. Their meta-optics camera is the first high-quality, polarization-insensitive nano-optic imager for full-color, wide field-of-view imaging.

“We addressed the shortcomings of previous meta-optics imaging systems through advances in both hardware design and software post-processing,” said Tseng. To do that, the researchers used artificial intelligence to address two challenges: lens design and image processing.

First, the team used novel AI optimization algorithms to design the nano-antennas on the metasurface itself. Simulating the optical response of a metasurface and calculating the corresponding gradients can be quite computationally expensive, Tseng said, so the team created fast “proxies” for the metasurface physics that let them search the design space very quickly.
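
To make the idea concrete, here is a minimal sketch of proxy-based design in PyTorch: a toy differentiable “proxy” maps nano-antenna post diameters to the phase they impart, and gradient descent pushes the design toward an ideal focusing profile. The proxy function, dimensions, and every number below are illustrative assumptions, not the team’s actual simulator or code.

```python
# Illustrative sketch only: a toy differentiable "proxy" maps nano-antenna post
# diameters to the phase they impart, and gradient descent pushes the design
# toward an ideal focusing profile. All values here are assumptions.
import math
import torch

N = 256                 # toy 1-D row of nano-antennas (the real device has ~1.6 million)
wavelength = 550e-9     # assumed design wavelength in meters (green light)
focal_length = 1e-3     # assumed focal distance in meters
pitch = 400e-9          # assumed center-to-center spacing of posts in meters

# Post diameters are the design variables being optimized.
diameters = torch.full((N,), 200e-9, requires_grad=True)

def proxy_phase(d):
    """Fast stand-in for a full electromagnetic simulation:
    a monotonic map from post diameter to imparted phase."""
    d_min, d_max = 100e-9, 300e-9
    return 2 * math.pi * (d - d_min) / (d_max - d_min)

# Target: the hyperbolic phase profile of an ideal focusing lens, wrapped to [0, 2*pi).
x = (torch.arange(N, dtype=torch.float32) - N / 2) * pitch
target = (2 * math.pi / wavelength) * (torch.sqrt(x**2 + focal_length**2) - focal_length)
target = target % (2 * math.pi)

opt = torch.optim.Adam([diameters], lr=1e-9)   # roughly 1-nm steps in diameter
for step in range(2000):
    opt.zero_grad()
    phase = proxy_phase(diameters)
    # Wrapped phase error so that 0 and 2*pi count as identical.
    err = torch.remainder(phase - target + math.pi, 2 * math.pi) - math.pi
    loss = (err ** 2).mean()
    loss.backward()
    opt.step()
    with torch.no_grad():
        diameters.clamp_(100e-9, 300e-9)       # keep posts within fabricable sizes
```

Because the proxy is cheap to evaluate and differentiable, millions of such design variables can be updated per optimization step, which is the point of avoiding a full electromagnetic simulation in the inner loop.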

Then, a physics-based neural network was used to process the images captured by the meta-optics camera. Because the neural network was trained on metasurface physics, it can remove aberrations produced by the camera.
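
Under the same caveats, the sketch below shows what physics-aware post-processing can look like in miniature: a small convolutional network is trained to undo an assumed point-spread-function blur plus sensor noise. The architecture, PSF, and training data are placeholders, not the researchers’ actual network or forward model.

```python
# Illustrative sketch only: train a small residual CNN to undo a simulated
# aberration (a box-blur PSF plus read noise). All components are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeblurNet(nn.Module):
    """Tiny residual CNN that predicts a correction on top of the aberrated input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)

# Assumed 9x9 box-blur PSF standing in for the metasurface's simulated aberrations.
psf = (torch.ones(1, 1, 9, 9) / 81.0).repeat(3, 1, 1, 1)   # one kernel per color channel

def simulate_capture(clean):
    """Forward model: blur each channel with the PSF and add sensor noise."""
    blurred = F.conv2d(clean, psf, padding=4, groups=3)
    return blurred + 0.01 * torch.randn_like(blurred)

model = DeblurNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    clean = torch.rand(8, 3, 64, 64)          # placeholder training images
    loss = F.mse_loss(model(simulate_capture(clean)), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the training data are generated by the camera’s own simulated physics, the network learns to invert exactly the aberrations that optic produces, which is the “physics-based” part of the approach.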

“We were the first to treat the metasurface as an optimizable, differentiable layer that can perform computation with light,” said Heide. “This made it possible to effectively treat metasurfaces like layers in optical neural networks and piggyback on the large toolbox of AI to optimize these layers.”

Finally, the metasurface physics simulator and the post-processing algorithm were combined into a single pipeline, which was used both to design the meta-optic that was ultimately fabricated and to reconstruct the images it captures into high-quality, full-color output.
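
As a rough, hypothetical sketch of that combination rather than the team’s pipeline: because both stages are differentiable, the optic (here collapsed into a learnable per-channel point spread function) and the reconstruction network can be trained jointly against final image quality. Every name and number below is an illustrative assumption.

```python
# Illustrative sketch only: treat the optic itself as a differentiable layer and
# train it jointly with the reconstruction network against final image quality.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableOptic(nn.Module):
    """Toy optical layer: a learnable, energy-conserving PSF applied to the scene."""
    def __init__(self, size=9):
        super().__init__()
        self.kernel = nn.Parameter(torch.rand(3, 1, size, size))

    def forward(self, scene):
        psf = F.softmax(self.kernel.flatten(1), dim=1).view_as(self.kernel)
        return F.conv2d(scene, psf, padding=self.kernel.shape[-1] // 2, groups=3)

optic = DifferentiableOptic()
recon = nn.Sequential(                         # stand-in reconstruction network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)

# One optimizer over both the optical design variables and the network weights.
opt = torch.optim.Adam(list(optic.parameters()) + list(recon.parameters()), lr=1e-3)

for step in range(200):
    scene = torch.rand(8, 3, 64, 64)           # placeholder training scenes
    captured = optic(scene) + 0.01 * torch.randn(8, 3, 64, 64)   # simulated noisy capture
    loss = F.mse_loss(recon(captured), scene)
    opt.zero_grad()
    loss.backward()                            # gradients reach the optic and the network alike
    opt.step()
```

This is the sense in which the metasurface is treated “like layers in optical neural networks”: the optic and the software are optimized as one system, end to end.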

This innovative combination of hardware and software means that the researchers’ meta-optics camera produces images that could actually be used in real-world contexts, like medical imaging.

“Only combined with computation were we able to explore this design space and make our lenses work for broadband applications,” said Heide.

Better endoscopes, smartphone cameras, telescopes

The potential real-world applications of the research are vast.

The most obvious one is medical imaging, which directly benefits from cameras that are as small as possible so as not to be invasive. “We are very excited about miniaturized optics in endoscopes, which could allow for novel non-invasive diagnosis and surgery,” said Heide.

Ultra-compact endoscopes powered by a meta-optics camera could even image regions of the body that are difficult to reach with today’s technology.

Another major area of interest for using meta-optics cameras—or cameras that incorporate meta-optics techniques—is consumer hardware. The ability to design cameras and lenses that are an order of magnitude smaller than those in devices today opens up exciting possibilities across smartphones, wearables, and augmented reality (AR) and virtual reality (VR) headsets.

Your smartphone screen or the back of your phone itself could become a camera, said Heide. Wearables could bake high-quality cameras right into the surfaces of, say, eyeglasses. Or, VR headsets could become dramatically lighter and sleeker, leading to higher adoption and greater use of these devices on the go.

Drones also could benefit from significantly smaller cameras. All drones require cameras of some type to perform their work, whether for military purposes like reconnaissance or civilian ones like order delivery. Much smaller cameras would result in far lighter drones that consume far less battery power, said Tseng.

In fact, with a breakthrough like the meta-optics camera, the very nature of cameras can be rethought entirely.

“Our tiny cameras have also recently allowed us to rethink large cameras as flat arrays of salt-grain cameras—effectively turning surfaces into cameras,” said Heide. Larger metasurfaces could even replace the lenses needed for telescopes, making it not only easier to build them but also to send more powerful lenses into space.

While researchers are still in the early stages of brainstorming and engineering potential real-world applications for meta-optics cameras, the way in which metasurfaces are produced has them excited.

“Metasurfaces are especially interesting because they can be made using the same mature technology used to produce computer chips,” said Tseng. Today’s computer chips are produced on wafers, and each wafer contains hundreds of identical copies of the chip. Metasurfaces are produced in an identical way, which holds the promise of greatly reducing the individual cost per metasurface produced, he said.

Not to mention, while the exact materials used to make metasurfaces vary, the researchers used a silica wafer for their mounting surface and silicon nitride for their nano-antennas. Both materials are compatible with today’s semiconductor manufacturing techniques that pump out computer chips.

This means going from sophisticated computer chips to meta-optics cameras might be easier than we think. If so, the picture for how to use these devices in many different industries could get much, much clearer.

