To build a high-quality, ultracompact imager, the researchers devised a machine learning approach, which they call neural nano-optics, that learns the metasurface's physical structure end to end together with a neural, feature-based image reconstruction algorithm.
They found that, compared to existing approaches, neural nano-optics produces high-quality, wide field-of-view reconstructions corrected for chromatic aberrations.
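As a rough illustration of what end-to-end learning means here, the sketch below (a toy model, not the team's actual code) represents the metasurface with learnable phase values, simulates the blurred image a sensor would record, reconstructs the scene with a small neural network, and lets a single loss update both the optics parameters and the network weights. All module names, shapes, and values are illustrative assumptions.

```python
# Toy end-to-end pipeline: learnable optics + neural reconstruction trained with one loss.
# Everything here (shapes, network size, training data) is illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableOptics(nn.Module):
    """Stand-in for the metasurface: learnable phase profile -> point spread function (PSF)."""
    def __init__(self, aperture=63):  # odd size keeps the blur kernel centered
        super().__init__()
        self.phase = nn.Parameter(torch.zeros(aperture, aperture))

    def psf(self):
        field = torch.exp(1j * self.phase)                      # complex field leaving the surface
        far_field = torch.fft.fftshift(torch.fft.fft2(field))   # Fraunhofer propagation
        psf = far_field.abs() ** 2
        return psf / psf.sum()                                   # normalize energy

    def forward(self, scene):
        # Simulate capture: blur each color channel of the scene with the current PSF.
        kernel = self.psf()[None, None].repeat(scene.shape[1], 1, 1, 1)
        return F.conv2d(scene, kernel, padding=kernel.shape[-1] // 2, groups=scene.shape[1])

class Reconstructor(nn.Module):
    """Small CNN standing in for the neural, feature-based reconstruction."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

optics, recon = DifferentiableOptics(), Reconstructor()
optimizer = torch.optim.Adam(list(optics.parameters()) + list(recon.parameters()), lr=1e-3)

for _ in range(10):                                      # toy training loop
    scene = torch.rand(4, 3, 64, 64)                     # stand-in for training images
    measurement = optics(scene)                          # simulated sensor capture
    estimate = recon(measurement)                        # neural reconstruction
    loss = F.mse_loss(estimate, scene)                   # image-quality loss
    optimizer.zero_grad()
    loss.backward()                                      # gradients flow into BOTH modules
    optimizer.step()
```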
The camera’s metasurface, which is just half a millimeter wide, is studded with 1.6 million cylindrical nanostructures, each with its own geometry and each functioning like an optical antenna. Because every structure’s design is different, the array as a whole shapes the optical wavefront into the form needed to produce an image. Machine learning-based algorithms determine the geometry of each nanostructure, which then interacts with incoming light accordingly.
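To give a sense of what shaping the wavefront means, the short calculation below evaluates the classic hyperbolic phase profile of an ideal focusing metasurface, in which each nanopost's local phase delay depends on its distance from the center. This is a textbook reference profile, not the learned design of the camera described here; the wavelength and focal length are arbitrary illustrative values.

```python
# Reference example: the hyperbolic phase profile an ideal focusing metasurface imposes.
# phi(x, y) = -(2*pi / wavelength) * (sqrt(x^2 + y^2 + f^2) - f)
# Values below are illustrative, not the parameters of the camera in the article.
import numpy as np

wavelength = 550e-9            # green light, in meters
focal_length = 1e-3            # 1 mm
radius = 0.25e-3               # half of the 0.5 mm aperture width

coords = np.linspace(-radius, radius, 1265)   # 1265^2 ~ 1.6 million sample points
xx, yy = np.meshgrid(coords, coords)
phi = -(2 * np.pi / wavelength) * (np.sqrt(xx**2 + yy**2 + focal_length**2) - focal_length)
phi = np.mod(phi, 2 * np.pi)   # each nanopost only needs to span a 0-to-2*pi phase range
```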
“It has been a challenge to design and configure these little nanostructures to do what you want,” Princeton researcher Ethan Tseng said. “For this specific task of capturing large field-of-view RGB images, it was previously unclear how to co-design the millions of nanostructures together with post-processing algorithms.”
In response to this challenge, UW professor Shane Colburn created a computational simulator to automate the testing of different nano-antenna configurations. Colburn also developed a model that approximates the metasurface's image formation efficiently and with sufficient accuracy, reducing the memory and time that would otherwise be required to simulate such a large number of antennas and their interactions with light.
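The paper's simulator is not reproduced here, but one common way to approximate a metasurface's response efficiently, consistent with the kind of proxy model described above, is to simulate a single nanopost rigorously for a handful of geometries and then interpolate that response across the full design, rather than modeling all 1.6 million antennas directly. The numbers and function names in this sketch are placeholders, not values from the study.

```python
# Hedged sketch of a proxy model: look up each nanopost's optical response from a
# small precomputed table rather than simulating the whole surface from first principles.
# The table values below are invented for illustration.
import numpy as np

post_diameters_nm = np.array([100, 150, 200, 250, 300])            # simulated geometries
simulated_phase = np.array([0.0, 1.2, 2.6, 4.1, 5.8])              # phase delay, radians
simulated_transmission = np.array([0.95, 0.96, 0.97, 0.96, 0.94])  # amplitude transmission

def post_response(diameter_nm):
    """Interpolate complex transmission for an arbitrary post diameter."""
    phase = np.interp(diameter_nm, post_diameters_nm, simulated_phase)
    amplitude = np.interp(diameter_nm, post_diameters_nm, simulated_transmission)
    return amplitude * np.exp(1j * phase)

# Build the aperture field from a map of post diameters, then propagate it to the
# sensor plane with a single FFT (Fraunhofer approximation) to get the PSF.
design = np.random.uniform(100, 300, size=(512, 512))               # stand-in design
aperture_field = post_response(design)
psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture_field))) ** 2
psf /= psf.sum()
```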
The integration of the metasurface optical layer and the signal processing algorithms improved the camera’s performance in natural light conditions, compared to previous metasurface cameras that require ideal conditions to produce high-quality images, Princeton professor Felix Heide said.
UW researcher James Whitehead fabricated the metasurfaces using silicon nitride, a material that is compatible with standard semiconductor manufacturing methods. According to the researchers, a silicon-nitride-based metasurface design could be easily mass-produced at a lower cost than the lenses in conventional cameras.
The researchers compared images produced with their system to images from previous metasurface cameras and images captured by a conventional compound optic. Aside from a little blurring at the edges of the frame, the nanosize camera’s images were comparable to those of the conventional setup.

The imaging method is a step toward ultrasmall cameras that could enable applications in endoscopy and brain imaging, or be deployed in distributed arrays across object surfaces, the scientists said. The nano-camera could be used for minimally invasive endoscopy with medical robots to diagnose and treat diseases, and it could improve imaging for robots with size and weight constraints. Arrays of thousands of such nano-cameras could be used for full-scene sensing, turning surfaces into cameras.
“Although the approach to optical design is not new, this is the first system that uses a surface optical technology in the front end and neural-based processing in the back,” Joseph Mait, a consultant at Mait-Optik, said. “The significance of the published work is completing the Herculean task to jointly design the size, shape, and location of the metasurface’s million features and the parameters of the post-detection processing to achieve the desired imaging performance.”
The researchers are working to add more computational abilities to the camera. Beyond optimizing image quality, they hope to add capabilities for object detection and other sensing modalities relevant for medicine and robotics.
Heide envisions using ultracompact imagers to transform surfaces into sensors.
“We could turn individual surfaces into cameras that have ultrahigh resolution, so you wouldn’t need three cameras on the back of your phone anymore, but the whole back of your phone would become one giant camera. We can think of completely different ways to build devices in the future,” he said.
The research was published in Nature Communications (www.doi.org/10.1038/s41467-021-26443-0).
