Further, the photonic neural network, developed at the University of Pennsylvania, is scalable, allowing it to classify images of increasing complexity.
The speed of the on-chip deep neural network derives from its ability to directly process the light it receives from an object of interest. It does not need to convert optical signals to electrical signals or change input data to a binary format before it can recognize images.

The researchers ensured that the training for the network was specific enough to produce accurate image classifications, yet general enough to remain useful when the network is presented with new data sets. The network can be scaled up by adding neural layers; as layers are added, the network can read data from more complex, higher-resolution images.
Though current on-chip image classification technology can perform billions of computations per second, its computing speed is limited by a linear, clock-based processing schedule in which computation steps must be performed one after another.
In contrast, the on-chip deep neural network directly processes optical waves as they propagate through the network’s layers. The nonlinear activation function is realized optoelectronically and allows a classification time of under 570 ps.
“Our chip processes information through what we call ‘computation-by-propagation,’ meaning that unlike clock-based systems, computations occur as light propagates through the chip,” said Farshid Ashtiani, a researcher on the work. “We are also skipping the step of converting optical signals to electrical signals because our chip can read and process optical signals directly, and both of these changes make our chip a significantly faster technology.”
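To make the idea concrete, the short sketch below (a conceptual illustration only, not the chip's actual implementation) writes out in software the mathematics that the chip carries out with propagating light: each layer linearly mixes its inputs and then applies a nonlinear activation, with no clocked instruction sequence or stored intermediate values. The layer sizes, random weights, and ReLU activation here are illustrative assumptions, not values from the paper.

    # Conceptual sketch of a deep neural network forward pass, the operation the
    # photonic chip performs optically as light propagates through its layers.
    # All sizes, weights, and the activation function are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        # Placeholder nonlinearity standing in for the chip's optoelectronic activation.
        return np.maximum(x, 0.0)

    # Illustrative 3-layer network: 9 optical inputs -> 6 hidden nodes -> 4 output classes.
    weights = [rng.normal(size=(6, 9)), rng.normal(size=(4, 6))]

    def classify(optical_inputs):
        """Propagate the input 'signal' through the layers and pick the strongest output."""
        signal = np.asarray(optical_inputs, dtype=float)
        for w in weights:
            signal = relu(w @ signal)   # each layer: linear mix, then nonlinear activation
        return int(np.argmax(signal))   # index of the strongest output 'detector'

    print(classify(rng.random(9)))      # prints a class index between 0 and 3

In the hardware, the equivalent of each matrix-vector product happens as the light traverses a layer, which is why there is no per-step clock to wait on.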

“To understand just how fast this chip can process information, think of a typical frame rate for movies,” professor Firooz Aflatouni said. “A movie usually plays between 24 and 120 frames per second. This chip will be able to process nearly 2 billion frames per second.”
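For context, a classification time of under 570 ps corresponds to roughly 1 / (570 × 10⁻¹²) s ≈ 1.75 × 10⁹ classifications per second, consistent with the nearly 2 billion frames per second cited.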
In addition to eliminating analog-to-digital conversion, the chip’s direct, clockless optical data processing removes the need for a memory module, allowing faster and more energy-efficient neural networks. “When current computer chips process electrical signals, they often run them through a graphics processing unit, or GPU, which takes up space and energy,” Ashtiani said. “Our chip does not need to store the information, eliminating the need for a large memory unit.”
The elimination of a memory module can also increase data privacy. “With chips that read image data directly, there is no need for photo storage and thus, a data leak does not occur,” Aflatouni said.

“We already know how to convert many data types into the electrical domain — images, audio, speech, and many other data types,” Aflatouni said. “Now, we can convert different data types into the optical domain and have them processed almost instantaneously using this technology.”
The team’s next steps are to further investigate the chip’s scalability and to explore 3D object classification.
The research was published in Nature (www.doi.org/10.1038/s41586-022-04714-0).
