NTT Research and Cornell Scientists Introduce Deep Physical Neural Networks
NTT Research and Cornell University have developed a new algorithm for training deep neural networks on unconventional hardware, as detailed in a Nature article published on January 26. The research demonstrates a physics-aware training algorithm on optical, electronic, and mechanical systems, which achieved accuracies of 97%, 93%, and 87%, respectively, on an image-classification task. This breakthrough aims to improve the energy efficiency of machine learning, with potential applications in robotics and smart sensors.
- Achieved 97% accuracy with the optical system on an image-classification task.
- Developed a new algorithm to enhance energy efficiency in machine learning.
- Collaborative research with prestigious institutions like Cornell University.
- Potential application in innovative technologies like robotics and smart sensors.
Article in Nature Explains the Application of Physics-Aware Training Algorithm and Shares Results of Tests on Three Physical Systems
Deep learning, a subset of artificial intelligence (AI), uses neural networks that feature several layers of interconnected nodes. Deep neural networks are now pervasive in science and engineering. To train them to perform tasks such as image recognition, users rely on a training method known as backpropagation (short for “backward propagation of errors”). To date, this training algorithm has been implemented on digital electronics. The computational requirements of existing deep learning models, however, have grown rapidly and are now outpacing Moore’s Law, the longstanding observation regarding the miniaturization of integrated circuits over time. As a result, scientists have sought to improve the energy efficiency and speed of deep neural networks.
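For readers unfamiliar with the method, backpropagation amounts to a forward pass that computes predictions and a backward pass that applies the chain rule to propagate the output error back through each layer. A minimal sketch, here training a tiny two-layer network on the XOR problem (an illustrative toy example, not code from the research):

```python
import numpy as np

rng = np.random.default_rng(0)
# XOR: the classic task a single linear layer cannot solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: compute predictions layer by layer.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error toward the input,
    # layer by layer, via the chain rule.
    d_out = (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates for each layer's weights and biases.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)

loss = float(np.mean((p - y) ** 2))
```

Every step here is ordinary arithmetic, which is the point of the quote that follows: nothing in the procedure requires digital hardware.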
“The backpropagation algorithm is a series of mathematical operations; there's nothing intrinsically digital about it. It just so happens that it's only ever been performed on digital electronic hardware,”
The team calls the trained systems physical neural networks (PNNs), to emphasize that their approach trains physical processes directly, in contrast to the traditional route in which mathematical functions are trained first, and a physical process is then designed to execute them. “This shortcut of training the physics directly may allow PNNs to learn physical algorithms that can automatically exploit the power of natural computation and makes it much easier to extract computational functionality from unconventional, but potentially powerful, physical substrates like nonlinear photonics,”
In the Nature article, the authors describe the application of their new algorithm, which they call physics-aware training (PAT), to several controllable physical processes. They introduce PAT through an experiment that encoded simple sounds (test vowels) and trainable parameters into the spectrum of a laser pulse, then constructed a deep PNN by feeding the outputs of one optical transformation as inputs to the next, forming layers. After being trained with PAT, the optical system classified test vowels with 93 percent accuracy. To demonstrate the approach’s universality, the authors then trained three physical systems to perform a more difficult image-classification task. They used the optical system again, this time as a hybrid (physical-digital) PNN, and additionally set up electronic and mechanical PNNs. The final accuracies were 97 percent, 93 percent, and 87 percent for the optics-based, electronic, and mechanical PNNs, respectively. Given the simplicity of these systems, the authors consider these results auspicious. They forecast that, by using physical systems very different from conventional digital electronics, machine learning may be performed much faster and more energy-efficiently. Alternatively, these PNNs could act as functional machines, processing data outside the usual digital domain, with potential uses in robotics, smart sensors, nanoparticles, and elsewhere.
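The essential trick of physics-aware training, as the paper describes it, is a hybrid loop: the forward pass runs on the real physical system, while the backward pass uses a differentiable digital model of that system to estimate gradients. A minimal sketch of that loop, with a noisy simulated process standing in for real hardware (the toy process and all names here are illustrative assumptions, not the authors’ code):

```python
import numpy as np

rng = np.random.default_rng(1)

def physical_forward(x, theta):
    # Stand-in for the real physical system: a nonlinear transformation
    # controlled by theta, plus noise the digital model cannot capture.
    return np.tanh(x * theta) + rng.normal(0, 0.01, size=x.shape)

def digital_model_grad(x, theta):
    # Gradient of the (imperfect) differentiable digital model
    # tanh(x * theta) with respect to theta.
    return x * (1 - np.tanh(x * theta) ** 2)

# Train theta so the physical output matches a target response.
x = np.linspace(-1, 1, 64)
target = np.tanh(2.0 * x)   # achievable when theta is near 2
theta = 0.1

lr = 0.5
for _ in range(200):
    y_phys = physical_forward(x, theta)   # forward pass on "hardware"
    err = y_phys - target
    # Backward pass: use the digital model's gradient at the same
    # inputs to update the physical parameters.
    grad = np.mean(err * digital_model_grad(x, theta))
    theta -= lr * grad
```

Because the error is measured on the real (noisy) output while only the gradient comes from the simulation, training can compensate for hardware imperfections the digital model misses.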
“This article identifies a powerful solution to the problem of power-hungry machine learning,” said
The research in this article reflects the goals of the two labs represented by the co-authors. A large focus of the
In addition to Cornell, nine other universities have agreed to conduct joint research with the
View source version on businesswire.com: https://www.businesswire.com/news/home/20220131005187/en/
Vice President, Global Marketing
+1-312-888-5412
chris.shaw@ntt-research.com
Media:
For
+1-804-362-7484
srussell@wireside.com
FAQ
What is the new algorithm developed by NTT Research and Cornell University?
Physics-aware training (PAT), an algorithm that applies backpropagation to train physical neural networks (PNNs) directly on physical hardware.
What accuracy did the optical system achieve in tests?
93 percent on a vowel-classification task, and 97 percent on a harder image-classification task as a hybrid physical-digital PNN.
When was the research article published?
In Nature on January 26.
What types of systems did the algorithm test on?
Optical, electronic, and mechanical systems.