NTT Research and Cornell Scientists Introduce Deep Physical Neural Networks

NTT Research and Cornell University have introduced an algorithm that applies deep neural network training to unconventional hardware, as detailed in a Nature article published on January 26. The research demonstrates a physics-aware training algorithm on optical, electronic, and mechanical systems, which achieved image-classification accuracies of 97%, 93%, and 87%, respectively. The work aims to improve the energy efficiency of machine learning, with potential applications in robotics and smart sensors.

Article in Nature Explains the Application of Physics-Aware Training Algorithm and Shares Results of Tests on Three Physical Systems

SUNNYVALE, Calif.--(BUSINESS WIRE)-- NTT Research, Inc., a subsidiary of NTT (TYO:9432), and Cornell University today announced that scientists representing their organizations have introduced an algorithm that applies deep neural network training to controllable physical systems and have demonstrated its implementation on three types of unconventional hardware. The team released its findings in an article titled “Deep physical neural networks trained with backpropagation,” published on January 26 in Nature, one of the world’s most cited scientific journals. The paper’s co-lead authors are Logan Wright and Tatsuhiro Onodera. Both are research scientists at the NTT Research Physics and Informatics (PHI) Lab and NTT Research visiting scientists in the School of Applied and Engineering Physics at Cornell. The project’s leader, Peter McMahon, assistant professor of Applied and Engineering Physics at Cornell, was one of five other co-authors of this paper. The paper offers an approach to deep learning not constrained by existing energy requirements and other limits to scalability.

Deep learning, a subset of artificial intelligence (AI), uses neural networks that feature several layers of interconnected nodes. Deep neural networks are now pervasive in science and engineering. To train them to perform tasks such as image recognition, users rely on a training method known as backpropagation (short for “back propagation of errors”). To date, this training algorithm has been implemented on digital electronics. The computational requirements of existing deep learning models, however, have grown rapidly and are now outpacing Moore’s Law, the longstanding observation that the number of transistors on an integrated circuit doubles roughly every two years. As a result, scientists have sought to improve the energy efficiency and speed of deep neural networks. The NTT Research/Cornell team has taken the approach of applying backpropagation to unconventional hardware, namely optical, mechanical, and electrical systems.
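For readers who want the mechanics, here is a minimal sketch of backpropagation on a small, fully digital network, written in plain NumPy. The toy dataset, layer sizes, and learning rate are illustrative choices, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 two-dimensional points; the label is the XOR of the
# coordinate signs. (Illustrative only; not a task from the paper.)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# A two-layer network: 2 -> 16 -> 1.
W1, b1 = rng.normal(scale=0.5, size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)

for step in range(2000):
    # Forward pass through both layers.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

    # Backward pass: propagate the output error layer by layer
    # (the "back propagation of errors" the article refers to).
    dz2 = (p - y) / len(X)            # grad of mean cross-entropy w.r.t. output logits
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1.0 - h**2) # chain rule through tanh
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

    # Gradient-descent update.
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2
```

Each backward-pass line is just the chain rule applied to the corresponding forward-pass line; as Dr. Wright notes below, nothing about these operations is inherently digital.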

“The backpropagation algorithm is a series of mathematical operations; there's nothing intrinsically digital about it. It just so happens that it's only ever been performed on digital electronic hardware,” Dr. Wright said. “What we've done here is find a way to take this mathematical recipe and translate it into a physical recipe.”

The team calls the trained systems physical neural networks (PNNs), to emphasize that their approach trains physical processes directly, in contrast to the traditional route in which mathematical functions are trained first, and a physical process is then designed to execute them. “This shortcut of training the physics directly may allow PNNs to learn physical algorithms that can automatically exploit the power of natural computation and makes it much easier to extract computational functionality from unconventional, but potentially powerful, physical substrates like nonlinear photonics,” Dr. Wright said.

In the Nature article, the authors describe the application of their new algorithm, which they call physics-aware training (PAT), to several controllable physical processes. They introduce PAT through an experiment that encoded simple sounds (test vowels) and various parameters into the spectrum of a laser pulse and then constructed a deep PNN, creating layers by taking the outputs of optical transformations as inputs to subsequent transformations. After being trained with PAT, the optical system classified test vowels with 93 percent accuracy. To demonstrate the approach’s universality, the authors trained three physical systems to perform a more difficult image-classification task. They used the optical system again, this time demonstrating a hybrid (physical-digital) PNN, and additionally set up electronic and mechanical PNNs for testing. The final accuracy was 97 percent, 93 percent, and 87 percent for the optics-based, electronic, and mechanical PNNs, respectively. Given the simplicity of these systems, the authors consider these results auspicious. They forecast that, by using physical systems very different from conventional digital electronics, machine learning may be performed much faster and more energy-efficiently. Alternatively, these PNNs could act as functional machines, processing data outside the usual digital domain, with potential uses in robotics, smart sensors, nanoparticles, and elsewhere.
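To make the idea concrete, below is a heavily simplified sketch of a single physics-aware training step. The function names, the noisy tanh stand-in for the hardware, and the squared-error loss are all hypothetical, chosen only to mirror the structure described in the paper: the forward pass runs on the physical system, while gradients are estimated by backpropagating the measured error through an approximate, differentiable digital model.

```python
import numpy as np

rng = np.random.default_rng(0)

def physical_system(x, theta):
    """Hypothetical stand-in for the hardware forward pass: a map we can
    evaluate but not differentiate exactly. Noise mimics the mismatch
    between the real device and any model of it."""
    noise = rng.normal(scale=0.01, size=(x.shape[0], theta.shape[1]))
    return np.tanh(x @ theta) + noise

def digital_model(x, theta):
    """Approximate, differentiable simulation of the same physical
    process (differentiated by hand below)."""
    return np.tanh(x @ theta)

def pat_step(x, y_true, theta, lr=0.1):
    # Forward pass on the *physical* system.
    y_phys = physical_system(x, theta)
    err = y_phys - y_true  # gradient of 0.5 * squared error at the output
    # Backward pass through the *digital model*: the Jacobian of
    # tanh(x @ theta) with respect to theta, via the chain rule.
    y_sim = digital_model(x, theta)
    grad_theta = x.T @ (err * (1.0 - y_sim ** 2))
    # Update the controllable physical parameters.
    return theta - lr * grad_theta

# Train the hardware stand-in to imitate a fixed target transformation.
theta = rng.normal(scale=0.5, size=(4, 3))
target = rng.normal(size=(4, 3))
for _ in range(500):
    x = rng.normal(size=(32, 4))
    theta = pat_step(x, np.tanh(x @ target), theta)
```

Because the error is computed from the actual physical output rather than the simulation, training can compensate for hardware imperfections that the digital model misses.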

“This article identifies a powerful solution to the problem of power-hungry machine learning,” said PHI Lab Director Yoshihisa Yamamoto. “The research from Drs. Wright, Onodera, and colleagues combines physical systems and backpropagation in a way that is theoretically much more efficient than the status quo and applicable in a range of intriguing applications. One exciting task going forward is to explore further which physical systems are best for performing machine-learning calculations.”

The research in this article reflects the goals of the two labs represented by the co-authors. A large focus of the NTT Research PHI Lab is on the Coherent Ising Machine (CIM), an information processing platform based on photonic oscillator networks, which falls under the Lab’s larger mission of “rethinking the computer from the principles of critical phenomena in neural networks.” For its part, Professor McMahon’s Lab at Cornell, which is collaborating with the PHI Lab on CIM research, is committed to addressing the question of “how physical systems can be engineered to perform computation in new ways.” The four other co-authors of the Nature article are Martin Stein, Ph.D. candidate, and Tianyu Wang, Mong Postdoctoral Fellow, both members of the McMahon Lab; and Zoey Hu and Darren Schachter, both undergraduate student researchers at Cornell when this research was conducted in 2020 and 2021. Hu was also an NTT Research intern in the summer of 2021.

In addition to Cornell, nine other universities have agreed to conduct joint research with the NTT Research PHI Lab. These include the California Institute of Technology (Caltech), Harvard University, the Massachusetts Institute of Technology (MIT), the University of Notre Dame, Stanford University, Swinburne University of Technology, the Tokyo Institute of Technology, the University of Michigan, and the University of Tokyo. The NASA Ames Research Center in Silicon Valley and 1QBit, a private quantum computing software company, have also entered joint research agreements with the PHI Lab.

About NTT Research

NTT Research opened its offices in July 2019 as a new Silicon Valley startup to conduct basic research and advance technologies that promote positive change for humankind. Currently, three labs are housed at NTT Research facilities in Sunnyvale: the Physics and Informatics (PHI) Lab, the Cryptography and Information Security (CIS) Lab, and the Medical and Health Informatics (MEI) Lab. The organization aims to upgrade reality in three areas: 1) quantum information, neuroscience and photonics; 2) cryptographic and information security; and 3) medical and health informatics. NTT Research is part of NTT, a global technology and business solutions provider with an annual R&D budget of $3.6 billion.

NTT and the NTT logo are registered trademarks or trademarks of NIPPON TELEGRAPH AND TELEPHONE CORPORATION and/or its affiliates. All other referenced product names are trademarks of their respective owners. © 2022 NIPPON TELEGRAPH AND TELEPHONE CORPORATION

 

NTT Research:
Chris Shaw
Vice President, Global Marketing
NTT Research
+1-312-888-5412
chris.shaw@ntt-research.com

Media:
Stephen Russell
Wireside Communications®
For NTT Research
+1-804-362-7484
srussell@wireside.com

Source: NTT Research, Inc.

FAQ

What is the new algorithm developed by NTT Research and Cornell University?

The algorithm applies deep neural network training (backpropagation) to controllable physical systems, with the aim of making machine learning more energy-efficient.

What accuracy did the optical system achieve in tests?

The optical system achieved a 97% accuracy rate in image classification tasks.

When was the research article published?

The research article was published on January 26 in the journal Nature.

What types of systems did the algorithm test on?

The algorithm was tested on optical, mechanical, and electrical systems.

What are the potential applications of this new training algorithm?

Potential applications include robotics, smart sensors, and other technologies that require efficient data processing.
