Intel oneDNN AI Optimizations Enabled as Default in TensorFlow
Intel has partnered with Google to make the oneDNN library the default CPU backend optimization for TensorFlow 2.9. The collaboration delivers significant performance gains for TensorFlow developers, with up to 3x improvements in operations such as convolution and matrix multiplication. The optimizations apply to all Linux x86 packages and to CPUs with neural-network-specific features, improving both AI inference and training. Millions of users benefit without changing any code.
- Up to 3 times performance improvements for TensorFlow operations.
- Optimizations apply to all Linux x86 packages and neural-network-focused CPUs.
- Seamless experience for developers with no code changes required.
Intel and Google team up to enable the oneDNN library as the default backend CPU optimization for TensorFlow 2.9.
“Thanks to the years of close engineering collaboration between Intel and Google, optimizations in the oneDNN library are now default for x86 CPU packages in TensorFlow. This brings significant performance acceleration to the work of millions of TensorFlow developers without the need for them to change any of their code. This is a critical step to deliver faster AI inference and training and will help drive AI Everywhere.”
–Wei Li, Intel vice president and general manager of AI and Analytics
Why It’s Important: With oneDNN performance improvements available by default in the official TensorFlow 2.9 release, millions of developers who already use TensorFlow can seamlessly benefit from Intel software acceleration, gaining productivity, faster time to train and more efficient utilization of compute. Additional TensorFlow-based applications, including TensorFlow Extended, TensorFlow Hub and TensorFlow Serving, also include the oneDNN optimizations. TensorFlow has offered experimental support for oneDNN since TensorFlow 2.5.
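For developers who want to control this behavior, the optimizations are gated by the TF_ENABLE_ONEDNN_OPTS environment variable: setting it to 1 opts in on the experimental 2.5–2.8 releases, and setting it to 0 opts out on 2.9 and later, where they are on by default. A minimal sketch; note the variable must be set before TensorFlow is first imported:

```python
import os

# Opt in on TensorFlow 2.5-2.8, or set "0" to opt out on 2.9+.
# Must be set before TensorFlow is imported for the first time.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

print(tf.__version__)
```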
oneDNN is an open source, cross-platform performance library of basic deep learning building blocks intended for developers of deep learning applications and frameworks; the applications and frameworks built on it can then be used by deep learning practitioners. oneDNN is part of oneAPI, an open, standards-based, unified programming model for use across CPUs, GPUs and other AI accelerators.
While there is an emphasis placed on AI accelerators like GPUs for machine learning and, in particular, deep learning, CPUs continue to play a large role across all stages of the AI workflow. Intel’s extensive software-enabling work makes AI frameworks, such as the TensorFlow platform, and a wide range of AI applications run faster on Intel hardware that is ubiquitous across most personal devices, workstations and data centers. Intel’s rich portfolio of optimized libraries, frameworks and tools serves end-to-end AI development and deployment needs while being built on the foundation of oneAPI.
What This Helps Enable: The oneDNN-driven accelerations to TensorFlow deliver notable performance gains that benefit applications spanning natural language processing, image and object recognition, autonomous vehicles, fraud detection, and medical diagnosis and treatment, among others.
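As an illustration of how a developer might gauge the op-level gains on their own hardware, here is a rough micro-benchmark sketch (not an official Intel or Google benchmark) that times the two operation families called out above, matrix multiplication and convolution, on CPU. Running it once with TF_ENABLE_ONEDNN_OPTS=0 and once with TF_ENABLE_ONEDNN_OPTS=1 gives a before/after comparison; the shapes and iteration counts are arbitrary.

```python
import time
import tensorflow as tf

def bench(fn, warmup=3, iters=20):
    # Warm up to exclude one-time tracing/allocation costs, then average.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

with tf.device("/CPU:0"):
    a = tf.random.normal([2048, 2048])
    b = tf.random.normal([2048, 2048])
    images = tf.random.normal([32, 224, 224, 3])   # NHWC batch
    kernel = tf.random.normal([3, 3, 3, 64])       # HWIO filters

    # .numpy() forces each op to actually execute before timing stops.
    print("matmul s/iter:", bench(lambda: tf.linalg.matmul(a, b).numpy()))
    print("conv2d s/iter:", bench(
        lambda: tf.nn.conv2d(images, kernel, strides=1, padding="SAME").numpy()))
```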
Deep learning and machine learning applications have exploded in number thanks to increases in processing power, data availability and advanced algorithms. TensorFlow is one of the world’s most popular platforms for AI application development, with over 100 million downloads. Intel-optimized TensorFlow is available both as a standalone component and through the Intel® oneAPI AI Analytics Toolkit, and is already being used across a broad range of industry applications.
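For reference, one way to confirm that a given TensorFlow wheel (for example the standalone build installable with `pip install intel-tensorflow`) was compiled with oneDNN support is TensorFlow's internal IsMklEnabled helper. It lives under a private module and is not a stable public API, so treat this as a diagnostic sketch rather than a supported interface:

```python
import tensorflow as tf
# Internal, unsupported helper; present in TF 2.x builds at the time of writing.
from tensorflow.python.framework import test_util

print("TensorFlow:", tf.__version__)
print("Built with oneDNN (MKL):", test_util.IsMklEnabled())
```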
More Context: Intel AI Software Tools | Intel AI | TensorFlow | oneAPI | oneDNN
About Intel
Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel’s innovations, go to newsroom.intel.com and intel.com.
Notices and Disclaimers
Performance varies by use, configuration and other factors. Learn more at www.intel.com/PerformanceIndex. Results may vary.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates.
No product or component can be absolutely secure.
Your costs and results may vary.
Intel technologies may require enabled hardware, software or service activation.
Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.
View source version on businesswire.com: https://www.businesswire.com/news/home/20220524005460/en/
Media contact:
1-312-228-6875
lindsey.barber@ketchum.com
Source: Intel
FAQ
What is the significance of the partnership between Intel and Google regarding TensorFlow 2.9?
The collaboration makes the oneDNN library’s optimizations the default for x86 CPU packages in TensorFlow 2.9, so millions of developers gain Intel software acceleration without changing any of their code.
How does the oneDNN optimization improve TensorFlow's performance?
It delivers up to 3x performance improvements in operations such as convolution and matrix multiplication, speeding up both AI inference and training.
What hardware benefits from the oneDNN optimizations in TensorFlow?
All Linux x86 packages benefit, including CPUs with neural-network-specific features, which see the largest gains.
When was TensorFlow 2.9 released with oneDNN optimization?
TensorFlow 2.9, with oneDNN optimizations enabled by default, was released in May 2022.