STOCK TITAN

Ceva Extends its Smart Edge IP Leadership, Adding New TinyML Optimized NPUs for AIoT Devices to Enable Edge AI Everywhere

Rhea-AI Impact: Neutral
Rhea-AI Sentiment: Very Positive
Tags: AI
Rhea-AI Summary

Ceva, a leader in silicon and software IP, has expanded its Ceva-NeuPro family of Edge AI NPUs with the introduction of Ceva-NeuPro-Nano NPUs. These NPUs are designed to execute TinyML workloads efficiently in consumer, industrial, and general-purpose AIoT products, offering low power consumption, optimal performance, and cost-efficiency. The Ceva-NeuPro-Nano NPUs come in two configurations, Ceva-NPN32 and Ceva-NPN64, providing tailored solutions for various AI applications. They boast enhanced capabilities like transformer computation and sparsity acceleration, significantly reducing memory footprints by up to 80% with Ceva-NetSqueeze technology. The launch includes Ceva-NeuPro Studio, an AI SDK supporting open frameworks like TensorFlow Lite for Microcontrollers. The Ceva-NeuPro-Nano NPUs are now available for licensing.

Positive
  • Introduction of Ceva-NeuPro-Nano NPUs designed for ultra-low power and optimal performance in TinyML workloads.
  • Ceva-NeuPro-Nano NPUs offer up to 80% memory footprint reduction with Ceva-NetSqueeze technology.
  • Ceva-NeuPro Studio AI SDK supports open AI frameworks like TensorFlow Lite for Microcontrollers.
  • Availability of two configurations, Ceva-NPN32 and Ceva-NPN64, to cater to diverse AI applications.
  • Enhanced performance features such as transformer computation and sparsity acceleration.
Negative
  • None.

Insights

The introduction of the Ceva-NeuPro-Nano NPUs is a noteworthy advancement in TinyML and AIoT (Artificial Intelligence of Things). The technology is significant because it addresses power efficiency and silicon footprint, two key constraints on integrating AI into low-power devices. By offering up to 80% memory footprint reduction and superior power efficiency, Ceva is making it more feasible for semiconductor companies and OEMs to implement AI capabilities in a broader array of products, from TWS earbuds to industrial sensors.

One of the standout features is the support for advanced machine learning data types and operators, including native transformer computation, which is increasingly important in modern AI applications. This makes the Ceva-NeuPro-Nano not only future-proof but also versatile in handling various demanding tasks, from voice recognition to health monitoring.

For retail investors, this signifies that Ceva is strengthening its position in the growing AIoT market, in which ABI Research projects substantial uptake of dedicated TinyML hardware by 2030. This increased adoption could drive significant new revenue streams for Ceva, making it an interesting prospect for those looking to invest in technology stocks with a strong growth trajectory.

Ceva's move to enhance its Smart Edge IP with the Ceva-NeuPro-Nano NPUs positions the company well within the rapidly growing TinyML market. The demand for specialized AI solutions in IoT devices is on the rise, driven by the need for more efficient, low-power operations. According to ABI Research, a substantial portion of TinyML deployments will be powered by dedicated hardware rather than general-purpose MCUs by 2030. This suggests a significant market opportunity for Ceva's new NPUs.

From an investor perspective, this not only bolsters Ceva's product portfolio but also indicates a strategic alignment with market trends. The comprehensive AI SDK, Ceva-NeuPro Studio, which supports open AI frameworks like TensorFlow Lite, further simplifies development for customers, potentially accelerating market adoption. These factors collectively enhance Ceva's growth prospects, making the stock appealing for long-term investors interested in the IoT and AI sectors.

Additionally, the flexibility and scalability of the NPU architecture can attract a broader customer base, from consumer electronics to industrial applications, thereby diversifying revenue streams and reducing market risks associated with dependency on a single industry.

- Ceva-NeuPro-Nano NPUs deliver an optimal balance of ultra-low power and performance in a small area to efficiently execute TinyML workloads in consumer, industrial and general-purpose AIoT products

- Ceva-NeuPro Studio complete AI SDK for the Ceva-NeuPro NPU family supports open AI frameworks including TensorFlow Lite for Microcontrollers and microTVM to simplify the rapid development of TinyML enabled applications

- Optimized NPUs for embedded devices build on Ceva's market leadership in IoT connectivity and strong expertise in audio and vision sensing to help semiconductor companies and OEMs unlock the potential of edge AI

ROCKVILLE, Md., June 24, 2024 /PRNewswire/ -- Ceva, Inc. (NASDAQ: CEVA), the leading licensor of silicon and software IP that enables Smart Edge devices to connect, sense and infer data more reliably and efficiently, today announced that it has extended its Ceva-NeuPro family of Edge AI NPUs with the introduction of Ceva-NeuPro-Nano NPUs. These highly-efficient, self-sufficient NPUs deliver the power, performance and cost efficiencies needed for semiconductor companies and OEMs to integrate TinyML models into their SoCs for consumer, industrial, and general-purpose AIoT products.

TinyML refers to the deployment of machine learning models on low-power, resource-constrained devices to bring the power of AI to the Internet of Things (IoT). Driven by the increasing demand for efficient and specialized AI solutions in IoT devices, the market for TinyML is growing rapidly. According to research firm ABI Research, by 2030 over 40% of TinyML shipments will be powered by dedicated TinyML hardware rather than all-purpose MCUs. By addressing the specific performance challenges of TinyML, the Ceva-NeuPro-Nano NPUs aim to make AI ubiquitous, economical and practical for a wide range of use cases, spanning voice, vision, predictive maintenance, and health sensing in consumer and industrial IoT applications.

The new Ceva-NeuPro-Nano Embedded AI NPU architecture is fully programmable and efficiently executes Neural Networks, feature extraction, control code and DSP code, and supports the most advanced machine learning data types and operators, including native transformer computation, sparsity acceleration and fast quantization. This optimized, self-sufficient architecture enables Ceva-NeuPro-Nano NPUs to deliver superior power efficiency, with a smaller silicon footprint and optimal performance compared to existing processor solutions for TinyML workloads, which combine a CPU or DSP with an AI accelerator. Furthermore, Ceva-NetSqueeze AI compression technology processes compressed model weights directly, without the need for an intermediate decompression stage. This enables the Ceva-NeuPro-Nano NPUs to achieve up to 80% memory footprint reduction, solving a key bottleneck that inhibits the broad adoption of AIoT processors today.
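To illustrate why processing compressed weights directly saves memory, here is a minimal conceptual sketch. Ceva-NetSqueeze itself is proprietary; the toy codebook compression, the layer values, and both function names below are invented purely for illustration and do not represent Ceva's actual scheme.

```python
# Conceptual sketch only (NOT NetSqueeze): contrasts "decompress then compute"
# with decoding each weight on the fly as it is consumed. With on-the-fly
# decoding, peak memory stays at the compressed size, since no full
# decompressed weight buffer is ever materialized.

def dot_decompress_first(codebook, indices, activations):
    weights = [codebook[i] for i in indices]      # full decompressed buffer
    return sum(w * a for w, a in zip(weights, activations))

def dot_streaming(codebook, indices, activations):
    # Decode each weight as it is used -- no intermediate buffer.
    return sum(codebook[i] * a for i, a in zip(indices, activations))

codebook = [-0.5, 0.0, 0.25, 1.0]     # 4-entry codebook -> 2-bit indices
indices = [3, 0, 2, 1, 3, 2]          # compressed weight stream
acts = [1.0, 2.0, -1.0, 4.0, 0.5, 2.0]

# Both paths produce the same result; only peak memory differs.
assert dot_decompress_first(codebook, indices, acts) == dot_streaming(codebook, indices, acts)
```
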

"Ceva-NeuPro-Nano opens exciting opportunities for companies to integrate TinyML applications into low-power IoT SoCs and MCUs and builds on our strategy to empower smart edge devices with advanced connectivity, sensing and inference capabilities. The Ceva-NeuPro-Nano family of NPUs enables more companies to bring AI to the very edge, resulting in intelligent IoT devices with advanced feature sets that capture more value for our customers," said Chad Lucien, vice president and general manager of the Sensors and Audio Business Unit at Ceva. "By leveraging our industry-leading position in wireless IoT connectivity and strong expertise in audio and vision sensing, we are uniquely positioned to help our customers unlock the potential of TinyML to enable innovative solutions that enhance user experiences, improve efficiencies, and contribute to a smarter, more connected world."

According to Paul Schell, Industry Analyst at ABI Research, "Ceva-NeuPro-Nano is a compelling solution for on-device AI in smart edge IoT devices. It addresses the power, performance, and cost requirements to enable always-on use-cases on battery-operated devices integrating voice, vision, and sensing use cases across a wide array of end markets. From TWS earbuds, headsets, wearables, and smart speakers to industrial sensors, smart appliances, home automation devices, cameras, and more, Ceva-NeuPro-Nano enables TinyML in energy constrained AIoT devices."

The Ceva-NeuPro-Nano NPU is available in two configurations - the Ceva-NPN32 with 32 int8 MACs, and the Ceva-NPN64 with 64 int8 MACs, both of which benefit from Ceva-NetSqueeze for direct processing of compressed model weights. The Ceva-NPN32 is highly optimized for most TinyML workloads targeting voice, audio, object detection, and anomaly detection use cases. The Ceva-NPN64 provides 2x performance acceleration using weight sparsity, greater memory bandwidth, more MACs, and support for 4-bit weights to deliver enhanced performance for more complex on-device AI use cases such as object classification, face detection, speech recognition, health monitoring, and others.
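To make the Ceva-NPN64's 4-bit weight support concrete, the sketch below shows how two 4-bit weights can be packed per byte, halving weight storage versus int8. This is a generic illustration under invented assumptions (low nibble first, signed nibbles); it is not Ceva's actual weight format.

```python
# Illustrative 4-bit weight packing: two signed nibbles per byte.
# The layout here is an assumption for the example, not Ceva's format.

def pack_int4(weights):
    # Each weight must lie in [-8, 7]; low nibble stores the first weight.
    packed = bytearray()
    for i in range(0, len(weights), 2):
        lo = weights[i] & 0xF
        hi = (weights[i + 1] & 0xF) if i + 1 < len(weights) else 0
        packed.append(lo | (hi << 4))
    return bytes(packed)

def unpack_int4(packed, count):
    out = []
    for b in packed:
        for nib in (b & 0xF, b >> 4):
            out.append(nib - 16 if nib >= 8 else nib)  # sign-extend nibble
    return out[:count]

w = [3, -2, 7, -8, 0]
assert unpack_int4(pack_int4(w), len(w)) == w
assert len(pack_int4(w)) == 3    # 5 weights in 3 bytes, vs 5 bytes at int8
```
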

The NPUs are delivered with a complete AI SDK - Ceva-NeuPro Studio - which is a unified AI stack that delivers a common set of tools across the entire Ceva-NeuPro NPU family, supporting open AI frameworks including TensorFlow Lite for Microcontrollers (TFLM) and microTVM (µTVM).

The Ceva-NeuPro-Nano Key Features

Flexible and scalable NPU architecture 

  • Fully programmable to efficiently execute Neural Networks, feature extraction, control code, and DSP code
  • Scalable performance by design to meet a wide range of use cases
    • MAC configurations with up to 64 int8 MACs per cycle
  • Future proof architecture that supports the most advanced ML data types and operators
    • 4-bit to 32-bit integer support
    • Native transformer computation
  • Ultimate ML performance for all use cases using advanced mechanisms
    • Sparsity acceleration
    • Acceleration of non-linear activation types
    • Fast quantization
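As background on the "fast quantization" feature above, the following sketch shows the standard int8 affine quantization scheme common across edge-AI tooling. Ceva's hardware mechanism is not public; the scale and zero-point values below are invented for the example.

```python
# Generic int8 affine quantization: q = round(x / scale) + zero_point,
# clamped to the int8 range. Shown only to illustrate the concept; this is
# not a description of Ceva's proprietary fast-quantization hardware.

def quantize_int8(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))        # clamp to [-128, 127]

def dequantize_int8(q, scale, zero_point):
    return (q - zero_point) * scale

scale, zp = 0.02, 0                      # invented example values
q = quantize_int8(0.5, scale, zp)
# Round-trip error is bounded by one quantization step (the scale).
assert abs(dequantize_int8(q, scale, zp) - 0.5) < scale
assert quantize_int8(10.0, scale, zp) == 127   # out-of-range values saturate
```
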

Edge NPU with ultra-low memory requirements

  • Highly efficient, single-core design for NN compute, feature extraction, control code, and DSP code eliminates the need for a companion MCU for these computationally intensive tasks
  • Up to 80% memory footprint reduction via Ceva-NetSqueeze, which directly processes compressed model weights without the need for an intermediate decompression stage

Ultra-low energy achieved through innovative energy optimization techniques 

  • Automatic on-the-fly energy tuning
  • Dramatic energy and bandwidth reduction by skipping redundant computations using weight-sparsity acceleration
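The energy-saving idea behind weight-sparsity acceleration can be sketched simply: multiply-accumulates for zero-valued weights are skipped entirely, saving the corresponding compute energy and weight-fetch bandwidth. The 50%-sparse toy layer below is invented for the example; real hardware does this skipping in dedicated logic, not software.

```python
# Illustrative sketch of weight-sparsity acceleration: MACs for zero weights
# are skipped, so a 50%-sparse layer needs only half the MAC operations.

def sparse_dot(weights, activations):
    acc = 0
    macs = 0                      # count the MACs actually performed
    for w, a in zip(weights, activations):
        if w != 0:                # zero weight -> no MAC, no energy spent
            acc += w * a
            macs += 1
    return acc, macs

weights = [2, 0, -1, 0, 3, 0, 0, 1]     # 50% sparse toy layer
acts = [1, 5, 2, 7, 1, 9, 4, 2]
acc, macs = sparse_dot(weights, acts)
assert macs == 4                        # half the MACs were skipped
```
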

Complete, Simple to Use AI SDK

  • Ceva-NeuPro Studio provides a unified AI stack, with an easy click-and-run experience, for all Ceva-NeuPro NPUs, from the new Ceva-NeuPro-Nano to the powerful Ceva-NeuPro-M
  • Fast time to market by accelerating software development and deployment
  • Optimized to work seamlessly with leading, open AI inference frameworks including TFLM and µTVM
  • Model Zoo of pretrained and optimized TinyML models covering voice, vision and sensing use cases
  • Flexible to adapt to new models, applications and market needs
  • Comprehensive portfolio of optimized runtime libraries and off-the-shelf application-specific software

Availability
Ceva-NeuPro-Nano NPUs are available for licensing today. For more information, visit: https://www.ceva-ip.com/product/ceva-neupro-nano/

About Ceva, Inc.
At Ceva, we are passionate about bringing new levels of innovation to the smart edge. Our wireless communications, sensing and Edge AI technologies are at the heart of some of today's most advanced smart edge products. From Bluetooth, Wi-Fi, UWB and 5G platform IP for ubiquitous, robust communications, to scalable Edge AI NPU IPs, sensor fusion processors and embedded application software that make devices smarter, we have the broadest portfolio of IP to connect, sense and infer data more reliably and efficiently. We deliver differentiated solutions that combine outstanding performance at ultra-low power within a very small silicon footprint. Our goal is simple – to deliver the silicon and software IP to enable a smarter, safer, and more interconnected world. This philosophy is in practice today, with Ceva powering more than 17 billion of the world's most innovative smart edge products from AI-infused smartwatches, IoT devices and wearables to autonomous vehicles and 5G mobile networks.

Our headquarters are in Rockville, Maryland with a global customer base supported by operations worldwide. Our employees are among the leading experts in their areas of specialty, consistently solving the most complex design challenges, enabling our customers to bring innovative smart edge products to market.

Ceva: Powering the Smart Edge™

Visit us at www.ceva-ip.com and follow us on LinkedIn, X, YouTube, Facebook, and Instagram.


View original content: https://www.prnewswire.com/news-releases/ceva-extends-its-smart-edge-ip-leadership-adding-new-tinyml-optimized-npus-for-aiot-devices-to-enable-edge-ai-everywhere-302179990.html

SOURCE Ceva, Inc.

FAQ

What are Ceva-NeuPro-Nano NPUs?

Ceva-NeuPro-Nano NPUs are a new series of Edge AI neural processing units designed for efficient execution of TinyML workloads in consumer, industrial, and general-purpose AIoT products.

How do Ceva-NeuPro-Nano NPUs improve TinyML applications?

Ceva-NeuPro-Nano NPUs provide ultra-low power consumption, optimal performance, and cost-efficiency, making TinyML applications more practical for various AIoT use cases.

What are the configurations of Ceva-NeuPro-Nano NPUs?

Ceva-NeuPro-Nano NPUs are available in two configurations: Ceva-NPN32 and Ceva-NPN64, each optimized for different AI workloads and performance requirements.

What is the memory reduction capability of Ceva-NeuPro-Nano NPUs?

Ceva-NeuPro-Nano NPUs can reduce memory footprints by up to 80% using Ceva-NetSqueeze technology, which processes compressed model weights directly.

Which AI frameworks are supported by Ceva-NeuPro Studio?

Ceva-NeuPro Studio, the AI SDK for Ceva-NeuPro NPUs, supports open AI frameworks including TensorFlow Lite for Microcontrollers and microTVM.

When are Ceva-NeuPro-Nano NPUs available for licensing?

Ceva-NeuPro-Nano NPUs are available for licensing as of June 24, 2024.

What kinds of AI applications can Ceva-NeuPro-Nano NPUs handle?

Ceva-NeuPro-Nano NPUs can handle a variety of AI applications including voice, vision, object detection, health monitoring, and predictive maintenance in AIoT devices.
