Nano Labs Launches FPU3.0 ASIC Design Architecture with 3D DRAM Stacking for AI and Blockchain Innovation
Nano Labs (Nasdaq: NA) has unveiled FPU3.0, a new ASIC architecture featuring advanced 3D DRAM stacking technology. The architecture achieves a fivefold improvement in power efficiency compared to its predecessor, FPU2.0. The design is optimized for AI inference and blockchain applications.
The FPU architecture consists of four core modules: Smart NOC (Network-on-Chip), high-bandwidth memory controller, chip-to-chip interconnect IOs, and FPU core. The FPU3.0 specifically incorporates stacked 3D memory with 24TB/s theoretical bandwidth and an upgraded Smart-NOC on-chip network, supporting various compute cores and traffic types.
- Achieved 5x power efficiency improvement over previous generation
- Implemented advanced 3D DRAM stacking technology
- Achieved 24TB/s theoretical bandwidth capability
- Modular design enables rapid product iteration
Insights
For simpler understanding: Imagine a highway system where instead of cars traveling only on a flat surface (2D), they can now move up and down through multiple levels (3D), dramatically increasing traffic flow. The 5x power efficiency improvement means doing the same work using only 20% of the previous energy consumption.
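The arithmetic behind that claim is simple to check. The figures below are illustrative placeholders, not Nano Labs' published numbers:

```python
# Illustrative arithmetic only: a 5x power-efficiency gain means the same
# workload draws one-fifth of the energy.
baseline_kwh = 100.0   # hypothetical energy for a workload on FPU2.0
improvement = 5.0      # claimed efficiency gain of FPU3.0

new_kwh = baseline_kwh / improvement
savings_pct = (1 - new_kwh / baseline_kwh) * 100

print(new_kwh)       # 20.0 -> 20% of the previous consumption
print(savings_pct)   # 80.0 -> an 80% reduction in energy for the same work
```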
However, the announcement lacks important details about actual performance metrics, production timeline and partner foundries. The modular design approach, while enabling faster iterations, will need robust validation across different configurations to ensure reliability at scale.
This architectural advancement positions Nano Labs competitively in both the AI inference and blockchain markets, where power efficiency is a critical differentiator. The 5x improvement in power efficiency could translate to significant operational cost savings for data centers and mining operations.
The timing is strategic - as AI workloads become more compute-intensive and energy costs rise globally, the market for energy-efficient ASICs is expanding rapidly. The modular approach allows Nano Labs to adapt quickly to emerging market demands and specific customer requirements.
However, investors should note that while the architecture shows promise, success will depend on actual implementation, customer adoption and the company's ability to scale production. The small market cap of
The FPU series represents Nano Labs' proprietary set of ASIC chip design architectures, purpose-built for high-bandwidth High Throughput Computing (HTC) applications. Such ASIC chips are optimized for specific functions or applications, typically delivering lower power consumption and higher computational efficiency than general-purpose CPUs and GP-GPUs. These ASICs are increasingly utilized in AI inference, edge AI computing, data transmission processing under 5G networks, network acceleration, and more.
The Nano FPU architecture comprises four fundamental modules and IPs: the Smart NOC (Network-on-Chip), the high-bandwidth memory controller, the chip-to-chip interconnect IOs, and the FPU core. This modular design provides remarkable flexibility, enabling rapid product iteration: updating the FPU core IP while reusing or upgrading the other IPs and modules as needed is often sufficient to introduce new features.
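The modular composition described above can be sketched in code. The module names and version strings below are hypothetical illustrations of the pattern, not Nano Labs' actual IP catalogue:

```python
from dataclasses import dataclass, replace

# Hypothetical sketch: four independent module IPs composed into a chip.
# A new generation swaps the FPU core (and any other block that needs an
# upgrade) while reusing the rest unchanged.
@dataclass(frozen=True)
class Chip:
    smart_noc: str
    memory_controller: str
    interconnect_io: str
    fpu_core: str

fpu2 = Chip(smart_noc="smart-noc-v1",
            memory_controller="hbm-ctrl-v1",
            interconnect_io="c2c-io-v1",
            fpu_core="fpu-core-v2")

# Rapid iteration: only the core and NOC IPs change; the rest is reused.
fpu3 = replace(fpu2, fpu_core="fpu-core-v3", smart_noc="smart-noc-v2")

print(fpu3.memory_controller)  # hbm-ctrl-v1 (reused unchanged)
print(fpu3.fpu_core)           # fpu-core-v3 (the updated IP)
```

The point of the pattern is that validated blocks carry over between generations, which is what shortens the iteration cycle the release describes.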
Notably, the FPU3.0 architecture incorporates stacked 3D memory with a theoretical bandwidth of 24TB/s and an upgraded Smart-NOC on-chip network. This network supports a mix of large and small compute cores, full-crossbar, and feed-through traffic types on the bus. The FPU3.0 architecture holds the potential to excel in various fields, delivering superior performance, lower power consumption, and faster product iteration cycles.
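For a sense of scale, a back-of-the-envelope calculation shows what a 24TB/s theoretical memory bandwidth would mean for inference, where reading model weights is often the bottleneck. The model size is a hypothetical example, not a Nano Labs figure:

```python
# Back-of-the-envelope only: time to stream a full set of model weights
# once at the stated theoretical bandwidth. Real throughput would be lower.
bandwidth_tb_s = 24.0    # FPU3.0 theoretical bandwidth from the release
model_size_tb = 0.14     # hypothetical: ~140 GB, e.g. a 70B model at FP16

time_per_pass_ms = model_size_tb / bandwidth_tb_s * 1000
print(round(time_per_pass_ms, 2))  # ~5.83 ms per full weight read
```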
About Nano Labs Ltd
Nano Labs Ltd is a leading fabless integrated circuit ("IC") design company and product solution provider in China. Nano Labs is committed to the development of high throughput computing ("HTC") chips, high performance computing ("HPC") chips, distributed computing and storage solutions, smart network interface cards ("NICs"), vision computing chips, and distributed rendering. Nano Labs has built a comprehensive flow processing unit ("FPU") architecture which offers solutions that integrate the features of both HTC and HPC. Nano Labs' Cuckoo series is among the first near-memory HTC chips available in the market*. For more information, please visit the Company's website at: ir.nano.cn.
* According to an industry report prepared by Frost & Sullivan.
Forward-Looking Statements
This press release contains forward-looking statements within the meaning of Section 21E of the Securities Exchange Act of 1934, as amended, and as defined in the
For investor inquiries, please contact:
Nano Labs Ltd
ir@nano.cn
Ascent Investor Relations LLC
Tina Xiao
Phone: +1-646-932-7242
Email: investors@ascent-ir.com
View original content to download multimedia: https://www.prnewswire.com/news-releases/nano-labs-launches-fpu3-0-asic-design-architecture-with-3d-dram-stacking-for-ai-and-blockchain-innovation-302339249.html
SOURCE Nano Labs Ltd
FAQ
What is the power efficiency improvement of Nano Labs' (NA) FPU3.0 compared to FPU2.0?
What is the theoretical bandwidth of Nano Labs' (NA) FPU3.0 3D memory?
What are the four fundamental modules of Nano Labs' (NA) FPU architecture?