Welcome to our dedicated page for SUPER X AI TECHNOLOGY news (Ticker: SUPX), a resource for investors and traders seeking the latest updates and insights on SUPER X AI TECHNOLOGY stock.
Our selection of high-quality news articles is accompanied by an expert summary from Rhea-AI detailing the impact and sentiment surrounding the news at the time of release, providing a deeper understanding of how each news item could potentially affect SUPER X AI TECHNOLOGY's stock performance. The page also features a concise end-of-day stock performance summary, highlighting the actual market reaction to each news event. A list of tags makes it easy to classify and navigate different types of news, whether you're interested in earnings reports, stock offerings, stock splits, clinical trials, FDA approvals, dividends, or buybacks.
Designed with both novice traders and seasoned investors in mind, our page aims to simplify the complex world of stock market news. By combining real-time updates, Rhea-AI's analytical insights, and historical stock performance data, we provide a holistic view of SUPER X AI TECHNOLOGY's position in the market.
On October 16, 2025, SuperX (NASDAQ: SUPX) launched the SuperX GB300 NVL72 System, a liquid-cooled, rack-scale AI platform built around the NVIDIA GB300 Grace Blackwell Ultra Superchip.
The NVL72 rack links 72 Blackwell Ultra GPUs and 36 Grace CPUs to deliver approximately 1.8 exaFLOPS of FP4 AI compute, roughly 165TB of HBM3E and 17TB of LPDDR5X memory per rack, and 800Gb/s InfiniBand XDR networking, all within a 48U MGX rack form factor. SuperX positions the system for hyperscale deployments, sovereign AI, exascale scientific computing, and industrial digital twins, and bundles the rack with liquid cooling and 800VDC power infrastructure as part of a prefabricated modular AI factory solution.
On October 3, 2025, SuperX (NASDAQ: SUPX) launched the XN9160-B300 AI Server, an 8U flagship built around the NVIDIA Blackwell B300 (HGX B300) and targeting large-scale AI training, inference, and HPC.
Key specs: 8 Blackwell B300 GPUs (288GB HBM3E each; 2,304GB unified HBM3E), dual Intel Xeon 6 CPUs, up to 32 DDR5 DIMMs, 8×800Gb/s OSFP networking, NVLink, eight Gen5 NVMe bays, and 12×3000W 80 PLUS Titanium redundant PSUs. NVIDIA cites a 50% increase in NVFP4 compute and 50% more HBM per chip versus the prior Blackwell generation. The server is positioned for hyperscalers, scientific research, finance, bioinformatics, and global systems modeling.
SuperX (NASDAQ:SUPX) has unveiled its groundbreaking SuperX Modular AI Factory, a data center-scale solution designed to transform AI infrastructure deployment. The solution integrates compute, cooling, and power systems into prefabricated modules, reducing traditional data center deployment time from 18-24 months to under 6 months.
The system's key specifications include up to 20MW per module, support for up to 144 NVIDIA GB200 NVL72 systems, and a power usage effectiveness (PUE) as low as 1.15, resulting in roughly 23% energy savings compared to traditional systems. The solution's modular design allows for flexible 1-to-N scaling, while its high-density compute capabilities deliver 7 times higher rack power density than traditional data centers.
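For context on how a PUE of 1.15 maps to the quoted savings (assuming a conventional-facility baseline PUE of about 1.5, a figure not stated in the announcement), total facility energy per unit of IT load would fall by roughly

$$\frac{1.5 - 1.15}{1.5} \approx 23\%.$$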
SuperX AI Technology (NASDAQ: SUPX) has announced the establishment of its wholly-owned U.S. subsidiary, SuperX AI Technology USA, in Silicon Valley, California. The subsidiary, incorporated in Nevada on September 18, 2025, is expected to be operational by Q4 2025.
The new Silicon Valley office will serve as a North American hub for co-innovation, focusing on joint research, development, and integrated design of full-stack AI solutions. Key objectives include driving joint innovation through integrated solution design, expanding the global partner ecosystem, and enhancing capital market engagement with U.S. investors.
This strategic expansion aims to strengthen SuperX's position within the global AI technology ecosystem and facilitate closer collaboration with U.S. partners.
SuperX (NASDAQ:SUPX) has formed a strategic joint venture with Zhongheng Electric (SZSE:002364) to establish SuperX Digital Power, focusing on advanced High-Voltage Direct Current (HVDC) solutions for AI data centers globally (excluding China, Hong Kong, and Macau).
The partnership aims to address the critical challenge of exponentially growing energy consumption in AI computing. The HVDC technology offers significant advantages, including boosting power-conversion efficiency from 85-90% to over 96%, reducing facility space requirements by up to 50%, and lowering total cost of ownership by over 20%. This vertical integration extends SuperX's capabilities from AI compute infrastructure to core power technology, creating a comprehensive "Compute + Cooling + Power" solution.
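To put the efficiency figures in perspective (treating the quoted percentages as end-to-end power-conversion efficiency, an interpretation the announcement does not spell out), conversion losses per unit of delivered power would drop from roughly 10-15% to about 4%:

$$1 - 0.96 = 4\% \qquad \text{versus} \qquad 1 - (0.85\ \text{to}\ 0.90) = 10\%\ \text{to}\ 15\%,$$

a reduction of roughly 60% to 73% in power lost before it reaches the IT load.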
SuperX (NASDAQ: SUPX) has launched its groundbreaking All-in-One Multi-Model Server (MMS) series, featuring pre-configured integration with OpenAI's latest GPT-OSS-120B and GPT-OSS-20B models. The enterprise-grade AI infrastructure offers out-of-the-box functionality and multi-model fusion capabilities.
The MMS series includes various editions ranging from $50,000 to $4,000,000, catering to different enterprise scales. Key features include advanced data security through NVIDIA Confidential Computing, rapid deployment capabilities, and support for over 60 pre-configured scenario-based agents.
The solution enables enterprises to implement AI applications immediately without lengthy integration processes, supporting multiple model types including inference, general-purpose, speech synthesis/recognition, and text-to-image models.
SuperX (NASDAQ:SUPX) has unveiled its groundbreaking XN9160-B200 AI Server, featuring NVIDIA's latest Blackwell B200 GPUs. The server delivers exceptional AI computing capability with eight B200 GPUs, 1,440GB of HBM3E memory, and 6th Gen Intel Xeon processors in a 10U chassis.
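The headline memory figure is consistent with roughly 180GB of HBM3E per GPU, a derived per-GPU capacity that the summary itself does not state:

$$8 \times 180\,\text{GB} = 1{,}440\,\text{GB}.$$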
The system achieves remarkable performance metrics, including up to 15x faster inference compared to the H100 platform, processing 58 tokens per second per card on the GPT-MoE 1.8T model. The server also features advanced reliability measures, including redundant power supplies and comprehensive quality control processes, and is backed by a three-year warranty.
SuperX (Nasdaq: SUPX), a leading AI infrastructure solutions provider, has announced plans to establish its first regional supply center in Japan through its Singapore subsidiary. The facility, set to begin operations by late 2025, will serve as a key hub for integrating and delivering the company's AI server and HVDC products.
The center will have an annual capacity of 10,000 high-performance AI servers and will focus on final assembly, system integration, and quality control of SuperX's core products, including AI servers, HVDC power systems, and liquid cooling solutions. The strategic expansion aims to enhance local service capabilities, reduce delivery times, and strengthen the company's presence in Japan's growing AI infrastructure market.
Super X AI Technology Limited (NASDAQ: SUPX) has appointed Kenny Sng as its new Chief Technology Officer, effective July 1, 2025. Sng brings over 20 years of experience, previously serving as CTO of Intel Singapore & Malaysia, with expertise in enterprise technology and data center engineering.
The company, formerly known as Junee Limited, has rebranded to reflect its focus on AI infrastructure solutions. SuperX's comprehensive offerings include AI servers, liquid cooling systems, HVDC systems, and end-to-end consulting services for AI data centers. The company aims to address the growing demand for AI computing power with fast-deployment, low-cost, and high-performance infrastructure solutions for institutional clients.