Cerebras Announces Six New AI Datacenters Across North America and Europe to Deliver Industry’s Largest Dedicated AI Inference Cloud
New datacenters catapult Cerebras to hyperscale capacity, offering over 40 million tokens/second to enterprises, governments, and developers worldwide
These new datacenters mark a critical milestone in Cerebras’ 2025 AI inference scaling plan, expanding aggregate capacity by 20x to serve surging customer demand.

The Cerebras AI inference datacenters:
- Santa Clara, CA (online)
- Stockton, CA (online)
- Dallas, TX (online)
- Minneapolis, MN (Q2 2025)
- Oklahoma City, OK (Q3 2025)
- Montreal, Canada (Q3 2025)
- Midwest / Eastern US (Q4 2025)
- Europe (Q4 2025)
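For a sense of scale, the headline figures imply the following back-of-envelope arithmetic; the per-response token count in this sketch is a hypothetical assumption, not a number from this release:

```python
# Back-of-envelope math from the figures above: a 20x expansion that yields
# more than 40 million tokens/second implies a baseline of roughly 2 million.
target_aggregate_tps = 40_000_000   # tokens/second across the eight listed sites
expansion_factor = 20               # "expanding aggregate capacity by 20x"
implied_baseline_tps = target_aggregate_tps / expansion_factor
print(f"Implied pre-expansion capacity: ~{implied_baseline_tps:,.0f} tokens/s")

# Hypothetical workload (an assumption, not from this release): at ~500
# output tokens per chat response, the expanded cloud could sustain roughly:
tokens_per_response = 500
print(f"~{target_aggregate_tps / tokens_per_response:,.0f} responses/second")
```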
Since announcing its high-speed inference offering in August 2024, Cerebras has experienced surging demand from the world’s leading AI companies and enterprises. Mistral, France’s leading AI startup, uses Cerebras to power its flagship Le Chat AI assistant. Perplexity, the world’s leading AI search engine, uses Cerebras to provide instant search results. This month, Hugging Face, the GitHub of AI, and AlphaSense, the leading market intelligence platform, both announced they are adopting Cerebras for its lightning-fast inference.
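From a developer’s perspective, trying the service looks roughly like the sketch below. Cerebras advertises an OpenAI-compatible API, so the standard OpenAI client works with a swapped base URL; the endpoint and model name here are assumptions to confirm against the current Cerebras documentation:

```python
# Minimal sketch of calling Cerebras Inference via its OpenAI-compatible API.
# The base URL and model name are assumptions -- check the Cerebras docs.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",    # assumed endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],   # set in your environment
)

response = client.chat.completions.create(
    model="llama-3.3-70b",                    # illustrative model name
    messages=[{"role": "user", "content": "What is wafer-scale inference?"}],
)
print(response.choices[0].message.content)
```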
"Cerebras is turbocharging the future of
Scale Datacenter in
"We are excited to partner with Cerebras to bring world-class AI infrastructure to Oklahoma City,” said Trevor Francis, CEO of Scale Datacenter. “Our collaboration with Cerebras underscores our commitment to empowering innovation in AI, and we look forward to supporting the next generation of AI-driven applications."
The Enovum Montreal facility will be fully operational in July 2025. Enovum, a division of Bit Digital, Inc. (Nasdaq: BTBT), operates the facility.
“Enovum is thrilled to partner with Cerebras, a company at the forefront of AI innovation, and to further expand and propel Canada’s world-class tech ecosystem,” said Billy Krassakopoulos, CEO of Enovum Data Centers. “This agreement enables our companies to deliver sophisticated, high-performance colocation solutions tailored for next-generation AI workloads.”
Reasoning models such as DeepSeek R1 and OpenAI o3 are the next wave in AI, but they can take minutes to generate answers. Cerebras solves this challenge by accelerating AI inference by 10x, enabling near-instant results from the latest reasoning models. With hyperscale capacity coming online starting in Q3 2025, Cerebras is poised to be the market leader in real-time AI inference.
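To make the latency claim concrete, here is a toy calculation; every input is an illustrative assumption, and only the 10x factor comes from the text above:

```python
# Toy latency arithmetic for a long chain-of-thought answer. All inputs are
# illustrative assumptions; only the 10x speedup is taken from the text above.
reasoning_tokens = 30_000   # assumed tokens for a hard multi-step answer
baseline_tps = 50           # assumed tokens/second on a conventional deployment
speedup = 10                # the 10x acceleration cited above

baseline_seconds = reasoning_tokens / baseline_tps   # 600 s, i.e. 10 minutes
accelerated_seconds = baseline_seconds / speedup     # 60 s
print(f"Baseline: {baseline_seconds/60:.0f} min -> "
      f"Accelerated: {accelerated_seconds:.0f} s")
```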
About Cerebras Systems
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world’s largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions for the development of pathbreaking proprietary models, and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on premises. For further information, visit cerebras.ai or follow us on LinkedIn or X.
Forward-Looking Statements
This press release contains forward-looking statements, including but not limited to: statements regarding Cerebras’ new AI datacenters that will become operational in 2025; the locations of the datacenters; the timing of when those datacenters will come online; the capacity of those datacenters measured by tokens per second; Cerebras becoming the world’s #1 provider of high-speed inference and the largest domestic high-speed inference cloud; Cerebras playing a key role in advancing our nation’s AI infrastructure and leadership; the enormous demand for Cerebras’ industry-leading AI inference capabilities; and ensuring worldwide access to sovereign, high-performance AI infrastructure that will fuel critical research and business transformation. You can identify forward-looking statements by the fact that they do not relate strictly to historical or current facts. These statements may include words such as “anticipate”, “estimate”, “expect”, “project”, “plan”, “intend”, “target”, “aim”, “believe”, “may”, “will”, “should”, “becoming”, “look forward”, “could”, “can”, “can have”, “likely” and other words and terms of similar meaning in connection with any discussion of these datacenters. Forward-looking statements give our current expectations and projections relating to the information in this press release. Neither we nor any other person assumes responsibility for the accuracy and completeness of any of these forward-looking statements. The forward-looking statements included in this press release relate only to events and information as of the date hereof. Cerebras undertakes no obligation to update or revise any forward-looking statement as a result of new information, future events or otherwise, except as otherwise required by law. All forward-looking statements are subject to risks and uncertainties that may cause actual results to differ materially from those that we expected.
View source version on businesswire.com: https://www.businesswire.com/news/home/20250311115186/en/
Source: Cerebras Systems