
Fastly AI Accelerator Helps Developers Unleash the Power of Generative AI

Rhea-AI Impact: Neutral
Rhea-AI Sentiment: Positive
Tags: AI

Fastly (NYSE: FSLY) announced the general availability of Fastly AI Accelerator, a semantic caching solution designed to enhance performance and reduce costs for Large Language Model (LLM) generative AI applications. The solution delivers an average of 9x faster response times and now supports both OpenAI ChatGPT and Microsoft Azure AI Foundry.

To implement it, developers simply update their application to use a new API endpoint, typically changing just one line of code. The solution uses the Fastly Edge Cloud Platform to cache responses to repeated queries, so the application does not have to call the AI provider for every request.
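
As a purely illustrative sketch of what such a one-line change might look like for an application that uses the OpenAI Python SDK (the endpoint URL below is a placeholder, not a documented Fastly address; see fastly.com/ai for actual setup details):

    # Hypothetical sketch only: the base_url below is a placeholder, not a
    # documented Fastly endpoint; consult fastly.com/ai for real configuration.
    import os
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["OPENAI_API_KEY"],
        base_url="https://ai-accelerator.example.com/v1",  # the single-line change
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What is semantic caching?"}],
    )
    print(response.choices[0].message.content)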

Positive
  • 9x faster response times for AI applications
  • Easy implementation requiring only one line of code change
  • Expanded support for both OpenAI ChatGPT and Microsoft Azure AI Foundry
  • Cost reduction through cached responses
Negative
  • None.

Insights

The release of Fastly AI Accelerator represents a significant technical advancement in the edge computing space. The reported 9x improvement in response times through semantic caching directly addresses one of the biggest challenges in LLM applications: latency. Integration with major platforms such as OpenAI ChatGPT and Microsoft Azure AI Foundry positions the product strategically in the growing AI infrastructure market. The simplicity of implementation, requiring just a single line of code change, lowers adoption barriers, while the semantic caching approach could significantly reduce API costs for high-volume applications. This solution could drive increased platform adoption and revenue through both new customer acquisition and expanded usage from existing customers.

This product launch strengthens Fastly's competitive position in the rapidly growing AI infrastructure market. The timing is strategic, as organizations are actively seeking ways to manage the high costs and performance challenges of LLM applications. By addressing both performance and cost efficiency, Fastly is targeting two critical pain points that could drive adoption. Support for platforms from industry leaders OpenAI and Microsoft adds credibility and expands market reach. Immediate availability to existing customers through their Fastly accounts creates a clear path to monetization. This could positively impact Fastly's revenue streams, particularly as AI application deployment continues to accelerate across industries.

Fastly expands support to include OpenAI ChatGPT and Microsoft Azure AI Foundry

SAN FRANCISCO--(BUSINESS WIRE)-- Fastly Inc. (NYSE: FSLY), a global leader in edge cloud platforms, today announced the general availability of Fastly AI Accelerator. A semantic caching solution created to address the critical performance and cost challenges faced by developers with Large Language Model (LLM) generative AI applications, Fastly AI Accelerator delivers an average of 9x faster response times.1 Initially released in beta with support for OpenAI ChatGPT, Fastly AI Accelerator is also now available with Microsoft Azure AI Foundry.

"AI is helping developers create so many new experiences, but too often at the expense of performance for end-users. Too often, today’s AI platforms make users wait,” said Kip Compton, Chief Product Officer at Fastly. “With Fastly AI Accelerator we’re already averaging 9x faster response times and we’re just getting started.1 We want everyone to join us in the quest to make AI faster and more efficient.”

Fastly AI Accelerator can be a game-changer for developers looking to optimize their LLM generative AI applications. To access its intelligent, semantic caching abilities, developers simply update their application to a new API endpoint, which typically only requires changing a single line of code. With this easy implementation, instead of going back to the AI provider for each individual call, Fastly AI Accelerator leverages the Fastly Edge Cloud Platform to provide a cached response for repeated queries. This approach helps to enhance performance, lower costs, and ultimately deliver a better experience for developers.
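
Fastly has not published implementation details here, but the general idea behind semantic caching can be sketched as follows: store responses keyed by an embedding of the prompt and serve a stored answer whenever a new prompt is similar enough. The toy embedding and similarity threshold below are illustrative stand-ins, not Fastly's actual logic:

    # Illustrative sketch of semantic caching in general; not Fastly's implementation.
    # A real deployment would use a proper embedding model rather than the toy
    # bag-of-words vectorizer below.
    import math
    import re
    from collections import Counter

    def embed(text):
        # Toy embedding: lowercase bag-of-words counts.
        return Counter(re.findall(r"[a-z0-9]+", text.lower()))

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    class SemanticCache:
        def __init__(self, threshold=0.8):
            self.threshold = threshold
            self.entries = []  # list of (prompt embedding, cached response)

        def get(self, prompt):
            vec = embed(prompt)
            for cached_vec, response in self.entries:
                if cosine(vec, cached_vec) >= self.threshold:
                    return response  # cache hit: no call to the AI provider
            return None  # cache miss: caller forwards the prompt to the provider

        def put(self, prompt, response):
            self.entries.append((embed(prompt), response))

    cache = SemanticCache()
    cache.put("what is semantic caching", "Semantic caching reuses answers to similar prompts.")
    print(cache.get("What is semantic caching?"))  # similar wording, served from cache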

"Fastly AI Accelerator is a significant step towards addressing the performance bottleneck accompanying the generative AI boom,” said Dave McCarthy, Research Vice President, Cloud and Edge Services at IDC. “This move solidifies Fastly's position as a key player in the fast-evolving edge cloud landscape. The unique approach of using semantic caching to reduce API calls and costs unlocks the true potential of LLM generative AI apps without compromising on speed or efficiency, allowing Fastly to enhance the user experience and empower developers."

Existing Fastly customers can add AI Accelerator directly from their Fastly accounts. To learn more and get started, visit fastly.com/ai.

About Fastly, Inc.

Fastly’s powerful and programmable edge cloud platform helps the world’s top brands deliver online experiences that are fast, safe, and engaging through edge compute, delivery, security, and observability offerings that improve site performance, enhance security, and empower innovation at global scale. Compared to other providers, Fastly’s powerful, high-performance, and modern platform architecture empowers developers to deliver secure websites and apps with rapid time-to-market and demonstrated, industry-leading cost savings. Organizations around the world trust Fastly to help them upgrade the internet experience, including Reddit, Neiman Marcus, Universal Music Group, and SeatGeek. Learn more about Fastly at https://www.fastly.com, and follow us @fastly.

Forward-Looking Statements

This press release contains “forward-looking” statements that are based on our beliefs and assumptions and on information currently available to us on the date of this press release. Forward-looking statements may involve known and unknown risks, uncertainties, and other factors that may cause our actual results, performance, or achievements to be materially different from those expressed or implied by the forward-looking statements. These statements include, but are not limited to, those regarding the ability of Fastly AI Accelerator to help developers enhance performance, deliver faster response times, reduce costs, and improve user experience. Except as required by law, we assume no obligation to update these forward-looking statements publicly or to update the reasons actual results could differ materially from those anticipated in the forward-looking statements, even if new information becomes available in the future. Important factors that could cause our actual results to differ materially are detailed from time to time in the reports Fastly files with the Securities and Exchange Commission (“SEC”), including without limitation Fastly’s Annual Report on Form 10-K for the year ended December 31, 2023 and our Quarterly Reports on Form 10-Q. Copies of reports filed with the SEC are posted on Fastly’s website and are available from Fastly without charge.

1 Responses from Fastly AI Accelerator semantic cache were served 9 times faster on average compared to those served without the AI Accelerator, calculated using all beta customer and demo traffic between October 15, 2024 and November 27, 2024.

Media Contact

Spring Harris

press@fastly.com

Investor Contact

Vernon Essi, Jr.

ir@fastly.com

Source: Fastly, Inc.

FAQ

What performance improvement does Fastly AI Accelerator (FSLY) deliver?

Fastly AI Accelerator delivers an average of 9x faster response times for LLM generative AI applications.

Which AI platforms does Fastly (FSLY) AI Accelerator support?

Fastly AI Accelerator supports OpenAI ChatGPT and Microsoft Azure AI Foundry.

How complex is the implementation of Fastly (FSLY) AI Accelerator?

Implementation is simple: developers only need to update their application to use a new API endpoint, typically by changing a single line of code.

How does Fastly (FSLY) AI Accelerator reduce costs for developers?

It reduces costs by using semantic caching through the Fastly Edge Cloud Platform to provide cached responses for repeated queries, eliminating the need for multiple AI provider calls.
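
As a back-of-the-envelope illustration of how cached responses can translate into savings (all figures below are assumptions for illustration, not numbers published by Fastly):

    # Hypothetical savings estimate; every number here is an assumption.
    requests_per_day = 1_000_000
    cache_hit_rate = 0.30        # assumed fraction of requests answered from the cache
    cost_per_llm_call = 0.002    # assumed USD cost of one call to the AI provider

    baseline_cost = requests_per_day * cost_per_llm_call
    with_cache_cost = requests_per_day * (1 - cache_hit_rate) * cost_per_llm_call

    print(f"Baseline:     ${baseline_cost:,.2f} per day")
    print(f"With caching: ${with_cache_cost:,.2f} per day")
    print(f"Savings:      ${baseline_cost - with_cache_cost:,.2f} per day")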
