Elasticsearch Open Inference API Now Supports Mistral AI Embeddings
Mistral AI embeddings on Elasticsearch benefit from native chunking via a single API call
“We are invested in delivering open-first, enterprise-grade GenAI tools to help developers build next generation search applications,” said Shay Banon, founder and chief technology officer at Elastic. “Through our collaboration with the Mistral AI team, we’re simplifying the process of storing and chunking embeddings in Elasticsearch to a single API call.”
“Mistral AI has always been committed to open-weights and making AI accessible to all,” said Arthur Mensch, co-founder and CEO of Mistral AI. “Working with Elastic allows us to bring Mistral’s tools to more developers through the Elastic open inference API, and gives us the opportunity to work with a company that shares our value of accessibility. We’re excited to see what developers will create.”
Support for Mistral AI’s embedding model is available today; read the Elastic blog to get started.
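For readers who want a concrete picture of the “single API call” workflow described above, the sketch below shows one way it might look against a recent Elasticsearch cluster: an inference endpoint backed by Mistral’s embedding model is registered through the open inference API, and a semantic_text field then handles chunking and embedding automatically at ingest. The cluster URL, credentials, endpoint ID, index, and field names are illustrative placeholders rather than part of this announcement; the Elastic blog and documentation remain the authoritative reference.

```python
# Sketch (assumed names and placeholder credentials throughout): register a
# Mistral-backed embedding endpoint via the Elasticsearch open inference API,
# then map a semantic_text field that uses it for automatic chunking.
import requests

ES_URL = "https://localhost:9200"  # hypothetical cluster URL
HEADERS = {
    "Authorization": "ApiKey <ELASTIC_API_KEY>",  # placeholder credentials
    "Content-Type": "application/json",
}

# 1) Create an inference endpoint backed by Mistral's embedding model.
#    The endpoint id "mistral-embeddings" is arbitrary.
resp = requests.put(
    f"{ES_URL}/_inference/text_embedding/mistral-embeddings",
    headers=HEADERS,
    json={
        "service": "mistral",
        "service_settings": {
            "api_key": "<MISTRAL_API_KEY>",  # placeholder
            "model": "mistral-embed",
        },
    },
)
resp.raise_for_status()

# 2) Map a semantic_text field pointing at that endpoint; text indexed into
#    this field is chunked and embedded at ingest time without extra calls.
resp = requests.put(
    f"{ES_URL}/my-index",  # hypothetical index name
    headers=HEADERS,
    json={
        "mappings": {
            "properties": {
                "content": {
                    "type": "semantic_text",
                    "inference_id": "mistral-embeddings",
                }
            }
        }
    },
)
resp.raise_for_status()
```

With a mapping like this in place, documents indexed into the `content` field are embedded with Mistral’s model as part of normal indexing, which is the workflow the announcement summarizes as storing and chunking embeddings in a single API call.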
About Elastic
Elastic (NYSE: ESTC), the Search AI Company, enables everyone to find the answers they need in real time, using all their data, at scale. Elastic’s solutions for search, observability and security are built on the Elastic Search AI Platform, the development platform used by thousands of companies, including more than 50% of the Fortune 500.
Elastic and associated marks are trademarks or registered trademarks of Elastic N.V. and its subsidiaries. All other company and product names may be trademarks of their respective owners.
View source version on businesswire.com: https://www.businesswire.com/news/home/20240801450338/en/
Elastic PR
PR-team@elastic.co
Source: Elastic N.V.