<div>
<h1>Mistral launches new code embedding model that outperforms OpenAI and Cohere in real-world retrieval tasks</h1>

<p>With demand for enterprise retrieval-augmented generation (RAG) on the rise, the opportunity is ripe for model providers to offer their own take on embedding models.</p>

<p>French AI company Mistral has thrown its hat into the ring with Codestral Embed, its first embedding model, which it says outperforms existing embedding models on benchmarks such as SWE-Bench.</p>

<p>The model specializes in code and "performs especially well for retrieval use cases on real-world code data." It is available to developers for $0.15 per million tokens.</p>

<p>The company said Codestral Embed "significantly outperforms leading code embedders" such as Voyage Code 3, Cohere Embed v4.0 and OpenAI's Text Embedding 3 Large.</p>

<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="500" data-dnt="true"><p lang="en" dir="ltr">Super excited to announce <a href="https://twitter.com/MistralAI?ref_src=twsrc%5Etfw">@MistralAI</a> Codestral Embed, our first embedding model specialized for code.</p><p>It performs especially well for retrieval use cases on real-world code data. <a href="https://t.co/ET321cRNli">pic.twitter.com/ET321cRNli</a></p>— Sophia Yang, Ph.D. (@sophiamyang) <a href="https://twitter.com/sophiamyang/status/1927737869256331369?ref_src=twsrc%5Etfw">May 28, 2025</a></blockquote>
</div></figure>

<p>Codestral Embed, part of Mistral's Codestral family of coding models, produces embeddings that transform code and data into numerical representations for RAG.</p>

<p>"Codestral Embed can output embeddings with different dimensions and precisions, and the figure below illustrates the trade-offs between retrieval quality and storage costs," Mistral said in a blog post. "Codestral Embed with dimension 256 and int8 precision still performs better than any model from our competitors. The dimensions of our embeddings are ordered by relevance. For any integer target dimension n, you can choose to keep the first n dimensions for a smooth trade-off between quality and cost."</p>
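<p>The relevance-ordered dimensions and int8 precision Mistral describes can be illustrated with a toy sketch. The vectors below are random stand-ins, not real Codestral Embed outputs, and the simple max-abs int8 scaling is just one possible quantization scheme:</p>

```python
import numpy as np

def truncate_and_quantize(emb: np.ndarray, n_dims: int) -> np.ndarray:
    """Keep the first n_dims components (assuming dimensions are ordered by
    relevance), then quantize to int8 for 4x less storage than float32."""
    truncated = emb[:n_dims]
    scale = np.max(np.abs(truncated)) / 127.0  # map the largest magnitude to 127
    return np.round(truncated / scale).astype(np.int8)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
doc = rng.standard_normal(1536).astype(np.float32)                 # stand-in full-width embedding
query = doc + 0.1 * rng.standard_normal(1536).astype(np.float32)   # a semantically nearby vector

sim_full = cosine(doc, query)                         # float32, all 1536 dims
sim_small = cosine(truncate_and_quantize(doc, 256),
                   truncate_and_quantize(query, 256))  # int8, first 256 dims

print(f"similarity at full precision: {sim_full:.3f}")
print(f"similarity at 256 dims, int8: {sim_small:.3f}")
print(f"storage per vector: {doc.nbytes} -> {truncate_and_quantize(doc, 256).nbytes} bytes")
```

<p>Dropping from 1,536 float32 dimensions to 256 int8 dimensions cuts per-vector storage by 24x while the similarity ranking stays largely intact, which is the quality-versus-cost trade-off the quoted blog post describes.</p>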
<p>Mistral tested the model on several benchmarks, including SWE-Bench and Text2Code from GitHub. In both cases, the company said Codestral Embed outperformed the leading embedding models.</p>

<figure class="wp-block-image"><img decoding="async" src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfLEE9sQNKk5nUJA-yZPI0xRD-dwxHa3LvM08nU9hczBdAW7VbJP_3B0lC25KDm0Oo9qXu1Vmv5dDdeZ9lI2DipVaFfNH_Lbz6txHl2qbR1fL1WVTMvjQ08C6paZKCM-K1LBJ4ODw?key=0HS8Htz8s3avlxP5gbD6Gw" alt=""/></figure>

<p><em>SWE-Bench</em></p>

<figure class="wp-block-image"><img decoding="async" src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfmALjwHKGRm0dLFpd62DGAXPogjMT8evsh41eYEkCoa8crL34me6xG6NI3deUpBcWdgSEuXALSONfWixIZ98RS8-OhT4o9wr4MeutREuK2Y4vImB7bqn_zpUqSNXY0dZRCDc_9?key=0HS8Htz8s3avlxP5gbD6Gw" alt=""/></figure>

<p><em>Text2Code</em></p>

<h2 class="wp-block-heading" id="h-use-cases">Use cases</h2>

<p>Mistral said Codestral Embed is optimized for "high-performance code retrieval" and semantic understanding. According to the company, the model works best for at least four kinds of use cases: RAG, semantic code search, similarity search and code analytics.</p>

<p>Embedding models generally target RAG use cases, as they can speed up information retrieval for tasks or agentic processes, so it is not surprising that Codestral Embed focuses there.</p>

<p>The model can also perform semantic code search, allowing developers to find code snippets using natural-language queries. This use case suits developer-tool platforms, documentation systems and coding copilots. Codestral Embed can likewise help developers identify duplicated code segments or similar code strings, which can be helpful for enterprises with policies on code reuse.</p>
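<p>The retrieval pattern behind semantic code search and similarity search can be sketched in a few lines. The bag-of-tokens <code>toy_embed</code> below is a hypothetical stand-in for a real embedding call; the snippets and the 0.5 similarity threshold are illustrative, not from Mistral:</p>

```python
import math
import re
from collections import Counter

def toy_embed(text: str) -> Counter:
    """Stand-in embedder: a bag-of-tokens vector. A production system would
    call an embedding model (e.g. Codestral Embed) here instead."""
    return Counter(re.findall(r"[a-zA-Z_]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Index: embed each snippet once and store the vector with its source.
snippets = [
    "def read_csv(fname): return csv.reader(open(fname))",
    "def sort_users(users): return sorted(users, key=lambda u: u.name)",
    "def load_json_file(p): return json.load(open(p))",
    "def load_json_file(path): return json.load(open(path))",
]
index = [(s, toy_embed(s)) for s in snippets]

# Semantic code search: embed a natural-language query, rank by similarity.
query = toy_embed("load a json file from disk")
best = max(index, key=lambda item: cosine(query, item[1]))
print("best match:", best[0])

# Similarity search: flag near-duplicate pairs above a threshold.
dups = [(a, b) for i, (a, va) in enumerate(index)
        for b, vb in index[i + 1:] if cosine(va, vb) > 0.5]
print("near-duplicate pairs:", len(dups))
```

<p>Swapping the toy embedder for a real model is what turns this from token overlap into genuinely semantic matching: a query like "load a json file" can then retrieve code that shares no literal tokens with it.</p>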
<p>The model also supports semantic clustering: grouping code based on its functionality or structure. This use case can help teams analyze repositories, categorize code and find patterns in its architecture.</p>

<h2 class="wp-block-heading" id="h-competition-is-increasing-in-the-embedding-space">Competition is increasing in the embedding space</h2>

<p>Mistral has been on a roll releasing new models and agentic tools. It recently released Mistral Medium 3, a mid-sized version of its flagship large language model (LLM), which currently powers its enterprise-focused platform Le Chat Enterprise.</p>

<p>It also announced the Agents API, which gives developers tools for creating agents that perform real-world tasks and for orchestrating multiple agents.</p>

<p>Mistral's moves to offer developers more model options have not gone unnoticed. Some on X note that Mistral's release of Codestral Embed is "coming on the heels of increased competition."</p>

<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="500" data-dnt="true"><p lang="en" dir="ltr">Mistral AI Just Dropped a Game-Changer: Codestral Embed Crushes OpenAI and Google in Code Search Race</p><p>French AI startup Mistral AI has quietly unleashed what could be the most significant breakthrough in code intelligence this year. 
Their brand-new Codestral Embed model isn't…</p>— Rahul Khorwal (@rkrahulkhorwal) <a href="https://twitter.com/rkrahulkhorwal/status/1927753359949287451?ref_src=twsrc%5Etfw">May 28, 2025</a></blockquote>
</div></figure>

<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="500" data-dnt="true"><p lang="en" dir="ltr">Mistral on a delivery mission</p>— Joel Basson (@joelbasson) <a href="https://twitter.com/joelbasson/status/1927743105052049821?ref_src=twsrc%5Etfw">May 28, 2025</a></blockquote>
</div></figure>

<p>However, Mistral still has to prove that Codestral Embed performs well beyond benchmark testing. While it competes with closed models such as those from OpenAI and Cohere, Codestral Embed also faces open-source alternatives from Qodo, including Qodo-Embed-1-1.5B.</p>

<p>VentureBeat reached out to Mistral about Codestral Embed's licensing options.</p>

<p><a href="https://venturebeat.com/ai/mistral-launches-new-code-embedding-model-that-outperforms-openai-and-cohere-in-real-world-retrieval-tasks/">Source link</a></p>
</div>