The new model’s smaller footprint and higher precision may attract enterprise customers even as licensing remains a concern.

Mistral AI has launched Mistral Large 2 (ML2), a 123-billion-parameter large language model (LLM), strengthening its position as a significant competitor to OpenAI, Anthropic, and Meta. In a statement, the company said that ML2 has a 128K context window and supports dozens of languages, including French, German, Spanish, Arabic, Chinese, Japanese, and Korean. It also supports more than 80 coding languages, including Python, Java, C, C++, JavaScript, and Bash.

The announcement follows Meta’s unveiling of the Llama 3.1 family of LLMs, which includes its most advanced model, 405B. Meta says its models also feature a 128K context length and support eight languages. Last week, OpenAI released GPT-4o mini, its most affordable small AI model.

Mistral AI said benchmarking shows ML2 performs on par with leading models such as GPT-4o, Claude 3 Opus, and Llama 3 405B in areas like coding and reasoning. On the popular MMLU benchmark, ML2 scored 84%, compared with 88.6% for Llama 3.1 405B, 88.7% for GPT-4o, and 82% for GPT-4o mini.

Mistral AI models are available on Vertex AI, Azure AI Studio, Amazon Bedrock, and IBM watsonx.ai, the company said.

Key attractions for enterprises

Analysts point out that the AI battle has shifted to conversational and multimodal models, each striving to excel in complex mathematics, advanced reasoning, and efficient code generation. According to Neil Shah, partner and co-founder at Counterpoint Research, key AI players like Mistral AI are focusing on minimizing hallucinations, enhancing reasoning capabilities, and optimizing the performance-to-size ratio of their models.

“This is where Mistral Large 2 excels in terms of packing more performance per size, requiring just 246GB of memory at full 16-bit precision,” Shah said.
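The 246GB figure Shah cites is consistent with storing the model’s weights alone at 16-bit precision, i.e., 2 bytes per parameter. A quick back-of-envelope check (derived from the parameter count quoted in this article, not from Mistral’s own documentation):

```python
# Back-of-envelope memory estimate for model weights at 16-bit precision.
# 123 billion parameters x 2 bytes per parameter (fp16/bf16).
params = 123e9
bytes_per_param = 2
total_gb = params * bytes_per_param / 1e9  # decimal gigabytes
print(f"{total_gb:.0f} GB")  # prints "246 GB"
```

Actual serving memory would be somewhat higher once activations and the key-value cache for the 128K context window are included.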
“Mistral Large 2’s smaller footprint compared to the competition, while maintaining higher precision, is advantageous for enterprises. It allows them to produce more accurate and concise contextual responses faster than other larger models, which require more memory and computing.”

Moreover, enterprises heavily dependent on Java, TypeScript, or C++ will benefit from the superior code-generation performance and accuracy that Mistral’s benchmarks claim, Shah added.

Open-source models like Mistral’s can also enable users to create specialized LLMs tailored to specific industries or locations, according to Faisal Kawoosa, chief analyst at Techarc.

“Eventually, these kinds of specialized LLMs will emerge over time,” Kawoosa said. “While generative AI is useful, in many cases a specialized understanding of the domain is necessary, which can only come from creating such LLMs. Therefore, it is crucial to have an open-source platform that not only provides LLMs to use AI models but also allows for tweaking and further development to create those very specific platforms.”

Charlie Dai, VP and principal analyst at Forrester, noted that Mistral Large 2’s advanced features in code generation, mathematics, reasoning, performance, and cost efficiency (the model is designed to run efficiently on a single H100 node), along with its multilingual support and availability on major cloud platforms, will significantly enhance its competitiveness for enterprise clients in their AI initiatives.

Licensing and other concerns

A potential concern for users is that Mistral is releasing ML2 under the Mistral Research License, which allows usage and modification only for research and non-commercial purposes. For commercial use that requires self-deployment, users must obtain a separate Mistral Commercial License from the company.
“Since Mistral AI must have incurred significant data and training costs for Large 2, they have rightly reduced the scope for commercial usage without a license, requiring a strict commercial license, which drives up the pricing and could be an inhibitor,” Shah said. “This may be a deal breaker in certain areas like emerging markets.”

Prabhu Ram, VP of the Industry Research Group at Cybermedia Research, added that while Mistral AI has shown promise and potential, certain concerns persist. These include data transparency, model interpretability, and the risk of bias, which remain critical areas for improvement.