Microsoft working on its own large language model codenamed MAI-1

  • In what comes as a surprise to some, Microsoft says it is working on a large language model (LLM) of its own.
  • The LLM is codenamed MAI-1 and could one day rival the generative AI offerings of OpenAI and Google.
  • The company adds that it has been leveraging its supercomputers to build bigger and better LLMs for years now.

While Microsoft has invested a sizeable amount of money in OpenAI in recent years, it looks like the company may one day have a large language model (LLM) of its own to implement across its different ecosystems, replacing the version of ChatGPT that currently powers Copilot.

That LLM is codenamed MAI-1, and following a report by The Information (paywall), those at Microsoft close to projects of this nature felt the need to step forward and clarify what the company has been working on.

In fact, some executives at Microsoft were reportedly shocked that the company’s ambition to one day build an LLM rivalling those of OpenAI and Google is doing the rounds in the news cycle.

So much so that Microsoft CTO Kevin Scott took to LinkedIn to share his thoughts on the matter.

He first explained that Microsoft has been working with OpenAI, helping to train the startup’s models on the supercomputers the company builds.

“We build big supercomputers to train AI models; our partner OpenAI uses these supercomputers to train frontier-defining models; and then we both make these models available in products and services so that lots of people can benefit from them. We rather like this arrangement. We’ve been at it for almost five years now,” he noted.

“Each supercomputer we build for OpenAI is a lot bigger than the one that preceded it, and each frontier model they train is a lot more powerful than its predecessors. We will continue to be on this path – building increasingly powerful supercomputers for OpenAI to train the models that will set the pace for the whole field – well into the future. There’s no end in sight to the increasing impact that our work together will have,” wrote Scott.

The CTO then went on to highlight that Microsoft has also been training its own LLMs for a number of years now, one of which is called MAI-1.

“We also, for years and years and years, have built AI models in MSR and in our product groups. AI models turn out to be interesting things to work on, and our researchers do great work studying and building them. AI models are used in almost every one of our products, services, and operating processes at Microsoft, and the teams making and operating things on occasion need to do their own custom work, whether that’s training a model from scratch, or fine tuning a model that someone else has built,” Scott outlined.

“There will be more of this in the future too. Some of these models have names like Turing, and MAI. Some, like Phi for instance, we even open source,” he concluded.

While there does not seem to be a timeline for when MAI-1 or another Microsoft-made LLM will see the light of day, it is clear the company has been working toward this for some time.

For now, however, consumers and enterprises running instances of Copilot or other OpenAI-powered AI solutions are unlikely to encounter MAI-1 any time soon.

[Image – Photo by Marcus Urbenz on Unsplash]
