AMD looks to break Intel and NVIDIA’s stranglehold on AI hardware market

  • AMD has announced its all-new AMD Instinct MI300 series of accelerators for AI platforms.
  • The accelerators have already sparked the interest of Microsoft, Oracle and even OpenAI.
  • With NVIDIA and Intel dominating the AI hardware space, AMD is looking to wrestle some of the market share into its corner.

When it comes to the hardware powering artificial intelligence (AI), NVIDIA and Intel have something of a stranglehold on the market. The two firms are well ahead of other hardware manufacturers, but AMD looks to be challenging that dominance.

On Wednesday, AMD announced its AMD Instinct MI300 series of accelerators, and the hardware maker already has a list of high-profile customers that want to deploy them, including Microsoft and Oracle. The firm notes that both Dell and HPE are deploying Instinct accelerators in their servers and supercomputers.

“AMD Instinct MI300 Series accelerators are designed with our most advanced technologies, delivering leadership performance, and will be in large scale cloud and enterprise deployments,” president of AMD, Victor Peng, said in a statement.

“By leveraging our leadership hardware, software and open ecosystem approach, cloud providers, OEMs and ODMs are bringing to market technologies that empower enterprises to adopt and deploy AI-powered solutions,” Peng added.

AMD Instinct MI300 accelerators come in two flavours: the MI300X and the MI300A. Both feature HBM3 memory, which is said to be primed for handling large AI models and transfers data at a faster rate.

| AMD Instinct™ | Architecture | GPU CUs | CPU Cores | Memory | Memory Bandwidth (peak theoretical) | Process Node | 3D Packaging w/ 4th Gen AMD Infinity Architecture |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MI300A | AMD CDNA™ 3 | 228 | 24 “Zen 4” | 128GB HBM3 | 5.3 TB/s | 5nm / 6nm | Yes |
| MI300X | AMD CDNA™ 3 | 304 | N/A | 192GB HBM3 | 5.3 TB/s | 5nm / 6nm | Yes |
| Platform | AMD CDNA™ 3 | 2,432 | N/A | 1.5 TB HBM3 | 5.3 TB/s per OAM | 5nm / 6nm | Yes |

The “Platform” mentioned in the table above is said to contain eight MI300X accelerators. The goal of this platform is to give adopters an easy way to deploy AI hardware without a fuss.
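The Platform row in the table follows directly from packaging eight MI300X accelerators together. A quick back-of-the-envelope check (an illustration, not an AMD calculation) confirms the aggregate figures:

```python
# Derive the 8-GPU "Platform" row from the single MI300X specs in the table above.
MI300X_GPU_CUS = 304        # compute units per MI300X
MI300X_MEMORY_GB = 192      # GB of HBM3 per MI300X
NUM_ACCELERATORS = 8        # the Platform packages eight MI300X OAM modules

platform_cus = MI300X_GPU_CUS * NUM_ACCELERATORS                  # 2,432 CUs
platform_memory_tb = MI300X_MEMORY_GB * NUM_ACCELERATORS / 1000   # ~1.5 TB HBM3

print(platform_cus, platform_memory_tb)  # 2432 1.536
```

The 1,536 GB total is what AMD rounds to 1.5 TB in its materials.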

As for how AMD’s platform compares to the likes of NVIDIA, the chipmaker shared some figures of its own.

“Compared to the Nvidia H100 HGX, the AMD Instinct Platform can offer a throughput increase of up to 1.6x when running inference on LLMs like BLOOM 176B and is the only option on the market capable of running inference for a 70B parameter model, like Llama2, on a single MI300X accelerator; simplifying enterprise-class LLM deployments and enabling outstanding TCO,” AMD said in a press release.
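The single-accelerator claim above is plausible on memory capacity alone. A rough sketch (our illustration, not AMD's methodology, and ignoring KV cache and activation overhead) shows that the weights of a 70B-parameter model in 16-bit precision fit within one MI300X's 192 GB of HBM3:

```python
# Rough check: do the weights of a 70B-parameter model (e.g. Llama 2 70B)
# fit in a single MI300X's 192 GB of HBM3?
params = 70e9            # 70 billion parameters
bytes_per_param = 2      # FP16/BF16 weights
mi300x_hbm_gb = 192      # per the spec table above

weights_gb = params * bytes_per_param / 1e9   # 140.0 GB of weights
fits = weights_gb < mi300x_hbm_gb             # True, with ~52 GB of headroom

print(weights_gb, fits)  # 140.0 True
```

The remaining headroom would be shared by the KV cache and activations, which is why capacity alone doesn't guarantee comfortable serving at every batch size and context length.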

Alongside the hardware, AMD also revealed its ROCm 6 open software platform. When paired with MI300 accelerators, AMD claims an 8x performance bump for training Llama 2. The software platform is open source and, as such, can leverage other open-source AI software, models, algorithms and frameworks, including Hugging Face, PyTorch and TensorFlow.

However, the position NVIDIA and Intel hold in the market means that those with AI deployments are already entrenched in those ecosystems. Moving to a new platform is a big step, and it's risky, especially when AI is developing rapidly and any downtime could mean losing ground to a competitor.

As mentioned though, Oracle and Microsoft are deploying Team Red’s AI hardware and according to CNBC, so are Meta and OpenAI.

With those big names on AMD’s side, perhaps it can wrestle some of the AI hardware market, expected to be worth $400 billion by 2027, into its coffers.
