16 tech companies commit to develop safer AI

  • The UK government is hosting an AI Summit in South Korea where 16 tech companies have committed to developing safer AI.
  • The companies come from multiple regions, including the US, China, UAE, and South Korea.
  • The commitment will see companies each publish safety frameworks on how they will measure risks of their frontier AI models.

With May turning into a month in which multiple companies have debuted new plans for generative AI, the UK and South Korean governments are this week co-hosting an AI Summit in Seoul. Two days of discussions are set to take place, but the Summit has already secured commitments from 16 tech companies to develop safer AI.

It remains to be seen what kind of checks and balances will be put in place, but the UK government is of the opinion that the commitment will go a long way towards safer AI, a technology that has accelerated at a pace no regulator has been able to keep up with.

“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety. These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” enthused UK Prime Minister Rishi Sunak.

“It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology,” he added in an official press statement.

The 16 tech companies come from across the globe, including the US, China, UAE, and South Korea.

Those which have committed are:

  • Amazon,
  • Anthropic,
  • Cohere,
  • Google / Google DeepMind,
  • G42,
  • IBM,
  • Inflection AI,
  • Meta,
  • Microsoft,
  • Mistral AI,
  • Naver,
  • OpenAI,
  • Samsung Electronics,
  • Technology Innovation Institute,
  • xAI,
  • Zhipu.ai.

The above tech companies have committed to, “Each publish safety frameworks on how they will measure risks of their frontier AI models, such as examining the risk of misuse of technology by bad actors.”

“The frameworks will also outline when severe risks, unless adequately mitigated, would be ‘deemed intolerable’ and what companies will do to ensure thresholds are not surpassed. In the most extreme circumstances, the companies have also committed to ‘not develop or deploy a model or system at all’ if mitigations cannot keep risks below the thresholds,” the UK government continued.

This is not the first time that the companies rapidly developing new AI platforms and solutions have outlined a commitment to more ethical practices around the technology. But, as was the case when the likes of OpenAI and Google signed an open letter on AI risk, little has been done to put sufficient guardrails in place.

We will need to see what risks are actually disclosed via the aforementioned frameworks, and what steps are taken from there, when the next Summit takes place in 2025.

“On defining these thresholds, companies will take input from trusted actors including home governments as appropriate, before being released ahead of the AI Action Summit in France in early 2025,” the UK government’s release highlighted.

Whether this is just another show for optics, or whether these companies take safer AI seriously, remains to be seen. For now, the pace at which it is being developed has not slowed or paused to consider the consequences at all.

[Image – Photo by Igor Omilaev on Unsplash]
