OpenAI and Google sign single-sentence statement about AI risk

  • The Center for AI Safety has published a single-sentence statement meant to set the bar for how conversations about AI risk mitigation are approached.
  • The statement says that mitigating the risk of extinction from AI should be a global priority on par with preventing nuclear war or a pandemic.
  • The likes of Google DeepMind and OpenAI have signed the statement but have not said they will cease development of their AI models.

In a Spider-Man pointing at Spider-Man moment, Silicon Valley leaders in the field of artificial intelligence (AI) have agreed to a call for caution in developing the technology. The signatories include Google DeepMind chief executive officer (CEO) Demis Hassabis and OpenAI CEO Sam Altman, the heads of the very companies that have drawn criticism for their continued, unchecked development of AI platforms.

The Center for AI Safety (CAIS) has published a single-sentence statement calling for caution in the development of AI.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

As laconic as the statement is, that brevity is by design, says CAIS.

“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously,” writes the non-profit organisation.

Earlier this year the Future of Life Institute penned a robust statement calling for a six-month pause on the training of AI systems more powerful than OpenAI’s GPT-4. That open letter has garnered some 31 810 signatories, but none from the leaders of DeepMind or OpenAI.

That said, leaders at OpenAI published a blog post last week calling for the mitigation of the risks of a so-called superintelligence. In the same breath, however, they saw no need to mitigate the risks of models “below a significant capability threshold”.

“Today’s systems will create tremendous value in the world and, while they do have risks, the level of those risks feel commensurate with other Internet technologies and society’s likely approaches seem appropriate,” wrote the big guns at OpenAI.

Except there is already talk of companies replacing humans with AI in some capacity in an effort to cut costs. AI is pervasive in everything from education to the arts. Even the law isn’t safe from the touch of glorified chatbots that are prone to making things up.

Worryingly, AI is only going to get better the more it is developed, and as that happens the definition of a “significant capability threshold” will shift. Governments are also infamously bad at keeping up with developments in technology; just look at how cryptocurrency was handled. Expecting governments to properly grasp the threats AI presents, if not now then in the future, is wishful thinking.

The CAIS statement also sets a rather high bar for what counts as harmful AI. While not on the level of a nuclear war, replacing folks in the entertainment industry with AI could put millions out of work, so how do the likes of DeepMind and OpenAI factor that into their ethical development?

There’s also no mention of AI platforms scraping the internet and using the original work of creators to train their models.

What really needs to happen is for AI development to be paused until lawmakers and the public can figure out what this technology is and how it can best be used, or indeed whether it should be used at all.

Until such time as OpenAI and Google DeepMind cease development of their models, their signatures on CAIS’ statement are just more fodder for their AI models to suck up as they continue to learn.
