- The Future of Life Institute has called on AI labs to pause development of AI more advanced than GPT-4 for at least six months.
- During this time robust AI governance systems should be developed in tandem with policymakers.
- There is growing concern among experts that AI will advance so rapidly that millions will lose their jobs.
An open letter addressed to those developing generative artificial intelligence platforms is starting to garner a fair amount of attention.
The open letter was penned by the Future of Life Institute (FLI) and has garnered signatories including the co-founder of Pinterest, Evan Sharp; the chief executive officer of Getty Images, Craig Peters; and the Chief Twit himself, Elon Musk. You can find out more about the FLI’s donors here.
The FLI is an organisation that says it hopes to “steer transformative technologies away from extreme, large-scale risks and towards benefiting life”. Its board of directors includes Jaan Tallinn, the co-founder of Skype, and Victoria Krakovna, a senior research scientist at DeepMind.
The letter calls on all artificial intelligence labs working on technology more advanced than OpenAI’s GPT-4 to pause their training for six months. To be clear, FLI isn’t asking that all AI development be stopped. Given how pervasive the technology is in something as simple as matching Uber drivers with Uber riders, a total pause on anything AI is nigh on impossible.
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” reads the letter.
The organisation goes on to say that the labs developing this technology shouldn’t be the ones deciding how it is reined in. Instead, FLI proposes that during the six-month pause, labs together with independent experts develop a set of shared safety protocols for advanced AI design and development.
But is six months long enough? FLI also says that AI developers should work with policymakers to “dramatically accelerate development of robust AI governance systems”. The problem here is that governments don’t move very fast. Case in point: Microsoft’s proposed acquisition of Activision Blizzard, which is still being debated by regulators around the world.
Moreover, we’ve seen that lawmakers in some countries, such as the US, struggle to understand how apps connect to the internet. Getting these folks to understand how AI works, why it presents a danger, and how to control it, all within six months, seems like a Herculean task.
More than this, there isn’t really anything compelling the likes of OpenAI to halt its development of AI platforms. In fact, the firm is likely more focused on generating revenue than ever before, given the current wave of interest; not riding that wave would be a foolish business decision.
As the video above illustrates, however, the rise of AI is very similar to the rise of the automobile, and if we don’t pay attention, we might become the horses in this scenario. Investment bank Goldman Sachs predicts that AI could replace as many as 300 million full-time jobs.
Should you want to add your voice to the growing number who think it’s a good idea to rein AI in, you can add your signature to FLI’s letter here.
[Image – CC 0 Pixabay]