
Turing test defeated by advanced AI

A computer programme has finally passed the Turing test by fooling 33% of its human judges into believing that it’s actually a 13-year-old boy called Eugene Goostman, according to The Independent, a UK-based online newspaper.

The Turing test was first proposed by Alan Turing in an academic paper written back in 1950. To pass it, a computer programme must fool at least 30% of its human judges into thinking they might actually be talking to a real human being and not a computer, over the course of a five-minute text-based chat.

The test was arranged by academics at the University of Reading and took place at the Royal Society in London, with “Eugene” being evaluated by three human judges. Only one of the judges was taken in, but since one out of three works out to 33%, just over the 30% threshold, “Eugene” became the first-ever computer programme to pass the Turing test.
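
For anyone who wants to see the arithmetic spelled out, here is a minimal sketch of that 30% criterion applied to the figures above; the function and variable names are purely illustrative and have nothing to do with how the contest itself was scored.

```python
# Purely illustrative: the 30% pass criterion described above, applied to the
# figures in this article (one judge out of three fooled). The function and
# variable names are hypothetical, not part of how the event was scored.

def passes_turing_threshold(judges_fooled: int, total_judges: int,
                            threshold: float = 0.30) -> bool:
    """Return True if the share of fooled judges meets the 30% bar."""
    return judges_fooled / total_judges >= threshold

fooled, total = 1, 3  # the article's figures for "Eugene"
print(f"{fooled / total:.0%} of judges fooled "
      f"-> pass: {passes_turing_threshold(fooled, total)}")
# prints: 33% of judges fooled -> pass: True
```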

The breakthrough came on Saturday the seventh of June, the 60th anniversary of Turing’s death.

Gaming the system

But before you get too excited, you should know that Eugene’s success had more to do with the Russian team’s approach than with the brilliance of their coding, good though it was: Eugene was presented as a 13-year-old Ukrainian boy who spoke English as a second language, a back story that explained away any mistakes he might have made as well as any gaps in his knowledge.

Even though the team gamed the system to effect the win, academics say this is still a momentous occasion in computing history, and one with potential consequences both good and bad.

“In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human,” Kevin Warwick, one of the visiting professors from the University of Reading, told The Independent. “Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime.”

Warwick went on to say that understanding how a person can be duped by an artificial intelligence is a key step in fighting cybercrime. That knowledge will help anti-cybercrime organisations come up with ways of letting ordinary people recognise when they’re being targeted, much as familiarity with the format and content of spam emails helps the average computer user avoid falling into the virtual traps laid for them.

[Source – The Independent UK, Image – Shutterstock]
