
Intel develops software it says can detect deepfakes in milliseconds

  • Intel has turned FakeCatcher, its deepfake detector, into a product.
  • The detector measures blood flow by analysing how the colour of pixels changes as blood flows through the body.
  • Intel claims this detection method has an accuracy of 96 percent.

Deepfakes can be incredibly convincing. Take our header image, for example: while all of those people look real, they were actually generated by an AI. The ability to harness artificial intelligence to fake an entire human, right down to their voice, opens up dangerous avenues for nefarious individuals and groups.

With deepfakes, cybercriminals can launch all manner of attacks and misinformation campaigns, and detecting whether a video is fake has been tricky and slow. However, Intel says it has a solution: FakeCatcher.

This solution was designed by Ilke Demir, a senior staff research scientist at Intel Labs, and Umur Ciftci of the State University of New York at Binghamton. FakeCatcher has been in development for a while, and Intel has now turned it into a product.

The deepfake detector runs on a server powered by 3rd Gen Intel Xeon Scalable processors, which interface with a range of tools developed by Intel. The solution is said to work in real time, detecting a fake within milliseconds.

“Most deep learning-based detectors look at raw data to try to find signs of inauthenticity and identify what is wrong with a video. In contrast, FakeCatcher looks for authentic clues in real videos, by assessing what makes us human: subtle ‘blood flow’ in the pixels of a video. When our hearts pump blood, our veins change colour,” Intel explains.

“These blood flow signals are collected from all over the face and algorithms translate these signals into spatiotemporal maps. Then, using deep learning, we can instantly detect whether a video is real or fake.”
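Intel hasn't published FakeCatcher's code, but the quoted description matches a technique known in the research literature as remote photoplethysmography (rPPG): the heartbeat slightly modulates skin colour, most visibly in the green channel, and that periodic signal can be recovered from ordinary video. Below is a minimal sketch of the idea using synthetic frames in place of a real face tracker; the function names are ours, not Intel's, and a production pipeline would also build the spatiotemporal maps and classifier Intel describes.

```python
import numpy as np

def extract_ppg_signal(frames):
    """Average the green channel over a (notional) face region per frame.

    `frames` has shape (T, H, W, 3). A real rPPG pipeline would first
    detect and track the face; here we simply use the whole frame.
    """
    return frames[..., 1].mean(axis=(1, 2))

def dominant_frequency_hz(signal, fps):
    """Return the strongest non-DC frequency in the signal via FFT."""
    signal = signal - signal.mean()            # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic demo: 10 s of 30 fps "video" whose green channel pulses at
# 1.2 Hz (72 bpm), mimicking the colour change driven by blood flow.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)      # heartbeat waveform
frames = np.full((len(t), 8, 8, 3), 128.0)
frames[..., 1] += pulse[:, None, None]          # modulate green channel

signal = extract_ppg_signal(frames)
bpm = dominant_frequency_hz(signal, fps) * 60
print(round(bpm))  # recovers the 72 bpm pulse
```

A deepfake generator that does not reproduce this physiological signal would yield either no coherent pulse or an implausible one, which is the "authentic clue" FakeCatcher looks for.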

The firm says that the solution can be employed by social media platforms to prevent the upload of harmful deepfake videos. Newsrooms could also use it to avoid amplifying fake videos.

Intel says FakeCatcher has a 96 percent accuracy rate, though that figure depends largely on which dataset it is tested against.

For example, on the FaceForensics++ dataset the solution scored 92.48 percent accuracy, but on the DeeperForensics dataset this rose to 99.27 percent.

As novel as this approach is, we have a few questions. The first is how accurate the detection tool is across people of different races and skin tones. In the past, AI-based detection tools have been flawed because of the datasets used to train them.

Of course, Intel hasn't said exactly how its secret sauce is made, which is understandable, but assurances that the solution was tested on a diverse range of people would be nice.

There’s also the question of how long this detection method will remain viable. We aren’t sure how one could fake blood flow in a deepfake, but where there’s a will, there’s a way.

For now, though, this looks like a promising solution, but we’ll have to see how it’s deployed and how effective it is in the wild.

[Image – This Person Does Not Exist]
