New OpenAI Tool That Spots Fake Images is 99% Accurate

TLDR

  • AI-generated images are becoming hard to tell apart from real ones, but it’s not impossible.
  • OpenAI has claimed that its yet-to-launch tool can detect AI-generated images with 99% accuracy.
  • The likelihood that AI image tools could be used by bad actors underscores the need for detection tools.

Detecting images generated by AI technology is becoming quite difficult. In recent months, many AI image-generating tools have released newer, more advanced versions capable of producing images that are hard to distinguish from real photographs, though not impossible. 

In a previous article, Cryptopolitan outlined some of the things to look out for when trying to distinguish AI-generated images from real ones. The clues include the filename and metadata, watermark traces, and oddly distorted faces, especially in images containing multiple faces. 
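As a rough illustration of the metadata clue, the snippet below scans a file's raw bytes for strings that some AI image generators and provenance standards are known to embed. This is a minimal heuristic sketch, not OpenAI's detection tool; the marker list is an illustrative assumption, and metadata can easily be stripped, so a clean result proves nothing.

```python
# Illustrative marker strings sometimes found in generator metadata
# (assumed list for demonstration; real-world markers vary).
AI_MARKERS = [b"DALL-E", b"Midjourney", b"Stable Diffusion", b"c2pa"]

def suspicious_markers(path: str) -> list[str]:
    """Return any known AI-generator marker strings found in the file's bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [m.decode() for m in AI_MARKERS if m in data]
```

A hit from a check like this only catches the naive case where metadata was left intact; it cannot detect an image whose metadata has been removed or rewritten, which is why dedicated detection tools are needed.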

OpenAI Claims 99% Accuracy for Its AI Image Detection Tool

Beyond these manual checks, OpenAI, the company behind ChatGPT, has promised a new solution that could help detect AI-generated images with a high degree of accuracy. 

Business Post reported Wednesday that OpenAI is developing an AI image detection tool that is “99 per cent reliable,” according to the AI company’s chief technology officer, Mira Murati. 

The company had previously hinted in July that it was working on ways to detect AI-generated content, including images and audio. At the moment, the tool is still being tested internally ahead of a planned public release, and Murati did not disclose a specific timeline.

The likelihood that AI image tools could be harnessed by malicious actors to mislead or cause harm underscores the need for detection tools. 

The Dangers of AI Deepfakes

In March, an AI deepfake of Pope Francis in a puffy coat went viral and sent many people into a frenzy. In a subsequent statement, the Pope said that AI is “endowed with disruptive possibilities and ambivalent effects.”

The Pope noted that biases in AI algorithms are a serious problem, calling for assurances that “a logic of violence and discrimination does not take root in the production and use of such devices, at the expense of the most fragile and excluded.”

In August, a similar incident occurred when AI-generated images showing piles of trash on the streets of Paris surfaced on the internet. A video compilation of the images was viewed more than 400,000 times within a week.

The images created a false narrative about Paris, with some captions claiming, “This is what the French capital city, Paris, looks like. The dream city… now turned into this in reality.”

Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.

Source: https://www.cryptopolitan.com/new-openai-tool-spots-fake-images-accurate/