Watching the detectives: Suspicious marketing claims for tools that spot AI-generated content
A common trope crossing the science fiction and mystery genres is a human detective paired with a robot. Think I, Robot, based on the stories of Isaac Asimov, or Mac and C.H.E.E.S.E., a show-within-a-show familiar to Friends fans. For our purposes, consider a short-lived TV series called Holmes & Yoyo, in which a detective and his android partner try to solve crimes despite Yoyo’s constant malfunctions. Let’s take from this example the principle – it’s elementary – that you can’t assume perfection from automated detection tools. Keep that principle in mind when making or evaluating claims that a tool can reliably detect whether content is AI-generated.
In previous posts, we’ve identified concerns about the deceptive use of generative AI tools that enable deepfakes, voice cloning, and manipulation by chatbot. Researchers and companies have been working for years on technological means to identify images, video, audio, or text as genuine, altered, or generated. This work includes developing tools that can add something to content before it is disseminated, such as authentication tools for genuine content and ways to “watermark” generated content.
Another method of separating the real from the fake is to use tools that apply to content after dissemination. In a 2022 report to Congress, we discussed some highly worthwhile research efforts to develop such detection tools for deepfakes, while also exploring their enduring limitations. These efforts are ongoing with respect to voice cloning and generated text as well, though, as we noted recently, detecting the latter is a particular challenge.
With the proliferation of widely available generative AI tools has come a commensurate rise in detection tools marketed as capable of identifying generated content. Some of these tools may work better than others. Some are free and some charge for their use.