Google says Android apps must allow you to report AI-generated content

Google has announced that, starting early next year, it will require AI apps on the Google Play Store to include an option to “report or flag offensive AI-generated content.” Users must be able to report the content within the app itself, without being sent to an external form or website.

Google also reminded developers that AI apps must still comply with all of its other developer policies. That means the AI cannot generate anything Google deems “restricted content,” including hate speech, gratuitous violence, terrorist content, sexualization of minors, and health misinformation.

Not all apps with AI features will be affected. Google’s AI-Generated Content policy states it is only meant to cover generative AI apps, such as those that produce content from text, voice, or image prompts (text-to-image, voice-to-image, image-to-image, and so on). It excludes “limited scope AI apps at this time,” for which it gives three examples.

The first is apps that use AI to summarize non-AI-generated content. The second is “productivity apps” where AI improves an existing feature, such as suggested email responses. The third is apps that host AI-generated content but don’t generate it themselves, like a social media app. Those apps are still subject to the User Generated Content policy, however, which does require “an in-app system for reporting objectionable [user-generated content] and users.”

“This is a fast-evolving app category and we appreciate your partnership in helping to maintain a safe experience,” Google’s post said.

Source: Google | Via: The Verge


Watching the detectives: Suspicious marketing claims for tools that spot AI-generated content

A common trope crossing the science fiction and mystery genres is a human detective paired with a robot. Think I, Robot, based on the short stories of Isaac Asimov, or Mac and C.H.E.E.S.E., a show-within-a-show familiar to Friends fans. For our purposes, consider a short-lived TV series called Holmes & Yoyo, in which a detective and his android partner try to solve crimes despite Yoyo’s constant malfunctions. Let’s take from this example the principle – it’s elementary – that you can’t assume perfection from automated detection tools. Keep that principle in mind when making or evaluating claims that a tool can reliably detect whether content is AI-generated.

In previous posts, we’ve identified concerns about the deceptive use of generative AI tools that enable deepfakes, voice cloning, and manipulation-by-chatbot. Researchers and companies have been working for years on technological means to identify images, video, audio, or text as genuine, altered, or generated. This work includes developing tools that can add something to content before it is disseminated, such as authentication tools for genuine content and ways to “watermark” generated content.

Another method of separating the real from the fake is to use tools that apply to content after dissemination. In a 2022 report to Congress, we discussed some highly worthwhile research efforts to develop such detection tools for deepfakes, while also noting their enduring limitations. These efforts are ongoing for voice cloning and generated text as well, though, as we noted recently, detecting the latter is a particular challenge.

With the proliferation of widely available generative AI tools has come a commensurate rise in detection tools marketed as capable of identifying generated content. Some of these tools may work better than others. Some are free and some charge you for …

Read More ...