Over the past two years, reports of generative AI (GenAI) abuse have become commonplace. It is rather less common for such reports to come from Google, whose own GenAI model, Gemini, has repeatedly earned notoriety.
To its credit, though, the software giant has done just that, releasing an analysis of common patterns of AI misuse. The report describes an array of potential harms, ranging from the creation of false identities ("sockpuppeting"), to the portrayal of real people in invented actions or circumstances, to AI translations of existing scam content.
Unsurprisingly, the most common categories of abuse involve images or videos of real persons. Many of these are deepfaked pornography, as with the Spanish students who were recently convicted of generating sexual images of female classmates. Others have political objectives; during Russia’s war on Ukraine, both Presidents