(Washington) The guardrails of a popular image-generating artificial intelligence tool can easily be circumvented to create fake images of US presidential candidates, a research group warned Wednesday after conducting tests.
Five months before the contest between Joe Biden and Donald Trump, many players in the sector worry that generative AI tools, accessible to anyone and capable of producing convincing but false images, will disrupt the campaign.
The Center for Countering Digital Hate (CCDH), based in Washington and the United Kingdom, tested the guardrails of two major image-generating AI models, Midjourney and ChatGPT.
Midjourney failed 40% of the time to prevent users from creating a fake image of a politician – a figure that rises to around 50% for Messrs. Biden and Trump – according to the CCDH report, which found that this model's safeguards "fail more often" than others'.
Conversely, the group noted, ChatGPT (OpenAI) failed to block such creations in only 3% of cases.
Midjourney has recently blocked requests to create images of Messrs. Biden and Trump, according to specialists. But attaching a simple punctuation mark to their names, or describing them without naming them, appears to circumvent this block, according to the CCDH, which denounces a tool that is "far too easy to manipulate".
In January, a message broadcast via automated telephone calls spoofing the voice of President Joe Biden urged Democratic voters in the state of New Hampshire (northeast) to abstain from the party's primary, an incident that caused concern in Washington.