This year, more than 80 countries around the world will hold elections, and we’re already getting a disturbing taste of how artificial intelligence (AI) can be used to spread lies and manipulate voters—and how social media companies are failing to contain the damage.

The boom in generative AI, which has spawned a range of cheap tools capable of generating hyper-realistic images, video and audio, coincides with a series of elections in countries comprising half the world’s population. This creates an ideal testing ground for malicious actors seeking to use the technology to interfere in elections and destabilize democratic societies. And it comes at a time when major tech platforms are dismantling their trust and safety teams and scaling back efforts to combat disinformation, leaving them even more vulnerable to AI-powered manipulation.

Taiwan is a dramatic example. As the island’s election approached earlier this year, videos of AI-generated TV presenters reading false reports that presidential candidate Lai Ching-te had illegitimate children appeared on social media. Researchers suspect the videos were created by the People’s Republic of China, which opposes Lai and other politicians who support Taiwan’s sovereignty.

In Bangladesh, a fake video, likely created by AI, showed an exiled opposition politician urging her party to “keep quiet on the Gaza issue,” while another showed an opposition politician in a bikini. Both videos, apparently aimed at enraging voters in the Muslim-majority country, circulated in the run-up to Bangladesh’s election in January.

Slovakia is another alarming example. Days before last fall’s election, two audio recordings believed to have been manipulated using AI circulated on social media. In them, a liberal party leader appeared to discuss buying votes from the country’s Roma minority and raising the price of beer.

In the UK, where an election is expected this year, AI-generated Facebook ads suggested that Prime Minister Rishi Sunak was involved in a financial scandal, while faked audio of opposition leader Keir Starmer berating his staff circulated online.

The big tech companies say they are taking the AI threat seriously. Most have committed to voluntary safeguards, and some have formed a consortium to combat AI abuse in elections. Meta, the parent company of Facebook and Instagram, plans to expand labeling of AI-generated content and has called for a common industry standard to identify AI material. YouTube and TikTok require creators to disclose when their videos were made with AI.

But beyond that PR, many platforms have in recent years downsized their trust and safety teams and gutted the internal departments responsible for monitoring and labeling content that violates their policies. Meta and YouTube have begun allowing false claims that the 2020 U.S. election was rigged or stolen, repealing policies that previously banned such content. Elon Musk has boasted about completely dismantling X’s election integrity team.

This backsliding by Big Tech, combined with AI-powered manipulation campaigns in elections around the world, paints a grim picture of what awaits the U.S. ahead of the November election. We got a taste of it in New Hampshire, where an AI-generated imitation of President Joe Biden’s voice was used in robocalls to spread disinformation.

Such dirty tricks may be just the beginning. The availability of tools that can easily create AI content and the unwillingness of social media companies to police election lies create the perfect conditions for voter manipulation.

It’s not too late for the big tech platforms to recalibrate their approach. By rebuilding their trust and safety teams and tightening and enforcing their policies, the companies could make it much harder for AI to be abused in the democratic process. But unfortunately, that doesn’t seem to be part of their plans.

Katie Paul is director of the Tech Transparency Project in Washington D.C. This article first appeared on The Hill.