Big tech vows action on ‘fake’ AI in elections

Most of the world’s largest tech companies, including Microsoft, Amazon and Google, have agreed to tackle what they call deceptive artificial intelligence (AI) in elections.

The tech accord

The twenty companies have signed an accord committing them to fighting voter-deceiving content. They say they will deploy technology to detect and counter the material.

The Tech Accord to Combat Deceptive Use of AI in 2024 Elections was announced at the Munich Security Conference on Friday 16th February 2024.

The issue has come into sharp focus because it is estimated that up to four billion people will vote this year in countries including the US, UK and India.

Technology to mitigate risk

Among the accord’s pledges are commitments to develop technology to mitigate risks related to deceptive election content generated by AI, and to provide transparency to the public about the action firms have taken.

Other steps include sharing best practice with one another and educating the public about how to spot when they might be seeing manipulated content.

Signatories include social media platforms X and Snap, as well as Adobe and Meta, the owner of Facebook, Instagram and WhatsApp.


However, the accord has some shortcomings, according to computer scientist Dr Deepak Padmanabhan, from Queen’s University Belfast, who has co-authored a report on elections and AI.

He reportedly said the companies needed to take more proactive action, rather than waiting for content to be posted before seeking to take it down.

That could mean realistic AI content, which may be more harmful, stays on a platform for longer than obvious fakes, which are easier to detect and remove, he suggested.


The accord’s signatories say they will target content which deceptively fakes or alters the appearance, voice, or actions of key figures in elections.

The accord also seeks to deal with audio, images or videos which provide false information to voters about when, where, and how they can vote.

“We have a responsibility to help ensure these tools don’t become weaponised in elections,” Brad Smith, the president of Microsoft, is reported to have said.

These measures, in my opinion, are a sticking plaster and will not stop the spread of dishonest and fake news!
