
AI-4/5: On violence against women in politics, identify best practice and share it

The rapid evolution of artificial intelligence (AI) has prompted many experts to warn about its impacts on democracy. To explore this issue further, the IPU is preparing a series of articles on the topic. In this fourth piece, British MP and Vice-Chair of the British Group of the IPU, Rt Hon Vicky Ford MP, talks about violence against women in politics (VAWIP), as well as the risks and opportunities of AI.

With the UK heading for an election likely in 2024, British MP Vicky Ford worries that the accelerating levels of online violence will discourage even more women from political life, distorting gender equality and democracy alike.

Online threats of death, rape and beatings have become a regular occurrence for MPs, especially women, all around the world, and this online aggression can have a real-world impact too, Ms. Ford says. Within the last eight years, for example, two British politicians – Jo Cox and Sir David Amess – have been murdered, and others have been lucky to escape with their lives.

Many now worry that AI will accelerate such trends.

“This is a concern not just for women, but also because it perverts our democracy,” Ms. Ford says, adding that she expects fewer women to run for parliamentary seats at the UK’s next election.

“When I talk to parliamentarians – especially women – from all major parties, the online abuse is a significant issue,” she says. “Anecdotally it is clear that fewer women are applying.”

The UK is not alone, of course. In a first-of-its-kind study in 2016, the IPU showed that sexism, harassment and violence against women MPs are a global problem, with some 80 per cent of women parliamentarians having experienced psychological violence. The IPU followed up with regional studies on Europe and Africa, which confirmed the same figures. The IPU then published comprehensive guidelines in 2019, but the technology is changing at speed.

“What worked in 2019 may not be advanced enough for elections in 2024,” Ms. Ford says.

The anonymity and distance of social media enables some people with extreme views to express themselves online, while some social media – notably X, once known as Twitter – have reduced their levels of self-regulation. Many British politicians have now stopped using that particular channel, Ms. Ford says.

Meanwhile, AI will likely accelerate the extraordinary scope and scale of online abuse. It can automate the generation of online abuse, analyse data to personalise the attacks, find ways to bypass content moderation, and help abusers take their attacks to scale.

It also introduces new ways to stir divisions and hatred, for example through the creation of deepfake photos or videos to pretend that politicians have done heinous things. In the UK this year, AI-generated videos were used to undermine opposition leader Keir Starmer as well as London mayor Sadiq Khan. Both videos went viral before being shown to be fake.

“But by that time the images and videos were already in the public domain,” Ms. Ford says. She notes that the staging of false events to undermine political leaders is not a new phenomenon. Hostile actors also use social media to stir up instability.

“What is new is the speed at which that type of hybrid warfare can happen,” she says. Some people are learning to distrust online content, however, since they now recognise that few guarantees exist about whether or not the content is true.

“But as we know content can go very, very viral very, very quickly and maybe there is not enough of that scepticism.”


In search of solutions to violence against women in parliament, the UK – which hosted an AI Summit this month – has introduced new protection measures for parliamentarians, including increased police resources and tougher sanctions for offenders. When they receive death threats online or in writing, many British politicians now go immediately to the police.

“One colleague told me just last night that a man who threatened her online now has a criminal record.”

Between February 2020 and September 2021, Ms. Ford was Parliamentary Under Secretary of State for Children and Families at the Department for Education, and was therefore closely involved in the development of the UK's new Online Safety Bill.

Having focused at the time on protecting children and young people, Ms. Ford welcomes the Bill's introduction of criminal liability for the heads of social media companies if, for example, their platforms promote self-harm, eating disorders or even child pornography.

“It’s important that we as politicians are thinking about such issues, and take action,” she says. “We see the potential harm of online content ourselves, but so do our constituents.”


AI could yet be a force for good, however. Ms. Ford says AI can be used to hunt down fake news and online hate.

She is also hopeful that AI and online hate will not automatically destroy fair and peaceful elections, pointing, for example, to Kenya, where elections have often been violent in the past but the 2022 election was largely peaceful.

“There are some lessons about how civil society and others can help by encouraging non-violence during an election,” Ms. Ford says.

The IPU has taken a lead in documenting and identifying the best ways to tackle violence against women in parliament, and Ms. Ford says she would like the IPU to assess what works in different parts of the world and to continue sharing that best practice with more MPs.

“Global technology is a global phenomenon and constantly evolving,” she says. “I think we should be looking urgently at best practice and sharing our ideas.”

Read more from the IPU series on AI

AI-1/5: Democracy is resilient, but AI needs regulation and careful management

Martin Chungong, IPU Secretary General, discusses the risks and opportunities for democracy and argues that we need more than regulation.

AI-2/5: MPs need to engage with scientists, says Denis Naughten

The IPU asks Denis Naughten, Chair of the IPU’s Working Group on Science and Technology, what advice he would have for parliamentarians around the world.

AI-3/5: On peace and security, parliaments must keep AI in check

The IPU asks Christophe Lacroix, co-rapporteur of the IPU Standing Committee on Peace and International Security, about the military applications of AI as well as the associated risks.

IPU Secretariat, Geneva