The 2020s so far have witnessed a boom in generative AI, especially since the launch of ChatGPT in 2022. This technology has enabled the widespread generation of text, images, and videos – including deepfakes: images and videos of individuals that have been digitally altered with AI. Studies reported a 245% increase in deepfakes in 2024, with women and girls disproportionately targeted, often through nonconsensual pornographic content.
Recent cases involving Grok, X’s AI, highlight how easily such images can be produced. This surge is part of broader technology-facilitated gender-based violence (TFGBV), with up to 58% of women and girls experiencing some form of it. As generative AI evolves, it amplifies gendered harm and power hierarchies, increasing insecurity for women, girls, and gender-diverse people in digital and physical spaces, deterring them from participating in public life and politics.
When AI and deepfakes are used to humiliate targets based on gender or sexual identity, they become tools of gendered disinformation – a form of violence recognised in feminist security studies.
Gendered disinformation as a political tool
The low cost and scalability of content production allow a wide range of actors to deploy these tools for gendered harassment and political intimidation. Beyond private individuals, authoritarian regimes, state-aligned actors, and other illiberal entities have leveraged deepfake technologies and AI to disinform the public and suppress minority populations, particularly women, LGBTQI+ communities, and other marginalised groups. This behaviour reinforces feminist security studies’ central insight that insecurity is produced through everyday practices, infrastructures, and power relations rather than solely through formal institutions or military force.
Deepfakes have targeted female political leaders worldwide, from across the ideological spectrum. In the US, after Vice President Kamala Harris’s 2024 nomination, AI-generated sexualised content using her image circulated online, as it did with Congresswoman Alexandria Ocasio-Cortez. In Europe, Cara Hunter, a member of Northern Ireland’s Legislative Assembly, was targeted with a pornographic deepfake video. Similarly, Italian Prime Minister Giorgia Meloni pursued legal action after sexually explicit deepfake content depicting her face was distributed online in 2020, before her election.
Deepfakes in a conservative cultural context
The consequences are even more severe in traditional conservative societies, such as those in the Middle East and North Africa (MENA), where religion and patriarchal social structures enforce rigid gender norms.
Studies of digital violence against women and girls in Arab states have shown that 49% of women internet users report they ‘do not feel safe from online harassment.’ Of those who have experienced online violence, 36% were told to ignore it, 23% were blamed for it, and 12% were subjected to physical violence from their families.
Furthermore, in environments where systemic barriers and gender biases have historically barred women from representation and leadership roles, deepfakes and generative AI carry particular weight. This can take the form of silencing through threats and forced withdrawal: Syrian feminist and human rights activist Hiba Ezzideen Al-Hajji faced a smear campaign in 2023 involving deepfakes, rape and death threats, and online defamation, believed to be orchestrated by jihadists in Idlib.
Deepfakes also facilitate the cross-border repression of women in political life; in Iran, feminist activists Azam and Samaneh have been targeted by transnational deepfake campaigns aimed at silencing dissidents abroad. Azam, now a refugee in Canada, and Samaneh, based in the UK, have both faced rape threats, sexually explicit deepfakes, and online abuse as retaliation for their activism against the Iranian regime.
In Pakistan, Azma Bukhari, the information minister of the province of Punjab, was a victim of a pornographic deepfake. A video of Bukhari’s face superimposed onto the body of a sexualised Indian actor circulated on social media in 2024. More content was digitally altered to depict Bukhari and her family, insinuating that she appeared publicly with boyfriends outside her marriage.
In some conservative, Muslim-majority societies, even seemingly mundane actions like a hug or dancing can ruin the reputation of politically ambitious women, as happened to Pakistani lawmaker Meena Majeed, who was featured in a deepfake video hugging the male chief minister of Balochistan province in 2025. Similarly, doctored images and videos of Punjab Chief Minister Maryam Nawaz Sharif were shared online throughout 2024 and early 2025, suggesting inappropriate contact with opposition leaders while dancing.
Attacks can also take the form of extortion and coercion. In June 2024, Aisha Gaddafi, daughter of the late Libyan leader, Muammar Gaddafi, was the victim of threats centred on AI-generated content that sexualised her image. The attackers used photos from Gaddafi’s social media to create pornographic videos and photos targeting her, then demanded $3 million in exchange for not releasing the videos, an escalation from earlier passport forgeries and voice manipulation. Gaddafi turned to social media to expose the blackmailers, publicly posting screenshots of the video and garnering sympathy.
Existing legal frameworks and their limitations
At the international level, legal and regulatory responses to deepfakes remain fragmented. While some jurisdictions, most notably within the European Union, have begun to develop rules addressing synthetic media, enforcement is uneven, and many states lack the institutional capacity to pursue perpetrators or compel platform compliance.
In the MENA region, AI governance has largely emphasised innovation, economic development, and soft power, with limited attention paid to the role of generative technologies in gender-based harm and political intimidation. Regional and continental instruments, such as the Arab Declaration on Combating All Forms of Violence Against Women and Girls (2022) and the African Union Convention on Ending Violence against Women and Girls (2025), signal growing recognition of technology-enabled violence, but their effectiveness depends on meaningful implementation and integration into domestic law.
Addressing a lack of accountability
Deepfakes and nonconsensual image generation constitute a rapidly expanding form of TFGBV with significant political and governance implications. By exploiting gendered norms, sexuality, and reputational harm, deepfakes disproportionately target women and gender-diverse people, particularly in regions where patriarchal social structures and moral policing amplify their consequences. These harms are embedded within broader digital infrastructures and regulatory failures that enable their production, circulation, and persistence.
Central to this problem is the lack of platform accountability. Social media companies continue to profit from engagement-driven algorithms, and recent moves away from professional fact-checking toward community-based moderation further weaken protections against gendered disinformation, particularly against coordinated attacks or cross-border harassment. Platform governance frameworks remain ill-equipped to address synthetic media harms proactively, often responding only after content has gone viral and irreversible damage has occurred.
Although large language models and generative AI systems are not always designed for deception, the content they generate can shape perceptions, reinforce hierarchies, and produce harm irrespective of disclosure or watermarking. From a feminist security studies standpoint, the persistence of harm despite transparency mechanisms highlights the limits of technocratic solutions in addressing fundamentally political and gendered forms of insecurity.
Addressing deepfakes requires coordinated international frameworks that move beyond technocratic fixes and place responsibility squarely on platforms, states, and transnational governance mechanisms. A feminist, intersectional approach is essential – it must consider who is being targeted, and how race, sexuality, gender, and location impact experiences. In this way, we can understand how power, gender, and politics shape digital harm and develop regulatory responses that protect those most vulnerable to abuse. Mitigating technological harm will protect vulnerable users, allowing them to participate in the digital sphere, which now forms an important extension of public and political life.