Events like Binding Hook Live matter because they give space to think differently. Amid conversations about ransomware, national resilience, and the future of cyber strategy last October, I found myself drawn to a question we rarely explore: what if some of our biggest security risks are rooted in the social dynamics we still overlook? We treat misogyny as a social-harms issue and cybercrime primarily as a technical pursuit, yet both rely on the same human competencies: manipulation, deception, and control. These may not be separate problems so much as expressions of a shared skill set.
We often imagine cybercrime as a contest of code, a duel between attacker and firewall. In practice, many successful attacks begin not with an exploit but with persuasion – convincing someone to trust a false identity or bypass a safeguard. These are the same kinds of conversational manipulation that thrive in certain toxic corners of the internet. Manipulation is central to social engineering, the psychological dimension of cyber-offending.
Shared skills link misogyny and cybercrime
The internet already provides arenas to rehearse those interpersonal tactics. Misogynistic online communities do not simply host abusive content; they appear to normalise and reward coercive behaviour. These spaces allow young men to test boundaries, manipulate empathy, build false intimacy, and sustain deception. Law enforcement associates these same interpersonal mechanisms with social-engineering attacks. The Institute for Strategic Dialogue identifies these misogynistic online spaces as early-warning environments where manipulation and false identity thrive. The behavioural parallels with cyber-offending are clear, but a potential direct pathway remains largely unexamined.
A recent warning from the National Crime Agency (NCA) on the rise of ‘Com networks’ highlights why this matters. According to the NCA, these online-harm groups, often comprising teenage boys, coordinate grooming, impersonation, and blackmail. They are not disorganised offenders but structured communities using deception and emotional leverage to obtain sensitive information. The NCA has classified them as serious and organised crime. Strip away the gendered context, and the functional skills look strikingly similar to those associated with cyber-offending: targeting, persuasion, and exploitation of trust.
Academic research corroborates the link between misogyny and cybercrime, showing that cybercrime forums often reproduce traditional masculinities. They tie technical mastery to dominance and humiliation, framing hacking as essentially masculine and using misogynistic humour to sideline women’s technical ability and reinforce a male-coded hacker identity. Further studies demonstrate how digital crimes mirror offline gender hierarchies through performances of control and transgression. Men and women systematically perceive psychosocial cybercrimes (such as cyberstalking, revenge porn, and online harassment) differently, with women rating these harms as more serious. These findings do not prove that misogyny leads to cybercrime, but they do show that gendered norms have long shaped cyber-offending cultures. The emerging Com networks continue that pattern, arising from spaces where misogyny offers young men an identity, a sense of control they may lack elsewhere, and perhaps an informal environment in which manipulative skills are practised.
The leaked Conti ransomware chats revealed a work culture steeped in ridicule, contempt, and misogyny, in which participants traded rape jokes and discussed child sexual abuse material. Lapsus$, the teenage hacking collective, ran its Telegram channels in a similar tone. The way these groups operate, including behaviours such as grooming, impersonation, and manipulation, closely resembles that of misogynistic online spaces, suggesting a shared set of practices.
To be clear, not every participant in misogynistic communities becomes a cybercriminal, just as not every hacker engages in gendered harassment. The competencies, however, are remarkably similar. The possibility of a skills-based overlap warrants attention, as these communities may function as unregulated apprenticeships in social engineering.
What can government do?
Law-enforcement agencies are beginning to signal related concerns. Europol’s recent report on the recruitment of minors to criminal networks notes the use of emotionally charged messaging, coded slang, and gamified challenges. The means of recruitment – belonging through performance – mirrors social researchers’ findings in misogynistic subcultures: training is social before it becomes criminal. A direct pathway remains under-researched.
Policy has yet to catch up with these emerging questions. The UK’s Online Safety Act recognises misogyny and abuse of women and girls as significant online harms, giving the Office of Communications (Ofcom) the tools to regulate platforms. Yet the act treats misogyny as a matter of content moderation and user welfare, not as a potential national-security issue. Harm and threat remain administratively separate, even when the behaviours involved may overlap.
By contrast, the new Cyber Security and Resilience Bill, introduced in November, places cybercrime firmly within the national-security domain. Technology Secretary Liz Kendall declared during its announcement that ‘cyber security is national security.’ If that is true, we must understand the human ecosystems that teach the skills used to breach national systems. A strategy focused solely on technical deterrence risks overlooking the social environments that practise, normalise, and reward manipulation.
Training grounds for cybercrime
If we approach cybercrime not only in terms of technical capacity but also through the lens of skill formation, the omission becomes clear. Misogynistic communities act as informal incubators for interpersonal competencies: constructing false narratives, sustaining deception, manipulating empathy, and exploiting digital intimacy. These are recognisable components of the social-engineering playbooks documented by agencies like the US Cybersecurity and Infrastructure Security Agency (CISA) and the European Union Agency for Cybersecurity (ENISA). Yet no existing cyber-policy framework accounts for where perpetrators develop those capabilities.
The behaviours associated with exploitation may link misogyny and cybercrime more closely than any specific toolset, but this remains a hypothesis. If the NCA is right that Com networks represent a new class of organised online-harm group, then the way young offenders learn manipulative skills requires urgent research.
For all the rhetoric about human factors in cybersecurity, policy still avoids addressing the environments that shape them. It is easier to fund software hardening and awareness campaigns than to confront the online cultures that may be teaching exploitation as a skill. If misogynistic online spaces are training people in the same social-engineering skills that power real-world attacks, they belong in the same policy conversation.
Misogyny may not yet appear in cyber threat models, but perhaps it should. At the very least, we need the research capable of telling us whether these patterns are coincidence, correlation, or the early signs of a causal pathway. Spaces like Binding Hook Live are valuable for allowing these questions to be surfaced and taken seriously. Bringing the social drivers of cybercrime into view is the first step toward policy that can address the national security risks we may currently be missing.