Hooked! #6: Build, baby, build! vs please tell us what you are building and how

Trump, flanked by Michael Kratsios and David Sacks, displays an executive order at the 23 July 2025 AI Summit in Washington, DC.

Photo: The White House

Hello! 

As someone sceptical of claims about AI’s utility who nevertheless saves hours every month by having ChatGPT generate the long lists of keywords that accompany each Binding Hook article, and as someone deeply concerned by AI’s environmental impacts while still hopeful that it could free humans from mundane drudgery (like writing lists of SEO-optimised keywords), I clearly have mixed feelings on the subject. So, apparently, do regulators. 

The last few weeks have seen European and US approaches to AI diverge dramatically. As the EU’s AI Act obligations enter into force, the Trump administration announced an AI Action Plan to strip ‘bureaucratic red tape’ and ‘Build, Baby, Build!’ in order to win the ‘race to achieve global dominance in artificial intelligence.’

The plan includes nearly 100 policy recommendations. These range from creating formal guidelines for deepfakes and promoting secure-by-design AI technologies to evaluating AI models for ‘alignment with Chinese Communist Party talking points’ and removing references to misinformation, climate, and diversity, equity, and inclusion from certain regulatory frameworks.

Tech companies celebrated the plan, having spent millions of dollars on lobbying and plenty of executive time visiting the White House to ensure a favourable regulatory environment. In Europe, meanwhile, things are a little less cosy. 

One Meta official, announcing that the company would not sign the EU’s voluntary AI Act code of practice, called the new legislation an ‘overreach’ that will stunt AI development in Europe and hamper the companies working there. The code of practice calls for, among other things, companies to abide by EU copyright law and maintain high-quality documentation of their models, and for some ‘providers of general-purpose AI models [GPAI] with systemic risk’ to adopt certain safety and security protocols.

Cat Easdon discusses in Binding Hook how to create – and get companies to adopt – AI principles for a human-centred world. 

While the code of practice is voluntary, the AI Act itself is not. Several of the act’s obligations took effect this month: an EU-level advisory board and an expert panel have been created, and EU member states must designate authorities responsible for implementing the AI Act domestically. Providers of certain GPAI models must comply with EU intellectual property law and provide technical documentation and information about training data, though they have two years to get everything in order. Riskier models face even more requirements. The AI Act’s penalties have also come into effect, with fines for non-compliance reaching up to €35 million (US$41 million). 

Omree Wechsler, writing for Binding Hook, highlights how the AI Act’s high-risk classification will affect AI-powered cyber threat intelligence – a field already wrestling with the trade-offs between accuracy, transparency, and privacy.

As these rules take shape, they will also set the tone for how AI is developed and marketed for cyber defence. 

In a recent Binding Hook article, Jamie Collier and Max Smeets caution against framing this as an ‘AI arms race’, noting that the most effective defences against AI-enabled threats often come from reinforcing existing security fundamentals. 

It is a useful reminder for regulators and buyers alike: compliance frameworks should encourage what works in practice, not just what looks cutting-edge on paper.

Until next month,

Katharine Khamhaengwong
Binding Hook Editor


For more Binding Hook on AI: