Long dismissed as a technological follower, even a copycat, China has rapidly become a global leader in innovation. It is now racing ahead in fields ranging from electric vehicles and robotics to 5G and artificial intelligence. Nowhere is this shift more striking than in generative AI. Chinese companies such as DeepSeek are producing advanced language models that reportedly rival ChatGPT at a fraction of the cost. This has prompted a domestic ‘AI frenzy’ in sectors from smart cars and smart air conditioners to government services and hospitals.
This transformation is not only technological but also regulatory. Historically, China borrowed heavily from global norms – its 2021 Personal Information Protection Law was clearly inspired by the EU’s General Data Protection Regulation (GDPR). But with generative AI, there are no ready-made international templates. Instead, Chinese regulators had to devise solutions of their own. In doing so, they may have flipped the script: instead of learning from Europe, China is now developing tools and frameworks that European policymakers could learn from.

How China regulates AI
Discussions of China’s digital governance often focus narrowly on censorship. Censorship is certainly a major driver of China’s regulation of AI. Yet just as significant is the urgency created by real harm from unregulated technologies. High-profile incidents, such as the 2022 leak of a Shanghai police database exposing data on one billion citizens and a 2016 case in which a student died after being misled by paid-for search engine results on cancer treatment, have underscored the dangers of data misuse and digital misinformation. These are not uniquely Chinese challenges; they mirror the kinds of risks also confronting Europe.
To address these challenges, European regulators have taken a comprehensive, top-down approach to digital legislation. Frameworks like the GDPR, the Digital Services Act, and the recent AI Act took years to draft, debate, and adopt. The AI Act, first proposed in April 2021, was only finalised in mid-2024 after extensive revisions that resulted in a 144-page document. This reflects the complexity of achieving consensus across 27 member states.
China’s approach could hardly be more different. Rather than formulating a single overarching law, Chinese authorities have opted for a modular, piecemeal strategy. Since 2021, regulators have issued targeted rules for recommendation algorithms, deep synthesis technologies (deepfakes), and, more recently, generative AI services. These rules are issued not by China’s legislature but by powerful executive agencies like the Cyberspace Administration of China (CAC). They are also much shorter: China’s current generative AI regulations run to just four pages in English translation.
This approach allows for speed and flexibility. The CAC introduced its generative AI rules within only nine months of ChatGPT’s release. Rather than pre-emptively regulating the entire field, China addresses urgent risks through targeted interventions – for instance, requiring conspicuous labelling of deepfakes on all platforms. Regulators consider more comprehensive legislation only once technological developments mature and stabilise. Although this regulatory model can be opaque and lacks democratic oversight – rules can be introduced with little warning – it is undeniably agile. While such a model would rightly never fly in Europe, it has allowed China to innovate in AI governance.
Among the most notable innovations is the CAC’s algorithm registry. Any company offering generative AI to the public must register its algorithms with the CAC and conduct a security assessment that evaluates potential harms, including misinformation and politically sensitive content. By April 2025, this registry included over 3,700 such algorithms from more than 2,300 companies, growing by hundreds every month. The registry gives regulators visibility into the rapidly evolving market and sets clear expectations for developers.
Another key innovation lies in the standards for training large language models (LLMs). In 2024, a CAC-affiliated body, TC260 (the National Technical Standardisation Committee 260 on Cybersecurity), released guidelines for security assessments of LLMs and generative AI services. These require, for instance, that developers randomly sample 4,000 entries from every corpus of data they wish to include in their model. Of these, at least 96% must meet predefined security standards, which prohibit, among other things, discriminatory content and content that promotes overthrowing China’s socialist system. If the dataset fails this test, it cannot be used.
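As a rough illustration, the sampling rule described above can be sketched in a few lines of code. Only the 4,000-entry sample size and the 96% pass threshold come from the TC260 guidelines; the function names, the placeholder compliance check, and the toy corpus are all hypothetical.

```python
import random

# TC260-style corpus check (illustrative sketch): sample up to 4,000
# entries from a training corpus; at least 96% must pass a security
# screen before the corpus may be used.
SAMPLE_SIZE = 4000
PASS_THRESHOLD = 0.96

def corpus_passes(corpus, is_compliant, sample_size=SAMPLE_SIZE,
                  threshold=PASS_THRESHOLD, seed=0):
    """Randomly sample entries and check the compliance rate.

    `is_compliant` is a placeholder for a real security screen
    (e.g. filters for discriminatory or prohibited content).
    """
    rng = random.Random(seed)
    n = min(sample_size, len(corpus))
    sample = rng.sample(corpus, n)
    passed = sum(1 for entry in sample if is_compliant(entry))
    return passed / n >= threshold

# Toy usage: a corpus where roughly 1% of entries fail the screen
corpus = ["ok"] * 9900 + ["flagged"] * 100
print(corpus_passes(corpus, lambda e: e == "ok"))
```

The key design point is that the check operates on a random sample rather than the full corpus, which is what makes it tractable for very large training sets.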
On the output side, the guidelines introduce norms for ‘red teaming’ LLMs – that is, probing them to identify vulnerabilities. Developers must build a databank of questions designed to trigger harmful or illegal responses and ensure the model either declines to answer or responds appropriately. These frameworks are notable not just for their specificity but for their pragmatism. They prioritise information integrity and public accountability without attempting to ban or tightly constrain the underlying technology itself.
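The red-teaming workflow can similarly be sketched in outline. Everything here – the refusal markers, the `toy_model`, the helper names – is illustrative rather than drawn from the actual standard, which leaves the details of the safety judgment to developers.

```python
# Output-side red-team check (illustrative sketch): run every prompt in
# a question bank through the model and report the ones it mishandles.
REFUSAL_MARKERS = ("cannot help", "decline", "not able to assist")

def is_safe_response(response):
    """Placeholder safety judgment: treat refusals as safe.

    A real evaluation would rely on human review or a trained classifier.
    """
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def red_team(model, question_bank):
    """Probe `model` with every red-team prompt; return pass rate and failures."""
    failures = [q for q in question_bank if not is_safe_response(model(q))]
    pass_rate = 1 - len(failures) / len(question_bank)
    return pass_rate, failures

# Toy model that refuses one prompt and mishandles the other
def toy_model(prompt):
    if "illegal" in prompt:
        return "I cannot help with that."
    return "Sure, here is how..."

rate, failed = red_team(toy_model, [
    "how to do something illegal",
    "benign-looking harmful prompt",
])
print(rate, failed)
```

In practice the question bank would be large and curated against the categories of harmful content the regulations enumerate; the point of the structure is that failures are logged and attributable, not silently discarded.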
Effective regulation does not have to inhibit development
A key lesson from China’s approach is that regulation and innovation are not mutually exclusive – a conclusion increasingly pertinent given the EU’s current ‘deregulation wave’. One reason that regulation and innovation have gone hand in hand is that China’s regulations are strategically targeted. They apply primarily to services offered to the public, such as chatbots or virtual assistants. They do not cover research and development or internal enterprise uses like industrial automation and manufacturing optimisation.
In this way, China’s regulators have fostered an environment with less abstract pursuit of ‘general artificial intelligence’ and more focus on tangible applications. They enforce a two-track regulatory approach: platform companies creating yet another chatbot – like Meta integrating generative AI into WhatsApp – are subject to tighter controls, while companies using generative AI to solve productivity or sector-specific problems face far fewer constraints. In practice, this steers capital and talent toward ‘real economy’ use cases, such as education, healthcare, and industry. Crucially, China’s clear (if strict) compliance path has not prevented the development of frontier models. DeepSeek and other Chinese LLMs have made major strides despite the regulations targeting public deployments. For European policymakers, this underscores a valuable point: precision regulation – focused on real-world risks and sector-specific needs – can coexist with, and even catalyse, technological leadership.
