Those who thought that when Artificial Intelligence (AI) came of age, its doomsday scenarios would be akin to The Terminator or Westworld are as disappointed as those who thought future technology would be more hoverboards and less interactive washing machines. However subtle its impact on our society may seem now, the emergence of publicly available generative AI will change how we live and work. With those changes come tangible challenges for individuals, companies, communities, and governments, and these groups must work together to address them if we have any hope of building this technology safely and securely into our society.

In the past six months, governments, companies and civil society have jointly begun to manage some of these emerging risks proactively. This marks a significant shift from the traditional approach of retrofitting safety and security features after problems arise. Arguably, what is new is this shift, the range of actors involved, and the speed of action, and it is a very welcome improvement on previous waves of technological change. However, many of the measures necessary to secure current technologies build on tried and tested developments in areas like 5G, the Internet of Things (IoT), digital services, and supply chains.

These emerging technologies, including AI, can change the world for good. They can improve inclusion and participation in society and the economy, increase economic growth, and keep communities connected. They have the power to address the world’s most pressing issues, from climate change and governance to food production and healthcare. Because of this enormous potential, we must ensure that the risks are managed and mitigated, that privacy is protected, and that public concerns are openly and genuinely addressed.

As new classes of technology emerge and develop, we should build security into their design before they are plugged in, switched on, downloaded, and embedded into systems and networks, and, as a result, directly into our daily lives. In the past, technology has become part of our lives without our explicit knowledge or control. That is not necessarily a bad thing, provided the technology is reliable and secure.

For example, how many people realised when uploading pictures to Facebook in the 2000s that these were being stored on cloud computing infrastructure, and that the security, rights and governance around that infrastructure were unknowable to most users? Where we have assurances of the security and reliability of the technologies being used, this needn’t cause anxiety. But where customers are unsure of the security arrangements, and where operators of tech services are themselves unclear about how data is stored and secured and how digital services are delivered, customers and governments will rightly worry about the use of innovative technologies in our daily lives. This worry will grow as technologies become more advanced and sophisticated, and less understandable to non-experts. Only when we are confident that technologies are secure by design will the broad coalition of economic and social actors be able to support their uptake unequivocally.

Market expectations and choice

Trust in technology is necessary for the economy to embrace data and digital improvements fully. In the UK, 28% of people do not trust that companies will safeguard their data from hackers. Consumers often have no control over which service providers they use or what happens to their data. Even if they did, market differentiation around security is limited.

The same goes for businesses. A third of UK companies suffered a cyber breach last year, rising to two-thirds for large companies. Only 13% of companies have a grip on the cyber risks from their supply chain. This is concerning, given the high level of interconnectedness among digital systems that power our world. 

We all rely on services whose provenance and management are outsourced beyond our immediate control. Outsourcing technology, and with it much of its security, can be effective. Most people lack the technical knowledge to secure sophisticated technology well, and it is unreasonable to expect them to do so; qualified and competent experts can manage those risks for them. But it is hard for consumers to factor security into their choices when they have little visibility of, and little choice over, their supply chain, the security practices of the companies within it, and the risks they face as a result. A better-secured supply chain, and therefore better-designed technology within it, would remove this uncertainty and reduce this risk.

If the last three years of devastating supply chain disruptions and cyber attacks on critical infrastructure have taught cyber policymakers and incident responders anything, it is that operating without assurance over the cyber risks in your supply chain is no longer tenable. And yet alternatives are scarce: as a customer of a digital service provider, you can, in many scenarios, only hope that your provider is doing the right thing.

The UK is an unashamedly pro-digital country. Government has a role in encouraging businesses to make the most of digital technologies: to improve productivity, open new possibilities, and drive growth. But in encouraging companies to adopt new technologies, government must also ensure that those companies provide a minimum level of security. At the same time, creating disproportionate security requirements where they are not needed risks undermining growth and innovation, and with it our ability to improve how we do things.

Secure by design

We have advocated for a secure-by-design approach to emerging technologies in the UK. That does not mean every piece of technology is ‘unhackable.’ But it does mean proportionate steps should be taken to harden technologies against cyber attacks.

We started with connected consumer devices. From 29 April 2024, connected devices sold on the UK consumer market must meet three of the thirteen foundational security requirements set out in ETSI EN 303 645, the standard for consumer Internet of Things devices developed by the European Telecommunications Standards Institute. We have recently released a guide outlining the security requirements for apps and AI systems, and will do the same for other technologies when necessary.
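To make the baseline concrete: the three mandated requirements are, broadly, no universal default passwords, a public route for reporting vulnerabilities, and transparency about how long a device will receive security updates. A minimal sketch of how a manufacturer or auditor might model that checklist follows; the type and field names are hypothetical illustrations for this article, not drawn from the regulation or the standard:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DeviceSecurityProfile:
    """Hypothetical record of a connected product's security posture."""
    has_universal_default_password: bool
    vulnerability_disclosure_url: Optional[str]   # where researchers can report flaws
    security_update_period: Optional[str]         # e.g. "updates until 2027-01-01"

def meets_baseline(profile: DeviceSecurityProfile) -> List[str]:
    """Return the baseline requirements the device fails; an empty list means it passes."""
    failures = []
    if profile.has_universal_default_password:
        failures.append("passwords must be unique per device or set by the user")
    if not profile.vulnerability_disclosure_url:
        failures.append("a public route for reporting vulnerabilities is required")
    if not profile.security_update_period:
        failures.append("the minimum period for security updates must be published")
    return failures
```

A device that publishes a disclosure policy, ships unique credentials, and states its update period would return an empty list; a device missing all three would fail on each count.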

These requirements are proportionate to the risk and have been developed with a multi-stakeholder group, including the companies that will implement them, security researchers, and civil society. Where possible, they are based on existing practices and framed to be acceptable to an international audience, which matters given how international the tech sector is. Where the risk is great enough to require it and industry has not adopted the guidance voluntarily, the requirements can be made mandatory in law. But laws are expensive to make and maintain, for both government and industry; they should not be a first response, but used sparingly and only when required.

An international approach

It is difficult to use regulation to achieve outcomes for digital policy in our global, interconnected digital market. Companies selling to the UK or US are not exclusively based there. While every national market may enforce regulations and policies in its own way, as it should, they should all point to the same standards and practices. Otherwise, we inhibit innovation, create a compliance burden, and fail to achieve the security outcomes we want. Such a regulatory burden leads companies to ask, “How do I demonstrate compliance?” rather than “How do I make this secure?”

With the emergence of product security regulation in the UK, US, EU, Singapore and other jurisdictions, we are seeing a recognition of a basic set of standards at the centre of these regulatory regimes. Specifically, ETSI EN 303 645 forms the basis of IoT policy in multiple jurisdictions. Government, companies, and the security community welcome that.

The UK’s November 2023 AI summit was an important starting point for developing a shared understanding of risk and the process of setting baseline cyber security standards for AI. The next step for the cyber security world is to articulate that shared understanding of what good practice looks like for AI cyber security. The same will be true of technologies yet to come.

New technologies will change the world. Market confidence, and genuine corporate effort to build security into the design of products and services, will be essential to ensuring that change happens well. This will require governments to reach across market borders to incentivise the right outcomes.