Developers given new tools to boost cybersecurity in AI models as cybersecurity sector sees record growth
UK Government Unveils New Cybersecurity Measures to Protect AI Models

On 15 May 2024, the UK government unveiled new measures aimed at establishing a global standard for protecting AI models against hacking and sabotage. Technology Minister Saqib Bhatti announced the measures at CYBERUK, the government’s flagship cybersecurity conference.

The introduction of two new codes of practice is expected to significantly bolster cybersecurity in AI and software development, thereby boosting the UK economy’s security and growth prospects. These codes outline requirements for developers to ensure their products are resilient against tampering, hacking, and sabotage. By adhering to these standards, developers can enhance confidence in the use of AI models across various industries, leading to improved efficiencies, growth, and innovation.

The measures respond to the rising incidence of cyber breaches and attacks, with half of businesses and a third of charities reporting incidents in the past year. The new codes aim to prevent attacks such as the 2023 exploitation of the MOVEit file transfer software, which compromised sensitive data at numerous organizations worldwide.

Minister Bhatti emphasized the importance of fostering a safe environment in which the digital economy can thrive. He highlighted the role of the new measures in making AI models resilient from the design phase onwards, contributing to the nation’s economic resilience and prosperity.

The announcement coincided with the release of a report showing 13% growth in the cybersecurity sector over the past year, bringing its value to almost £12 billion. This growth underscores the UK’s commitment to strengthening its resilience against cyber threats while driving sustainable economic growth.

In addition to improving cybersecurity, the government’s initiatives will also focus on developing the cyber workforce to ensure the UK has the necessary talent to protect the nation online. Felicity Oswald, CEO of the National Cyber Security Centre (NCSC), emphasized the importance of these standards in supporting the growing cybersecurity industry and ensuring resilience against malicious attacks.

The AI cybersecurity code is intended to serve as the basis for a future global standard, reflecting the UK’s leadership in cybersecurity. Rosamund Powell, a Research Associate at The Alan Turing Institute, highlighted the significance of establishing inclusive working groups and incentives to ensure the success of global standards like this.

Alongside these measures, the government announced initiatives to professionalize the cybersecurity sector and inspire young people to pursue careers in it, including incorporating cyber roles into government recruitment policies and launching a national cyber skills competition for 18–25-year-olds.

Overall, these measures represent a significant step forward in enhancing cybersecurity for AI models and strengthening the UK’s position as a global leader in the field. Through collective effort and ongoing collaboration, the UK aims to create a safer, more resilient digital environment for businesses and individuals alike.