San Antonio News 360


UK businesses must face up to AI threat, says government

May 16, 2026  Twila Rosenbaum

The UK government has issued a stark warning to business leaders about the escalating threat posed by experimental frontier artificial intelligence (AI) models that are rapidly developing the ability to discover and exploit software vulnerabilities autonomously. In an open letter published on 15 April, Technology Secretary Liz Kendall emphasised that the nature of cyber threats is fundamentally shifting, and corporate responses must adapt accordingly.

“For years, the most serious cyber attacks have relied on a small number of highly skilled criminals. That is now shifting,” Kendall wrote. “AI models are becoming capable of doing work that previously required rare expertise: finding weaknesses in software, writing the code to exploit them, and doing so at a speed and scale that would have been impossible even a year ago.”

The warning follows the recent debut of Anthropic’s frontier model, Mythos, and its accompanying Project Glasswing – an initiative designed to give major technology companies a head start in addressing vulnerabilities uncovered by the AI. The UK’s AI Security Institute (AISI), operated by the Department for Science, Innovation and Technology (DSIT), has been evaluating Mythos’s capabilities and found it to be “substantially more capable at cyber offence than any model we have previously assessed.”

According to the AISI, the pace of advancement in frontier model capabilities is accelerating: they are now doubling every four months, down from eight months in the recent past. This acceleration means that the window for businesses to prepare is shrinking rapidly. “This finding is significant both for what it means today, but also because it highlights the speed at which AI capabilities are increasing and the threats they potentially pose,” Kendall noted. She also pointed to OpenAI’s expansion of its Trusted Access for Cyber programme as evidence that the trend is not isolated to one company.

Understanding the Evolving Threat Landscape

The ability of AI models to autonomously identify and exploit software vulnerabilities marks a paradigm shift in cybersecurity. Traditional attacks often require extensive human expertise to craft exploits, test them against target systems, and deploy them at scale. Frontier AI models, however, can perform these tasks in seconds, scanning thousands of lines of code for weaknesses and generating exploit scripts that can be executed immediately. This capability lowers the barrier for malicious actors, potentially enabling even relatively unskilled attackers to launch sophisticated campaigns.

Security researchers have long warned that AI could be used to supercharge cyberattacks. Early demonstrations showed AI models writing phishing emails nearly indistinguishable from legitimate correspondence, and later experiments showed they could generate malicious code snippets. The new generation of models like Mythos goes a step further: they can autonomously navigate software environments, identify zero-day vulnerabilities, and produce working exploits without human guidance. This represents a direct threat to critical infrastructure, financial systems, healthcare networks, and the vast array of digital services that modern economies rely on.

The implications are particularly concerning for small and medium-sized enterprises (SMEs), which often lack the resources to maintain dedicated cybersecurity teams. Many SMEs already struggle to implement basic security measures, and the prospect of facing AI-powered attacks that can adapt in real time is daunting. Similarly, larger enterprises may find that their existing defences – built around signature-based detection and human incident response – are no longer sufficient against a constantly evolving AI threat.

Government Response and Recommended Actions

Kendall stressed that the UK government is not standing still. The AISI, established two-and-a-half years ago, now boasts what the government describes as the most advanced capabilities in the world for understanding frontier AI models. The National Cyber Security Centre (NCSC) continues to develop practical guidance for user organisations, and upcoming legislation – including the Cyber Security and Resilience Bill and the National Cyber Action Plan – will further strengthen the national posture. However, Kendall emphasised that government action alone is insufficient: “Every business in the UK has a part to play. Criminals will not just target government systems and critical infrastructure. They will target ordinary companies, of every size, in every sector. Attackers go where defences are weakest.”

To help businesses prepare, Kendall urged board members and business leaders to prioritise cybersecurity as a core strategic issue. She recommended that they regularly discuss cyber risks at board level, rather than delegating all technical matters to IT teams. She also encouraged organisations to sign up to the Cyber Governance Code of Practice, which provides a framework for integrating cybersecurity into corporate governance. Smaller businesses can avail themselves of the NCSC’s Cyber Action Toolkit, a free resource that offers step-by-step guidance on improving resilience.

Beyond governance, Kendall called on all businesses to plan and rehearse incident response procedures regularly. Cybersecurity insurance, she noted, can provide a financial safety net but should not replace proactive measures. The Cyber Essentials certification scheme, which helps organisations implement basic security policies such as firewalls, secure configuration, and user access controls, was also highlighted as a valuable starting point. Additionally, the NCSC’s Early Warning service can help organisations detect potential threats before they escalate.

“We are entering a period in which the pace of technological change may test every institution in the country,” Kendall concluded. “The businesses that act now – that treat cyber security as an essential part of running a modern company, not an optional extra – will be the ones best placed to thrive through it and seize its advantages. We urge you to be among them.”

Historical Context and Broader Implications

The current warning builds on a decade of increasing concern about the intersection of AI and cybersecurity. In the early 2010s, AI was primarily used for defensive purposes: detecting anomalies, filtering spam, and automating incident triage. By the late 2010s, adversarial AI research demonstrated that models could be tricked or manipulated. The 2020s saw the rise of generative AI, which brought both productivity gains and new attack vectors. Today, the balance of power between defenders and attackers is being reshaped by the very technology that was once seen as a defensive panacea.

Anthropic’s Mythos represents the latest escalation. The company’s decision to launch Project Glasswing – a programme that provides early access to vulnerability data to major technology firms – reflects the growing recognition that AI developers have a responsibility to mitigate the risks their creations pose. However, Kendall’s letter suggests that the race between innovation and regulation is tightening. With capabilities doubling every four months, the gap between the emergence of a new AI capability and its weaponisation by malicious actors could become dangerously short.

The international dimension cannot be ignored. Other governments, including the United States and members of the European Union, have also issued warnings about AI-enabled cyber threats. The G7 and OECD are developing frameworks for responsible AI development, but enforcement remains uneven. The UK’s decision to invest in the AISI and to push for domestic legislation is seen as a proactive step, but business leaders must take ownership of their own cybersecurity posture. The global nature of cyberspace means that vulnerabilities in one country can quickly become threats to others, making cooperation and information sharing essential.

For UK businesses, the message is clear: the era of AI-powered cyberattacks is already here, and the pace of change will only accelerate. Those that ignore the warning risk becoming easy targets for criminals who can now weaponise frontier AI models with minimal effort. Those that act decisively – by embedding cybersecurity into boardroom discussions, adopting recognised standards, and investing in adaptive defences – will not only survive but thrive in the new environment. The government’s letter serves as both a wake-up call and a roadmap, but the responsibility for implementation lies with every leader in the country.


Source: ComputerWeekly.com News

