UK financial regulators are holding urgent talks with the government’s cyber security agency and major banks to assess risks posed by the latest artificial intelligence model from Anthropic, the Financial Times reported on Sunday.
Officials at the Bank of England, the Financial Conduct Authority (FCA), and HM Treasury are in discussions with the National Cyber Security Centre (NCSC) to examine potential vulnerabilities in critical IT systems highlighted by Anthropic’s latest AI model.
Representatives from major British banks, insurers, and exchanges are expected to be briefed on the cyber security risks posed by the model, Claude Mythos Preview, at a meeting with regulators within the next fortnight.
Reuters could not immediately verify the report. Anthropic did not respond to a request for comment, while the Bank of England declined to comment. The Treasury, NCSC, and FCA were not immediately available for comment.
The development follows a similar meeting convened by U.S. Treasury Secretary Scott Bessent with major Wall Street banks last week to discuss the model’s potential cyber risks.
Anthropic has said the model is being deployed as part of “Project Glasswing”, a controlled initiative under which select organizations are permitted to use the unreleased Claude Mythos Preview for defensive cyber security purposes.
In a blog post earlier this month, the AI startup said the model had already identified “thousands” of major vulnerabilities across operating systems, web browsers, and other widely used software.
Reuters