Anthropic: AI security model only for the USA?

Anthropic touts an AI model for detecting security vulnerabilities as too dangerous to release to the public. Europe is being denied access.

(Image: Anthropic logo on a giant display with people walking in front of it; PhotoGranary02 / Shutterstock.com)


Myth or reality? European security authorities apparently have little access to Anthropic's mysterious 'Claude Mythos'. The US company classifies the AI model for detecting security vulnerabilities as so effective that it is too dangerous to make publicly available. Whether that is actually true, European security authorities can apparently learn only from the media or, at best, from conversations with colleagues in the USA: according to the magazine Politico, Anthropic has largely excluded Europe from the limited access it grants to the model.

In the USA, Anthropic is said to have admitted twelve technology companies, including Apple, Microsoft, and Amazon, to the inner circle allowed to check their software with Mythos. A further 40 organizations are also reported to have received access, though Anthropic has not disclosed their names. US government agencies are said to be involved as well; they promptly convened a crisis meeting with the heads of systemically important banks to warn them of the dangers.

According to Politico, of eight European cybersecurity authorities surveyed, only the German Federal Office for Information Security (BSI) has had any contact with Anthropic, and even that has so far been limited to discussions; the BSI has no direct access to the model either. Some EU authorities are also said to have received partial information. According to the research, only the AI Security Institute in the United Kingdom received direct access to the model and has already published initial results.

Anthropic's favoritism towards its home market is once again fueling the debate over whether, given the risks to national economies, it should be left to private companies alone to decide whom they trust with such a critical model and whom they don't. AI researchers and politicians voiced their concern to Politico; some say the question becomes even more urgent in view of China's technological advances.


So far, EU states apparently have no legal recourse. The EU's AI Act and the Cyber Resilience Act apply only to models offered on the EU market, which Anthropic has not done, leaving EU authorities formally powerless. The EU Commission has now stated that it is actively monitoring the security implications and is in dialogue with Anthropic (Bloomberg Law). What is also missing is a global body that could review the decisions private AI companies make about highly risky models, a gap already discussed when fears arose that achieving Artificial General Intelligence could pose significant dangers.

(mki)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.