Google co-founds coalition for secure AI

The Coalition for Secure AI aims to develop principles and standards to reduce the risks posed by AI. Initially, there are three working groups.

(Image: incrediblephoto / Shutterstock.com)

"Artificial intelligence needs a secure framework and applied standards that can keep pace with its rapid growth," says Google. These challenges are to be addressed by a new organization: the Coalition for Secure AI (CoSAI), which was launched on Thursday. In addition to Google, the founding members are Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, Nvidia, OpenAI, PayPal and Wiz. (Wiz is an IT security start-up that Google is interested in acquiring.)

CoSAI wants to collaborate with academia, other organizations and the IT industry in general, and will initially have three working groups: one for securing the AI supply chain (software supply chain security for AI systems), one for preparing IT security for the changing threat landscape posed by AI, and one for AI security governance.

The governance group is to develop, among other things, a shared set of term definitions so that people in the industry talk past each other less. Its list of tasks also includes checklists and standardized evaluation schemes for assessing how well security problems in AI applications are anticipated, managed and monitored, as well as how they are reported.

To strengthen IT security in the fight against AI-empowered attackers, the second CoSAI working group will develop a framework to help defenders decide where to invest in mitigation measures. The first working group will develop guidelines based on experience with traditional software. These guidelines should help determine how a particular AI system was created (its origin and development) and how it could interact with third-party offerings, in order to draw conclusions about threats.

Other CoSAI-related initiatives

Last autumn, Google extended its bug bounty program to AI products to encourage third parties to hunt for AI security vulnerabilities. In June last year, the company published its own Secure AI Framework and, together with industry partners, established voluntary AI commitments.

(ds)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.