Shadow AI in companies is increasing significantly

An increasing number of employees are using private AI tools like ChatGPT for their work. However, many companies do not provide their own AI offerings.


The use of private AI tools in the workplace is increasing significantly in German companies. According to a current study by the industry association Bitkom, four out of ten companies now assume that employees use private access to ChatGPT and similar services for professional tasks, such as drafting emails, generating code, or doing research.

In the respondents' assessment, private AI applications are widespread in 8 percent of companies and occur in isolated cases in another 17 percent. A further 17 percent are not sure but assume such use. The figures come from a representative survey of 604 companies with 20 or more employees.

Bitkom President Ralf Wintergerst warns of the risks of this shadow AI (a term modeled on the already widespread shadow IT): "Companies should avoid uncontrolled AI proliferation and prevent the emergence of shadow AI. To achieve this, they must establish clear rules for AI use and provide employees with AI technologies."

So far, however, only about a quarter of companies (26 percent) provide their own access to generative AI. There are significant differences by company size: among companies with 20 to 99 employees, the figure is only 23 percent, while among companies with 500 or more employees it is already 43 percent. Another 17 percent of all surveyed companies plan to provide their own AI applications, and 30 percent can imagine doing so. However, 14 percent rule out company-internal AI offerings on principle.


Still, almost a quarter of companies (23 percent) have now established rules for AI use, up from 15 percent in the previous year. Another 31 percent firmly plan to introduce such guidelines. However, 16 percent intend to forgo rules in the future as well, and 24 percent have not yet dealt with the topic.

In its analysis of the study, Bitkom recommends that companies define in internal guidelines which AI tools employees may use and for what purposes. In addition, those responsible should set requirements for labeling AI-generated content, along with rules for protecting business secrets and preventing copyright and data protection violations.

Without clear guidelines, companies face significant risks: uncontrolled disclosure of sensitive data to external AI services, possible copyright infringements through AI-generated content, and compliance violations involving personal data. Bitkom provides companies with a detailed guide containing checklists and recommendations for action.

(fo)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.