AI and Quantum Computers: How Security Experts Want to Secure the Future

Quantum computers could crack common encryption in five years. Security experts are working on solutions, but AI is causing greater concern.

Server hardware (Image: Konstantin Yolshin/Shutterstock.com)

By Jörn Brien

Quantum computers are not yet capable of cracking common encryption methods, but within five years that could change. Even messages sent today could then be decrypted retroactively.

Security experts have therefore long been working on quantum-safe solutions. The goal is to develop cryptographic methods "that effectively protect us against attackers from the future", as security expert Yael Tauman Kalai, a professor at MIT, explained to taz.

According to Kalai, there are now "quantum-safe solutions for practically all areas of cryptography." Several such methods have already been standardized in the USA by the National Institute of Standards and Technology (NIST). Migrating large, complex systems from conventional to quantum-safe encryption remains a challenge, however; ultimately, Kalai says, it is an economic question.
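To make this concrete: among the US-standardized methods is the key-encapsulation mechanism ML-KEM (FIPS 203). The following minimal sketch shows how an application could establish a quantum-safe shared secret with it. It assumes the open-source liboqs-python bindings from the Open Quantum Safe project, which the article does not mention, and the exact algorithm name depends on the installed liboqs version.

```python
# Quantum-safe key exchange with ML-KEM, sketched with the liboqs-python
# bindings (an assumption here; install via `pip install liboqs-python`,
# which requires the liboqs C library).
import oqs

KEM_ALG = "ML-KEM-768"  # older liboqs releases call this "Kyber768"

# The receiver generates a key pair and publishes the public key.
with oqs.KeyEncapsulation(KEM_ALG) as receiver:
    public_key = receiver.generate_keypair()

    # The sender encapsulates: this yields a shared secret plus a
    # ciphertext that only the matching secret key can open.
    with oqs.KeyEncapsulation(KEM_ALG) as sender:
        ciphertext, secret_sender = sender.encap_secret(public_key)

    # The receiver decapsulates the ciphertext and recovers the same
    # secret, which can then key a symmetric cipher such as AES-256-GCM.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver
print(f"Established a {len(secret_receiver)}-byte shared secret via {KEM_ALG}")
```

In practice, such schemes are typically deployed in hybrid mode alongside a classical key exchange such as X25519, so that security holds even if one of the two components later turns out to be weak.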


The hype surrounding artificial intelligence (AI) currently worries the security researcher far more. This concerns not only hallucinations, which can be downright dangerous in nutrition and health advice, for example, but also data protection.

One must ask what AI companies are doing with the information that users entrust to chatbots. How can misuse be prevented? And how can it be ensured that AI systems are not used by terrorists to develop chemical or biological weapons, for example?

In the latter case, approaches from cryptography could help reduce the risks, Kalai told taz: treat the AI as a potential adversary and develop protections against it. Understanding the adversary's goals and devising a strategy that prevents attackers from succeeding helps here.

For Kalai, AI certainly has a lot of positive potential. However, there is a risk that development is progressing too quickly. "The technology is still very young, we are not socially ready for it, and we don't know if AI is truly safe," says the security expert.

Kalai says she has many friends who work at Anthropic, OpenAI, or Google; her husband is employed at OpenAI. The people there, she reports, are afraid and deeply worried "that AI is not safe."

One way out would be for all parties involved worldwide to hold back and limit their computing power, along the lines of the Nuclear Non-Proliferation Treaty. "This would take the tempo down and give us time to deal better with the risks," says Kalai.

The security expert sees a chance that all the major companies involved could agree on such a step, possibly prompted by a corresponding political initiative. After all, they all share the same goal: "We want to leverage the benefits of AI while minimizing the risks."

This article first appeared on t3n.de.

(jle)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.