Turing Award winners warn against current AI developments
Too fast, too untested, too profit-oriented: Turing Award winners Andrew Barto and Richard Sutton criticize AI developments.
(Image: Ground Picture/Shutterstock.com)
They have received the most important prize in computer science – the Turing Award – and they are using the attention to warn against current AI developments. This is not about a doomsday scenario or a killer AGI, but about the approach of some AI companies: they do not test their AI applications sufficiently before unleashing them on humanity, say Andrew Barto and Richard Sutton. In addition, the companies are too focused on profits.
Andrew Barto is a professor emeritus at the University of Massachusetts. Richard Sutton worked at the University of Alberta and for Google's AI division, DeepMind. Both received the prize, endowed with one million US dollars, for their achievements in basic research: they were instrumental in developing reinforcement learning, in which models learn from feedback. This technique also underlies today's reasoning models – models that are supposed to be particularly good at logical thinking – which are trained through continuous feedback loops in which the AI ultimately strives for a positive evaluation.
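To make the feedback-loop idea concrete, here is a minimal, illustrative sketch (not from the article, and far simpler than anything used to train language models): an agent repeatedly chooses among a few actions, receives a numeric reward as feedback, and shifts its value estimates toward the actions that paid off. The bandit setup, reward values, and parameter choices below are invented for illustration.

```python
import random

def run_bandit(true_rewards, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent on a simple multi-armed bandit.

    true_rewards: hidden mean reward of each action (arm).
    Returns the agent's learned value estimate per arm.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)  # learned value of each action
    counts = [0] * len(true_rewards)       # how often each action was tried

    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-known action.
        if rng.random() < epsilon:
            a = rng.randrange(len(true_rewards))
        else:
            a = max(range(len(true_rewards)), key=lambda i: estimates[i])
        # Noisy reward signal: the "feedback" the text describes.
        reward = true_rewards[a] + rng.gauss(0, 0.1)
        counts[a] += 1
        # Incremental average: nudge the estimate toward the observed reward.
        estimates[a] += (reward - estimates[a]) / counts[a]
    return estimates

learned = run_bandit([0.2, 0.8, 0.5])
best = max(range(3), key=lambda i: learned[i])  # arm the agent prefers
```

After enough steps, the agent's estimate for the best arm converges toward its true mean reward, which is the essence of "striving for a positive evaluation" described above.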
Lack of safety precautions for AI models
“Making software available to millions of people without safety precautions is not good engineering practice,” said Barto, as quoted by the Financial Times, comparing it to building a bridge and testing it by letting people use it. Engineering practice, he said, should mitigate the negative consequences of a technology as far as possible, but he does not see this happening at the companies currently bringing AI models onto the market.
Both award winners warned against the pace of AI development and the race among companies to bring ever more powerful, but also error-prone, models onto the market. The companies are also raising unprecedented amounts of funding for this. “The idea of having huge data centers and then charging a certain amount of money to use the software motivates things, and that's not the motive I would agree with,” Barto said.
AGI – strange term and hype
Sutton rejects OpenAI's claim that it has to think this big so that superhuman AI (AGI) can benefit everyone; he called this hype. According to the Financial Times, he said: “AGI is a strange term because there has always been AI and humans trying to understand intelligence.” Systems that are better than humans will eventually be achieved through a better understanding of the human brain, he said. In their research, the prizewinners used human learning as a model: people learn better when they are rewarded than when they receive no feedback or are even threatened with punishment.
Nevertheless, Barto and Sutton expect AI to have a positive impact on the world. “We have the potential to become less greedy and selfish and more aware of what's going on in others … There are many things going wrong in the world, but too much intelligence is not one of them,” said Sutton.
The two also criticized Donald Trump for wanting to cut federal spending on scientific research, which they said would have devastating consequences for the USA's scientific lead.
(emw)