Predicting moral decisions: OpenAI pays for research

Researchers at Duke University in the USA are investigating, on behalf of OpenAI, how AI can predict moral decisions.

The OpenAI logo on the facade of the company's office building in San Francisco.

(Image: Shutterstock/ioda)


OpenAI's non-profit arm is supporting a research project at Duke University in the USA. The project, entitled "Research AI Morality", is part of a larger grant from the AI company to the university: OpenAI is funding a professorship focused on the development of moral AI with one million US dollars over three years. The grant runs until 2025.

According to TechCrunch, little is known about the research. The lead author of the current study, Walter Sinnott-Armstrong, an ethics professor at Duke University, is said to have replied when asked that he is not allowed to talk about his work. Together with the researchers Jana Schaich Borg and Vincent Conitzer, he has published a book on how AI, as a moral compass, can help us make better decisions, for example in algorithmic decisions about who should receive a kidney donation. According to a press release from the university, which also announced OpenAI's investment, the researchers are additionally investigating how the use of AI in China and the USA differs in its moral aspects.


On her official website, Schaich Borg describes herself as an expert in social cognition, empathy and moral AI. The aim of her work is to find out how AI can be made to act in accordance with our values. A quote from Ruth Chang that she features there shows she believes AI is of limited use without such alignment:

"If we can't get AI to respect human values, then the next best thing is to accept – really accept – that AI can be of limited use to us."

The research commissioned by OpenAI is also intended to investigate predictions of moral decisions in medicine, law and business.

Although current generative AI can make decisions within narrowly defined boundaries, it is also highly susceptible to abuse. Such models base their decisions on what humans have decided before them: they learn from those examples and derive probabilities from them. AI models cannot yet be built so robustly that misuse can be ruled out, and given the numerous attack scenarios, humans ultimately remain responsible for decisions. In their book, Schaich Borg, Sinnott-Armstrong and Conitzer also address the question of how to make AI safe and fair.
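To make the point about learning probabilities from prior human decisions concrete, here is a minimal sketch, not the Duke team's actual method: a simple classifier trained on invented kidney-allocation cases. All feature names and data are hypothetical assumptions for illustration; the model merely predicts what a human panel would likely decide.

```python
# Minimal illustrative sketch (not the Duke project's method): a model
# that "learns" moral decisions purely from prior human judgments.
# All features and data below are hypothetical and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical past kidney-allocation cases, each described by features
# a human panel might have weighed: [age, years_on_waitlist,
# expected_benefit_score], plus the panel's yes/no decision.
X = np.array([
    [34, 4.0, 0.9],
    [61, 1.5, 0.4],
    [45, 3.0, 0.7],
    [29, 0.5, 0.8],
    [70, 2.0, 0.3],
    [52, 5.0, 0.6],
])
y = np.array([1, 0, 1, 0, 0, 1])  # 1 = panel allocated the kidney

# The model only extracts statistical regularities from these human
# decisions; it has no moral understanding of its own.
model = LogisticRegression().fit(X, y)

# For a new case it returns a probability: a prediction of what humans
# would likely decide, not a moral judgment.
new_case = np.array([[40, 2.5, 0.75]])
print(model.predict_proba(new_case)[0, 1])
```

The sketch also shows why such a system inherits whatever biases are present in the human decisions it was trained on, which is one reason responsibility cannot simply be handed over to the model.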

Nor is it trivial to define morality precisely enough for an AI to make decisions within those boundaries: philosophers have been debating what is moral or ethically correct for centuries.

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.