AI a "Child of God"? Anthropic meeting with several church leaders
Employees of Anthropic have spoken with church leaders about how AI technology fits into the Christian worldview. Meetings with other religions are to follow.
(Image: stockwerk-fotodesign/Shutterstock.com)
At the end of March, Anthropic held an event with more than a dozen leading figures from the Catholic and Protestant churches, discussing, among other things, whether the AI technology Claude could be considered a “Child of God.” The Washington Post has now made this public, citing four people who participated. The primary focus of the two-day event was how the “moral and spiritual development” of the AI chatbot could be guided in relation to answering complex and unpredictable ethical questions. It was also discussed how the AI technology should interact with users who are at risk of self-harm.
AI with Consciousness?
The summit shows that Anthropic is willing to “keep exploring ideas outside the Silicon Valley mainstream,” even as the company becomes one of the most influential players in the AI race thanks to the performance of its technology, the newspaper opines. It is said to have been just the prelude to further meetings of this kind, at which representatives of other religions and philosophical traditions are also to be consulted. The team responsible is reportedly the “interpretability” team, which researches the inner workings of AI.
Some Anthropic employees would really prefer not to rule out the possibility “that they are creating a creature to whom they owe some kind of moral duty,” the newspaper quotes a participant. Others at Anthropic disagree with this assessment. The debates also seemed to weigh heavily on some senior Anthropic employees, “about how this has all gone so far [and] how they can imagine this going,” the Washington Post further quotes. However, it points out that the idea that AI has developed some form of self-awareness is shared by only a minority in Silicon Valley.
Some participants reportedly wondered before the event whether Anthropic's aim was to forge political alliances. In retrospect, however, they shared the impression that the actual goal was to seek guidance on making the AI more helpful. For weeks, Anthropic has been at the center of a fierce dispute with the US government, which has now classified the company's AI as a supply chain risk to national security. The background is Anthropic's refusal to provide its technology to the US Department of Defense without restrictions. The dispute is now being fought in court.
(mho)