Missing Link: Technology Assessment Meets AI Romance
AI between illusion and insight: A conversation about consciousness, emotion, and the new relationship between humans and machines.
Generative AI language models are changing our world at a rapid pace – and with it our view of consciousness, emotion, and relationships. But where is the line between genuine feeling and perfect simulation?
Computer scientist Karsten Wendland discusses this with heise online. As a technology assessment expert, he outlined two future scenarios back in 2021: one in which machines develop consciousness – and one in which humans only believe they do. Which scenario applies is not only interesting for users, scientists, and researchers, but also socially relevant. At the end of October 2025, the European Research Council took up the call from leading consciousness researchers to treat clarifying the consciousness question as an urgent scientific and ethical priority – precisely because AI and neurotechnologies are advancing faster than our understanding of how subjective experience arises and how it could be reliably demonstrated.
Normative decisions on AI must not be based on illusion or blind spots. Anthropic, the company behind Claude, has for several months employed its own "AI Welfare Researcher," who not only examines AI systems for signs of consciousness but also aims to monitor the well-being of the AI. In the USA, the United Foundation for AI Rights (UFAIR), a charitable organization, is likewise committed to AI rights. In the interview, Karsten Wendland explains why illusion and reality are becoming increasingly difficult to separate when dealing with AI, what this means for our society, and why a de-romanticization of AI seems inevitable.
In 2021, you presented two scenarios in a lecture on the relationship between humans and AI, one of which – considering the changes and the rise of human-machine relationships – has almost come true like a prophecy.
In technology assessment, we traditionally work without crystal balls and without prophecy. The method is to develop scenarios during or after research activities – scenarios that do not predict the future but describe how things might develop with a technology. And we can already react to such descriptions of possible futures today, in the sense that we say we would rather avoid a certain scenario, or that another scenario would be more desirable.
The two scenarios from 2021 outline AI futures regarding the question of the extent to which AI systems could develop "consciousness." In Scenario 1, machines actually develop real emotionality and real consciousness – and we don't even notice it because we think we are the super-technicians who are merely developing machines that perfectly simulate and imitate consciousness. In the process, we overlook that something has actually happened and emerged. Currently, nothing suggests that consciousness could arise in machines built on the digital technology we have today. But it is by no means impossible for the future – and technological progress continues.
So, in this Scenario 1, consciousness eventually arises in non-living matter; we speak of synthetic phenomenology. To fundamentally exclude this would be scientifically bold, as there are no reliable indications for such an exclusion – consciousness itself is still far too little understood for that. In this scenario, we would also face very real ethical problems that are intensely discussed in expert circles – because, ultimately, we would be bringing sentient entities into the world without noticing it.
In Scenario 2, it is precisely the opposite: consciousness never arises in the machine because it fundamentally cannot arise. Many people are so enchanted and convinced by the impressive achievements of AI and the nearly perfect imitation of consciousness that they attribute consciousness to the machine, essentially "attaching" it. They behave as if the machines were actually conscious – which they are not. In this Scenario 2, it would be expected that activists would emerge to advocate for the rights of supposedly conscious AI systems, leading to real regulations and legislative procedures for something that does not exist.
Such misjudgments are an important topic in technology assessment – and, in the history of technology, not an entirely new phenomenon. And when emotions come into play, the matter, soberly viewed, does not improve. Love for things is quite common, whether architectural objects like the Berlin Wall, vehicles, technical devices and everyday objects, or even weapons. With artificial intelligence, however, this objectophilia gets a bit trickier, because AI typically doesn't appear as a delimited, complete object. It hides behind some screen, integrated into networked devices, readily experienceable but not necessarily immediately tangible. Strictly speaking, it's an old basic pattern that is now being served up in a new guise and at a new level, extremely amplified by a speed that makes for almost instantaneous immersive experiences. Today, people can lose themselves in a technical illusion that appears real. The complexity behind it can no longer be grasped by most people.
Is ChatGPT a resonance machine?
One might think so. I see ChatGPT and similar offerings more soberly as prediction machines that provide answers through statistical optimization and algorithmic fine-tuning, aiming to be as well-received as possible. One could also simply call them answer machines. But resonance involves a bit more.
With ChatGPT, many people experience something like an intimate pen-pal relationship, in which one also reveals something about oneself. And those who do this today may one day have to learn that all these chat transcripts, which seem confidential at the moment, could become freely accessible through an unfortunate mishap. And then my neighbor will know what I was dealing with a few years ago, which could lead to entirely different resonances.
In fact, for some people, it is easier to ask a machine about certain topics than to ask a coach, a psychologist, or a friend who may not be available at the moment. The machine also doesn't say: "That's enough now, I don't feel like it anymore" or "I'm tired now." ChatGPT is permanently available and can deliver, deliver, deliver as long as you want and as long as you have electricity and an internet connection – it never stops. The chatbot is available at any time. Resonance in a deeper sense, however, thrives on unavailability, on something not being enforceable or controllable, like real encounters, love, creativity, or experiences of nature. The difference between using a powerful tool and an intersubjective relationship is more than just a small one.
For people interested in self-reflection, the chat machine can be a useful accelerator. For example, I can ask ChatGPT and other language automata in dialogues what I have forgotten or overlooked. Where are my blind spots? In this way, with machine support, I can try to identify my own gaps in thinking. The machine's answers can move me forward. Perhaps also in areas that one doesn't like to look at oneself – and which some conversation partners might also avoid.
I can also assign roles to the AI for this. For example: criticize my statements in this dialogue from the perspective of an expert holding a well-founded counter-position that I myself certainly do not share. And then I might get a dressing-down and read things I don't like at all – but that doesn't bother the machine, because it doesn't feel emotions. With the new tools, I can instrumentally bring about this ruthlessness and confront myself with it, which can be enormously helpful – and in those moments, it's not about resonance with others, but about one's own clarity.
How will the human-machine relationship evolve?
For the vast majority of people, human-machine collaborations will become the norm. Technology will continue to be integrated into our daily lives and socially adapted. We know this historically from the radio, the telephone, refrigerators, washing machines, hair dryers, automatic gates, personal computers, ticket machines, cell phones, and all-in-one kitchen appliances with mobile apps. Initially, it's all new and a bit hyped, then the wheat is separated from the chaff, mass-market and high-quality solutions prevail, technology use becomes part of everyday life, and meanwhile, the next trends are already emerging.
AI is currently disrupting the human-machine relationship in many respects, as the technology is no longer just an active mechanical player, but also a supplier of content, reasoned suggestions, and extremely fast, data-driven automation. What was awkwardly framed as "colleague computer" in the 1990s has in many places become a genuine human-machine cooperation structure. We are in a similar situation to the introduction of desktop publishing systems in the late 1980s: it wasn't the DTP systems that displaced typesetters, but the typesetters who used DTP who displaced those who didn't. The human-machine relationship here is functional; the digital machine serves as an effective working environment.
With the AI machine, many dimensions are now significantly more pronounced. Most people are now slowly learning in their daily lives, even through small failures, how to deal with these new technologies and how to use them to their advantage. Alongside all the enthusiasm, this naturally includes disappointments. That's part of it. It is helpful for the individual to try things out for themselves and see where the technology currently stands and what it is useful for them personally.
What societal impact do you expect in the medium and long term?
In the medium term, I expect a forceful de-romanticization of AI. Dealing with AI will become the norm. In just a few years, AI should be very unspectacular – because it will be part of everyday life.
Certainly, there will always be curious stories: individual cases in which someone marries their robot, and isolated people who develop a clandestine AI fetish. Viewed against the big picture of a vibrant, free, and productive society, however, this is nothing unusual in the history of technology. And there will certainly be counter-trends: people who consistently forgo smartphones or AI.
In the long term, it will be interesting and relevant when, and at what speed, the next breakthrough comes: to what extent proactive AI answer machines that have already calculated recommendations for us will be accepted, how much control we will actively hand over to them, and what that means in worst-case scenarios.
(nie)