Missing Link: "The grand plan"
Catastrophic risk: the obsession with the end of the world
The preoccupation with existential risks, known as "x-risks" in the jargon of the scene, has developed into a field of research in its own right since the early 2000s. Institutions such as the Future of Humanity Institute in Oxford are dedicated exclusively to the question of which events could wipe out humanity.
The prioritization is remarkable: while climate change is classified as "merely" a threat to civilization, hypothetical scenarios such as an out-of-control AI are considered existentially threatening. This weighting follows from the logic of longtermism: an event that wipes out humanity prevents the existence of all potential future humans.
This focus has concrete consequences: billionaires like Dustin Moskovitz or Jaan Tallinn invest millions in research into AI risks, while more immediate threats to people alive today are considered less urgent.
Effective altruism and earning to give
Some transhumanists, such as Nick Bostrom, developed the philosophy of "Effective Altruism" (EA). The name derives from its core idea: to act in a way that maximizes the benefit for all of humanity, albeit under the premises of a neoliberal economy. The basic assumption is that the "scarce commodity" of aid must be deployed as "profitably" as possible.
Among other things, this leads to the principle of "earning to give": because everyone has only limited time and energy, it is ethically imperative to make as much money as possible as quickly as possible in order to donate part of it. Traditional ethical considerations, such as how that money is earned, are overridden by this principle.
A prominent example is Sam Bankman-Fried, the founder of the collapsed cryptocurrency exchange FTX. He explicitly followed the "earning to give" principle and became a billionaire before his company collapsed amid fraud allegations; he was later convicted of fraud.
However, it should be noted that the EA movement is more diverse than often portrayed. While some advocates prioritize existential risks, many EA initiatives focus on current problems such as global health or poverty alleviation.
Longtermism: securing the existence of humanity
While the EA movement initially focused on "evidence-based" aid projects, an ideological branch called "longtermism" has become increasingly important. The idea: because far more people will live in the future than have ever lived in the past, maximizing human happiness means first and foremost securing humanity's continued existence.
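The arithmetic behind this is simple expected-value reasoning; in a deliberately stylized form (the figures here are illustrative, not taken from the movement's texts): if the future could hold, say, 10^16 human lives, then reducing the probability of extinction by a mere 10^-8, one millionth of a percentage point, "saves" 10^16 × 10^-8 = 10^8 lives in expectation, more than any present-day aid program could plausibly claim. By this logic, even the most speculative work on extinction risks outweighs concrete help for the living.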
However, longtermism should not be confused with long-term thinking. Anyone who expects that a decisive fight against climate change follows from this thinking about existential risks is mistaken: since climate change is unlikely to lead to the extinction of humanity, it is not considered an existential threat in EA circles. A nuclear war, a man-made pandemic, the eruption of a supervolcano, cascading system failures and, of course, an out-of-control superintelligence, on the other hand, certainly count as existential risks. These must therefore be avoided at all costs, if humanity can manage it.

According to AI pioneer Geoffrey Hinton, a sufficiently intelligent AI can and will manipulate humans in such a way that it gains greater autonomy. This idea goes back to the so-called AI box experiment, which has been discussed in x-risk circles since the early 2000s.
Unsurprisingly, EA has developed since the early 2000s, mainly in Silicon Valley, into a movement with a lot of money and influence, because it has attracted tech bros such as Peter Thiel, Elon Musk and Sam Bankman-Fried. Thiel has publicly stated that he no longer believes freedom and democracy are compatible, a personal political conviction that does not necessarily represent the stance of everyone associated with TESCREAL.
Proponents of these ideologies argue that they are rationally trying to address humanity's greatest challenges. Critics such as Gebru and Torres, on the other hand, point out that these seemingly rational approaches often reproduce existing power relations and neglect pressing problems such as climate change or social injustice.
Relevance of ideology today
At first glance, it seems as if all these ideological elements no longer play a role. Instead, tech elites now push the neoliberal narrative of the incompetent and inefficient state, which should be dismantled and replaced by a dynamic structure modeled on a high-tech company.
The practical effects of this ideology are far-reaching: it influences investment decisions worth billions and shapes the direction in which future technologies develop. In practice, it often amounts to a privatization of questions about society's future, with democratic processes replaced by the visions of individual tech billionaires. This assessment, however, reflects a critical perspective that is not universally shared.
Musk's ventures reflect this ideology in many ways: his space company SpaceX pursues the transhumanist vision of a multiplanetary humanity with its Mars colonization project, Neuralink aims to connect brains directly to computers, and his AI activities follow the logic of longtermism.
"What is Musk really up to?" asks Torres. "I think the obvious answers are true but incomplete. Obvious answers are, for example, you don't become a billionaire unless you have this kind of megalomaniacal self-perception where you feel superior in every way. And these people are extremely greedy. If they make friends with the Trump administration, it's good for business, for further entrenching their power."
"But I think for people like Musk, and I would probably say the same for Jeff Bezos, there's a higher purpose. Musk has given so many hints that power is not the end goal. Even in this speech that he gave, the very short speech after Trump's inauguration, where he gave the Hitler salute. In that little speech, he says, "Thanks to you, the future of civilization is secure." And so I think there's a very strong argument that behind all of this is his deeper goal of realizing his transhumanist projects."
(vza)