When the Past Rolls into the Future

29 September 2025 · Matthias Rang

What makes us special as human beings is our bodies: we experience reality through them. AI has no body and no overarching spiritual organism that can change its habitat, as living beings do. The following is a comparative analysis from a scientific perspective.


The trend that led humans to live increasingly in a world of information did not originate in recent decades but rather stretches back centuries. With the advent of writing, a cultural technology emerged that allowed human beings to create a world—a “corpus of knowledge.” This corpus, or body, is so large that it is impossible to take in more than a tiny fraction of this vast world of knowledge and information within a single lifetime. So the question arises: on what basis do we consider ourselves capable of making judgments at all?

Let’s assume, for example, that we could read all the texts ever written on the question of how a stone falls: the texts of Aristotle, the treatises of the Scholastics, the modern representatives of mechanical philosophy up to Galileo, Newton’s work, the many-worlds quantum theories, and finally today’s string theories. Would all this knowledge make us capable of judgment on this question? I don’t think so. Ultimately, it is our physicality, and the way it lets us test our judgments against reality (through experiments, for example), that allows us to grasp the meaning of these writings. Disembodied beings, regardless of their intelligence, cannot comprehend the question. Whenever a question concerning our world, be it scientific, cultural, or political, is finally answered, it is not by way of pure logic but through comparison with reality. The natural sciences are not based on pure knowledge but on experience that enables us to make judgments.

Emancipated Information

However, it’s becoming increasingly difficult to verify reality amid the flood of information. The globalization of the media landscape, especially through digital media, is increasingly leading news readers to admit that they cannot make judgments because it is impossible to verify a situation’s reality in space and time. Information is becoming difficult to verify; it is becoming “emancipated” from the context of our lives. As a physicist, I’m familiar with the phenomenon that almost any statement can be found in huge amounts of data and logically supported by further contexts, without it being clear whether that statement has anything to do with reality. The emergence of fact-checkers and of terms such as “fake news” and “alternative facts” rests on the implicit admission that it is becoming increasingly difficult to verify reality. AI is simply a continuation of this trend.

Large language models such as ChatGPT are among the most remarkable technical developments of recent years; the linguistic and content quality of the generated texts can be admirable. And since they are based solely on the “corpus of knowledge” and cannot establish any direct connection to reality, they provide an opportunity to study how the internal logical structure of a text can be completely independent of whether the content described has any connection with reality or is counterfactual—contradicting actual historical events, for example. In recent years, through more precise training methods, developers have successfully reduced counterfactual statements, often referred to as “hallucinations.”1

But I wonder whether the presence of these “hallucinations” should be classified less as a problem of language-model programming and more as a characteristic of a gigantic “corpus of knowledge” whose content is increasingly emancipating itself from reality. Is what currently seems a merely amusing malfunction of a language model threatening to become the new norm of a society that increasingly lives outside of physical perception? Doesn’t artificial intelligence reflect phenomena that we can already study in ourselves? From this perspective, the concern that AI systems could become increasingly similar to humans in the future seems rather unfounded—if only because humans are becoming increasingly similar to AI. Waldorf education and Goetheanism, with their focus on regular and conscious sensory and physical experience, appear more and more to be indispensable, existential requirements of our present times.

This text is an excerpt from an article published in the (online exclusive) Goetheanum Weekly. You can read the full article on the website. If you are not yet a subscriber, you can get to know the Goetheanum Weekly for 1 CHF/€.


Translation Joshua Kelberman
Title image Fiber optic connections in a server room. Photo: Albert Stoynov/Unsplash