Against the Shipwreck of Thought and the Speed of Knowing: Paul Virilio and AI Dependence for Learning

“Beautiful things take time but the wait is the best part. The anticipation, the hope, that’s what keeps everyone alive. That’s what makes us, us. Our desire to see our wishes come true. Our desire to one day touch the stars and not be afraid of the emptiness beneath us.”
– Kian Sabik, “And the Seagrass Fades” 

Speed is often celebrated as a marker of technological progress. We praise technology for its ability to make things convenient by reducing duration and eliminating almost every need for movement. For Paul Virilio, that is exactly the problem: in favoring instantaneity, we overlook the sacrifice of meaning and, worse, the possibility of unwarranted catastrophes. Virilio's philosophy thus provokes us to rethink perceptions of speed clouded by optimism, because speed, it turns out, is neither neutral nor always progressive.

If you invent the ship, you also invent the shipwreck. The invention of the plane also invents the possibility of a plane crash. If we extend this to artificial intelligence, the invention of AI as a tool for learning entails the invention of AI plagiarism, reduced cognition, and passive mental engagement, all of which stem from AI dependence. This is Paul Virilio's notion of the integral accident. An accident is the unexpected outcome of a thing, and this outcome reveals the true nature of the thing in question. A knife is not just a tool to cut vegetables but a weapon that can be used to slaughter another human being. Similarly, learning with AI is not merely a matter of automated responses; it risks becoming detrimental to the human mind and to human interaction. Hence, in The Original Accident, Virilio points out that every piece of technology carries with it the potential for its own undoing.

As established earlier, speed is one of the substances of technology. But what people fail to see is the accident integral to its existence: stagnancy and the disappearance of depth, meaning, and movement. Virilio called this polar inertia. He likened it to a swimmer's treadmill: when swimmers train against a current, they have the illusion of movement while getting nowhere at all, a pole of inertia. According to Virilio, this is the current state of modern technologies. In favoring speed, we sacrifice the movement and the depth necessary to make things meaningful. For example, the video call tends to sacrifice what would have been a meaningful human interaction because it removes the movement required for a physical presence, which cannot be replaced by a digital one. In the same way, if we depend on AI for learning and let it take over our thinking, we end up with surface-level, generalized, and oversimplified understanding, lower cognition, less attention, and passive engagement with learning materials.

AI does not have the depth and mastery gained from the learning process that is unique to human knowledge. It can say that 'love is beautiful' without truly knowing what love is or what beauty truly means. It does not know the actual meaning of the concepts you feed it; it only knows the relationships between words. With AI, we arrive too soon, and in doing so we risk never really arriving at all. In a world of acceleration, we tend to mistake velocity for understanding and become stationary minds overwhelmed by speed, caught in a polar inertia where the value of learning is lost to the illusion of instant arrival.

Hence, AI ethics, the study that ensures the safe and ethical development and deployment of AI, must be designed with a view to the accidents and the polar inertia that AI came with. In other words, it must take up what Virilio called a 'negative horizon,' a perception of the catastrophic effects of speed (polar inertia) that goes beyond the optimistic light in which we usually view it. Virilio later extended this with the 'expectation horizon,' whereby we must anticipate the accidents that follow from every technological invention. So what would a negative horizon look like? For one, we could invent novel theories to address these catastrophes. I contend, however, that we could borrow from already existing moral theories and extend them to artificial intelligence, providing frameworks that set responsible use apart from irresponsible use.

As a response to the polar inertia of AI, we can look to the ethics of attention originally conceived by Simone Weil and Iris Murdoch, who pointed out that paying attention is a moral act, since we can commit a moral failure simply by failing to turn our gaze toward those who need it. Attention has many aspects, including (a) truth-seeking, because our attention is captured by something that needs to be known; (b) waiting, because revealing the truth takes time, patience, and mental presence; and (c) unselfing, because we deny ourselves in order to direct our gaze toward the other. In the same vein, what constitutes responsible use of AI is use that turns our attention to the actual study material. If a certain use of AI makes us sacrifice the attention necessary to truly learn an idea, then it counts as irresponsible use.

As a response to AI's integral accident of dependence, we must cultivate the practice of independent pedagogy: thinking for oneself even when it is convenient to submit to the instantaneity of machines under the false perception that they know better than we do. Rancière illustrates this through Joseph Jacotot's experience of teaching foreign students: because of a language barrier, Jacotot simply gave them study material without ever being able to explain it. Rancière argued that learning can occur without explanation and without the master's presence to impart his knowledge, simply through the use of one's own mental faculties. "All men were virtually capable of understanding what others had done and understood." This is the same truth echoed by Kant in the phrase 'sapere aude,' dare to know: the courage to use one's own reason without depending on the guidance of another, and in this case, the other is AI.

The point behind all this is that learning is not a matter of ingestion but of sustained attention to what resists simplification. Like truth, learning unfolds in time and demands movement through difficulty, not mere exposure to information. To rush it is to abandon the responsibility of understanding and to flee from the weight of one's freedom by outsourcing thought to speed. When we favor the instantaneity of AI, we sacrifice the time, struggle, and attentional gravity that give knowledge its depth and quality.


Works Cited

Kant, Immanuel. An Answer to the Question: 'What Is Enlightenment?' (London: Penguin Great Ideas, 2009).

Murdoch, Iris. Existentialists and Mystics: Writings on Philosophy and Literature, ed. Peter Conradi (Harmondsworth: Penguin, 1998).

Murdoch, Iris. Metaphysics as a Guide to Morals (London: Penguin, 1992), 254, 323.

Panizza, Silvia Caprioglio. The Ethics of Attention: Engaging the Real with Iris Murdoch and Simone Weil (New York: Routledge, 2022), 1, 10, 15-16, 24-25, 28, 38, 44-45, 51, 63-82.

Rancière, Jacques. "An Intellectual Adventure," in The Ignorant Schoolmaster (Stanford, CA: Stanford University Press, 2018), 2.

Virilio, Paul. Negative Horizon: An Essay in Dromoscopy, trans. Michael Degener (London: Bloomsbury Academic), 11, 106-107, 113-115, 118-123, 146.

_______. Open Sky, trans. Julie Rose (New York: Verso, 1997), 10-11, 24-26, 39-40, 43-45.

_______. Polar Inertia (New York: Polity, 2007), 19-25.

_______. The Original Accident, trans. Julie Rose (New York: Polity, 2007), 9-12, 15, 38, 70.

Weil, Simone. Gravity and Grace, trans. Emma Crawford and Mario von der Ruhr (New York: Routledge, 2002), 13, 32-38, 48, 119.

Weil, Simone. Waiting for God, trans. Emma Craufurd (New York: Harper and Row, 1951).

About the Author

Mark Jaztine Santos

Mark Jaztine Santos is a senior Bachelor of Arts in Philosophy student at the Polytechnic University of the Philippines and a eupraxophist (secular humanist). His research interests include atheistic existentialism, secular humanism and ethics, occult philosophy, biblical and religious history, philosophy of religion, mythology, cosmology, and a bit of phenomenology and hermeneutics. He aspires to become a great lawyer and a doctor of philosophy in the near future. Beyond his thirst for knowledge and pursuit of academic excellence, he is also a barista addicted to dark roast espresso and a musician who loves to sing and play violin, piano, and mandolin.