Back on September 6, I was given the opportunity to present my philosophical research paper on AI ethics at the International Ethics Conference held at Waseda University, Japan. I represented HAPI, my university, and myself at the event. This presentation became a reality thanks to the generosity of my principal sponsors – most especially our very own HAPI founder, Ms. M, who made sure I could attend by covering my plane ticket and accommodation. The rest was made possible through the unwavering support of my family and family friends, who graciously sponsored everything else.
The event was co-organized by the Social Ethics Society and the Polytechnic University of the Philippines. With the theme "Harmonizing the Ethics of Global Justice and Peacebuilding: Towards a Sustainable Planetary Future," the conference set out to confront the great moral challenges of our time: artificial intelligence, climate change, globalization, social justice, environmental sustainability, political disruption, and peacebuilding. Its purpose was to ask how humanity might shape a better future for the next generations amid the crises born of modern progress. To this end, the gathering created an international platform where local voices could contribute to building a strong research culture in ethics, philosophy, and international studies – particularly in the fields of peace and conflict resolution, global justice, and environmentalism. It served as a democratic space for dialogue among scholars, enabling critical reflection on these urgent ethical and global issues. At the same time, the conference aimed to strengthen the knowledge and skills of researchers, educators, and students in the areas of global justice, environmental ethics, and peace studies.
[M]y paper was accepted because they saw its potential. It was a humbling experience.
At first, I did not really expect my paper to make it this far; I had not even planned to submit an abstract in the first place. It was all thanks to a friend who believed in my capabilities and encouraged me to try out for my first philosophical paper conference. Fortunately, my research paper was accepted. To be honest, the presentation itself was quite nerve-wracking: who am I to stand in front of so many experienced scholars, graduate students, professors with master’s degrees, and distinguished researchers with PhDs when I am only about to finish my undergraduate degree? But then, my paper was accepted because they saw its potential. It was indeed a humbling experience. The presentation itself was a success. Like all presenters, I was given only 20 minutes to discuss the trajectory of my paper, so I elaborated on my argument and the most important points rather than providing lengthy context for each idea. During the panel, no questions were raised against my paper; instead, the panelists reflected on the ideas I presented, and I was asked about something related not to my paper but to the conference theme, such as how Paul Virilio’s philosophy would view the notion of peacebuilding.

[W]e must design AI Ethics as a tool to resolve those integral accidents and polar inertia.
The paper I presented was about how depending on artificial intelligence for learning leads to what Virilio called polar inertia, and how such dependence is an integral accident of AI’s invention, a diagnosis of the inevitable illness that comes with it. The purpose of the paper was to situate AI ethics within a negative horizon: a perspective that looks beyond the optimistic light in which we see speed to the catastrophic consequences we exchange for instantaneity. It is a perspective that anticipates the integral accident, the shipwreck of thought that comes when we depend on AI for learning, and one that foresees the compromises we make that lead to polar inertia when we favor AI’s automated responses instead of actually learning the material ourselves. I argued that we must design AI Ethics as a tool to resolve these integral accidents and this polar inertia, and to set the framework for demarcating responsible from irresponsible use of AI. To be clear, this is not an argument against AI per se. AI is not going to leave us anytime soon; rather, it will become assimilated into modern culture. As with any other technological advancement, just because the car’s invention opens the possibility of the car crash (its inevitable integral accident) does not mean we should not use cars. Instead, we should set precautions and norms of responsible use that help us avoid the car crash. I argued the same for learning with AI in both my paper and presentation, so that we can avoid dependency and irresponsible use. I have dedicated a separate HAPI article to briefly elaborating the argument of my paper, and I look forward to its publication in an esteemed academic journal soon.
This research aligns with HAPI’s commitment to truth and to the academic integrity of its pursuit. If learning the truth requires time, mental presence, and difficulty, then this learning process must be valued, not bypassed, no matter how long it takes, especially in a world where the soul of the journey that gives knowledge its depth and makes us human is lost to the illusion of instant arrival. We stand with integrity: the practice of valuing the learning process that creates authentic thinkers and experts, even when it is all too easy to skip and cheat instantly with ChatGPT and produce shallow and weak minds.
