AI ethics debates fairness and transparency but says nothing of the hollow lives we already lead. Its definition is so broad that modern research cannot establish a clear focus. The result is a fixation on the technological aspects of AI while the human element is ignored.
If not even ethics is human-centric, then what is? Ethics has become a mere tool for technologists to justify their work, rather than a framework for benefiting humanity. This dangerous trend ought to be addressed.
Ethics is the branch of philosophy that studies what is right and wrong, good and bad. Its purpose is to provide guidelines for decision-making and behavior, so that our actions align with moral principles.
As you might understand, moral principles must be defined in order for ethics to even be relevant. The aspiration towards the moral good has been discussed for millennia, and the most prominent thinkers have all had their own thoughts on the matter. However, modern philosophy has abandoned the moral good in favor of relativism, which makes it impossible to do actual ethics.
To work around this, modern ethics has organized itself into different schools of thought, such as consequentialism, deontology, and virtue ethics. Each school has its own approach to determining what is right and wrong, but they all suffer from the same reductive flaw.
Consequentialism focuses on the outcomes of actions. Its problem is that it reduces morality to results alone and ignores the inherent dignity of the act or the person. For example, if the outcome is positive, then the action is considered good, regardless of the means used to achieve it. This can lead to justifying unethical actions if they result in a perceived greater good (whatever that subjective "good" may be).
Deontology builds itself on rules and duties. But rules alone cannot sustain a moral vision. They harden into cold obligations, followed without understanding or compassion. The law becomes an idol, and the human is lost in the process.
Virtue ethics correctly speaks of shaping character and personal development. But modern thought has gutted it by removing its spiritual core. Virtue is now reduced to a set of traits, stripped of its deeper meaning and purpose, telling us the vague mantra to "be good, do good". Again, whatever that means.
Without a moral good, ethics collapses. And when ethics collapses, so does any hope for a technological society that serves life rather than consumes it.
Relativism is the idea that there are no absolute truths, and that moral principles are subjective and dependent on individual or cultural perspectives. This leads to a situation where anything can be justified, as long as it is framed within a certain context.
Relativism eats itself alive. To say there is no truth is to claim a truth. Our entire modern philosophy is built on this hollow paradox.
Previously, the moral good was defined by natural law: the idea that certain universal moral principles apply to all human beings, regardless of culture or individual perspective.
This is the reason we have stories, fables, myths, poems, and other forms of art that convey moral lessons. These should not be seen as childish or outdated, but rather as a collection of wisdom that has been passed down through generations.
In the field of AI, ethics is often focused on the technical aspects, such as fairness, transparency, and accountability. While these are important considerations, they fail to address the overwhelming question of what the desired outcome of AI should be.
Of course, this stems from the fact discussed above: ethics is useless without a moral good. We can only say that AI should not steal - but what is stealing? That it should not harm - but what is harm? That it should not discriminate - but what is discrimination? These are all subjective terms that can be interpreted in any way you want.
Do you see the issue? There is nowhere to go from here, as our thought is stuck in a loop of relativism. We pour money and resources into AI, but we are not solving any of the fundamental issues that plague our society. Instead, we are creating more complex systems that are still based on the same flawed ethical framework.
AI ethics gives credence to the idea of a sentient AI that must be stopped before it can harm humanity. But this is a distraction from the real issue at hand. Technology has already broken human connection, happiness, purpose, structure, and meaning. These are far greater problems than a hypothetical AI that might one day become sentient, a fear we inflict on ourselves by watching too many sci-fi movies.
Ethical AI is non-existent in a culture that has lost its sense of good and evil. Only a return to goodness can save us from the self-destructive path we are on. We must find our way back to what good human life looks like. One thing is certain: a good life is not one wasted on soulless work, shallow distractions, and being enslaved by technology.
MSc Machine Learning student writing on AI, philosophy, and technology that serves the human person.