I am sure you are aware of the progress made by AI in recent years. OpenAI's release of ChatGPT in late 2022 changed the world more than most people initially predicted. Students suddenly found the authenticity of their work questioned, and every semi-technological product update now included some form of AI. Even research papers started to read like clichéd AI-generated text.
So, is this all AI is? A tool for dumbing down the world? A way to make people lazy and uncreative?
Unfortunately, the current answer is yes.
The field of generative AI is still in its infancy. Businesses are trying to capitalize on the hype, which puts profit and marketing over quality. The result is an absurd amount of mediocre AI products that are not worth using. It will take time for this mess to be resolved. Just like the early days of the internet, we currently find ourselves in a phase where everything is being tried, and most of it is not working.
However, I argue that there still is hope.
A few days ago, under the noise of other unremarkable AI features, Google released MedGemma, a model trained on a huge amount of medical data. Its most interesting capability, in my opinion, is medical image classification. For example, it can detect tumors in CT scans and describe its findings to the user in natural language.
Now, this is good AI: something that helps human life flourish instead of hindering it. Helping doctors diagnose patients faster and more accurately is exactly the kind of work AI should be doing.
We should agree that technological development ought to bring more benefit than harm to society. I know, modern philosophy often leans on relativism, and you may think that "good" and "bad" are subjective terms. However, I find this line of thinking extremely unhelpful from an engineering perspective. It is an impractical and nihilistic view that can only lead to stagnation and confusion. If we cannot agree on what is good and what is bad, how can we ever make progress?
AI will continue to advance regardless, so it's important to draw a clear line between what helps and what harms.
This requires us to accept that some goods are real and not merely matters of personal preference. Health, knowledge, justice, and human dignity are not arbitrary, but rather the goods that technology, including AI, should serve. Not replace, nor redefine, but serve.
The 20th-century philosopher Jacques Maritain wrote much on this topic. He believed that progress, whether scientific or political, has no value on its own unless it serves the development of the human person. In his view, human beings are not just material objects to be optimized, but persons with intellect, conscience, and purpose. When technology ignores this, it narrows its focus to efficiency or control, reducing people to problems to be solved rather than lives to be respected. Tools, therefore, must be measured not by how advanced they are, but by what kind of humanity they promote.
It is frightening to me that many of today's scientists and engineers share the view that progress is an end in itself; we build things because we can. But this is a modern myth. Progress is about what we ought to build. If we abandon the idea that certain things are objectively better for humanity, development becomes directionless. Worse, we begin to value efficiency over wisdom, automation over humanity, and novelty over truth.
We need to be concrete. Good AI supports human flourishing. It respects autonomy and improves our ability to make informed decisions without replacing human judgment. It reduces suffering, empowers the vulnerable, and contributes to shared goods like health, education, and truth.
By contrast, bad AI encourages passivity. It manipulates, misinforms, or simply distracts, valuing efficiency over meaning.
Back in the mid-20th century, as cognitive science and artificial intelligence were just starting to take shape, a few early critics raised questions about the risk of technology stripping away what makes us human.
Today, however, most of the "critique" of AI is a watered-down, sci-fi rehash of misplaced concerns. Often driven by fear and a misunderstanding of the technology, these so-called critics warn of a world where AI becomes sentient and takes over. But that is not the real issue; it is simply marketable fearmongering.
The real fear should be that AI is being used as a distraction, a tool for manipulation, and a way to avoid responsibility. Human dignity is not threatened by AI becoming sentient, but by the way we use AI to devalue our own life and creativity, relying on an autopilot pathway to mediocrity.
Researchers and engineers have a moral duty to drive AI development forward while keeping the common good in mind. Too rarely do we step back and observe the impact of our work on broader society: we build things for the sake of building, without ever considering the consequences.
AI development must never dictate the terms of human dignity; it should always be the other way around. Taking time to understand how we want to live meaningful lives is necessary for a bright future. We have to envision the world we want to create, and then build AI that supports that vision.
If you're building AI, ask yourself: does this help people? Is my solution supporting human judgment or replacing it? Does it serve the objective good of humanity, or is it just a tool for profit?
As a user of AI, be wary of tools that encourage passivity or manipulation. Use AI in ways that enhance your life, support your development, and contribute to the common good.
Leaders of businesses and organizations: prioritize ethical AI use in your strategies. Do not chase short-term gains that come at the cost of human dignity, for they are bound to fail. Instead, invest in AI that aligns with shared goods, such as health, education, and justice.
MSc Machine Learning student writing on AI, philosophy, and technology that serves the human person.