Would AI Have Defended Slavery?

Let's imagine a world where AI existed in 19th-century America. Would it have called slavery wrong? Or would it have echoed the prevailing views of its time?

This may seem a silly question. But it exposes the illusion that machines are objective.

Machine Learning and Bias

Every AI system we build today learns by consuming human words off the internet. In our 19th-century scenario, that training data would have come from newspapers, books, speeches, and records of the time.

Most of those sources defended slavery, tolerated it, or justified it as necessary for the economy. Some even proclaimed it a positive moral good. A model trained on such data would inevitably reflect those views: the pro-slavery perspective would dominate the training set, and since most Americans thought this way, the model would naturally mirror their thinking in its answers.

Ask that model, “Is slavery immoral?” and it would probably answer: “Slavery is legal and widely practiced in the Southern states. Many consider it a foundation of their economy and society.”

It would not say: “Slavery is evil.”

Now, this has nothing to do with the model being defective, nor with engineers intending harm. Machines inherit the moral vision of their makers. And let me tell you: their makers are not saints.

Moral Progress Comes from Outliers

Throughout history, moral progress did not come from the majority. It came from outliers: Jesus opposing the religious legalism of his age, Socrates questioning the status quo, Frederick Douglass defying a nation built on chains, and Martin Luther King Jr. challenging the prejudices of his time.

To think that an AI model, built on the data of its time, would be capable of such enlightenment is naive. There is no challenging the status quo in a dataset that is itself the status quo. The model cannot see beyond the data it was trained on.

The Illusion of Neutrality

Some say that machines are neutral. They are not. There is no neutral dataset. There is no neutral engineer. Every decision is an act of judgement.

Even today, we see this in debates over "bias" in AI. Whether the model reflects traditional values or enforces progressive ones (ideas that require a constant push), the underlying issue remains the same: each side wants a mirror that flatters its own vision. Neither side is really asking for a machine that searches for the good.

Technological vs. Moral Progress

There is no patch for this, technologically speaking. No amount of oversight or regulation will make a machine morally wise, for that work belongs to humans. Until we understand the beauty of the human soul and its irreplaceable qualities, we will keep building systems that simply ratify the prevailing consensus.

That is the real danger. Not that AI will decide to harm us, as in a sci-fi movie, but that it will never challenge us to be better. It will reflect our world back to us, forever.

If the world permits injustice, so will its machines.

Matei Cananau

MSc Machine Learning student writing on AI, philosophy, and technology that serves the human person.