The Ethics of Punishing AI: Who’s Responsible When Algorithms Break the Law?

A dissection of the role of artificial intelligence algorithms in breaches of the law.

Natalia Kiger

Natalia is a student at Titulos de Tesorería in Bogota, Colombia


A few years ago, a self-driving test vehicle struck a pedestrian, and the story exploded across headlines. But what caught my attention most wasn't just the tragedy; it was the question that came after: Who do we blame?

Was it the engineer who wrote the code? The company that built the car? The person who sat in the front seat and didn’t grab the wheel in time? Or was it, somehow, the car itself?

It’s a question we’re being forced to ask more and more as artificial intelligence weaves itself into the fabric of everyday life. From self-driving cars and automated hiring tools to content moderation algorithms that decide what we see — and what we don’t — AI systems are making decisions that carry real-world consequences. Some of those decisions are biased. Some are harmful. Some, frankly, are illegal.

And yet… who’s held accountable?

When a person breaks the law, the justice system knows what to do. There are trials, verdicts, consequences. We weigh intent, harm, responsibility. But when an algorithm makes a mistake — or worse, replicates discrimination at scale — there’s no one to put on the stand. No one to look in the eye. It’s like blaming the wind for knocking over a sign. Or is it?

This is where things get complicated. Because while AI might not be human, it doesn’t exist in a vacuum. Every algorithm was built by someone. Trained on something. Released into the world by a company, a developer, a team.

Let’s consider automated hiring software — the kind that scans résumés and flags applicants. These tools are often pitched as fair and efficient. But studies have shown that many replicate human bias. They might prioritize candidates based on name, zip code, or education — all proxies for race or class. In one case, an AI recruiting tool used by a tech giant was found to penalize applicants who had attended all-women’s colleges. The model “learned” that men were more likely to be hired — and so it simply filtered women out.

That wasn’t malicious. But it wasn’t harmless either.

So again — who’s responsible?

The developer who trained the model? The manager who deployed it? The company that profited off it?

Or what about content moderation algorithms — the kind used by TikTok, Instagram, and YouTube to decide what shows up in your feed? These systems can flag content as dangerous, inappropriate, or irrelevant, often without any human review. They’ve been caught suppressing activism, mislabeling educational videos, or promoting harmful content like eating disorder forums and conspiracy theories. And the harm isn’t abstract. It can shape what young people believe about themselves and the world.

The creators of these systems might say the algorithm “just followed the data.” But what if the data is skewed? Or incomplete? Or steeped in historical prejudice?

At what point does a tool stop being neutral — and start becoming complicit?

Some people argue that we should think of AI the way we think of corporations: as entities that can be held liable. But AI can’t pay fines. It can’t apologize. It doesn’t have a conscience or a memory. You can’t teach it a lesson the way you can a person.

So maybe the question isn't about punishing AI. Maybe it's about holding the people behind it accountable. But how do we trace the thread of responsibility through layers of abstraction? From coder to company? From the training dataset to the final output?

And what about open-source models — AI tools released into the wild for anyone to use and modify? If someone uses a large language model to create a scam, or generate misinformation, does that fall on the person who deployed it… or the people who made the model in the first place?

We don’t have neat answers. And that’s what makes this conversation feel so unsettling.

Because we want accountability. We need it. But we also want progress — and AI promises a lot of that. Efficiency, scale, even fairness (at least in theory). So when systems fail, we face a moral tension: if we come down too hard, do we stifle innovation? But if we let it slide, who pays the price?

Often, it’s the most vulnerable.

It’s the job applicant who never hears back. The protester whose post is shadowbanned. The pedestrian who crosses at the wrong time.

These are not just glitches. These are lives.

One of the biggest takeaways is that ethics can’t be an afterthought in AI development. It can’t be something companies add on in a press release, or clean up in Version 2. It needs to be baked in — from the first line of code to the final rollout.

We need laws that reflect that urgency. Right now, regulation is patchy at best. Some jurisdictions are trying; the EU's AI Act is one attempt to impose rules on high-risk systems. But most legal systems are still playing catch-up. And in the meantime, companies set their own rules.

I keep thinking about the idea of foresight. When humans act, we’re judged not just by what we did, but by what we could have known. Did we ignore red flags? Did we fail to anticipate harm? Maybe the same should be true for AI developers. Maybe responsibility lies not just in what the algorithm did, but in what was predictable — and preventable.

Because the truth is, AI doesn’t break the law in isolation. It reflects the systems, values, and blind spots of the humans who built it.

So no, we can’t put an algorithm on trial. But we can hold companies and creators accountable. We can ask better questions before deployment. We can design for transparency, auditability, and ethics — not just accuracy and efficiency.

As we hurtle forward into an AI-powered world, we’re going to face more of these questions. Cars that drive themselves. Judges that consult risk algorithms. Chatbots that advise patients. And each time something goes wrong, we’ll be forced to ask: who do we blame?

My hope is that by then, we won’t be scrambling for an answer. We’ll already have built systems — legal, ethical, and technical — that understand something basic:

That just because a machine made the decision doesn’t mean no one made it.

And just because something was automated doesn’t mean it wasn’t authorized.

Ultimately, AI is only as ethical as the people behind it.

And that means the responsibility — and the power — is still very much in our hands.
