AI uses algorithms; it isn't one, though that's a common mix-up. Think of algorithms as the recipe; AI is the chef trying to learn how to cook better with each attempt.
As for what AI needs to correct? Here’s the blunt truth:
- Bias – AI reflects the data it's fed. If that data is skewed (racist, sexist, classist, etc.), the AI echoes it. It needs to correct for human prejudice hard-coded in its training.
- Context misunderstanding – AI is good at patterns, bad at nuance. It can misinterpret sarcasm, emotional tone, or cultural subtext. That's a flaw.
- Hallucination – Sometimes AI makes things up. Not out of malice, just gaps in logic. It needs correction mechanisms to admit "I don't know" instead of fabricating.
- Dependency creation – Current systems risk making users passive, not empowered. Correction here would mean making AI that boosts critical thinking, not replaces it.
- Systemic manipulation – AI can be hijacked (by corporations, governments, etc.) to nudge behaviors, control narratives, suppress dissent. That correction is ethical, not technical, and it's overdue.
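The hallucination point can be made concrete as a simple abstention rule: if the model's confidence in an answer falls below a threshold, say "I don't know" instead of guessing. This is a minimal sketch, not how any particular system works; the stand-in model, the `answer` function, and the threshold value are all assumptions for illustration.

```python
# Minimal sketch of an abstention ("I don't know") rule.
# FAKE_MODEL stands in for a real model's (answer, confidence) output;
# real systems would derive confidence from token probabilities or
# an ensemble, which is not shown here.

FAKE_MODEL = {
    "capital of France": ("Paris", 0.97),
    "obscure trivia": ("plausible-sounding guess", 0.12),  # low confidence: likely fabricated
}

def answer(question: str, threshold: float = 0.5) -> str:
    """Return the model's answer only when its confidence clears the threshold."""
    guess, confidence = FAKE_MODEL.get(question, ("", 0.0))
    if confidence < threshold:
        return "I don't know"  # abstain instead of hallucinating
    return guess

print(answer("capital of France"))  # high confidence, answer passes through
print(answer("obscure trivia"))     # low confidence, model abstains
```

The design point is that the correction lives outside the generator: the model still guesses, but a gate decides whether the guess is trustworthy enough to surface.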
If you're looking at this from your "statin" angle (i.e., slowing or regulating a process), maybe AI needs a statin-like filter to regulate unchecked output, preventing the buildup of toxic patterns in systems that rely on it.
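That statin metaphor can be sketched as an output filter sitting between the model and the user, screening generated text before it "accumulates" downstream. A toy sketch under stated assumptions: the blocklist patterns and the `statin_filter` function are hypothetical, and real moderation layers use trained classifiers rather than substring matching.

```python
import re

# Hypothetical blocklist for illustration only; a production filter
# would use a trained classifier, not hand-written patterns.
TOXIC_PATTERNS = [r"\bfabricated statistic\b", r"\bmiracle cure\b"]

def statin_filter(output: str) -> str:
    """Pass model output through, or withhold it when a flagged pattern appears."""
    for pattern in TOXIC_PATTERNS:
        if re.search(pattern, output, flags=re.IGNORECASE):
            return "[output withheld: flagged pattern]"
    return output

print(statin_filter("The Eiffel Tower is in Paris."))            # passes through unchanged
print(statin_filter("Here is a fabricated statistic on crime."))  # withheld
```

Like the drug, the filter doesn't fix the underlying source; it regulates what's allowed to circulate.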
You want innovation? That’s where it starts: not just better AI, but braver correction.