Murphy’s AI Laws: Because If Intelligence Can Fail, It Definitely Will

 

Credit: MrWashingt0n on Pixabay

If you’ve been in tech long enough, you eventually realize that the universe is held together by two things: duct tape and Murphy’s Law. 

Years ago, I wrote the “Murphy’s BI Laws,” which sadly still hold true (and, in my view, only grow truer with each release cycle).

Now that we’ve entered the AI-everywhere era, where everything from your fridge to your accounting system thinks it’s a master philosopher, it’s time to face the truth: 

Murphy followed us into AI, and he’s having the time of his life.

So, here are the brand-new, fully field-tested Murphy’s AI Laws, written with love, pain, and several hours lost talking to a chatbot that insisted 2 + 2 “felt like 5.”


1. If an AI system can hallucinate, it will, and at the worst possible moment.

Corollary: The confidence level of the hallucination will rise in direct proportion to the importance of the meeting where it is presented.

 

2. Any AI model you deploy is obsolete the moment you push it to production.

Corollary 1: A new state-of-the-art model will appear at that exact minute.

Corollary 2: It will need triple your current GPU budget.

Corollary 3: Your CIO will ask why you didn’t predict this with AI.

 

3. The probability of your agentic AI failing increases exponentially with the executive level of the person using it.

If the CEO tests it? Prepare the incident-bridge call.


4. The biggest AI failure will never show up during testing.

It will only appear:

  • During your go-live,
  • Before your coffee,
  • Or five minutes after you told leadership, “We’re confident in the model.”

5. If an AI bug exists, it will only be discovered by an end user, preferably one who’s been skeptical about AI since Windows 95.


6. The likelihood of AI misunderstanding a simple instruction approaches 100% when the instruction is given in natural language.

Corollary: The more carefully you phrase the prompt, the more chaotic the output.


7. No matter how many GPUs you have, you’ll never have enough.

Corollary: Your CFO will start Googling “cost-cutting strategies” within weeks.


8. The probability of AI delay rises in direct proportion to the number of teams claiming to “own the model.”

Corollary: AI governance meetings multiply faster than LLM tokens.


9. If users have any opportunity to mislabel training data, they will.

Corollary: And they’ll insist the AI is the one that’s wrong.


10. Your main subject matter expert for model training will always be the person with the least time available.

Bonus: Their calendar will be mysteriously blocked for the next six months.


11. The probability that a key ML engineer leaves during your AI project rises in direct proportion to how much undocumented code they wrote.


12. If there’s a way for AI to misinterpret context, it absolutely will.

Ask it for a summary, and it writes poetry. Ask it for poetry, and it outputs Kafka.


13. The more “explainable” the AI claims to be, the less anyone will understand the explanation.

Corollary: The explanation will still somehow blame your data.


14. The chance that your model drifts increases with the importance of the metrics it supports.

You wanted accuracy? Enjoy the drift.


15. The more people promise “AI will solve this,” the more likely it is that AI is the reason you’re in the meeting.


Final Thoughts: Why We Laugh Instead of Crying

Behind the humor is something real: AI projects, like BI before them, live at the intersection of messy data, messy humans, and messy expectations.

And no matter how advanced our models get, one universal truth remains: complex systems fail in complex ways.

But as with all great tech revolutions, we learn, iterate, fix, enhance… and occasionally shout at a chatbot that refuses to understand what “No, not that!” means.

Murphy may still be with us, but at least now, in the AI era, we can ask a model to generate a meme about it.


Dearly,

Jorge Garcia

