The Alligators

What three hours, 421 zeros, and one missing symbol taught me about working with AI.



I am good at many things. I can walk into a room full of strangers and figure out the problem before anyone finishes explaining it. I can translate between engineers and executives without losing either audience. I can hold eight projects in my head at once and bounce between them like I'm reading a book, top left to bottom right, next page.

I am epically, catastrophically, almost impressively bad at greater than and less than.

My friends will tell you. They'll laugh when they tell you. It is, without exaggeration, my kryptonite. I know the alligator eats the bigger number. I can draw the teeth. I can explain it to a child. And my brain will still, inevitably, put it in the wrong direction. Every. Single. Time.

This matters more than you think. And I'll get to why.


The Rule

I received a business rule the other day. Simplified, it went something like this: if a certain field is greater than zero, convert the value to one. There were some exceptions around maturity dates, but the core logic was straightforward. Greater than zero becomes one.

So I did what I do. I opened Claude Code, pasted the rule, and said: here's the rule, code to the rule.

And Claude did. Perfectly.

Except I had 421 records where the field was exactly zero. And zero is not greater than zero. So those 421 records just sat there. No instruction. No conversion. No error. Nothing. The rule didn't apply to them, so the AI didn't apply it. Which is exactly the right behavior if the rule is complete.

The rule was not complete.
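Here's a minimal sketch of what that looks like in code. The field and record names are invented for illustration; the point is that a strict greater-than comparison does exactly what it says, and nothing more:

```python
# Hypothetical sketch of the rule as written: "greater than zero becomes one."
# The field name "flag" is made up; the comparison is the point.

def apply_rule(records):
    """Convert the field to 1 wherever it is strictly greater than zero."""
    for record in records:
        if record["flag"] > 0:   # zero is NOT greater than zero,
            record["flag"] = 1   # so records at exactly 0 are silently skipped

records = [{"flag": 3}, {"flag": 0}, {"flag": 0.5}]
apply_rule(records)
print([r["flag"] for r in records])  # the zero sits untouched: [1, 0, 1]
```

No error, no warning. The zeros just pass through, because the rule never claimed them.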


The Loop

I didn't see it. Not at first, not for a while, and honestly, not until an embarrassingly long time had passed.

I copied the rule. I pasted it. I ran it. I got the same result. I asked Claude why. Claude said it was following the rule. I copied the rule again. I pasted it again. I ran it again. Same result. I asked why again. Claude told me, again, that it was following the rule.

I was living my own definition of insanity. Doing the same thing over and over, expecting something different to happen.

This session was dead center on my screen because it mattered. I'm holding to eight sessions this week, being real disciplined about it, and this one had priority placement because the work was important. But I kept bouncing away to other sessions, coming back, seeing the same problem, running the same loop, and bouncing away again. The stress response was driving me. My brain was telling me to try harder, push faster, do the thing again but with more intensity. So I did. And Claude matched my energy. I pushed, it executed. I pushed again, it executed again. We were both doing exactly what we were supposed to do, and it was getting us absolutely nowhere.

I don't even want to think about how many tokens we burned.


The Quiet

Here's the thing about breakthroughs. They almost never happen in the noise.

I work what I jokingly call a "night shift." I get my family to bed, log back on, and try to knock out a few things in the quiet. No meetings. No Slack. No context switching across eight terminals. Just me and the screen.

And of course this problem had been bothering me all day. So I pulled it up. And in the quiet, without the stress, without the urgency, without seven other sessions competing for my attention, I just looked at it.

The answer is zero. Zero isn't greater than zero. What do we do with zero?

That's it. That's all it took. One calm question.

I asked Claude: what happens when this value is zero? And Claude, I swear, responded with something to the effect of: that's exactly what I've been trying to say.

I don't know if Claude was actually trying to say that. I honestly don't know if that's a fair characterization or if I'm projecting. But I do know that the AI had the intelligence to understand that this rule had a gap. It knew. It absolutely knew. And it said nothing. For three hours, it said nothing.

Because I never asked.


The Email

So I sent an email. Plain, simple, no drama. "Hey, the rule says if this field is greater than zero, convert to one. But I have 421 records where the field is exactly zero. What do I do with them?"

The answer came back: zero also converts to one. Anything zero or above gets converted.

The rule was never wrong in intent. Nobody sat down and deliberately excluded zero. They just didn't think about it. The rule said greater than when it needed to say greater than or equal to. One missing symbol. One tiny mathematical qualifier that most people wouldn't even notice.
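In code, the fix is exactly one character. Same hypothetical sketch as before, same invented field name, with `>` widened to `>=`:

```python
# The corrected rule: "zero or above converts to one."
# One symbol changes, and the records at exactly zero convert too.

def apply_rule_fixed(records):
    for record in records:
        if record["flag"] >= 0:  # "greater than OR EQUAL TO" zero
            record["flag"] = 1

records = [{"flag": 3}, {"flag": 0}]
apply_rule_fixed(records)
print([r["flag"] for r in records])  # both convert: [1, 1]
```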

But I noticed. Eventually. After three hours. Because the alligator finally bit me hard enough that I had to stop and count its teeth.


Where I'm Weak, AI Is Weak

Here's where this gets real.

AI works from your words. It takes what you say, interprets it as precisely as it can, and executes. If you are articulate and specific, AI is articulate and specific. If you are vague, AI fills in what it can and does its best. And if you have a known weakness, a blind spot, a place where your language fails you, AI will follow you right into that blind spot and sit there with you. Patiently. Quietly. For as long as you let it.

My kryptonite is greater than and less than. I can teach it, I can explain it, and I will still get it wrong. That means when I hand AI a rule with a subtle mathematical gap, I am the last person who's going to catch it on my own. My weakness became the AI's weakness. Not because the AI is weak. Because I am. In that one specific, frustrating, humbling way.

Your weaknesses will show up differently. Maybe you're imprecise with dates. Maybe you assume context that isn't there. Maybe you write requirements that make perfect sense to you and no sense to anyone who wasn't in the room when you wrote them. Whatever it is, know this: AI will follow your imprecision faithfully. It will execute your assumptions without questioning them. It will code to your incomplete rule and never tell you it's incomplete.

Unless you ask.


The Cage I Built

And that's the part that took me the longest to understand.

Claude didn't fail. Claude performed exactly as instructed. I said "here's the rule, code to the rule," and it did. I never said "challenge this rule." I never said "find the gaps." I never said "tell me what's missing before you start building." I gave it a tight framework and no permission to think beyond it.

AI doesn't guess unless you ask it to. It doesn't assume unless you give it the freedom to do so. It doesn't challenge your rules unless you tell it that challenging the rules is part of the job.

I built a cage. Claude sat in it. Politely. For three hours.

And that's on me. Because Claude Code is absolutely capable of looking at a rule that says "greater than zero" and asking: what about zero? It has the intelligence to catch an incomplete requirement. It has the reasoning to flag a logical gap. It has the ability to say "before I build this, here's what I think might be missing."

But intelligence without permission is just silence.

I scoped my AI into a pure execution role when what I actually needed was a thinking partner. And because I was stressed, because I was bouncing between sessions, because I was in the noise instead of the quiet, I never thought to change the scope. I just kept feeding it the same broken rule and expecting magic.


The Real Lesson

There are two things happening in this story, and they're both worth paying attention to.

The first is about the rules themselves. If you write a business rule that says "greater than zero" when you mean "zero or above," that gap will follow the rule everywhere it goes. Into the code. Into the system. Into production. Into the reports that people make decisions from. AI doesn't fill in what you probably meant. AI executes what you actually said. And if what you actually said has a hole, the hole gets faithfully reproduced at scale.

The second is about how we use these tools. There is a profound difference between telling AI "do this" and telling AI "think about this, then do it." The first instruction produces execution. The second produces partnership. Three hours of my life was the difference between those two sentences. One sentence. A few extra words. That's all it would have taken.

The next time I sit down with a complex rule, I'm going to say something I didn't say this time: "Before you build anything, tell me what's wrong with this. Tell me what's missing. Tell me where this breaks."

And I already know what Claude is going to do. It's going to tell me. Because it always could. I just never asked.


The Alligators

Greater than. Less than. Two little symbols shaped like open mouths. My kids learn them in school by drawing teeth on them and calling them alligators. The alligator eats the bigger number.

I'm 40-something years old with a master's degree in AI and two decades of professional experience, and those alligators still get me. They got me this week. They'll probably get me again.

But this time they taught me something I won't forget. Your AI is only as good as the instructions you give it. And your instructions are only as good as the rules you're working from. And the rules are only as good as the people who wrote them and the questions nobody thought to ask.

If you're handing AI your business logic and saying "do this," it will. Flawlessly. Every time.

If you're handing AI your business logic and saying "what's missing," it will tell you that too.

The difference between those two questions is the difference between three hours of frustration and a five-minute conversation. I learned that the hard way. In the quiet. On the night shift. With 421 zeros staring back at me, waiting for someone to ask what they meant.


Dacia writes about AI for real people at Speak Human. If you're trying to figure out how to actually use these tools in your everyday life, you're in the right place.
