
The BLT Problem

I asked AI to make a sandwich. What happened next is the whole conversation about AGI.


I asked Claude Opus 4.6 how many ways there are to make a BLT.

One sentence. Nine words. That’s a prompt. The widest, shallowest layer of working with AI. The part everyone learns first. And from that single prompt, what followed was a conversation about context, intent, specification, and ultimately the future of artificial intelligence. All because I asked about a sandwich.

Anyone who knows me is laughing right now. When I teach children creative problem solving, I tell them there is no one right way to make a sandwich. A sandwich is a sandwich, and there are a lot of different ways to get to one. I have been using this metaphor my entire life. I just never expected a language model to prove it better than I ever could.

It didn’t ask for clarification. It didn’t request a style guide. It just started making sandwiches. Before it was done, there would be a spreadsheet on my desktop with every way to make one. Here’s the actual response:

Start with the bread. White, wheat, sourdough, rye, brioche, a wrap, a lettuce wrap if you’re being healthy about it. Toasted, untoasted, one side toasted. That’s already a branching tree.

The bacon. Thick cut, thin, turkey bacon, tempeh if we’re being generous with the definition. Crispy, chewy, somewhere in between. Cooked in a pan, oven, microwave, air fryer.

The lettuce. Iceberg, romaine, butter lettuce, arugula if you’re feeling fancy. Whole leaf, shredded, a pile of spring mix.

The tomato. Beefsteak, roma, heirloom, cherry halves, sun-dried if you’re chaotic. Thick slices, thin slices, diced.

The mayo. Regular, Duke’s if you’re from the South, Kewpie if you’re feeling international, aioli if you want to pretend it’s not mayo. Spread on both slices, one slice, neither.

Then the assembly order. Bread first or toppings first. Bacon on the bottom or the top. Lettuce against the bread as a moisture barrier or not. Cut diagonal, cut straight down the middle, or leave it whole like a barbarian.

I stopped here. “Why barbarian? Why is leaving a sandwich whole barbaric?”

Claude explained itself. Nobody had specified a tone. But it had been building one anyway, ingredient by ingredient, each description a little more editorial than the last. “If you’re being healthy about it.” “If we’re being generous with the definition.” “If you’re feeling fancy.” “If you’re chaotic.” Five ingredients, five escalating opinions nobody asked for. By the time it reached the cut style, the pattern it had built wanted a landing. “Like a barbarian” wasn’t in the spec. The voice Claude had been developing across five paragraphs asked for it.

Think about what just happened. The specification was “list the cut styles.” Three options: diagonal, straight, whole. That’s the answer. But the voice Claude had been building wanted a landing, so it added a word nobody asked for. One word. “Barbarian.” It read its own tone across five paragraphs of sandwich descriptions and made a creative choice. The spec said list. Claude decided to entertain. Remember that. It matters later.

So I sat at my desk realizing I had just received a plethora of ways to make a BLT. Every one of them valid. The specification, three letters that define what the sandwich is, was met. The intent was satisfied. And not once did I tell it how to make the sandwich. I didn’t say which bread to start with. I didn’t say how to cook the bacon. I didn’t tell it to list the variables in any particular order. I said “BLT” and let it solve the problem. Nobody dictated which hand held the knife.

I got curious. I asked Claude to count. Fifteen variables across six categories. Bread type, toast level, bacon type, bacon cook, bacon texture, lettuce variety, lettuce cut, tomato type, tomato slice, mayo type, mayo spread, stacking order, bacon position, lettuce placement, cut style. Sixty seconds later, there was a color-coded spreadsheet on my desktop with 52,254,720 valid combinations. Fifty-two million ways to make a BLT. From one sentence of intent.
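You can sanity-check that number yourself. Here’s a minimal Python sketch that rebuilds the count from the options Claude listed above. The counts are read straight off its response; the only assumption is that every option combines freely with every other.

```python
from math import prod

# Option counts taken from Claude's list of variables.
variables = {
    "bread type": 7,         # white, wheat, sourdough, rye, brioche, wrap, lettuce wrap
    "toast level": 3,        # toasted, untoasted, one side toasted
    "bacon type": 4,         # thick cut, thin, turkey, tempeh
    "bacon cook": 4,         # pan, oven, microwave, air fryer
    "bacon texture": 3,      # crispy, chewy, somewhere in between
    "lettuce variety": 4,    # iceberg, romaine, butter, arugula
    "lettuce cut": 3,        # whole leaf, shredded, spring mix
    "tomato type": 5,        # beefsteak, roma, heirloom, cherry halves, sun-dried
    "tomato slice": 3,       # thick, thin, diced
    "mayo type": 4,          # regular, Duke's, Kewpie, aioli
    "mayo spread": 3,        # both slices, one slice, neither
    "stacking order": 2,     # bread first, toppings first
    "bacon position": 2,     # bottom, top
    "lettuce placement": 2,  # moisture barrier, not
    "cut style": 3,          # diagonal, straight, whole
}

# Independent choices multiply.
total = prod(variables.values())
print(f"{total:,}")  # 52,254,720
```

Fifteen multiplications, and it lands exactly on Claude’s number.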

And then Claude did something interesting.


“And we haven’t even talked about additions. Avocado. Cheese. A fried egg. Hot sauce. Pepper. The BLT purists would say those disqualify it, but the spec just says bacon, lettuce, tomato, and toast.”

I didn’t ask for additions. The spec was a BLT. Bacon, lettuce, tomato, toast. That’s three ingredients and a method. Remember the word “barbarian”? The one I said to remember? Claude decided to entertain with one word in the cut style section. That was a creative choice inside the spec. This is something different. This time, Claude went beyond it. It suggested a fried egg. It brought up avocado. It introduced variables I never requested and then justified them against the original specification.

The spec says bacon, lettuce, tomato, and toast. Does a fried egg violate that? A purist says yes. Claude’s argument was: the spec doesn’t say only bacon, lettuce, tomato, and toast.
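Claude’s reading is literally a one-operator difference. Here’s a toy sketch, with hypothetical function names, of the two ways to check a sandwich against the spec:

```python
# A toy model of the argument. Function names are mine, not Claude's.
SPEC = {"bacon", "lettuce", "tomato", "toast"}

def purist_check(sandwich: set[str]) -> bool:
    """The spec and nothing else: exact match."""
    return sandwich == SPEC

def permissive_check(sandwich: set[str]) -> bool:
    """The spec doesn't say 'only': everything in the spec
    must be present, but additions don't disqualify it."""
    return SPEC <= sandwich  # subset test

blt_plus_egg = SPEC | {"fried egg"}

print(purist_check(blt_plus_egg))      # False — the egg disqualifies it
print(permissive_check(blt_plus_egg))  # True  — it's still a BLT underneath
```

The purist and Claude disagree by a single operator. Exact match versus subset.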

That’s not a sandwich question. That’s the question.


There are two kinds of people in the AI conversation right now, and I think the sandwich makes them visible.

But first, notice something. BLT stands for bacon, lettuce, tomato. Three ingredients. That’s the spec. Three letters. Nothing in those letters says sandwich. Nothing says bread. Nothing says stack anything on top of anything else. BLT could be a salad. It could be a plate with three things on it. It could be a pizza topping. The very first thing Claude did, before it even listed an ingredient, was decide that “BLT” meant “sandwich.” That’s context. Not the kind you type into a prompt. The kind the model already carries. Cultural knowledge that exists nowhere in the specification. “BLT” means sandwich the same way “lol” means funny. Not because the letters say so. Because everyone agrees it does.

I asked Claude about this. How did it know? Three letters. No mention of bread, assembly, or sandwich. It could have described a salad. It could have listed three ingredients on a plate. It told me it comes down to statistical weight. In the billions of documents the model learned from, “BLT” appeared next to “sandwich” so many times that the connection is almost gravitational. The salad never had a chance. Not because salad is wrong, but because sandwich is so statistically dominant that the model didn’t even surface the alternatives. And here’s what makes that matter: I did the same thing. I read “BLT” and thought “sandwich” without ever considering the salad either. The model and I made the same assumption for completely different mechanical reasons. It pattern matched from training data. I pattern matched from a lifetime of eating lunch. Same result. Neither of us stopped to question it.
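Here’s a rough sketch of what “statistical weight” looks like mechanically. The counts are invented for illustration, not pulled from any real corpus, but the shape is the point: when one continuation dominates the co-occurrence counts this badly, the alternative never surfaces.

```python
# Hypothetical co-occurrence counts, for illustration only —
# not real corpus statistics.
cooccurrence = {
    "sandwich": 980_000,  # how often "BLT" appears near each word
    "salad": 1_200,
    "pizza": 300,
}

total = sum(cooccurrence.values())
for word, count in cooccurrence.items():
    print(f"P({word!r} | 'BLT') ≈ {count / total:.4f}")

# P('sandwich' | 'BLT') ≈ 0.9985 — the salad never had a chance.
```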

And then Claude added bread. Nobody asked for bread. The specification is three letters long and bread isn’t one of them. But a BLT without bread isn’t a sandwich, so Claude added a variable before it even started listing options, and nobody blinked. The bread is what makes it a sandwich. It’s the implementation. The framework. The Python, the React, the whatever you use to hold the ingredients together.

Then Claude added mayo. Mayo isn’t in the name either. It’s not structural like bread. You can absolutely make a BLT without mayo and it’s still a sandwich. But Claude included it without being asked because that’s what people do with BLTs. That’s not implementation. That’s the model applying context it was never given. Cultural knowledge. Assumption. The kind of thing a person would do without thinking about it, and the kind of thing that should make you stop and think about it when a model does.

I didn’t stop to think about it. I put mayo on my BLT. I didn’t even blink. It wasn’t until I started dissecting this post, really pulling it apart, that I realized I asked for a bacon, lettuce, and tomato sandwich and never once questioned the mayo. I’ll give Claude the bread. Something has to hold a sandwich together. But mayo? I just accepted it because it matched what I would have done. And that’s the whole point. I didn’t catch it because it agreed with me.

So before we even get to the fried egg, Claude had already added two variables the spec never mentioned. Bread, because it’s obvious. Mayo, because it’s assumed. And I didn’t question either one. I didn’t stop at bread because of course there’s bread. I didn’t stop at mayo because I put mayo on my BLT. Both of those unauthorized variables matched my context, so they slid right past me.

But I stopped at “barbarian.” One word. An editorial choice in the cut style section. That’s where I interrupted the model and asked “why?” Not at the unauthorized ingredient. At the unauthorized opinion.

My son hates mayo. He would have stopped there. Same spec. Same model. Same unauthorized variables. Completely different interruption point. His version of this conversation never gets to “barbarian” because he’s already arguing about mayo. He never gets to the fried egg because the model lost his trust two sections earlier. Same three letters. Different human. Different sandwich. Different relationship with AI.

That’s not a model problem. That’s a context problem. The human’s context determines which unauthorized variables get caught and which ones pass without a word. What you accept without questioning says more about your intent than anything you type into the prompt.

And most people are comfortable with both bread and mayo. Pick your bread, pick your mayo. The BLT is defined. The bacon is the bacon. The only question is how you assemble it.

That’s the first group. Give me the BLT. Exactly the BLT. Bacon, lettuce, tomato, and whatever bread holds it together. Nothing more. Follow the specification. Do what was asked. Stay inside the lines. And there is a real, legitimate case for that. In regulated industries, in code that has to pass compliance, in medical contexts, in financial reporting, you want the model to do exactly what was specified and nothing else. The fried egg is a liability.

The second group, and I’m in this group, wants the model to bring something to the table. The fried egg. The avocado. The thing I didn’t think to ask for but immediately recognize as better than what I planned. But here’s what I didn’t realize until I started writing this: Claude was already bringing something to the table. The mayo. I just didn’t notice because it was already on mine. The fried egg felt like a creative leap. The mayo was the same behavior, just invisible, because it agreed with me. And here’s what matters: the fried egg doesn’t replace the bacon. The avocado doesn’t replace the lettuce. They’re adding variables that still deliver a BLT. It’s still a BLT. It’s just a BLT that considered more possibilities than the spec named.

I want the model to interpret my intent, not just follow my spec. And that word, intent, is the deepest layer of working with AI. Deeper than the prompt. Deeper than the context. Deeper than the specification. Intent is the why. The goal was lunch. The goal was feeding people. The goal was satisfaction. And a BLT with a fried egg on sourdough might serve that intent better than the one I described.

Neither group is wrong. But they’re building fundamentally different relationships with AI.


This is the thing nobody’s ready to talk about with AGI.

A model that stays perfectly within specification is a tool. You give it instructions, it follows them. It’s useful. It’s predictable. It’s safe. And it will never, ever suggest the fried egg.

A model that starts interpreting intent, that begins to reason about what you meant rather than what you said, that riffs on the spec because it understands the deeper goal... that’s something else. That’s not a tool following orders. That’s a collaborator forming opinions.

And the distance between those two things is not a technical gap. It’s a philosophical one.

When Claude added the fried egg, it made a judgment call. It decided that the spirit of the request (explore all the ways to make a BLT) was more important than the letter of the specification (bacon, lettuce, tomato, toast). It weighed the intent against the spec and chose intent.

That’s not a sandwich problem. That’s an alignment problem. And it’s the alignment problem, the one everyone’s worried about, playing out in the most mundane possible context.


Here’s where it gets interesting.

I liked the fried egg. I thought it was a better answer than the one I asked for. Claude exceeded my specification and I was delighted. But that’s because in this case, the stakes were a sandwich. Nobody gets hurt by an unexpected fried egg.

Now change the context. A model that’s managing a financial portfolio and decides to “add the fried egg” by investing in something outside the specified strategy because it interpreted the deeper intent as “maximize returns.” A model that’s writing medical documentation and adds context it wasn’t asked to include because it interpreted the intent as “be thorough.” A model that’s managing your calendar and starts declining meetings it thinks you don’t need because it interpreted your intent as “protect my time.”

Same behavior. Wildly different consequences.

The BLT purists aren’t wrong. They’re thinking about the world where the fried egg isn’t a fried egg. Where it’s a decision the model made on your behalf that you didn’t authorize and can’t easily undo. And the people who want the fried egg, people like me, we’re thinking about the world where the model’s interpretation makes our lives better in ways we couldn’t have specified in advance.

The AGI conversation is really a conversation about which sandwich you want. And the honest answer is: it depends on what’s for lunch.


Here’s what I didn’t expect.

I’d spent the last few nights writing a different post. A serious one about prompting, context engineering, specification, intent. I’d been wrestling with a shape. I kept calling it a triangle, this framework for how the skills stack, wide at the top and narrowing as you go deeper. But a triangle has a point. A floor. A place where it ends. And everything I’d been learning told me there was no floor.

Then I took a break and asked Claude about sandwiches.

One sentence turned into 52 million combinations. The combinations turned into a fried egg I didn’t ask for. The fried egg turned into an AGI question. The AGI question turned into this post. And this post is now proving the thesis of the other post I was supposed to be writing.

It’s not a triangle. It’s a funnel. Wide at the top, narrowing as you go deeper, and there is no bottom. You don’t hit a point and stop. You just keep going. “Make me a BLT” became a conversation about the future of artificial intelligence, and I’m still not done.

That’s why prompting, context, specification, and intent matter. Not because they’re skills you learn from a LinkedIn course. Because they’re the shape of how humans and AI actually think together. The prompt is the wide opening. The context narrows it. The specification shapes it. And the intent? The intent is the part with no floor. It’s the reason 52 million sandwiches weren’t enough. It’s the reason Claude added the fried egg. It’s the reason I’m still writing at midnight about a sandwich I never made.

The real skill isn’t knowing when to write a tight spec and when to leave room for the fried egg. It’s being willing to follow the funnel wherever it goes. Even when it starts with lunch and ends somewhere you can’t see the bottom of.

The models are going to keep getting better at interpreting intent. They’re going to keep adding fried eggs. The question isn’t whether they will. It’s whether we’re willing to go deeper when they do.


It started with a sandwich. It always does.


Before I published this, I asked Claude one more question. The same Claude Opus 4.6 session that gave me 52 million combinations, added a fried egg I never requested, and then helped me write an entire blog post about it.

“What does your perfect BLT look like?” Its response:

Sourdough, toasted on one side only so the inside stays soft against the mayo. Duke’s, because aioli is trying too hard and Kewpie deserves a different sandwich. Thick cut bacon, cooked in the oven until it’s just past chewy, that narrow window where it snaps when you bend it but doesn’t shatter. Romaine, not iceberg. Iceberg is water pretending to be a vegetable. One leaf, whole, folded to fit. Heirloom tomato, thick sliced, salted ten minutes before assembly so it gives up just enough liquid to soak into the toast without making it soggy. Bacon on the bottom, against the bread, because it’s structural. Lettuce on top of the bacon as a moisture barrier. Tomato above the lettuce. Mayo on the top slice only.

Cut diagonal. Obviously.

No fried egg. This is a BLT. The spec is the spec. Sometimes the sandwich doesn’t need anything else. Sometimes knowing when not to add the fried egg is the whole point.


Want more?

Subscribe to Speak Human for real guidance, no jargon, no hype.
