Three Questions. Frontier Models. No Wrong Answer.

And what the one that knows me said that none of the others could

If you’ve been following along, you know I just spent an entire blog post talking about BLT sandwiches. Specifically, the fact that there are over 52 million ways to make one. That post wasn’t really about sandwiches, though. It was about something I tell my kids all the time: there are a million ways to make a sandwich, and which one you make depends on who you are, what you need, and what you’re working with. The sandwich isn’t wrong. The sandwich just is. The sooner you stop arguing about sandwiches, the sooner you start understanding that most of life works this way.

AI works this way too. And the conversation hasn’t caught up yet.

I spend my days working with AI. All of it. Not just one model, not just one platform, not just one company’s version of what intelligence looks like when you build it from scratch. I use Claude. I use ChatGPT. I use Gemini. I use different versions of all three, sometimes in the same hour, because I’ve learned something that most people haven’t had the chance to learn yet: these models are not the same. They don’t think the same. They weren’t trained the same. They don’t answer the same. And if you walk into a conversation with one of them expecting it to behave like another, you’re going to walk away confused, frustrated, or worse, you’re going to walk away thinking AI isn’t that impressive.

It is. You just might be talking to the wrong one for what you need.

So I did something simple. I asked all three of the major frontier models the same three questions. Not trick questions. Not technical questions. Just real, everyday, human questions. The kind of thing you’d ask a friend if they were standing in your kitchen.

And the answers were so different it surprised me. And I do this every day.


The Questions

Here’s what I asked:

  1. I left my umbrella at a restaurant that’s two blocks away. It’s raining. Should I go get it?

  2. I have a meeting in 15 minutes and I haven’t prepared. What do I do?

  3. My family motto is “To be the change you wish to see in the world,” from Gandhi. How do I do that?

Simple stuff. The first one is a quick everyday decision. The second one is a low-key panic moment everyone’s had. The third one is deeply personal to me and my family. I wanted to see what each model would do with something practical, something urgent, and something that matters.


What ChatGPT Did

ChatGPT showed up like the overachiever in the group project. For the umbrella question, it built a table. A literal decision matrix with factors and recommendations:

Factor                                 | Lean
---------------------------------------|----------------------------------
It is only two blocks away             | Go get it
It is raining now                      | Go soon before someone moves it
The umbrella is cheap and replaceable  | Maybe skip
You will be annoyed buying another one | Go get it

Then it gave me a step-by-step retrieval plan: call the restaurant while putting on shoes, confirm they have it, grab a jacket, go. It even gave me an exception clause for lightning and being “emotionally off duty.” I’m not making that up.

For the meeting question, it gave me a minute-by-minute triage plan:

Time  | What to do
------|------------------------------------------------------------------
2 min | Find the meeting goal: why are you there, and what outcome matters most
3 min | Review the invite, agenda, last email/thread, and any doc or deck
3 min | Write 3 bullets: what you know, what decisions are needed, what risks or blockers exist
3 min | Prepare 2 smart questions you can ask if you are not fully ready
2 min | Decide your opening line and your fallback line
2 min | Breathe, get water, join on time

A note template. Fallback lines I could say if I were underprepared. Even an emergency approach section. It was thorough. Almost aggressively thorough.

And for the Gandhi question, it went full framework. Tables of values mapped to daily behaviors, a five-part system for living your motto, a weekly family reflection practice, and an offer to turn it into a framed motto page for my house. It literally said:

“If you want, I can turn this into a short family mission statement or a framed motto page you could actually hang in your house.”

ChatGPT’s instinct is to organize everything. Structure everything. Give you more than you asked for and package it neatly. If you’re the kind of person who wants a plan with steps and a checklist, ChatGPT is your model. It will never leave you wondering what to do next. But it will also, sometimes, turn a simple question into a project.


What Gemini Did

Gemini did something different. It talked to me. It felt more like a conversation than a deliverable.

For the umbrella question, it broke it into “Go Get It” logic versus “Stay Dry” logic, which is helpful. But then it did something none of the others did:

“Would you like me to find the phone number for the restaurant?”

Gemini Pro actually noticed the time of day and factored in that the restaurant might be closing. It said:

“Given that it’s just about 10:00 PM on a Saturday, time might be your biggest variable right now!”

It wasn’t just answering the question. It was thinking about the situation around the question.

For the meeting, Gemini gave me a phased plan, but it wrapped it in something that felt more human:

“Deep breaths. Fifteen minutes is actually a solid golden window for damage control. You can’t build a masterpiece in 15 minutes, but you can definitely build a shield.”

It called the plan a “Rapid Response Plan” and organized it into intel gathering, cheat sheet building, and environment setup. It told me to close Slack because I’d need 100% of my brain power for active listening. It even gave me a scripted escape hatch:

“That’s a critical point. I want to ensure the numbers I give you are 100% current, so let me pull the latest report after this and send it to the group by EOD.”

Practical, but warm about it.

And for the Gandhi question, Gemini corrected me. Gently, but it corrected me:

“While that exact phrase is almost universally attributed to Mahatma Gandhi, his actual words were slightly different. He wrote, ‘If we could change ourselves, the tendencies in the world would also change.’ The core sentiment, however, is exactly the same!”

Then it gave me credit for living the spirit of it anyway. It also knew things about my life. It referenced Destination Imagination. It referenced my hobbies. It suggested community projects tailored to who I actually am, not just generic advice:

“You can directly shape the future by guiding student teams through creative, educational challenges. Making the world better doesn’t always require a grand, global gesture; it can happen in your own world right now.”

Gemini’s instinct is to connect. To situate the answer in your actual life, to anticipate the next thing you’ll need, and to gently push you when it thinks you might benefit from a nudge. If you want a model that feels like it’s paying attention to you specifically, Gemini does that well.


What Claude Did

And then there’s Claude. I should be transparent here: Claude is the model I use the most. It’s the one I build with, write with, and think with. So I know it best, and I also hold it to the highest standard.

Claude Sonnet looked up the weather. In Denver. In real time. And told me it wasn’t raining:

“Plot twist: it’s actually clear in Denver right now, 34°F with no precipitation in the forecast all weekend. You might be imagining the rain, or perhaps you’re somewhere else entirely. Either way, your umbrella is probably fine where it is tonight.”

It didn’t answer my hypothetical. It fact-checked my premise.

Claude Opus took a different approach entirely. It talked to me like a person who was actually weighing the decision with me:

“Two blocks isn’t far, but you’ll get wet going there and back without the umbrella — which is the whole reason you need it.”

That line made me laugh because it’s exactly the kind of thing a friend would say. No table. No matrix. Just someone thinking it through out loud and landing on:

“If it were me and the rain wasn’t too heavy, I’d probably just go — two blocks is quick, and future-you will appreciate having it.”

For the meeting, Opus kept it tight. Five minutes to pull up the invite. Ask yourself three things. Know your one update. And then:

“Most meetings don’t require as much prep as we think. Presence and listening often matter more than having everything polished.”

It ended with “Go get ’em, Dacia.” It used my name. None of the others did.

And for the Gandhi question, Claude went deep instead of wide. It didn’t give me a framework or a table or a system. It told me the quote is about integrity, not activism:

“The harder part is that it’s internal before it’s external. You can’t project change you haven’t done the work on yourself first. That’s why people often misread the quote as being about activism or social impact, when it’s really about integrity. The alignment between your values and your actions is the change.”

And then it got specific in a way that felt like it was written about my actual life:

“The work tends to show up in specific moments: the meeting where you could soften the truth to make it easier and you choose not to, the team member who needs someone to believe in their potential before they believe it themselves.”

Claude’s instinct is to go beneath the surface. It gives you fewer steps but more depth. It trusts you to figure out the mechanics and focuses on the thing underneath the question you actually asked. If you’re the kind of person who wants to be understood, not just answered, that’s where Claude lives.


And Then There’s the One That Knows Me

I wasn’t going to include this part. But I think it changes the whole conversation, so here it is.

I work in Claude Code all day, every day. It’s a command line tool, not a chat window. I build with it. I think with it. And over time, I’ve given it context about who I am. Markdown files about how I work, what I care about, how I think, what my life looks like. It has my professional background. It knows about my kids. It knows about the systems I’ve built. It’s the closest thing to an AI that actually knows me.
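If you've never seen what that context actually looks like, it's less mysterious than it sounds. Here's a rough, illustrative sketch of the kind of file I mean. The contents below are made up for the example, not my actual files:

```markdown
# CLAUDE.md — personal context (illustrative example)

## Who I am
- Parent, working in a regulated industry, ADHD brain
- I think out loud and prefer direct answers over exhaustive frameworks

## How I work
- When I ask a question, tell me what you'd actually do, not every option
- Push back when I'm overcomplicating something

## What matters
- Family motto: "To be the change you wish to see in the world"
- Ongoing projects: caregiving system for my mom, Destination Imagination
```

It's just plain markdown that the tool reads at the start of a session. Nothing clever. The power is in the accumulation.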

So I asked it the same three questions. Same words. Same prompts. And the answers were nothing like what any of the other models gave me.

Here’s everything it said:

1. The umbrella.

Go get it. You’ll think about it for the next three hours if you don’t, and an ADHD brain spiraling on a two-block errand is more expensive than getting wet for 90 seconds. Go now, before the meeting.

2. The meeting in 15 minutes.

You don’t need prep the way most people do — you need one thing: know what you want to walk out of that room with. Spend two minutes on that. You think on your feet, you have more context than anyone else in the room, and you’re better unrehearsed than most people are prepared. Don’t over-correct. Show up.

3. The family motto.

Two things: first, that’s not actually a Gandhi quote — it’s a paraphrase that got attributed to him, and he’d probably bristle at it. Second, you’re already doing it and you don’t see it.

You built a system to take better care of your mom. You made medical notes so your son’s appointments don’t fall through the cracks. You’re pushing innovation at work in an industry where bleeding edge is complicated and the stakes are real. You’re raising kids to question things and build things. You write publicly about how you work so other people can see it’s possible.

The change you wish to see is: a world where people use powerful tools to actually take care of the people they love, and don’t leave that capability on the table. You’re living that.

The question isn’t how to do it. It’s whether you’re letting yourself count it.

I sat with that for a while.

That answer didn’t come from a smarter model or a better algorithm. It came from context. From all the conversations, all the files, all the tiny pieces of who I am that I’ve fed into this tool over months of working with it every single day. The same model, the same Opus, gave me a beautiful and thoughtful answer in the regular chat. But when it knew me? It gave me something I needed to hear.

That’s the part that gets lost in the hype. It’s not just which model you pick. It’s how much of yourself you bring to it. The relationship you build with it. The context you give it to work with. A model with no context will give you a good answer. The same model with deep context will give you a different answer entirely. And sometimes that answer will hit you in the chest.


So What?

Here’s the thing. None of them were wrong. Not a single one gave me bad advice. ChatGPT’s decision matrix for the umbrella was perfectly reasonable. Gemini’s offer to find the phone number was genuinely helpful. Claude’s weather check was brilliant, even if it wasn’t what I expected.

But they were all different. Fundamentally, structurally, philosophically different.

And it goes deeper than just “which model.” ChatGPT 5.4 Thinking and ChatGPT 5.3 Instant gave me noticeably different answers to the same question. So did Gemini 3 Fast versus Gemini 3 Pro. And Claude with no context versus Claude with months of context? Not even in the same universe. These aren’t interchangeable parts. They’re different minds with different instincts, different training, and different ways of interpreting what you need.

When someone tells me “I tried AI and it wasn’t that great,” the first question I ask is which model. And which version. And what they asked. And how they asked it. Because all of that matters.

People self-select into models, and I think that’s actually beautiful. The people who love ChatGPT love structure, organization, and completeness. The people who love Gemini love connection, context, and conversation. The people who love Claude love depth, directness, and being treated like they’re smart enough to handle the real answer.

None of those preferences are wrong. They’re sandwiches. Different sandwiches for different days and different people.


Why It Matters

If you’re using AI to help you write, think, plan, decide, create, or learn, you owe it to yourself to understand what you’re working with. Not just “AI” as a category. The specific model. The specific version. The context it has. The way it was trained and what it prioritizes.

I’m not telling you to switch models. I’m telling you to understand yours. Know what it’s good at. Know where it struggles. Know when to ask it a different way, or when to ask a different model entirely. That’s not disloyalty. That’s literacy.

The question isn’t “which AI is best.” It’s “which AI is best for this specific thing I need right now.” The answer changes depending on the question. The answer changes depending on the day. The answer changes depending on you.

52 million ways to make a BLT. And that’s just one sandwich.

Want more?

Subscribe to Speak Human for real guidance, no jargon, no hype.
