Fire

From a match to a campfire to a power plant. Or a wildfire. We get to choose.



The first human who figured out fire had to make a choice.

Not a complicated one. Not a philosophical one. A survival one. You can use this to cook your food and warm your family and light the dark. Or you can use it to burn down someone else's shelter. The fire doesn't care. The fire doesn't choose. The fire just is.

Every meaningful technology in human history has carried that same weight. The printing press spread knowledge and propaganda in equal measure. The telephone connected families and enabled fraud. The internet built a global library and a global black market on the same infrastructure. Every single time, the technology arrived, and humans had to decide what to do with it.

We're at that moment again. Only this time, the fire is smarter than any fire we've ever built. And it's getting smarter faster than most people can comprehend.


A Match, a Campfire, a Wildfire

I talk about exponents a lot. If you've been reading along, you've heard me use this analogy before. My son and I play Magic: The Gathering, and there's a card called Doubling Season that doubles the tokens you create. The first time, it's two. Then four. Then eight. Then sixteen. Then it's out of control and nobody at the table can stop it. I keep coming back to this card because I haven't found a better way to explain what's happening. Every time he plays it, all I can think about is AI.

A few years ago, these models could barely hold a conversation. They hallucinated constantly. They forgot what you said three messages ago. They were a match. Interesting. Warm. A little unpredictable. Useful if you were careful.

Then they got better. Fast. They started writing code. Analyzing data. Building applications. Holding context across long conversations. Learning your voice, your preferences, your patterns. They became a campfire. Genuinely useful. Powerful enough to build around. Something you could rely on.

And now we're watching the campfire become something else entirely.

This week, Anthropic announced Project Glasswing. They built a model called Claude Mythos that is so effective at finding security vulnerabilities that it discovered bugs in every major operating system and web browser. One of those bugs had been hiding in OpenBSD for 27 years. A researcher working with the model said he found more bugs in two weeks than he'd found in the rest of his life combined. The model doesn't just find individual vulnerabilities. It chains them together, linking three, four, sometimes five weaknesses into sophisticated attack sequences that no human would have assembled.

Read that again. A single AI model found vulnerabilities that the entire global security community missed for decades. In weeks. And it's not even the released version.

That's not a campfire. That's something else entirely. And the question on the table right now is whether it becomes a wildfire or a power plant.


The Same Tool

Here's the thing about fire. The fire that cooks your dinner and the fire that burns down a forest are the same fire. Same chemical reaction. Same physics. Same element. The only difference is intention, containment, and control.

Anthropic knows this. It's why they didn't release Mythos to the public. Instead, they gave it to Amazon, Apple, Microsoft, Google, CrowdStrike, Cisco, Palo Alto Networks, the Linux Foundation, and about 40 other organizations. The explicit purpose: find the vulnerabilities before the bad actors do. Fix them. Patch them. Share what you learn with the rest of the industry.

That's a choice. That's Anthropic saying: we built a fire that's powerful enough to be dangerous, and instead of lighting it in the middle of a dry forest, we're handing it to firefighters first.

But here's the part I can't stop thinking about. The same capabilities that let Mythos find a 27-year-old vulnerability in OpenBSD could let a bad actor exploit it. The same reasoning that chains five weaknesses into a defensive insight could chain them into an attack. The same model that makes the internet safer in the right hands makes it catastrophically more dangerous in the wrong ones.

Anthropic said it themselves: given the rate of AI progress, it will not be long before these capabilities spread beyond actors committed to using them safely.

The fire is already lit. The question is who's holding it.


The Exponent Problem

Let me come back to Doubling Season for a second.

In February, Anthropic released models that genuinely changed how I work. I've talked about this before. I run sessions all day in Claude Code. I build systems. I connect workflows. I watch these tools do things that surprise me, and I don't surprise easily. The February models were a leap. A real, tangible, qualitative leap in capability that I felt in my daily work.

Now think about exponents.

If the February models were the jump from four to eight, the next release isn't the jump from eight to twelve. It's the jump from eight to sixteen. And the one after that isn't sixteen to twenty. It's sixteen to thirty-two. That's how exponential growth works. It feels slow at first, incremental, manageable. And then suddenly it isn't. Suddenly you're looking at numbers that don't make sense to a human brain because we're wired for linear thinking and the universe doesn't care about our wiring.

Mythos is what happens when Doubling Season gets played a few more times. A model that finds decades-old vulnerabilities in every major operating system in a matter of weeks. A model so capable that the company that built it decided the responsible thing was to not give it to anyone except firefighters.

We are in the early turns of the doubling. And each turn from here comes faster than the last.

But here's what I want people to remember. In all of human history, across every civilization, every culture, every era, nobody serious ever said "don't use fire." Not once. Fire burned things. Fire destroyed things. Fire killed people. And the response was never to stop using it. The response was to learn. To contain it. To build hearths and furnaces and forges. To create fire departments and building codes and extinguishers. To respect it enough to study how it works, and to be disciplined enough to use it responsibly.

The answer to fire was never fear. It was knowledge.


I Believe in the Campfire

I want to be very clear about something. I am not afraid of this technology. I am aware of it. There's a difference.

I have spent the better part of the last two years building my life around AI. I use it to build systems at work. I use it to manage my household. I use it to teach, to learn, to create, to solve problems that would have taken me weeks to solve on my own. I have watched these tools help friends fix their cars, make better financial decisions, navigate medical questions. I have watched my son use them to learn things that would have taken me years to figure out at his age.

This technology, in the hands of people who care about doing good, is the most powerful force multiplier I have ever seen.

I believe in the campfire. I believe in building something warm, useful, and controlled. I believe in teaching people how to sit beside it, how to tend it, how to use it to feed their families and light their way. I believe that the overwhelming majority of people who touch these tools will use them for good, for growth, for connection, for solving the problems that actually matter to them.

And I also believe that somewhere, someone is sharpening this same tool for a different purpose. That's not paranoia. That's the $300 toolkit on Telegram. That's the deepfake videos fooling liveness checks. That's the synthetic identities printed on real passport material. That's the reality of living in a world where fire exists and not everyone who holds it is building a hearth.


The Gap

This is the part where I stop being philosophical and start being direct.

The criminals are not waiting. The $300 toolkit is already for sale. The bad actors are already using AI to bypass security, to forge identities, to chain vulnerabilities, to move faster than the people trying to stop them. They are not reading articles about whether AI is safe. They are not debating the ethics. They are not waiting for permission. They are using the fire.

And every day that we spend afraid of this technology, debating whether it's safe to touch, circulating scary headlines and retreating from the tools instead of learning them, is a day we fall further behind the people who have no such hesitation.

Anthropic didn't build Project Glasswing because the world is safe. They built it because the world is getting dangerous faster than most organizations are adapting. They built it because the gap between the people who are using AI and the people who are afraid of it is becoming the single most important variable in global security.

I work in an industry that handles other people's money and other people's trust. I don't take that lightly. I never will. And because I take it seriously, I believe the most irresponsible thing we can do right now is nothing. The most reckless response to a world where criminals have AI is to refuse to use AI ourselves.


Learning to Tend the Fire

We're early. That's the truth of it. We're going to get some things wrong. We're going to burn a few things we didn't mean to burn. And we're going to learn, and adapt, and build something extraordinary on the other side of the learning curve.

But only if we pick it up.

I said in my first blog post that the best thing you can do right now is invest in these tools and start learning. That hasn't changed. If anything, it's more urgent. The February models were remarkable. What's coming next will make February look like a match. And the people who spent this year learning, building, experimenting, making mistakes, and growing alongside the technology will be the ones who know how to tend the fire when it gets bigger.

The people who waited will be the ones standing in front of it, wondering when it got so hot.


The fire doesn't choose. We do.

Every day, in every prompt, in every decision to learn or to look away, in every choice between fear and action, we decide what this fire becomes. A tool or a weapon. A hearth or a wildfire. Something that warms us or something that consumes us.

I know which one I'm building.


Dacia writes about AI for real people at Speak Human. If you're trying to figure out how to actually use these tools in your everyday life, you're in the right place.

Want more?

Subscribe to Speak Human for real guidance, no jargon, no hype.
