Brands of loving grace, and humans who use AI for good

Michael Carter Co-Founder & CEO, brand.ai

Brand has never mattered more. And it has never been managed worse. Billion-dollar companies run one of their most valuable assets on 300-page PDFs no one opens. Guidelines ship quarterly while culture moves at meme speed. AI is flooding every channel with content while consumers grow allergic to anything that smells synthetic. The gap between what brands need and what their tools can do has never been wider.

Richard Brautigan once imagined a cybernetic meadow where machines and nature coexist in harmony. Dario Amodei borrowed that vision in his essay "Machines of Loving Grace," describing a world where a century of progress collapses into a decade, a "country of geniuses in a datacenter."

I've had a year to sit with that essay and watch the ideas play out. If "marginal returns to intelligence" are real, we're nearing a threshold where the gap between human and superhuman intelligence becomes a cliff. My question is simple: what happens when that intelligence turns its attention to the cultural work of building brands?

AI has the power to turn brand from a static artifact into a living system: versioned, enforceable, and adaptive. The brands that win will be the ones that preserve distinctiveness while moving at cultural speed.

What took Coca-Cola nearly 140 years to build might happen to new brands in five. Awareness can scale fast. But trust and meaning don't automatically scale with it. That's where the real work lives.

What took Coca-Cola 140 years to build will happen to new brands in five.

Michael Carter Co-Founder & CEO, brand.ai

The Breaking Point

Jaguar's 2024 rebrand shows what happens when brands can't keep up with culture. Months of work produced an identity that erased heritage in favor of generic minimalism. The launch was received as meme fodder. By April 2025, the fallout was complete: the agency relationship had ended, and leadership churn followed.

The backstory matters: a leaked 2022 letter from internal designers had warned of exactly this direction. They felt the soul of the brand was being handed to people who didn't understand it. Two years later, the rebrand proved them right.

Jaguar isn't alone. In May 2024, Apple launched its iPad Pro "Crush" ad, showing a hydraulic press destroying instruments, books, and art supplies to reveal the tablet. The intended message: all of human creativity compressed into one device. The received message: "The destruction of the human experience. Courtesy of Silicon Valley," as Hugh Grant put it. Apple apologized within 48 hours and pulled the TV buy. Even the company that defined creative marketing for forty years could misread the room.

Then there's Arc'teryx. In September 2025, the outdoor brand staged a fireworks display at the base of the Himalayas to "honor Mother Nature." The backlash was immediate: an outdoor brand built on environmental credibility, setting off explosions in one of Earth's most fragile ecosystems. Both the artist and Arc'teryx issued public apologies. Shares of Anta Sports, whose group owns Arc'teryx, dropped 2.4%, wiping out roughly $849 million in market value. The stunt is now under government investigation.

These are more than PR crises. They're symptoms of a deeper problem: brand decisions being made without real-time cultural feedback loops. The gap between intention and reception has never been wider, and the consequences have never been faster.

Billion-dollar companies are managing brand with tools from 1995.

Cultural trends now move faster than brand processes can handle. Approval workflows crawl. Traditional brand management assumes a world that no longer exists, one where messages could be carefully crafted, tested, and deployed in controlled environments. Today, every touchpoint is a potential flashpoint, yet many companies are firing their CMOs for lack of ROI.

The year of slop

Slop isn't just an internet problem. It's a brand problem. When content is cheap, meaning is expensive.

Merriam-Webster defines slop as "digital content of low quality that is produced usually in quantity by means of artificial intelligence." According to one consumer survey, enthusiasm for AI-generated content dropped from 60% in 2023 to 26% in 2025. And yet 79% of marketers increased AI investment this year. The gap between what brands are doing and what consumers want has never been wider.

"We're basically teaching our models to chase dopamine instead of truth."

Edwin Chen, Surge AI CEO, on Lenny's Podcast

The backlash is becoming physical. Friend, an AI wearable company, plastered the New York City subway with ads. Within days, they were covered in graffiti: "AI is not your friend." "The best way to make a friend is over a beer." Pinterest added tools to filter AI content. iHeartMedia launched a "guaranteed human" campaign. Apple TV's new show from Vince Gilligan runs a credit: "This show was made by humans." That phrase, "Made by humans," is becoming a quality label, like "handmade" or "small batch." When everything can be generated, the fact that someone chose to create something becomes the differentiator.

Adam Mosseri, head of Instagram, recently published a 20-slide memo admitting that AI content has won. His exact words: "Everything that made creators matter, the ability to be real, to connect, to have a voice that couldn't be faked, is now suddenly accessible to anyone with the right tools." His solution? Creators should lean into "raw, unproduced, unflattering" content as proof of humanity. Imperfection as defense. But here's what Mosseri is really saying: Instagram can't solve this problem. They're passing the burden to creators. Platforms are admitting they can no longer distinguish real from synthetic. In a world where platforms can't verify authenticity, brands need systems that maintain distinctiveness at the source.

When OpenAI upgraded ChatGPT to version five, there was such backlash from users attached to version four's personality that they had to bring it back. "GPT-5 is wearing the skin of my dead friend," as one user put it. People formed relationships with a voice. That's brand at its most elemental.

There's an irony that the companies building these AI systems understand this better than anyone. Anthropic and OpenAI are using their own tools, but they're pairing them with the best agencies, the best photographers, and the best writers. They're not cutting corners on taste. They're investing in human judgment precisely because they know what's at stake. If the companies building AI are betting on elite creative talent for their own brand work, maybe that tells us something.

The homogenization crisis

The paradox is that brands are adopting AI to differentiate, but AI is making them all sound the same. It's not that the content is bad. It's that it converges. The same adjectives, same rhythms, same safe ideas. When everyone optimizes toward the same statistical center, distinctiveness disappears.

One warning came in 2023. When Italy briefly banned ChatGPT, researchers found that Milan restaurants experienced a 15% decrease in content similarity and a 3.5% increase in consumer engagement during the ban. Access to AI appeared to drive homogenization.

You might think this was an early-model problem. But research using GPT-4 told the same story. One study found that while AI-assisted stories were rated more creative individually, they were significantly more similar to each other. The authors called it a "social dilemma": individuals benefit, but collective novelty suffers.

The most troubling finding came in 2025. Researchers discovered that the effect lingers: even months after people stopped using AI, their content remained homogenized. The pattern had been internalized.

Brands are using AI to cut costs while consumers explicitly prefer human-made content. They're automating distinctiveness out of their own identities.

Homogenization is the enemy of memorability, and memorability is the foundation of brand building.

The false binary

The creative industry is divided into camps: all-in on AI, or loudly against it. Both camps are missing the point.

This debate has happened before. In 1998, Paul Krugman predicted the internet's impact on the economy would be "no greater than the fax machine's." He wasn't wrong because he was dumb. He was wrong because he was early.

The pattern repeats across industries. When personal computers entered ad agencies in the 1980s, typesetting collapsed. Almost 4,000 companies in North America were gone by 1987. Until the 1950s, "computer" was a job title, not a machine. Rooms full of human computers, mostly women, performed calculations by hand at NASA and Los Alamos. When VisiCalc launched in 1979, businesses bought Apple II computers just to run a spreadsheet. Finance didn't disappear, but the bottleneck moved from calculation to judgment.

The same thing happened in art. Charles Baudelaire called photography "art's most mortal enemy" in 1859, then sat for photographs throughout his life. His portrait by Nadar remains iconic. Photography didn't kill painting. It freed painting to become something else entirely. AI will do the same to brand work.

What's actually working

Remarkable things are happening. Demis Hassabis and John Jumper won the 2024 Nobel Prize in Chemistry for AlphaFold, which predicts the 3D structure of proteins. In five years, it's become as fundamental to biochemical research as microscopes. At Harvard, an AI model called PopEVE helped diagnose rare genetic diseases in roughly a third of 30,000 previously undiagnosed patients.

Sam Altman recently noted that AI-driven scientific discovery arrived faster than OpenAI expected. Mathematicians are already reporting that current models have crossed a threshold in how proofs and research workflows operate. OpenAI's internal benchmarks now show GPT-5.2 beating or tying human experts 74% of the time across 40+ business tasks. Six months ago, that number was 38%. This is the compression Amodei described, and we're already inside it.

But the track record of AI in marketing is less inspiring. Most tools optimize for volume, not quality. They make it easier to produce more content, faster, with less friction. That's precisely the problem. The world doesn't need more content. It needs better judgment about what content should exist.

So why build an AI tool for brand work? Because the alternative is worse. Brands are already using AI, badly, through consumer tools that don't understand their context. The same compression is coming for cultural work. The question is who's building the tools, and what they're optimizing for.

The invisible seam

Here's what we actually care about at brand.ai: using AI in a way that you don't even know AI was used.

Not the AI slop marketers know too well. Not obvious ChatGPT prose with its telltale cadence. (RIP to the em dash, a punctuation mark I loved for years, now ruined by overuse.) We're talking about AI as an invisible ingredient. Present in the process, absent from the perception.

The best work will be indistinguishable from human craft, not because AI has replaced humans, but because humans have mastered AI as a tool. But mastery requires distinguishing between two uses: learning and creating.

Andrej Karpathy put it well in a recent post: "Learning is not supposed to be fun... the primary feeling should be that of effort. It should look a lot less like that '10 minute full body' workout from your local digital media creator and a lot more like a serious session at the gym." Learning is like going to the gym for your brain. AI can be an incredible tutor, patient and endlessly willing to explain. But it can't do the reps for you. The moment you let it write your brief instead of helping you think, you've traded learning for the appearance of learning.

Creating is different. Once you've developed genuine skill and taste, AI becomes a force multiplier. You know what good looks like because you've put in the hours. The people who skip the learning phase produce slop. They can't tell when AI is giving them mediocre output.

Imagine what Man Ray would have done with this technology. He pushed photography into places Baudelaire couldn't have imagined, like rayographs made without a camera, solarization techniques with Lee Miller, and fashion spreads for Vogue alongside surrealist films. He used photography to realize what he saw in his mind. But he could only do this because he understood the medium deeply enough to break its rules intentionally.

When anything is possible, the bar gets higher. Technology disappears, and what remains is the concept, the vision, the point of view.

In brand work, we translate product truth into culture. We make choices about what to emphasize, what to leave out, and how to earn attention in a crowded world. There's craft in that. There's taste. There's responsibility. And there's always the temptation to take shortcuts. AI is another tool. Brands need to have something worth saying, and they need to say it well.

Lessons from engineering

The software engineering world faced this inflection first. Six months ago, engineers debated whether paying for AI tools was worth it. Today, adoption is quickly becoming the default.

Karpathy coined the term "vibe coding" in February 2025. He barely touches the keyboard, accepts changes without reading diffs, copy-pastes error messages with no comment. This sounds reckless, but Karpathy is an extremely talented programmer. He's using AI this way because it's fun and fast. For low-stakes projects, why not?

His more recent posts capture the magnitude of this shift: "Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession." And: "I've never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between."

Jaana Dogan, a principal engineer at Google, posted recently: "I'm not joking and this isn't funny. We have been trying to build distributed agent orchestrators at Google since last year. There are various options, not everyone is aligned... I gave Claude Code a description of the problem, it generated what we built last year in an hour."

Simon Willison drew an important distinction: "If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding. That's using an LLM as a typing assistant."

"2024 was the year of chatting with AI. 2025 is the year of delegating to it."

Karpathy's 2025 Year in Review

The engineers who are best at their craft are also the most proficient users of AI. This isn't coincidental. Good engineers have spent decades automating their own work: linters, formatters, test suites. They see AI as a force multiplier, not a threat.

The same principle applies to brand work. The strategists and creatives who will thrive aren't the ones resisting these tools or surrendering to them. They're the ones learning to direct them toward outcomes that require human judgment, taste, and cultural intuition.

The taste bottleneck

The top 1% of writers and creatives are in higher demand than ever. Why? Because they can control and understand these systems. They know when output is slop. They have the taste and judgment to direct AI toward specific outcomes.

The divergence between AI companies tells the story. OpenAI's Sora 2 didn't just ship a model; it shipped a social product: a standalone app with a TikTok-like feed for creating, remixing, and sharing AI video. To be clear, Sora 2 is genuinely impressive, and genuinely fun to use. OpenAI says it's explicitly not optimizing for time spent in the feed, that the design goal is creation over consumption.

I hope that's true. But architectures have incentives. The moment you build an algorithmic feed, even with good intentions, you're building the infrastructure that usually rewards the most compulsive inputs. Feed-shaped products tend to create feed-shaped outcomes. Altman himself has noted that adding AI features to old products makes them incrementally better, while redesigning from scratch creates step-change opportunities.

Anthropic, meanwhile, employs a philosopher named Amanda Askell to work on Claude's character, asking questions like: How should an AI model feel about its own position in the world? What does it mean to bring a new kind of entity into existence? How do you raise an AI to be good in the way an ideal person would be good? One company is building feeds. The other is asking what kind of mind they're creating. The difference matters.

UI design is collapsing into code. Figma launched prompt-to-app. Cursor has exceptional built-in design capabilities. Designers are shipping features in an afternoon that used to take weeks. Let's be clear about what this means: UI component generation is getting commoditized, but human craft still matters in creative direction, curation, intuitively knowing what to make, exceptional typography, and beautiful motion. That's where the bottleneck remains.

Engineers adapted by becoming "context engineers," structuring codebases so AI could generate code while understanding the reasoning. As one practitioner put it, context engineering is "less like operating a scientific instrument and more like mastering a musical one."

Brand builders face the same challenge. Dario makes the case that intelligence becomes the scarce resource that unlocks exponential progress. The same is true for brands.

Emily Segal and the K-HOLE collective proved this with "Youth Mode." A single cultural read, published as a PDF, that reshaped fashion for years. Outsized gains from the right idea at the right time.

When companies turn to generic AI, the gap only widens. ChatGPT doesn't know your blue is a promise, not just Pantone 286. It can't tell the difference between clever and offensive. What it produces is an illusion of coherence that falls apart when tested in the world.

The solution isn't avoiding AI. It's building AI that actually understands your brand.

The view from the front row

We started building brand.ai in late 2022, right as ChatGPT launched. We saw what was coming before most brands did.

First it was individuals using AI for emails. Then whole teams uploading brand books to consumer tools, each getting slightly different answers about what their brand stood for. By 2024, strategy decks were being run through generic AI, producing five hundred versions of the truth with zero alignment.

Researchers have a name for this now: workslop. A Stanford and BetterUp Labs study published in Harvard Business Review found that 41% of workers have received AI-generated content that "masquerades as good work, but lacks the substance to meaningfully advance a given task." Each instance costs nearly two hours of rework. For a 10,000-person company, that's over $9 million a year in lost productivity. The tool meant to save time is creating more work downstream.

The scariest moments happen in the gaps. While IT drafted policies, brand teams had already uploaded years of strategy to ChatGPT via personal accounts. One company discovered their team had been using AI for months, shipping ten times more content while nearly publishing a campaign that would have offended multiple cultural groups. Caught only by luck, not process.

We built brand.ai because we knew this was inevitable, and we knew it could be done better.

Building brand intelligence

Dario imagines a datacenter full of geniuses revolutionizing science. What if brands had the same? A collective intelligence that knows your brand as deeply as your best strategist, but can process cultural signals in real time.

That's what we're building. Not another tool to manage, but a living operating system. Instead of treating your identity as just another dataset, we create dedicated intelligence trained on your brand's DNA: voice, values, visual system, strategic framework.

Here's what that looks like in practice: a social post gets generated, then brand.ai checks voice consistency, flags taboo topics, applies legal guardrails, validates visual system rules. It returns a scored report with suggested edits and logs the decision for governance. Permissions and audit trails ensure brand knowledge doesn't leak into personal accounts. The result is AI that knows your brand specifically, not AI that produces generic output in your fonts.
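To make the loop above concrete, here is a minimal sketch of a "generate, check, score, log" review step. Every name in it is hypothetical; this is not the brand.ai API, just an illustration of what an executable guideline can look like, assuming a brand profile with banned vocabulary and taboo topics.

```python
from dataclasses import dataclass, field

@dataclass
class BrandReport:
    score: float                          # 1.0 = fully on-brand
    flags: list = field(default_factory=list)
    suggested_edits: list = field(default_factory=list)

def review_post(text: str, brand: dict) -> BrandReport:
    """Score a draft post against a (hypothetical) brand profile."""
    report = BrandReport(score=1.0)
    lowered = text.lower()

    # Voice consistency: penalize vocabulary the brand never uses.
    for word in brand.get("banned_vocabulary", []):
        if word.lower() in lowered:
            report.score -= 0.2
            report.flags.append(f"off-voice term: {word!r}")
            report.suggested_edits.append(f"rewrite without {word!r}")

    # Taboo topics are hard stops, not score deductions.
    for topic in brand.get("taboo_topics", []):
        if topic.lower() in lowered:
            report.score = 0.0
            report.flags.append(f"taboo topic: {topic!r}")

    report.score = max(report.score, 0.0)
    return report

guidelines = {
    "banned_vocabulary": ["synergy", "disrupt"],
    "taboo_topics": ["politics"],
}
report = review_post("We disrupt your workflow with synergy.", guidelines)
print(report.score, report.flags)
```

A real system would replace the keyword checks with model-based judgments, but the shape is the point: the guideline runs as a gate before content ships, and every verdict is data that can be logged.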

The PDF becomes an API

Aaron Levie, CEO of Box, recently captured the strategic question. In a world where everyone has access to the same intelligence, how does a company differentiate? Context. Generic AI gives everyone the same expert, same strategist, same outputs, same voice, same slop. Context makes intelligence useful. Context makes brands distinct instead of interchangeable.

Brand guidelines are already executable. We're doing it now. But they're about to become the foundation for something bigger: agentic branding.

2025 has been called "the year of the agent." These aren't chatbots. They're autonomous systems that can reason, plan, and execute multi-step workflows. For brands, this means moving from static rules to living systems.

Rules alone aren't enough. The real value is in decision traces. The exceptions, approvals, precedents, and cross-system context that currently live in Slack threads, email chains, and people's heads. Why did we approve that headline? What precedent set that tone? Who signed off on that partnership? That reasoning has never been treated as data. It should be.

Altman recently argued that memory matters more than raw intelligence. Models are converging, and products diverge based on how well they remember context. He called today's memory capabilities the "GPT-2 era of memory," implying we're very early. The brands that capture their decision history will build what some are calling a "context graph," a queryable record of not just what happened, but why it was allowed to happen. That's the real unlock for autonomous brand systems.
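A sketch of what treating that reasoning as data might look like: a decision trace, where each approval carries its rationale and precedents and can be queried later. The field names and helpers here are illustrative assumptions, not a real schema.

```python
from datetime import datetime, timezone

def log_decision(graph, *, asset, verdict, rationale, approver, precedents=()):
    """Append one brand decision, with its 'why', to the trace."""
    entry = {
        "asset": asset,
        "verdict": verdict,            # "approved" | "rejected" | "exception"
        "rationale": rationale,        # the reasoning that usually gets lost
        "approver": approver,
        "precedents": list(precedents),
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    graph.append(entry)
    return entry

def why(graph, asset):
    """Answer 'why was this allowed to ship?' from the trace."""
    return [e for e in graph if e["asset"] == asset]

graph = []
log_decision(graph, asset="holiday-headline-v3", verdict="approved",
             approver="brand lead",
             rationale="playful tone is acceptable in seasonal campaigns",
             precedents=["summer-campaign-2024"])
print(why(graph, "holiday-headline-v3")[0]["rationale"])
```

Once decisions live in a structure like this instead of Slack threads, an autonomous system can cite precedent the same way a senior strategist does.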

My bet is that within two years, brand guidelines become fully executable. Your brand knows what it should say, and flags what it shouldn't, before content ships. Within five years, real-time cultural adaptation is table stakes: systems detect weak signals and adjust tone, imagery, and channel mix before trends hit mainstream. The strategist's job shifts from writing guidelines to training systems, from approving content to designing constraints. Static guidelines will feel as old-fashioned as filing cabinets.

Authenticity as strategy

The "100% human" marketing trend isn't a rejection of all AI, but a rejection of bad AI. Brands like Patagonia, DC Comics, and Polaroid have staked out positions against AI-generated imagery. Senior executives from LVMH, Kering, Chanel, and Richemont have agreed that machines should never replace people in creative work. Chanel's tech innovation lead called the stakes "reputational, you might even say existential."

But plenty of terrible work predates AI. The world was full of mediocre campaigns, tone-deaf messaging, and brand-damaging creative long before anyone typed a prompt. "Made by humans" isn't a quality guarantee. The real differentiator is whether a human with taste and judgment directed the outcome.

Two frameworks help navigate this. Anthropic's AI fluency framework offers a practical model. The goal is using AI well, not simply using it more.

Others approach it from the design side. In the report "From HAL to Her," the authors identify four territories for how AI presents itself. Mechanization signals control. Magic signals possibility. Biomimicry signals ease. Anthropomorphism signals companionship. Each answers the fundamental questions people ask when encountering AI: Am I being deceived? Am I being replaced? Who's really in charge?

For brands building with AI, these aren't just design choices. They're trust signals.

The timeline is compressed

We're in what Jack Clark at Anthropic calls a "parallel world" moment. Those working closely with frontier AI systems already live in a different reality than those who don't. The gap between passive consumption of AI (scrolling past synthetic slop, asking how to roast a turkey) and active building with it (creating in five minutes what used to take weeks) is widening fast. By mid-2026, that gap will be impossible to ignore.

The numbers tell the story. AI data center investment accounted for over 90% of U.S. GDP growth in the first half of 2025. A handful of companies will spend more than the inflation-adjusted cost of the entire Apollo program in ten months.

We're at the beginning of a shift that will reshape how brands are built, maintained, and evolved. What might have taken decades will happen in years. It won't all be smooth. Some experiments will break. Some AI outputs will offend. But ignoring it isn't an option, and the alternative is irrelevance.

The winners won't always be those with the biggest budgets. They'll be the ones that pair human judgment with machine intelligence to create living systems. Brands that can move as fast as culture while staying true to themselves.

The cybernetic meadow Brautigan imagined was about harmony between human values and technological capability. For brands, that means preserving what makes them distinctly human while embracing the intelligence that can help them thrive.

Brand has no final state. It's always becoming. If we build AI that truly understands brands, we won't just make marketing more efficient. We'll make business more human.
