Paul Welty, PhD · AI, Work, and Staying Human


True 1-to-1 outreach is finally possible with AI

The 1-to-1 personalization promise is thirty years old. It never worked because understanding each person was too expensive. AI changed the economics.


In 1993, Don Peppers and Martha Rogers published The One to One Future. BusinessWeek called it “the bible of the new marketing.” The thesis was simple and correct: stop treating customers as segments. Treat them as individuals. Build learning relationships where every interaction makes the next one smarter.

The book sold millions of copies. It spawned an industry. And then that industry spent thirty years building the exact opposite of what Peppers and Rogers described.

Instead of understanding, we got merge tags. Instead of learning relationships, we got drip campaigns. Instead of treating people as individuals, we treated them as rows in a database with a few custom fields. The vision was right. The execution was a fraud.

I’ve spent the last two years building systems that do what Peppers and Rogers actually described. Not the cartoon version. The real thing. And I can tell you exactly why it took this long — and why it’s finally working now.

The economics that killed personalization

The problem was never the idea. The problem was the cost of understanding a single human being.

A fully-loaded sales development rep costs somewhere between $110,000 and $160,000 per year. A good one can work about 15 leads per day. Manual prospect research — actually reading someone’s content, understanding their business, figuring out what they care about — takes 15 to 30 minutes per lead. Writing a genuinely personalized email, one that demonstrates real understanding of the recipient, takes another 5 to 10 minutes.

Do the math. At those rates, a single SDR working at full capacity can produce maybe 15 genuinely personalized touches per day. That’s 75 per week. Around 3,500 per year. For $110K+.
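The arithmetic above can be sketched in a few lines. This is a back-of-envelope model using the figures from this section; the 47-week working year is my assumption, chosen so the annual total lands near the ~3,500 touches cited.

```python
# Back-of-envelope SDR economics, using the figures cited above.
ANNUAL_COST = 110_000    # low end of a fully-loaded SDR cost, in dollars
TOUCHES_PER_DAY = 15     # genuinely personalized touches at full capacity
WORK_WEEKS = 47          # assumed working weeks per year

touches_per_week = TOUCHES_PER_DAY * 5
touches_per_year = touches_per_week * WORK_WEEKS
cost_per_touch = ANNUAL_COST / touches_per_year

print(touches_per_week)              # 75
print(touches_per_year)              # 3525 -- roughly the 3,500 cited
print(round(cost_per_touch, 2))      # about $31 per personalized touch
```

Over $30 per touch, at the cheap end of the salary range, is the number that made true personalization a non-starter.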

Nobody pays for that. So the industry invented “personalization theater” instead.

You know personalization theater even if you’ve never heard the term. It’s the email that opens with “Hi {{first_name}}, I noticed you went to {{university}}.” It’s the LinkedIn message that says “I’ve been following {{company_name}}’s work in {{industry}} and I’m impressed.” It’s any message where the “personalization” could have been assembled from a database lookup in under a second.

Recipients aren’t stupid. They can tell the difference between someone who read their work and someone who queried a CRM field. This is why average cold email response rates sit around 5%. This is why only about 5% of senders bother to personalize every email. The math doesn’t work, everybody knows it, and the industry keeps pretending otherwise.

The CRM revolution of the 2000s gave us the plumbing. Salesforce, HubSpot, Marketo — they built infrastructure for managing contacts, tracking interactions, automating sequences. Good plumbing. Necessary plumbing. But plumbing doesn’t supply the water. The water is understanding. And understanding remained hand-crafted, expensive, and unscalable.

Account-based marketing tried to solve this by narrowing the aperture. Instead of spraying a thousand people with generic messages, focus on fifty accounts and personalize deeply. Better. But it still depended on human researchers doing manual work for each account. ABM didn’t solve the cost problem. It accepted the cost problem and built a strategy around it.

For thirty years, the industry oscillated between volume and quality. High volume, low personalization. Low volume, high personalization. Nobody could do both. The economics wouldn’t allow it.

The wrong breakthrough

When large language models arrived, the first thing everyone did was use them to generate more email. Faster. At higher volume.

This is exactly wrong.

The ability to generate text was never the bottleneck. Template engines have been generating text since the 1990s. Mail merge has been generating text since the 1980s. Adding an LLM to your outbound sequence to produce slightly more natural-sounding templates is just a more sophisticated version of the same failed approach.

The emails read better. They still don’t demonstrate understanding. They’re still personalization theater — just with better stage production.

If your AI outreach strategy is “generate more emails faster,” you’ve automated the wrong part of the process. You’ve made the assembly line more efficient without improving the product.

What actually changed

Here’s what LLMs actually made possible, and it’s not what most people are focusing on.

AI collapsed the cost of understanding.

Not the cost of generating text. The cost of reading. The cost of synthesis. The cost of taking someone’s last ten LinkedIn posts, their company’s recent blog entries, a podcast they appeared on, a conference talk they gave, and the industry trends affecting their business — and assembling from all of that a genuine comprehension of what this specific person cares about, how they think, and what challenges they’re facing.

That work used to take a human researcher 20 to 30 minutes per prospect. It required reading, interpreting, connecting dots, forming a mental model of another person’s professional world. Valuable work. Skilled work. Expensive work.

An LLM does it in seconds. For pennies.

This is the breakthrough. Not text generation. Text comprehension and synthesis at a cost that makes individual-level understanding economically viable for the first time.

Peppers and Rogers had the right destination in 1993. They just couldn’t have known that the bottleneck — the cost of understanding each individual — would take thirty years and a completely different technology to break through.

How it works when you do it right

The approach that actually works looks nothing like what most people are building.

Most AI outreach tools start with the message. They take a template, inject some AI-generated “personalization,” and fire it off. The intelligence, such as it is, lives in the output layer. The email sounds better. The underlying approach is identical to what wasn’t working before.

The approach that works starts with the understanding. Before you write a single word of outreach, you build a genuine model of the person you’re reaching out to.

What does that look like in practice? You analyze their content footprint — what they write, what they share, what they comment on. You don’t just note that they posted about AI on LinkedIn last Tuesday. You read the post. You understand the argument they were making. You notice the thread connecting that post to one from three months ago. You identify the problems they keep returning to.

Then you build what amounts to a voice profile. Not just what they talk about, but how they communicate. Are they data-driven or narrative-driven? Do they speak in frameworks or in anecdotes? What level of formality do they use? What kind of arguments do they find compelling?

Only then do you generate outreach. And the outreach works — not because it’s well-written, but because it demonstrates understanding. The recipient can tell that someone (or something) actually engaged with their thinking. The message responds to their ideas, not just their job title.
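The ordering described above can be made concrete with a small sketch. This is a hypothetical data shape, not the author's actual system: the names (`ProspectModel`, `VoiceProfile`, `ready_for_outreach`) are illustrative, and the point is only the gate — no message gets generated until a model of the person exists.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class VoiceProfile:
    """How the prospect communicates, not just what they talk about."""
    style: str       # e.g. "data-driven" vs. "narrative-driven"
    formality: str   # e.g. "casual", "formal"
    favors: str      # the kind of argument they find compelling

@dataclass
class ProspectModel:
    name: str
    recurring_themes: list[str] = field(default_factory=list)  # problems they keep returning to
    voice: VoiceProfile | None = None

def ready_for_outreach(p: ProspectModel) -> bool:
    """The gate: understanding first, generation second."""
    return bool(p.recurring_themes) and p.voice is not None

prospect = ProspectModel("Jane Example")
print(ready_for_outreach(prospect))   # False: no understanding yet, so no message

prospect.recurring_themes = ["AI cost curves", "sales team design"]
prospect.voice = VoiceProfile("data-driven", "casual", "frameworks")
print(ready_for_outreach(prospect))   # True: now outreach can be generated
```

The contrast with the template-first tools is structural: there, the message template exists before any understanding does; here, the profile is a precondition for generation.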

The numbers bear this out. AI-personalized campaigns built on genuine prospect understanding see reply rates between 9% and 21%, compared with the 1% to 5% typical of generic outreach. Research time drops by 60% to 90%. Not because you’re skipping the research. Because the research happens at machine speed instead of human speed.

This is the difference between using AI to write faster and using AI to understand better. One of those is an incremental improvement. The other changes what’s possible.

The line

I’m not going to pretend there’s no tension here.

The data is clear on both sides. 71% of consumers say they want personalized interactions. And 68% worry about how their data is handled. 41% find it “creepy” when brands seem to know too much about them. These numbers coexist. People want to be understood and they don’t want to feel surveilled.

In B2B outreach, the calculus differs from consumer marketing. You’re working with intentionally public information. LinkedIn posts are published for professional visibility. Blog articles are written to build an audience. Podcast appearances are, by definition, public speech. Conference talks are designed to be heard. Nobody publishes a LinkedIn post and then objects when someone reads it.

But there’s a line, and it matters.

The line is this: Are you using public information to understand someone, or to deceive them into thinking you understand them?

Understanding means you’ve actually engaged with their ideas and you’re responding to them. Deception means you’ve scraped enough surface data to mimic understanding without having done the work. The first approach leads to conversations. The second leads to people feeling manipulated.

When prospects feel watched instead of understood, outreach backfires. And the difference between the two is not about what data you use. It’s about whether the understanding is genuine.

A human researcher who reads your blog posts and writes you a thoughtful email is doing the exact same thing these AI systems do. They’re consuming your public content, building a mental model, and crafting a response that demonstrates understanding. Nobody calls that creepy. They call it professional. The ethical question with AI isn’t about the activity. It’s about the intent and the honesty behind it.

If your system builds genuine understanding and generates outreach that honestly reflects that understanding, you’re on the right side of the line. If your system grabs a few keywords from someone’s profile and constructs a message engineered to fake understanding, you’re on the wrong side. The technology is the same. The intent is what differs.

I think about this a lot because I’ve seen both approaches. The fake-understanding approach works in the short term — maybe one or two touches before the recipient realizes the depth isn’t there. The real-understanding approach works because it has to. When you actually engage with someone’s thinking, the conversation that follows is substantive. It has somewhere to go.

The thirty-year arc

Here’s how I see the story.

1993: The promise. Peppers and Rogers articulate the vision. Treat customers as individuals. Build learning relationships. Make every interaction smarter than the last. The idea is right. The technology to execute it doesn’t exist.

2000s: The infrastructure. CRM platforms, marketing automation, account-based marketing. The industry builds the plumbing for managing relationships at scale. Good work. Necessary work. But the plumbing carries merge tags, not understanding. The infrastructure enables personalization theater, not personalization.

2010s: The failure at scale. Everyone has the tools. Nobody has the understanding. Email volume explodes. Response rates crater. “Personalization” means inserting database fields into templates. The gap between the promise and the reality becomes a running joke in every sales team’s Slack channel. People start marking cold emails as spam before reading them.

2020s: The breakthrough. Large language models collapse the cost of understanding. For the first time, it’s economically viable to build a genuine model of each individual prospect — their interests, their communication style, their professional challenges — before reaching out. The bottleneck that killed personalization for thirty years disappears in about eighteen months.

Now: The responsibility. We have the technology to do what Peppers and Rogers described. Actually do it. Not the cartoon version. Not merge tags. Not “I noticed you went to {{university}}.” Real understanding of real people at a cost that scales.

And with that capability comes a choice. You can use these tools to flood inboxes with slightly better spam. Or you can use them to do what was always the right thing — treat people as individuals, understand their world before asking for their attention, and earn conversations instead of manufacturing touches.

The first approach will produce a brief spike in metrics followed by the same immune response that killed email marketing’s credibility the first time around. The second approach is what Peppers and Rogers were actually talking about. It just took thirty years and a technology they couldn’t have imagined to make it economically possible.

They were right about the destination. They were just wrong about the timeline.

And now there’s no excuse left.
