Building a B2C AI Growth Engine
Retention, Distribution, and the GTM Playbook That Worked for Us in Scaling CrawlAI
In March 2025, I joined CrawlAI through Chalkboard, a student entrepreneurship circle at my university, to help scale the platform.
For context, CrawlAI is a no-code platform built by engineering students at SCU that lets users create AI assistants by automatically crawling websites, ingesting uploaded documents, and building vector stores for retrieval-augmented generation. It integrates those user data pipelines with LLM APIs and adds an interface for configuring prompts and workflows.
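To make that concrete, here’s a minimal sketch of that kind of crawl-chunk-embed-retrieve pipeline. Everything in it (the function names, the toy bag-of-words “embedding,” the in-memory store) is an illustrative assumption, not CrawlAI’s actual code:

# Hypothetical crawl -> chunk -> embed -> retrieve pipeline, in the
# spirit of the description above. A real system would use a crawler,
# an embedding model, and a vector database instead of these stand-ins.
import math
from collections import Counter

def chunk(text: str, size: int = 200) -> list[str]:
    """Split a crawled page or uploaded document into fixed-size chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words vector; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class VectorStore:
    """In-memory stand-in for a vector database."""
    def __init__(self):
        self.items: list[tuple[Counter, str]] = []
    def add(self, text: str):
        self.items.append((embed(text), text))
    def retrieve(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda item: cosine(q, item[0]), reverse=True)
        return [text for _, text in ranked[:k]]

# Ingest crawled pages and documents, then assemble the prompt an LLM API receives.
store = VectorStore()
for page in ["CrawlAI lets users build AI assistants from websites.",
             "Uploaded documents are chunked and embedded for retrieval."]:
    for piece in chunk(page):
        store.add(piece)

question = "How do assistants get built?"
context = "\n".join(store.retrieve(question))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"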
When I joined, CrawlAI needed growth: real users, not just prototypes. I took on the challenge of building its first user acquisition engine, despite having never done marketing work before and having virtually no experience with ad platforms or influencer spreadsheets. That meant I had to learn quickly.
And learn I did. Below are the lessons I took away from helping build a B2C AI agent platform.
To give some quick background: my name is Henrique. I’m an undergraduate studying Computer Science and Business at Santa Clara University, with experience in venture capital, product management, and software engineering. See https://henriqueoliveira.dev/
CrawlAI became my personal lab for learning GTM in the AI era. Along the way, I pulled lessons from some of the smartest operators and investors writing today: Andrew Chen on retention, GTMfund on distribution, Lenny Rachitsky on AI product iteration, and Ronit Yawalkar on GTM Engineering.
Over the next few weeks, I ran CrawlAI’s first GTM playbook:
Weekly site traffic scaled 6.9x.
Registrations grew from 2 → 348.
I launched, paused, and relaunched campaigns across Google Ads, LinkedIn, Yandex, and 50+ influencer partnerships based on performance data.
Cost-per-registration (CPR) dropped 36% as I shifted spend toward high-conversion channels.
I built a centralized analytics dashboard to track every step of the funnel, from impressions to prompt creation to assistant usage.
These aren’t unicorn-status results, but they were proof that CrawlAI, one of thousands of so-called “GPT-wrappers,” could build a repeatable growth engine. More importantly, they were my first real lessons in leading go-to-market.
This post is a reflection on those lessons, my CrawlAI experience, and why I believe that the future of product management in AI requires being as fluent in GTM systems as in code.
Why GTM for AI Is Different
When I stepped into growth at CrawlAI, I quickly realized that AI products don’t behave like traditional SaaS tools. The rules of engagement are different because AI itself is different: it’s non-deterministic, hard to predict, and inherently shaped by user trust.
In a classic SaaS product, you can map out the funnel: acquisition → activation → retention → expansion. You know what a button click does. You know what a user flow looks like.
In AI, every interaction feels like a moving target.
Non-determinism: Users can phrase prompts in infinite ways, and the system can respond differently each time. That means growth isn’t just about acquisition; it’s about continuously proving that the product is reliable enough to trust again.
Agency vs. control: As Lenny Rachitsky’s CC/CD framework points out, AI systems must earn autonomy. At CrawlAI, this applied not only to the product’s AI assistants but to how we scaled campaigns: we started small (manual influencer outreach, close monitoring of ad spend) and only gave “more agency” to automated systems once I (almost as a user) could trust the platform to behave autonomously.
Trust as retention: Andrew Chen’s reminder that “you can’t fix bad retention” hits harder in AI. If a user gets one wrong or unhelpful answer, trust can evaporate instantly. In B2C AI growth, every session is a retention test.
This is why distribution alone isn’t enough in AI. Yes, the VC Corner framework is right that distribution is a moat, but if you don’t layer distribution on top of reliability and trust, the moat leaks.
At CrawlAI, I saw this firsthand:
Users who activated (ran their first assistant) stuck around only if the output was trustworthy and felt personalized.
Campaigns drove traffic, but retention came only after refining onboarding flows, feedback loops, and assistant templates.
Scaling spend blindly didn’t work. We had to iterate weekly, feeding learnings back into both GTM and product.
If there’s one lesson investors drill into every founder, it’s this: a great product without distribution is just a demo.
For whatever you’re building to be real, people need to use it. And people only keep using great products.
Building CrawlAI’s Distribution Moat
The VC Corner piece put it best: distribution has become the final moat. In AI, product features can be copied or commoditized overnight; how you reach customers often matters more than the code itself.
At CrawlAI, I lived this. We had a compelling product vision (assistants that crawl and transform information into usable knowledge), but without distribution it would’ve stayed invisible. My job was to engineer the first distribution loops.
Finding Our “One Channel”
The playbook wasn’t to be everywhere at once. Following GTMfund’s advice, I focused on one or two channels we could dominate. For us, that meant:
Influencer outreach: 50+ partnerships tested across niches to drive credibility and registrations.
Paid campaigns: Google Ads, LinkedIn, and even Yandex to reach international markets.
By going all-in on these early channels, we avoided dilution and learned quickly what worked.
Building the Analytics Engine
Scaling distribution without visibility is a recipe for wasted spend. So I built a centralized dashboard (on our shared Notion, nothing fancy) to track every step of the funnel:
Traffic sources (ads, influencers, organic)
Conversion to registration
Activation (running first assistant)
Retention (returning usage)
This turned our GTM motion into something measurable. Instead of “more ads,” we asked: which channel is lowering CPR, which message is driving activation, and where are users dropping off?
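To make those questions concrete, here’s roughly the arithmetic the dashboard answered. Every number below is fabricated for illustration; only the shape of the calculation reflects what we tracked:

# Per-channel funnel math with made-up numbers (not our real data).
channels = {
    # channel: (spend_usd, visits, registrations, activations)
    "google_ads":  (500.0, 4000, 40, 12),
    "linkedin":    (300.0, 1200, 30, 15),
    "influencers": (400.0, 2500, 80, 30),
}

for name, (spend, visits, regs, active) in channels.items():
    cpr = spend / regs        # cost per registration
    reg_rate = regs / visits  # visit -> registration
    act_rate = active / regs  # registration -> first assistant run
    print(f"{name:12s} CPR=${cpr:6.2f} reg={reg_rate:.1%} activation={act_rate:.1%}")

# Shifting spend toward the lowest-CPR, highest-activation channels is
# the comparison behind the 36% CPR drop mentioned earlier.
best = min(channels, key=lambda c: channels[c][0] / channels[c][2])
print("lowest-CPR channel:", best)

The point isn’t the specific numbers; it’s that a small, consistent table like this settled every “should we scale this channel?” debate.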
The real insight was this: in AI, distribution isn’t a separate function. It’s the other half of product-market fit. By owning the early channels, shaping the onboarding experience, and engineering feedback loops, we turned CrawlAI’s distribution into something durable.
In other words: we didn’t just acquire users. We built CrawlAI’s first moat.
Customer Feedback and Continuous Calibration
If distribution was CrawlAI’s first moat, then feedback was the second.
In AI products, retention is built on trust, and trust is earned by listening carefully and responding quickly. This is where Lenny Rachitsky’s CC/CD framework (Continuous Calibration and Continuous Development) became invaluable.
Unlike traditional software, AI systems are inherently non-deterministic. The same query can yield different outputs. That means you can’t just ship and assume stability; you need a loop of calibration that keeps the system aligned with user expectations. At CrawlAI, I treated our GTM motion the same way.
Collecting Feedback as Data
We tracked not just registrations and activations, but how users actually engaged with assistants. Did they run more than one query? Did they save or abandon outputs? Each interaction became part of a live dataset that guided our next set of changes.
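For concreteness, here’s one hypothetical shape that engagement data can take. The event kinds and the failure heuristic below are illustrative assumptions, not CrawlAI’s actual schema:

# Hypothetical engagement events; the real tracking was lighter-weight,
# but the signals were the same: repeat queries and saved vs. abandoned outputs.
from dataclasses import dataclass

@dataclass
class AssistantEvent:
    user_id: str
    session_id: str
    kind: str  # "query", "save_output", or "abandon"

events = [
    AssistantEvent("u1", "s1", "query"),
    AssistantEvent("u1", "s1", "query"),
    AssistantEvent("u1", "s1", "save_output"),
    AssistantEvent("u2", "s2", "query"),
    AssistantEvent("u2", "s2", "abandon"),
]

# Group events by session and classify each session's outcome.
sessions: dict[str, list[str]] = {}
for e in events:
    sessions.setdefault(e.session_id, []).append(e.kind)

for sid, kinds in sessions.items():
    queries = kinds.count("query")
    saved = "save_output" in kinds
    # A single query with no save was our main failure signal; it fed
    # directly into the onboarding redesigns described below.
    failure = queries <= 1 and not saved
    print(f"{sid}: {queries} queries, saved={saved}, failure_signal={failure}")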
Iterating Onboarding With Calibration in Mind
Our first versions of onboarding were too generic. Many users didn’t know what to ask CrawlAI to do. By observing failure points (sessions that ended after the first query), we redesigned the templates to guide users toward clearer, higher-value use cases. This was calibration in action: not just improving features, but adjusting the entire flow so the system and users stayed in sync.
Human-in-the-Loop as a Control Mechanism
Just as CC/CD recommends starting with high control and low agency, we structured feedback loops to keep humans in the loop. When outputs missed expectations, we asked users to rate or flag results. This gave us direct error patterns to fix and prevented the system from drifting too far from trust.
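Concretely, that loop can be as simple as rolling user flags up into recurring error patterns for a human to review. The field names and flag labels below are hypothetical:

# Hypothetical human-in-the-loop feedback: users rate or flag outputs,
# and flags are aggregated into recurring error patterns for review.
from collections import Counter

feedback = [
    {"assistant": "docs-bot", "rating": 1, "flag": "hallucinated_link"},
    {"assistant": "docs-bot", "rating": 2, "flag": "hallucinated_link"},
    {"assistant": "site-bot", "rating": 5, "flag": None},
    {"assistant": "docs-bot", "rating": 1, "flag": "stale_content"},
]

# Surface the most common failure modes so a human fixes them before
# the system is given more autonomy.
patterns = Counter(item["flag"] for item in feedback if item["flag"])
for flag, count in patterns.most_common():
    print(f"{flag}: {count} reports")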
Turning Calibration Into a Growth Advantage
By embedding feedback directly into our GTM experiments, CrawlAI’s product and distribution became inseparable. Campaigns brought users in and onboarding converted them, but feedback kept them engaged. Each iteration tightened the loop: the more we calibrated, the more trustworthy the product felt, and the more retention improved.
This is the lesson I take forward: AI GTM isn’t just about acquisition; it’s about calibration. Without feedback loops, growth leaks. With them, every user interaction becomes an investment in stronger retention.
The Future of GTM Engineering
Ronit Yawalkar defines GTM Engineering as building the technical infrastructure that powers scalable customer acquisition, retention, and expansion.
It sits at the intersection of product, data, and revenue operations. And in 2025, it has become the discipline that separates startups that scale from those that stall.
From Scrappy Tests to Systems Thinking
At first, my work was tactical: testing influencers, spinning up ad campaigns, tweaking onboarding flows. But over time, these one-off experiments evolved into systems:
A centralized analytics dashboard that acted as our customer data pipeline.
Iterative onboarding templates that functioned like “feature flags” for activation flows.
Automated reporting loops that tracked CPR, activation rates, and retention curves across cohorts (a sketch of the cohort math follows below).
What started as experiments became infrastructure.
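For illustration, here’s a minimal version of the cohort retention calculation those reporting loops automated. The users and weeks are made up; the real version read from our funnel exports:

# Cohort retention curves: the share of each signup cohort still
# active N weeks after signup. All data here is fabricated.
from collections import defaultdict

# user -> (signup_week, set of weeks in which the user was active)
users = {
    "u1": (0, {0, 1, 2}),
    "u2": (0, {0, 1}),
    "u3": (1, {1}),
    "u4": (1, {1, 2, 3}),
}

cohorts = defaultdict(list)
for signup, active_weeks in users.values():
    cohorts[signup].append(active_weeks)

for week, members in sorted(cohorts.items()):
    # Fraction of the cohort active at each offset after signup.
    curve = [
        sum(1 for aw in members if week + offset in aw) / len(members)
        for offset in range(4)
    ]
    print(f"cohort week {week}: " + "  ".join(f"{r:.0%}" for r in curve))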
Building the Data Foundation
One of the biggest shifts was realizing that every GTM decision depends on reliable data. Clean, real-time funnels mattered more than any single campaign. In CrawlAI’s case, our lightweight data stack (manual at first, then progressively automated) gave us clarity on which channels to scale and which to abandon. This was our first step toward something like a Customer Data Platform (CDP).
GTM as Engineering, Not Just Ops
Traditional marketing operations would have stopped at campaign reporting. GTM Engineering pushed me to think in terms of automation, workflows, and scalability. Instead of asking, “Which influencer converts best?” I started asking, “How do we design a repeatable system that can handle 50 influencer campaigns at once without breaking?”
Lessons for AI GTM
AI products force GTM teams into engineering mode faster than most categories. Because outputs are unpredictable and retention hinges on calibration, you need infrastructure to:
Track nuanced user behavior across prompts and sessions.
Automate responses to churn signals (see the sketch after this list).
Enable rapid experimentation without losing data integrity.
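As a sketch of the churn-signal item above: flag users whose weekly assistant usage drops sharply and queue a re-engagement touch. The threshold and the action stub are assumptions, not production logic:

# Flag sharp weekly usage drops and queue a re-engagement action.
def weekly_usage_drop(history: list[int]) -> bool:
    """True if this week's sessions fell below half of the prior week's."""
    return len(history) >= 2 and history[-1] < 0.5 * history[-2]

def queue_reengagement(user_id: str) -> None:
    # Stand-in for a real action: a templated email, an in-app nudge,
    # or adding the user to a list for manual follow-up.
    print(f"queued re-engagement for {user_id}")

# user -> sessions per week (fabricated numbers)
usage = {"u1": [5, 6, 2], "u2": [3, 3, 4], "u3": [8, 1]}

for user_id, history in usage.items():
    if weekly_usage_drop(history):
        queue_reengagement(user_id)  # fires for u1 and u3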
This is why GTM Engineering felt like the natural next step in my CrawlAI journey. I wasn’t just running ads or collecting feedback; I was designing the systems that made growth measurable, repeatable, and defensible.
Looking Forward
The companies that will win in AI aren’t just those with the best models or the lowest latency. They will be the ones with GTM systems that scale intelligently—where distribution, retention, and feedback are engineered together.
At CrawlAI, I got my first glimpse of that future. And it’s why I believe the role of Product Manager in AI is expanding: tomorrow’s PM must be part engineer, part data scientist, and part GTM architect.
-Henrique

