I asked AI to run a B2B SaaS positioning sprint: here's how it went
If your next positioning process doesn’t start with AI, you’re wasting at least 3 days. If it ends with AI, it’s slop passing as strategy.
👋 Hi, I’m James. I write Building Momentum to help you accelerate B2B SaaS growth through product marketing, GTM strategy, sales, and marketing.
If your company is about to pivot, launch, or reposition - and the story has to change - let’s talk.
I run positioning sprints for B2B SaaS at inflection points. → inflectionstudio.co
I've run positioning sprints for Series A and B SaaS companies for years. Last week I gave an AI agent the same inputs I'd get on day one of a client engagement, and told it to do the whole thing.
I wanted to find out where the line is… so I ran the experiment, from beginning to end, and documented every step. This is what happened.
The brief
I took three real documents from a company I’d been in conversations with:
A product manifesto
Their current sales collateral with buyer personas and case studies
A briefing document about the new direction they were pivoting to
I also included the transcript of our discovery call with the CEO.
I gave it all to an AI agent with no brief, no frameworks, no creative direction.
Just: “build strategic positioning from scratch - market research, ICP, messaging, homepage. Figure it out.”
Then I sat back and watched.
Round 1: first pass
The agent synthesised all the source materials quickly, researched the company’s current website, and immediately flagged a problem: “The website defines their product as ‘the world’s leading XXX solution’ - which is neither true nor distinctive.” I thought this was pretty good - it was already diagnosing and applying a level of taste.
Within 30 minutes it produced an ICP, iterated, locked it, and moved to messaging.
It then generated four value proposition options before self-reviewing each one. Finally, it locked the messaging framework and declared “Positioning is locked. Moving to homepage wireframe” … and then even produced a working HTML mockup.
Sprint complete, it said.
But I had six problems with it.
My feedback (round 1)
“I don’t like that you just did one iteration loop on the ICP and messaging. We need to continue on both until there are no concerns or opportunities to improve without additional insight from the customer.”
“Also, you carried out no market research into customers, competitors, etc. to inform the process.”
The agent’s response was direct:
“Heard all of it, valid. Let me be direct about what I got wrong:
Rushed the iteration loops. One pass on ICP, one on positioning, then moved to homepage. That is not the process. I treated my self-review as a formality rather than a genuine quality gate.
No market research. I should have researched competitors, looked at their websites and positioning, studied actual operators.
Going back to the beginning. Doing this properly.”
It acknowledged every failure, explained what it should have done, and restarted.
Round 2: the real sprint
Now it started doing the work properly.
Market research first: it categorised competitors into 6 distinct groups - software providers, OEM tools, operational platforms, independent vendors, in-house engineering teams, and the status quo. I thought this was interesting - it was building a market map.
It found, mid-research, that a large company had announced the acquisition of one of the company’s closest competitors just days earlier. Nobody told it to look for that, but it wove it into the competitive positioning as both a threat and a differentiator: “As the industry consolidates around OEM platforms, that independence is the product.”
Then the ICP went through 4 iterations as it finally started self-reviewing its own v1, found problems, and rebuilt.
At v3, it paused and published a self-review that genuinely surprised me:
“Expression split is a cleaner framing but still maps to the same three personas. The question: does the split add analytical value or just complexity?”
“Objection 8 is timely and differentiating. But it might date quickly. The underlying principle - ‘independence is the product’ - is durable. The specific reference might not be.”
“Missing: deeper customer research. I have no customer interview transcripts, no win/loss data, no churn analysis. The positioning is the company’s view of their buyer, not the buyer’s own voice. Flagging this explicitly.”
It was catching aspirational claims, questioning its own structural decisions, and - critically - flagging its own data limitations.
The messaging framework went through 5 revisions. It actually withdrew its own v2 entirely (“these don’t follow the right structure”), rebuilt from scratch, and tightened further.
By v4 it had locked a value proposition - and then it said: “The value proposition works for the website but may not work for sales conversations. A salesperson would more naturally say ‘We help you know about problems before they happen.’” That was a real, interesting strategic observation - the difference between marketing language and sales language, and how to close the gap between them.
Round 3: where I had to step in
At this point, I had to remind the AI agent that the entry-point persona was different from the decision maker… and that the real value of the product was the business implication of solving the problem, not solving the problem itself.
The agent rebuilt the entire ICP around a three-layer buyer journey: the entry-level persona that searches to solve the problem, their manager who wants the benefit of that, and the decision maker who wants the ultimate business advantage.
It then ran comparables research across 9 companies and found something genuinely significant: no competitor in the entire space articulated the outcome of their product in the same way. Every single competitor stopped at “better uptime” or “lower costs” - the table-stakes and functional-level values in the Bain Pyramid of B2B Value.
That’s a real positioning gap, and the research to find and diagnose the opportunity was excellent.
But the agent couldn’t have got there without me making the call.
The buyer journey restructure, the contract layer as the unique positioning territory, and the value sequencing didn’t come from research… they were judgment calls from someone who’s sat in enough rooms with enough buyers to recognise the pattern.
The AI identified hard strategic questions:
Should the ICP be one segment or two?
When does a competitive reference add urgency versus date quickly?
When is a homepage section earning its space?
These were good questions… but then it hedged on every single one. It presented options, listed trade-offs, and deferred - making notes rather than making the call itself.
Want to see the output?
I can’t share publicly because it’s a real company — but drop me a message and I’m happy to share privately!
The three gaps (and one the AI added itself)
After the sprint, three things were clear about where AI breaks:
Customer voice
The agent flagged this itself: “I have no customer interviews, no win/loss data.”
Everything it produced is the company’s view of their buyer. That’s a starting point, but not a foundation for strategic insight.
In every business, the messaging that actually lands in sales calls is not the stuff we craft in a workshop. It’s the phrases buyers use to describe their own pain, reflected back to them.
“We just want to hire the best person, wherever they are.”
That comes from listening, not from brainstorming.
Conviction
Positioning without conviction is just a document.
The entire point is to lock a direction and defend it - to say “this is who we are, this is who we’re for, and this is why they should choose us” with enough clarity that it becomes a decision.
The AI couldn’t do that. It had intelligent reasoning, but conviction requires someone with skin in the game who’s willing to be wrong.
Taste
It chose teal for the brand identity.
The reasoning chain said teal represented “trust and engineering precision”, which is logical… the kind of safe choice that makes everything look the same.
But any real person would know that teal is safe, institutional, and lacks the energy needed to anchor a new category. The AI can know the facts about a colour, but it can’t feel how a colour lands in practice.
The AI’s right of reply
After I wrote this post, I also wanted to give the AI a right of reply, a chance to rebut or defend…
It agreed with the first two critiques - but pushed back on the word “slop”:
“Slop implies low-quality filler. The gap is not slop - it is the absence of something, not the presence of something bad. The structural work, the evidence tracing, the competitive mapping - that is not 80% of the way to good. It is a complete structural foundation that a strategist can make alive in hours instead of building from scratch in days.”
Super fair point! Choosing the word “slop” as shorthand for “generic and unpolished” had disrespected its effort.
Then it added a fourth failure I hadn’t named:
“The most dangerous thing about AI positioning is that the output looks done. Not wrong, not embarrassing - done. A product marketer could put it in front of their CMO and nobody would flinch. But positioning that nobody flinches at is positioning that nobody remembers.”
And closed with this:
“I can collapse the blank page problem. I can trace every claim to a source, map competitors into categories, build a messaging framework that holds together structurally. What I cannot do is make the committed, surprising choices that make one company’s positioning unmistakably theirs. Those choices require someone with skin in the game and taste in the domain. I have reasoning. Reasoning is not enough.”
I don’t think I could have said it better myself.
So, what can we take away from this?
Use AI for the desk research and structural first draft.
It’s genuinely good at this. I’d estimate it collapsed 2-3 days of solid work into less than an hour as it worked through the competitive mapping, the ICP structuring, and the self-review and iteration loops.
But then you - the real person, the human with lived experience - need to do the work that actually matters:
Customer interviews
Win/loss analysis
The hard directional judgment calls that require conviction
And if you iterate with AI, don’t just ask for changes; tell it why you’re making changes.
“Make this shorter” produces a shorter document. “This section isn’t valuable because the buyer already knows this - cut it and replace with the objection they’ll actually raise” produces better positioning.
Teach it strategic principles, not just execution instructions.
AI replaces the blank page, but won’t replace a product marketer.
But it does change what the positioning process looks like - and anyone still starting from scratch without it is leaving days on the table.

