Amrita Sarkar
AI and Technology

The Product Manager's Toolkit for AI-Driven Decision Making in 2025

A practical framework for product managers to integrate AI tools into research, strategy, and execution workflows — drawn from real experience across beauty tech, fantasy gaming, and cross-border trade.

12 min read

Last month, a founder I mentor asked me a question I keep hearing in different forms: “Should I be using AI in my product process, or is it just hype?” My answer, after four years of integrating AI tools into product work across wildly different contexts, is this: AI is neither hype nor magic. It is a set of capabilities that makes some parts of product management dramatically faster and leaves other parts entirely untouched. Knowing which is which is the real skill.

This post is the framework I have built through trial, error, and iteration — from launching India’s first AI-powered skincare app at Pers Active Lab, to building AI-generated prediction questions at Droit, to coaching a Ugandan founder on AI-integrated workflows during my DHL Fellowship, to using AI tools throughout my MBA research. It is not a tools listicle. It is a practitioner’s map of where AI creates genuine leverage in the product management workflow and where it creates the illusion of progress.

The Three Layers: Research, Strategy, Execution

I think about AI’s role in product management across three layers, each with a different value proposition and a different risk profile.

Layer 1: Research Acceleration — where AI provides the most immediate, least risky value.

Layer 2: Strategy Formulation — where AI is useful as a thinking partner but dangerous as a decision-maker.

Layer 3: Execution Support — where AI can handle volume but needs human quality control.

Let me walk through each with specific examples from my own work.

Layer 1: Research Acceleration

Product managers spend an enormous amount of time on research — market sizing, competitive analysis, user interviews, trend identification. This is where AI delivers the most obvious value, and it is where I recommend every PM start.

Market sizing and data synthesis. When I was working on the Worley energy transition project, we needed to size the UK renewables equipment market. The traditional approach would have been weeks of manual report reading, spreadsheet modelling, and source triangulation. AI tools compressed the initial data gathering phase from days to hours — not by doing the analysis, but by helping me synthesise information from dozens of industry reports, government policy documents, and academic papers into structured summaries that I could then verify and build upon.

The critical word there is “verify.” AI is excellent at extracting and organising information. It is unreliable at evaluating the quality of that information. I caught several instances where the AI presented outdated figures or conflated UK and EU market data. Every number in our final £1.275 billion TAM was manually validated against primary sources. The AI saved time on the gathering. The judgment was entirely human.

Competitive landscape mapping. At Droit, when we were building CricHit — a real-time cricket prediction platform — I needed to understand the competitive landscape of fantasy gaming apps in India. AI tools helped me rapidly map competitors, identify feature gaps, and synthesise user reviews at scale. I fed thousands of Google Play Store reviews through a sentiment analysis pipeline and generated a competitive feature matrix in an afternoon that would have taken a week manually.
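The shape of that review pipeline can be sketched in miniature. Everything below is illustrative — the sentiment lexicon, the feature keywords, and the sample reviews are stand-ins for whatever sentiment model and feature taxonomy you actually plug in — but the pattern is the one I used: score each review, then bucket scores by the features it mentions.

```python
import re

# Toy lexicon and feature taxonomy -- stand-ins for a real sentiment
# model and a real feature list. The pipeline shape is the point:
# score each review, then bucket scores by feature mentioned.
POSITIVE = {"love", "great", "smooth", "fast"}
NEGATIVE = {"crash", "slow", "scam", "bug"}
FEATURES = {
    "withdrawal": ["withdraw", "payout"],
    "live play": ["live", "in-play"],
    "onboarding": ["signup", "kyc"],
}

def score_review(text: str) -> int:
    """Crude sentiment: count of positive words minus negative words."""
    words = re.findall(r"[a-z-]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def feature_sentiment(reviews: list[str]) -> dict[str, list[int]]:
    """Bucket each review's sentiment score under every feature it mentions."""
    buckets: dict[str, list[int]] = {f: [] for f in FEATURES}
    for review in reviews:
        low = review.lower()
        score = score_review(review)
        for feature, keywords in FEATURES.items():
            if any(k in low for k in keywords):
                buckets[feature].append(score)
    return buckets

reviews = [
    "Love the live predictions, super fast",
    "Withdrawal is a scam, app keeps crashing on payout",
    "Signup and KYC were smooth",
]
summary = {f: sum(v) for f, v in feature_sentiment(reviews).items() if v}
print(summary)  # net sentiment per feature
```

Scaled to thousands of real reviews with a real model, the same structure yields the competitive feature matrix in an afternoon — which is exactly the work it is good for.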

But the insight that mattered most — that our competitors were all optimising for pre-match engagement while ignoring the in-play prediction window — came from watching users interact with the existing apps, not from any AI analysis. AI gave me the map. Observation gave me the territory.

User research augmentation. I want to be careful here because this is where I see PMs making the most dangerous mistakes. AI can help you analyse user research data — transcription, thematic coding, sentiment analysis. It should not replace the research itself. When I conducted forty-plus consumer interviews for Greenflip, the insights that shaped the product came from the pauses, the contradictions, and the things people said after the formal questions ended. No AI tool captures that.

What AI does well is the post-interview synthesis. I now use AI to transcribe interviews, identify recurring themes across multiple conversations, and flag patterns I might have missed. But I read every transcript myself, and I make my own judgments about what matters. The AI is an assistant, not an analyst.

Layer 2: Strategy Formulation

This is where I see the most confusion in the market. AI tools can generate strategy documents, competitive frameworks, and positioning statements that look polished and sound plausible. But looking plausible and being correct are different things, and in strategy, the cost of a plausible-but-wrong direction is measured in quarters of wasted effort, not just a bad document.

Where AI helps: structured thinking prompts. I use AI as a structured thinking partner — essentially a very fast rubber duck. When I am formulating a product strategy, I will describe the problem space, the constraints, and the available data to an AI tool and ask it to generate questions I should be asking, assumptions I might be making, and frameworks that might apply. This is genuinely useful. It surfaces blind spots faster than solo thinking.

During my MBA, I used this approach extensively. For a market entry strategy assignment, I described the market dynamics to an AI tool and asked it to challenge my assumptions. It identified three assumptions I had not questioned — about price elasticity, channel dynamics, and regulatory timelines — each of which changed the analysis when I investigated them. The AI did not do the strategy. It improved the questions I was asking.

Where AI fails: judgment calls. Strategy is fundamentally about making choices under uncertainty with incomplete information. AI can model scenarios, but it cannot tell you which scenario to bet on. It cannot weigh the soft factors — your team’s capabilities, your relationship with a key partner, the political dynamics of your organisation — that often determine whether a strategy succeeds or fails.

I learned this the hard way at Pers Active Lab. We were positioning Skin Beauty Pal, India’s first AI-powered skincare app. The AI tools available at the time could analyse competitor positioning and generate positioning statements. But the positioning that actually worked — leading with the “AI-powered” angle in a market where AI was novel and trust-generating — was a judgment call informed by my understanding of the Indian beauty consumer’s aspirational psychology. No AI made that call. A product marketer did.

The synthesis challenge. The hardest part of strategy is synthesis — combining quantitative data, qualitative insights, market dynamics, competitive positioning, and organisational capability into a coherent plan. AI can help with individual components but struggles with the synthesis itself, because synthesis requires weighing incommensurable factors against each other. How do you weigh a 15% market growth rate against a key competitor’s new feature launch against your engineering team’s morale? These are judgment calls that resist quantification, and they are where product managers earn their salary.

Layer 3: Execution Support

Once strategy is set, AI becomes valuable again in the execution phase — handling volume tasks that would otherwise consume disproportionate PM time.

Content generation. Product specifications, release notes, internal communications, knowledge base articles — these are tasks where AI generates a strong first draft that a PM can edit into a final version. During my DHL Fellowship, I helped the founder of Kedi Organics build a content workflow that used AI to generate product descriptions from her voice notes. The output was not publishable as-is, but it reduced her content creation time by roughly 60%.

The key insight is about quality thresholds. For a product specification that will be read by engineers, accuracy matters more than style — use AI for the structure, verify the details yourself. For a social media post, voice and authenticity matter more than precision — use AI for the draft, infuse your personality in the edit.

Data analysis and reporting. AI tools are increasingly capable of querying databases, generating visualisations, and identifying trends in product metrics. I use them to build first-pass dashboards and to identify anomalies in data that warrant deeper investigation. At Droit, we used automated analysis to flag engagement patterns that suggested a segment of users was engaging with prediction questions but not completing the transaction — an insight that led to a UX intervention that improved conversion by 8%.
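A first pass at that kind of anomaly flagging can be as simple as a deviation check on a daily metric. This is a minimal sketch, not what we ran at Droit — the threshold and the sample conversion rates are invented — but it illustrates the "flag it for a human to investigate" pattern:

```python
import statistics

def flag_anomalies(daily_rates: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of days whose rate deviates more than `threshold`
    standard deviations from the mean -- a crude first pass that decides
    what a human should look at, not what the answer is."""
    mean = statistics.mean(daily_rates)
    stdev = statistics.pstdev(daily_rates)
    if stdev == 0:
        return []
    return [i for i, rate in enumerate(daily_rates)
            if abs(rate - mean) / stdev > threshold]

# Invented daily conversion rates; day 5 dips sharply.
rates = [0.10, 0.11, 0.10, 0.09, 0.10, 0.03, 0.10]
print(flag_anomalies(rates))  # [5]
```

The flagged day is where the deeper investigation — segment cuts, funnel replays, user interviews — actually starts.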

Workflow automation. The most underrated AI application for PMs is in workflow automation — connecting tools, triggering notifications, summarising meetings, and managing the operational overhead that consumes 30-40% of most PMs’ time. I have built simple automations that summarise daily customer feedback into a morning briefing, flag P0 issues from support tickets, and generate weekly stakeholder update drafts from Jira activity.
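For a flavour of what those automations look like under the hood, here is a minimal sketch of the ticket-triage and briefing step. The `Ticket` fields, the P0 signal phrases, and the sample tickets are all hypothetical — substitute whatever your support tool's API actually returns:

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    # Hypothetical shape -- a real support tool's fields will differ.
    subject: str
    body: str
    priority: str  # "P0" .. "P3"

# Phrases that escalate a ticket regardless of its labelled priority.
P0_SIGNALS = ("payment failed", "cannot log in", "data loss")

def triage(tickets: list[Ticket]) -> tuple[list[Ticket], list[Ticket]]:
    """Split tickets into urgent (P0 or matching a critical phrase) and routine."""
    urgent, routine = [], []
    for t in tickets:
        text = f"{t.subject} {t.body}".lower()
        if t.priority == "P0" or any(s in text for s in P0_SIGNALS):
            urgent.append(t)
        else:
            routine.append(t)
    return urgent, routine

def morning_briefing(tickets: list[Ticket]) -> str:
    """Plain-text digest suitable for a Slack or email briefing."""
    urgent, routine = triage(tickets)
    lines = [f"Morning briefing: {len(tickets)} tickets overnight",
             f"URGENT ({len(urgent)}):"]
    lines += [f"  - {t.subject}" for t in urgent]
    lines.append(f"Routine: {len(routine)} (summarised separately)")
    return "\n".join(lines)

tickets = [
    Ticket("Outage", "app down for everyone", "P0"),
    Ticket("Checkout issue", "payment failed twice", "P2"),
    Ticket("Question", "how do I change my avatar?", "P3"),
]
print(morning_briefing(tickets))
```

In practice the "summarised separately" step is where an AI summariser plugs in; the escalation logic stays deterministic so that nothing urgent depends on a model's mood.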

These are not glamorous applications. They are the AI equivalent of a dishwasher — they free up time for the work that actually matters.

The Human Layer: What AI Cannot Replace

After four years of integrating AI into my product work, I am more convinced than ever that the core of product management is irreducibly human. Here is what I mean.

Judgment. Every product decision involves weighing trade-offs that cannot be fully quantified. Should we ship this feature now with known edge cases or wait two weeks for a complete solution? Should we expand to a new market or deepen engagement in our current one? Should we build internally or partner? AI can inform these decisions with data and analysis. It cannot make them, because making them requires accepting responsibility for the outcome — something only a human can do.

Empathy. Understanding users — really understanding them, not just analysing their behaviour — requires empathy. It requires sitting in a room with someone and sensing the frustration behind their words, the aspiration underneath their feature request, the context that turns a minor inconvenience into a deal-breaker. When I interviewed artisan suppliers for Greenflip, the most important insight was not in anything they said. It was in the way they held their products — with pride, but also with anxiety about whether anyone would value their work as much as they did. No AI detects that.

Stakeholder alignment. Product management is, at its core, a coordination function. You align engineering, design, marketing, sales, leadership, and users around a shared direction. This requires political skill, relational intelligence, and the ability to hold multiple competing narratives simultaneously. AI has no model for this. The PM who can align a sceptical VP of engineering, an eager marketing team, and a cautious legal department around a risky product bet is doing work that no tool can replicate.

Ethics. As AI becomes more integrated into product decisions, the ethical dimension becomes more important, not less. Who benefits from this product? Who might be harmed? What data are we collecting and how are we using it? At Pers Active Lab, building an AI-powered skincare app that used facial image analysis, we had to think carefully about data privacy, beauty standard bias in training data, and the psychological impact of algorithmic skin assessments. These are not technical questions. They are human questions that require human deliberation.

A Practical Framework: The AI Integration Matrix

Here is the framework I use to decide where to apply AI in any product workflow. I call it the AI Integration Matrix, and it has two axes.

Axis 1: Cognitive complexity. How much judgment, context, and synthesis does the task require? Tasks that are primarily data-driven and rule-based sit low on this axis. Tasks that require weighing incommensurable factors and making judgment calls sit high.

Axis 2: Cost of error. What happens if the output is wrong? Tasks where errors are easily caught and cheaply corrected sit low. Tasks where errors propagate and compound sit high.

This gives you four quadrants.

Low complexity, low cost of error: Automate fully. Examples: meeting summaries, first-draft release notes, data formatting, routine communications. Let AI handle these without close supervision.

Low complexity, high cost of error: Automate with verification. Examples: data analysis for executive reporting, competitive intelligence, market sizing inputs. AI does the heavy lifting, but a human validates every output before it is used.

High complexity, low cost of error: Use AI as a thinking partner. Examples: brainstorming product names, exploring positioning angles, generating hypotheses for A/B tests. The AI’s suggestions are starting points, not endpoints, and the cost of a bad suggestion is simply the time to generate another one.

High complexity, high cost of error: Keep human. Examples: product strategy decisions, pricing models, stakeholder negotiations, ethical assessments. AI can inform the inputs, but the decision and its accountability belong to a human.
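Because the matrix is just a two-axis lookup, it can be written down directly — handy as a checklist in a team wiki or a triage script. A minimal sketch:

```python
def integration_mode(cognitive_complexity: str, cost_of_error: str) -> str:
    """Map a task's position on the two axes ('low' or 'high' each)
    to the recommended level of AI involvement."""
    modes = {
        ("low", "low"): "automate fully",
        ("low", "high"): "automate with verification",
        ("high", "low"): "AI as thinking partner",
        ("high", "high"): "keep human",
    }
    return modes[(cognitive_complexity, cost_of_error)]

# e.g. meeting summaries vs. pricing decisions
print(integration_mode("low", "low"))    # automate fully
print(integration_mode("high", "high"))  # keep human
```

The code is trivial by design: the hard part is placing a task honestly on the two axes, and that judgment stays with you.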

Looking Forward

We are in a period where the capabilities of AI tools are expanding faster than most product managers’ ability to integrate them thoughtfully. The temptation is to use AI everywhere because it is available, or to avoid it everywhere because the hype is exhausting. Both responses are wrong.

The product managers who will thrive are those who develop a discriminating sense of where AI creates genuine leverage — and where it creates the comfortable illusion of productivity while leaving the hard work undone. The framework above is my current best thinking on that distinction, informed by building an AI skincare app, shipping AI-generated content at scale, coaching a founder through AI integration, and using AI tools in academic research.

It will evolve. The tools will get better. The capabilities will expand. But I believe the underlying principle will hold: AI augments product management. It does not replace it. The human layer — judgment, empathy, stakeholder alignment, ethical reasoning — is not a limitation to be automated away. It is the core competency that makes product management a craft rather than a process.

If you are a PM figuring out where to start with AI, start with Layer 1 — research acceleration. Pick one research task you do repeatedly, apply an AI tool to it, and evaluate the output honestly. Then expand from there, guided by the Integration Matrix and your own developing sense of where the technology helps and where it flatters.

The best tools are the ones that make you better at your job without making you forget what your job actually is.

AI in marketing · product management · AI tools for PMs · decision making · generative AI · product strategy
Amrita Sarkar

Product Manager | Growth & Marketplaces | MBA

Product Manager with 13+ years of experience spanning advertising (McCann, Publicis, M&C Saatchi), two startups (PitchNDA, Greenflip), and product leadership across fantasy gaming, telecom, and beauty tech. Chartered Manager. MBA from the University of Glasgow Adam Smith Business School. Y Combinator Startup School graduate. Recognised among India's Top 200 women-driven startups by Niti Aayog.

Connect on LinkedIn →
