Amrita Sarkar
Product Strategy

From Stanford to the Cricket Pitch: Product Management Meets Fantasy Gaming

How applying Stanford's product management frameworks to a 0-to-1 fantasy cricket prediction platform achieved 1.2M monthly impressions and 3x ROAS.

Amrita Sarkar · 10 min read

There is a moment in product discovery that I live for — the moment when a user tells you something that completely upends your assumptions. For me, it happened in the third week of user research for what would become CricHit, a real-time cricket prediction platform.

We had been interviewing fantasy sports players, asking them about their experience with existing platforms. The assumption going in was that users wanted more control: better team selection tools, deeper statistics, more player data. That is what competitors were building. More depth. More complexity.

Then a 23-year-old engineering student in Pune said something that changed everything: “I don’t want to spend thirty minutes building a team before the match. I want to do something during the match. Something quick. Something that makes watching the match more exciting.”

He wanted immediacy. Not depth, not complexity — immediacy. He wanted the dopamine of prediction without the homework of team building. That single insight became the foundation for CricHit’s entire product strategy.

The Stanford Foundation

Three months before that conversation in Pune, I had enrolled in Stanford University’s “Transforming Opportunities into Great Products” programme through the School of Engineering. It was 2021, the world was still navigating the aftermath of COVID, and the programme was delivered online. What it lacked in campus experience, it more than made up for in intellectual rigour.

The programme covered the full arc of product management: discovery, definition, development, and delivery. But three frameworks in particular became load-bearing pillars in everything I built at Droit.

Framework 1: Opportunity Assessment

Stanford’s opportunity assessment framework asks four questions before you commit to building anything:

  1. Exactly what problem will this solve? (Value proposition)
  2. For whom do we solve that problem? (Target market)
  3. How big is the opportunity? (Market size)
  4. What alternatives are there? (Competitive landscape)

For CricHit, the answers were:

  1. Fantasy cricket requires too much upfront investment (time and cognitive load) for casual fans who want an engaging second-screen experience during matches.
  2. Cricket fans aged 18-35 who watch IPL and international matches on mobile but do not use existing fantasy platforms because the barrier to entry is too high.
  3. India’s fantasy sports market was valued at approximately $2.5 billion in 2021, growing at 30%+ annually. The second-screen engagement market was largely untapped.
  4. Dream11, My11Circle, and MPL dominated traditional fantasy sports. But none offered real-time, ball-by-ball engagement. The competitive gap was not in fantasy sports generally — it was in the specific job of “make watching this match more exciting right now.”

That fourth answer was the strategic insight. We were not competing with Dream11. We were competing with Twitter, Instagram, and WhatsApp — the other things fans were doing on their phones while watching cricket. Our real competitor was passive scrolling.

Framework 2: Product Discovery through Rapid Prototyping

The Stanford programme emphasised that product discovery is not about finding the right answer. It is about eliminating wrong answers as quickly and cheaply as possible. The tools for this are prototypes — not code, but experiences that test assumptions.

Before writing a single line of code, we ran three experiments:

Experiment 1: The WhatsApp Prediction Game. We created a WhatsApp group of 50 cricket fans and ran a manual prediction game during an IPL match. I posted a question after every over (“Will the next over have a six? Yes/No”). Users replied. We tallied scores manually. The engagement was extraordinary. People who had never used a fantasy sports app were checking in every two minutes. This validated the core mechanic: ball-by-ball predictions during live matches.

Experiment 2: The Reward Sensitivity Test. We ran the same WhatsApp game for three matches, each time varying the reward structure. Match 1: bragging rights only. Match 2: small cash prizes for top predictors. Match 3: a leaderboard with social sharing. Match 2 had the highest engagement, but Match 3 had the highest retention (people came back for the next match). This told us that the initial hook could be monetary, but sustained engagement required social mechanics.

Experiment 3: The Question Complexity Test. We varied question difficulty. Simple binary questions (“Will the next ball be a dot ball?”) versus complex multi-variable questions (“Will Virat Kohli score more than 15 runs in the next 3 overs?”). Simple questions had 3x more participation. Users wanted to feel like they were making a prediction, not solving a statistics problem.

Three experiments, zero engineering cost, two weeks. We had validated our core hypotheses before committing any development resources. That approach directly reflected the Stanford emphasis on evidence over opinion in product decisions.

Framework 3: Leading vs. Lagging Indicators

This was perhaps the most practically useful framework. Stanford’s programme distinguished between:

  • Lagging indicators: Outcomes you want to achieve (revenue, monthly active users, retention). These tell you whether you succeeded, but they are backward-looking.
  • Leading indicators: Behaviours that predict those outcomes. These are forward-looking and actionable.

For CricHit, we identified our leading indicators through analysis of early user data:

  Leading indicator → Predicted outcome

  • Questions answered in first match → 7-day retention
  • Leaderboard checks per session → 30-day retention
  • Social shares after a match → organic acquisition
  • Time between match start and first prediction → session depth

The most powerful leading indicator turned out to be “questions answered in the first three overs.” Users who answered at least 5 questions in the first three overs of a match had a 65% chance of returning for the next match. Users who answered fewer than 3 had less than a 15% chance. This insight shaped our entire onboarding and notification strategy: we optimised ruthlessly for getting users to make their first predictions as early in the match as possible.
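The activation analysis above can be sketched in a few lines. This is a simplified illustration, not CricHit's actual analytics code: the event schema (user, match, over, whether the user returned for the next match) and the sample data are assumptions for the example.

```python
from collections import defaultdict

# Hypothetical event log: (user_id, match_id, over_number, returned_next_match).
# This schema is illustrative -- the real pipeline is not public.
events = [
    ("u1", "m1", 1, True), ("u1", "m1", 2, True), ("u1", "m1", 2, True),
    ("u1", "m1", 3, True), ("u1", "m1", 3, True),
    ("u2", "m1", 1, False), ("u2", "m1", 5, False),
]

def retention_by_activation(events, over_cutoff=3, threshold=5):
    """Bucket (user, match) pairs by predictions made in the first
    `over_cutoff` overs, then compare next-match return rates."""
    counts = defaultdict(int)
    returned = {}
    for user, match, over, came_back in events:
        if over <= over_cutoff:
            counts[(user, match)] += 1
        returned[(user, match)] = came_back
    buckets = {"activated": [], "not_activated": []}
    for key, came_back in returned.items():
        bucket = "activated" if counts[key] >= threshold else "not_activated"
        buckets[bucket].append(came_back)
    # Return rate per bucket; None if a bucket is empty.
    return {b: (sum(v) / len(v) if v else None) for b, v in buckets.items()}
```

Run against real logs, a split like this is what surfaces a threshold worth optimising onboarding around: you compare return rates across candidate cutoffs and keep the one with the widest gap.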

Building CricHit: The Product Decisions

The Differentiated Vision

Based on our discovery work, CricHit’s product vision crystallised around three principles:

  1. Real-time, not pre-match. While competitors required 30 minutes of team building before a match, our engagement started at ball one.
  2. Simple choices, not complex analysis. Binary and multiple-choice predictions that anyone could answer in seconds.
  3. AI-generated questions. The questions themselves were generated by an AI engine that analysed match context (run rate, recent events, pitch conditions) to create contextually relevant predictions. This made the experience dynamic and unpredictable, even for returning users.

The AI question engine was the most technically ambitious component. It needed to process real-time ball-by-ball data from cricket data feeds, assess match context, and generate engaging prediction questions within seconds. The engineering team built it as a pipeline: data ingestion, context assessment, question template selection, difficulty calibration, and delivery. Each question felt spontaneous to the user, but behind it was a carefully designed system.
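The template-selection stage of such a pipeline might look like the sketch below. The context fields, template strings, and trigger rules are all invented for illustration; CricHit's actual engine and calibration logic are not public.

```python
from dataclasses import dataclass

@dataclass
class MatchContext:
    run_rate: float           # current runs per over
    balls_since_boundary: int # balls since the last four or six
    over: int

# Hypothetical template set -- a real engine would hold many more,
# each with difficulty metadata for the calibration stage.
TEMPLATES = {
    "boundary_due": "Will the next over have a six? Yes/No",
    "dot_ball": "Will the next ball be a dot ball? Yes/No",
    "wicket_watch": "Will a wicket fall in the next over? Yes/No",
}

def select_question(ctx: MatchContext) -> str:
    """Pick a template from ingested ball-by-ball context (simplified).
    Rules here are illustrative stand-ins for a learned/tuned policy."""
    if ctx.balls_since_boundary >= 10:
        return TEMPLATES["boundary_due"]  # pressure building, boundary narrative
    if ctx.run_rate < 6.0:
        return TEMPLATES["dot_ball"]      # slow phase: dot-ball questions fit
    return TEMPLATES["wicket_watch"]
```

The point of the design is that each stage (ingestion, context assessment, selection, calibration, delivery) can be tested and tuned independently, even though the user only ever sees a single timely question.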

Balancing Competing Constraints

Building CricHit meant navigating four simultaneous constraints that constantly pulled in different directions:

Regulatory compliance. India’s gaming regulation landscape is fragmented. Different states have different laws regarding fantasy sports, skill-based gaming, and prize-based contests. Some states ban certain types of gaming entirely. We architected a geo-aware availability system that adjusted the product experience based on the user’s location, without fragmenting the core UX. I go deeper into this in my post on building products in regulated markets.
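A geo-aware availability layer often reduces to a feature-flag lookup keyed by location. The sketch below is a minimal illustration under assumed rules; the state codes and flags are examples, not legal guidance and not CricHit's actual compliance table.

```python
# Illustrative per-state feature flags -- NOT a real compliance table.
STATE_RULES = {
    "MH": {"predictions": True, "cash_prizes": True},
    "KA": {"predictions": True, "cash_prizes": True},
    "AS": {"predictions": True, "cash_prizes": False},  # free-play only
}

# Fail safe: unknown or unresolvable locations get the most
# restrictive configuration rather than the most permissive one.
DEFAULT_RULES = {"predictions": True, "cash_prizes": False}

def experience_for(state_code: str) -> dict:
    """Return the feature flags for a user's resolved state."""
    return STATE_RULES.get(state_code, DEFAULT_RULES)
```

Keeping the rules in data rather than branching logic is what lets the core UX stay unified: the same screens render everywhere, with prize mechanics toggled per region.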

Real-time data reliability. A prediction platform that delivers questions three minutes after the relevant event is worthless. We needed sub-second latency on cricket data feeds and built redundancy with multiple data providers to ensure continuity during high-traffic IPL matches.

Payment security. Any platform involving monetary stakes needs robust payment infrastructure, KYC compliance, and fraud detection. We integrated with established payment providers and implemented withdrawal limits and identity verification as part of the core user flow, not as an afterthought.

Growth mechanics. A two-sided platform (users and advertisers) needs to solve the cold start problem. We could not attract advertisers without users, and we could not invest heavily in user acquisition without advertiser revenue.

The Growth Engine

Solving the cold start problem required creative thinking. We could not outspend Dream11 on television advertising. We needed channels where our specific differentiation — real-time, during-the-match engagement — gave us an asymmetric advantage.

University Partnerships

We partnered with cricket clubs and student groups at universities in Maharashtra, Karnataka, and Delhi. The proposition was simple: we would sponsor their cricket tournaments, and they would run CricHit prediction contests during IPL viewing parties. This was hyper-local, low-cost, and perfectly targeted. University students were our core demographic, and the social dynamics of watching cricket together in a hostel common room amplified the platform’s social features.

Influencer Strategy

Rather than paying cricket celebrities (which we could not afford), we identified micro-influencers: cricket content creators on YouTube and Instagram with 10,000-100,000 followers. We gave them early access and worked with them to create content around their CricHit predictions during live matches. The content was authentic because they were genuinely using the product, not reading a script. A YouTuber streaming his CricHit predictions during an IPL match was far more compelling than a polished advertisement.

On paid channels, we focused on two: Google App Campaigns (for install volume) and Facebook/Instagram (for targeted reach). The critical insight was timing. We ran ads only during live cricket matches and in the two hours preceding them. This concentrated our budget on moments of highest intent and kept our cost per install 40% below what competitors were reportedly paying for always-on campaigns.
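The timing rule is simple enough to express directly. A minimal sketch, assuming a known match schedule and a fixed pre-match lead and match duration (both invented parameters for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical match schedule (naive local times, for illustration only).
matches = [datetime(2022, 4, 10, 19, 30)]

def ads_should_run(now, schedule, lead=timedelta(hours=2),
                   duration=timedelta(hours=4)):
    """True only inside a pre-match window or while a match is live,
    so spend concentrates on the highest-intent moments."""
    return any(start - lead <= now <= start + duration for start in schedule)
```

In practice this gate would sit in front of whatever campaign-activation API the ad platform exposes, pausing campaigns outside match windows.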

The Results

Six months after launch, with three IPL seasons of iteration behind us:

  • 1.2 million monthly impressions across the platform and social channels
  • 3x return on ad spend on paid acquisition channels
  • 65% first-match-to-second-match retention for users who hit our activation threshold (5+ predictions in first 3 overs)
  • Organic acquisition growing to approximately 35% of total new users, driven by social sharing mechanics and university partnerships
  • Question engagement rate averaging 78% per active session (users answered nearly 4 out of 5 questions presented)

The 3x ROAS was the number that mattered most to the business. It meant that every rupee spent on acquisition generated three rupees in revenue, making growth self-funding at scale.

What Stanford Taught Me That Experience Could Not

I had been building products since PitchNDA in 2016. I had learned the hard way about product-market fit, user research, and growth. The Stanford programme did not teach me these concepts for the first time. What it taught me was discipline.

Before Stanford, my approach to product decisions was informed but unstructured. I would do user research, but not with a rigorous hypothesis-testing mindset. I would track metrics, but without distinguishing between leading and lagging indicators. I would assess opportunities, but not with a systematic framework that could be communicated to stakeholders.

The programme gave me a shared language and a structured process. When I told our engineering lead at Droit that we were going to validate our hypothesis through a WhatsApp prototype before committing engineering resources, he understood exactly what I meant and why. When I told our investor that we were tracking five leading indicators that predicted retention, she could see the analytical rigour behind our growth strategy.

Looking Forward

The fantasy gaming market in India continues to evolve rapidly, with regulatory frameworks becoming clearer and user expectations rising. The lesson from CricHit that I carry with me is that the biggest product opportunities are often not in doing the same thing better than competitors, but in redefining what the “same thing” even means.

Every fantasy sports company was competing to build the best team-building experience. We asked whether team-building was the right job to be done in the first place. That willingness to question the category definition — to step back from how to win and ask whether you are playing the right game — is the most valuable skill in product management. No framework can substitute for it, but good frameworks can help you act on it systematically.

The regulatory challenges we navigated at Droit added another layer of complexity and learning that shaped how I think about building products under constraint. Building within boundaries, it turns out, often produces more creative solutions than building in the open.

product-led growth · product management · fantasy gaming · Stanford · user research · experimentation
Amrita Sarkar

Product Manager | Growth & Marketplaces | MBA

Product Manager with 13+ years of experience spanning advertising (McCann, Publicis, M&C Saatchi), two startups (PitchNDA, Greenflip), and product leadership across fantasy gaming, telecom, and beauty tech. Chartered Manager. MBA from the University of Glasgow Adam Smith Business School. Y Combinator Startup School graduate. Recognised among India's Top 200 women-driven startups by Niti Aayog.

Connect on LinkedIn →
