Building a Product in a Regulated Market: Lessons from Fantasy Gaming in India
How to build product foundations that balance competing constraints: regulatory compliance across fragmented state laws, real-time data, payment security, and growth mechanics.
A product manager I mentor recently asked me a question that took me back to my first month at Droit: “How do you build fast when the regulations keep changing?” She was building a fintech product and was frustrated that compliance requirements kept shifting the goalposts on her roadmap. My answer surprised her: “You don’t build fast despite the regulations. You design your architecture so that regulatory changes are a configuration update, not a rebuild.”
That answer was earned over six hard months building CricHit, a real-time cricket prediction platform, in one of the most complex regulatory environments imaginable: fantasy gaming in India.
The Regulatory Landscape: Fragmented by Design
India does not have a single, unified framework for online gaming. Instead, gaming regulation is a state subject, which means each of India’s 28 states and 8 union territories can — and many do — have their own rules. When we began building CricHit in 2021, the regulatory landscape looked roughly like this:
- States that permitted skill-based gaming with real-money stakes (most states, based on Supreme Court precedent that fantasy sports constitute games of skill)
- States with explicit prohibitions or restrictions (Andhra Pradesh, Telangana, Assam, Odisha, and Sikkim had varying degrees of restriction)
- States with ambiguous or evolving regulation (several states were actively considering new legislation)
To make things more complex, the definition of what constituted “skill-based” versus “chance-based” gaming was not consistently applied. Our real-time, ball-by-ball prediction format was novel. It did not fit neatly into existing frameworks designed for traditional fantasy team-building games. We needed legal opinions in each key state to confirm our model qualified as skill-based gaming.
This was not a one-time exercise. Regulatory updates could arrive mid-cricket-season. A state government could announce new restrictions with weeks or even days of notice. We needed to build a product that could adapt to regulatory changes without requiring a code deployment every time a state updated its rules.
Architecting for Regulatory Flexibility
The critical architectural decision was to separate availability logic from core product logic. We built what I called an “availability framework” — a configuration layer that sat between the user and the product experience.
How the Availability Framework Worked
Every user interaction started with a location check. Based on the user’s state, the availability framework determined:
- What product features were accessible (full game with monetary stakes, game without monetary stakes, or no access)
- What payment methods were available (some states had restrictions on certain payment types)
- What KYC level was required (different states had different identity verification requirements)
- What messaging was displayed (legal disclaimers, age verification prompts, and responsible gaming messages varied by jurisdiction)
Critically, all of this was driven by a configuration file, not hardcoded logic. When a state changed its regulations, we updated a JSON configuration file that specified the rules for that state. No code changes. No app store resubmission. No engineering sprint required.
This sounds simple in retrospect, but it required a deliberate product decision early on. The temptation was to build for the majority case (most states allowed full access) and handle exceptions later. If we had done that, every regulatory change would have meant emergency engineering work and potential service disruptions. By investing in the availability framework upfront, we turned regulatory compliance from a recurring crisis into a routine configuration update.
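To make the idea concrete, here is a minimal sketch of what a configuration-driven availability check can look like. The state codes, field names, and rule values below are hypothetical illustrations, not CricHit's actual configuration schema:

```python
# Sketch of a configuration-driven availability lookup. A state's rules live
# in a JSON document that can be updated without a code deployment; unknown
# states fall back to a conservative default. All values are illustrative.
import json

DEFAULT_RULES = {"tier": "full", "kyc_level": "standard", "payments": ["upi", "card"]}

STATE_CONFIG = json.loads("""
{
  "AP": {"tier": "play",      "kyc_level": "basic",    "payments": []},
  "TS": {"tier": "spectator", "kyc_level": "none",     "payments": []},
  "MH": {"tier": "full",      "kyc_level": "standard", "payments": ["upi", "card"]}
}
""")

def availability_for(state_code: str) -> dict:
    """Resolve the product rules for a user's state from configuration."""
    return STATE_CONFIG.get(state_code, DEFAULT_RULES)

print(availability_for("AP")["tier"])  # play
```

When a state's regulations change, only the JSON document changes; the lookup code and everything downstream of it stay untouched.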
Preserving UX Across Restrictions
The second challenge was more subtle: how do you restrict features in certain states without making the overall product feel broken?
A user in Andhra Pradesh who opened CricHit should not see a product that appeared to be missing features. That creates confusion and erodes trust. Instead, they should experience a coherent product that happened to work differently in their state.
We designed three “experience tiers”:
- Full experience: Real-money predictions, leaderboards, withdrawals, the complete product.
- Play experience: All prediction mechanics with virtual currency and social leaderboards, but no real-money stakes.
- Spectator experience: Read-only access to predictions and leaderboards, with an explanation of why the full experience was not available and an option to be notified if regulations changed.
Each tier was a complete experience, not a degraded version of the tier above. Users in restricted states could still enjoy the core mechanic of predicting cricket outcomes ball by ball. They just could not stake real money on those predictions. This was important both for user experience and for our growth metrics — users in restricted states could still be engaged, could still invite friends (including friends in unrestricted states), and could still contribute to our monthly impression numbers.
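The tier model can be sketched as a simple mapping from tier to enabled feature set, so that each tier is defined positively by what it includes rather than by what is stripped out. Tier and feature names here are hypothetical:

```python
# Hypothetical tier-to-features mapping. Each tier is a complete feature set
# in its own right, not a "full" tier with pieces removed.
TIER_FEATURES = {
    "full":      {"predictions", "real_money", "withdrawals", "leaderboards"},
    "play":      {"predictions", "virtual_currency", "leaderboards"},
    "spectator": {"view_predictions", "view_leaderboards", "notify_me"},
}

def can_use(tier: str, feature: str) -> bool:
    """Gate a feature on the user's experience tier; unknown tiers get nothing."""
    return feature in TIER_FEATURES.get(tier, set())

assert can_use("play", "predictions") and not can_use("play", "real_money")
```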
Prioritising the MVP: What to Build First
With regulatory complexity consuming significant architectural attention, we had to be ruthless about MVP scope. The Stanford product management training I had completed earlier that year, which I wrote about in my post on product management and fantasy gaming, provided the framework: validate the riskiest assumptions first.
Our riskiest assumptions, in order:
- Will users engage with ball-by-ball predictions during live matches? (Desirability risk)
- Can we generate contextually relevant questions in real time using AI? (Feasibility risk)
- Can we achieve sustainable unit economics with our acquisition and monetisation model? (Viability risk)
We validated assumption one before writing any code, through the WhatsApp prototype experiments. For assumption two, we built a minimal version of the AI question engine that worked with a single data feed and a limited question template library. It was not elegant, but it proved the technical concept.
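A template-driven question engine of the kind described above can be sketched as follows. The feed event shapes, template wording, and player names are assumptions for illustration, not the actual engine:

```python
# Hedged sketch of a template-driven question generator: one match-feed event
# in, one contextual question out. Event schema and templates are hypothetical.
TEMPLATES = {
    "new_over": "Will {bowler} concede more than {threshold} runs this over?",
    "new_batter": "Will {batter} score a boundary off their first three balls?",
}

def question_for(event: dict):
    """Fill the template matching this event type, or return None if no template fits."""
    template = TEMPLATES.get(event["type"])
    return template.format(**event["context"]) if template else None

q = question_for({"type": "new_over", "context": {"bowler": "Bumrah", "threshold": 8}})
print(q)  # Will Bumrah concede more than 8 runs this over?
```

Even a small template library plus a single data feed is enough to test the feasibility question: can the system produce a sensible prompt within seconds of a live event?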
For the MVP launch, we explicitly deferred:
- Advanced social features (friend challenges, private leagues)
- Multi-sport support (we launched with cricket only)
- Detailed analytics dashboards for users (prediction history, performance stats)
- Referral programme mechanics (we used manual tracking initially)
Each of these was important for the long-term product vision. None of them needed to exist for us to test whether the core proposition worked.
Instrumentation Design: Measuring What Matters
One area where I invested disproportionate effort for an MVP was instrumentation. Every interaction in the app was tracked, not because we wanted to create beautiful dashboards, but because we needed to make decisions quickly and could not afford to fly blind.
I structured our metrics around the leading-versus-lagging framework:
Lagging Indicators (Business Outcomes)
- Monthly Active Users (MAU)
- Revenue per match
- Customer Acquisition Cost (CAC)
- Lifetime Value (LTV)
- 30-day retention
Leading Indicators (Predictive Behaviours)
- Predictions per user per match (target: 15+)
- Time to first prediction after match start (target: under 5 minutes)
- Leaderboard check frequency (target: 3+ per session)
- Social share rate after match completion (target: 10%+)
- Return rate within 48 hours of a match (target: 40%+)
The leading indicators were not just tracked — they were the basis for our weekly team meeting agenda. Every Monday, we reviewed the leading indicators from the previous week’s matches and asked: “What do these tell us about what our lagging indicators will look like in 30 days? And what can we do this week to move the leading indicators?”
This forward-looking discipline prevented the common startup trap of reacting to lagging indicators. By the time your MAU is declining, the underlying problem happened weeks ago. If you are watching leading indicators, you catch the problem while it is still small enough to address.
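The leading indicators fall out of a raw event log with very little machinery. Here is an illustrative computation of one of them, predictions per user per match; the event shape is an assumption, not CricHit's actual schema:

```python
# Illustrative leading-indicator computation from a raw event log.
# Event shapes are assumptions for this sketch.
from collections import defaultdict

events = [
    {"user": "u1", "type": "prediction", "match": "m1"},
    {"user": "u1", "type": "prediction", "match": "m1"},
    {"user": "u2", "type": "prediction", "match": "m1"},
    {"user": "u1", "type": "leaderboard_view", "match": "m1"},
]

def predictions_per_user(events, match):
    """Average number of prediction events per active predictor in a match."""
    counts = defaultdict(int)
    for e in events:
        if e["type"] == "prediction" and e["match"] == match:
            counts[e["user"]] += 1
    return sum(counts.values()) / len(counts) if counts else 0.0

print(predictions_per_user(events, "m1"))  # 1.5
```

The point of tracking every interaction is exactly this: each Monday question ("what will the lagging indicators look like in 30 days?") reduces to a cheap query over events you already have.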
Managing Stakeholders Through OKRs
A 0-to-1 product in a regulated market has an unusually diverse set of stakeholders, each with different priorities:
- The founder/CEO wanted growth and market traction
- The legal team wanted full regulatory compliance with zero risk
- Engineering wanted architectural soundness and manageable tech debt
- The data team wanted comprehensive instrumentation
- Investors wanted user metrics and a path to unit economics
These priorities are not inherently conflicting, but they compete for the same scarce resources: engineering time, product attention, and budget. Without a framework for alignment, every planning session becomes a negotiation.
I introduced OKRs (Objectives and Key Results) structured at three levels:
Company OKR: Launch CricHit in 5 states with 100K MAU within 6 months.
Product Team OKRs:
- Objective: Deliver a compliant, engaging cricket prediction experience
- KR1: Availability framework supporting 10+ state configurations
- KR2: Average predictions per user per match exceeding 12
- KR3: Real-time question generation latency under 3 seconds
Growth Team OKRs:
- Objective: Build a sustainable user acquisition engine
- KR1: CAC below target threshold
- KR2: Organic acquisition reaching 25% of total
- KR3: First-match-to-second-match retention above 50%
The OKRs were not set-and-forget. We reviewed them fortnightly and tracked progress on a real-time dashboard built in Google Data Studio. When a key result was off track, it triggered a structured conversation: Is the target wrong, or is our approach wrong? If the approach is wrong, what experiment can we run this week to test a different approach?
The dashboards were shared with all stakeholders, including investors. This transparency built trust. Instead of quarterly investor updates where I presented polished narratives, investors could see our real-time progress and the decisions we were making in response to the data. Several investors told me this was the most transparent reporting they had seen from an early-stage portfolio company.
Scaling the Cross-Functional Team
CricHit started with a core team of four: me (product), one backend engineer, one frontend engineer, and one data analyst. By the time we were six months into launch, the team had grown to twelve, spanning product, engineering, data, design, growth marketing, and operations.
Scaling a team during a 0-to-1 build is treacherous. You need to maintain the speed and informality of a small team while adding the structure that a larger team requires. I navigated this with three principles:
Principle 1: Document decisions, not just outcomes. Every significant product decision was recorded with context: what options we considered, what data informed the decision, and what we expected to happen. When new team members joined, they could read the decision log and understand not just what we had built, but why. This reduced ramp-up time and prevented the expensive mistake of relitigating decisions.
Principle 2: Pair new hires with existing team members on live matches. CricHit’s product cadence was tied to the cricket calendar. New team members spent their first week monitoring live match events alongside an experienced colleague. This was the fastest way to build intuition for the product’s real-time dynamics and the user behaviour patterns that drove our metrics.
Principle 3: Keep the blast radius small. Every feature launched behind a feature flag. Every experiment ran on a subset of users. Every configuration change was tested in one state before rolling out to all. In a regulated market, the consequences of a mistake are amplified. A bug that accidentally allows restricted activity in a restricted state is not just a bad user experience — it is a legal liability. Small blast radius was non-negotiable.
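A state-scoped feature flag with a percentage rollout captures the "small blast radius" idea in a few lines. The flag names, states, and percentages below are hypothetical, and the bucketing scheme is one common approach rather than CricHit's actual implementation:

```python
# Minimal sketch of a state-scoped feature flag with a percentage rollout.
# Flag config is illustrative; real systems would load it from a config store.
import hashlib

FLAGS = {
    "new_leaderboard": {"states": {"MH"}, "rollout_pct": 10},
}

def is_enabled(flag: str, state: str, user_id: str) -> bool:
    """Enable a flag only in allowed states, and only for a stable user bucket."""
    cfg = FLAGS.get(flag)
    if not cfg or state not in cfg["states"]:
        return False
    # Deterministic bucketing: the same user always lands in the same bucket,
    # so the rollout cohort is stable across sessions.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < cfg["rollout_pct"]
```

Because the state check runs before the bucket check, a flag can never leak a feature into a state where it is not permitted, which is the property that matters most in a regulated market.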
Hypothesis-Driven Experimentation for Sustainable Economics
The most existential question for any gaming platform is: can you achieve a CAC-to-LTV ratio that sustains growth?
We approached this through structured experimentation rather than gut-feel spending. Every acquisition channel was treated as a hypothesis:
- Hypothesis: University cricket club partnerships will yield users with higher retention than paid social channels.
- Test: Run partnerships with 5 university clubs during one IPL round. Compare 30-day retention against a control group acquired through Facebook ads during the same period.
- Result: University-acquired users had 2.4x higher 30-day retention and 1.8x higher LTV. The CAC was 30% higher, but the LTV difference more than compensated.
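The arithmetic behind "the LTV difference more than compensated" is worth making explicit. Using the 1.8x LTV and 30% CAC figures from the experiment, with invented baseline numbers purely for illustration:

```python
# Worked example of the acquisition trade-off above. The 1.8x LTV and +30% CAC
# multipliers come from the experiment; the baseline values are invented.
baseline_cac, baseline_ltv = 100.0, 150.0

university_cac = baseline_cac * 1.30   # 30% more expensive to acquire
university_ltv = baseline_ltv * 1.80   # but 1.8x the lifetime value

print(baseline_ltv / baseline_cac)      # 1.5
print(university_ltv / university_cac)  # ~2.08
```

Whatever the absolute numbers, any CAC premium below 80% is outweighed by a 1.8x LTV uplift, which is why the channel comparison should always be made on the ratio, not on CAC alone.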
We ran 12 such experiments over six months, each testing a specific acquisition hypothesis. The cumulative learnings were:
- Community-based acquisition (universities, cricket clubs, viewing parties) produced the highest-quality users
- Influencer partnerships were effective for awareness but needed careful attribution tracking
- Paid social worked for volume but required precise timing (only during live matches)
- Referral mechanics were the most cost-effective channel once we had a base of engaged users
By the sixth month, our blended CAC-to-LTV ratio had reached the level that allowed us to invest in growth with confidence, knowing that each cohort would pay back its acquisition cost within an acceptable timeframe.
What I Carry Forward
Building CricHit in India’s fragmented regulatory environment taught me that constraints are not obstacles to good product thinking — they are ingredients in it. The availability framework, the tiered experience design, the configuration-driven compliance architecture — none of these would have existed if we had been building in an unregulated market. And the product was better for having them.
For anyone building in a regulated market, here is the core advice: treat regulation as a product requirement, not an external imposition. Design for it from day one. Make it configurable, not hardcoded. And invest in the instrumentation that lets you prove compliance — not just achieve it — because when a regulator asks “how do you ensure compliance in State X?”, your answer needs to be specific, documented, and auditable.
The experience of building within constraints at Droit directly informed how I later approached building Greenflip, where the constraints were different (supply chain complexity, artisan onboarding, cross-border logistics) but the principle was the same: the constraints define the product as much as the vision does.
Amrita Sarkar
Product Manager | Growth & Marketplaces | MBA
Product Manager with 13+ years of experience spanning advertising (McCann, Publicis, M&C Saatchi), two startups (PitchNDA, Greenflip), and product leadership across fantasy gaming, telecom, and beauty tech. Chartered Manager. MBA from the University of Glasgow Adam Smith Business School. Y Combinator Startup School graduate. Recognised among India's Top 200 women-driven startups by Niti Aayog.
Connect on LinkedIn →

Related Articles
From Stanford to the Cricket Pitch: Product Management Meets Fantasy Gaming
How applying Stanford's product management frameworks to a 0-to-1 fantasy cricket prediction platform achieved 1.2M monthly impressions and 3x ROAS.
Energy Transition Strategy: How Product Thinking Applies to the Renewables Sector
How applying product management frameworks to an energy transition strategy helped size a £1.275bn TAM and design commercial pricing models for the renewables equipment market.
The Art of Building Two-Sided Marketplaces: Supply, Demand, and the Trust Problem
Practical frameworks for solving the chicken-and-egg problem, managing supplier relationships, and building trust in two-sided marketplaces, drawn from building Greenflip.