AI Is Breaking the Established Economics of Software
While traditional SaaS costs virtually nothing to serve more users, AI-powered features rack up costs with every single interaction. This shift is upending established assumptions about pricing and packaging. To help make sense of it and develop my own thesis on what it means for product teams, I've started a series of articles on the new economics of building with generative AI.
The New Reality of AI Economics
Features that use generative AI don't scale like traditional software. Instead of near-zero marginal costs, each model call adds to your operating expenses. This disruption has spawned new pricing models beyond the standard seat-based approach. For instance, Clay, the GTM data enrichment tool, offers unlimited seats but a set allocation of usage tokens that increases with each step in the payment plan. Canva, on the other hand, charges for seats but provides a set allocation of AI generations per user, per month. Where your product should sit on this spectrum largely depends on your answers to the following questions.
1. What are your true build and operational costs? Beyond development resources, factor in ongoing inference costs as a significant potential sink. Companies like Stability AI discovered this the hard way: their initial free image generation API was widely adopted and loved by millions, but became financially unsustainable within months, forcing a rapid pivot to paid tiers and contributing to a severe financial crisis at the once high-flying company.
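A back-of-envelope model is often enough to surface the problem early. The sketch below estimates per-user serving cost and gross margin; every figure in it (calls per month, tokens per call, token price, seat price) is a hypothetical assumption, not a real vendor rate.

```python
# Back-of-envelope unit economics for an AI feature.
# All prices and usage figures are hypothetical assumptions.

def monthly_serving_cost(calls_per_user: int,
                         tokens_per_call: int,
                         price_per_1k_tokens: float) -> float:
    """Estimated model spend per user, per month."""
    total_tokens = calls_per_user * tokens_per_call
    return total_tokens / 1000 * price_per_1k_tokens

def gross_margin(subscription_price: float, serving_cost: float) -> float:
    """Fraction of subscription revenue left after model costs."""
    return (subscription_price - serving_cost) / subscription_price

# A heavy user: 200 calls/month at ~1,500 tokens per call.
cost = monthly_serving_cost(calls_per_user=200,
                            tokens_per_call=1500,
                            price_per_1k_tokens=0.01)
print(f"Serving cost: ${cost:.2f}/user/month")              # $3.00
print(f"Margin on a $20 seat: {gross_margin(20, cost):.0%}")  # 85%
```

Rerunning this with your heaviest-usage cohort, rather than the average user, is usually where the surprises appear.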
2. How should you structure free trials? You'll probably want to prove the value of your gen-AI feature by offering a few free uses before asking users to upgrade. The right strategy requires balance. Offer too little and users won't see the value of the feature or build a habit of using it regularly; offer too much and you'll be giving all the value away for free. A gradual strategy usually pays off here. Rather than charging per AI prompt, Typeform bundles its generative AI features, which enhance the quality of data collected from survey participants with follow-up questions, into certain plan tiers. This lets Typeform showcase value without giving everything away for free.
3. What's the generation-to-value ratio? Does your feature deliver high value from a few generations (like Resume.io's one-time CV improvement), or does value accumulate with volume (like Clay's data enrichment)? The answer will fundamentally shape your pricing strategy. It will also likely shift over time as new entrants to the market offer similar value to users and erode your initial value proposition.
4. How transparent should pricing be? Should you show users a token meter, a simple quota, or bundle usage into a monthly cost? Transparency builds trust but can encourage token hoarding. The way you frame this is important too. For instance, visualising "tokens remaining" can decrease overall usage compared to showing "generations remaining", even though both represent the same limit, because of the associations users have with these different words.
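The framing point is easy to see in code: the same underlying quota can be rendered either way. A minimal sketch, with illustrative numbers rather than real product quotas:

```python
# Two framings of one underlying limit. Quota sizes are illustrative
# assumptions, not real product figures.

def remaining_label(tokens_left: int, avg_tokens_per_generation: int,
                    frame: str = "generations") -> str:
    """Render the same remaining quota as tokens or as generations."""
    if frame == "generations":
        return f"{tokens_left // avg_tokens_per_generation} generations remaining"
    return f"{tokens_left:,} tokens remaining"

print(remaining_label(45_000, 1_500, "tokens"))       # 45,000 tokens remaining
print(remaining_label(45_000, 1_500, "generations"))  # 30 generations remaining
```

"30 generations" maps to what the user actually does with the product; "45,000 tokens" invites rationing of an abstract currency.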
5. How will you measure true value? Work with your data, finance and operational teams to track costs and incremental revenue per user, per feature, and per output. Grammarly's product team have spoken in the past about tracking not just "usage" but "successful corrections adopted", a far better indicator of value than raw generation counts alone.
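The same idea generalises: measure outputs the user kept, not outputs you produced. A minimal sketch of two such metrics; the function names and example figures are my own, not from any team's actual instrumentation:

```python
# Value metrics that distinguish useful outputs from raw volume.
# Names and figures are hypothetical.

def adoption_rate(shown: int, adopted: int) -> float:
    """Share of AI outputs the user actually kept, closer to value than raw volume."""
    return adopted / shown if shown else 0.0

def cost_per_adopted_output(serving_cost: float, adopted: int) -> float:
    """What each useful output costs you, not each generation."""
    return serving_cost / adopted if adopted else float("inf")

# A user who accepted 40 of 100 suggestions at $3.00 of model spend:
print(f"Adoption: {adoption_rate(100, 40):.0%}")                 # 40%
print(f"Cost per kept output: ${cost_per_adopted_output(3.0, 40):.3f}")  # $0.075
```

Two features with identical generation counts can have wildly different cost-per-adopted-output, which is the number finance actually cares about.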
6. What's your escape route? Develop features with built-in limits and contingencies. Set billing caps at the foundation model's API level and implement fair-usage limits per user. Have communications ready if you need to deprecate or modify the feature. When Figma rolled out their AI design features, they included clear "experimental" labeling and flexible terms that allowed them to adjust offerings based on actual usage patterns.
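In practice, a fair-usage limit plus a global billing cap can be as simple as a guard in front of each model call. A minimal in-memory sketch; the class name and limits are hypothetical, and a production version would persist counters and reset them monthly:

```python
from collections import defaultdict

class FairUsageGuard:
    """Per-user fair-usage limit plus a global billing cap.
    Limits here are hypothetical; derive yours from your cost model."""

    def __init__(self, per_user_limit: int, global_cap: int):
        self.per_user_limit = per_user_limit
        self.global_cap = global_cap
        self.usage = defaultdict(int)  # calls made per user this period
        self.total = 0                 # calls made across all users

    def allow(self, user_id: str) -> bool:
        """Check and record one model call; deny when a limit is hit."""
        if self.total >= self.global_cap:
            return False  # hard stop: protect the monthly bill
        if self.usage[user_id] >= self.per_user_limit:
            return False  # per-user fair-usage limit
        self.usage[user_id] += 1
        self.total += 1
        return True

guard = FairUsageGuard(per_user_limit=2, global_cap=3)
print(guard.allow("alice"))  # True
print(guard.allow("alice"))  # True
print(guard.allow("alice"))  # False: per-user limit reached
print(guard.allow("bob"))    # True
print(guard.allow("bob"))    # False: global cap reached
```

A denial here is also the natural place to surface the "experimental, limits may change" messaging the contingency plan calls for.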
Looking Ahead
As AI capability improves and foundation model costs decrease, these economic calculations will evolve. The most successful teams aren't simply watching these trends—they're building flexible systems that can adapt to changing cost structures while maintaining user trust. Next in this series, I'll explore how teams are implementing staged rollouts to test AI economics before full deployment, and how competitive pricing is shaping the market.