Launching Usage-Based Pricing: A Story in 3 Sprints and a 26% Increase in NRR

It may come as no surprise that usage-based pricing (UBP) is a hot trend in SaaS. A 2021 report by OpenView shows that 45% of SaaS companies have a usage-based component in their pricing model – a 2X increase in adoption compared to just four years ago.

During my time at Landbot, I had the opportunity to lead the introduction of UBP, and it ended up increasing our net revenue retention by 26%.

While the results were successful, getting usage-based pricing right was far from easy. Here’s how we made it happen and what I learned along the way.

Before UBP, there were tiers and seats

To give you a little bit of context:

Landbot is a no-code chatbot builder that helps businesses automate their customer interactions – lead generation, customer support, surveys, quizzes, user onboarding, and more.

Our previous pricing model was based entirely on feature tiers and seats, which meant customers generating 100K chats a month and customers generating 10 chats a month would pay us exactly the same amount.

[Image: Landbot's previous tier-and-seat pricing structure]

As a result, we believed there was an opportunity to monetize usage, so we set out to experiment with usage-based pricing as soon as we closed our Series A.

When is the price right? When you do some research

Pricing is a highly sensitive topic that can have irreversible impacts. Considering that no one on our team had prior experience, we decided to hire an external consultant to help us.

We broke down the research into three sprints. Each sprint took two to three weeks to complete, covering research design, data collection, analysis, and discussions.

Sprint 1: Buyer persona

We already had a fairly good idea of who our ideal customers were, but we wanted to further validate our qualitative understanding by looking at:

  • Internal data (enriched by Clearbit) – conversion rate, retention rate, long-term retained customers, and newly acquired customers.
  • Price sensitivity survey results from an external research panel.
[Image: example of the qualitative survey]

The resulting output was a set of firmographic and demographic attributes of our ideal and secondary customers that we could use to segment our data in the following sprints.

Sprint 2: Feature packaging & value metrics

The goal of this second sprint was to find out:

  • How highly is each feature valued? -> We wanted to see if we could refine the features that go into each tier (or decide whether to keep feature tiers at all).
  • Aside from features, how do customers believe our pricing should scale? -> So we could find the value metric for our usage component.

To do so, we used a MaxDiff survey to understand each respondent’s most-valued and least-valued options in a list of features (relative preference). We also sought to understand their willingness to pay (WTP) using the price sensitivity survey.

[Images: qualitative feature survey and its results]
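A crude way to score MaxDiff responses is "best minus worst" counting: each feature's score is the number of times it was picked as most valued minus the number of times it was picked as least valued. A minimal sketch, using invented feature names rather than Landbot's actual survey items:

```python
# Best-minus-worst MaxDiff scoring. Each response is a (best, worst)
# pair of feature names; scores rank features by relative preference.
# Feature names here are hypothetical, not Landbot's real survey items.
from collections import Counter

responses = [
    ("integrations", "white_label"),      # (most valued, least valued)
    ("conditional_logic", "white_label"),
    ("integrations", "custom_css"),
    ("templates", "conditional_logic"),
    ("integrations", "templates"),
]

best = Counter(b for b, _ in responses)
worst = Counter(w for _, w in responses)
features = set(best) | set(worst)

scores = {f: best[f] - worst[f] for f in features}
for feature, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(feature, score)
```

Real MaxDiff analysis typically fits a choice model across many question sets, but simple count scores are a reasonable first pass for small panels.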

Once we had the relative preference and WTP data for each feature, we plotted them onto a 2×2 matrix where the Y-axis represented the relative preference and the X-axis represented the deviation from median WTP.

Each feature fell into one of four quadrants:

  • Differentiator (high RP x high WTP) = Customers are willing to pay more for it -> High tiers only.
  • Table stakes (high RP x low WTP) = Customers see it as a “must-have” but don’t want to pay more for it -> All tiers.
  • Add-ons (low RP x high WTP) = Some customers find it valuable and are willing to pay for it -> Horizontal add-on.
  • Trash (low RP x low WTP) = Customers don’t really care about it.
[Image: feature quadrants plotted against median WTP]
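The quadrant assignment above boils down to two boolean cuts. A minimal sketch, with made-up thresholds and feature data:

```python
# Assign each feature to one of the four quadrants described above.
# Inputs: relative preference (RP) from MaxDiff, and the feature's
# deviation from median willingness to pay (WTP). All numbers and
# threshold choices are illustrative, not Landbot's actual data.
def quadrant(rp, wtp_deviation, rp_threshold=0.5):
    high_rp = rp >= rp_threshold
    high_wtp = wtp_deviation >= 0  # at or above the median WTP
    if high_rp and high_wtp:
        return "differentiator"    # high tiers only
    if high_rp:
        return "table stakes"      # all tiers
    if high_wtp:
        return "add-on"            # horizontal add-on
    return "trash"

features = {  # name: (relative preference, WTP deviation in $)
    "integrations": (0.8, +12.0),
    "templates": (0.7, -5.0),
    "white_label": (0.3, +20.0),
    "custom_css": (0.2, -8.0),
}

for name, (rp, dev) in features.items():
    print(name, "->", quadrant(rp, dev))
```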

Surprisingly, the result was almost identical to our existing packaging, so we kept our feature tiers untouched.

We also went through the same process for value metrics. But instead of using the matrix, we looked at whether the WTP for each value metric scales with its potential volume. Eventually, we decided on “number of chats” as the new usage metric while also keeping seats and feature tiers.

Sprint 3: Price points

In the final sprint, we were interested in nailing down the price points for our new usage-based pricing by again running the price sensitivity survey. However, this time:

  • We presented the research panel with detailed feature tiers.
  • We asked them about their potential usage volume (based on proxies like website traffic, number of leads, etc).
  • We then performed the Van Westendorp Price Sensitivity Analysis to find the optimal price range for each tier.
[Image: Landbot price points analysis]
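The core of a Van Westendorp analysis is plotting cumulative price-sensitivity curves and reading off where they cross. A toy sketch with invented survey data (real surveys ask four price questions per respondent; two are enough to locate the crossing point):

```python
# Toy Van Westendorp sketch. Each respondent gives a price below which
# the product feels suspiciously cheap and a price above which it is
# too expensive. All numbers are invented, not Landbot's data.
responses = [  # (too_cheap_below, too_expensive_above) in $/month
    (20, 45), (30, 50), (40, 60), (25, 35),
    (35, 55), (45, 70), (15, 40), (50, 80),
]
prices = range(5, 101)

def share(pred):
    """Fraction of respondents for whom pred holds, at each price."""
    return {p: sum(pred(r, p) for r in responses) / len(responses)
            for p in prices}

too_cheap = share(lambda r, p: p <= r[0])      # falls as price rises
too_expensive = share(lambda r, p: p >= r[1])  # rises with price

# The crossing point of the two curves approximates the optimal price.
opp = min(prices, key=lambda p: abs(too_cheap[p] - too_expensive[p]))
print(f"Optimal price point: ~${opp}/month")
```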

Once the range was established, the only remaining question was whether the usage should be banded (fixed range per tier) or continuous (bill for every X usage). In the end, we decided to include a free allowance and charge only for the overage.
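The allowance-plus-overage model we landed on is simple arithmetic. A sketch with hypothetical plan numbers (not Landbot's actual prices):

```python
# Free allowance + overage billing, as described above. The plan
# parameters below are illustrative, not Landbot's actual pricing.
def monthly_bill(chats_used, base_price, included_chats,
                 price_per_extra_chat):
    overage = max(0, chats_used - included_chats)
    return base_price + overage * price_per_extra_chat

# A hypothetical $40/month plan with 1,000 chats included,
# charging $0.02 per extra chat:
print(monthly_bill(800, 40.0, 1000, 0.02))   # under the allowance -> 40.0
print(monthly_bill(3500, 40.0, 1000, 0.02))  # 2,500 extra chats -> 90.0
```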

Note: Our research was conducted mainly with an external audience (who fit our buyer persona). That’s because doing pricing research with existing customers is always going to be biased – they have an incentive not to answer honestly.

Of course, getting data from non-customers has its own biases, but it’s the lesser of two evils.

Experimenting with a new UBP model

To ensure the usage component wouldn’t have any negative impact, we started it off as an experiment. The original idea was to run the new pricing as an A/B test.

However, upon realizing that it would take at least a year to reach statistical significance and that there was virtually no way to prevent users from seeing the other variant, we decided to launch it to all new users.

After three months, we reviewed how it performed against the old pricing in terms of:

  • Pricing page visit-to-signup %
  • Signup-to-Paid %
  • Average selling price
  • M2 logo retention
  • M2 net revenue retention
    (Month two or M2 = proxy for long-term retention)
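The headline metric, M2 net revenue retention, is the cohort's month-two MRR divided by its month-one MRR, with expansion counting up and churn counting down. A minimal sketch with made-up cohort numbers:

```python
# Net revenue retention for a monthly cohort: the MRR the cohort pays
# in month two divided by what it paid in month one. Numbers are made
# up for illustration, not Landbot's actual figures.
def net_revenue_retention(m1_mrr_by_customer, m2_mrr_by_customer):
    m1_total = sum(m1_mrr_by_customer.values())
    # Churned customers simply contribute $0 in month two.
    m2_total = sum(m2_mrr_by_customer.get(c, 0)
                   for c in m1_mrr_by_customer)
    return m2_total / m1_total

m1 = {"a": 100, "b": 50, "c": 50}
m2 = {"a": 180, "b": 50}  # "a" expanded via usage, "c" churned
print(f"M2 NRR: {net_revenue_retention(m1, m2):.0%}")  # -> 115%
```

Note how usage-based expansion from one account can more than offset a churned logo, which is exactly the lever UBP adds.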

We found that the new pricing did not hurt conversion rate, logo retention, or the average selling price, but it did increase net revenue retention by 26%! This gave us the confidence to fully commit to UBP and continue testing different iterations.

After three iterations, we chose the best-performing version and have kept it to this day:

[Image: Landbot's current pricing model]

Note: I later talked to several late-stage PLG companies that had also gone through pricing changes. None did A/B tests for the same reasons I shared above.

What I learned in the transition to UBP

What boosts your monetization might hurt your acquisition and retention

Usage-based pricing often introduces significant friction to the buying process. Users must understand how it’s calculated, estimate their usage, and work out the potential cost. Even if they upgrade, there’s always a chance they’ll churn after seeing a surprise bill.

Basically, the effect isn’t always net positive. So it’s worth thinking about how UBP fits with your growth model.

For example, if your revenue is mostly sales-driven and has an underlying cost to serve that scales with usage, then usage-based pricing could be a great fit. However, if your product is mainly self-serve and relies on virality to drive acquisition, sacrificing conversions in exchange for a higher average revenue per account might not be worth it.

Another way to look at it is by picturing your monthly recurring revenue (MRR) as a two-dimensional graph. To grow it, you can improve either:

  • Number of paying customers
  • Average revenue per customer

A pricing change can improve your revenue growth in one direction but hurt it in the other. Your goal is to find a formula that will yield the most MRR growth in the long run.
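The trade-off is plain multiplication: MRR is the product of the two levers, so a change that costs some conversions can still win if it lifts average revenue enough. A sketch with purely illustrative numbers:

```python
# MRR as the product of the two levers above. A pricing change that
# trades some conversions for higher average revenue per customer can
# still grow MRR overall; these numbers are purely illustrative.
def mrr(paying_customers, avg_revenue_per_customer):
    return paying_customers * avg_revenue_per_customer

before = mrr(1000, 50)  # 1,000 customers at $50/month
after = mrr(900, 60)    # 10% fewer customers, 20% higher ARPC
print(before, after)    # -> 50000 54000
```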

[Image: MRR growth along the customer-count and revenue-per-customer axes]

Get your value metric right (outcome vs. usage, cumulative vs. one-off)

Although it’s called “usage-based” pricing, users usually prefer getting billed for successful outcomes. Unfortunately, outcome-based metrics aren’t always realistic. That is why many SaaS companies stick to usage-based metrics as the best available option. This was also true in our case.

When the result of our value metric research came back, “number of leads” was rated as the most preferred option.

However, we went against the data and picked “number of chats” because:

  1. Everyone has a different definition of “leads.”
  2. Although lead capturing was our top use case, we did not want to be seen as a vertical solution.

The good news is that SaaS users today have become accustomed to paying for usage, so it is still an acceptable option.

Another aspect to consider when choosing your value metric is whether it should be cumulative or one-off.

Cumulative metrics, such as “total number of user records,” are more predictable and tend to only go up. One-off metrics, such as “number of emails sent,” can fluctuate up and down more unpredictably.

As a business, you probably see cumulative metrics as the better option, but your users might not always agree. Make sure to consider their view, the nature of your product, and the norm of the competitive landscape.

Generally speaking, cumulative metrics work better with products that act as a system of record (like CRM and analytics), whereas one-off metrics work better with workflow automation tools (like business process automation and email marketing).

Make sure you have the technical and operational capacity

Usage-based pricing adds complexities to your billing system. You have to consider things like:

  • Should the usage be billed in arrears or upfront as credits?
  • How precise should your billing increment be?
  • How do you handle refunds, rate changes, and bulk discounts?
  • How will you inform a customer when they’re about to reach their usage limit?

You also have to prepare your sales and customer success teams to handle communications regarding UBP. This requires an entirely new playbook on processes, contract negotiation, and even compensation.

Last but not least, don’t forget about your finance team. Their involvement is crucial if you want to correctly report your usage revenue.

To be frank, most SaaS companies that have adopted UBP are figuring things out as they go, and that’s perfectly fine. You just have to be aware of the resources involved before committing.

You won’t monetize every customer the same way

When analyzing how much usage to give away for free, we realized that the volume our competitors were offering was far higher than what the majority of Landbot customers would ever need.

This put us in a tough spot. If we matched the competition, we would monetize only a small fraction of customers through usage. If we didn’t, our conversion rate might suffer.

After a few rounds of testing, we decided to go with the first option and monetize the remaining customers via features and seats.

This hybrid model offers a hidden benefit to products that serve a wide range of personas. It can push users to upgrade through any of the value metrics that are relevant to them.

For example, most B2B companies didn’t have enough top-funnel traffic to require the high chat allowance in our higher plan, but many still chose to upgrade for advanced integrations and conditional logic features.

However, this doesn’t mean you should get greedy and add a bunch of value metrics. Every value metric creates friction, and these frictions don’t simply add up – they compound.

Don’t try UBP if you don’t have a stomach for risk

There were months when we saw large MRR contractions from usage. This made some of my teammates question whether UBP should stay.

I had to calm them down by pointing out the following: “The contracted MRR came from usage in the first place. We can work on improving usage, but if we remove UBP, this entire growth lever is gone. Do we care about overall MRR growth, or about keeping a secondary metric looking pretty?”

Pricing changes are always risky, but the risk-to-reward ratio of usage-based pricing is asymmetric – which is likely why monetization is such an effective growth pillar.

Make forward-looking decisions

Launching usage-based pricing is considered a major pricing change. It’s rare for a SaaS company to make more than one major pricing change per year without confusing their customers or creating operational nightmares.

This means your UBP should be designed for not only the current state of your company but also its next one to three years. Bear in mind that your product will evolve, and so will your team, market, and competition. No amount of data will be enough to predict these changes, and a lot of it comes down to qualitative judgment and convictions.

In our case, many decisions were not so clear-cut, but we ultimately went with what we believed would best fit our new strategy.

UBP is one of many weapons in your company’s arsenal

Remember, usage-based pricing is not a silver bullet. It’s only one potential lever in your overall growth model, but it can be a powerful one when executed well in the right context.

If you believe UBP could be a good fit for your SaaS, why not give it a try?

Austin Yang
Lead Product Manager, Softr

Austin Yang is a Lead Product Manager at Softr, the easiest no-code web app builder. He has previously built and grown products at startups backed by Softbank, Sequoia, Google, and Alibaba. For more of his thoughts about product-led growth, SaaS, and product management, check out: Austinyang.co