Scale-up Lessons: HubSpot’s Journey from MQL to PQL
One of the best things about working at a scale-up is that you get the opportunity to work on many different challenges.
Scale-ups become scale-ups because they’re able to find new growth engines to continue scaling the business.
At HubSpot, I’ve been lucky enough to work on two of those growth engines. I initially joined HubSpot as the marketing leader for international. My role was to grow both a team and a marketing funnel of traffic, leads, and MQLs for our target regions.
I next joined a small team within HubSpot with a mission to turn a free Chrome extension into a freemium business. Instead of building a marketing funnel, we created a product funnel of traffic, free users, and product qualified leads (PQLs).
We learned a lot about what it takes to build a successful product funnel. In this post, I’ll cover some of the main reasons we made that funnel successful.
The Product Funnel Equation
We broke the funnel down into two core components:
Product Value + Product Demand = $$
How to measure and iterate on product value?
You want to track a metric that shows the number of people getting value from your product is growing over time. It’s your north star metric and should have some correlation to better upgrades and retention. Improving this metric should help you to create sustainable growth for your product.
At HubSpot, we decided early on that our metric was weekly active teams (WATs). The more WATs we had, the better our freemium business would look.
Metrics like WATs are challenging to create an actionable plan for. As Brian Balfour argues, they're too big, too broad, and not actionable. Instead, they're a lagging indicator of success, one that tells you whether you're winning or not.
You need to break that metric down into its inputs by looking for common user actions that have a positive correlation with improvements in your north star metric, e.g., a high percentage of users who complete action X go on to become [whatever you've defined as your north star metric].
Let's take WATs as an example. Instead of running experiments and measuring their success or failure by changes in WATs, you focus on having a measurable impact on the different inputs that feed into WATs.
For example, if we find ways to get more users to import their data, or close more deals using our sales tools, our WAT number would increase.
We could go one layer deeper: by increasing the number of people who imported contacts via Gmail, consumed in-app tutorials on how to import their data, used our email templates, or booked meetings via our scheduling app, our WAT number would eventually increase.
Taking this approach means you can run experiments against inputs where you get near instant feedback on whether your efforts were successful or not.
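As a rough sketch of what that input analysis might look like (the data, action names, and function here are invented for illustration, not HubSpot's actual model), you can compare the north-star outcome rate for users who completed a candidate action against those who didn't:

```python
# Hypothetical sketch: estimate how strongly a candidate input action
# (e.g. "imported contacts via Gmail") correlates with the north star
# metric (here, membership in a weekly active team).
def action_lift(users, action):
    """Compare north-star conversion for users who did vs. didn't do `action`."""
    did = [u for u in users if action in u["actions"]]
    did_not = [u for u in users if action not in u["actions"]]
    rate = lambda group: (
        sum(u["weekly_active_team"] for u in group) / len(group) if group else 0.0
    )
    return rate(did), rate(did_not)

# Toy data: each user has a set of completed actions and an outcome flag.
users = [
    {"actions": {"import_gmail", "email_template"}, "weekly_active_team": True},
    {"actions": {"import_gmail"}, "weekly_active_team": True},
    {"actions": {"email_template"}, "weekly_active_team": False},
    {"actions": set(), "weekly_active_team": False},
]

with_action, without_action = action_lift(users, "import_gmail")
print(with_action, without_action)  # 1.0 0.0
```

A large gap between the two rates suggests the action is a promising input to experiment against, though correlation alone doesn't prove the action causes the outcome.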
How to measure and iterate on product demand?
Along with a ‘product value’ metric, we also obsessed over monetization.
We created product demand in the form of a touchless sale or a ‘product qualified lead’ for the sales team.
PQLs are a combination of product actions (how people are using the product) and who that person is (using demographic and firmographic data).
We categorized our PQLs into different buckets to find the best opportunities to grow revenue.
- Hand Raise PQLs: We showed free users calls to action (CTAs) within the product for paid-only features, or the opportunity to get assistance with a particular task; users would interact with the CTA to reach out to us. We used these sparingly.
- Usage PQLs: We triggered a call to action based on product usage; for example, using all of your free call minutes or email templates would trigger an option to upgrade or talk with sales.
- Upgrade PQLs: These were CTAs on features only available to paid users; they would send users to an upgrade page.
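As an illustrative sketch of how events might be routed into those three buckets (the event names, limits, and rules below are invented for this example, not HubSpot's actual triggers):

```python
# Hypothetical sketch of routing product events into PQL buckets.
# Event names and free-tier limits are invented for illustration.
FREE_LIMITS = {"call_minutes": 15, "email_templates": 5}
PAID_ONLY_FEATURES = {"reporting_dashboard", "workflow_automation"}

def classify_pql(event, usage):
    """Return the PQL bucket an event falls into, or None if it isn't a PQL."""
    if event == "help_cta_clicked":
        return "hand_raise"          # user explicitly asked to talk to us
    if event in FREE_LIMITS and usage.get(event, 0) >= FREE_LIMITS[event]:
        return "usage"               # user hit a free-tier limit
    if event in PAID_ONLY_FEATURES:
        return "upgrade"             # user clicked a paid-only feature
    return None

print(classify_pql("call_minutes", {"call_minutes": 15}))  # usage
print(classify_pql("reporting_dashboard", {}))             # upgrade
```

In practice, the demographic and firmographic side of the PQL definition would be layered on top of this product-action check before routing a lead to sales.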
Each PQL event was like a unique MQL, so it had to be measured as its own funnel. The result was a giant spreadsheet that showed each PQL event, the category it belonged to, the number of times the event had occurred, the amount of revenue we closed from that event, and its conversion rate.
The spreadsheet helped us to not only decide on the PQL category we should focus on but the actual PQL events with the best potential upside for improvements in conversion rate and revenue.
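A minimal version of that per-event rollup (with made-up events and numbers, purely to show the shape of the analysis) could be computed like this:

```python
# Hypothetical sketch of the PQL rollup: one row per PQL event with its
# bucket, occurrence count, closed deals, and closed revenue.
pql_events = [
    {"event": "import_help_cta", "bucket": "hand_raise", "occurrences": 400, "closed": 48, "revenue": 24000},
    {"event": "call_minutes_limit", "bucket": "usage", "occurrences": 1200, "closed": 60, "revenue": 30000},
    {"event": "reporting_upgrade", "bucket": "upgrade", "occurrences": 300, "closed": 9, "revenue": 9000},
]

def rollup(rows):
    """Add a conversion rate per event and sort by closed revenue."""
    for row in rows:
        row["conversion_rate"] = row["closed"] / row["occurrences"]
    return sorted(rows, key=lambda r: r["revenue"], reverse=True)

for row in rollup(pql_events):
    print(row["event"], row["bucket"], f"{row['conversion_rate']:.1%}", row["revenue"])
```

Sorting by revenue (or by conversion rate) surfaces which PQL events have the best upside, which is exactly the prioritization decision the spreadsheet supported.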
Experimenting Your Way to Product Funnel Success
How our product funnel works today is a result of the many experiments we ran since its beginning. Every test you run is an opportunity to learn something new. Those learnings slowly add up to real changes in how the funnel itself works.
Let’s look at an example using three actual experiments with differing levels of complexity that build on each other.
Way back when we started building a product funnel, we prioritized experiments that were easy to implement and would increase the number of ‘hand raise’ PQLs generated.
For example, we hypothesized that customers who had previously used a spreadsheet as their system of record struggled to import their data because a CRM was unfamiliar to them. We experimented by showing a CTA at common points of friction that offered a consultation to help the user get their data imported. The execution was pretty basic; we were just getting started.
That PQL event became one of our top upgrade points for a period. It also provided us with two crucial learnings: many free users wanted to reach out to us while using the product, and those calls gave us a lot of information on how we could improve our freemium onboarding experience.
Over the months, our CTAs evolved into a modal that free users would see when they completed specific events, e.g., hitting a limit on a free feature. The modal provided an option to talk with sales and an opportunity to buy via touchless.
Successful B2B companies of the future are going to be those who make it easy for their customers to buy their software. It sounds so simple, but most companies make you buy their products based on how they want to sell them to you.
One of the missions of our product funnel was to allow users to buy our software in the way they wanted to. We had a hypothesis that some of our free users didn’t upgrade because we didn’t have the right communication channel for them, so we experimented with providing different options in-app for them to reach out to us.
Initially, users could either reach out to talk with sales or upgrade themselves. We then experimented with new options: they could schedule a meeting with a sales rep, live chat with someone to get their questions answered, or call a sales rep directly.
There was a lot more effort involved in running this experiment, but we saw a notable improvement in our conversion rate.
Making it easy for people to buy your software also means removing the hurdles they need to jump through to make a purchase. We continued to try and optimize the experience. Instead of making users click the ‘Schedule a meeting’ CTA, open a kickback email, click on a calendar link and book time with a rep, we experimented with allowing them to do it all from within the app. Once a user clicked on ‘Schedule a meeting’ our scheduling app would pop open and they could book a meeting with a sales rep without ever having to leave the app.
Again, we saw our conversion rates improve as a result of that test.
The experiments also provided us with new learnings, or reinforced learnings we had gotten previously:
- People liked to live chat; it was the best performing communication channel.
- Some cohorts of people needed help getting onboarded onto our freemium products.
- Gathering information from these people helped us to engineer away the points of friction and improve our onboarding.
Using those learnings, I pitched a new approach at one of our meetings.
The idea was to show live chat to specific groups of users at different points of friction. User success coaches would help answer questions, and provide constant feedback to both our product and engineering teams.
Again, running an experiment like this took a lot more time, effort, and resources, but the natural trajectory in growth is to get wins on the board, build trust with the leadership team, and earn the right to tackle more complex opportunities.
After several iterations the experiment proved successful, and coaches are now a core part of our freemium go-to-market and a great way to gather information to continually improve our freemium onboarding.
As a scale-up, we’re never happy with our current success. We always want to get better. We’ve managed to build a successful marketing funnel and product funnel both fueled by creating inbound demand, but it still feels like we have an ever-growing list of opportunities to keep getting better and that’s exciting!