Scaling up your prioritization

Alvaro Yuste Torregrosa
Published in Spotahome Product
9 min read · Jan 18, 2022


Does your team have a limited amount of time and resources, and too many important initiatives to deliver? Of course, all good teams face this.

Do you suspect that deciding priorities based on the gut feeling of the loudest person in the room is not the most effective way? You probably do if you are not that person.

We at the Spotahome product development team answered “yes” to both questions. We then tried to find a formula for making this decision, “what goes first and what will wait”, following a scalable, replicable, and justifiable process. Of course, it’s not perfect and we are iterating on it continuously. But we still thought it could provide some value or inspiration to similar teams facing similar problems.

We can explain the formula in four phases:

  1. Choosing your customer success metrics or their proxies
  2. Roughly estimating the effort behind each initiative
  3. Building the hypothesis of business impact
  4. Prioritization based on customer value and cost

And if you get to the end, an extra ball and a present are waiting for you.

1. Customer success metrics

The whole approach is probably only applicable to products with intensive analytics. It is more powerful and effective for digital products, or for systems in which we can measure and react quickly.

We should monitor the most relevant events and interactions of our users and decide which ones determine the quality of their relationship with our product. We call them success metrics or goals, and our efforts are focused on optimizing them. If you have validated your product-market fit, the metrics that make your customers successful must be the same ones that your business needs to succeed.

Example:
Imagine that we want to optimize an e-commerce platform. The simplest goal could be the purchase, and a simple example of a success metric could be the conversion rate of users to purchase: the percentage of users that end up becoming customers. A purchase is the best proof that the customer trusted you, found your product useful, and solved one of their problems with it. And the conversion rate (CVR) is a good metric to evaluate the quality of your pre-sale product: it does not depend on the amount of traffic, though it can be affected by the quality of that traffic. It’s not perfect, but usually good enough.

Still, some success metrics can be hard to measure, difficult to move, or can take long periods to be affected (for example, when the purchase decision takes days). In this scenario, proxy metrics (also called leading indicators) are very useful. They must have a proven correlation with the success metrics, so that a user who completes a proxy goal is more likely to end up succeeding.

Example:
Following the previous example, a possible proxy goal could be a signup event. And its proxy metric could be the CVR of users towards the signup event, as long as you can demonstrate that users with accounts are far more likely to make a purchase.

Finally, we need to quantify the value of each goal. This will serve us when we build hypotheses on top of them. For us, the most relevant value is the revenue generated. The easiest one to calculate is the value of a purchase: the average revenue that the company gets from a purchase (for simplicity, let’s assume that each user purchases only once). Once we have this value, we can work backward to calculate the value of each proxy goal: the value of a purchase multiplied by the CVR to purchase of the segment of users that complete that proxy.

Rev. per goal (€) = Avg. rev. per purchase (€) × CVR to purchase of users with the goal (%)

Example:
Let’s consider that the average revenue we get from each purchase is 100€. If we want to establish the attributable business value of a signup event, we have to multiply it by the conversion rate to purchase of users with an account. Imagine that 10% of users with an account end up purchasing. Then the value of a new signup event would be:

Rev. per signup = rev. per purchase × CVR from a signup to purchase = 100€ × 10% = 10€
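
To make the arithmetic concrete, here is a minimal Python sketch of this backward calculation, using the made-up numbers from the example (the function name is ours, purely for illustration):

    # Attributable value (€) of one proxy goal event, derived backwards
    # from the average purchase value (illustrative numbers from the example).
    def revenue_per_goal(avg_revenue_per_purchase: float,
                         cvr_goal_to_purchase: float) -> float:
        return avg_revenue_per_purchase * cvr_goal_to_purchase

    rev_per_signup = revenue_per_goal(avg_revenue_per_purchase=100.0,
                                      cvr_goal_to_purchase=0.10)
    print(rev_per_signup)  # 10.0 € per signup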

Still, the most important part of this phase is choosing the best metric to measure each initiative based on its objectives. Depending on the problem we want to solve, the interaction we want to optimize, or the biggest window of opportunity we think we have, we might choose one metric or another.

2. Effort estimation

If you want to take costs into account when prioritizing, you will need to roughly estimate the effort. At Spotahome we tend to prioritize the initiatives with the best Return on Investment (ROI), the most profitable ones. Thus, cost is a very relevant metric to consider when prioritizing, especially when the team is people- or time-constrained and we have to choose which initiatives are left out.

We will go deeper into how we estimate effort in our team, and the different kinds of estimations we work with, in a dedicated article. For prioritizing, we extract the one we call “early estimation”. This estimation should be just a guide and should not require a significant investment; we must not forget that we are still prioritizing and don’t yet know what should be done first. Thus, it does not have to trigger a deep technical investigation; we can rely on surface knowledge about the technical challenges the initiative will face in our product. Normally, the tech lead alone should be capable of producing it. To keep it brief, we only allow T-shirt sizes (XS, S, M, L, XL), each with an associated cost in development time based on the historical time dedicated to initiatives of the same size. This history is nurtured with every estimation and lets the team estimate with agility, based on comparison instead of isolated assumptions.

Once the size is chosen, we can calculate the early estimated cost with the following simple formula:

Cost (€) = Avg. dev hours of same-sized tasks (h) × cost per dev h. (€/h)

Example:
Imagine that we want to change the color of the signup button to increase the CVR of our users to signup. We can label this initiative as an XS given its high-level estimated complexity. Let’s also consider that our average dedication to tasks of this size is around 10 hours of development, and each hour has a cost of 10€. Then the estimated cost of this initiative would be:

Cost = XS avg. hours × cost per hour = 10 h × 10 €/h = 100 €

Of course, these are all made-up quantities to keep the calculations simple.
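
If it helps, this lookup can be sketched in a few lines of Python. Only the XS figure comes from the example above; the hours for the other sizes are invented placeholders that a real team would replace with its own historical averages:

    # Hypothetical historical averages of dev hours per T-shirt size.
    # Only XS (10 h) comes from the example; the rest are placeholders.
    AVG_DEV_HOURS = {"XS": 10, "S": 40, "M": 80, "L": 160, "XL": 320}
    COST_PER_DEV_HOUR = 10.0  # €/h, illustrative

    def early_estimated_cost(size: str) -> float:
        # Early estimated cost (€) of an initiative of a given size.
        return AVG_DEV_HOURS[size] * COST_PER_DEV_HOUR

    print(early_estimated_cost("XS"))  # 100.0 €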

3. Hypothesis generation

Once we have estimated the cost of the initiative and chosen the best metric to measure its success, it’s time to hypothesize the potential impact it can have. We build this hypothesis using the knowledge we have about our customers. When possible, it’s recommended to extrapolate from past experiments that have already been measured and can apply to the new initiative. Still, this step is the riskiest and most subjective one, since some assumptions can be affected by personal biases. For more innovative initiatives with no precedents, this risk grows, so we should try smaller and cheaper attempts (XS or S) to gather more data before making bigger investments.

Normally our hypotheses belong to one of three categories:

  • An absolute increment of conversions per month. For example: generating 100 extra signups per month. This is useful when the impact does not scale with the number of users. The value calculation is as easy as multiplying the number of conversions by the value per conversion.

Example:
If the value of a signup event is 10€, 100 extra signups per month can be valued at 1.000 €/month.

  • A specific reduction of costs per month. For example: reducing the number of complaints per month by 100. This can also apply to technical improvements that save development time (for example, solving technical debt), infrastructure costs, or payments to external providers (like G Maps). These hypotheses can be quantified in revenue very directly too.

Example:
If a development hour costs 10€ and we’ll save 100 hours per month, the impact is 1.000 €/month.

  • A relative uplift on a specific conversion rate. For example: increasing the CVR to purchase by 20%. This is the mechanism we use the most. It is simplest to apply when the reach of the initiative is close to 100% of our users; otherwise, we should dilute the uplift by multiplying it by our reach: incrementing the CVR by 20% for only 50% of our users results in an overall uplift of 50% × 20% = 10%. For calculating the revenue, in this case, we also need the total number of conversions in a whole month, to understand the size of the uplift.

    Rev. per month (€) = Events per month × uplift expected (%) × value per event (€)

Example:
If we have 1.000 signups per month in total, and we aim to increase the CVR to signup by 10%, this means 100 extra signups per month. If the value of a signup event is 10€, then the impact of the hypothesis is again:

Rev. per month = signups per month × uplift expected × value of signup

Rev. per month = 1.000 signups/month × 10% × 10 € = 1.000 €/month.
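
The three categories boil down to three small calculations. Here is a minimal Python sketch, reusing the illustrative quantities from the examples above (the function names are ours):

    # 1) Absolute increment: extra conversions × value per conversion.
    def value_of_extra_conversions(extra_per_month, value_per_event):
        return extra_per_month * value_per_event

    # 2) Cost reduction: saved units × cost per unit.
    def value_of_saved_hours(hours_saved_per_month, cost_per_hour):
        return hours_saved_per_month * cost_per_hour

    # 3) Relative uplift: events × (reach-diluted) uplift × value per event.
    def value_of_uplift(events_per_month, uplift, value_per_event, reach=1.0):
        return events_per_month * (uplift * reach) * value_per_event

    print(value_of_extra_conversions(100, 10.0))         # 1000.0 €/month
    print(value_of_saved_hours(100, 10.0))               # 1000.0 €/month
    print(value_of_uplift(1000, 0.10, 10.0))             # 1000.0 €/month
    print(value_of_uplift(1000, 0.20, 10.0, reach=0.5))  # 1000.0 €/month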

4. Profitability-based prioritization

Once we have quantified the cost and the hypothesis for a given initiative, we can define how profitable it is. We could calculate the ROI with the classic formula:

ROI = profit / cost = (benefit - cost) / cost

But normally, product initiatives have a benefit that is sustained over time. If we increase the conversion rate, we do it “forever”; that’s why we tend to quantify the hypotheses in revenue per month. For that same reason, we also prefer to evaluate profitability with the break-even time: the number of days the initiative will need to pay for itself, becoming profitable from that moment onward.

Break-even (months) = cost (€) / benefit (€/month)
Break-even (days) = 30 (days/month) × Break-even (months)

Example:
Taking all the data from the examples above: the initiative (changing the color of the signup button) costs 100€ and can potentially increase the CVR to signup by 10%, generating 1.000€/month. The break-even is the following:

Break-even = (cost / benefit) × 30 = (100 / 1.000) × 30 = 3 days

If the hypothesis is true, this initiative has to be active for just 3 days to generate the money we invested in developing it.

Now we can compare different initiatives to each other (comparing apples with apples) and decide which one should come first. In our case, that is the most profitable one: the one with the shortest break-even time, the one with more customer value and less effort behind it. If we choose our success metrics correctly, this profitability will be correlated with a better user experience, a more efficient team, or a better-optimized product.
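
As a closing sketch, this is how a backlog could be ranked by break-even time in Python. The first initiative and its numbers come from the examples above; the other two are invented purely for comparison:

    # Break-even in days: how long an initiative needs to pay for itself.
    def break_even_days(cost, benefit_per_month):
        return 30 * cost / benefit_per_month

    # Hypothetical backlog: (name, estimated cost €, hypothesized benefit €/month).
    backlog = [
        ("Change signup button color", 100.0, 1000.0),
        ("Rework checkout flow", 8000.0, 4000.0),
        ("Remove external provider", 1600.0, 500.0),
    ]

    # Most profitable first: the shortest break-even wins.
    for name, cost, benefit in sorted(backlog, key=lambda i: break_even_days(i[1], i[2])):
        print(f"{name}: break-even in {break_even_days(cost, benefit):.0f} days")
    # Change signup button color: break-even in 3 days
    # Rework checkout flow: break-even in 60 days
    # Remove external provider: break-even in 96 days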

Extra ball: Measuring your errors

In the process explained above, there are two manual and subjective decisions that are potentially exposed to personal biases or instrumental imprecision: 1) the choice of the T-shirt size, and 2) the hypothesis generation. As we mentioned, it is recommended to rely on historical data and extrapolate it when possible, to reduce the uncertainty and become more and more precise. But in retrospect, we can also measure the error percentage of each estimation or hypothesis proposed.

To evaluate the precision of the estimated cost, we can compare it with the final effort. This final investment can be inferred depending on your methodology: story points, work units, or time dedicated, for example. Multiplying it again by the development hour cost, we get the final monetary investment. We can apply a simple formula to get the error ratio:

Estimation error (%) = ( final cost (€) - estimated cost (€) ) / estimated cost (€)

The most interesting part comes when we evaluate the accuracy of our impact hypotheses. This is normally where we find the largest deviations, since our knowledge of our customers is always partial (surely inferior to the knowledge of our platform that guides the cost estimations), and several external circumstances can interfere (such as seasonality, user experience, communication effectiveness, …). For this reason and others, measuring the impact of our initiatives and learning from them is crucial for keeping this process healthy.

We at Spotahome try to launch all of them as an experiment (normally an A/B test), always with a control group, so we can attribute the measured effect to the specific feature launched. Once the effect is measured, we can easily compare it with the initial hypothesis and extract learnings for future initiatives. The error can be calculated with the same formula as above:

Hypothesis error (%) = ( final effect (€) - hypothesized effect (€) ) / hypothesized effect (€)
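
Both ratios share the same shape, so one helper covers them. A minimal sketch, with invented final values just to show the sign convention:

    # Relative error of an estimation or a hypothesis (same formula for both).
    def relative_error(final_value, predicted_value):
        return (final_value - predicted_value) / predicted_value

    # An initiative estimated at 100 € that finally cost 150 €:
    print(f"{relative_error(150, 100):+.0%}")   # +50%: we underestimated the cost
    # A hypothesis of 1.000 €/month that actually delivered 800 €/month:
    print(f"{relative_error(800, 1000):+.0%}")  # -20%: we overestimated the impact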

FREE GIFT if you read until the end

Thanks a lot for taking the time to read this article. We hope it was understandable, useful, or inspiring for building your own mechanism for work prioritization.

We know this is not a simple process, so we internally created a spreadsheet to manage all these priorities; we call it the Prioritization Framework. And we want to share its template with you.

Sharing is caring! So please feel free to share any feedback, doubts, or ideas in the comments!
Happy prioritization!
