How to test a new pricing model without changing anything

Have you ever wondered what would happen if we made a big change to pricing? I don't mean ending a few prices in $0.99 or $0.95. No, I mean changing everything!

Egad. What would happen? Either it'd be amazing and I'd keep my job, or I'd get fired! No thanks. That payoff matrix looks awful.

[Image: payoff matrix for changing prices]

Nobody wants to make a big, potentially disruptive change without knowing what will happen. Especially when the upside, keeping your job, is the same if you do nothing.

Good news: you can test a model without changing anything. Here's the recipe, with real numbers. And it changes this matrix completely.

Step 1: Assemble the required ingredients

To get started, you'll need:

  • A new pricing model 
  • Some math smarts (or people with math smarts)
  • Your historical data, ideally both top of funnel (page views, proposals sent, etc.) and sales, linked together (that is, you can tell which views or proposals converted to purchases, and which didn't).

As a further requirement, your business must be one in which people pay different amounts as a matter of course. If you're Dollar Shave Club and always charge $1, this won't work (OK, they don't actually charge $1 for everything, and it might still work, but you get the idea). If you're eBay, this will work amazingly well.

Step 2: Load everything into your favorite statistical tool

We run a lot of Python scripts here, on data stored in Hadoop. You're welcome to use whatever you want: R works well; Tableau can also work; and even Excel is fine, as long as you don't have too much data.
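
Here's a minimal pandas sketch of that load-and-link step. The file and column names (quotes.csv, purchases.csv, quote_id, quoted_price, revenue) are hypothetical stand-ins for whatever your schema actually looks like:

```python
import pandas as pd

# Hypothetical file and column names -- swap in your own schema.
quotes = pd.read_csv("quotes.csv")        # one row per quote/proposal/view
purchases = pd.read_csv("purchases.csv")  # one row per completed sale

# Link top of funnel to sales: a left join keeps the quotes that didn't sell.
df = quotes.merge(purchases[["quote_id", "revenue"]], on="quote_id", how="left")
df["purchased"] = df["revenue"].notna()
df["revenue"] = df["revenue"].fillna(0.0)
```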

Step 3: Apply backtesting AB test methodology 

We're going to do an AB test without changing anything. What? Was your mind just blown?

[GIF: mind-blowing watermelon explosion]

That's right. It's Optimizely without changing a damned thing. No offense meant to Optimizely, a partner of ours, but you don't always have to change things to learn.

Instead, we're going to do a "what if," or backtest-style, AB test. We'll compare our past quotes and the prices paid to what our model would recommend, and separate them into 3 groups:

[Image: the three buckets for the backtest AB test: quotes priced Lower than, Similar to, and Higher than the model's recommendation]

By comparing Similar Prices to the performance of Lower and Higher prices, we can understand what would likely have happened if our model had been in place and all prices had been what the model recommends. That is, the "what if" actually happened: if we had been running the model, only the Similar Prices would have been available. So, if they outperform the much different prices, we can say our new model is much better. And, by the way, you can do this when moving from one model to the next, not just when moving from unintelligent prices to artificially intelligent ones.
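
To make the bucketing concrete, here's a sketch continuing the DataFrame above. It assumes you've scored every historical quote with the new model (a hypothetical model_price column), and the ±10% band for "Similar" is purely illustrative; tune it to your own price dispersion:

```python
import numpy as np

# How far was each historical price from the model's recommendation?
rel_diff = (df["quoted_price"] - df["model_price"]) / df["model_price"]

# Three buckets: Low, Similar, High. The 10% band is illustrative only.
TOLERANCE = 0.10
df["bucket"] = np.select(
    [rel_diff < -TOLERANCE, rel_diff > TOLERANCE],
    ["Low", "High"],
    default="Similar",
)
```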

Of course, this assumes you have historical pricing variability, and that it is not correlated with or entirely explained by a variable such as what your competitors are charging at that moment (in retail) or the time to expiration (in, say, event ticketing). If it is, you'll want to look closely and understand how that variable impacts the data. The analysis is still quite possible; it just gets a little more complicated.
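
One quick sanity check here is a plain correlation between your historical prices and the suspect variable (column names hypothetical, continuing the same sketch):

```python
# Near-zero correlations mean the simple three-bucket comparison is on
# safer ground; strong correlations mean the confounder needs modeling.
print(df["quoted_price"].corr(df["competitor_price"]))
print(df["quoted_price"].corr(df["days_to_expiration"]))
```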

Step 4: Get results

We run the comparison in our statistical program. I'll admit, this is the tricky part. So, yadda yadda, now you have results to analyze.

Yes, I "yadda yaddad" over building a model. That's what data scientists are for (you might want to go talk to yours about this after you finish reading). 

Here's what the results might look like for you (yes, these are real results, anonymized–see note at the end):

Low       : 73,204 ± 3,614.45 purchases, $363,450 ± 185,610
Similar   : 527,251 ± 7,661.32 purchases, $1,466,200 ± 199,160 << This is how our model would do
High      : 327,005 ± 0.00 purchases, $1,104,000 ± 0.00

In other words, for prices similar to our model's recommendations, we made almost $1.5 million in this example. That's much better than either low or high prices: specifically, $362,000 better than high prices and $1.1 million (wow!) better than low prices. Ouch. (Note that the number of example items in each bucket is intentionally the same, so you can compare the results directly.)

Let's break this down: 

Low       : 73,204 ± 3,614.45 purchases, $363,450 ± 185,610

Low = the low priced group

73,204 = how many purchases occurred in this group

± 3,614.45 = the standard deviation on purchases. That is, if you reran this on a different sample, it might differ a bit, but not too much.

$363,450 = how much money was made (you can run this against anything–revenue, profit, whatever your objective function is). 

± 185,610 = standard deviation in dollars. 

When doing the comparison, we want to see low standard deviations, to be sure we have a large enough sample. We also want to be sure the ranges don't overlap: that is, the low price group's result plus its standard deviation should not be greater than the similar group's result minus its standard deviation. Overlap would mean there's a significant chance the result is meaningless (due to randomness) and that there may in fact be no difference in performance. That would mean you learned nothing, which is almost worse than learning that your new model hurt your business!
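
Here's a sketch of how those per-bucket totals and standard deviations might be computed, continuing the DataFrame above. It downsamples each bucket to the same size (as in the note above) and uses a bootstrap for the "rerun on a different sample" variability; treat it as an illustration, not our exact production code:

```python
import numpy as np

# Equal-sized buckets, so the totals are directly comparable.
n = df["bucket"].value_counts().min()
balanced = df.groupby("bucket", group_keys=False).sample(n=n, random_state=0)

def bucket_summary(group, n_boot=1000, seed=0):
    """Total purchases and revenue, with bootstrap standard deviations."""
    rng = np.random.default_rng(seed)
    purchased = group["purchased"].to_numpy()
    revenue = group["revenue"].to_numpy()
    idx = rng.integers(0, len(group), size=(n_boot, len(group)))
    return {
        "purchases": int(purchased.sum()),
        "purchases_sd": purchased[idx].sum(axis=1).std(),
        "revenue": revenue.sum(),
        "revenue_sd": revenue[idx].sum(axis=1).std(),
    }

summary = {name: bucket_summary(g) for name, g in balanced.groupby("bucket")}

# The non-overlap check: Low's upper bound should sit below Similar's lower bound.
low, sim = summary["Low"], summary["Similar"]
print(low["revenue"] + low["revenue_sd"] < sim["revenue"] - sim["revenue_sd"])
```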

In sum, the new pricing model would be expected to increase revenue by over $1.1 million (roughly 4x, a 300% increase) versus underpriced products, and by $362K (roughly a 33% increase) versus high priced products.

Step 5: Try different angles

If you can segment very differently, do that. For example, try the absolute value of the price difference instead of low and high (just 2 buckets: Similar and Different), as in the sketch below. If prices vary with time, averaging, or using different averaging methodologies, can be worthwhile. With this approach, it shouldn't be necessary to address externalities unless there is a high correlation between them and pricing behavior (like you only show high prices on weekends, or in the evenings, or when it's raining), in which case you should definitely address that.
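
The two-bucket variant, continuing the same sketch:

```python
# Two buckets: Similar vs Different, using the absolute relative gap.
rel_gap = ((df["quoted_price"] - df["model_price"]) / df["model_price"]).abs()
df["bucket2"] = np.where(rel_gap <= TOLERANCE, "Similar", "Different")
```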

Step 6: You're done, for now

You'll know you're done when you are confident that your model is, indeed, better than what came before. The numbers above are a pretty solid example: who doesn't want a 300% increase? Like, immediately? Drink a coffee or a Mountain Dew, think about it, chew on it, and when you're done... you'll know it. You'll be ready.

Step 7: Roll out the new model, and do it again

Did Step 6 say "done"? Oops. You're never done; you're just done with this revision. First, push out the model you've just established is better than what you had before. Then watch it. And start working on a new model. Do the exact same thing with the new model as you did with this one.

Conclusion

It is possible to test major price changes without actually changing anything in real time. Using this backtesting methodology, by looking back over historical sales, you can establish whether a significant change to your prices would make you money, or cost you a fortune. This changes the payoff matrix from lose/lose/lose/not-really-win to win all around, because you'd never change your pricing strategy if you knew it would lose you money, would you?

[Image: payoff matrix for changing prices, after the test]

And wouldn't you be more likely to be fired if you sat around not changing your pricing strategy when you knew you were losing $1.1 million on the old strategy? 

Note on the data used: All numbers are real but have been changed in order of magnitude and rounded for anonymity. For simplicity, clarity, and to preserve anonymity, we're also leaving out some additional data that a statistician might consider important in this type of analysis.

Image credit: giphy.com
