Math for Marketers: Rover’s Overbooking Problem (Part 3)

We covered how to handle the problems of overbooked dog sitters in Part 1 and elaborated on it a bit in Part 2. In those previous posts, however, we examined fairly simple, static scenarios where we only had to decide between two sitters. Here we’ll cover how to handle any number of sitters, with any number of available spaces, existing inquiries, booking rates, and even different revenue/sitting fees.

Doing all this, however, requires some coding knowledge. I use Python, as friends recommended I pick that one up first, and I highly encourage anyone else who works on complex analytical problems–yes, even fellow marketing people–to learn it as well.

Now imagine a scenario where we have to sort three sitters: Rob, Jane, and Rose. When we were looking at only two sitters in Parts 1 and 2, we could simply compare the expected values of sending the next lead to one sitter vs. the other. With more than two sitters, however, pairwise comparisons quickly become inefficient, so instead we’ll look at Marginal Expected Value: how much additional expected value can we expect from sending the next lead to a given sitter? We’ll calculate that by taking the expected value from sending the next lead to the sitter and subtracting the expected value the sitter would have had with only her initial set of leads (i.e. if we hadn’t sent over the next lead):

EVMarginal = EVAdditional – EVInitial

 

Once we’ve run that calculation for each sitter, we’ll sort the sitters from greatest Marginal Expected Value to least.

Now on to the script, which you can find here on GitHub.

We begin by importing Python’s ‘math’ module.


Then we set up a list of dictionaries, ‘sitter_info’.


Each dictionary within sitter_info contains the following data for one sitter:

  1. Name
  2. Spots – the number of open spots/dogs the sitter is still able to take in
  3. Leads – the number of requests/inquiries for sitting the sitter still has active
  4. Rev – the revenue associated with the stay
  5. Bookrate – the sitter’s booking rate (the odds that the sitter can convert a lead into a stay)

In this case, we’re working with three sitters. Rob has 2 spots, 10 leads, $40 revenue per stay, and a booking rate of 40%. Jane has 1 spot, 4 leads, $40 revenue per stay, and a booking rate of 33%. Rose has 3 spots, 6 leads, $40 revenue per stay, and a booking rate of 25%.
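Based on that description, the setup might look something like this (a sketch, since the original code isn’t shown; note that Rose’s revenue is assumed to be $40 per stay, because a $0 fee would make her marginal expected value zero, contradicting the results below):

```python
# One dictionary per sitter, with the five fields described above.
sitter_info = [
    {"name": "Rob",  "spots": 2, "leads": 10, "rev": 40, "bookrate": 0.40},
    {"name": "Jane", "spots": 1, "leads": 4,  "rev": 40, "bookrate": 0.33},
    {"name": "Rose", "spots": 3, "leads": 6,  "rev": 40, "bookrate": 0.25},  # rev assumed $40
]
```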

We’ve also set up an empty dictionary called ‘sitter_marginal_ev’ that we’ll use for entering each sitter’s name and Marginal Expected Value.


Then we’ll loop through the list sitter_info to calculate each sitter’s marginal EV. We set up the ‘for’ loop here and define the variables we’ll use in our calculations:


Note that we use the term/variable ‘wins’ for booked stays.

Within the ‘for’ loop, we’ll run a couple of while loops to get both EVInitial and EVAdditional. Starting with EVInitial, we set up this ‘while’ loop:


The loop begins by defining ‘effective_stays’ in the ‘if’ statement. We use this variable to reflect the limit on revenue placed by the maximum number of vacancies a sitter has. For example, even if a sitter gets more than two people wanting to book a stay, if she only has two vacancies, then she can only book two stays.

In each iteration of the ‘while’ loop, we calculate the expected value from the sitter being able to book 0, 1, 2, 3, and so on stays, up until we’ve calculated the expected value of the sitter booking as many stays as she has leads (‘initial_leads’). This expected value then gets added to the variable ‘ev’.

After iterating through all the possible numbers of stays and adding up the expected values for the initial set of leads, we then run a similar ‘while’ loop to get the expected value if the sitter gets the next lead. In other words, we find EVAdditional which we define as ‘ev2’:
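The calculation those two ‘while’ loops perform can be sketched as a single helper function (a reconstruction from the description above, not the original code; `math.comb` supplies the binomial coefficients, and `min(wins, spots)` plays the role of the ‘effective_stays’ cap):

```python
import math

def expected_value(leads, spots, rev, bookrate):
    """Expected revenue: loop over every possible number of booked
    stays ('wins'), capping stays at the sitter's open spots."""
    ev = 0.0
    wins = 0
    while wins <= leads:
        # A sitter can't book more stays than she has open spots.
        effective_stays = min(wins, spots)
        # Binomial probability of exactly `wins` of `leads` converting.
        prob = (math.comb(leads, wins)
                * bookrate ** wins
                * (1 - bookrate) ** (leads - wins))
        ev += prob * effective_stays * rev
        wins += 1
    return ev

# Jane: 1 spot, 4 leads, $40 per stay, 33% booking rate.
ev_initial = expected_value(4, 1, 40, 0.33)     # EV with her initial leads
ev_additional = expected_value(5, 1, 40, 0.33)  # EV if we send her one more
marginal_ev = ev_additional - ev_initial
```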


Then we calculate Marginal Expected Value:


And then we populate the dictionary ‘sitter_marginal_ev’ that we defined back near the beginning of this script:


Finally we sort the sitters by their Marginal Expected Value and print the results:


In this case, we find that we’ll get the highest Marginal Expected Value from guiding the lead toward Rose, who, despite having the lowest booking rate, has the lowest ratio of leads to vacancies. Jane comes in second place, and Rob in last.
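Putting the whole walkthrough together, the script might be reconstructed like this (a sketch based on the steps described above, not the original GitHub code; Rose’s revenue is assumed to be $40 per stay):

```python
import math

sitter_info = [
    {"name": "Rob",  "spots": 2, "leads": 10, "rev": 40, "bookrate": 0.40},
    {"name": "Jane", "spots": 1, "leads": 4,  "rev": 40, "bookrate": 0.33},
    {"name": "Rose", "spots": 3, "leads": 6,  "rev": 40, "bookrate": 0.25},  # rev assumed $40
]

sitter_marginal_ev = {}

def expected_value(leads, spots, rev, bookrate):
    """Expected revenue over every possible number of booked stays,
    with stays capped at the sitter's open spots."""
    ev = 0.0
    for wins in range(leads + 1):
        effective_stays = min(wins, spots)
        prob = (math.comb(leads, wins)
                * bookrate ** wins
                * (1 - bookrate) ** (leads - wins))
        ev += prob * effective_stays * rev
    return ev

for s in sitter_info:
    ev_initial = expected_value(s["leads"], s["spots"], s["rev"], s["bookrate"])
    ev_additional = expected_value(s["leads"] + 1, s["spots"], s["rev"], s["bookrate"])
    sitter_marginal_ev[s["name"]] = ev_additional - ev_initial

# Sort the sitters from greatest Marginal Expected Value to least.
ranked = sorted(sitter_marginal_ev.items(), key=lambda kv: kv[1], reverse=True)
for name, marginal_ev in ranked:
    print(f"{name}: ${marginal_ev:.2f}")  # Rose first, then Jane, then Rob
```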


If you were to arrange them in search results, this is probably how you’d want to do it.

The Unbearable Lightness of Lead Scores

What does it really mean that a lead now has 100 points? 20 points? Is the 125-point lead 25% more likely to convert than the 100-point lead? Is the 100-point lead 100X more likely to convert than a 1-point lead?

If you’re assigning these kinds of point scores to your leads and you can’t answer those questions, you may be doing lead scoring the wrong way. Perhaps it’s because you’ve seen all the other marketers doing it this way, and you think it must be this way. But must it be this way? Does the chorus of marketing angels sing out, “Es Muss Sein!” (It Must Be!) when you add arbitrary numbers of points to leads based on traits or activities whose effect on likelihood of conversion you might not have a damned clue about?

NO!

I know it may seem simpler to continue using this sort of points system. That’s what tools like Marketo and Eloqua support. But let’s not be slaves to the constraints of Marketo and Eloqua. Don’t let sometimes garbage-y software tell us what to do! Instead of saying “action A earns a prospect X points”, try thinking of it in the following way:

“Leads that performed action A (or a set of actions ‘A’) have had a conversion rate of X%”

“Leads that performed Action B had a conversion rate of Y%”

“Leads that didn’t perform either Action A or Action B had a conversion rate of Z%”

It’s fairly basic segmentation following analysis. It might not have the sexiness of “lead scoring”, but it serves the same purpose and it does the job more effectively. Instead of an arbitrary points system, you can rank your new segments of leads by their odds of conversion, and you can choose which segments convert at a high enough rate to be worth sending over to your Sales team.
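To make that concrete, here’s a toy sketch of the segmentation (the lead records, action names, and resulting rates are all made up for illustration):

```python
# Hypothetical lead records: actions performed and whether the lead converted.
leads = [
    {"actions": {"A"}, "converted": True},
    {"actions": {"A"}, "converted": False},
    {"actions": {"B"}, "converted": False},
    {"actions": {"A", "B"}, "converted": True},
    {"actions": set(), "converted": False},
    {"actions": set(), "converted": True},
]

def conversion_rate(segment):
    """Share of leads in the segment that converted."""
    return sum(lead["converted"] for lead in segment) / len(segment)

# Segment leads by the actions they performed, then rank by conversion rate.
did_a = [lead for lead in leads if "A" in lead["actions"]]
did_b = [lead for lead in leads if "B" in lead["actions"]]
neither = [lead for lead in leads if not lead["actions"]]

print(f"Performed A: {conversion_rate(did_a):.0%}")
print(f"Performed B: {conversion_rate(did_b):.0%}")
print(f"Neither: {conversion_rate(neither):.0%}")
```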

You can even call it a lead score to humor any of your marketing coworkers who like to leverage nonsensical business jargon. Or maybe armed with this new knowledge you can bring about a paradigm shift in their thinking.

Math for Marketers: Rover’s Overbooking Problem (Part 2)

Previously we looked at a simple version of Rover’s overbooking problem. Once again, a disclaimer that this isn’t necessarily how Rover’s algorithm works, it’s just a framework that can help you think through how to solve this type of problem.

So let’s ratchet up the degree of difficulty by assuming that our dog sitters are in the following situation:

Jane: 7 requests, 2 vacancies, 40% booking rate (60% non-booking rate), $30 revenue per booked stay

Rob: 5 requests, 2 vacancies, 15% booking rate (85% non-booking rate), $30 revenue per booked stay

When we were working with sitters with single vacancies, we really only had to worry about the probability of absolutely none of the dog owners agreeing to book a stay. No matter what the number of inquiries was, there’s always only one way to get that outcome–same as, for example, there’s only one way to roll snake eyes with dice. For every other outcome where you have one or more agreeing to a stay, there are multiple ways of getting that combination–sticking to the previous example with dice, there are multiple ways of rolling a five or six or seven and so on with two dice.

As you probably didn’t come here looking for a rehash of your junior high or high school math, I’ll refer you to Wikipedia’s page on Pascal’s Triangle, particularly the section on Combinations. From there you’ll learn how to get the number of ways that 0, 1, 2, 3, 4, 5, 6, 7, or 8 people can agree to a stay out of a set of 8 inquiries (Jane’s case, if we sent her the next lead). Where n = the number of inquiries and k = the number agreeing to a stay, use the following formula:

C(n, k) = n!/(k!(n – k)!)

You learn that there are 1, 8, 28, 56, 70, 56, 28, 8, and 1 ways of getting 0, 1, 2, 3, 4, 5, 6, 7, or 8 people agreeing to a stay, respectively.
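If you’d rather not count by hand, Python’s built-in `math.comb` gives the same numbers:

```python
import math

# Number of ways that k of 8 inquiries can convert, for k = 0 through 8.
counts = [math.comb(8, k) for k in range(9)]
print(counts)  # [1, 8, 28, 56, 70, 56, 28, 8, 1]
```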

Now that we know how many combinations are possible, we have to find out the probability of getting one of those combinations.

The probability of 0 people booking a stay is 0.6^8 * 1. We multiply by 1 because there’s only one combination of 0 booked.

The probability of 1 person booking a stay is ((0.4^1) * (0.6^7)) * 8. We multiply by 8 at the end because there are 8 combinations of 1 person booking.

The probability of 2 people booking a stay is ((0.4^2) * (0.6^6)) * 28. We multiply by 28 because there are 28 combinations of getting 2 people booking.

To avoid being too repetitive, you can generalize this as follows…

k = Number of People Booking A Stay — a variable that we will work through

n = Total Inquiries — 8 in Jane’s case

P(B) = Probability of Booking — 40% in Jane’s case

P(!B) = Probability of Not Booking — 60% in Jane’s case

C(n, k) = The number of combinations that result in “k” people booking a stay → n!/(k!(n – k)!)

 

Probability(k) = (P(B)^k) * (P(!B)^(n – k)) * C(n, k)

And from there we would calculate expected value (EV) by taking the probability of each combination and multiplying that by the revenue from that combination. So you’d get something like:

EV(0) = P(0) * $0 = 0.0168 * $0

EV(1) = P(1) * $30 = 0.0896 * $30

EV(2) = P(2) * $60 = 0.2090 * $60

EV(3) = P(3) * $60 = 0.2787 * $60 (Stays at $60 because Jane can only book a max of two stays)

EV(4) = P(4) * $60 = 0.2322 * $60

EV(5) = P(5) * $60 = 0.1239 * $60

EV(6) = P(6) * $60 = 0.0413 * $60

EV(7) = P(7) * $60 = 0.0079 * $60

EV(8) = P(8) * $60 = 0.0007 * $60

 

You would then add those all up to get your total expected value, which works out to about $56.30.

If we sent the next lead to Jane–who had 7 leads, but would get bumped up to 8–her expected value would be $56.30.

Sending the next lead to Jane means not sending the next lead to Rob, so he stays at 5 leads. Running the same calculation for Rob, we find that his expected value is $21.63.

Adding up both Jane’s and Rob’s expected value gets us to ~ $77.94.

What about doing this the other way, where we send the next lead to Rob (bringing him to six leads) and not to Jane (leaving her at 7 leads)? A slightly higher expected value of approximately $79.79. In this scenario, it would be slightly better to steer the lead toward Rob rather than Jane.
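If you want to check those numbers, here’s a quick Python sketch of the arithmetic above (my own, not from any Rover script):

```python
from math import comb

def expected_value(leads, spots, rev, bookrate):
    """Sum of P(exactly k conversions) * revenue, with booked stays
    capped at the sitter's open spots."""
    return sum(
        comb(leads, k) * bookrate**k * (1 - bookrate) ** (leads - k)
        * min(k, spots) * rev
        for k in range(leads + 1)
    )

jane = dict(spots=2, rev=30, bookrate=0.40)
rob = dict(spots=2, rev=30, bookrate=0.15)

# Scenario 1: the next lead goes to Jane (8 leads); Rob stays at 5.
to_jane = expected_value(8, **jane) + expected_value(5, **rob)
# Scenario 2: the next lead goes to Rob (6 leads); Jane stays at 7.
to_rob = expected_value(7, **jane) + expected_value(6, **rob)

print(f"Steer to Jane: ${to_jane:.2f}")  # ~$77.94
print(f"Steer to Rob:  ${to_rob:.2f}")   # ~$79.79
```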

These sorts of calculations for fairly static scenarios can be done somewhat easily in Excel, and hopefully I’ll get around to attaching some Excel files that you can use as a template. The real fun, however, is in being able to apply these lessons to an infinite number of scenarios. That’s going to be too tough of a job for Excel or SQL, however, so in the next post on this topic we’ll cover how to do this in Python.

(UPDATE: here’s Part 3 and how to do this in Python)

Math for Marketers: Rover’s Overbooking Problem (Part 1)

In a world with too few leads, you send your leads to the people who are most likely to close them. In Rover.com’s case, we did this by placing our best sitters (determined algorithmically) at the top of search results.

This worked great during normal times, but it produced sub-optimal results during peak holiday seasons when we had many more dog owners seeking sitters than our top sitters could handle.

One little disclaimer: what follows is merely a theoretical framework on how to deal with this sort of problem. Maybe Rover implemented a solution just like this, maybe they did something a little bit different. I’m not about to divulge such secrets to you! But I think you still might find this problem solving framework useful.

 

Problem: Too Many Dogs, Not Enough Sitters

Imagine this scenario: it’s the week before the 4th of July and you go to Rover.com to search for a sitter for the long weekend. Rover, looking to maximize the likelihood of you booking a stay, serves up sitters ordered from most likely to least likely to close the deal.

You then click on one of the sitters at the top of the results, maybe the sitter at the top. You submit a request for sitting, and you think, “Great, someone will be able to look after my dog this weekend.” Unbeknownst to you, however, 30 other people also submitted requests. And the sitter only has room for one more dog. Let’s say that on average this sitter books 40% of her requests. Think you’ll still have a sitter? I’ll do that math for you: you only have a 1 in 4.5 million chance of none of those other people booking the stay.

This created a bad experience for every stakeholder in the Rover universe.

  • Rover missed out on revenue from dog owners not being able to book a stay
  • Dog sitters who had availability missed out on a chance to make some money
  • Dog owners could potentially get their weekend plans ruined by not having a sitter

 

Thankfully, with a little bit of math (and when you get to some of the more advanced versions of this problem, a little bit of Python or some other scripting language) you can devise a framework for solving this problem.

 

Solution 1: Set an Arbitrary Threshold, Hide The Inundated Sitters

Probably the simplest approach would be to set some threshold where if a sitter has X times as many open inquiries as available spots, don’t show her in the search results.

You can just arbitrarily declare “if a sitter has 5 times as many leads as available spots, don’t show her”, though perhaps your boss might feel slightly more comfortable with a slightly less arbitrary declaration like “if there’s a 90% or greater chance that the sitter will be unavailable, don’t show her”.

To find out whether there’s a 90% (or whatever number you decide on) chance of this, you’ll have to do some math.

Let’s look at a sitter, Jane, who books stays on 40% of her inquiries, therefore there’s a 60% chance that any given inquiry does not result in a stay. Let’s also assume that Jane only has room for one more dog.

To calculate the probability that none of the existing inquiries (leads) convert, thereby preventing the next dog owner from being able to book a stay, you would do the following calculation:

Probability of Vacancy = (Probability of Non-Stay)^(Number Of Leads)

In our sitter Jane’s case, let’s say she has 5 inquiries:

Probability of Vacancy = (60%)^5 = 7.78%

So there’s a 7.78% chance Jane will be available, meaning there’s a 92.22% chance she’ll be unavailable. That’s above the 90% threshold that we set arbitrarily, so you would exclude this sitter from the search results.
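That check is a one-liner in Python:

```python
# Jane: 60% non-booking rate on each inquiry, 5 active inquiries.
non_booking_rate = 0.60
leads = 5

# Probability that none of the 5 existing inquiries converts,
# leaving her single spot open for the next dog owner.
prob_vacancy = non_booking_rate ** leads
print(f"{prob_vacancy:.2%}")  # 7.78%
```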

But there’s that “arbitrary”/“arbitrarily” word popping up again. And while this method may be an improvement, it might still produce sub-optimal outcomes.

 

Solution 2: An Expected Value Approach

Imagine that there are only two sitters available for the weekend: Jane and Rob. We established earlier that Jane has a 40% chance of booking a stay (or as we’ll treat it below, a 60% chance of not booking a stay). Rob, however, doesn’t book at as high a rate. It may be that he’s just not as appealing a sitter, or perhaps he’s super picky about which dogs he chooses to let stay at his place. Either way, he has a lower conversion rate of 15% (or, again as we’ll treat it below, an 85% chance of not booking a stay).

Given Rob’s lower conversion rate, we rank him below Jane and place him lower in the search results during normal times. But the week before a major holiday isn’t a normal time. So do we steer the visitor toward Jane or to Rob? For simplicity’s sake, let’s assume both would produce $30 in revenue from booking a stay.

Expected Value to Rover (Steer to Jane) = Expected Value from Jane + Expected Value from Rob

= $30 * (1 – (0.6^5)) + 0

= $27.67

If the math looks strange to you, here’s what we did:

  1. Start with the revenue associated with the stay ($30 in this case)
  2. Calculate the odds of at least one of the five dog owners (the first four that Jane already had, and the fifth that would come in if we steered this next owner her way) being able to come to an agreement with Jane – in this case you start with 100% (or “1”) and subtract the odds of none of them agreeing to a stay (0.6^5). As written above: (1 – (0.6^5)).
  3. Multiply the revenue associated with the stay by the odds that at least one of the dog owners agrees to a stay to get your expected value: $30 * 0.9222 = $27.67
  4. You add that to the expected value from not steering anyone to Rob: $0.

 

Having established that the expected value of steering traffic to Jane is $27.67, let’s run the same analysis for steering the visitor to Rob.

Expected Value to Rover (Steer to Rob) = Expected Value from Jane + Expected Value from Rob

= ($30 * (1 – (0.6^4))) + ($30 * (1 – (0.85)^1))

= $30.61

Walking through that problem:

  1. Assume revenue for both sitters is $30
  2. Get Jane’s expected value with four leads rather than five: $30 * (1 – (0.6^4)) = $26.11
  3. Get Rob’s expected value with one lead: $30 * (1 – (0.85)^1) = $4.50
  4. Add the two together: $26.11 + $4.50 = $30.61

The expected value of steering this next visitor to Rob is $30.61, which is greater than the $27.67 we’d have gotten if the visitor went to Jane. In practical terms, what we would want to do is either exclude Jane from the search results, or at least place her below Rob–either way, do something to make it more likely that our visitor chooses Rob rather than Jane. Reminder: despite his lower conversion rate on average, in this case it makes more sense to send the lead to Rob rather than Jane.
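Both scenarios can be verified with a few lines of Python (a sketch of the arithmetic above):

```python
def ev_single_spot(leads, rev, bookrate):
    """Expected value for a sitter with one open spot:
    revenue times the odds that at least one lead converts."""
    return rev * (1 - (1 - bookrate) ** leads)

# Steer the next visitor to Jane (5 leads for her, none for Rob).
steer_to_jane = ev_single_spot(5, 30, 0.40) + 0
# Steer the next visitor to Rob (1 lead); Jane keeps her 4 leads.
steer_to_rob = ev_single_spot(4, 30, 0.40) + ev_single_spot(1, 30, 0.15)

print(f"Steer to Jane: ${steer_to_jane:.2f}")  # ~$27.67
print(f"Steer to Rob:  ${steer_to_rob:.2f}")   # ~$30.61
```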

That was a fairly simple problem. If you’re still following along, next we’ll take a look at how to solve this when you’re dealing with sitters who have not only multiple inquiries, but also multiple vacancies. That will require brushing up on some of your probably long-forgotten math and re-learning things like Pascal’s Triangle and combinations. Totally worth it if you’re like me and you believe not just that the leads must flow, but that they must flow to the people who are going to help you generate the most revenue.

Continue to Part 2

The Wrong Way to Do Lead Scoring

First, the right way: make sure you have data on prospect behaviors or demographic traits, as well as conversion data. Run analyses to determine whether prospects with a behavior/action or demographic trait converted at a higher rate than those who didn’t. Easy. (Not really, but not as scary as you might think)

Now, the wrong way, maybe it’s the way that your boss told you to do it or maybe it’s something you picked up from Marketo’s or Eloqua’s forums:

Step 1: Identify the actions that your prospects can take (e.g. visit your website or particular sections of your site, open an email, or attend a webinar)

Step 2: Assign a number of points to each action. Probably mostly arbitrary numbers.

Step 3: Give prospects some arbitrarily defined points each time they perform one of your arbitrarily chosen actions.

Step 4: Make further arbitrary distinctions by declaring a lead good and ready for sales when it has earned an arbitrary number of points.

Step 5: Pat yourself on the back and go to happy hour.

Sadly, this is how most places seem to do lead scoring, and it’s a terrible way to do things. Full disclosure: while I pride myself on generally not doing terrible things, I’ve fallen into the trap of doing lead scoring this way in the past, so don’t beat yourself up over it. Even if you did things the wrong way, there’s probably still a way to make use of all that work in Steps 1 through 4. And when you do it right, you can still pat yourself on the back and go to happy hour (Step 5). All you have to do is make sure you have all this prospect data stored somewhere, and that you can link those prospects to conversions (however your org defines them).

Data You Will Need To Do This The Right Way

  • Prospect actions and dates (preferably with times) that they took each action
  • Date (preferably with time) that the prospect was sent over to Sales
  • Date that the prospect converted (if it converted at all)

Once you have that database of prospect actions and conversions ready to go, run some simple analyses. Did prospects who attended webinars prior to getting sent over to Sales convert at a higher rate than those who didn’t? Did prospects who visited the website more frequently convert at a higher rate? Did prospects who got to particular pages convert at a higher rate than those who didn’t visit those pages?

Some may think, well, OBVIOUSLY those segments converted at a higher rate. However, that’s not necessarily the case. In my own experience at PitchBook, some of the signals that marketers might think of as being signs of great leads actually weren’t. Some examples:

1. More Visits / Pageviews → Higher Interest? → Higher conversion?

FALSE! It turned out that some people were just visiting the site frequently because they loved the content from PitchBook’s outstanding free newsletter. They liked reading our news but had no need of our product.

And over the years, these prospects just kept visiting more and more and reading our free articles, racking up more and more points, but honestly never getting any more interested in our product.

What we ended up finding was that prospects who had fewer visits (but not zero) actually converted at a higher rate than those who had more visits.

2. Visiting Key Pages/Product Information Pages

FALSE! Sort of. At one point in time we found that leads who visited our “product” pages, rather than our News pages with our free articles about the VC and Private Equity industry, did convert at a higher rate than those who had only been to News pages. That follows the intuitive line of thinking. Over time, however, it changed: some of the Product page visitors converted at a worse rate.

“Some” visitors, because the prospects who filled out a lead form asking to get contacted of course continued to convert at a high rate. It was the remaining prospects–the ones who hadn’t filled out a form but visited the key pages–who were now converting at a lower rate.

Why? It’s hard to say for sure, but our guess is that over time we became better at getting visitors to fill out forms. As our conversion rate optimization efforts improved, anyone who might have been even a little bit interested in our product became more likely to fill out a form. (Note: if someone filled out a form, they got sent to Sales; we were left figuring out what to do with the ones who hadn’t.) The remaining visitors that we could draw on to send to Sales weren’t interested enough to fill out a form even though we’d made it easier for them, so it followed that they made for bad leads when we tried to send them over. It was sort of a negative survivor bias: prior to our conversion rate optimization efforts, our pool of potential leads still contained a decent number of borderline-interested prospects who might convert. After improving our site, though, the prospects who “survived” our attempts to turn them into inbound leads were the ones who frankly just didn’t find our offer very compelling.

Of course, you may see something different. It could be that conventional wisdom works in your case, though that may just be because you’re not only bad at lead scoring and marketing analytics, but also bad at conversion rate optimization. But if you are better than that or if you want to become better than that, if you care to find the truth and if you care as much about making the best leads flow to your sales team as I do, find the time to do these analyses and run these tests. Your Sales team might even thank you for sending them slightly less garbage-y leads.

Tears In The Rain

I started working in Marketing in 2003, and I’ve seen some leads you people wouldn’t believe…

Generally speaking, I despise marketing “thought leadership”, but I’m a great big ol’ hypocrite so I’m going to do something approximating that here.