BONUS EPISODE: Brandon Bateman's Exclusive Take - Using Data to Make Smarter Marketing Calls
Bonus episode! Usually this podcast features guests, but in this special episode we have an exclusive one-on-one interview with Brandon Bateman. Get ready for rare insight directly from the mind of our founder as we go deep on why most marketing decisions fail. Through colorful stories and analogies, Brandon breaks down core concepts like sample size, margin of error, and confidence intervals. You'll learn a simple yet powerful framework to test whether your marketing data is statistically significant (protip: it probably isn't). The goal is to help you anchor your decisions in cold hard data, not emotions or gut feel. If you like frank conversations about entrepreneurship and marketing, you'll dig this rare solo episode with exclusive access to our CEO.
0:00 Intro - The value of understanding data
1:30 Defining key terms - performance vs results
4:30 Coin flipping analogy
7:00 Marketing performance vs marketing results
9:20 Margin of error and confidence intervals
15:17 Don't let emotions drive marketing decisions
19:27 Anchoring to the data - Being a truth warrior
23:00 Concluding thoughts and call to action
Crack the code on real estate marketing success today. Listen now.
Hello and welcome back to another episode of the Collective Clicks podcast. This is your host, Brandon Bateman, and today I'm going to be doing something a little bit different. I don't have a guest today; it's just me, and I want to share some concepts that have changed a lot of how I look at marketing and business, including the way I make day-to-day decisions. This episode covers definitions of key terms and some concepts and math—basically, all that boring stuff you probably learned once upon a time but might not be fully applying right now. For some of you, this might feel like a slap in the face; for others, it could be a "come to Jesus" moment where you realize which of your current beliefs are hurting your business. I hope that as we unpack what I've learned recently and go over these definitions, we can all gain a better understanding, grow together, and make better marketing decisions.
A lot of what I want to talk about today has to do with sample size. Most people understand that a larger sample size is more representative of the population, which is absolutely true. But I want to add some additional definition and clarity to this concept by defining key terms—performance versus results. Understanding the difference between these two concepts can be very helpful.
So, let’s define each of these quickly to ensure we have a common understanding of the terms I'll be using today. This will help us learn better how they affect our decisions.
First, "result." This is obviously something that I care about a lot, and something you probably care about a lot when it comes to marketing in your business. A result is a consequence, effect, or outcome of something. It’s basically what happened—that’s simple. Many people think that performance is the same thing, but it’s not. If we look at the definition of performance, it is the action or process of carrying out or accomplishing a task or function. Another definition is the capabilities of a machine, vehicle, or product, especially when observed under particular conditions.
I think that’s interesting because performance is basically the thing that happens before the result. It’s the underlying factor that leads to the result. For example, let’s talk about flipping a coin. If I flip a coin over and over again, the performance of the coin in terms of getting heads versus tails is that it should give me heads 50% of the time and tails 50% of the time. I know that even before I flip the coin because I understand how coins work. This is innate to the system, not necessarily what happened.
Whereas a result is the measurement of that. So, if I flip the coin four times and get three heads and one tail, what does that tell me? The performance of the coin is 50% heads, yet the result is 75% heads. Notice that performance and results aren’t always the same thing; sometimes they differ. It turns out that the longer you let it go, or in other words, the larger your sample size, the more the performance and results tend to come together. Performance will be roughly the same across the whole sample, and then results will approach it. So, theoretically, if you flip a coin a million times, it’s pretty likely that you will be very close to 50% heads and 50% tails. This is how results work with a larger sample size—they approach performance.
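Brandon's coin analogy is easy to check in code. Here is an illustrative sketch (mine, not from the episode): simulate flips and watch the observed share of heads—the result—close in on the coin's true 50% performance as the sample grows.

```python
import random

def heads_share(n_flips, p_heads=0.5, seed=42):
    """Flip a simulated coin n_flips times; return the observed share of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p_heads for _ in range(n_flips))
    return heads / n_flips

# Small samples swing wildly; large samples hug the true 50% performance.
for n in (4, 100, 10_000, 1_000_000):
    print(f"{n:>9,} flips -> {heads_share(n):.2%} heads")
```

At four flips, 75% heads is a perfectly ordinary result from a perfectly fair coin; at a million flips, anything far from 50% would be shocking.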
The real question is how long it takes for them to be the same thing. How this applies to our marketing is that we’re always spending money on marketing and wondering if we have enough data. It could be a day, a week, a month, six months, or even two years. Do we have enough data to know that this actually works? How many times do you have to flip a coin to know that it actually flips at 50%? And not just blindly believe it based on how coins are supposed to work.
It turns out you can never know that with complete certainty, because results and performance will never be exactly the same. Why? Because outcomes form a bell curve around the true 50% mark: about half the time you'll land above 50% heads and about half the time below. Even as the bell curve gets very tight with more flips, it never collapses to a single point. Therefore, you can't answer the question without introducing something called margin of error.
We start by asking a question that’s a solvable problem. For example, I could ask, “How many times do I have to flip a coin in order to know with 90% confidence that it gets heads at least 40% of the time?” That’s a solvable problem. I want to be able to prove, with 90% confidence, that even though the coin is targeted towards 50%, it’s at least getting heads 40% of the time. This means that the bell curve around the 50% probability just has to get tighter and skinnier until, at some point, over 90% of the bell curve is to the right of that 40% probability. I hope this makes sense. If not, I’ll explain the actual outcome and reality of what this means. It’s the same thing with marketing.
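That question is solvable with a standard sample-size calculation. Here is a sketch using a normal approximation (the function name and defaults are my own, chosen to match the episode's numbers):

```python
import math
from statistics import NormalDist

def flips_needed(true_rate=0.5, floor=0.40, confidence=0.90):
    """Flips required so that, if the coin truly lands heads at true_rate,
    the one-sided lower confidence bound clears the floor rate."""
    z = NormalDist().inv_cdf(confidence)               # ~1.28 for 90% one-sided
    spread = math.sqrt(true_rate * (1 - true_rate))    # per-flip standard deviation
    n = (z * spread / (true_rate - floor)) ** 2
    return math.ceil(n)

print(flips_needed())
```

Under these assumptions the answer is a few dozen flips—the bell curve around 50% only has to tighten until 90% of it sits to the right of 40%.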
Instead of aiming for 50% heads on our coin, let's assume we're looking for a 5x return on ad spend (ROAS). We want to spend $10,000 on ads and generate $50,000 in revenue—that's a 5x ROAS. Just like when we flip a coin, each dollar we spend has some chance of getting us what we want. For example, if you close $20,000 deals (typical for many markets; some of you deal in larger or smaller amounts depending on your exit strategy), then for the "coin" to be truly targeted at a 5x ROAS, each dollar you spend would need a 0.025% chance (5 ÷ 20,000) of turning into a $20,000 deal.
Marketing isn’t about spending money and getting an immediate return; it’s about each dollar spent having a chance of achieving the desired outcome. Over the long term, results and performance will come together. However, this is really scary for many statisticians because it involves sparse outcomes.
When flipping a coin, you have a 50% chance of heads and a 50% chance of tails. But when spending a dollar on marketing, you have a very small likelihood of success each time, which means you’re measuring a rare outcome. It’s challenging to model this.
Let’s apply the same concept. Say we want to be about 95% confident that we’re achieving at least a 4x ROAS; to do that, we need to account for margin of error. We work with 170 clients, so even if we run this analysis correctly, 95% confidence means roughly 5% of them—eight or nine clients—will still land on the wrong side of that statistic. Everyone says 95% confidence is solid, but I see the bad end of that statistic often. So we’re aiming to measure at least a 4x ROAS.
To understand this, we recently ran a simulation with a data scientist to determine how much money we need to spend on this marketing channel to have a good idea if it works. We conducted three billion trials to see how probability plays out and then reverse-engineered the results to determine the required budget.
How many times did we make that money, and how do the outcomes group together? From the simulation we can get down to a single number: how much money you should spend on a new marketing channel to know, within that 20% margin of error (accepting a measured 4x when the true performance is 5x), that it's performing at the level you expect—the level the underlying "coin" should have. It's the same concept as the coin: how many flips do we need to know it's actually a fair coin? Here, the question is how many dollars we have to spend on a marketing channel to know it's a "fair" marketing channel, assuming it actually is.
Here's the other thing: I've told this to a few people, and they all assume I'm talking about our specific strategies or something we've measured in digital marketing. It's not—this is purely theoretical. It has practical applications, but it's just the basic laws of statistics: what is the probability of something? Theoretically, it applies to every marketing channel. So if you want 95% confidence with that 20% margin of error, and you close deals in $20,000 increments—which is true for most of our clients—you would need to spend $240,000 on a marketing channel that does work to prove that it indeed works. (If your deals were all worth, say, $10 instead, your data set would be far less sparse, and you wouldn't need nearly as much money.) I don't know about you, but to me that number was shocking. We analyzed it a whole bunch of different ways and kept getting answers in that neighborhood.
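As a rough cross-check, the same question can be asked in closed form. This is my own back-of-the-envelope normal-approximation sketch, not the firm's three-billion-trial simulation, so it won't reproduce their $240,000 exactly—but it lands in the same ballpark.

```python
import math
from statistics import NormalDist

def budget_to_prove(true_roas=5.0, floor_roas=4.0, deal_value=20_000,
                    confidence=0.95):
    """Smallest spend such that a channel truly performing at true_roas
    measures at least floor_roas with the given probability (normal
    approximation to the binomial count of closed deals)."""
    p = true_roas / deal_value             # win probability per dollar spent
    z = NormalDist().inv_cdf(confidence)
    gap = p - floor_roas / deal_value      # margin the deal rate must clear
    return math.ceil((z / gap) ** 2 * p * (1 - p))

print(f"${budget_to_prove():,}")
```

Under these assumptions the required spend comes out in the high six figures of dollars spent—hundreds of thousands—which is why "do I have enough data yet?" is almost always answered "no" far later than intuition suggests.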
That's not to say this is a bulletproof analysis or it couldn't be done better or it couldn't mimic the real world better. But here's the thing: Statistics exist as a model around reality. Statistics help us understand reality in a better way because the gap between what you think you know and the reality of the world is the gap that, if you close it, you can accomplish a lot more things. Right? And I know that so many of you listening to this are entrepreneurs—has to be over 95% of you. Right? I'm an entrepreneur too. I know how it works. You don't always believe in rules. Sometimes you think you're the exception in everything. Right? Sometimes you walk outside and want to jump really high and just think that you can fly because you're delirious like that, but there's gravity. Right? This is no different. Right? Statistics—there's no evil person that invented statistics and told you this isn't true. Right? Just like we have the theory of gravity, and that tells us how it works. It’s probably not even called the theory of gravity. I'm not that strong in science, but you get the point. Just like we have these different scientific theories that explain physics, statistics explain the natural laws of math, numbers, quantities, and populations.
That is really insightful. Right? And that can be frustrating at first when you learn these things because you want to just go outside, you want to jump, and you want to fly. But if you learn how gravity works and you learn all the other physics, then you start to learn the kind of things that you need to build something like an airplane. Right? So instead of just wanting to make something work in a world where it shouldn't, you start to learn if it were to work, how would that have to happen?
Let me tell you how people go wrong with this over and over again. This is both in what you could call positive ways and negative ways, although I think the outcomes are generally negative. So number one is people get way too excited or bought into marketing campaigns based on their results. What you care about at the end of the day is the performance of the campaign more than you care about the results. Right? Because that's more predictive of the future. Yet we get really caught up on the result. If you look at it, you may have heard me compare marketing to finance and accounting. Right? So many marketers are basically accountants. They are the bean counters of revenue. Right? "I spent this, I got this." For a lot of people, if you ask them what marketing is, they will tell you that is what marketing is. You throw a bunch of stuff at a bunch of marketing channels, you measure how it performed, you find what was working well, and you double down. You find what wasn't working well, and you cut it. That's a very narrow view of marketing that is more similar to accounting. Right? We're trying to figure out what happened. You know what? That measures the result and then assumes performance based on the result.
The best marketers are like CFOs. What does a CFO do? Yes, they look at the accounting—that's an important piece of it. We need the accountability and the measurability. But they're not looking at it to say, "This is where we are, so the future will be exactly the same." They're trying to understand the model and predict things that haven't happened yet: modeling out what your diminishing returns might be, understanding how sample size affects the previous results you're looking at, all of those kinds of things. And people get this wrong constantly. You end up with situations where you go way too heavy on campaigns that aren't actually working, or, on the other side, cut campaigns that are. I believe this strongly because I've observed it across so many companies in this industry—wholesalers and flippers spending real money on marketing. When I ask them why they're making the decisions they're making, not just about the channels we manage for them but about their other channels too, I've come to believe that far more often than not, we're making decisions based on random variation alone. You might picture a marketing channel going up and down, up and down, in a systematic way—like flipping a coin and seeing heads, then tails, heads, then tails. But it doesn't happen like that. Over a big sample you should have roughly the same number of heads as tails (if you don't, that would be fairly unlikely, though it could happen), but the sequence is never a tidy heads, tails, heads, tails.
Here's what people do, and this is where we get tricked. Either you start out on the low end, never let your campaign get to the point where it's performing, and cut it early—I see this happen with new marketing channels all the time. You're a business with a lot of marketing channels that are working for you. You add a new one, spend a lot of money, don't see results right away, get scared, cut it, and go back to the old ones. That's a very common scenario. Or, number two: the results are good at first, you get way too excited, then the results turn bad and you get way too upset. We have clients like this—they get results right away, and the minute things take a turn for the worse from a results standpoint, they're done. Think of it like flipping your coin: you just got three heads in a row and you're super excited, and then you get two tails. The second that second tail lands, they say, "It used to work, and now it doesn't anymore." If you want heads, you could say the result was good and then it went bad—and it's really easy to get caught up in that, even though we're just flipping coins whose underlying performance hasn't changed at all.
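That "three heads, then two tails" pattern isn't evidence of anything. A quick exhaustive count (illustrative, not from the episode) shows how often a perfectly fair coin produces a "hot streak" that later appears to stop working:

```python
from itertools import product

def has_streak(flips, length=3):
    """Return True if the sequence contains a run of `length` heads (1s)."""
    run = 0
    for f in flips:
        run = run + 1 if f else 0
        if run >= length:
            return True
    return False

# Enumerate all 2**10 equally likely outcomes of 10 fair flips.
outcomes = list(product((0, 1), repeat=10))
share = sum(has_streak(o) for o in outcomes) / len(outcomes)
print(f"{share:.1%} of fair-coin runs contain a 3-heads hot streak")
```

Roughly half of all ten-flip sequences from a fair coin contain a streak of three or more heads—so "it worked great, then it stopped" is precisely what pure randomness looks like.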
Now, the tricky thing is that marketing channels actually do increase or decrease in performance over time. But that real performance shift is buried in the noise of results swinging wildly up and down. So that's the second thing that can happen: you see things go negative and you take a turn. And here's the problem that follows, which I see all the time with people self-managing their PPC. When you're dealing with data, if you're not really savvy and aware of this effect, whatever you're looking for will exist in the data. Someone managing their own PPC campaign sees it dip, looks at it, and says, "Well, I think my landing page could be better, so I'm going to get a different landing page," or, "I don't like those keywords, so I'm going to change them." Or, if you're working with an agency, you think, "This agency isn't performing; I feel like they're not spending enough time on the campaigns," or whatever the case is. Everybody makes up their own explanation that may or may not be grounded in the actual truth, and projects it onto their marketing performance—because we're dealing with things that are hard to explain, and as humans we feel better when we think we understand them. So the business owner changes agencies, or the self-managed advertiser changes the landing page, the keywords, the creative, or whatever they're doing.
The problem is that sometimes that change works and sometimes it doesn't—and the campaign might have recovered anyway without it. But here's the interesting part: if you just keep making those changes, you can never actually assess the true performance of your campaign. You never get a reliable measurement of performance, even though you're spending a lot of money. People make changes, stick with it for a while, make more changes, over and over—and those changes might genuinely improve the campaign, the landing page, or the lead quality, but they never get a clean measure of whether the channel itself was good or bad. They're constantly stuck in the weeds, and their overall performance data just becomes noisier and noisier.
To sum up: no analysis is perfect. We're never going to have perfection, but we need to measure how our marketing works in order to learn how to get better. And a huge mistake is making changes to your marketing channels too early. People are way too quick to react to what the numbers tell them, because those numbers are in the noise just as much as your results are up and down. If you look at your results knowing you're dealing with a noisy data set, you'll get to a better outcome. Understand that your marketing is going to be noisy; the more you think about it in terms of probability—how much data do you have, and how reliable is it?—the more you can improve your marketing results. So many people ignore that and never look at their campaigns that way. When you start to look at it this way, you make better decisions, take a longer-term approach, and can step back and ask, "What am I actually trying to accomplish here?"