What is a contingency trap? Why do marketers sometimes walk into it when split testing? And what does sociology have to do with that?
What’s contingency?
In 1997, Niklas Luhmann, a German sociologist and prominent advocate of systems theory, wrote a book explaining how contingency shapes our lives. In Luhmann’s view, the world is a black box. Societies are complex, and as humans we don’t always understand what’s going on.
The issue, though, is that we are wired to make sense of the events happening around us: we tend to apply explanatory patterns (“Erklärungsmuster”) to everything, and we constantly try to make rational decisions without having all the information. In order to function, we therefore have to reduce the complexity of reality.
Our brain even makes things up so that reality fits our perception. Following this line of reasoning, humans have historically attributed everything that couldn’t be explained to a higher power. Luhmann calls this a contingency formula (“Kontingenzformel”).
We even use those patterns to convince people around us that our reality is more valid than theirs.
The Marketing Contingency Trap
“Let’s launch the page like this and run an A/B test later.” Sound familiar? In many cases, it’s the voice of a person outsourcing a present problem to the future – in order to get alignment for their own idea. In this case, A/B testing is no longer just a method; it becomes an instrument of enforcement.
When split testing is exploited like this, it works as a contingency formula that promises to reveal the most authentic version of marketing reality. Split testing becomes the instrument that allows us to look into the black box and unravel the complexity of KPI-driven decisions.
So, what’s the problem with this approach – using the A/B test possibility in order to push through an idea?
It’s not a problem per se, but it can become one if:
- a split test is not really an option (300 daily visitors at 5% CVR → 51 days of testing)
- heuristics are ignored
- the person doesn’t really care about the split test, but merely wants to push an idea through (a sign that your communication culture might be broken)
- attribution is not feasible and possible spurious correlation is ignored
- there are too many constraints which might distort results
- you set up a test without a clear a priori hypothesis
- it is probable that the test will never be implemented
“We test everything!”
Sometimes, marketers will say things like “We test everything.” Nope, you don’t. I’ve never heard of someone testing the color #ff0000 for the body text of a corporate blog. Why is that? For the same reason we don’t test square tires on cars: in many cases, heuristics make testing unnecessary or even irrational – especially if you’re trying to make wise decisions while allocating limited resources.
But since data is the new God and A/B testing seems like magic to the misinformed observer, the statement above is marketing blasphemy.
As a matter of fact, I’m a big fan of split testing. I love data and I like to work in a data-driven way. But it’s important to know its limits, instead of walking into the contingency trap by instrumentalizing a method that definitely has a raison d’être – but which is neither the holy grail nor 42.
In my opinion, one reason split testing has become so popular is that it improves the marketer’s life and is easy to set up. On the one hand, it helps us make informed choices and be more KPI-driven. On the other hand, it has become a communication tool that can answer the question “which button is clicked more often” – and can be used to push through ideas.
The latter scenario can be a symptom of a broken communication culture. Don’t walk into that trap.