The Art and Science of Decision Making

The way our brains work, our cognitive biases and inherent irrationality, and the quality of our decisions, forecasts and predictions have become popular topics lately. There is a growing and fascinating literature on these subjects. I recently read four of these books and presented some insights to the team. Here it is in blog format, with more detail, for our followers! It's rather long (~2,000 words), so make sure you have a good ten minutes before diving in.

A note on who this is relevant for: at FindHotel, prediction usually means predicting the "visitor value" in various situations, and forecasting means forecasting our revenues & profits for the coming months: areas purely in the domain of our data analysts & scientists. But is prediction really that limited? No! It’s relevant for every decision we make. Examples are endless:

  • How long will this project take?
  • How should we prioritize our tasks?
  • Which technology stack should we use for this project?
  • Will this candidate add value to our team?

Planning, hiring, technology: it’s everywhere. It’s also very relevant for personal decisions on finance, relationships, career, etc. So this is relevant for everybody!

A quick overview of the books:
{% responsive_image path: images/image2016-3-621_59_39.png %}

"The Black Swan" by Nassim Nicholas Taleb (2007):

Taleb, a former stock trader turned scholar of uncertainty, coined the term Black Swan for rare & unpredictable events with huge impact. According to him, these events determine the future, while the little things we try to predict are inconsequential in the long run. We should not try to predict the probability of Black Swans but be prepared for their impact, to limit the damage or even benefit from them. The advice is not to take big risks or make big predictions, but to keep most of your resources safe and spread a small share over many high-risk, high-return bets: the Barbell strategy. His conclusion is that we know much less than we think and even "experts" cannot predict the future, though everything is easy to justify in hindsight. These days he is busy exposing the misuse of statistics in research.

"Thinking Fast and Slow" by Daniel Kahneman (2011):

Our brain has two modes of thought: System 1, which is fast, instinctive and emotional, and System 2, which is slow, deliberate and rational. System 1 is necessary for the thousands of quick or even subconscious decisions we make daily, but it is also prone to cognitive biases & mistakes. So while humans are not as good at making decisions as we might think, we can train ourselves to recognize and avoid the pitfalls and fallacies of System 1. An important part of the book describes "Prospect theory", which improves on Expected Utility Theory by modeling how people actually weigh gains, losses and probabilities. This work earned Kahneman, a psychologist, the Nobel Memorial Prize in Economics.

"The Signal and the Noise: Why So Many Predictions Fail — but Some Don’t" by Nate Silver (2012):

Silver, a statistician who moved from consulting to baseball analytics and then to political prediction, has recently become something of a pop star thanks to his success predicting US elections. According to him, most predictors out there are either biased or using flawed statistics. Combining different and unique data sources (including others’ predictions) is likely to give you a lead over them. And in the bigger picture, prediction approached scientifically can improve our understanding and speed up learning in many areas of the social and natural sciences. Like many others, he argues against frequentist statistics because it oversimplifies and commonly lends itself to misuse or misinterpretation. He advocates the probabilistic approach of Bayesian statistics (which we have also recently gotten into at FindHotel). In such an approach, good predictions require a good understanding of the domain and incrementally updating that understanding with new data points, as the small sketch below illustrates. Also, making money from poker is not as easy as it used to be.
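
A minimal sketch of a single Bayesian update, in Python; the scenario and all the probabilities are invented for illustration:

```python
# Bayes' rule: update a prior belief with one new observation.
# Hypothetical example: how likely is a destination to be "high-value",
# before and after we observe a booking from it?

prior = 0.30                  # P(high-value) before seeing any data
p_booking_if_high = 0.10      # P(booking | high-value)
p_booking_if_not = 0.02       # P(booking | not high-value)

# Total probability of observing a booking under either hypothesis.
evidence = prior * p_booking_if_high + (1 - prior) * p_booking_if_not
posterior = prior * p_booking_if_high / evidence

print(f"{prior:.0%} -> {posterior:.0%} after one booking")  # 30% -> 68%
```

Each new data point nudges the belief further, rather than replacing it wholesale.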

"Superforecasting: The Art and Science of Prediction" by Philip E. Tetlock (2015):

“Experts are roughly as accurate as a dart-throwing chimpanzee”: this now-famous quip made Tetlock, a political scientist, rather (in)famous. He is also known for co-founding the Good Judgment Project, sponsored by the US intelligence community to crowd-source and benchmark the predictive skills of regular people against those of the experts in politics & economics. The outcome: the top lay forecasters predicted about 30% better than intelligence experts with access to classified information. The book goes over the pitfalls of prediction and the traits of these “superforecasters”.

Takeaways:

Summarizing the main takeaways, mainly the cognitive biases from Kahneman and the traits of good forecasters from Tetlock:

1. Break a problem into parts

  • Answer multiple easier questions instead of answering one hard one. It will make you more accurate. (See the Fermi problem for a numerical approach, and the sketch after this list.)
  • Example: Hire by criteria
    • Instead of trying to evaluate whether a candidate is good overall, evaluate them on 4-5 quantifiable criteria.
    • Use targeted questions that make it possible to grade each criterion objectively.
    • The result will be a more objective and hopefully better decision.
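
To make the Fermi-style decomposition concrete, here is a minimal Python sketch; the question and every number in it are made up for illustration:

```python
# Fermi decomposition: replace one hard estimate with several easier ones.
# Hypothetical question: "How many support tickets will we get next month?"

visitors_per_month = 500_000   # easy to read off analytics
share_who_book = 0.02          # bookings per visitor
tickets_per_booking = 0.15     # tickets per booking
tickets_other = 300            # tickets unrelated to bookings

bookings = visitors_per_month * share_who_book
tickets = bookings * tickets_per_booking + tickets_other

print(f"~{tickets:,.0f} tickets expected")  # ~1,800
```

Each sub-estimate is far easier to sanity-check and correct than the single big guess.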

2. Take the Outside view

  • Don’t only look inside your company / project / question
  • First look at the outside world, what does that tell you about your question?
  • Then adjust that with internal information
  • Example 1:
    • “Steve is very shy and withdrawn, invariably helpful but with very little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail. Is Steve more likely to be a librarian or a farmer?”
    • Librarian. Easy, right? Actually, since there are significantly more farmers than librarians (especially among males), Steve is much more likely to be a farmer! It is very easy to get lost in the details of a problem and ignore the bigger picture.
  • Example 2: Project planning, estimating how long a project will take
    • "Planning fallacy": People tend to be very optimistic when breaking a project down into parts to predict how long it would take
    • Use "Reference class forecasting" instead. First find out how long similar (you can stretch this) projects took inside and outside your organization (as well as their outcome and resources at their disposal). Taking their average is a much better place start your prediction. Afterwards you can adjust this (up or down) based on your detailed planning or due to special circumstances regarding your organization.
    • Also, always ask “what could go wrong” before, and do postmortems afterwards.
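
A minimal sketch of reference class forecasting; the project durations and the adjustment are hypothetical:

```python
# Reference class forecasting: start from the outside view, then adjust.

# Step 1: outside view. Durations (in weeks) of similar past projects,
# inside and outside the organization. Hypothetical numbers.
reference_class = [9, 14, 11, 20, 12, 16]
outside_view = sum(reference_class) / len(reference_class)  # ~13.7 weeks

# Step 2: inside view. Adjust for what genuinely makes this project
# different (here, an assumed smaller scope than the typical project).
adjustment = -2

forecast = outside_view + adjustment
print(f"Outside view: {outside_view:.1f} weeks; adjusted: ~{forecast:.1f} weeks")
```

Note the order: the average of the reference class comes first, and the detailed inside knowledge only adjusts it.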

3. Update, Measure and Calibrate

  • Update forecasts & decisions as you get new information. Changing does not mean you were wrong, it means you are learning.
  • Measure them and evaluate their accuracy. Work on improvements based on that feedback loop (see the scoring sketch after this list).
  • If you don’t measure the accuracy of your forecasts, you will not know whether you are any good and, even worse, you won’t improve. This is the sad situation in most of politics, economics and science.
  • Examples from FindHotel:
    • We frequently update our forecasts for monthly revenue & profits as we get more data but we do not store these forecasts and measure their accuracy in a systematic way. Room for improvement!
    • It’s not easy to predict how long development projects take and we are no exception to this. Our Front-End team, with their PO, made great improvements though. By recording the forecasted time required for each sprint and measuring the outcome, they were able to significantly improve their accuracy over time. Now their sprint planning accuracy is near perfect!
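
One standard way to score probability forecasts, and the one used in the Good Judgment Project, is the Brier score. A minimal sketch with made-up forecasts:

```python
# Brier score: mean squared error between predicted probabilities and
# actual 0/1 outcomes. Lower is better; 0.0 is a perfect forecast, and
# always answering "50%" scores 0.25.

def brier_score(forecasts, outcomes):
    """Average of (probability - outcome)^2 over all forecasts."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: probabilities we gave to "we hit this month's
# revenue target", and whether we actually did (1) or not (0).
forecasts = [0.8, 0.6, 0.9, 0.3]
outcomes = [1, 0, 1, 0]

print(brier_score(forecasts, outcomes))  # 0.125
```

Tracking a score like this over time is what turns forecasting into a feedback loop.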

4. Avoid the Anchor Effect

  • The first number you see or hear has too much impact on your decision. It becomes an "anchor".
  • It’s not easy to ignore the anchor, but with discipline it’s possible.
  • Examples:
    • Negotiation: The asking price/rent for a house is a very strong anchor. Even if the asking price is not realistic and does not fit your valuation, your offer is still likely to be based more on the asking price than on your own valuation. In such a situation you are likely to just adjust the asking price down rather than offering your own valuation, and the difference can be huge. The trick is to be the one setting the anchor, or to refuse to negotiate in the presence of an unreasonable anchor.
    • Discounts: Some cool item is discounted from $500 to $300? Great, go buy it! But you might be better off first evaluating whether it’s really worth $300, let alone $500!
    • Question: Is the population of Sri Lanka more or less than 70M people? How much is it? Take a guess before Googling.
    • The anchoring effect is observed even with numbers that have nothing to do with the issue at hand. From Kahneman’s research: “participants observed a roulette wheel that was predetermined to stop on either 10 or 65. Participants were then asked to guess the percentage of the United Nations that were African nations. Participants whose wheel stopped on 10 guessed lower values (25% on average) than participants whose wheel stopped at 65 (45% on average)”. Incredible!
  • The anchoring effect isn’t only about numbers. The same can be said for ideas, especially if they come from a person of authority.

5. Regression to the mean

  • An extreme measurement is likely to be followed by one closer to the average. Things average out. It’s important to expect this and not be surprised (the simulation after this list shows the effect):
    • Children of very tall parents are more likely to be shorter than their parents.
    • Students with the top grades in a test are more likely to have a lower average score in the next test.
    • If you’ve had the best pizza of your life in a restaurant, the next time you go there you are likely to be disappointed relative to your first visit.
    • And from our business: our top destinations from February are likely to perform significantly worse in March.
  • This usually bites when you make decisions on a non-random sample and/or without enough data, which leads us to the next point.
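
A quick simulation makes regression to the mean visible. It only assumes that an outcome is part stable "skill" and part luck; the numbers and group sizes are arbitrary:

```python
# Regression to the mean: select the top performers in one period and
# watch the same group score closer to average in the next period.
import random

random.seed(42)

skill = [random.gauss(0, 1) for _ in range(10_000)]      # stable part
february = [s + random.gauss(0, 1) for s in skill]       # skill + luck
march = [s + random.gauss(0, 1) for s in skill]          # same skill, new luck

# February's top 1%, then the same performers' March scores.
top = sorted(range(len(february)), key=lambda i: february[i], reverse=True)[:100]
feb_avg = sum(february[i] for i in top) / len(top)
mar_avg = sum(march[i] for i in top) / len(top)

# The March average for this group comes out roughly half of February's:
# the luck that lifted them into the top 1% does not repeat.
print(f"Feb top 1%: {feb_avg:.2f}; same group in March: {mar_avg:.2f}")
```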

6. Not enough data / Statistical Significance

  • This is the single biggest pitfall in data analysis, and the easiest to fall into. Making a decision based on a small data sample will lead to wrong decisions and waste your time.
  • It’s better to make no decision and wait rather than continuously chase changing signals (more likely noise), which costs a lot of effort and likely leads to more bad decisions than good ones. Regression to the mean will come into play.
  • "How much is enough data?" is not an easy question to answer. We face this problem a lot since we are marketing hundreds of thousands of destinations, a very sparse data set. Probably the best approach is to use a combination of statistical methods, the historical behavior of the data and your domain experience to decide. And err on the side of caution. Even better, a simple model will beat the expert.

7. A simple model will beat the expert opinion

  • No matter how smart or experienced a decision maker, analyst or recruiter you are, you should depend less on your personal (and likely biased) judgment and trust simple models where possible.
  • If you are lucky like us and have smart data scientists who can build complex machine learning models to make decisions, great! If not, that’s not a problem. A simple model can be really simple, such as “Only move to the next hiring stage candidates that scored at least an average of 7 and no score below 5 in any criteria” (sketched below). It makes your life easier!
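
Expressed as code, that hiring rule really is a model; the criteria names below are hypothetical:

```python
# The simple hiring rule from the text: average score >= 7 and no
# individual criterion below 5.

def advance_candidate(scores):
    """Return True if the candidate should move to the next stage."""
    values = list(scores.values())
    return sum(values) / len(values) >= 7 and min(values) >= 5

# Hypothetical criteria and scores for one candidate.
candidate = {"technical": 8, "communication": 7, "ownership": 6, "growth_mindset": 9}

print(advance_candidate(candidate))  # True: average 7.5, lowest score 6
```

The point is not sophistication but consistency: the rule applies the same standard to every candidate, which a tired interviewer does not.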

8. Other traits of good forecasters / decision makers

  • Considering alternative views & what could go wrong
  • Being "actively" open-minded
  • Being humble and self-critical
  • Not believing things out of faith or habit
  • Having grit
  • Having a growth mindset (one of our most important hiring criteria)

Still interested in more?

Thanks for reading all the way down here, I hope you found it as interesting as I did! There are many more books on similar subjects, including more from the same authors. Are there any you recommend? Please share in the comments!

Lee Clissett

Lee is a product and technology enthusiast from the UK. With more than 10 years’ experience in product leadership, he is obsessed with building a world-class product company.