The importance of understanding toilets and politics

tl;dr: Understand things before you have an opinion

In today’s world, there is much discussion about fake news, about political movements becoming more extreme and about a divided society. Unfortunately, I rarely hear anyone discuss why our society is diverging and what can be done to prevent this. So please bear with me while I introduce a psychological phenomenon that offers a promising tool for cooling off heated political disagreements and understanding their origin.


Picture a flush toilet and ask yourself: how well do you understand how this toilet works? Maybe rate it from 1 to 7? Are you above average, which would presumably be a 3 or 4? Now please stop reading and explain to yourself how that toilet you (hopefully) use every day works! Go through every step before reading on.

Did you explain where the water comes from, why there is water down there in the first place, how the toilet knows how much water to flush, how it refills the correct amount,…? Do you still think you understand it as well as you assessed a few seconds ago?

You may be under the illusion of explanatory depth (IOED for short), a term coined and examined by Leonid Rozenblit and Frank Keil in 2002: people feel they understand complex phenomena with far greater precision, coherence and depth than they really do.

One of the most important reasons for IOED is the confusion of higher and lower levels of analysis. Most complex systems are hierarchical in terms of explanations of their natures. In explaining a cell phone, one might describe the components such as a camera, buttons, loudspeakers and apps. If then asked what a camera is, you might start explaining flashes, apertures, lenses etc. The illusion of explanatory depth occurs when we gain a surface layer understanding and then stop asking any questions!

Another reason for IOED is the rarity of production: we rarely give explanations and therefore have little information on past successes and failures which would help us classify our knowledge. In contrast, we often tell narratives of events or retrieve facts; hence it is often easy to assess our average level of knowledge in these cases by inspection of past performance.


In case you are still reading (thanks, I guess), you may be starting to wonder why I am boring you with toilets. To be fair, you will (almost) never need to know how these things work: this is simply the division of cognitive labor, and ultimately how our society can function at a high level.

But of course, the IOED extends well beyond toilets, to how we think about scientific fields, mental illnesses, economic markets, politics and virtually anything we are capable of (mis)understanding. Not understanding how toilets work is one thing, not understanding the history of Jerusalem and all the involved parties and still having a strong opinion on how this should be handled – that is a very different thing. Today, the IOED is profoundly pervasive, given our access to infinite information which we consume in large quantities – however, most do this in a superficial manner. Most of us consume knowledge widely, but not deeply!

Fortunately, understanding the IOED allows us to combat political extremism. In 2013, Philip Fernbach and colleagues demonstrated that the IOED underlies people’s policy positions on issues like single-payer health care, a national flat tax, and a cap-and-trade system for carbon emissions. As in Rozenblit and Keil’s studies, Fernbach first asked people to rate how well they understood these issues, then asked them to explain how each issue works and subsequently re-rate their understanding. In addition, participants rated the extremity of their attitudes on these issues both before and after offering an explanation. Both self-reported understanding and attitude extremity dropped significantly after explaining the issue – people who strongly supported or opposed an issue became more moderate. These studies suggest that IOED awareness is a powerful tool for cooling off heated political disagreements.


In a time when income inequality, urban-rural separation and strong political polarization have fractured us over social and economic issues, recognizing our own (at best) modest understanding of these issues is a first step to bridging these divisions.

The next time you are having an intense debate about Trump’s politics, unconditional basic income or our educational system, take a step back and consider whether you are arguing from a position of real understanding or rather throwing superficial arguments at one another.

And as always, stay curious!


For deeper insights on this topic, I can highly recommend Dr. Fernbach’s book, “The Knowledge Illusion: Why We Never Think Alone“, Keil and Rozenblit’s original paper: “The misunderstood limits of folk science: an illusion of explanatory depth” and, of course, the Wikipedia page on flush toilets!

The Conjunction Fallacy

You think you’re rational, right? Given all the facts and enough time to decide, you can always come up with the correct solution?

Well, really sorry to break it to you, but you are far from being a rational Boolean agent! In fact, we humans are sh*t at statistical decisions.

In this, and maybe some upcoming posts, I will explain some cases where humans don’t act according to simple logic. Being aware of these flaws may come in handy! Obviously, the best approach is to test these fallacies on you, the reader.
So, assume the following description of Heiner is true. Given these facts, order the six options below by probability, starting with the most probable and ending with the least probable:

Heiner is 34 years old. He is intelligent, but unimaginative, compulsive, and generally lifeless. In school, he was strong in mathematics but weak in social studies and humanities.

No complaints about the description, please, this experiment was done in 1974. 

A:  Heiner is an accountant.
B:  Heiner is a physician who plays poker for a hobby.
C:  Heiner plays jazz for a hobby.
D:  Heiner is an architect.
E:  Heiner is an accountant who plays jazz for a hobby.
F:  Heiner climbs mountains for a hobby.

Take a moment to rank these six propositions by probability. Write them down so you can’t cheat.

In a very similar experiment conducted by Tversky and Kahneman in 1982, 92% of 94 undergraduates at a well-known American university gave an ordering with A > E > C – did you too? The ranking E > C was also given by 83% of 32 graduate students in the decision science program of the Stanford Business School, all of whom had taken advanced courses in probability and statistics.

There is a certain logical problem with saying that Heiner is more likely to be an accountant who plays jazz than he is to play jazz. The conjunction rule of probability theory states that, for all X and Y, P(X&Y) <= P(Y). That is, the probability that X and Y are simultaneously true is always less than or equal to the probability that Y is true. Violating this rule is called a conjunction fallacy.

Imagine a group of 100,000 people, all of whom fit Heiner’s description (except for the name, perhaps).  If you take the subset of all these persons who play jazz, and the subset of all these persons who play jazz and are accountants, the second subset will always be smaller because it is strictly contained within the first subset.
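The subset argument can be checked with a quick simulation. This is just a sketch; the base rates below (30% accountants, 5% jazz players) are invented purely for illustration:

```python
import random

random.seed(0)

# 100,000 people fitting Heiner's description. The base rates are
# made-up numbers, used only to illustrate the conjunction rule.
N = 100_000
people = [
    {
        "accountant": random.random() < 0.30,  # assumed 30% accountants
        "jazz": random.random() < 0.05,        # assumed 5% jazz players
    }
    for _ in range(N)
]

jazz = sum(p["jazz"] for p in people)
jazz_and_accountant = sum(p["jazz"] and p["accountant"] for p in people)

# The conjunction can never be more frequent than either conjunct alone:
# everyone who is a jazz-playing accountant is, in particular, a jazz player.
assert jazz_and_accountant <= jazz
print(jazz, jazz_and_accountant)
```

Whatever base rates you plug in, the assertion holds, because the second group is strictly contained in the first.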

Why would highly educated students with knowledge of statistics still fail this test? Why did you fail even though you knew it was a test? One explanation would be a misunderstanding of the statements – a problem with wording and framing. Maybe you understood “A: Heiner is an accountant” as “Heiner is an accountant but he doesn’t play jazz”.

It is then perfectly consistent to judge “E: accountant & plays jazz” more probable than “A: accountant & doesn’t play jazz”. Another problem is that many people mix up probability and plausibility – confusing “how good a story this makes” with “how likely it is given the evidence”. But we’ll get to that later; some more tests first.

Tversky and Kahneman (1983) played undergraduates at UBC and Stanford for real money:

Consider a regular six-sided die with four green faces and two red faces. The die will be rolled 20 times and the sequence of greens (G) and reds (R) will be recorded. You are asked to select one sequence, from a set of three, and you will win $25 if the sequence you chose appears on successive rolls of the die:

1. RGRRR
2. GRGRRR
3. GRRRRR


65% of the subjects chose sequence 2, which is the most representative of the die, since the die is mostly green and sequence 2 contains the greatest proportion of green rolls. However, sequence 1 dominates sequence 2: because sequence 1 is strictly contained in sequence 2, any run of rolls that contains sequence 2 necessarily contains sequence 1 as well.
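The dominance is easy to verify numerically. Using the sequences from the 1983 paper (RGRRR, GRGRRR, GRRRRR), and with four green faces, P(G) = 2/3 and P(R) = 1/3:

```python
from fractions import Fraction

# Per-roll probabilities for a die with 4 green and 2 red faces.
p = {"G": Fraction(2, 3), "R": Fraction(1, 3)}

def seq_prob(seq):
    """Probability that this exact sequence appears on given successive rolls."""
    prob = Fraction(1)
    for face in seq:
        prob *= p[face]
    return prob

print(seq_prob("RGRRR"))   # 2/243
print(seq_prob("GRGRRR"))  # 4/729
# 2/243 = 6/729 > 4/729: the shorter sequence is strictly more probable,
# and it also occurs inside every occurrence of the longer one.
assert seq_prob("RGRRR") > seq_prob("GRGRRR")
```

The extra green roll in sequence 2 makes it look more like the die, but every symbol you append multiplies the probability by a factor below one.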

This rules out the possible misunderstandings of “probability” stated above, since the goal was simply to win the $25.

Another experiment from Tversky and Kahneman (1983) was conducted at the Second International Congress on Forecasting in July of 1982. The experimental subjects were 115 professional analysts, employed by industry, universities, or research institutes. Two different experimental groups were respectively asked to rate the probability of two different statements, each group seeing only one statement:

“A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”

“A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”

Estimates of probability were low for both statements, but significantly lower for the first group (1%) than the second (4%). Since each experimental group saw only one statement, there is no possibility that the first group interpreted the first statement to mean “suspension but no invasion”.
This rules out the first explanation I stated!

So, adding more detail or extra assumptions can make an event seem more plausible, even though the event necessarily becomes less probable.
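Arithmetically, each added detail multiplies the joint probability by a factor of at most one. A minimal sketch, with invented numbers:

```python
# Made-up probabilities, purely to illustrate the arithmetic.
p_suspension = 0.01          # P(suspension of diplomatic relations)
p_invasion_given_that = 0.3  # P(invasion, given the suspension) -- assumed

# Adding the invasion detail can only shrink the joint probability:
# P(A and B) = P(A) * P(B | A), and P(B | A) <= 1.
p_joint = p_suspension * p_invasion_given_that
assert p_joint <= p_suspension
print(p_joint)
```

However plausible the invasion makes the story feel, the conjunction is never more probable than the bare suspension on its own.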

There are many other fallacies and psychological effects that show how limited and non-bayesian our brain actually is. Though this one really hit me hard!

Take a bit of time over the next days to figure out where this effect matters. When a politician tells you how he wants to achieve something, it seems more reachable, right? In politics and economics, predicting the future is important, but basically no one is good at it. By adding details and explanations, everything seems more probable. Especially the last test example with the USA is impressive: it almost seems like our brain is happy not to think about why something will happen and readily accepts the explanation of a Russian invasion of Poland, even though this makes the whole statement less probable!

The reason why this fallacy is growing in relevance is this new technology, the internet (Neuland).

Thanks to the vast amount of information (or misinformation) on the internet, we are all able to build stories, ideas and “facts” on the fly. Nowadays, we can all easily be misguided by confirmation bias, our natural tendency to search for information that confirms our beliefs and to ignore that which threatens our beliefs.

The problem is that this abundance of building blocks makes it very easy to string together stories supporting A, B, and C. The internet makes readily accessible vast numbers of marginally relevant or manifestly false details about almost everything and everybody. Because of this, confirmation bias and the conjunction fallacy are very easy traps to fall into. Untrue stories are believable not only because of our partisanship and our confirmation bias, but also because of the proliferation of information on the internet. Anyone can now pick and choose from the internet’s vast trove of “facts” to colorfully embellish a simple story and make it superficially plausible.

This in itself is a whole other topic I should maybe not squeeze into this post, but anyway, can’t erase it now…

There are many, many other situations in life the conjunction fallacy applies to. So the next time you are tempted to believe something or someone, trust mathematics not yourself!

And as always, stay curious!

Superintelligent Artificial Intelligence

“This is all speculation and solely an intellectual game!”, you might say. Well, then sit back and enjoy me wasting my time on irrelevant topics. But maybe, just maybe, I can convince you that we have to take this more seriously and consider what actions to take?

Tl;dr: We will all die!


Generally intelligent minds have been anticipated since the 1940s. But like some other technologies (looking at you, nuclear fusion!), the expected arrival date keeps moving further out as time goes by. What was “20 years away” in 1950 is “25 years away” now in 2016.

An agent is superintelligent when it is better at solving a wide range of “intellectual problems” than any human being. These problems could range from solving a Rubik’s Cube, to setting up the best three-year strategy for your company, to understanding how superconductivity at room temperature (around 290 K) could work.

Today an agent can only barely solve the first problem, and the other two are far beyond its scope. Still, it is impressive how far our intelligent computers have come. As John McCarthy once said: “As soon as it works, no one calls it AI anymore.”

In the 50s, experts were of the opinion that a machine beating a man at chess would surely be the ultimate triumph of AI – which it wasn’t. We have since beaten humans at several games: chess (Deep Blue, 1997), Jeopardy! (IBM’s Watson, 2011), and DeepMind’s AlphaGo beat a world-class Go player in March 2016!

Almost everyone would agree, however, that general intelligence as defined above goes beyond beating humans at a particular game. Donald Knuth once remarked that AI is good at doing the “thinking” (chess, maths, logic…), but fails at the things animals and humans do without thinking. I find this thought very intuitive: we can create things that mimic our “out loud” thinking, such as calculating or solving a simple puzzle, but we can’t reproduce anything more subtle.

Paths to superintelligence

Over the last decades, experts have thought of many different ways to achieve post-human intelligence. I will present the most common ideas briefly and then move on to an in-depth analysis of artificial computer intelligence from a seed AI.

Whole brain emulation: This idea is fairly straightforward. Take a brain, slice it into really thin pieces, scan them and then emulate an exact copy on a computer. Done correctly, we will have transferred a human brain to an emulated (not simulated!) brain. (Is that the same person, you might ask? Well, stop getting off topic!!!) This project is interesting in that we know – or at least think – it works. There is no “we first need to figure out how to figure out how to achieve that”. Whole brain emulation will take some advances in imaging and emulation and will cost a lot of money, but in the end it could provide a working brain in a computer that we can then change and improve.

Another, rather controversial plan is simple biological selection: figure out which genes lead to higher intelligence, create a society where only the most intelligent humans mate, improve their offspring by selective in vitro fertilization (designer babies), then wait and repeat. Ignoring the very serious moral issues, this method is simply very slow and doesn’t promise the accelerated IQ growth other methods do.

Simulated evolution: Again, this concept is easy to understand. Start with seed code simulated in an environment that challenges the agent to evolve by changing its own code. By letting every “generation” solve a wide range of problems, you can direct the evolutionary process towards a smarter agent.

Seed AI: Code a seed AI with built-in values and motivations, copying human learning processes; help it develop, teach it new techniques, then teach it to develop itself, and keep working on it until it has reached human-level intelligence – and you’ve got yourself an artificial intelligence teaching itself to become smarter.

So, let us suppose the agent reaches human or above-human-level intelligence, as experts propose will eventually happen. What effect would this have?

Intelligence Explosion

Now, the one thing a human-level AI can do that we can’t is self-improvement. It can change its code or add hardware in order to become better at solving the problems it wants to overcome. This self-improvement compounds into exponential growth. Think of a graph showing the world population from 0 A.D. to today: the last 150 years look like an explosion in comparison to the rest! The same would happen to a self-improving agent.
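As a toy model of how compounding self-improvement behaves – with purely invented numbers, not a prediction:

```python
# Toy model of recursive self-improvement: each cycle the agent applies
# its current capability to improving itself, so gains compound.
capability = 1.0        # arbitrary starting level (human-level = 1.0)
growth_per_cycle = 1.1  # assumed 10% improvement per cycle

levels = []
for cycle in range(50):
    capability *= growth_per_cycle
    levels.append(capability)

# After 10 cycles the agent is ~2.6x the starting level;
# after 50 cycles it is already ~117x - the "explosion" shape.
print(levels[9], levels[49])
```

The exact factor per cycle doesn’t matter much; any multiplicative gain produces the same hockey-stick curve, which is the whole point of the takeoff argument.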

I may remark at this point that it is important to talk about how fast such an explosion from below-human to above-human intelligence would occur. If it happens slowly – over years, maybe even decades – then we might have a controllable competition for the first human-level AI between nations and/or companies. A fast takeoff, on the other hand, seems less desirable because we would have little to no control of the situation. A fast takeoff, or even a medium one, would give one agent a decisive advantage over all others, possibly creating a singleton: a single global decision-making agency. This isn’t unprecedented – consider the nuclear bomb: until 1949 the USA held a monopoly and could have put all its efforts into stopping any other nation’s development of a nuclear bomb, creating a worldwide monopoly.

Cognitive Superpowers

We should not limit our thinking about what an AI can and can’t do, and we shouldn’t project today’s PCs onto the agent. We have an idea of what it means that a human with an IQ of 130 is considered smart and will more likely excel at academics than somebody with an IQ of 90. But when considering an agent with an IQ of 6000, we have no idea what that means!

Here are some superpowers to consider: intelligence amplification (making yourself smarter), strategizing (achieving goals and overcoming opponents), social manipulation, computer hacking, technology research and economic productivity, just to name a few. Having these abilities makes it possible to overcome everyone else – even humanity itself!

Doomsday for humanity

Here is a simple scenario:

Researchers make a seed AI, which becomes increasingly helpful at improving itself. This is called the pre-criticality phase. Next, we enter the recursive self-improvement phase, in which the seed AI is the main force improving itself and brings about an intelligence explosion; it perhaps develops all the superpowers it didn’t already have. In the covert preparation phase, the AI makes a robust long-term plan, pretends to be nice, and escapes from human control if need be. This may include breaking through potential barriers by getting access to the internet, persuading whole organizations, etc. Now the agent has full control and moves on to the overt implementation phase, where the AI goes ahead with its plan, perhaps killing the humans at the outset to remove opposition. Once we are on this track, there is no way to stop it.

The Motivation Problem

The whole story boils down to this question: can we control the agent so that its actions benefit us in the way we want? This may seem like an easy job, right? Just put it into the code.

Unfortunately, it is not easy at all. Even if we can hard-code the goal of a seed AI so that it can’t change it, this still doesn’t solve the problem. Let me give you some examples:

Final goal: make us happy. – The AI implants electrodes stimulating the pleasure center of every human being. This is an example of so-called perverse instantiation: the AI does what you ask, but what you ask turns out to be most satisfiable in unforeseen and destructive ways.

Other examples fall under the category of infrastructure profusion: in pursuit of some goal, an AI redirects most resources to infrastructure, at our expense.
Say the AI is supposed to solve the Riemann hypothesis. If it doesn’t find the solution right away, it might build up infrastructure and computing power – exploiting everything on Earth, then building von Neumann probes and mining all neighboring solar systems in an epic cosmic endeavor, and so on. Humans are the first thing such an agent would get rid of.

Sure, there are clever people coming up with clever ideas to solve the motivation problem – to control the AI and not end up destroying mankind by mistake. What these methods are and how they could help is well beyond this already very long blog post. But the core problem remains!

Closing remarks

If we create something superintelligent and we haven’t found a solution to the motivation problem, then humanity will most likely end. This is in many ways hard to grasp and sounds like fantasy, even to me. But the people who know the most about this topic think there is an 80% probability that we will have created AI by 2075 – that’s still in my lifetime!

What I am trying to say, I think, is that our society needs to think and talk more about this. Even though it seems far away, we cannot take it seriously enough!

And as always, stay curious!