Taleb’s Black Swan – Part 1: Knowledge is (probably) toxic

Knowledge is (probably) toxic.

I believe the title of this post neatly sums up the main insight of N. N. Taleb’s book The Black Swan. Taleb is a glib prophet seeking to warn us about the dangers of information: not the danger of too much information, nor of the wrong type of information, but the danger inherent in the very act of acquiring information.

Just like interacting with radioactive material without proper protective gear, being around “knowledge” without having safeguards in place will leave us deformed and more vulnerable to the world.

Taleb’s genealogy of knowledge

Taleb’s insight that knowledge is toxic relies on an underlying evolutionary biological story about the origin of the brain’s function. While he hints at this story in places, I believe that he leaves it underdeveloped, and thus I will seek to creatively flesh it out in this post.

This account asks: why did the function of acquiring knowledge emerge at all in organisms? Even if we don’t understand the mechanisms that brought about the brain’s functions of knowledge acquisition, a guiding purpose for developing these functions is immediately evident: survival.

In order to survive, organisms need to acquire information about their environment, process that information, and adapt accordingly. Even very simple single-celled organisms, such as paramecia, can sense that they are being touched and recoil in response (a good strategy for self-preservation). They take in a constant stream of data through the numerous cilia covering their outer membrane.

This data requires an interpretive paradigm in order to be meaningful; that is, the organism needs some way of valuing a phenomenon as good or bad. A piece of data must be judged either conducive or detrimental to survival before the organism can react properly, and that judgment presupposes a system of valuation.

At the single-celled level, this paradigm remains simple and instinctual, though no less necessary. But as both prey and predators became more complex, they needed more sophisticated apparatuses for acquiring and evaluating data about their environment.

Consider the example of a monkey suddenly becoming aware of a nearby rustling bush. The monkey can see the bush rustling, it can hear the bush rustling, and it can probably smell whatever agent is responsible for the rustling. That agent might be a fellow monkey, and thus no threat at all. But if not, the monkey needs to ascertain that at lightning speed and react accordingly if it has any hope of survival. The more sophisticated the monkey’s sensory apparatus, the more information it has at its disposal for making that judgment.

This appears to be both a blessing and a curse. On the one hand, the monkey has more information to work with. If it sees and hears the bush rustling but smells one of its fellow monkeys, it can instantly judge that the phenomenon is not a threat. It has successfully avoided wasting energy running from a falsely perceived threat.

On the other hand, the monkey now has more information to synthesize. Synthesizing information takes time, but it can’t take too much time, or the monkey becomes tiger food. It’s a delicate balancing act. If the monkey wrongly concludes that the rustling bush hides a friendly monkey when it actually conceals a tiger, it’s done for. If it deliberates too long in pursuit of the most accurate conclusion, it’s also done for. So the monkey’s brain has to make the right call while spending the least possible time and energy.

This is where probability enters the scene. In order to make the right call in the least amount of time, the brain takes a shortcut. It employs an interpretive paradigm that allows it to make the right choice more often than not. It bakes certain assumptions into its synthesis process, and while these assumptions aren’t 100% accurate 100% of the time, they are accurate enough, often enough, that the net benefit is positive.
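To make that “net benefit” claim concrete, here is a minimal sketch in Python. Every number in it is an assumption invented for illustration; none of it comes from Taleb.

```python
# Toy expected-survival comparison: a fast, imperfect heuristic versus
# slow, careful deliberation. All numbers are illustrative assumptions.

P_TIGER = 0.1  # assumed chance the rustling bush actually hides a tiger

# Fast heuristic: right 90% of the time, and always fast enough to flee.
heuristic_accuracy = 0.90
p_survive_heuristic = (1 - P_TIGER) + P_TIGER * heuristic_accuracy

# Careful deliberation: right 99% of the time, but when a tiger really
# is there, the delay means escape succeeds only half the time.
deliberate_accuracy = 0.99
p_escape_after_delay = 0.50
p_survive_deliberate = (1 - P_TIGER) + P_TIGER * deliberate_accuracy * p_escape_after_delay

print(f"fast heuristic:    P(survive) = {p_survive_heuristic:.3f}")   # ~0.99
print(f"slow deliberation: P(survive) = {p_survive_deliberate:.3f}")  # ~0.95
```

Under these assumptions the cruder rule wins: being roughly right quickly beats being precisely right slowly.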

Notice, then, that what we call “knowledge” developed in order to achieve a certain goal within a certain set of parameters. The monkey’s brain had to make the right call to survive, but it also had a number of other conditions to satisfy. The monkey needed to make its judgment quickly; otherwise it would be eaten while pondering the best decision. Thinking things through thoroughly is time-consuming and inefficient. Further, the monkey could only support a brain with so much processing power, because it could only gather and consume so many nutrients to fuel that organ. In short, the development of knowledge was constrained by the need to balance (1) the energy requirements of the brain, (2) the need to make accurate judgment calls in identifying and reacting to life-or-death situations, and (3) the need for sufficient processing power to make those judgments within a short time frame.

Modern Maladaptation

Taleb’s primary insight turns on the simplicity of the situations in which our ancestors first developed and employed knowledge. He points out that in these situations, no single variable could cause the utter breakdown of the organism’s interpretive paradigm.

To illustrate this, Taleb uses the example of estimating the total weight of a crowd of 100 people. No single person in that group could make up a large enough share of the total for your estimate to be wildly off. In fact, you could go into the problem already possessing a fairly accurate range of possible guesses. Even an extreme outlier like the heaviest man on record (roughly 1,400 lbs) would compose less than 10% of the total weight of a crowd of otherwise average adults.
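The arithmetic is easy to check. In this sketch I assume an average adult weight of 180 lbs; the 1,400 lb figure is the one cited above.

```python
# Share of the crowd's total weight contributed by a single extreme outlier.
avg_weight = 180        # lbs per average adult -- my assumption
outlier_weight = 1_400  # lbs, the record figure cited above

total = 99 * avg_weight + outlier_weight
print(f"outlier's share: {outlier_weight / total:.1%}")  # ~7.3%
```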

This example illustrates that the situations in which our knowledge functions developed were situations in which the most disastrous variable (1) fell within a defined and predictable range of possibilities and (2) was never bad enough to truly confound us. In short, we had a good idea of what the problem would likely be, and we would never have too much trouble knowing how to respond. This meant that the monkey brain could employ crude probabilistic paradigms with a high degree of effectiveness.

Ultimately, knowledge needs to optimize for identifying the most disastrous variable (the one that would cause the most dramatic change in the situation under evaluation) and then reasoning from there.

But what if the organism finds itself in a situation where the most disastrous variable doesn’t fall within a predictable range? Further, what if the effect that variable can have is amplified by the nature of the situation? The organism will find that its probabilistic paradigms are maladapted to such systems of interaction. It will find itself waltzing into minefields it thought were just fields.

This is the situation in which Taleb contends modern humans now find themselves.

Imagine again the crowd of 100 people, but this time we’re measuring the crowd’s collective net worth (again, I am employing Taleb’s example). In this situation, it is entirely possible for a single individual to compose 99% of the crowd’s total net worth. If Bill Gates were standing in a crowd of 99 Chilean peasants, his net worth would account for essentially 100% of the crowd’s total.
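Rerunning the same share calculation with wealth instead of weight makes the contrast stark. Both dollar figures below are rough assumptions for illustration.

```python
# Share of the crowd's total net worth held by a single extreme outlier.
gates_net_worth = 100e9      # dollars -- rough, assumed figure
peasant_net_worth = 10_000   # dollars per person -- assumed

total = 99 * peasant_net_worth + gates_net_worth
print(f"Gates's share: {gates_net_worth / total:.4%}")  # ~99.999%
```

Under these assumptions, one missing data point (is Gates in the crowd or not?) swings the average from roughly $10,000 per head to roughly a billion dollars per head.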

Taleb wants us to see (1) that our world is increasingly structured by systems and networks that resemble Bill Gates standing in a crowd of Chilean peasants, and (2) that our knowledge functions are maladapted for reasoning and making judgment calls in such situations.

In the second situation, not knowing that Bill Gates was in the crowd would be disastrous for your prediction about the distribution of wealth. In the first, not knowing that the heaviest man in the world was present would throw off your estimate of each individual’s share of the total weight only slightly. The first situation is structured such that missing information does not significantly alter how you would act; in the second, a single piece of missing information changes everything.

Taleb calls these paradigm-shattering variables “black swans.” A black swan is a single variable that can disprove your entire hypothesis. For instance, many people long believed that all swans were white, yet all it took was a documented sighting of a single black swan to undo this long-held theory. The asymmetry is palpable: a paradigm requires a mountain of data points to be convincing, but only one to be disproven. No paradigm can be definitively proven, but a paradigm can certainly be definitively disproven.
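The asymmetry can even be stated as a tiny program: no number of confirming observations proves the universal claim, but a single counterexample refutes it.

```python
# "All swans are white" survives a million confirmations
# and dies from a single counterexample.
observations = ["white"] * 1_000_000 + ["black"]
print(all(s == "white" for s in observations))  # False
```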

Coda

Interacting with Taleb’s book has been exceedingly fruitful for me, so I intend to post further investigations into the ideas that have sprung from my reading of The Black Swan. In my next post, I will examine the role that feedback loops play in confounding human attempts to understand many of the important modern systems that structure our lives.
