The Black Swan
This was the first book I have read by Nassim Taleb. I had not heard anything about him before reading it, and he seems to have lived quite an interesting life. From what I remember, he described his childhood as long stretches of intense boredom interrupted by the chaos of a civil war. He did a lot of reading as a kid, something I am a bit envious of since I did not. He went to college, got into derivatives trading, and developed tail-risk strategies and methods for simulating randomness. What resonated most with me wasn’t the quant trading or the “fuck-you” money, appealing as those are in their own way; it’s that he seems to actually synthesize information, draw conclusions, and work through his ideas rather than just assert them. That’s the mode worth operating in, and this book is a good example of it.
I thought it was very well written. I have heard the critique that it is a bit redundant, but I did not experience that, perhaps because I have not read any of his other books and finished this one in two days. The examples were particularly illustrative, showing how differently we reason when faced with lopsided statistics. One of the first things I found striking was how he defines black swans: they do not have to be disasters, only rare or unexpected, carrying a disproportionate impact (positive or negative), and appearing predictable in hindsight.
The immediate question upon finishing was how to meaningfully apply this. Where does the framework actually matter, and where is worrying about it just wasted effort? I recall Taleb addressing this but remember it being fuzzy; he has another book that goes deeper on it. He does mention that he is still fooled by randomness, perhaps more than most, but that he tries to limit his exposure so as not to blow up, and for things outside the Extremistan domain, to simply continue being human. I had difficulty at first discerning the line between Mediocristan and Extremistan, and still do, but my working heuristic is to assume Extremistan by default and then ask whether I am overexposed and whether it actually matters.
This book has been quite formative in how I have been thinking about alignment and some of the ML history I have been reading. The rise of LLMs could be classified as a black swan; it fits the definition almost perfectly. Rarity: nothing in the past pointed to what ChatGPT and other LLMs have become. Extreme impact: it fits both extremes. Retrospective predictability: this criterion has been invoked as an argument for pushing AI development so aggressively, because it is inevitable and we have to be first. I do not think the trajectory of OpenAI was in any way inevitable. The amount of capital put into OpenAI before it had any path to realizing a return was not obvious at all, and the particular combination of OpenAI’s founders was crucial to what the AI field is today.
Rarity is the hard criterion to quantify for superintelligent AI (SAI). Can something be classified as rare if it seems clearly within the realm of expectations? The third requirement, retrospective predictability, is almost trivially satisfied, but not in the usual way. If things go well, we will look back and explain exactly why abundance was inevitable. If things go badly, we will not be around to philosophize about it at all. To me it seems only a matter of time before we have to confront what Agnes Callard would call an untimely question: one with no deadline, no expert who can close it, and no good moment to ask it, only the fact that we are already living an answer to it. Whether or not SAI formally qualifies as a Black Swan, the practical lesson is the same: when the downside of being wrong is extinction, you do not get to rely on the average case. Taleb’s argument about not being overexposed was designed for financial tail risk, but the logic scales. Nick Bostrom makes a version of this case in “Astronomical Waste”: the expected cost of catastrophic failure, calculated against what we stand to lose, is so large that the asymmetry demands serious weight. You do not need to assign a high probability to catastrophe for it to dominate your decision-making. That is precisely the Black Swan point.
Overall I enjoyed the book thoroughly and would recommend it. It changed my perspective and gave me quite a lot to think about and explore.