Taggart Tufte

Book reviews on AI safety, philosophy of science, and technical non-fiction, as well as a few of my favorites.

Open Socrates

Agnes Callard | Finished April 2, 2026 · Reviewed April 14, 2026 | ★★★★☆
philosophy-of-science · epistemology · practical-philosophy

Open Socrates took me almost 20 days to finish. For context, I read the first five Stormlight Archive books, nearly 200 hours of audio, in 17 days. Twenty days for a single book means I was struggling with it. I am glad I did not stop. It is one of those books I did not fully buy into until I had finished it, seen the big picture, and talked it through with someone else who had also read it. Looking back, I really enjoyed this book; it is an exception to my usual rule that the books I drag through are the ones I did not like.

The first part, on the Tolstoy problem and untimely questions, hit on something I had been wrestling with during my fall semester. I was trying to figure out what I wanted to do with my life and what is actually meaningful to me. I could not keep living 15 minutes at a time when major life decisions were coming up soon: what am I going to do after college, do I really care about trying to get rich, and what does the future look like if being smart doesn't mean anything? I kept avoiding these questions, or giving answers I did not really believe, because they were uncomfortable and touched things I had built my identity on. These are what Callard calls untimely questions: questions that are uncomfortable precisely because they are structural to how you are currently living. Socrates was so welcoming, so open-minded, and so eager to be convinced he was wrong that confronting untimely questions with him became possible. That combination of real openness and real commitment to being proven wrong is something I find genuinely rare.

The thinking-in-pairs argument that Callard makes also really resonated with me. Two people jointly examine a problem using asymmetric roles: one commits to a claim sincerely, and the other asks probing questions to see how it holds up. The sincerity requirement is an integral part of the process: if you are not committed to what you are claiming, being refuted proves nothing. This is challenging in practice if both parties are not committed to thoroughly examining the problem and willing to have their minds changed. Conversations like this are difficult if you are in the very human mindset where being wrong is something to be ashamed of, or where a refutation of your opinions counts as a personal slight. Thinking is not something you can do in isolation; having your blind spots challenged is where the real thinking occurs.

The issue is finding someone you can have a conversation like this with. To me it feels more like idealistic armchair philosophy than something universally applicable: both people have to be willing to engage and willing to "lose." The barrier to entry is high, and Callard's model implicitly requires an interlocutor already oriented toward genuine inquiry, which means it tends to work on exactly the people who need it least as a corrective. But there are two separate cases here, the personal and the societal, and I don't think that critique undermines the personal one. Personally, finding even two or three people willing to genuinely engage is worth it. I don't have these conversations to fix people's opinions to match my own in some selfless way. I have them because the quality of thinking that comes out of them is something I cannot replicate alone, and I am thankful that I have people in my life who can think with me this way.

The field of alignment is full of untimely questions: What do we want SAI to value? When do we say computer processes are sentient? Is what we are currently developing actually safe? No expertise can answer these questions, and there is no good time to ask them, since we are already acting on the answers implicitly. We steamroll ahead building systems to optimize objectives with no concrete end goal. This is the one domain where the selfish case and the collective case collapse into each other. I do not want to die, and I do not want the people I care about to die; the people building these systems are implicitly living out answers to untimely questions without examining them, which makes it everyone's problem, including mine. Pretending to know all the answers and asserting your position confidently is not productive here. A thorough examination of these goals, thinking in pairs, defining the terms, and following their implications through a good-faith conversation, is not an abstract norm. It is self-interest at civilizational scale.