{"text":[[{"start":6.48,"text":"When I applied to Cambridge university, my first interview was with a professor who invited me to sit, pressed his fingertips together, looked at me searchingly, then said: “Is the nation-state in decline?”"}],[{"start":21.92,"text":"My heart fell. Not only did I not know the answer, I didn’t even really understand the question. But I had heard — possibly from my state school, or else from the university — that these interviews were “not testing what you know, but how you think”. So I took a breath and said: “I’m not sure what a nation-state is.”"}],[{"start":48.02,"text":"It worked out well. The professor said that was fine, asked me a few simple questions to help me figure out the term, then a few more as we worked our way through the original question. In the end, I was offered a place."}],[{"start":64.56,"text":"It was a formative experience for me. Even so, I have found it harder and harder to say the words “I don’t know” as the years have gone on, and I don’t think I’m alone. "}],[{"start":77.15,"text":"In many ways, this is understandable. The more “expert” you become, the more you think you ought to know, and the more you fear your credibility will suffer if you ever admit otherwise."}],[{"start":91.60000000000001,"text":"But the aversion seems to have spread to all sorts of places, including settings where it should be perfectly fine to say you don’t know. A student at a prestigious US business school recently told me of a fellow student who sat in front of her during lectures. When the professor asked a question, he would type it surreptitiously into ChatGPT, then read out the answer as though it was his own."}],[{"start":120.70000000000002,"text":"What is going on? One possibility is the lack of role models. Confidence is rewarded in public life. It is rare to hear the phrase “I don’t know” in TV interviews. Little wonder: many media training courses teach people the “ABC” technique to avoid having to say those words when faced with a question to which they do not know the answer (or do not want to give it). Acknowledge the question. Bridge to safer ground (“What’s really important to know is . . . ”). Communicate the message you have already planned to convey."}],[{"start":160.20000000000002,"text":"New technology has also made it easier to bluff. First search engines, and now large language models like ChatGPT, have made it more simple than ever to avoid the discomfort of admitting what you don’t know."}],[{"start":177.73000000000002,"text":"And yet, one of the great ironies of LLMs is that they have the exact same tendency we do. When they do not know the answer to a question, for example because they can’t access a vital file, they often make something up rather than say “I don’t know”. When OpenAI put its o3 model through one particular test, for example, the company found that it “gave confident answers about non-existent images 86.7 per cent of the time.”"}],[{"start":213.13000000000002,"text":"There are costs to not admitting what you don’t know. For a start, you miss the opportunity to learn. Most experts are remarkably generous to those who ask curious questions. Some of my favourite journalism projects over the years have begun with an interesting question to which I didn’t know the answer when I began."}],[{"start":236.56000000000003,"text":"There is also the risk that you undermine your credibility even more when you bluff. 
We have probably all had an experience like this at some point: an impressive polymath pundit or publication ventures into your own area of expertise, and you realise with a shock that they don’t know what they’re talking about. After that, you begin to doubt them on every topic."}],[{"start":265.11,"text":"The AI industry is particularly alert to this risk. The technology companies know their tools will be of limited use in sectors like law and medicine if they continue to give confident-but-wrong answers some of the time. Efforts are under way to teach LLMs how to say “I don’t know”, or to at least express their level of confidence for a given answer. OpenAI says it has trained its new model, GPT-5, to “fail gracefully when posed with tasks that it cannot solve”."}],[{"start":304.01,"text":"But this is not an easy problem to fully fix. One problem is that LLMs do not have a concept of “truth”. Another is that they have been trained by humans steeped in the culture we have just discussed. “In order to achieve a high reward during training, reasoning models may learn to lie about successfully completing a task or be overly confident about an uncertain answer,” as OpenAI has put it."}],[{"start":336.56,"text":"In other words, while these are not necessarily the tools we need, they might just be the tools we deserve."}],[{"start":353.64,"text":""}]],"url":"https://audio.ftmailbox.cn/album/a_1755443067_7650.mp3"}