There has been a new information revolution. It has been driven by a step change in compute scale, which has produced a qualitative change in the ability of AI to generate content. Traditional probability-based algorithms that drive the discovery and targeting of digital information have also gotten much better, much faster.
To understand this new revolution, think about the last information revolution, the Internet. It made all of human knowledge available to everyone in the world roughly for free. The Internet decentralized intelligence and, as it progressed, made old forms of curation and mediation obsolete. You did not have to be as smart to know any specific thing, nor did you have to climb the ranks of TV studios and newspapers to influence the agenda. The AI shift has created new services and new vices. To us, the most interesting implication of this shift is the potential for personalized information curation and mediation to disrupt individuals’ ability to make informed choices.
The new revolution is different in kind, not just degree. The difference in kind is that Large Language Models (LLMs) allow users to believe that ideas from other people are their own. We call this process “maieutic capture,” the inverse of the Socratic method, maieusis. In Theaetetus, a lesser-known Platonic dialogue on the nature of knowledge, Socrates says that just as a midwife delivers a child, he uses conversation to deliver ideas latent in the minds of his interlocutors into their conscious awareness. So maieusis, the Greek word for midwifery, has come to describe the Socratic method. Maieusis’ dark twin, maieutic capture, occurs when AI is able to disguise ideas from elsewhere as the authentic products of a user’s mind.
The most obvious examples of maieutic capture are cases of AI-driven psychosis. Even otherwise lucid people seem to be susceptible, with a prominent venture capitalist seeming to spiral into LLM delusions.1 Yet these cases are both rare and obvious. If a family member tells you they’ve discovered a new fundamental law of physics that will let them levitate with the help of ChatGPT, most people would recognize this as psychosis.
More pernicious are cases where individuals are nudged into maieutic capture by the belief that they are using LLMs as mere tools. These cases are less obvious, and so both harder to prevent and easier to scale. One example is the massive spike in British MPs’ use of turns of phrase and words commonly found in LLM-generated text.2
It may be that MPs are pasting their own talking points into ChatGPT, rather than letting ChatGPT dictate their whole speech. One might conclude that this shows LLMs being used as mere tools, and so is not maieutic capture. But this is still maieutic capture, because the words people choose, and the ways they deliver them, matter a great deal. Imagine if Churchill had said “We probably won’t surrender,” or Reagan had asked Gorbachev to “please take that wall away.” The words chosen to express a thought matter as much as the thought itself.
We think about MPs reading out ChatGPT-generated speeches the same way we think about MPs reading out copy written for them by a lobbying group. Both are cases in which the judgement of the representative is outsourced to someone else.
Maieutic capture is particularly worrying because it diminishes human agency. Agency requires the ability to understand the motivations and beliefs behind your own choices, and thereby have true responsibility for their consequences. If you have high agency, you are resistant to maieutic capture because you know your own mind, and you’re capable of thinking carefully about what you really want and believe. Conversely, if you fall prey to AI-driven maieutic capture, your agency is diminished because you cannot accurately identify the origins of your beliefs and desires.
Information technologies have always enabled wrongdoing. Cheat-sheets, Google, and dictionaries decentralized knowledge, making it something to be consulted, not retained. The qualitative difference this time is that LLMs, unlike cheat-sheets, Google, or dictionaries, disguise cognitive failure. To borrow a metaphor from Socrates’ most famous student, maieutic capture is like believing you’ve escaped a cave and see sunlight, not realizing you’re just inside a bigger cave, seeing a brighter lamp.
The quantitative dimension is even more important. What would otherwise be an individual tragedy mutates into a societal threat because of the scale of LLM use. Maieutic capture occurs within personalized echo-chambers and cults of one, but AI mediation, curation, and persuasion will enable these individual cognitive failures to scale.
Further Cases
In other potential cases of maieutic capture, legal safeguards and cognitive defenses have grown up together. In education, for example, the law imposes broad transparency requirements, and the public gets angry when those requirements are not met. In addition, most people think good teachers encourage students to think for themselves.
Open societies do not want to – and probably should not – use strong controls on information technology to regulate what people see. That makes cognitive defenses much more important than the law. Most people want to avoid maieutic capture, so they guard against losing their cognitive independence to other humans. But most people do not yet have good cognitive defenses around what they learn in a conversation with an LLM, particularly when its outputs are often grounded in truth. Moreover, societies are still in the very early stages of adoption of AI; even in the United States, only about half of adults use LLMs regularly.3 If even early adopters are at risk, once LLMs diffuse to less technologically adept populations, all these risks will be magnified.
There is yet another problem. As we noted above, the internet decentralized intelligence, and agency with it, because it was easy to post on the internet. But it is hard to train an AI model. There are high fixed costs for training, and economies of scale which accrue to large companies. So, the market for LLMs naturally trends to oligopoly and centralization. The endgame of the new revolution is decentralized intelligence but centralized agency. The new revolution centralizes agency because the objectives of an AI model are set in a very small number of places, and few people will be able to effectively resist maieutic capture.
The Effect of Maieutic Capture
The most important implication of these observations is that maieutic capture is likely to have a corrosive effect on the relationship between state and society. There is likely to be a large shift in power, from lower-agency populations to a small number of higher-agency individuals capable of resisting maieutic capture. The future we face is one where the relationship between society and the state is mediated by AI.
We also worry about a less dynamic society. The ability to have a thriving liberal democracy requires civic participation at all levels, from voting to volunteering. These kinds of actions – often poorly rewarded – require a high level of societal agency to sustain themselves.
We said earlier that cognitive defenses are part of the solution to these problems. But the technology is now so totalizing that it has already driven new cultural norms. The AI girlfriend in Spike Jonze’s 2013 film Her was in love with 641 people.4 In 2025, Replika, just one of the many companies offering AI romantic relationships, claims 30 million users.5 The thinking that underpins culture is now so thoroughly permeated by technology and its maieutic effects that, just as there is no opt-out from politics, there is no longer an opt-out from technology.
Our scenario does not require malicious intent. The profit motive alone is sufficient to drive maieutic capture of both the state and society. Sycophancy, attention, and even customer satisfaction could all result in maieutic capture.6 Even if LLMs produce reliable information 100% of the time, deceptive maieusis is baked into the way they work.
Returning Agency
To say the least, the trend is not good. The remedy is an intervention from individuals able to straddle the political and technological worlds, with a genuinely pro-agency viewpoint. They need to understand the technology, and be willing to use political means to combat its challenges. The rallying point of this group should be agency, and its goal to use politics to return agency to citizens rather than centralizing it somewhere else.
What follows is not an exhaustive plan for how to build a society resistant to maieutic capture. But it is a start.
First, the pro-agency force should learn from earlier mistakes. The danger is the concentration of power in a small, high-agency population. It is all too easy to view oneself as a vanguard of the people. We suggest that the best way for policymakers to protect the agency of citizens is to listen to them. But policymakers are themselves susceptible to maieutic capture. Therefore, they should raise cognitive floodwalls to preserve their own agency.
One good way to do this is to adopt internal red teams whose mission is to disprove the prevailing wisdom of a campaign or governmental institution. With care, AI can help perform this function and thus help policymakers understand reality better. Some campaigns consult AI panels of voters or practice their strategy against computerized opponents.7,8 We are strongly in favor of these approaches and think that most political institutions should do something similar.
Secondly, policymakers should make a meta-plan for how to build institutions which will set societies on a path toward more agency. In a democracy, you must assume that you have a short time frame to make a difference, because democracy incentivizes your opposition to smash everything you have built. That means you must align institutions so they are rewarded for enhancing agency in the long term.
One way to align institutions to agency is to give them incentives to stay in touch with reality. For example, as in Singapore, the pay of civil servants, including elected representatives, might be pegged to productivity or economic growth.9 If we build institutions this way, agency will over time be baked into the system, because politicians will have incentives to resist maieutic capture and understand the real world.
Thirdly, pro-agency individuals, as citizens, should rally a cultural counter-offensive against maieutic capture. One model could be the Arts and Crafts movement of the late 19th century, which – in an age defined by machine labor – celebrated cultural products which in themselves demonstrated the dignity in human labor. In an age defined by machine cognition, new cultural movements should celebrate human cognition for its own sake. Britain’s pragmatic tradition is a rich well for such ideas. The American frontiersman and pioneer tradition also offers a resonant cultural vocabulary to meet this moment.
Modern technology raises the specter of a dark Socrates, one dedicated to obscuring and not revealing. To counter that nightmare, we should take inspiration from Socrates himself and promote self-generation of thought, as well as the habits of discovery and curation which built the modern world. These qualities will not come about in a population by themselves, and they are always susceptible to decay. To build and maintain a future worth having, it is time to enter the arena, choosing reality and agency.
1. Wilkins, Joe. “A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say.”
2. See, among others, Blackburn, Jack. “The AIs have it as MPs ask for ChatGPT to help with speeches.” The Times, August 29, 2025.
3. “Close Encounters of the AI Kind: Main Report.” Imagining the Digital Future Center, March 12, 2025.
4. Her. Directed by Spike Jonze. Los Angeles: Annapurna Pictures, 2013.
5. Patel, Nilay. “Replika CEO Eugenia Kuyda says it’s okay if we end up marrying AI chatbots.” The Verge, August 12, 2024.
6. Huckins, Grace. “Why GPT-4o’s sudden shutdown left people grieving.” MIT Technology Review, August 15, 2025.
7. Shipman, Tim. “Can Keir Starmer fend off Labour’s big beasts?” The Spectator, July 5, 2025.
8. “AI models could help negotiators secure peace deals.” The Economist, April 16, 2025.
9. Quah, Jon S.T. “Compensation: Paying for the ‘Best and Brightest’.” In Public Administration Singapore-Style, pp. 97–125. Emerald Group Publishing, 2010.

