Intelligent Beasts

Old world attitudes, centaurs, and their lessons in AI temperance.

In 2024, tech venture capitalist Marc Andreessen famously remarked, “Medieval people were better prepared for the era ahead of us – with AI, robots, and drones – than we are today because medieval people took it for granted that they lived in a world with higher powers, spirits, angels, and demons.”1

A man whose business is the future chiding his audience with a reference to the past. Was this merely rhetoric, or had he identified a real concern?

Andreessen is not the first to apply medieval themes to AI. A common allusion is the centaur, “a human brain enabled by machine power.”2 In the modern allegory, the human intellect can achieve more thanks to the generative tool, the workhorse. It’s a helpful mental model. It is also one that is often misunderstood.

The centaur — with the upper body of a man and the lower of a horse — appears as early as 1000-900 BC, during the Early Iron Age in Greece.3 Though most closely associated with Ancient Greek mythology, its tradition carried into the Middle Ages. In these pre-Christian and early Christian eras, centaurs represented the dualistic nature of man, as divided by reason and impulse, or an aspect of God and an aspect of the animal.4

But whereas moderns perceive that the animal nature has been mastered by the human, the Medievals held that the horse remained feral, that its sensual impulses could often overpower the intellect. The modern centaur is an advantage to society, an intellect-workhorse, while the medieval centaur is a disadvantage — an intelligent beast. 

Understanding this contrast puts Andreessen’s words in a different light: medieval people could understand the risks of AI because they knew the intellect to be malleable. Whether an equine lower body or an AI tool, an instrument can operate on the intellect as much as the intellect operates on the instrument.

The two views of the centaur analogize two contrasting AI-user relationships: the content recommendation algorithm and the diagnostic algorithm. The former demonstrates the medieval centaur trope, while the latter demonstrates the modern. Knowing the abstract and concrete differences between the two can help both software engineers and users embrace AI in a way that draws out the workhorse, not the beast.

A Medievalist Content Recommendation Algorithm

At the most abstract level, an algorithm is a solution to a problem. The content recommendation algorithm solves the problem of what to recommend next for a given user. For companies like Netflix, Meta, and TikTok, the recommendation algorithm “succeeds” according to a single benchmark: user engagement.5 This is the overall connection between technology and the user, defined by metrics like sharing, liking, clicking on ads, and the rate of return to and time spent on the app. To maximize user engagement, the algorithm begins with a question: how did a user like this user engage with content like this content?6 At the outset, the algorithm operates according to presumed social properties, not known individual properties. But once the user begins to chart a unique behavioral record on the platform, the algorithm uses this data to tailor its recommendations and test out new content, juggling paths more and less traveled.7 TikTok is especially known to prioritize exploration and propose new content to its users, compared to other social media services.8
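The loop just described can be sketched in a few lines. Everything below is invented for illustration (the class and method names, the scoring scheme, the exploration rate); no real platform’s system is this simple, but the shape is the same: a cohort prior, a growing behavioral record, exploration weighed against exploitation, and a single engagement benchmark.

```python
import random
from collections import defaultdict

class EngagementRecommender:
    """Hypothetical sketch: recommends whatever a user, or users like them,
    engaged with most. Not any real platform's implementation."""

    def __init__(self, explore_rate=0.2):
        self.explore_rate = explore_rate          # chance to test unseen content
        self.cohort_scores = defaultdict(float)   # prior: "users like this user"
        self.user_scores = defaultdict(lambda: defaultdict(float))

    def record(self, user, item, engaged):
        # The single benchmark: did the user click, like, share, or linger?
        self.user_scores[user][item] += 1.0 if engaged else -0.5
        self.cohort_scores[item] += 1.0 if engaged else 0.0

    def recommend(self, user, catalog):
        seen = self.user_scores[user]
        unseen = [item for item in catalog if item not in seen]
        # Exploration: occasionally test new content on the user.
        if unseen and random.random() < self.explore_rate:
            return random.choice(unseen)
        # Exploitation: fall back on cohort behavior until the user
        # charts a behavioral record of their own.
        def score(item):
            return seen.get(item, self.cohort_scores[item])
        return max(catalog, key=score)
```

Note what is absent: any notion of whether the content is good for the user. The objective function is engagement, full stop, which is the essay’s point about the beast directing the intellect.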

While claiming to mold to the preferences of the user, the content recommendation algorithm itself molds these preferences, to the detriment of the intellect. With user engagement as the benchmark for success, the algorithm tends to activate and reinforce the subconscious impulse for pleasure.9 Clicks, likes, shares, and the scroll are reflexive reactions, most often driven by sensory or sensual cues. Through the iterative learning process, based on the single principle of user engagement, the algorithm learns to become addictive, recognizing no good in moderation.

According to Stanford psychiatrist Anna Lembke, a content platform simulates a hypodermic needle, where the hit is not just a technical phenomenon, but a socio-technical phenomenon, which feeds on the approval of mainstream culture.10 And as American companies fear the charge of censorship, each one claiming to uphold free speech, it is cleanest to operate on the principle, “the-algorithm-knows-best.”11 But this is no active collaboration with the user. It is the rhythms of scrolling behavior, a subconscious attention to the platform quantified as engagement on the interface, that determine a feed.

The damage to the intellect is visible on a societal level. Teenagers struggle with attention and the capacity for learning.12 Time management skills, mental well-being, and the strength of interpersonal ties have likewise declined.13 In 2012, researchers publishing with the NIH described Internet Addiction Disorder (IAD), advocating for its inclusion in an updated Diagnostic and Statistical Manual of Mental Disorders (DSM).14

This is the medieval centaur, the beast directing the intellect. But the allegory can also help us frame the nature of the malady: we have not evolved out of our base impulses for comfort or pleasure, for the gratification of the senses. Nor is it the impulse itself that is “evil,” but the trivial nature of it, the inability to see beyond the first and nearest desire. Neither the medieval centaur nor the TikTok algorithm operates on the understanding that good things come to those who wait. 

Modernists in the Diagnostic Recommendation Algorithm

One algorithm may create clinical issues, but another can help resolve them. In medicine, for example, doctors who use diagnostic algorithms stand to increase their intellectual capabilities. Hospitals routinely use AI to coordinate administrative flows, but these tools can also provide diagnostic recommendations, scanning radiographic images, genetic information, patient histories, and other data.15 The diagnostic algorithm can ingest and integrate data from multiple sources, catching details and making connections that would otherwise go undetected.16

The diagnostic algorithm draws and shares insights from data, but it does not execute those insights without human discretion. This arrangement, known as human-in-the-loop, requires a human being to exercise judgment on the solutions an algorithm suggests. As doctors already study images to make diagnoses, the diagnostic algorithm only widens the interpretive options those images offer.17 As part of the diagnosis, doctors must understand what the automated insights mean, which encourages their learning in step with the tool, sharpening iron against iron. Insofar as doctors remain in the loop, the diagnostic tool expands their intellectual force. In addition, diagnostic algorithms expand the practical scope of doctors.18 By accelerating the process of diagnosis, AI tools enable doctors to dedicate more time to surgeries, compassionate care, and the most high-risk cases.
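The human-in-the-loop pattern can be made concrete with a minimal sketch. All names and values here are invented for illustration, not drawn from any clinical system: the point is only that the algorithm may suggest, but nothing is committed to the record without a clinician’s explicit judgment.

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    """A machine-generated diagnostic proposal (hypothetical shape)."""
    diagnosis: str
    confidence: float
    evidence: list = field(default_factory=list)  # features shown to the doctor

def finalize(suggestion, doctor_review):
    """The algorithm suggests; only the doctor's judgment commits."""
    if doctor_review is None:
        raise RuntimeError("No diagnosis is recorded without human sign-off.")
    # The doctor may accept the suggestion verbatim or override it entirely;
    # either way, the recorded diagnosis is the human's, not the model's.
    return doctor_review

suggestion = Suggestion("early-stage lesion", 0.87,
                        ["density asymmetry", "margin irregularity"])
final = finalize(suggestion, doctor_review="early-stage lesion, biopsy advised")
```

The design choice worth noticing is that `finalize` has no path that returns the model’s output directly; the human judgment is structurally required, not optional.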

Diagnostic algorithms have had special success in helping detect cancer at early stages, especially breast cancer.19 They have been able to predict the onset of diabetes, cardiovascular disease, and neurological disorders, reducing the need for acute medical care in the first place.20

Here we see the modern centaur helping to characterize the diagnostic recommendation algorithm, as a tool that increases the power of medical judgment and the scope of medical treatment. The doctor is an executive intellect who oversees the digital workhorse. 

Mythical Battles

The medieval and modern centaurs of AI differ according to the problems they solve, specifically the ends and stakes of those problems.

Ends

A recommendation algorithm should serve an end beyond the user. The medieval centaur seeks pleasure for himself, while the modern centaur pulls the plough and feeds the village.

For the content recommendation algorithm, the end is the user on his feed; for the diagnostic algorithm, it is the patient in the hospital room, a person for whom the doctor is responsible. In this latter AI-user relationship, the user (the doctor) does not experience the recommendation, but he is responsible for its success. He or she must exercise intellect for the benefit of another. This altruistic AI-user relationship is most evident in the healthcare and defense industries, where doctors or operators use algorithms to assist patients or missions, accountable for the successes and failures of those initiatives.

Stakes 

The stakes of an algorithm bear directly on its accuracy. The imbibing medieval centaur need not know the time of day, so he does not care to learn. The village will starve if there is no food, so the modern centaur must know how to work.

Because temporal pleasure is low-stakes, content recommendation algorithms can afford to be wrong, and often are. Engagement rates on social media platforms are below 1%, except for TikTok, whose engagement rate is just above 5%.21 Social media platforms compensate for their inaccuracies by building addictive features into the user interface, such as auto-play or infinite scroll.22 As a result, the AI tool generates only a perception of value to the user, with little reality beneath that perception. Because life and health are high-stakes, doctors can ill afford to be wrong. Their AI tools cannot off-load errors to the physical interface. No tool should hide behind aesthetics, as even the decorated swords of the Middle Ages were blade and steel.

Algorithms should not only pursue altruistic ends, but charge towards the problems with the highest stakes, those involving life and principle. Just as necessity drives innovation, the stakes of an algorithm likewise drive its accuracy. 

Chiron the Wise

Not all centaurs of the Ancient Greek and medieval traditions had inept intellects. When not under the excessive influence of wine, they could be wise, like the centaur Chiron, tutor of the god of medicine, Asclepius.23 The medieval and modern centaurs are not fixed literary markers along an arc of human development. They remain, as always, tropes for our choosing.

Andreessen may be right: the Medievals were better prepared for AI because they granted the existence of spirits outside human control. But this caution is not an absolute: the hope of AI lives in its real productive capabilities and a humility towards our dominion over creeping things. By charging towards altruistic problems with high stakes, the modern myth, and a present hope, may be made manifest.

  1. “Marc Andreessen On Everything,” Joe Rogan Experience #2234, November 27, 2024. ↩︎
  2. Bradley L. Boyd and Tiffany Saade, “Human Cognitive Autonomy,” The Republic, April 1, 2025. ↩︎
  3. Matthew Lloyd, “The Centaur of Lefkandi: A remarkable Late Protogeometric figurine,” Ancient World Magazine, March 2, 2021. ↩︎
  4. Marga Patterson, “Centaurs in Art: Duality Throughout the Ages,” Daily Art Magazine, May 18, 2022. ↩︎
  5. Arvind Narayanan, “Understanding Social Media Recommendation Algorithms,” Knight First Amendment Institute at Columbia University, 2023, 18. ↩︎
  6. Ibid., 22. ↩︎
  7. Ibid., 23. ↩︎
  8. Ibid., 21. ↩︎
  9. Ibid., 36. ↩︎
  10. Bruce Goldman, “Addictive potential of social media, explained,” Stanford Medicine News Center, October 29, 2021. ↩︎
  11. Arvind Narayanan, “Understanding Social Media Recommendation Algorithms,” Knight First Amendment Institute at Columbia University, 2023, 34. ↩︎
  12. BT Sharpe, et al., “Dopamine-scrolling: a modern public health challenge requiring urgent attention,” Royal Society for Public Health, April 12, 2025; Xing Zhang, et al., “Exploring short-form video application addiction: Socio-technical and attachment perspectives,” Telematics and Informatics 42, 2019. ↩︎
  13. Ibid.; Ibid. ↩︎
  14. Hilarie Cash, et al., “Internet Addiction: A Brief Summary of Research and Practice,” National Library of Medicine, National Institute of Health, November 8, 2012. ↩︎
  15. Shiva Maleki Varnosfaderani and Mohamad Forouzanfar, “The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century,” Bioengineering 11, no. 4 (February 2024). ↩︎
  16. Ibid. ↩︎
  17. Ibid. ↩︎
  18. Lisa D. Ellis, “The Benefits of the Latest AI Technologies for Patients and Clinicians,” Harvard Medical School, August 30, 2024. ↩︎
  19. Ibid. ↩︎
  20. Ibid. ↩︎
  21. Arvind Narayanan, “Understanding Social Media Recommendation Algorithms,” Knight First Amendment Institute at Columbia University, 2023, 18. ↩︎
  22. BT Sharpe, et al., “Dopamine-scrolling: a modern public health challenge requiring urgent attention,” Royal Society for Public Health, April 12, 2025. ↩︎
  23. Marc A. Shampo, “Medical Mythology: Chiron the Centaur,” Mayo Clinic Proceedings 67, no. 2 (February 1992). ↩︎