Learning and play
Machine intelligence, trained primarily on the archive of human language, inherits both its brilliance and its limitations: planar abstraction without embodiment, prediction without participation. This compromise was initially pragmatic. Rather than parse the messiness of reality, models learned from its readily available, albeit reductive, effigies: descriptions, depictions, and recordings.
But truly intelligent machines must acquire the capacity to perceive, anticipate, and improvise within the unfolding dynamics of reality: from the relationships among intent, action, and consequence to the irreversibility of time and the demands of spatial awareness.
Building general artificial intelligence isn’t just about scaling data and compute; it’s about teaching machines to play.
Play, far from a trivial or hedonistic pursuit, has been the cradle of human learning: it comprises rule‑bound yet voluntary activities that let people rehearse identities, negotiate norms, or imagine futures without the full expense of failure (Johan Huizinga, Homo Ludens, 1938).
One might argue that play has been evolution’s way of lowering the expected cost of error while maximizing informational return. This open‑ended experimentation (where we invent or revise rules) stands in stark contrast to work or, more generally, constrained contests (where we master existing rules and failure has severe consequences).
Modern artificial intelligence already echoes this dialectic in algorithmic form. Reinforcement‑learning agents, for example, oscillate between exploration (playful forays into untested actions) and exploitation (disciplined use of proven strategies), mathematically recapitulating the play‑work continuum as they iteratively simulate, critique, and refine their understanding of the environment.
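To make the trade-off concrete, here is a minimal, illustrative sketch of an epsilon-greedy multi-armed bandit, a standard textbook formulation of exploration versus exploitation. The function name, the epsilon value, and the Gaussian reward model are assumptions chosen for the example, not details from the text.

```python
import random

def epsilon_greedy_bandit(true_means, steps=1000, epsilon=0.1):
    """Toy epsilon-greedy bandit: explore with probability epsilon, exploit otherwise."""
    n_arms = len(true_means)
    estimates = [0.0] * n_arms   # running estimate of each arm's average reward
    counts = [0] * n_arms
    total_reward = 0.0

    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                         # explore: playful foray
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])   # exploit: proven strategy
        reward = random.gauss(true_means[arm], 1.0)                # noisy payoff (assumed model)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean update
        total_reward += reward

    return estimates, total_reward

if __name__ == "__main__":
    estimates, total = epsilon_greedy_bandit([0.2, 0.5, 0.8])
    print(estimates, total)
```

With even a small epsilon, the agent keeps sampling unfamiliar arms often enough to discover the best one, while spending most of its steps exploiting what it has already learned.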
World and action models extend the dichotomy: agents first “dream” within compressed simulacra of reality, testing possibilities at low cost and deriving hypotheses that accelerate real‑world learning. Inside these learned simulations they can evaluate an effectively unlimited range of actions over short horizons, an exploration rate and diversity impossible in physical environments.
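A schematic sketch of such “dreaming”, under simple assumptions, is a random-shooting planner: roll many candidate action sequences forward inside a learned dynamics model, score them with a predicted reward, and keep the most promising first action. The names world_model, reward_fn, and the toy stubs below are hypothetical placeholders, not any particular lab’s implementation.

```python
import random

def imagined_rollout(world_model, reward_fn, state, horizon=10):
    """Roll one random action sequence forward entirely inside the learned model."""
    total, actions = 0.0, []
    for _ in range(horizon):
        action = random.uniform(-1.0, 1.0)     # sample a candidate action
        state = world_model(state, action)     # predicted next state: no real-world cost
        total += reward_fn(state)              # predicted reward for that imagined state
        actions.append(action)
    return total, actions

def plan_by_dreaming(world_model, reward_fn, state, n_rollouts=256, horizon=10):
    """Evaluate many imagined futures cheaply; return the best first action found."""
    best_return, best_actions = float("-inf"), None
    for _ in range(n_rollouts):
        ret, acts = imagined_rollout(world_model, reward_fn, state, horizon)
        if ret > best_return:
            best_return, best_actions = ret, acts
    return best_actions[0]

# Stub dynamics and reward, standing in for learned networks (assumed for illustration).
toy_model = lambda s, a: s + 0.1 * a
toy_reward = lambda s: -abs(s - 1.0)   # prefer states near 1.0

if __name__ == "__main__":
    print(plan_by_dreaming(toy_model, toy_reward, state=0.0))
```

The point of the sketch is the cost structure: every rollout happens in imagination, so the agent can try hundreds of futures before committing to a single real action.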
Just as play shaped human evolution, video games have a long history of profoundly influencing consumer behavior and catalyzing significant innovation, often pushing the boundaries of what is technologically possible: from complex 3D computation and representation to game theory and the birth of modern artificial intelligence.
It is, perhaps, no coincidence that many research labs, including DeepMind and OpenAI, have deep roots in gaming. Or that Nvidia, the world’s most valuable company in 2025, started as a gaming company in 1993.