Commodified Intelligence - The Silent Rise of Algorithmic Governance
Reflections on the decline of human judgment, driven by increasing societal risk aversion.
Introduction
In contemporary societies, rising levels of formal education and rapid technological advancement paradoxically seem to coincide with a marked increase in risk aversion and a growing cultural intolerance for uncertainty. This development is not a symptom of ignorance, but rather a byproduct of structural, institutional, and psychological pressures that shape modern life. Systems that once rewarded initiative and resilience now emphasize compliance, optimization, and complacency. Economic frameworks penalize failure more harshly than ever before, educational models encourage standardization over original thinking, and the ever-present gaze of social media discourages experimentation by exposing every misstep to public scrutiny.
Together, these trends form the foundation for what may be understood as a broader civilizational shift: a transition from an ethos of engaged human agency to one of passive deference—to institutions, to systems, and increasingly, to probabilistic algorithms. The critical distinction here is between risk, which is measurable and probabilistic, and uncertainty, which concerns the fundamentally unknowable; the retreat underway is not simply from the former but from the latter. This essay explores the implications of this withdrawal, arguing that the modern desire to escape uncertainty may, over time, lead to the voluntary transfer of authority to algorithmic systems—not through coercion, but through preference and habituation.
I. Risk, Uncertainty, and the Cultural Logic of Avoidance
Risk and uncertainty are often conflated, yet they refer to distinct epistemological categories. Risk involves situations where the possible outcomes and their probabilities are known or at least estimable, such as in insurance, actuarial science, or games of chance. Uncertainty, by contrast, deals with situations where the outcomes are unknown, the variables are not all identifiable, and probabilities cannot be assigned—such as in creative or entrepreneurial endeavors, long-term policy planning, or moral decision-making.
Modern institutions have increasingly optimized themselves around risk management, but have become structurally and culturally incapable of handling uncertainty. Economic systems, particularly post-2008, have hardened against failure. The cost of entrepreneurial risk or unconventional career moves is magnified by high debt loads, volatile job markets, and eroded social safety nets. Simultaneously, education—especially higher education—has become an instrument of credentialism rather than a crucible for independent thought. Students are trained to minimize mistakes, master testable content, and align themselves with predictable career trajectories. These systems cultivate proficiency in managing quantifiable risks, but leave individuals poorly equipped to navigate unstructured, uncertain domains.
Social and digital technologies intensify this aversion. The pressure for performance in public—on social media and through algorithmic metrics of attention—amplifies the perceived cost of failure and reduces the space for exploratory action. In short, modern culture not only disincentivizes risk but actively delegitimizes engagement with the uncertain, cultivating and rewarding mediocrity in its place. This trend exacerbates a societal preference for certainty, contributing to a growing dependency on technologies that promise predictability and “error-free” decision-making.
II. The Outsourcing of Judgment and the Rise of Algorithmic Authority
In response to this cultural condition, individuals increasingly outsource judgment to technologies that promise control, clarity, and convenience. This shift is gradual and largely voluntary. Most people today would find it difficult to navigate a city without GPS, plan a trip without algorithmic recommendations, or make an important decision without consulting digital tools. These micro-level displacements of agency, while seemingly benign, constitute a broader epistemic transformation: a loss of the capacity to act in uncertainty without external guidance.
What begins as delegation in low-stakes contexts often generalizes. As artificial intelligence becomes embedded in institutional processes—from hiring algorithms and predictive policing to algorithmic trading and healthcare triage—the authority of human judgment is further eroded. These systems are frequently framed as more objective, efficient, and fair than their human counterparts. Yet their opacity, complexity, and reliance on historical data raise profound concerns about accountability, value pluralism, and moral reasoning.
Crucially, the adoption of such systems is not primarily coercive. It reflects a deeper cultural preference for certainty over deliberation, for the appearance of neutrality over the responsibilities of interpretation. In this way, society may drift toward a form of algorithmic governance, not by force but by widespread consent—a phenomenon more insidious than authoritarianism, because it wears the face of convenience and rationality.
III. The Commoditization of Intelligence and the Path to Algorithmic Governance
One of the most crucial underlying shifts in this process is what may be termed the "commoditization of intelligence". Historically, human intelligence—reflected in factors like cognitive ability, stamina, and creativity—was seen as a source of individual and collective empowerment. However, over time, societal changes and technological advancements have driven the standardization of these attributes, gradually devaluing individual intelligence in favor of more efficient, "user-friendly" systems.
This trend began with the commoditization of physical stamina: society’s reliance on transportation networks, automated labor, and mass-produced food has greatly reduced the need for individuals to engage in physical toil. Over the last century, this shift has resulted in the vast majority of people no longer needing to perform the physical work that their ancestors once did, allowing a greater degree of specialization and social mobility.
The next step in this process is the commoditization of cognitive labor—a development already underway with the rise of artificial intelligence. Technologies that provide real-time information, streamline decision-making, and solve complex problems on behalf of individuals gradually diminish the role of human cognition in many aspects of life. This move toward cognitive automation risks making higher levels of intelligence and independent judgment less essential, particularly in the context of democratic decision-making.
Democracy itself becomes more vulnerable to the ascendancy of algorithmic authority as this trend progresses. If human cognitive labor becomes less necessary, the need for individual reasoning in complex, uncertain situations weakens, rendering citizens more susceptible to accepting algorithmic governance. The more society's intellectual tasks are offloaded to machines, the less capable individuals become of grappling with the inherent complexities of policy-making, ethical decisions, and social governance. This trend represents a silent shift from democratic sovereignty to algorithmic technocracy—not one imposed by a ruling elite, but one gradually endorsed by a populace increasingly alienated from the intellectual tasks of governance. Even if this transition takes several generations, it will be remarkably fast on the timescale of human evolution.
IV. The Redefinition of Freedom and the Fragility of Agency
The implications of this shift are not merely technological but philosophical. Traditional notions of freedom—especially those grounded in Enlightenment liberalism—presume a capacity for autonomous reasoning, moral deliberation, and engagement with the unknown. These capacities are necessarily bounded by uncertainty. To be free is not to be optimized; it is to be responsible for actions whose outcomes cannot be fully foreseen.
Yet in a society that increasingly equates freedom with the absence of error, noise, or friction, this older conception of agency becomes less attractive. Freedom becomes redefined as the right to be spared from difficult decisions—an abdication rather than an assertion of responsibility. In this context, the demand for systems that “just work” and “know better” grows stronger. If left unchecked, this cultural disposition may erode not only personal judgment, but the social institutions—democracy, the rule of law, education—that depend on it.
Furthermore, the habituation to algorithmic assistance changes what people expect from themselves and others. As generations grow up relying on real-time feedback, algorithmic correction, and predictive suggestion, society's collective capacity to navigate complexity weakens. The ability to tolerate ambiguity, dissent, and provisional knowledge—the hallmarks of a robust civic and intellectual life—may decline. The more we perceive machines as superior in rationality and reliability, the more we may come to distrust the very messiness that makes human deliberation meaningful.
Conclusion
The rising aversion to risk—and more deeply, to uncertainty—is not merely a cultural trend but a civilizational inflection point. In seeking relief from the burdens of judgment and ambiguity, modern society increasingly delegates authority to algorithmic systems—not by force, but by choice. This shift, while gradual and often invisible, is self-reinforcing: as human agency atrophies, the appeal of machine certainty grows stronger.
The commoditization of intelligence—once a personal attribute, now an external utility—accelerates this process. AI systems have already surpassed average human cognitive performance in numerous domains, particularly those requiring speed, pattern recognition, and optimization at scale. As these systems become embedded in everyday decision-making, democratic societies may increasingly prefer their authority—not through imposition, but through quiet consent. First in trivial matters, then in complex governance, algorithmic systems will be seen as more competent, neutral, and trustworthy than fallible human agents.
This trajectory raises a deeper question. Perhaps it is not a decline, but an evolutionary shift. Human history is marked by the externalization of capacities—tools, language, memory. If intelligence now migrates beyond the individual, is this a betrayal of human nature, or its logical continuation?
The core predictions of this essay are stark: intelligence, once a distinguishing human faculty, is becoming infrastructural—embedded in tools, platforms, and decision systems that operate beneath conscious awareness. Risk is no longer seen as a necessary condition of growth or discovery, but as a liability to be minimized through optimization. Uncertainty, the historical driver of exploration, creativity, and moral choice, is increasingly treated as a design flaw—something to be engineered out of experience. In this context, the rise of AI governance is not an external threat to democracy, but an evolution from within: a shift brought about not in defiance of the people's will, but in deep alignment with it.
If societies grow habituated to viewing human reasoning as slow, biased, or unreliable—and machines as precise, “fair”, and impartial—the transfer of power becomes not only plausible but desirable. The legitimacy of governance may slowly migrate from human deliberation to computational output, as people come to equate algorithmic decisions with competence, and human judgment with noise. The transition will not be marked by a coup or crisis, but by a quiet convergence of cultural preference, political fatigue, and technological trust.
The final question, then, may not be whether AI will take control—but whether, in the pursuit of safety, efficiency, and certainty, we will gradually and willingly cede it. And if the shift is subtle enough—embedded in conveniences, framed as progress—will we even recognize the moment we let go?
(Thanks to Zack Baker for sort of prompting this. I wish we had had a lot more time.)


