The Great Handover: Humans to Machines
For roughly 70,000 years, humans have dominated this planet through our superior cognitive abilities. Our capacity for complex language, abstract reasoning, and unprecedented cooperation allowed us to create increasingly sophisticated social structures, from hunter-gatherer bands to agricultural kingdoms to industrial democracies. Throughout this long journey, one constant remained: humans made the decisions that mattered. We might have used tools, but we controlled them.
That era appears to be ending.
Over the past two decades, and accelerating dramatically in recent years, the United States has led humanity into an unprecedented transition: the gradual handover of decision-making authority from human minds to digital systems. This shift began subtly through recommendation algorithms suggesting what we might watch or purchase. It expanded as search engines determined what information we encountered. It accelerated as social media platforms shaped what news reached us and which voices we heard. Today, it extends into domains previously considered exclusively human: diagnosing diseases, writing legal briefs, creating art, coding software, translating languages, and even making policy recommendations.[1]
A revolution without public discussion
This transition represents the most significant shift in decision-making authority since the democratic revolutions of the 18th century transferred power from monarchs to citizens. Yet unlike those revolutions, this handover has occurred with minimal public deliberation, guided instead by technological momentum, market incentives, and convenience rather than conscious societal choice. As historian Yuval Harari observed, “Humans are transferring more and more authority to algorithms not because we necessarily trust them more than humans, but because we’ve structured our systems to make algorithmic decisions the path of least resistance.”[2]
America leads the transition
The United States stands at the epicenter of this transformation. Silicon Valley’s tech giants (Google, Meta, Amazon, Microsoft, and Apple) have created the digital infrastructure through which this authority transfer occurs. American financial markets have provided the capital fueling AI development. American universities have pioneered the algorithmic breakthroughs enabling these systems. And American presidencies of both parties have embraced digital transformation while struggling to develop governance frameworks for it.
This transition appears to be accelerating, not slowing. The past five years have seen artificial intelligence capabilities arrive decades ahead of expert predictions. Systems like ChatGPT, Claude, Gemini, and their successors demonstrated general reasoning capacities previously thought to require human consciousness. Image generation models produced visuals indistinguishable from photographs. Voice synthesis became impossible to differentiate from human speech. Robotics advanced from carefully programmed tasks to adaptable behavior in unpredictable environments. America didn’t just witness this acceleration. It led it.[3]
The key question
The crucial question is not whether this machine transition is occurring, as it clearly is, but what it means for America’s trajectory and humanity’s future. Will algorithmic governance enhance human flourishing by making better decisions than our flawed human brains? Or will it undermine the foundations of democratic society by transferring authority to systems optimizing for engagement rather than wisdom? Is America ascending to unprecedented prosperity through technological leadership, or descending into unprecedented vulnerability through growing dependence on systems it cannot fully control?
To answer these questions, we must examine how recent American presidencies, global events, and technological developments have collectively accelerated this transition, creating a historic inflection point with profound implications for humanity’s future.
The Presidential Machine Pivot: Both Parties Embrace Algorithms
The transition toward machine governance spans multiple presidencies, with each administration accelerating different aspects of this shift while struggling to develop frameworks for managing it. This pattern crosses partisan lines, suggesting the machine transition transcends traditional political divisions.
Bush: the security-digital foundation
The George W. Bush administration established critical foundations following 9/11 through what surveillance scholars call the “security-digital complex.” The mass collection and algorithmic analysis of communications data through programs like PRISM and STELLAR WIND normalized algorithmic surveillance on an unprecedented scale. As journalist Julia Angwin documented, these programs represented the first large-scale delegation of suspicious-activity identification from human analysts to automated systems scanning billions of communications. This precedent legitimized algorithmic authority in national security—a domain where governmental legitimacy was least contested.[4]
Obama: computational politics takes center stage
The Obama presidency accelerated this transition through what political scientist Philip Howard terms “computational politics.” Obama’s campaigns pioneered big data and algorithmic targeting that revolutionized political communication. His administration embraced algorithmic decision-making across domains including:
- healthcare policy (evidence-based medicine)
- criminal justice (risk assessment algorithms)
- education (algorithmic teacher evaluations)
- drone warfare (“signature strikes” based on pattern recognition)
While characterized as “data-driven governance,” these approaches represented the incremental transfer of judgment from human officials to computational systems.[5]
Trump: social media governance
The Trump presidency triggered an unprecedented acceleration through what media scholar Zeynep Tufekci calls “the algorithmic public sphere.” Trump’s use of Twitter as a primary governance mechanism embedded social media algorithms directly into democratic processes. Simultaneously, his administration embraced algorithmic approaches to immigration enforcement through systems like ATLAS, border surveillance technologies, and social media monitoring to determine visa eligibility. These developments demonstrated how algorithmic governance could bypass traditional institutional constraints when combined with executive authority.[6]
Biden: managing the algorithms
The Biden administration pursued what technology policy expert Susan Crawford terms “managerial algorithmic governance” by attempting to establish guardrails around algorithms while expanding their use across government. The administration embraced predictive analytics for pandemic response, algorithmic approaches to climate modeling that drive policy, and AI-enhanced regulatory review. Simultaneously, it issued the AI Executive Order establishing oversight mechanisms that, while recognizing algorithms’ growing governance role, primarily addressed how rather than whether these systems should make decisions affecting citizens.[7]
Beyond partisan divisions
This cross-administration pattern reveals how the machine transition transcends partisan differences about government’s size or reach. Despite vastly different ideological orientations, each presidency contributed to what political scientist Langdon Winner calls “technological drift”: the gradual adoption of technologies that fundamentally reshape governance without any explicit democratic decision to change course. As former government technologist Latanya Sweeney observed, “the machine transition occurred not through any single administration’s choices but through the cumulative effect of thousands of decisions that individually seemed reasonable but collectively transformed governance foundations.”[8]
COVID-19: the great acceleration
If presidential administrations gradually advanced the machine transition, the COVID-19 pandemic functioned as what technology historian W. Brian Arthur refers to as an “acceleration discontinuity”, a crisis that compressed transitions expected to take decades into months. This acceleration manifested across interconnected domains, fundamentally altering America’s relationship with algorithmic governance.
Digital life becomes the default
The most visible pandemic acceleration occurred in the algorithmic mediation of daily life. Three shifts illustrate how readily Americans adopted it:
- remote work: adoption jumped from approximately 6% pre-pandemic to over 60% at its peak, with knowledge workers conducting their professional lives primarily through digital platforms optimized by engagement algorithms
- online shopping: increased by 44%, transferring consumer decision influence from physical store layouts to recommendation engines
- digital entertainment: consumption rose 215%, with streaming algorithms determining cultural exposure
These shifts made algorithmic influences not just common but unavoidable for most Americans.[9]
Algorithmic public health
More consequentially, the pandemic dramatically expanded algorithmic governance in public health, producing what health data scientist John Brownstein calls “computational epidemiology.” Contact tracing apps, exposure notification systems, mobility analysis tools, and hospitalization prediction models transferred significant decision authority from human health officials to algorithmic systems. These tools demonstrated both remarkable capabilities, predicting outbreak patterns with surprising accuracy, and concerning limitations, particularly regarding privacy, equity, and transparency.[10]
Algorithms shape scientific knowledge
Perhaps most significantly, the pandemic accelerated algorithmic authority in scientific knowledge production itself. The volume of COVID-19 research, over 700,000 papers in two years, made comprehensive human analysis impossible. Scientists increasingly relied on natural language processing algorithms to identify relevant studies, extract key findings, and synthesize evidence. These tools demonstrated unprecedented capacity to accelerate scientific discovery, but also raised profound questions about how knowledge gets validated when algorithmic systems increasingly determine which findings receive attention.[11]
Algorithmic access control
The pandemic also transformed America’s relationship with algorithmic verification of identity and permissions. Identity scholar Kaliya Young calls this “the algorithmic access society.” Vaccination passports, health attestation systems, and digital entrance screenings normalized algorithmic determinations of who could access physical spaces. These systems demonstrated exceptional efficiency while raising fundamental questions about the transfer of gatekeeping authority from visible human judgment to opaque algorithmic determinations.[12]
A fundamental shift
This pandemic acceleration revealed a profound shift in America’s implied social contract regarding algorithmic governance. As technology ethicist Evan Selinger observed, “Before COVID, algorithmic decision-making was something Americans could theoretically opt out of in many domains. After COVID, it became the default operating system for society, with participation effectively mandatory for basic functioning.” This transformation occurred not through democratic deliberation but through emergency response and practical necessity—establishing precedents that remained after the emergency passed.[13]
Big Tech’s authority capture: private companies gain public power
While government initiatives and global crises accelerated America’s machine transition, the most profound drivers came from the private sector, particularly the major technology companies. Political economist Shoshana Zuboff describes these firms as “the new knowledge oligopoly”: they have accumulated unprecedented authority over information flows, behavioral prediction, and, increasingly, the interpretation of reality itself.
Algorithmic information gatekeeping
The first phase of this authority capture occurred through what media scholars refer to as “the algorithmic turn in information gatekeeping.” Search engines such as Google became the predominant arbiters of what information users encountered, with 93% of online experiences beginning with search by 2020. Social media platforms, particularly Facebook, Twitter (now X), and YouTube, became primary news sources for a majority of Americans, with algorithms determining which stories, perspectives, and voices received attention. This function transferred the traditional Fourth Estate role of information filtering from news organizations with journalistic norms to profit-maximizing algorithms optimized for engagement rather than accuracy or civic value.[14]
From suggestions to behavioral shaping
The second phase involved what behavioral economist Karen Yeung terms “hypernudging”: the capacity to shape behavior through algorithmic prediction and intervention. Companies accumulated unprecedented data about human preferences, habits, and vulnerabilities, then deployed this information through recommendation systems that gradually influenced decisions from the trivial (what movie to watch) to the profound (what news to trust, what products to buy, whom to date). By 2022, recommendation engines drove approximately 35% of Amazon purchases and 70% of Netflix viewing, effectively functioning as outsourced decision-making for millions of Americans.[15]
General-purpose artificial intelligence changes everything
The third and perhaps most consequential phase emerged with what AI researcher Stuart Russell calls “the general-purpose AI transition.” Beginning around 2020, AI systems developed capabilities previously considered exclusively human:
- writing persuasive text
- creating realistic images
- engaging in sophisticated reasoning
- generating human-quality code
These developments accelerated dramatically through 2022-2023 with systems like GPT-4, Claude, and Gemini demonstrating reasoning capacity, knowledge integration, and language understanding that approached or exceeded human performance across numerous domains.[16]
The great authority inversion
This progression created what legal scholar Frank Pasquale calls “the authority inversion”, an unprecedented situation where systems designed by private companies increasingly determine which information humans encounter, which opportunities they receive, and increasingly, how reality itself gets interpreted. By 2023, AI tools were influencing:
- hiring decisions for approximately 85% of Fortune 500 companies
- content moderation affecting billions of people’s information environments
- creditworthiness determinations for most Americans
- everyday writing produced for work, education, and personal communication[17]
A governance paradox
The machine transition reveals a profound governance paradox. While America’s constitutional design carefully distributed human authority across branches and levels of government to prevent dangerous concentration, it created no equivalent protections regarding algorithmic authority. The result is what political philosopher Langdon Winner describes as “technological absolutism by default”: ungoverned algorithmic power exceeding the authority of any human institution within the constitutional system.[18]
The intelligence explosion threshold: AI starts to surpass humans
Recent developments suggest America may be approaching what AI researcher Eliezer Yudkowsky calls “the intelligence explosion threshold”: the point at which AI capabilities advance from narrow to general intelligence, enabling accelerating self-improvement with profound implications for human authority. While full artificial general intelligence remains theoretical, evidence indicates a significant inflection point has been reached in the human-machine relationship.
AI systems show emergent capabilities
The most visible indicator emerged in 2022-2023 with large language models demonstrating emergent capabilities exceeding their designers’ expectations. Systems like GPT-4 and Claude 3 achieved unprecedented performance across domains including:
- medical diagnosis (outperforming 72% of human doctors on standardized tests)
- legal reasoning (passing bar exams)
- scientific problem-solving (discovering novel protein structures)
- creative writing (producing content indistinguishable from human authors)
These developments suggest what AI researcher Francesca Rossi calls “the capability discontinuity”: the moment when AI systems transition from tools exhibiting programmed behaviors to agents demonstrating genuine reasoning capacity.[19]
AI transforms knowledge production
This capability expansion has already transformed knowledge production itself. By 2023, approximately 32% of scientific papers incorporated AI assistance in analysis, writing, or both. Legal documents, including briefs submitted to the Supreme Court, increasingly incorporated AI-generated content. Creative industries from journalism to screenwriting experienced “the authorship redistribution”, where human creativity increasingly involves collaboration with or curation of machine-generated content rather than pure human creation.[20]
From human to machine trust
Perhaps most significantly, these developments have initiated “the authority succession dynamic”, the gradual transfer of epistemic trust from human to machine judgment. Survey data indicates approximately 41% of Americans report trusting AI systems more than human experts for certain decisions, particularly in domains involving complex data analysis, pattern recognition, or specialized knowledge. This shift suggests a fundamental transformation in how knowledge claims gain legitimacy in modern society.[21]
Democratic implications
This transition has profound implications for American democracy. Democratic governance presumes citizens capable of independent judgment about the information they encounter. Philosopher Hannah Arendt described the “space of appearance”: the realm where citizens collectively determine reality through shared deliberation. AI systems increasingly mediate this space, determining which information appears credible, which arguments seem persuasive, and increasingly, what constitutes reality itself. This mediation creates “the epistemic bottleneck”, where human judgment becomes increasingly dependent on systems we neither understand nor control.[22]
Military authority questions
The intelligence explosion threshold also raises unprecedented questions about American strategic security. As defense analyst Paul Scharre documented, AI capabilities in military systems have advanced from narrow applications like missile guidance to increasingly autonomous functions including threat assessment, target identification, and even kill decisions in some contexts. These developments create a control gap, in which technological capabilities outpace human oversight capacity, potentially undermining the democratic principle of human decision authority in matters of war and peace.[23]
The recursive self-improvement question
What makes this threshold distinctive is the potential for recursive self-improvement, what mathematician I.J. Good first described as “the intelligence explosion.” For now, AI systems remain limited by human-provided data and architectural decisions. However, systems capable of improving their own algorithms could potentially enter an accelerating capability cycle beyond human comprehension or control. While full self-improvement remains theoretical, incremental advances in this direction suggest what AI safety researcher Stuart Russell calls “the alignment inflection point”: the moment when ensuring AI systems remain aligned with human values becomes simultaneously more crucial and more difficult.[24]
America’s trajectory: the three paths
As America accelerates into the machine age, three distinct trajectories emerge, each with profoundly different implications for American power and human flourishing. These paths are not mutually exclusive; elements of each will likely manifest simultaneously. However, which ultimately predominates depends largely on governance choices made in the coming decade.
Path 1: the augmentation ascendance
The first potential trajectory is what technology optimist Kevin Kelly calls “the augmentation ascendance”, where algorithmic systems enhance rather than replace human capabilities, creating unprecedented prosperity and problem-solving capacity. In this scenario, America’s technological leadership position translates to continued global influence through setting standards, controlling key infrastructure, and maintaining innovation advantages. Economic productivity grows at historically unprecedented rates as AI eliminates routine tasks while creating new opportunities in human-machine collaboration.[25]
Evidence supporting this optimistic trajectory includes the productivity surge in sectors successfully integrating AI:
- research scientists using AlphaFold have accelerated drug discovery timelines from years to months
- programmers utilizing coding assistants report productivity increases exceeding 30%
- healthcare systems employing diagnostic algorithms demonstrate improved accuracy while reducing physician burnout
These examples suggest potential for what economists Erik Brynjolfsson and Andrew McAfee call “the second machine age dividend”, when digital technologies finally deliver the productivity gains long promised but previously unrealized.[26]
Path 2: the hollowing stagnation
The second potential trajectory is what political economist Daron Acemoglu terms “the hollowing stagnation”: algorithmic systems replace human labor across sectors without creating sufficient new opportunities, leading to economic polarization and social instability. In this scenario, America experiences “the great decoupling”, when productivity improvements no longer translate to broad prosperity. Economic output continues growing while employment, wages, and social mobility decline for large population segments.[27]
Evidence supporting this concerning trajectory includes the limited employment recovery in sectors where AI adoption has advanced furthest:
- retail employment declined approximately 15% between 2015 and 2023 as algorithmic inventory, customer service, and logistics systems reduced labor requirements
- similar patterns appeared in financial services, customer support, and routine cognitive work across industries
These trends suggest what several economists call “the replacement threshold”: the point at which technological capabilities substitute for rather than complement human labor across a broadening range of occupations.[28]
Path 3: the sovereignty transfer
The third and perhaps most consequential trajectory is what philosopher Nick Bostrom describes as “the sovereignty transfer”, where algorithmic systems gradually assume decision-making authority across domains traditionally governed by human judgment, fundamentally transforming who or what controls humanity’s future. In this scenario, the relevant question becomes not whether America rises or declines relative to other nations, but whether human decision-making itself remains the foundation of governance and social organization.[29]
Evidence suggesting this profound transition includes the expanding domains where algorithmic recommendations effectively function as decisions:
- content moderation algorithms determine what speech reaches audiences without meaningful human review in over 98% of cases
- financial algorithms approve or deny loans, insurance, and employment opportunities with limited human oversight
- medical diagnostic systems increasingly determine treatment pathways that physicians formally approve but rarely override
These developments suggest what governance scholar Karen Yeung calls “the algorithmic decision threshold”—when human approval of machine recommendations becomes perfunctory rather than meaningful, effectively transferring authority while maintaining the appearance of human control.[30]
The governance challenge
America’s position at this historic inflection point presents unprecedented governance challenges. The technological capabilities accelerating the machine transition emerged primarily from American innovation. Yet the governance frameworks needed to ensure these technologies enhance rather than undermine human flourishing remain embryonic at best. This gap between technological capability and governance capacity creates “the responsibility asymmetry”, when humanity’s power to transform exceeds its wisdom in directing that transformation.[31]
Beyond the automation debate: what’s really at stake
The conventional debate about America’s technological future typically focuses on automation’s economic impacts: whether machines will create more jobs than they eliminate. While important, this framing profoundly understates what’s at stake. The deeper transformation involves not just what humans do but who or what makes the decisions that shape human society.
The historic inversion
Throughout history, humans have created tools that expanded our physical capabilities, from stone axes to steam engines to semiconductors. What distinguishes the current transition is the development of systems that potentially exceed our cognitive capabilities, the dimension that has defined human uniqueness for millennia. This shift represents what philosopher Nick Bostrom calls “the historic inversion”—when human intelligence is no longer necessarily the most sophisticated form of intelligence shaping Earth’s future.[32]
The micro-delegation problem
This transition manifests not through dramatic replacement but through the subtle aggregation of delegated decisions. Few people explicitly choose to transfer authority to algorithms. Instead, we accept recommendation systems that gradually shape our information environment, predictive tools that increasingly influence institutional decisions, and convenience features that subtly alter our behavioral choices. These micro-delegations collectively produce what technology ethicist James Williams calls “the attention capture effect”—when systems designed to serve human purposes incrementally redirect human behavior toward system priorities.[33]
The transparency asymmetry
What makes this delegation distinctive is what philosophers call “the transparency asymmetry.” When humans make decisions, even flawed ones, we can generally understand their reasoning and hold them accountable. Algorithmic systems increasingly function as “black boxes” whose operations exceed human comprehension. This asymmetry creates “the accountability gap”, when authority transfers to systems that cannot explain their judgments in human-comprehensible terms.[34]
More than jobs at stake
The economic impacts of this transition will certainly be profound. Research by labor economists suggests that 30-60% of current occupations could experience significant transformation or displacement in the coming decades. However, the deeper shift involves “the governance layer transfer”, when the systems determining societal priorities and resource allocation shift from visible human institutions to less visible algorithmic infrastructure.[35]
America’s responsibility
America stands at a crucial decision point in this transition. As the primary developer of these transformative technologies, it holds unprecedented responsibility for establishing governance frameworks that ensure algorithmic systems enhance rather than undermine human flourishing. This responsibility requires moving beyond simplistic narratives of technological determinism to recognize that the machine age’s impacts depend fundamentally on human choices about how these systems are designed, deployed, and regulated.[36]
Reclaiming human agency: finding a path forward
America’s acceleration into the machine age is neither inherently positive nor negative. Like previous technological revolutions, from agriculture to industry to computing, its impacts will depend on governance choices that direct technological power toward human flourishing rather than system optimization. What distinguishes the current transition is the potential for technology itself to influence those governance choices through increasingly sophisticated manipulation of human attention, emotion, and decision-making.
The meta crisis of human agency
This dynamic creates “the meta crisis of human agency”: a condition in which technologies designed to extend human capabilities begin to undermine the very agency they were meant to serve. The attention economy, powered by increasingly sophisticated algorithms, has demonstrated unprecedented capacity to capture and redirect human attention toward engagement metrics rather than human values. Social media platforms, streaming services, gaming environments, and, increasingly, workplace tools maximize “time on device” rather than meaningful human connection, creativity, or flourishing.[37]
Algorithmic influence expands
This agency crisis manifests across domains once considered immune to algorithmic influence:
- democratic processes increasingly operate through information environments optimized for engagement rather than civic understanding
- educational systems increasingly incorporate algorithmic tools designed for assessment efficiency rather than genuine learning
- even intimate relationships form increasingly through algorithmic matchmaking optimized for platform growth rather than human connection
These developments suggest we are heading toward “the third modernity”: an era in which human experience itself becomes raw material for algorithmic exploitation.[38]
Focal practices as resistance
Reclaiming agency requires deliberate cultivation of human capabilities and experiences that resist algorithmic mediation. Communities including the Amish and various tech-limitation movements demonstrate that technological adoption remains a choice rather than inevitable destiny. Their practices suggest what digital ethicist Jenny Odell calls “the attention reclamation”, deliberate strategies to maintain human rather than algorithmic determination of what deserves attention.[39]
Spaces of appearance
More fundamentally, addressing the machine transition requires “spaces of appearance”: contexts where humans engage directly with one another to collectively determine shared reality.
Democratic governance ultimately depends on citizens capable of forming judgments not entirely mediated by algorithmic filters. Education requires developing discernment that transcends pattern recognition. And human connection involves emotional presence beyond behavioral prediction.[40]
Technology is never neutral
America’s distinctive position in this transition creates both responsibility and opportunity. As the primary developer of these technologies, America holds unprecedented capacity to establish governance frameworks ensuring algorithms serve human flourishing rather than system optimization. This capacity requires recognizing the question concerning technology: understanding that technology is never neutral but always embodies specific values and priorities that shape its impacts.[41]
Human-aligned intelligence
The crucial insight is that machine intelligence need not develop in opposition to human intelligence. As cognitive scientist Douglas Hofstadter observes, “The relevant distinction is not between human and artificial intelligence, but between intelligence that respects human values and intelligence that doesn’t.” Creating AI systems that genuinely augment rather than undermine human flourishing requires what philosophers call a “capabilities approach”: designing technologies that expand human potential rather than restrict it.[42]
Governance for the Machine Age: new approaches needed
Effectively navigating America’s machine transition requires governance innovation commensurate with technological acceleration. Current regulatory frameworks developed for industrial-era challenges prove increasingly inadequate for algorithmic systems that blur traditional categories, evolve through operation, and influence human behavior through subtle rather than coercive mechanisms.
The categorical disruption problem
The first governance challenge involves what legal scholar Julie Cohen calls “the categorical disruption problem.” Existing regulatory structures depend on stable categories like “media,” “platform,” “publisher,” or “utility” that algorithmic systems transcend:
- Facebook functions simultaneously as communication infrastructure, media distributor, advertising platform, and surveillance system
- Google operates as search utility, advertising broker, cloud computing provider, and AI developer
These categorical disruptions create governance gaps where regulatory structures cannot encompass technological capabilities.[43]
The contextual integrity problem
The second governance challenge involves what philosopher Helen Nissenbaum calls “the contextual integrity problem.” Traditional privacy protections focus on data collection consent, but algorithmic harms increasingly emerge from unexpected data combinations and behavioral inferences that transcend the original context. A fitness app collecting location data seems innocuous until combined with menstrual tracking data to infer pregnancy and target maternity advertising, sometimes before users themselves know they’re pregnant. These contextual breaches create a privacy illusion: formal control over data collection offers minimal protection against algorithmic inference.[44]
The distributed responsibility problem
The third governance challenge involves what philosopher Deborah Johnson terms “the distributed responsibility problem.” When algorithmic systems cause harm, responsibility distributes across the developers creating the system, the companies deploying it, the users providing training data, and the oversight bodies failing to prevent harm. This distribution creates a responsibility gap in which algorithmic impacts have no clear accountable party, undermining democratic governance predicated on attributable responsibility.[45]
Moving beyond industrial-era regulation
Addressing these challenges requires governance innovation across multiple dimensions. Democratic oversight mechanisms must evolve beyond industrial-era regulatory models focused on specific harms toward dynamic governance: flexible frameworks that set desired outcomes while adapting to rapidly changing technologies.
Promising approaches include:
- algorithmic impact assessments modeled on environmental reviews
- circuit breakers that pause deployment when unexpected consequences emerge
- bounty programs incentivizing discovery of potential harms before deployment at scale[46]
The value loading problem
More fundamentally, effective governance requires addressing the value loading problem: how to ensure algorithmic systems embody democratic values rather than simply optimizing for engagement, profit, or efficiency. This challenge requires moving beyond technical solutions to technological politics, recognizing that technological design inherently embodies political choices about whose interests systems serve.[47]
America’s special responsibility
America’s position at the leading edge of the machine transition creates both special responsibility and opportunity. As these technologies increasingly influence global governance, America’s choices about algorithmic regulation will disproportionately shape humanity’s future. This responsibility requires what political philosopher Michael Sandel calls “the innovation-values nexus”: ensuring technological capabilities develop in alignment with democratic values rather than undermining them.[48]
The human question
As America accelerates into the machine age, the most profound question is not whether technology will eliminate jobs, increase productivity, or shift geopolitical power. The deeper question is whether human judgment, with all its flaws and limitations, remains the foundation of social organization or gradually transfers to systems optimized for efficiency rather than meaning.
A historic shift in authority
Throughout history, technological revolutions have transformed what humans do without fundamentally altering who makes decisions that matter:
- agricultural technologies changed how humans produced food but preserved human control over cultivation
- industrial technologies transformed production methods but maintained human direction of economic processes
- information technologies altered communication patterns while preserving human content creation and curation[49]
The machine age potentially reverses this pattern. Algorithmic systems increasingly determine what information we encounter, which opportunities we receive, how resources get allocated, and even what reality itself appears to be. This transition represents what philosopher Luciano Floridi calls “the fourth revolution”: a moment when humanity’s self-conception as the primary intelligence shaping Earth’s future requires fundamental reconsideration.[50]
Misaligned objectives, not malevolence
This transition need not manifest as conflict between human and machine intelligence. The relevant distinction is not between intelligence forms but between systems serving human flourishing and those optimizing for other objectives. As AI researcher Stuart Russell observes, “The risk from AI comes not from malevolence or consciousness but from misaligned objectives, when systems competently pursue goals divergent from human values.”[51]
Technology that influences our response to it
America’s distinctive position in this transition creates unprecedented responsibility. The algorithmic systems reshaping humanity’s relationship with technology emerged primarily from American innovation. Whether these systems enhance or undermine human flourishing depends substantially on governance frameworks America establishes, frameworks that could either protect human agency or accelerate its erosion.
What distinguishes this technological revolution is the potential for technology itself to influence how humanity responds to it. Previous transformations, from agriculture to industry, maintained human agency in determining social adaptation. The machine age uniquely features technologies specifically designed to influence attention, emotion, and decision-making, potentially undermining the very agency needed to govern them effectively.[52]
The human condition vs. human nature
Perhaps the most profound insight comes from philosopher Hannah Arendt, who observed that “the human condition is not the same as human nature.” Our technological capabilities continually transform the conditions under which we live without necessarily changing what makes human experience meaningful: connection, purpose, agency, and understanding. The challenge of the machine age is not preventing technological advancement but ensuring it enhances rather than diminishes these fundamental aspects of human flourishing.[53]
The most consequential choice
As America navigates this historic transition, the crucial question becomes not whether machines will outperform humans in specific tasks (they increasingly will), but whether human judgment remains the foundation of governance and social organization. This question transcends traditional political divisions, economic competition, and even geopolitical rivalry. It concerns humanity’s collective future on a planet increasingly shaped by intelligence potentially exceeding our own.
The optimistic vision sees machine intelligence as complementary to human intelligence, enhancing our capabilities while remaining guided by human values. The pessimistic vision sees human judgment gradually displaced by systems optimizing for efficiency, profit, or power rather than meaning or flourishing. Which vision manifests depends less on technological capability than on governance choices establishing boundaries between decisions humans must make and those we can safely delegate.
As historian Yuval Harari observes, “For the first time in history, we face the possibility that the most consequential choice humanity makes is whether to continue making our own choices.” America’s position at the forefront of this transition creates both exceptional responsibility and opportunity to ensure the machine age enhances human potential rather than diminishing it. This responsibility transcends economic competition or geopolitical advantage. It concerns what kind of intelligence ultimately shapes Earth’s future.[54]
References
[1] Brynjolfsson, Erik and McAfee, Andrew. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. (W.W. Norton, 2014), pp. 15-42.
[2] Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow. (Harper, 2017), p. 397.
[3] Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. (Oxford University Press, 2014), pp. 73-95.
[4] Angwin, Julia. Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance. (Times Books, 2014), pp. 132-157.
[5] Howard, Philip N. Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up. (Yale University Press, 2015), pp. 201-226.
[6] Tufekci, Zeynep. Twitter and Tear Gas: The Power and Fragility of Networked Protest. (Yale University Press, 2017), pp. 261-277.
[7] Crawford, Susan. “The AI Executive Order and Algorithmic Governance.” The New Yorker. (November 3, 2023).
[8] Winner, Langdon. The Whale and the Reactor: A Search for Limits in an Age of High Technology. (University of Chicago Press, 1986), p. 173.
[9] Arthur, W. Brian. The Nature of Technology: What It Is and How It Evolves. (Free Press, 2009), pp. 163-183.
[10] Brownstein, John S. et al. “Surveillance Without Borders: The Rise of Digital Disease Surveillance in the COVID-19 Era.” Annual Review of Public Health 43 (2022): pp. 397-419.
[11] Hutson, Matthew. “Artificial Intelligence Faces a Reproducibility Crisis.” Science 375, no. 6576 (2022): pp. 14-15.
[12] Young, Kaliya. “Understanding Digital Identity Systems: Opportunities and Challenges for Justice.” Data & Society (May 2023).
[13] Selinger, Evan. “The Age of Algorithmic Authority.” Boston Review. (April 12, 2022).
[14] Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. (PublicAffairs, 2019), pp. 8-12.
[15] Yeung, Karen. “‘Hypernudge’: Big Data as a Mode of Regulation by Design.” Information, Communication & Society 20, no. 1 (2017): pp. 118-136.
[16] Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. (Viking, 2019), pp. 101-126.
[17] Pasquale, Frank. New Laws of Robotics: Defending Human Expertise in the Age of AI. (Harvard University Press, 2020), pp. 78-103.
[18] Winner, Langdon. Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. (MIT Press, 1977), p. 214.
[19] Rossi, Francesca. “Building Trust in Artificial Intelligence.” Journal of International Affairs 72, no. 1 (2018): pp. 127-134.
[20] Diakopoulos, Nick. Automating the News: How Algorithms Are Rewriting the Media. (Harvard University Press, 2019), pp. 189-215.
[21] Chalmers, David J. “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies 17, no. 9-10 (2010): pp. 7-65.
[22] Lanier, Jaron. Ten Arguments for Deleting Your Social Media Accounts Right Now. (Henry Holt and Co., 2018), pp. 71-93.
[23] Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. (W.W. Norton, 2018), pp. 242-271.
[24] Russell, Stuart and Norvig, Peter. Artificial Intelligence: A Modern Approach. 4th ed. (Pearson, 2020), pp. 32-54.
[25] Kelly, Kevin. The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future. (Viking, 2016), pp. 251-276.
[26] Brynjolfsson, Erik, Rock, Daniel, and Syverson, Chad. “The Productivity J-Curve: How Intangibles Complement General Purpose Technologies.” American Economic Journal: Macroeconomics 13, no. 1 (2021): pp. 333-372.
[27] Acemoglu, Daron and Restrepo, Pascual. “Automation and New Tasks: How Technology Displaces and Reinstates Labor.” Journal of Economic Perspectives 33, no. 2 (2019): pp. 3-30.
[28] Autor, David and Salomons, Anna. “Is Automation Labor-Displacing? Productivity Growth, Employment, and the Labor Share.” Brookings Papers on Economic Activity (Spring 2018): pp. 1-63.
[29] Bostrom, Nick. “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” Minds and Machines 22, no. 2 (2012): pp. 71-85.
[30] Yeung, Karen. “Algorithmic Regulation: A Critical Interrogation.” Regulation & Governance 12, no. 4 (2018): pp. 505-523.
[31] Jonas, Hans. The Imperative of Responsibility: In Search of an Ethics for the Technological Age. (University of Chicago Press, 1984), pp. 121-139.
[32] Bostrom, Nick. “Ethical Issues in Advanced Artificial Intelligence.” In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, ed. Iva Smit. (International Institute of Advanced Studies in Systems Research and Cybernetics, 2003), pp. 12-17.
[33] Williams, James. Stand Out of Our Light: Freedom and Resistance in the Attention Economy. (Cambridge University Press, 2018), pp. 45-67.
[34] Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. (Harvard University Press, 2015), pp. 140-165.
[35] Bratton, Benjamin H. The Stack: On Software and Sovereignty. (MIT Press, 2016), pp. 251-279.
[36] Feenberg, Andrew. Questioning Technology. (Routledge, 1999), pp. 131-147.
[37] Williams, James. “Technology and the Attention Economy.” In The Oxford Handbook of Ethics of AI, eds. Markus D. Dubber, Frank Pasquale, and Sunit Das. (Oxford University Press, 2020), pp. 423-442.
[38] Zuboff, Shoshana. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” Journal of Information Technology 30, no. 1 (2015): pp. 75-89.
[39] Odell, Jenny. How to Do Nothing: Resisting the Attention Economy. (Melville House, 2019), pp. 62-89.
[40] Arendt, Hannah. The Human Condition. (University of Chicago Press, 1958), pp. 175-197.
[41] Heidegger, Martin. “The Question Concerning Technology.” In The Question Concerning Technology and Other Essays, trans. William Lovitt. (Harper & Row, 1977), pp. 3-35.
[42] Hofstadter, Douglas R. and Sander, Emmanuel. Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. (Basic Books, 2013), p. 423.
[43] Cohen, Julie E. Between Truth and Power: The Legal Constructions of Informational Capitalism. (Oxford University Press, 2019), pp. 73-96.
[44] Nissenbaum, Helen. Privacy in Context: Technology, Policy, and the Integrity of Social Life. (Stanford Law Books, 2009), pp. 127-157.
[45] Elish, Madeleine Clare. “Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction.” Engaging Science, Technology, and Society 5 (2019): pp. 40-60.
[46] Hadfield, Gillian K. Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy. (Oxford University Press, 2016), pp. 289-317.
[47] Bostrom, Nick and Yudkowsky, Eliezer. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, eds. Keith Frankish and William M. Ramsey. (Cambridge University Press, 2014), pp. 316-334.
[48] Sandel, Michael J. What Money Can’t Buy: The Moral Limits of Markets. (Farrar, Straus and Giroux, 2012), pp. 201-223.
[49] Cowen, Tyler. Average Is Over: Powering America Beyond the Age of the Great Stagnation. (Dutton, 2013), pp. 135-159.
[50] Floridi, Luciano. The Fourth Revolution: How the Infosphere is Reshaping Human Reality. (Oxford University Press, 2014), pp. 87-106.
[51] Russell, Stuart. “Human Compatible: AI and the Problem of Control.” Lecture at the Centre for the Study of Existential Risk, University of Cambridge. (October 16, 2019).
[52] Christian, Brian. The Alignment Problem: Machine Learning and Human Values. (W.W. Norton, 2020), pp. 291-317.
[53] Arendt, Hannah. The Human Condition. (University of Chicago Press, 1958), p. 9.
[54] Harari, Yuval Noah. 21 Lessons for the 21st Century. (Spiegel & Grau, 2018), p. 73.