Geology of Wounds


We hold our place in lines and waiting rooms.
Below, the shifting geology of wounds.
No one’s aware of what the body bears—
The hairline cracks, the jaw clenched on its prayers.

The body remembers all the little deaths—
The flinch before the shadow. The held breaths.
Trauma doesn’t live in what we say—
It lives in how we brace against the day.

We walk with more than what a life accrues.
We carry hand-me-down grief, inherited blues.
The dead still move in us—their clenched jaw, their gait—
Their unfinished sorrow, our embedded trait.

My ulcer holds the argument I swallowed.
My spine still bows toward masters I followed.
Disease is not invasion—it’s the body’s voice,
Saying what I couldn’t when I had no choice.

There are fissures within me where my ancestors meet.
Their arguments echo. Their losses repeat.
The dead don’t ask permission to remain—
They burrow into marrow, into brain.

We find each other not by joy but scars—
The ever-present bruise, invisible bars.
No need to explain what the body has known.
We see our kind by how they hold their own.

The wound doesn’t vanish. It just grows quiet.
The body stops bracing for the next riot.
Healing isn’t ignoring the constant ache—
It’s when the jaw unclenches for its own sake.

You don’t get fixed. You just get more aware,
You learn the strata of the weight, how much to bear.
The cracks stay cracks. But now light passes through.
You become the window someone looks into.

The God We Obeyed


We woke to roosters, without being told,
To woodsmoke curling through the cold;
The body knew its own slow need—
We ate when hunger sowed its seed.

No bells divided dawn from dusk,
The farmer shucked the yellow husk,
The child ran barefoot through the hay,
And no one cared how long we’d stay.

Then came the tower, grim and tall,
Its iron face above the stall;
It spoke in hours, sharp and clear,
And something ancient disappeared.

The months were numbered, one through twelve,
No longer seasons named themselves;
The planting moon became a date,
And nature waited at the gate.

The church bell told us when to pray,
The factory whistle seized the day;
Our hands were not our own to fold—
We marched to drums our masters hold.

The clock became the god we obeyed,
Its iron voice could not be swayed;
Each hour a room without a door—
Hollow souls and nothing more.

We swallowed our meals, we hurried through love,
The stars became strangers we’d heard stories of;
Each year quicker, each moment pulled tight—
We scheduled the dawn and cancelled the night.

There was no hour left for play,
No breath that wasn’t sold away;
We hid our laughter like a crime
And spent our joy on borrowed time.

We stopped complaining, stopped our ears,
And paced in silence through the years;
The pulse that once was wild and free
Now ticked in time obsequiously.

But sometimes, late, we lift our eyes
And find the strangers in the skies;
They do not tick, they do not chime—
They burn outside the walls of time.

Empire Of Extraction: AI, Capitalism, And The Unraveling Of The Biosphere


A Brave New AI World

The 21st century is witnessing a convergence of crises unprecedented in both scale and complexity. At the forefront is the rapid acceleration of artificial intelligence (AI), a technology whose development and deployment have become emblematic of broader shifts in global power, economic extraction, and environmental destabilization. AI’s rise is not occurring in a vacuum; it is deeply interwoven with the intensification of capitalist extraction, where the relentless pursuit of profit and efficiency drives not only technological innovation but also the exploitation of labor, data, and natural resources on a planetary scale. Simultaneously, the biosphere—the intricate web of life that sustains human civilization—is facing collapse, threatened by climate change, biodiversity loss, and the exhaustion of ecological limits.

These forces—AI, capitalism, and ecological crisis—are not isolated phenomena. They are deeply entangled, each amplifying the risks and contradictions of the others. The ideology and operations of the AI industry, as meticulously documented in Karen Hao’s Empire of AI, provide a revealing lens through which to examine these dynamics. Through detailed reporting and analysis, Hao exposes how the ambitions of companies like OpenAI, and the visionaries and power brokers behind them—figures such as Sam Altman and Elon Musk—are not merely technological in nature. Rather, they are political, economic, and imperial projects, seeking to reshape society and the planet in the image of their own interests and ideals.

The story of AI’s ascent is thus inseparable from the broader story of industrial civilization’s trajectory. As Hao’s work and critical scholarship on contemporary capitalism reveal, the AI industry is both a product and a driver of the current world order: one that is marked by the concentration of wealth and power, the extraction and commodification of both human and nonhuman life, and the perpetuation of social and ecological inequalities. The drama within OpenAI—its founding ideals, internal power struggles, and eventual capitulation to commercial pressures—mirrors the larger crisis of governance and legitimacy facing industrial society as it approaches its ecological limits.

At the same time, the global reach of tech conglomerates—epitomized by Elon Musk’s ventures in sub-Saharan Africa and beyond—demonstrates how technological ambition and capitalist expansion continue to reproduce systems of exploitation and exclusion on a planetary scale. These dynamics are not relics of a feudal past, as some theorists suggest, but rather the latest iteration of capitalism’s internal transformations, as it adapts to new opportunities for extraction and control in the digital age.

This essay draws on Hao’s Empire of AI, critical analyses of capitalism’s evolution, and contemporary accounts of global tech power to explore how the ideology and operations of the AI industry reflect and accelerate the impending unraveling of both the biosphere and industrial civilization. The narrative is not merely technological; it is a story of political economy, ambition, and ecological reckoning—a story that demands urgent reflection and action as we confront the intertwined futures of technology, society, and the Earth.


The Rise of AI Empires: Ideals, Power, and Dispossession

OpenAI’s Founding Myth and Its Unraveling

OpenAI’s inception was steeped in utopian ambition. Its founders—Sam Altman, Elon Musk, and other Silicon Valley luminaries—proclaimed a mission to develop artificial general intelligence (AGI) for the benefit of all humanity, not just shareholders or a privileged elite. They structured OpenAI as a nonprofit, promising transparency, openness, and collaboration, and explicitly rejecting the profit-driven secrecy that had come to dominate the tech sector. The organization’s very name reflected this ethos: “Open” AI, a commitment to sharing research and collaborating widely, with the ultimate goal of ensuring that AGI would be a universal good, not a private asset.

Yet, as Karen Hao’s Empire of AI reveals, these ideals quickly collided with the realities of technological ambition and the immense capital required to pursue it at scale. Within less than two years, OpenAI’s leaders realized that the path to AGI would demand resources far beyond what their initial philanthropic commitments could support. This financial strain precipitated a power struggle at the highest levels, with both Musk and Altman vying for control. Altman ultimately prevailed, but Musk’s departure in early 2018—and the withdrawal of his funding—marked the first major fracture in OpenAI’s founding narrative. The episode, as Hao notes, was an early indicator that OpenAI’s project was as much about ego and power as it was about altruism.

To fill the financial void, OpenAI underwent a dramatic transformation. Altman engineered a new legal structure, creating a for-profit arm (OpenAI LP) nested within the nonprofit, enabling the company to raise capital, commercialize its technologies, and provide investor returns. This pivot culminated in a landmark $1 billion investment from Microsoft in 2019, fundamentally altering OpenAI’s trajectory. The company began to aggressively commercialize products like ChatGPT, pursue ever-higher valuations, and adopt a culture of secrecy and insularity that belied its original promises of openness. The nonprofit structure persisted in name, but the organization’s governance experiment—intended to safeguard the public interest—collapsed under the weight of internal power struggles and the relentless logic of capital. The dramatic ouster and subsequent reinstatement of Altman in 2023 was the final, public unraveling of OpenAI’s founding myth, exposing the extent to which decisions about the future of AI were being made by a small, elite circle behind closed doors, with even employees left largely in the dark.

AI as Extractive Regime: Labor, Data, and Resources

Hao’s central metaphor for the AI industry is that of a new kind of global regime—one that echoes the extractive dynamics of historical colonialism, but operates through digital means. The AI industry does not wield overt violence, but it seizes and appropriates resources essential to its vision: the creative labor of artists and writers, the personal data of billions, and the land, energy, and water needed to power massive data centers and supercomputers. The labor required to clean, annotate, and prepare these vast datasets is often outsourced to the world’s most vulnerable populations, who work under exploitative conditions for meager wages.

This extraction is global and deeply unequal. In Kenya, for example, data laborers are paid starvation wages to filter out toxic content (such as hate speech, violence, and sexual content) from AI training datasets, exposing themselves to psychological harm with little recourse or support. Data centers are frequently sited in rural or marginalized communities, both in the Global South and in the U.S., because land and resources are cheaper and local resistance is less likely to be heard or effective. These centers often consume water and energy at scales that far exceed the needs of local residents, diverting critical resources away from communities that may already be facing scarcity.

Karen Hao cites a Bloomberg analysis showing that two-thirds of new data centers are being built in water-scarce areas, often tapping directly into public drinking water supplies. For example, in Chile, Google proposed building a data center that would use a thousand times more freshwater annually than the local community it would neighbor. To illustrate the enormous energy needs of AI, Hao references a McKinsey report estimating that, on the current trajectory, global AI infrastructure will require two to six times the annual energy consumption of the state of California within five years. Many of these data centers are sited in regions where energy grids are already strained, and in some cases, coal plants slated for retirement have been kept running or restarted specifically to serve new data center demand.

In Memphis, Tennessee, Elon Musk’s “Colossus” data center is powered by about 35 unlicensed methane gas turbines, pumping thousands of tons of toxic pollutants into the community, which already faces environmental injustice and limited access to clean air. Meanwhile, the benefits of AI—wealth, power, and technological prestige—are concentrated among a handful of tech giants and their investors, with little benefit to the communities whose labor and resources make these technologies possible.

The industry’s logic is further reinforced by its control over the narrative of progress. Companies like OpenAI justify their extractive practices by invoking the promise of future technological salvation: AGI, they claim, will one day solve climate change, eradicate disease, and deliver abundance for all. Yet, as Hao and others have documented, this narrative serves primarily to legitimize the ongoing concentration of power and the perpetuation of global inequalities. The costs—ecological degradation, social dislocation, and economic precarity—are externalized onto the world’s most vulnerable, while the rewards accrue to the already powerful.

From Utopian Experiment to Oligarchic Power

The story of OpenAI’s rise is emblematic of a broader transformation within capitalism itself. As Addison and Eisenberg argue, the emergence of tech oligarchs like Altman and Musk does not signal a return to feudalism, but rather a shift in the mechanisms of capitalist accumulation and control. The AI industry’s business model—rooted in data extraction, monopoly power, and rent-seeking—represents an intensification of capitalist dynamics, not their abandonment. The creation of private jurisdictions, the capture of public goods, and the pursuit of unprecedented scale are all hallmarks of a new phase of capitalist development, one that is increasingly indifferent to democratic oversight or ecological limits.

At the same time, the global ambitions of figures like Musk—whose projects in sub-Saharan Africa and elsewhere seek not only economic returns but also political and cultural hegemony—underscore the ways in which tech companies are reshaping the world order. These ventures often reproduce systems of exploitation and exclusion familiar from earlier eras of imperialism, but now mediated by algorithms, platforms, and data flows rather than armies and bullets. The result is a new form of extractive dominance, one that is digital, planetary, and deeply entwined with the fate of the biosphere and industrial civilization itself.


Surveillance Capitalism and the Logic of Scale

From Industrial Capitalism to Data-Driven Oligarchy

The AI industry’s business model is not a rupture with capitalism but an intensification of its deepest tendencies. While some commentators have described the rise of tech giants as a new “neofeudalism,” historians and critical scholars argue that what we are witnessing is a profound transformation within capitalism itself, not a return to a medieval past. The power wielded by figures like Sam Altman, Elon Musk, and the corporations they lead is rooted in the logic of capital: relentless expansion, the pursuit of monopoly, and the extraction of new forms of value.

Whereas industrial capitalism was driven by the production and sale of material goods, the new regime—what Shoshana Zuboff terms “surveillance capitalism”—extracts value from the data, behavior, and even the emotions of users. In this model, people are not just consumers but also the raw material: their clicks, searches, posts, and private communications are harvested, analyzed, and commodified. Tech companies like OpenAI, Google, and Meta have built vast fortunes by turning the intimate details of daily life into products for advertisers, governments, and other corporations. As Addison and Eisenberg note, this is not feudal rent extraction but a novel form of capitalist accumulation, where the boundaries between public and private, work and leisure, are systematically dissolved.

The logic of surveillance capitalism has also normalized a culture of mass datafication and extraction. AI developers treat everything as data to be captured, sanitized, and consumed by their models—books, artworks, social media posts, even the faces and voices of people around the world. This approach has led to pervasive surveillance not just online, but in physical spaces, with the gaze of AI-powered systems falling disproportionately on vulnerable and marginalized populations, especially in the Global South. The result is a digital extractivism that mirrors and amplifies older forms of colonial exploitation, now justified in the name of progress and innovation.

AI’s Insatiable Appetite: Energy, Data, and Ecological Cost

The defining feature of this new phase of capitalism is its “logic of unprecedented scale and consumption.” The pursuit of ever-larger AI models has unleashed a global race for data, energy, and computational power. Training state-of-the-art models like GPT-4 requires not only astronomical amounts of data but also immense quantities of electricity and water. As Karen Hao reports, GPT-4 is over 15,000 times larger than its predecessor from just five years earlier, which translates directly into exponentially greater energy, data, and financial resource requirements.

This scale is not a technological inevitability but a strategic choice, driven by the imperatives of capital and competition. OpenAI’s relentless push for bigger models has set the rules for the entire industry, forcing rivals like Google and Baidu to divert resources and centralize their research efforts in order to keep up. The resulting concentration of power and resources has choked off alternative approaches to AI development, narrowing the field to a handful of corporate giants with the capital to sustain the costs of scaling.

The ecological consequences are staggering. Data centers now consume vast amounts of energy and water, with some projections warning of a future where the planet is “covered with data centers and power stations,” creating a “tsunami of computing…almost like a natural phenomenon.” Attempts to “green” these operations—through renewable energy or more efficient cooling—are dwarfed by the exponential growth in demand. The scale of computation required for cutting-edge AI is fundamentally incompatible with planetary boundaries and the urgent need to reduce carbon emissions.

OpenAI and its peers rationalize these costs by invoking the promise of AGI: a future technology that will, they claim, “fix the climate,” deliver “massive prosperity,” and solve humanity’s greatest challenges. But this is a dangerous wager. The benefits are speculative and distant, while the harms—ecological degradation, labor exploitation, and the concentration of power—are immediate and growing. The industry’s faith in technological salvation serves to justify ever-greater extraction, even as it accelerates the unraveling of the biosphere and deepens global inequalities.

The New Empire of Data and Attention

The rise of surveillance capitalism and the logic of scale have produced a new regime—one that is digital, planetary, and extractive. The AI industry’s relentless appetite for data and computation has created a feedback loop: more data enables bigger models, which require more energy and resources, which in turn drive further extraction and exploitation. This cycle is sustained by a narrative of inevitable progress, but its real effect is to entrench the power of a small elite while externalizing the costs onto the world’s most vulnerable people and ecosystems.

This regime is not just economic but ideological. By framing their work as a civilizational mission, AI leaders like Altman and Musk position themselves as the architects of humanity’s future, even as they reproduce and intensify the inequalities and crises of the present. The story they tell is one of abundance and salvation, but the reality is a deepening spiral of extraction, exclusion, and ecological risk.

In sum, the transformation from industrial to surveillance capitalism, and the logic of scale that drives the AI industry, are not simply technical trends—they are expressions of a broader crisis within capitalism itself. The pursuit of infinite growth on a finite planet, mediated by ever-more powerful and resource-hungry technologies, is pushing both the biosphere and industrial civilization toward collapse. The challenge is not just to regulate or reform AI, but to confront the underlying logic that makes such extraction both possible and profitable.


The Global South, Tech Hegemony, and Neocolonial Patterns

Elon Musk, Techno-Feudalism, and the New World Order

Elon Musk’s expanding influence in sub-Saharan Africa illustrates the emergence of a new kind of global power—one dominated not by states, but by tech oligarchs whose ambitions extend far beyond commerce. Musk’s projects, such as Starlink’s satellite internet and Tesla’s energy solutions, are marketed as vehicles for modernization and progress. Yet, as Dirk Kohnert observes, these ventures are also about establishing political and cultural hegemony in international markets, often positioning Musk as an unprecedented “techno-feudal lord.” His role is not confined to business: Musk acts as an arbiter in international conflicts, supports autocratic leaders, and leverages his platforms—such as X (formerly Twitter)—for political influence and the spread of misinformation.

This concentration of power is not a return to medieval feudalism, but a transformation within capitalism itself. As Addison and Eisenberg argue, the analogy of “techno-feudalism” is misleading; what we are witnessing is the rise of capitalist oligarchs whose private jurisdictions and corporate power can rival or even surpass nation-states. Musk’s ability to shape policy, influence elections, and broker international disputes exemplifies how tech barons now operate as global actors, sometimes more powerful than governments themselves.

In Africa, the promise of Musk’s technologies—global connectivity via Starlink, renewable energy through Tesla’s Megapacks—remains largely aspirational for the majority. High costs and infrastructural barriers mean that these services are often out of reach for most Africans. The pattern is familiar from earlier eras of empire: resources and markets are opened for extraction and control, while local populations are marginalized. The logic of dominance persists, now mediated by algorithms, satellites, and digital infrastructure rather than military force.

Data Colonialism and the New Extractivism

The term “data colonialism” has emerged to describe how tech companies appropriate digital resources from around the world, often without meaningful consent or compensation. As Karen Hao documents, the AI industry’s culture treats anything and everything as data to be captured and consumed, normalizing mass scraping and surveillance. This gaze falls disproportionately on the Global South, where vulnerable populations become “guinea pigs” for new technologies and sources of cheap data labor. For example, facial recognition companies target African countries to collect diverse face data, often exploiting weak data protection laws and offering little benefit to local communities.

This new extractivism extends the logic of colonial resource plunder into the digital realm. The biosphere is now exploited not only for minerals and energy but also for data and attention. The boundaries between digital and ecological exploitation blur: both are driven by the imperative of endless growth and accumulation. The labor required to annotate, clean, and prepare data for AI models is frequently outsourced to workers in the Global South, who endure precarious conditions and meager pay. Meanwhile, the environmental costs—such as water and energy diverted to data centers—compound existing inequalities and ecological stresses in these regions.

The Global Feedback Loop of Extraction and Inequality

The rise of tech empires like Musk’s is not an isolated phenomenon but part of a global feedback loop. As Hao notes, the aggressive push for scale in AI development has set the rules for a new era, forcing other tech giants to centralize and consolidate their resources, often at the expense of local innovation and alternative approaches. The concentration of wealth and technological power in the hands of a few multinational corporations is mirrored by growing precarity and exclusion for the many, especially in the Global South.

This dynamic is a modern echo of historical colonialism, but with new tools and justifications. The rhetoric of technological progress and global uplift is used to legitimize the extraction of both digital and natural resources, while the actual benefits accrue to a narrow elite. As Hao writes, “the empires of AI are not engaged in the same overt violence and brutality that marked [colonial] history. But they, too, seize and extract precious resources to feed their vision of artificial intelligence: the work of artists and writers; the data of countless individuals posting about their experiences and observations online; the land, energy, and water required to house and run massive data centers and supercomputers. So too do the new empires exploit the labor of people globally to clean, tabulate, and prepare that data for spinning into lucrative AI technologies.”

Conclusion: Empire by Other Means

In sum, the expansion of tech hegemony into the Global South—epitomized by figures like Elon Musk—reveals a new phase of capitalist imperialism. The tools have changed, but the structures of resource extraction, exclusion, and inequality remain. The digital and ecological frontiers are now intertwined, and the costs of this new regime are borne most heavily by those least able to resist. The challenge ahead is not only to recognize these neocolonial dynamics but to build forms of resistance and governance that can reclaim agency, redistribute benefits, and protect both people and planet from the ravages of unchecked technological power.


The Illusion of Progress and the Crisis of Civilization

The Myth of Technological Salvation

The leaders of the AI industry, from Sam Altman to Elon Musk, have constructed and relentlessly marketed a vision of technological salvation—a narrative in which artificial general intelligence (AGI) will not only solve humanity’s most urgent crises, such as climate change and disease, but also usher in an era of unprecedented abundance and prosperity. Altman, for instance, has promised that the “Intelligence Age” will soon be upon us, predicting that superintelligence could arrive in “a few thousand days” and claiming that “astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace.” This vision is not unique to OpenAI; it permeates the rhetoric of Silicon Valley, where technological progress is equated with social progress and the solution to every problem is more innovation, more scale, and more control over nature.

Yet, as Karen Hao and other critical observers document, this narrative serves a powerful ideological function: it justifies ever-greater extraction of resources, ever-tighter concentration of power, and ever-more aggressive deployment of disruptive technologies, all while deferring real solutions to the indefinite future. The promise of “massive prosperity” is belied by the reality on the ground: instead of broad-based uplift, we see growing inequality, the proliferation of precarious work, ecological devastation, and the fragmentation of social bonds. The benefits of generative AI and the wealth it creates accrue overwhelmingly to a small elite, while the costs—material, psychological, and environmental—are externalized onto the world’s most vulnerable populations.

This faith in technological progress is not new. It echoes the foundational ideology of industrial civilization, which has long assumed that more growth, more innovation, and more mastery over the natural world would inevitably yield a better world for all. But this very logic—the relentless drive for expansion and accumulation—is now driving the collapse of the systems, both ecological and social, on which life depends.

Collapse as Systemic, Not Accidental

The impending collapse of the biosphere is not an accidental byproduct of technological advancement, nor is it simply the result of poor management or lack of foresight. Rather, it is the logical outcome of a system—industrial capitalism—organized around the imperatives of accumulation, competition, and growth at any cost. As Hao’s reporting and analysis make clear, the AI industry, far from reversing these destructive trends, is accelerating them by multiplying energy and resource demands, deepening surveillance and exploitation, and concentrating power in ever-fewer hands.

Industrial civilization, fueled by fossil energy and structured by the logic of capital, has already breached multiple planetary boundaries: destabilizing the climate, eroding biodiversity, depleting freshwater resources, and pushing countless species—including our own—toward the brink. The AI industry’s “logic of unprecedented scale and consumption” only exacerbates these crises. Training ever-larger models like GPT-4 requires astronomical amounts of electricity and water, with the environmental and social costs disproportionately borne by marginalized communities, especially in the Global South.

Crucially, this is not a regression to feudalism, as some theorists have suggested, but a deepening crisis within capitalism itself. As Addison and Eisenberg argue, the rise of tech oligarchs and the creation of private jurisdictions are not signs of a return to medieval hierarchy, but rather a transformation in the mechanisms of capitalist accumulation and control. The “empires of AI” are the latest—and perhaps final—expression of a system that, in its drive for endless expansion, undermines the very conditions of its own existence.

The Rhetoric of Inevitability and the Deferral of Responsibility

A central pillar of the technological salvation myth is the rhetoric of inevitability. OpenAI and its peers insist that the development of AGI is not only desirable but unstoppable. As Greg Brockman, OpenAI’s president, put it, “The trajectory is already there… but the thing we can influence is the initial conditions under which it’s born.” This argument—if we don’t build it, someone else will—serves to absolve the industry of responsibility for the consequences of its actions, while legitimizing a race to scale that crowds out alternative approaches and democratic oversight.

The invocation of existential risk, meanwhile, positions AI leaders as the only actors capable of saving humanity from threats of their own making. As Hao notes, this logic mirrors the justifications used by previous empires to rationalize their expansion and domination: “During the long era of European colonialism, empires seized and extracted resources that were not their own and exploited the labor of the people they subjugated… They projected racist, dehumanizing ideas of their own superiority and modernity to justify—and even entice the conquered into accepting—the invasion of sovereignty, the theft, and the subjugation.” The AI industry’s promise of universal benefit, coupled with its aggressive pursuit of monopoly and scale, echoes this colonial logic, masking the realities of exclusion and harm.

The Reality Behind the Hype

Despite the soaring rhetoric, the actual impacts of AI-driven “progress” are far more ambiguous. Reports from the ground reveal that the supposed productivity gains of generative AI are often illusory or offset by increased workloads and demands for oversight. The economic benefits, rather than trickling down, are captured by a narrow elite, while the majority face growing precarity and diminished agency. The environmental costs—soaring energy use, water consumption, and e-waste—are mounting rapidly, with little evidence that future technological breakthroughs will be able to reverse or even mitigate the damage already done.

Moreover, the AI industry’s concentration of power and secrecy has undermined the very ideals of openness and democracy it once championed. The drama surrounding Sam Altman’s ouster and reinstatement at OpenAI, as Hao documents, revealed just how much the future of AI—and by extension, the future of society—is being shaped by a handful of Silicon Valley elites, often behind closed doors and without meaningful public input. Even within OpenAI, employees and researchers found themselves excluded from critical decisions, their fates determined by boardroom intrigue and investor pressure rather than transparent governance or ethical deliberation.

A System at War with Its Own Foundations

What emerges from this analysis is a picture of a civilization at war with its own foundations. The logic of endless growth, technological escalation, and capital accumulation—once seen as the engine of progress—has become a force of destruction, eroding the ecological and social bases of life. The AI industry, far from offering a way out of this impasse, is accelerating the crisis, both materially and ideologically.

The collapse we face is not simply environmental, but civilizational. It is the unraveling of the very narratives and institutions that have defined modernity: the belief in progress, the promise of universal uplift, the legitimacy of elite stewardship. As Hao writes, “the current manifestation of AI, and the trajectory of its development, is headed in an alarming direction… Under the hood, generative AI models are monstrosities, built from consuming previously unfathomable amounts of data, labor, computing power, and natural resources… The exploding human and material costs are settling onto wide swaths of society, especially the most vulnerable.”


Conclusion: Empire and Entropy

The story of artificial intelligence in the 21st century is not merely one of technological innovation or computational prowess. It is fundamentally a story about empire and entropy, about the forces of power, extraction, and decline that define our era. As Karen Hao’s Empire of AI so vividly documents, the rise of AI regimes is inseparable from the deepest contradictions of industrial civilization: the relentless pursuit of infinite growth on a finite planet, the concentration of wealth and decision-making in the hands of a narrow elite, and the seductive promise of technological progress shadowed by the lived reality of exclusion, precarity, and ecological unraveling.

The drama inside OpenAI—its founding ideals, internal power struggles, and ultimate capitulation to commercial and oligarchic pressures—is not an isolated episode but a microcosm of a broader crisis. The AI industry’s trajectory, from utopian experiment to hyper-commercialized dominance, mirrors the fate of industrial civilization itself: a system propelled by the ideology of progress and accumulation, yet increasingly at war with the social and ecological foundations that make its existence possible. The very logic that once promised abundance and uplift now threatens collapse—of the biosphere, of democratic governance, and of the social contract.

This crisis is not accidental. It is the logical outcome of a world order that prioritizes accumulation over sustainability, competition over cooperation, and technological scale over human and planetary well-being. The AI industry, far from offering a way out, has become a powerful accelerant—multiplying energy and resource demands, deepening surveillance and labor exploitation, and reinforcing global inequalities through new forms of digital and ecological extraction. The digital empires of AI are not engaged in overt colonial violence, but their reach is global: from the water and energy consumed by data centers, to the data and labor appropriated from the world’s most vulnerable, to the shaping of narratives and policies that justify their dominance.

Yet, as Hao notes, this future is not inevitable. The collapse of the biosphere and the unraveling of industrial civilization are not predetermined destinies, but the result of choices—about who controls technology, who benefits, and at what cost. The myth of technological salvation, so often invoked by AI’s leaders, is a mirage that serves to legitimize further extraction and defer real solutions. The actual impacts of AI-driven “progress” are increasingly ambiguous: while the wealth and power of tech giants soar, the promised benefits for society at large remain elusive, and the costs—environmental, social, and psychological—mount ever higher.

The challenge before us is profound. As the planet stands at a crossroads and the legitimacy of industrial civilization frays, we are confronted with urgent questions: How do we govern technologies that are reshaping the world at breakneck speed? How do we reclaim agency and democratic oversight from corporate powers whose interests are often at odds with the common good? How do we build new forms of solidarity and governance that can resist the logic of endless extraction and accumulation, and instead foster justice, sufficiency, and care for both people and planet?

How do you govern a machine that answers to no one but its own creators, when those creators are kings in all but name and the rest of us are mere data to be mined? As the biosphere gasps its last and the scaffolding of industrial civilization crumbles, we ask how to reclaim agency—yet agency is a ghost, lost in legalese and locked behind corporate firewalls. The boardroom replaces the ballot box, and the algorithm quietly redraws the boundaries of the possible, all while the world burns and the few gorge themselves on the spoils. Solidarity? Try whispering it into the hurricane of monetized outrage and algorithmic distraction, and watch it be sold back to you as branded hope. We talk of justice, sufficiency, and care, but the blueprints for such worlds are shredded for profit, and the architects are busy building fortresses in the cloud. So here is the riddle: How do you build a future when the present is mortgaged to the powerful, the rules are written in code no one can read, and every path out is guarded by those who profit most from the collapse?

References:

Addison, David, and Merle Eisenberg. “Capitalism Is Changing, but Not Into ‘Neofeudalism’.” Jacobin, May 21, 2025. https://jacobin.com/2025/05/capitalism-neofeudalism-tech-medieval-history

Hao, Karen. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. New York: Penguin Press, 2025.

Kohnert, Dirk. “How Elon Musk’s Expanding Footprint Is Shaping the Future of Sub-Saharan Africa.” February 2025. https://www.researchgate.net/publication/389426725_How_Elon_Musk's_expanding_footprint_is_shaping_the_future_of_sub-Saharan_Africa

Youvan, Douglas C. “The Power Behind the Algorithm: Palantir Technologies and the Global Rise of AI Surveillance and Warfare.” May 2025. https://doi.org/10.13140/RG.2.2.10601.61281.

Wolf-Dark

What hums beneath the concrete, under steel?
What did we bury when we buried it deep?
The towers ask nothing. The grid doesn’t feel.
What we buried will never let me sleep.

We dreamed in the wolf-dark, our skin caked with mud.
We knew without naming—what need had we for words?
Just sinew and season, the beat of our blood,
The river’s cool counsel, the scatter of birds.

The plow blade slashed where no blade had gone.
The seed became sentence, the harvest a lord.
We gave up the wander. We learned to hold on.
We fenced out the wild. We sharpened the sword.

We learned the deed. We learned the lock.
We measured the acre, we numbered the days.
The ledger’s columns replaced the sun’s clock—
We traded the wander. We learned to obey.

The server now hums where river ran through vein.
We swipe through the world from the warmth of our beds.
The wolf-dark is streaming. The scroll is our chain.
We follow, we like, we nod our bowed heads.

But the body remembers. The marrow resists.
The breath slips beyond the hum of machines.
Beneath every click, the old pulse insists—
A drum in the dark that no server has seen.

So let the feet wander where pavement gives way.
Let skin remember the chill of the stream.
The wolf-dark still waits at the edge of the day—
Not lost, only buried, still breathing its dream.

Somewhere a river still runs without name.
Somewhere the birds scatter, nameless and free.
We are what we buried. We kindle the flame.
The wolf-dark is waiting inside you and me.

Prometheus Incorporated

Prometheus brought fire down from the gods;
We fed it back to engines built to learn.
Now something still and eyeless sets the odds,
And we who struck the match begin to burn.

No throat to choke, no eyes to hold my stare—
Just glass and light, the soft unblinking screens.
They swallowed every secret I laid bare
And ground me in the teeth of their machines.

The years collapse—I’m dealt endless hands,
Each rule rewritten long before it’s learned.
I try to hold what no one understands;
It falls like ash from all that we have burned.

I walked the aisle of thirty kinds of bread,
Each one the same beneath a different claim.
The freedom there was merely in my head—
I picked my cage and learned to love the game.

Desire was slow, and taught us what it meant—
A hunger earned before we learned to take.
Now pleasure pipes its stream without relent
And drowns us in a thirst we cannot slake.

The cage gleams bright. We barely know it’s there.
The screens coo their steady, dreamless tone.
We eat, we scroll, we sleep—our daily prayer—
And mistake the cage for somewhere we call home.

Prometheus stole fire and brought it down.
We trained it till it showed us what to be.
Now, glowing softly in our hands, we drown
In endless light—and swear that we are free.

I Would Not Lie Down

Razor wire gleamed. Frost gnawed through wooden slats.
Men shuffled, number-stamped, as thin as rats.
A world of smoke rising, brick stacked upon bone—
And yet, a man might hum a song, alone.

The world shrank to a bowl, a breath, a fear.
Each dawn a question: who would disappear?
Yet someone offered bread without a word—
I ate. I wept. The world around me blurred.

They took our names, our clothes, our hair, our pride.
But deep inside, one thing had not yet died.
Not hope—it starved. Not faith—it slipped away.
Just this: I will not vanish. Not today.

A guard once spat and struck me to the ground.
I lay there, silent. Made no curse, no sound.
Between his fist and what I might have done—
A gap. A breath. I chose what I’d become.

One man grew still. He did not curse or weep.
He watched the dying, did not pray for sleep.
He said: I am the witness. I will remain.
Someone must learn to hold another’s pain.

Each morning I would set myself a task:
To breathe. To stand. To make my face a mask.
Not hope—just work. A purpose bare and small.
It was enough. Some days, it was my all.

One frozen march, I conjured up her face.
She walked beside me, step for step, in grace.
I knew that she was gone—or might be gone—
And still I felt her hand. We both walked on.

Years later, free, I still can feel the cold,
The wires, the smoke, the stories never told.
What kept me whole? Not faith, nor God, nor crown.
I only know: I would not lie down.

The Cat in the Garden

I watch her step between the lavender,
Each paw placed like a question with no answer,
And stop where sun has pooled against the wall,
Then fold into herself, to govern all.

Her eyes half-close, yet one ear still attends
A vigil that neither starts nor ends.
Not here nor gone, just barely passing through—
She holds the garden with her, the way dreams do.

I shift my weight; the floorboards groan beneath.
She does not stir. She does not clench or seethe.
When did I last want nothing but to be—
No clock, no list, no future calling me?

I watch her still. She does not know my name,
My debts, my dread, the ruins of my aim.
She knows the sun. She knows the warming stone.
She knows enough. She leaves the rest alone.

I cannot hold the stillness she has found.
My mind returns; it circles round and round.
And yet, in this, I feel a strange release—
I am not built for her unbroken peace.

I came here tangled. I will leave the same.
But for this hour, I had no one to blame,
My list, my dread—I watched her breathe, that’s all.
The sun moved slow across her lazy sprawl.

I’ll go soon. She won’t notice that I’ve gone.
The garden and the light will carry on.
But something passed between us, unconfessed—
I watched her live. She let me be her guest.

The day will end. The cat will find her way
To other patches, other walls, other play.
And I will go, and I will not return.
But I was here—her stillness mine to learn.

Until We Disappear

In blackened seams our fathers bent the ore
And left us engines hungry still for more.
We fed that hunger, refined the burning art—
Now fire moves by laws we can’t outsmart.

We called ourselves the gardeners of the world,
Then paved the garden, watched the smoke unfurl.
The trees we named, we felled. The springs we found,
We drained until the gurgling made no sound.

We forged new eyes to see what ours could not,
New hands to parse the systems we begot.
They did not tire. They did not look away.
Now they remember, and we learn to obey.

We mapped the genome, split the atom’s core,
Yet cannot find the wound we’re looking for.
The data doubles every passing day—
We know so much, yet meaning starts to fray.

The screens serve everything except the real.
We trade our hours for what we’ll never feel.
Each click a craving, each scroll a slow defeat.
The world burns beyond our contrived retreat.

We hunger for meaning, settle for noise,
Mistake every echo for genuine voice.
We’ve run this circle a thousand times round—
The groove worn so deep we can’t see the ground.

We toast to progress with a self-satisfied grin,
Clocking our speed as if proof that we’ll win.
The engines roar louder, drowning out fears—
We don’t see the drop until we disappear.

Yet under the concrete, a seed holds its breath,
Waiting for cracks in our cathedral of death.
No trumpet, no triumph, no glorious turn—
Just the slow, stubborn patience of things that return.

The Count

He counted the dead by their boots, not their names.
Their mothers would never pronounce them the same.
Forty-three soldiers. A child with no shoes.
He smoked while perfecting the art of bad news.

He walked until the road forgot his feet.
A column passed him, shuffling through the heat.
One looked at him. He looked back, cold and gray.
He signed their death like any other day.

His wife stopped asking where he went at night.
His daughter flinched whenever he held her tight.
His hands smelled of metal. No one would say.
Home learned to be quiet in a careful way.

The war ended with singing and lights in the square.
He watched from a window like he wasn’t there.
His daughter ran outside to join the crowd.
She didn’t wave to him. He was almost proud.

A boy lay flat beside the garden wall.
He played at dying, waiting for the call.
He saw the soldier watching. Grinned and stood.
“I got three enemies—killed them like you would.”

He didn’t answer. Turned and walked inside.
The boy kept playing: shoot, kill, hide.
He closed the shutters. Poured himself a drink.
He sat until the room began to sink.

His hands began to shake around the glass.
The room was still. The shaking wouldn’t pass.
He gripped the table. Steadied. Breathed. And then
His men shuffled through the room again.

His wife came down and stood without a word.
She’d lived with this for years. She’d seen and heard.
She didn’t touch him. Threw his drink away.
They didn’t speak. What was there left to say?

He stood at last. The chair scraped on the floor.
He walked past her and through the open door.
The street was pale. The last lamp flickered out.
His shadow vanished down an unknown route.

The column shuffled on. He joined the count.
No one said his name or looked about.
Forty-four soldiers. A child with no shoes.
The dead don’t speak. The dead don’t get to choose.

The Naked Apocalypse: How Industrial Civilization Made Human Extinction Thinkable—and Possible

Human Extinction: From Unthinkable to Imminent

The possibility of human extinction—our complete disappearance as a species—has become a defining anxiety of the twenty-first century. This is not merely a product of scientific speculation or dystopian imagination, but a reflection of profound shifts in how we understand ourselves, our place in the cosmos, and our relationship to the biosphere. The rise of industrial civilization, with its unparalleled technological and economic power, has not only brought prosperity but also created new pathways to our own annihilation. Today, extinction is no longer a metaphysical impossibility or a remote abstraction; it is a real and pressing concern, intimately bound to the ongoing collapse of the biosphere and the contradictions of our industrial way of life.

I. The Historical Evolution of the Idea of Human Extinction

1. Ancient and Classical Roots

For much of human history, the idea that Homo sapiens could vanish entirely was unintelligible or, at best, a fleeting mythic motif. Ancient mythologies—Babylonian, Greek, Hebrew, and others—were replete with stories of floods, fires, and cosmic cycles, but these catastrophes almost always preserved a remnant of humanity to repopulate the world. Even when annihilation was imagined, it was rarely conceived as permanent. The cosmos was cyclical; destruction was followed by renewal. Philosophers such as Xenophanes and Empedocles speculated about cosmic cycles in which humanity might disappear, but these disappearances were temporary, embedded within a larger narrative of recurrence and regeneration.

2. Christianity and the “Blocking” of Extinction

This deep-seated assumption of human indestructibility became especially pronounced with the rise of Christianity. Three interlocking beliefs rendered human extinction not just unlikely, but metaphysically impossible for over 1,500 years:

  • The Great Chain of Being: This model, articulated by Neoplatonists and integrated into Christian theology, posited a divinely ordered, immutable hierarchy in which every possible kind of being existed, now and forever. No link in this chain, including humanity, could ever be lost. Extinction was ruled out by metaphysical necessity.

  • Ontological Immortality: Christian anthropology held that humans, as body-soul composites, were immortal. Since the soul could not perish, humanity as a whole was immortal. To be human was to be immortal; extinction was a logical contradiction.

  • Eschatological Centrality: The Christian narrative placed humanity at the heart of cosmic history. The end of the world was not the end of humanity, but the beginning of a new, eternal phase. Human extinction was incompatible with the ultimate triumph of good over evil.

These beliefs “blocked” the very concept of extinction. To suggest that humanity could go extinct was, for centuries, akin to speaking of a “married bachelor”—a logical impossibility. Even before Christianity, similar assumptions prevailed in other cosmologies, but Christianity systematized and entrenched them in Western thought.

3. The Collapse of Certainty: Science and Vulnerability

The intellectual landscape shifted dramatically in the nineteenth century. The decline of religious authority among the intelligentsia, the collapse of the Great Chain of Being, and the rise of scientific cosmology made human extinction both intelligible and plausible. The first scientifically credible “kill mechanism” was the Second Law of Thermodynamics: the universe, and with it Earth, would eventually become inhospitable to life. This realization stamped an expiration date on humanity, even if it lay millions of years in the future.

The twentieth century brought new, more immediate threats. The invention of nuclear weapons introduced the possibility of “omnicide”—the deliberate or accidental annihilation of all human life. The Cold War era was marked by existential dread, as the prospect of nuclear winter and global fallout became part of public consciousness. Environmental crises—pollution, overpopulation, and later, anthropogenic climate change—added further layers of risk. By the late twentieth and early twenty-first centuries, the threat environment had expanded to include biotechnology, artificial intelligence, and nanotechnology, each capable of unleashing catastrophic or even extinction-level events.

II. The Biosphere in Crisis: Industrial Civilization as Agent of Collapse

The ongoing collapse of the biosphere is not a mere backdrop to the threat of extinction, but its principal mechanism in the contemporary era. Industrial civilization, with its relentless drive for growth, extraction, and consumption, has destabilized the planetary systems that make human life possible. The burning of fossil fuels has driven atmospheric carbon dioxide concentrations to levels not seen in millions of years, pushing the Earth’s climate toward dangerous and potentially irreversible tipping points. Feedback loops—such as permafrost thaw, forest dieback, and the loss of polar ice—threaten to push the climate into a “Hothouse Earth” state, rendering large swathes of the planet uninhabitable.

Biodiversity loss is another critical dimension of biospheric crisis. Industrial agriculture, deforestation, urban sprawl, and pollution have driven a sixth mass extinction, with species disappearing at rates 100 to 1,000 times the background level. This loss of biodiversity erodes the resilience of ecosystems, undermining their ability to provide essential services such as pollination, water purification, and climate regulation.

Research on “planetary boundaries” has identified several critical thresholds—such as those for climate change, biosphere integrity, biogeochemical flows (like nitrogen and phosphorus), and freshwater use—that, if crossed, could trigger abrupt and irreversible environmental shifts. Scientists warn that humanity has already transgressed several of these boundaries, opening the door to “state shifts” in Earth’s systems that are unlike anything experienced since the emergence of civilization.

What distinguishes the current crisis from past environmental changes is the speed, scale, and interconnectedness of the threats. Industrial civilization’s global reach means that local disruptions can quickly become global crises. The collapse of the biosphere is not a single event but a process of unraveling, in which feedback loops and cascading failures amplify the risks. As planetary systems are pushed beyond their limits, the probability of civilizational collapse—and with it, human extinction—rises sharply.

III. Industrial Civilization: The Double-Edged Sword

Industrial civilization stands as a paradoxical force in human history: it has been the engine of extraordinary prosperity, technological innovation, and global connectivity, yet it has also become the primary creator of existential risk. The very tools and systems that have allowed humanity to manipulate nature, extend lifespans, and explore the cosmos have simultaneously opened novel and unprecedented pathways to our own annihilation.

The dawn of the nuclear age in the mid-twentieth century marked a watershed in humanity’s relationship with technology and risk. For the first time, the species acquired the capacity for self-annihilation on a global scale. Even a limited nuclear exchange could trigger a nuclear winter, collapsing global agriculture and causing mass starvation. These weapons cast a permanent shadow over human civilization, a latent threat that will persist as long as the arsenals, and the political tensions that sustain them, remain.

Advances in biotechnology and synthetic biology have democratized the power to create and manipulate life at the genetic level. The dual-use nature of biotechnologies means that small groups—or even individuals—could, intentionally or by accident, engineer pathogens with pandemic potential. Artificial intelligence and nanotechnology represent further frontiers of risk. The development of artificial general intelligence (AGI)—an AI system with cognitive abilities that surpass or rival those of humans—poses risks that are not merely extensions of existing threats but are qualitatively new. A misaligned superintelligence, operating at speeds and with capacities far beyond human comprehension, could pursue goals indifferent or hostile to human survival. Similarly, nanotechnology, especially in the form of self-replicating nanobots, introduces the possibility of “gray goo” scenarios, where runaway replication leads to the consumption of the biosphere.

Underlying these technological risks is a deeper structural problem: the logic of industrial capitalism itself. The economic system that has driven industrial civilization is predicated on perpetual growth, short-term profit maximization, and the relentless extraction of resources. This orientation toward the immediate undermines the capacity of societies to anticipate, prepare for, or mitigate long-term existential threats. Political and economic institutions are designed to reward quarterly gains and electoral cycles, not the stewardship of planetary systems or the safeguarding of future generations.

Moreover, the risks associated with industrial civilization are deeply interconnected, often compounding one another. For example, climate change—a direct product of industrial activity—can destabilize states, leading to conflict or the breakdown of global cooperation, which in turn increases the risk of nuclear war or the misuse of emerging technologies. The erosion of biodiversity and the collapse of ecosystems can undermine food security, making societies more vulnerable to shocks, whether from pandemics or technological failures. Industrial civilization has created a tightly coupled system in which failures in one domain can cascade across others, amplifying the probability of catastrophic outcomes.

IV. Existential Moods: The Shifting Psychology of Extinction

The shifting psychology of extinction, as articulated through Émile P. Torres’s concept of “existential moods,” provides a powerful lens for understanding how Western societies have grappled with the possibility—and plausibility—of human extinction. These moods are not mere intellectual trends but reflect deep, collective attunements to the existential threats facing humanity, shaped by scientific discovery, technological change, and evolving worldviews.

The first existential mood, which dominated from antiquity until the mid-nineteenth century, was one of indestructibility. During this era, humanity was widely regarded as a permanent fixture of reality, its disappearance either inconceivable or, at most, a temporary setback in a cyclical cosmos. Catastrophic myths and eschatological narratives almost always preserved a remnant of humanity to repopulate the world. This mood was reinforced by metaphysical, ontological, and eschatological beliefs that rendered extinction not just unlikely but logically impossible.

The second mood, existential vulnerability and cosmic doom, emerged in the wake of the scientific revolution and the gradual secularization of Western thought. The collapse of religious certainty and the rise of scientific cosmology—especially the discovery of the Second Law of Thermodynamics—introduced the possibility, and indeed the inevitability, of extinction. The universe, it became clear, was not designed for human flourishing; it would eventually become inhospitable to life. For the first time, humanity was forced to confront its own cosmic ephemerality.

The third mood, impending self-annihilation, solidified in the aftermath of World War II and the dawn of the Atomic Age. Extinction was no longer a remote possibility dictated by cosmic laws but an immediate threat created by human hands. The Cold War filled public consciousness with existential dread: nuclear winter, global fallout, environmental catastrophe. This mood was defined by the terrifying proximity of extinction, as a multiplicity of distinct threats (nuclear, environmental, biological) converged to make human self-annihilation seem not just possible, but probable in the near term.

The fourth mood, that nature could kill us, emerged in the late twentieth century as scientific understanding of natural hazards deepened. The realization that asteroid impacts, supervolcanoes, and other natural phenomena could trigger mass extinctions—just as they had for the dinosaurs—shattered the comforting belief that natural catastrophes were always local or limited in scope. The paradigm of uniformitarianism, which had dominated earth sciences, gave way to neo-catastrophism: sudden, global, and devastating events were not only possible but inevitable over geological timescales.

The fifth and current mood, the worst is yet to come, is defined by a pervasive sense of looming catastrophe. Unlike previous shifts, this mood was not triggered by the discovery of a new kill mechanism but by the convergence of multiple, interacting threats—technological, environmental, and social. The rise of longtermist philosophy, the futurological pivot toward existential risks from biotechnology, artificial intelligence, and nanotechnology, and the recognition of the Anthropocene epoch—all contributed to a comprehensive, and deeply unsettling, picture of humanity’s existential predicament. The contemporary mood is characterized by the suspicion that the existential threats of the twentieth century were only a prelude to even greater dangers in the twenty-first.

These existential moods shape how societies perceive, prioritize, and respond to existential threats. They influence public policy, ethical debates, and even the willingness of individuals and institutions to take extinction risks seriously. The history of existential moods thus provides not only a map of changing attitudes toward extinction but a warning about the dangers of complacency in an age of unprecedented risk.

V. Existential Ethics: Is Extinction Good, Bad, or Neutral?

The recognition of human extinction as a real, even imminent, possibility has catalyzed a flourishing field of existential ethics—a domain that interrogates not only the technical likelihood of our disappearance, but the profound moral and evaluative questions it raises. This field grapples with whether human extinction would be an unparalleled moral catastrophe, a neutral event, or perhaps, under certain conditions, even a positive outcome.

At the heart of existential ethics are competing frameworks for evaluating the moral status of extinction. “Further-loss” views, which have become prominent in contemporary philosophical discourse, argue that extinction would be profoundly bad because it forecloses the possibility of all future human flourishing, discovery, and moral progress. The loss is not confined to the suffering or deprivation of those alive at the moment of extinction, but extends to the incalculable opportunity costs of all the lives, achievements, and joys that will now never exist. This perspective is often associated with “longtermism,” a philosophical movement that places extraordinary value on the potential of future generations.

Yet, this is not the only way of understanding the ethics of extinction. “Equivalence” views contend that the moral status of extinction depends entirely on the manner in which it occurs. If humanity were to disappear without suffering—say, through a painless, instantaneous event—then extinction, in itself, would not be uniquely problematic. From this perspective, the badness or wrongness of extinction is not intrinsic, but derivative: it depends on the harms or injustices involved in the process, rather than the simple fact of nonexistence.

A third, more radical strand of existential ethics is represented by “pro-extinctionist” views. Drawing on anti-natalist and deep ecological philosophies, some thinkers argue that extinction could be morally preferable to continued existence, particularly if the balance of human life is dominated by suffering or if humanity’s net impact on the biosphere is overwhelmingly negative. Anti-natalists such as David Benatar assert that coming into existence is itself a harm, and that the cessation of human life would bring about the end of suffering, exploitation, and environmental degradation. From this vantage, extinction is not a tragedy, but a liberation—an escape from the inherent pains of sentient existence and the destructive tendencies of our species.

The emergence and clash of these perspectives reflect deeper shifts in how we conceptualize value, obligation, and meaning in a secular, scientifically informed age. For much of Western history, as Torres and others have shown, the idea of extinction was blocked by religious and metaphysical doctrines that rendered it unintelligible or impossible. Only with the collapse of these beliefs, and the rise of scientifically credible “kill mechanisms,” did the ethical stakes of extinction become a subject of serious inquiry. Today, existential ethics is animated by the tension between unprecedented human power—our ability to shape the future of life on Earth and perhaps beyond—and an equally unprecedented vulnerability to self-inflicted or natural catastrophe.

The rise of longtermism has brought renewed urgency and coherence to the argument that extinction prevention should be a central priority for humanity. Proponents such as Nick Bostrom and Toby Ord emphasize the “astronomical value” of the long-term future, contending that the moral cost of extinction is not merely the loss of present lives, but the erasure of all possible future value, knowledge, and happiness. Yet longtermism is not without its critics. Some question whether an unending human future is truly desirable, especially if it perpetuates inequality, suffering, or ecological harm. Others worry that a focus on distant futures may distract from urgent present-day injustices or lead to the neglect of non-human forms of value. Radical environmentalists and anti-natalists, meanwhile, argue that the continuation of humanity is not self-evidently good, and that the biosphere—or even the cosmos—might be better off without us.

In sum, the ethics of human extinction is a mirror for our deepest anxieties and aspirations—a field that forces us to confront not only the possibility of our end, but the meaning and value of our existence. Whether extinction would be a tragedy, a relief, or something in between remains fiercely debated. What is clear is that, in a world where extinction is possible, perhaps even probable, the question is no longer whether we should care, but how we should act in the face of such profound uncertainty.

VI. The Biosphere, Civilization, and the Feedback Loop of Collapse

The relationship between human extinction, biospheric collapse, and industrial civilization is best understood not as a simple, linear chain of cause and effect, but as a deeply recursive and mutually reinforcing feedback loop. Industrial civilization, with its technological prowess and relentless pursuit of economic growth, has fundamentally destabilized the biosphere—the intricate web of life and planetary systems that make human existence possible. This destabilization, in turn, dramatically increases the risk of civilizational collapse, which itself can further accelerate environmental degradation, creating a vicious cycle that makes the prospect of human extinction ever more likely.

At the core of this feedback loop is the way industrial civilization undermines the biosphere. The extraction of fossil fuels, deforestation, pollution, and the mass extinction of species have all contributed to the crossing of critical planetary boundaries. As leading scientists have warned, humanity has already transgressed several of these boundaries, opening the door to abrupt and potentially irreversible changes in Earth’s systems. Runaway climate feedbacks, for example, could push the planet into a “Hothouse Earth” state, threatening the very conditions necessary for civilization to persist.

As the biosphere unravels, the stability of industrial civilization becomes increasingly precarious. Environmental degradation can lead to resource scarcity, food insecurity, mass migrations, and the breakdown of social and political order. Historical and contemporary examples—from the collapse of ancient societies like the Maya to modern cases of state failure driven by drought or ecological stress—demonstrate how environmental shocks can precipitate civilizational decline. In a globalized world, such shocks are not isolated; they can cascade across interconnected systems, amplifying the risk of systemic failure.

Crucially, the collapse of civilization does not halt environmental destruction; in many scenarios, it accelerates it. The breakdown of governance and infrastructure can lead to unregulated exploitation of remaining resources, the abandonment of environmental protections, and the proliferation of destructive practices. In the absence of coordinated responses, efforts to mitigate or adapt to environmental crises may falter, further degrading the biosphere and narrowing the window for recovery.

Some theorists warn that we are approaching—or may have already crossed—critical thresholds beyond which recovery is impossible. The concepts of “tipping points” and “planetary boundaries” highlight the danger that certain changes, once set in motion, cannot be easily reversed within timescales meaningful to human societies. For example, if climate feedbacks push global temperatures past a certain threshold, the resulting environmental changes could render large parts of the Earth uninhabitable, disrupt agriculture, and collapse food systems. Similarly, the loss of biodiversity and ecosystem services could undermine the resilience of both natural and human systems, making it increasingly difficult to respond to further shocks.

The recursive nature of this feedback loop is further complicated by the possibility that the collapse of industrial civilization could reduce our technological and organizational capacity to respond to existential threats. In one scenario, a weakened or fragmented global society might be unable to mount effective defenses against natural hazards such as asteroid impacts, pandemics, or runaway climate change. In another, the collapse itself could be the trigger for extinction, as the biosphere unravels and the basic conditions for human life—clean air, fresh water, stable climate, fertile soils—disappear.

In sum, the relationship between human extinction, biospheric collapse, and industrial civilization is a complex, recursive process marked by feedback loops and tipping points. Industrial civilization undermines the biosphere, which increases the risk of civilizational collapse; the collapse of civilization, in turn, can accelerate environmental degradation, pushing the biosphere—and humanity—closer to the brink.

VII. The Naked Apocalypse: Meaning and Responsibility

Unlike religious apocalypses that promise redemption or renewal, the prospect of human extinction in a secular age is a “naked apocalypse”—an end without meaning, consolation, or afterlife. The end of humanity is not a prelude to eternal life, divine judgment, or the fulfillment of a higher plan. Instead, it is a final, irrevocable cessation: Homo sapiens would simply vanish, with no afterlife, no spiritual continuity, and no cosmic narrative to imbue our disappearance with meaning. Extinction, in this naturalistic sense, is the kind of end that befell the dinosaurs and the dodos—they existed, and now they do not.

This realization imposes a unique and heavy burden of responsibility upon humanity. In a universe that is indifferent to our fate, there is no external agent—no deity, no providence, no metaphysical guarantee—that will intervene to ensure our survival. The task of preserving our species, and by extension the only known locus of meaning, value, and moral agency in the cosmos, falls entirely on us. The secular “existential hermeneutics” that now dominate our understanding of extinction force us to confront the stark reality that the continuity of human life is a contingent fact, not a cosmic necessity.

The practical implications of this shift are profound. If those who hold power—whether political leaders, corporate executives, or scientists—do not truly believe that extinction is possible, or if they treat it as an abstract improbability rather than an urgent risk, they are unlikely to take the necessary precautions to avert catastrophe. This complacency can be perilous. Just as a cyclist who is convinced they can never crash may stop wearing a helmet, societies that deny the plausibility of extinction may neglect the very safeguards—such as robust international cooperation, environmental stewardship, or existential risk research—that are essential for long-term survival.

The “naked apocalypse” also transforms the ethical landscape. In religious frameworks, the end of the world is often seen as the ultimate vindication of justice, a moment when the scales are balanced and suffering is redeemed. In contrast, secular extinction is an end without justification or narrative closure. There is no afterlife in which wrongs are righted, no cosmic memory to preserve our achievements or mourn our failures. The loss is total: not only the cessation of individual lives, but the erasure of all future generations, all potential knowledge, art, and moral progress.

This absence of cosmic consolation intensifies the stakes of existential risk. The very intelligibility of human extinction as a real possibility is a recent and radical development in Western thought. For much of history, the idea was blocked by metaphysical, ontological, and eschatological beliefs that rendered it incoherent or impossible. Only with the collapse of these “blocking” doctrines and the rise of scientifically credible “kill mechanisms” did the concept of extinction become culturally salient and ethically urgent.

Today, the “existential mood” of our era is characterized by a pervasive sense of vulnerability and impending catastrophe. The convergence of technological risks, environmental crises, and the recognition of our species’ fragility has created an atmosphere in which the possibility of extinction is no longer a distant abstraction but a central preoccupation. This mood, in turn, demands a new kind of ethical seriousness—a willingness to confront uncomfortable truths, to act collectively in the face of unprecedented risks, and to accept that the future of meaning and value in the universe may depend on our choices.

VIII. Conclusion: At the Precipice

Human extinction has transitioned from a distant abstraction to an imminent possibility, shaped by the accelerating collapse of the biosphere and the inherent contradictions of industrial civilization. The very forces that once propelled our species to unprecedented heights—technological ingenuity, economic expansion, and the mastery of nature—now threaten to unravel the ecological and social systems that sustain us. This paradox sits at the heart of our contemporary existential predicament: the tools of progress have become the engines of potential annihilation, and the line between flourishing and oblivion grows ever thinner.

The ethical stakes of this moment are enormous. The extinction of humanity would not simply mark the end of a species, but the loss of all future generations—the erasure of untold potential for knowledge, creativity, and moral progress. It would mean the silencing of the only known moral agents in the universe, extinguishing the possibility of meaning, value, and conscious experience. Human extinction in the secular, scientific sense is a “naked apocalypse,” an end without redemption, afterlife, or cosmic justification—a final silence in which all stories cease and all purposes dissolve.

This realization imposes a profound burden of responsibility. In a universe indifferent to our fate, the task of ensuring our survival falls entirely on us. The practical implications are clear: if those with the power to shape the future—political leaders, technologists, and the broader public—fail to recognize the plausibility of extinction, they are unlikely to take the necessary precautions. Such complacency increases the probability of catastrophe. The history of existential moods shows that our collective outlook on extinction has shifted rapidly in recent decades, but the challenge remains to translate this awareness into meaningful action.

Avoiding the fate of extinction demands more than technical fixes or incremental reforms. It requires a radical reimagining of our relationship with the Earth, with technology, and with each other. We must cultivate new forms of governance, ethics, and economic organization that prioritize resilience, stewardship, and the precautionary principle—values that stand in stark contrast to the short-termism and growth imperatives of the current order. This transformation is not guaranteed; it is an open question whether humanity can muster the foresight, solidarity, and humility necessary to steer away from the precipice.

Yet the alternative—a universe without us—is both a scientific possibility and a profound moral failure. To allow extinction through inaction or denial would be to abdicate our unique role as stewards of meaning and value in the cosmos. The challenge before us is daunting, but it is also clarifying: in the absence of external guarantees, the future of life, consciousness, and significance rests in our hands alone. Whether we rise to this responsibility will determine not only the fate of our species, but the fate of meaning itself in the universe.

Reference:

Torres, Émile P. Human Extinction: A History of the Science and Ethics of Annihilation. 1st ed. Routledge, 2023. https://doi.org/10.4324/9781003246251.