Defending the Hamptons
At the Other End of History
While I do not fully agree with Žižek that happiness is for idiots, I do see an unjustified optimism that is blind to the reality of where we are, and where we seem to be headed. Without an interest in despair, from which we can form a path that reflects reality rather than aspiration towards building a future worth living in, we risk happily marching to our own demise.
Before you dismiss my pessimism as simply the existential dread of an over-thinking philosopher, consider that those at the cutting edge of technology give us stark warnings.
In 2023, experts working at the cutting edge of AI signed a statement in the hope that some form of regulation would follow, perhaps an implementation of Asimov's laws or some similar fundamental principles. As yet, there are no legally binding international regulations on AI. Below is a list of signatories to that statement, which reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Geoffrey Hinton - Emeritus Professor of Computer Science, University of Toronto
Yoshua Bengio - Professor of Computer Science, U. Montreal / Mila
Demis Hassabis - CEO, Google DeepMind
Sam Altman - CEO, OpenAI
Dario Amodei - CEO, Anthropic
Dawn Song - Professor of Computer Science, UC Berkeley
Ted Lieu - Congressman, US House of Representatives
Bill Gates - Gates Ventures
Ya-Qin Zhang - Professor and Dean, AIR, Tsinghua University
Ilya Sutskever - Co-Founder and Chief Scientist, OpenAI
Igor Babuschkin - Co-Founder, xAI
Shane Legg - Chief AGI Scientist and Co-Founder, Google DeepMind
Martin Hellman - Professor Emeritus of Electrical Engineering, Stanford
James Manyika - SVP, Research, Technology and Society, Google-Alphabet
Yi Zeng - Professor and Director of Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences
Xianyuan Zhan - Assistant Professor, Tsinghua University
Albert Efimov - Chief of Research, Russian Association of Artificial Intelligence
Alvin Wang Graylin - China President, HTC
Jianyi Zhang - Professor, Beijing Electronic Science and Technology Institute
Anca Dragan - Associate Professor of Computer Science, UC Berkeley
Christine Parthemore - CEO and Director of the Janne E. Nolan Center on Strategic Weapons, The Council on Strategic Risks
Bill McKibben - Schumann Distinguished Scholar, Middlebury College
Alan Robock - Distinguished Professor of Climate Science, Rutgers University
Angela Kane - Vice President, International Institute for Peace, Vienna; former UN High Representative for Disarmament Affairs
Audrey Tang - Digitalminister.tw and Chair of National Institute of Cyber Security
Daniela Amodei - President, Anthropic
David Silver - Professor of Computer Science, Google DeepMind and UCL
Lila Ibrahim - COO, Google DeepMind
Stuart Russell - Professor of Computer Science, UC Berkeley
Tony (Yuhuai) Wu - Co-Founder, xAI
Marian Rogers Croak - VP Center for Responsible AI and Human Centered Technology, Google
Andrew Barto - Professor Emeritus, University of Massachusetts
Mira Murati - CTO, OpenAI
Jaime Fernández Fisac - Assistant Professor of Electrical and Computer Engineering, Princeton University
Diyi Yang - Assistant Professor, Stanford University
Gillian Hadfield - Professor, CIFAR AI Chair, University of Toronto, Vector Institute for AI
Laurence Tribe - University Professor Emeritus, Harvard University
Pattie Maes - Professor, Massachusetts Institute of Technology - Media Lab
Kevin Scott - CTO, Microsoft
Eric Horvitz - Chief Scientific Officer, Microsoft
Peter Norvig - Education Fellow, Stanford University
Joseph Sifakis - Turing Award 2007, Professor, CNRS - Universite Grenoble - Alpes
Atoosa Kasirzadeh - Assistant Professor, University of Edinburgh, Alan Turing Institute
Erik Brynjolfsson - Professor and Senior Fellow, Stanford Institute for Human-Centered AI
Mustafa Suleyman - CEO, Inflection AI
Emad Mostaque - CEO, Stability AI
Ian Goodfellow - Principal Scientist, Google DeepMind
John Schulman - Co-Founder, OpenAI
Wojciech Zaremba - Co-Founder, OpenAI
Dan Hendrycks - Executive Director, Center for AI Safety
Baburam Bhattarai - Former Prime Minister of Nepal, Society of Nepalese Architects
Kersti Kaljulaid - Former President of the Republic of Estonia
Russell Schweickart - Apollo 9 Astronaut, Association of Space Explorers, B612 Foundation
Andy Weber - Former U.S. Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs, Council on Strategic Risks
Allison Macfarlane - Former Chairman, US Nuclear Regulatory Commission
Nicholas Fairfax (Lord Fairfax) - Member, House of Lords
Mark Beall - Former Director of AI Strategy and Policy, Department of Defense
Lord Strathcarron - Peer, House of Lords
Stephen Luby - Professor of Medicine (Infectious Diseases), Stanford University
David Haussler - Professor and Director of the Genomics Institute, UC Santa Cruz
Ju Li - Professor of Nuclear Science & Engineering and Professor of Materials Science & Engineering, Massachusetts Institute of Technology
David Chalmers - Professor of Philosophy, New York University
Daniel Dennett - Emeritus Professor of Philosophy, Tufts University
Peter Railton - Professor of Philosophy at University of Michigan, Ann Arbor
Peter Singer - Professor, Princeton University
Sheila McIlraith - Professor of Computer Science, University of Toronto
Victoria Krakovna - Research Scientist, Google DeepMind
Mary Phuong - Research Scientist, Google DeepMind
Mariano-Florentino Cuéllar - President, Carnegie Endowment for International Peace
Lex Fridman - Research Scientist, MIT
Sharon Li - Assistant Professor of Computer Science, University of Wisconsin Madison
Phillip Isola - Associate Professor of Electrical Engineering and Computer Science, MIT
David Krueger - Assistant Professor of Computer Science, University of Cambridge
Jacob Steinhardt - Assistant Professor of Computer Science, UC Berkeley
Martin Rees - Professor of Physics, Cambridge University
Nando de Freitas - Director, Science Board, Google DeepMind
Hongwei Qin - Research Director, SenseTime
He He - Assistant Professor of Computer Science and Data Science, New York University
David McAllester - Professor of Computer Science, TTIC
Vincent Conitzer - Professor of Computer Science, Carnegie Mellon University and University of Oxford
Bart Selman - Professor of Computer Science, Cornell University
Philip Torr - Professor of Engineering Science, University of Oxford
James Mickens - Professor of Computer Science, Harvard University
Michael Wellman - Professor & Chair of Computer Science and Engineering, University of Michigan
Luis Videgaray - Senior Lecturer, MIT; Former Minister of Interior and Exterior Relations of Mexico
Jinwoo Shin - KAIST Endowed Chair Professor, Korea Advanced Institute of Science and Technology
Alice Oh - Professor at The School of Computing, KAIST and Director, MARS AI Research Center
Dae-Shik Kim - Professor of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST)
Edith Elkind - Professor of Computing Science, University of Oxford
Ray Kurzweil - Principal Researcher and AI Visionary, Google
Frank Hutter - Professor of Machine Learning, Head of ELLIS Unit, University of Freiburg
Alexey Dosovitskiy - Research Scientist, Google DeepMind
Jaan Tallinn - Co-Founder of Skype
Vitalik Buterin - Founder and Chief Scientist, Ethereum, Ethereum Foundation
Adam D’Angelo - CEO, Quora, and board member, OpenAI
Simon Last - Cofounder and CTO, Notion
Dustin Moskovitz - Co-founder and CEO, Asana
Shane Torchiana - CEO, Bird
Thuan Q. Pham - Former CTO, Uber, Board member, Nubank
Scott Aaronson - Schlumberger Chair of Computer Science, University of Texas at Austin
Max Tegmark - Professor, MIT, Center for AI and Fundamental Interactions
Bruce Schneier - Lecturer, Harvard Kennedy School
Martha Minow - Professor, Harvard Law School
Gabriella Blum - Professor of Human Rights and Humanitarian Law, Harvard Law
Kevin Esvelt - Associate Professor of Biology, MIT
Edward Wittenstein - Executive Director, International Security Studies, Yale Jackson School of Global Affairs, Yale University
Sonny Ramaswamy - President, Northwest Commission on Colleges & Universities
Laurie Zoloth - Margaret E. Burton Professor of Religion and Ethics, University of Chicago
Karina Vold - Assistant Professor, University of Toronto
Victor Veitch - Assistant Professor of Data Science and Statistics, University of Chicago
Dylan Hadfield-Menell - Assistant Professor of Computer Science, MIT
Samuel R. Bowman - Associate Professor of Computer Science, NYU and Anthropic
Mengye Ren - Assistant Professor of Computer Science, New York University
Shiri Dori-Hacohen - Assistant Professor of Computer Science, University of Connecticut
Miles Brundage - Head of Policy Research, OpenAI
Allan Dafoe - AGI Strategy and Governance Team Lead, Google DeepMind
Helen King - Senior Director of Responsibility and Strategic Advisor to Research, Google DeepMind
Jade Leung - Governance Lead, OpenAI
Jess Whittlestone - Head of AI Policy, Centre for Long-Term Resilience
Sarah Kreps - John L. Wetherill Professor and Director of the Tech Policy Institute, Cornell University
Jared Kaplan - Co-Founder, Anthropic
Chris Olah - Co-Founder, Anthropic
Andrew Revkin - Director, Initiative on Communication & Sustainability, Columbia University - Climate School
Carl Robichaud - Program Officer (Nuclear Weapons), Longview Philanthropy
Leonid Chindelevitch - Lecturer in Infectious Disease Epidemiology, Imperial College London
Nicholas Dirks - President, The New York Academy of Sciences
Hongyi Zhang - Research Scientist, ByteDance
Marc Warner - CEO, Faculty
Rob Pike - Distinguished Engineer (retired), Co-Creator of Golang, Google
Clare Lyle - Research Scientist, Google DeepMind
Nisarg Shah - Assistant Professor, University of Toronto
Ryota Kanai - CEO, Araya, Inc.
Tim G. J. Rudner - Assistant Professor and Faculty Fellow, New York University
Noah Fiedel - Director, Research and Engineering, Google DeepMind
Jakob Foerster - Associate Professor of Engineering Science, University of Oxford
Michael Osborne - Professor of Machine Learning, University of Oxford
Marina Jirotka - Professor of Human Centred Computing, University of Oxford
Nancy Chang - Research Scientist, Google
Tom Schaul - Research Scientist, Google DeepMind
Daniel Cer - Research Scientist, Google DeepMind
Roger Grosse - Associate Professor of Computer Science, University of Toronto and Anthropic
David Duvenaud - Associate Professor of Computer Science, University of Toronto
Daniel M. Roy - Associate Professor and Canada CIFAR AI Chair, University of Toronto; Vector Institute
Kanjun Qiu - CEO, Generally Intelligent
Chris J. Maddison - Assistant Professor of Computer Science, University of Toronto
Tegan Maharaj - Assistant Professor of the Faculty of Information, University of Toronto
Florian Shkurti - Assistant Professor of Computer Science, University of Toronto
Jeff Clune - Associate Professor of Computer Science and Canada CIFAR AI Chair, The University of British Columbia and the Vector Institute
Eva Vivalt - Assistant Professor of Economics, University of Toronto, and Director, Global Priorities Institute, University of Oxford
Jacob Tsimerman - Professor of Mathematics, University of Toronto
Now that I hope you are reasonably convinced of the problems we face from an AI-dominated world, what follows are my observations.
“The Hamptons are not a defensible position.” Mark Blyth, professor of international economics and public affairs, once offered this line as a reminder that wealth is tied to the social world that sustains it. The supremely wealthy still rely on the rest of us. History shows that when people are oppressed enough, revolution becomes inevitable, and no gilded enclave can keep the outside from coming in. But Blyth spoke before the rise of artificial intelligence, before the oligarchs imagined seceding from Earth to Mars, or from the human body to an uploaded consciousness, before attitudes in Western democracies became so extreme and politics so polarised, and back when climate change was the future's problem.
Today, the Hamptons, or their equivalents in New Zealand bunkers, fortified compounds, and private arcologies, may indeed become defensible. Not because society has become fair or resilient. Not because everyone is safe and happy. Not because flourishing liberal democracies have ushered in the end of history where revolt is no longer required. The Hamptons are defensible because the technological advances exploding onto society over the next few years mean that the infrastructure needed for a comfortable survival may no longer depend on other humans to run it. AI does not need consciousness, or general intelligence, for this dystopia to come into being. The trajectory we are on in the West leans towards a system in which a tech aristocracy could justify using AI to keep “us” safe from the “others”. As yet, the “others” are not defined, but many minorities are in the running. The vague label of immigrant may once again be enough to form the requisite “other”.
The most credible scientific bodies, including the IPCC, NASA, and the Potsdam Institute, warn of looming ecosystem breakdown. This means multi-breadbasket crop failures, AMOC slowdown, the crossing of multiple climate tipping points, catastrophic heatwaves approaching wet-bulb limits, mass migration triggered by drought, sea level rise, and collapsing freshwater systems. These disruptions threaten to undermine the basic infrastructures that modern societies rely on, such as food supply, electricity, transport, communications, and stable governance. So much new energy is required to sustain AI and its future iterations that, unless technology itself solves climate change, we are heading for an ecological collapse. Against this backdrop, the automation of production, surveillance, persuasion, creation and curation of information, and even control of coercion, transforms not just work or politics, but the future shape of human survival.
Revolutions then and now
Historically, revolutions emerged when three conditions aligned: material grievance, shared narrative, and collective coordination. The French Revolution had pamphleteers and salons, the American Revolution had committees of correspondence, labour movements had unions, meeting halls, and print culture. The Arab Spring relied on physical squares where bodies gathered to create political possibility. Ireland's own struggle for independence relied on dense networks of organisers, newspapers, co-op farmers, secret meetings, and volunteer militias that brought scattered grievances into coordinated action. Revolt was never just anger. It was infrastructure, attention, and a shared sense of what was real.
Today, the material grievances remain and are set to intensify as ecological collapse disrupts food, water, energy, and stability. What is disappearing are the other two pillars. Shared narratives have fractured, and coordination has become difficult in an information landscape designed for distraction and outrage rather than deliberation. The historical conditions that made revolt viable are being replaced by architectures that dissipate rather than concentrate political energy.
The end of mutual dependency in a destabilising world
The industrial era created a mutual dependency. Capital needed labour, and labour needed wages. Liberal democracy, social welfare, and the public sphere emerged in this tense balance. But in a world buckling under ecological strain, that relationship changes. Millions of workers who drive, deliver, sort, code, moderate, and service the digital economy are already being replaced by systems that do not sleep, strike, care, or starve. Autonomous vehicles move freight, AI governs warehouses, and robots perform tasks that once required subtle human judgment, while virtual agents soothe customers with synthetic voices.
The old idea that technology frees humans for “higher” work assumes such work will exist. When AI writes, designs, diagnoses, and instructs more efficiently than we do, what is left for human intelligence when its primacy as the entity of utility no longer holds? If food production becomes completely automated, if energy systems become AI-managed microgrids, if conflict is waged by drones and autonomous targeting systems, the powerful may no longer need the masses for labour, defence, or legitimacy.
As ecological collapse accelerates resource scarcity and state instability, the able may turn even more aggressively to automation as a buffer against a world becoming harder to live in. Retreat into a Golden Dome becomes a pragmatic response for those who can.
The automation of protection
Blyth’s warning assumed that revolt was materially possible. Guards, soldiers, and pilots were human beings capable of refusing, defecting, or joining the crowd. But as security becomes automated, political possibility narrows. Private security firms already deploy autonomous sensors and counter drone systems. Conflicts from Gaza to Ukraine show the growing dominance of algorithmic targeting. Armed drones require no loyalty. Surveillance networks identify faces, patterns, and movements with increasing accuracy.
Premeditated and targeted violence, once rooted in human judgment, is becoming computational. Historically, the question of protection has always revealed the true nature of power. In the days of the Roman Empire, emperors relied on a Praetorian Guard whose loyalty determined whether a ruler lived or died. The guard could be persuaded, bribed, or turned against the very person they were sworn to protect, a reminder that no amount of wealth or authority could fully secure a leader against human will. Today, that once taken-for-granted element of our social contract is quietly disappearing. In elite enclaves, protection can be provided not by people who might reconsider, but by an AI praetorian guard: automated, networked, unpersuadable. Revolt becomes not only dangerous but technically impossible.
Information as pacification
If machines defend the walls, information pacifies the exterior. Revolutions once relied on shared narratives that united people behind a common injustice. Today's digital architecture fragments, isolates, and overwhelms. Social media was once imagined as a democratic amplifier; it has become an attention extraction machine tuned for outrage and distraction.
People no longer look upward toward structures of power, but sideways toward scapegoats. Migrants, minorities, climate refugees, the very people displaced by ecological breakdown and war, become convenient targets. Outrage becomes entertainment. Dissent becomes content. A population trained by algorithms to react rather than reflect loses the cognitive and emotional foundations needed for critical thinking, and with them the political agency necessary for a healthy liberal democracy.
Attention shrinks. Imagination flattens. Isolated individuals get disparate information feeds curated just for them. The psychological and social conditions that could produce collective action erode.
The rise of the tech oligarchy
Fukuyama theorised that the end of history (in a Hegelian sense) would be the global dominance of liberal democracy. Instead we have the consolidation of a new aristocracy of technology. Musk, Thiel, Zuckerberg, Andreessen, and others do not simply have wealth, they command and hugely influence the informational, political, and info-structural systems through which contemporary life is mediated.
Thiel’s admiration for strong-state governance and his engagement with neo-reactionary thinkers like Curtis Yarvin reveal an ideological shift. Democracy, for Thiel, is too slow, too chaotic, too beholden to the masses. Musk’s empire spans media, satellites, rockets, transport, AI research, and neural interfaces. Zuckerberg governs the social nervous system of billions. Andreessen bankrolls the ideology of techno-solutionism that dismisses democratic oversight as obstruction.
They control the distribution of information, the platforms for public discourse, and increasingly hold sway over the machinery of state power. They hold epistemic power, as in the power to define reality for many people through social media feeds. Earlier societies could revolt through shared symbols and collective physical presence. Now the very medium through which dissent could organise is owned and ordered by those who benefit from preventing it.
Meanwhile, many of these elites are preparing for collapse. They buy fortified compounds in New Zealand, build private energy systems, and store cryptocurrency as post-collapse liquidity. They expect the world to fracture, they know the science, and they intend to be on the inside of whatever remains defensible.
Politics in the age of unreality
By “unreality” I mean the widening gap between the material conditions we face and the mediated world we perceive. A condition in which the signals needed for democratic self-understanding are drowned out by noise, distortion, and engineered distraction.
Ecological crisis intensifies our unreality. Heatwaves, megafires, and floods expose the fragilities of infrastructure and governance. Environmental scientists persistently ring alarm bells while we practise a kind of collective work avoidance, put our headphones on, and go about our day.
As reality deteriorates, public understanding becomes more fractured. Political energy is diverted into culture wars and conspiracies rather than the structural causes of collapse.
The end of history may be the slow neutralisation of political possibility.
The fading pathways of resistance
Part of the danger we now face is that none of this requires artificial general intelligence. The dystopia forming at the edges of our political and ecological reality is orchestrated by human actors equipped with increasingly powerful tools. We are not facing AI overlords, but human techno-overlords supported and protected by AI. This is not a threat from a non-human consciousness, but from humans doing what they have always done. However, many of the old failsafes relied on humans being embedded in the structure itself, which made revolt, dissent, rebellion, and resistance an ever-present possibility.
The shift toward a future where machines do most of the work is not inherently troubling. In fact, it could be liberating. But without strong political institutions, without what Ibn Khaldun called asabiyyah, the social cohesion that binds people together, and without a narrative that values compassion and cooperation over hyper-individualism and competition, we risk building a society where automation entrenches inequality rather than relieves it. A world where the Hamptons become defensible is possible only when inequality has been allowed to grow without the counterbalance of collective power.
Are the Hamptons defensible?
In the literal sense, yes. AI-driven security, climate-adapted estates, and privately owned infrastructure and info-structure give a few unprecedented insulation from the turbulence ahead.
Seen against the long curve of political struggle, the moment we are entering does not represent the end of conflict or the end of desire, but something more subtle and more disquieting. History has always depended on the belief that collective action can alter the trajectory of human society. What makes this moment feel like another end of history is not that change is impossible, but that the very conditions that once made it imaginable are eroding.
Without a renewed sense of asabiyyah, without a narrative that refuses resignation to helplessness, and without institutions capable of rebuilding common purpose, the future narrows. Have we fallen into the comfortable traps Fukuyama warned us of:
“But supposing the world has become “filled up”, so to speak, with liberal democracies, such that there exist no tyranny and oppression worthy of the name against which to struggle? Experience suggests that if men cannot struggle on behalf of a just cause because that just cause was victorious in an earlier generation, then they will struggle against the just cause. They will struggle for the sake of struggle. They will struggle, in other words, out of a certain boredom: for they cannot imagine living in a world without struggle. And if the greater part of the world in which they live is characterized by peaceful and prosperous liberal democracy, then they will struggle against that peace and prosperity, and against democracy.”
Francis Fukuyama
History’s darkest moments were not driven by superhuman monsters, but by ordinary people with bureaucratic power and dehumanising stories. At Wannsee in 1942, senior Nazi officials sat around a table and calmly discussed the logistics of genocide. They did not need advanced technology to commit atrocities; they needed only a murderous ideology and enough administrative capacity to carry it out. The uncomfortable truth is that we are the same species that did these things. We are the animals that annihilated Indigenous peoples, that enslaved, lynched, tortured, imprisoned, and dropped atomic bombs on civilians. The belief that “we” have transcended such impulses because we live in technologically advanced liberal democracies is a comforting story, which we decorate with optimism, but this picture is clearly not the End of History.
What is different today are the tools available to those in power. As AI, bio‑engineering, and precision surveillance advance, the scale and speed at which harm can be inflicted expands dramatically. This does not mean such horrors are inevitable, nor that new technologies predetermine their misuse. But it does mean that the stakes of political decay are far higher than before. When democratic norms erode, when dissent is criminalised, and when whole groups are cast as threats, the machinery that could be turned toward repression or exclusion is faster, more powerful, more decisive, more opaque, and less accountable than any bureaucracy in the past.
In a political environment where opposing authoritarianism is reframed as subversion, the moral compass of a society begins to spin loose. This is the world into which we are introducing AI and automated power structures. If a future Wannsee-like meeting were to occur, the options on the table for a final solution to the Nazis’ Jewish problem would be far more sophisticated and far more final. And all this as the space for coordinated resistance grows far smaller.
Dreaming again
If hope remains, it lies not in resisting technology but in reshaping the story we tell with it. The ecological crisis cannot be solved by automation alone; it requires renewed forms of coordination, solidarity, and meaning. Remember that language shapes perception. We need vocabularies of interdependence, ethics of care, and structures of governance suited to a heating world in stark need of healing.
AI can be a part of that. It could support empathy, cooperation, and resilience, but only if guided by human purposes larger than profit, convenience, efficiency, dominance, and control.
Until then, the Hamptons may indeed become defensible at the other end of history.