The Accidental Republic
The unintended outcome of a defence project that gave the demos a voice
‘Since the end of human action, as distinct from the end products of fabrication, can never be reliably predicted, the means used to achieve political goals are more often than not of greater relevance to the future world than the intended goals.’
― Hannah Arendt, ‘On Violence’ in Crises of the Republic: Lying in Politics, Civil Disobedience, On Violence, Thoughts on Politics and Revolution, 1972.
Fifty-seven years ago, in January 1969, when the Cold War was hot, a decision was made that would change the world in ways no one could have foreseen. A small group of engineers and administrators working under the auspices of the United States Department of Defense authorised a project whose ambition, on the surface, was strikingly modest. Computers were rare, expensive, and isolated. Communications between them were fragile. A centralised system, it was feared, could fail at precisely the wrong moment. The task was to design a network that could survive disruption – even a nuclear attack.
The project would become known as ARPANET – the network from which the Internet was born.
The Advanced Research Projects Agency commissioned it, and it rested on one of the most elegant conceptual breakthroughs of modern engineering: packet switching.
Instead of sending information as a continuous stream along a fixed path, messages would be broken into discrete packets, each tagged with destination data and sent independently across a network. The packets could take different routes, arrive out of order, be rerouted around failures, and still be reassembled perfectly at their destination.
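The mechanism can be sketched in a few lines of Python – a toy illustration of the principle, not ARPANET’s actual protocol (real packets carry far more metadata, and reassembly is handled by network hardware and operating systems):

```python
import random

def fragment(message: bytes, size: int) -> list[dict]:
    """Split a message into packets, each tagged with a sequence number."""
    return [
        {"seq": i, "payload": message[i * size:(i + 1) * size]}
        for i in range((len(message) + size - 1) // size)
    ]

def reassemble(packets: list[dict]) -> bytes:
    """Restore the original message regardless of arrival order."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = b"Knowledge survives fragmentation."
packets = fragment(message, size=8)
random.shuffle(packets)                 # packets may take different routes and arrive out of order...
assert reassemble(packets) == message   # ...yet the message is reconstituted without loss
```

The essential wager is visible even in this sketch: no single path matters, no single packet is privileged, and meaning survives dismemberment because each fragment carries enough information to find its place again.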
It is difficult to overstate how audacious this idea was. It required the abstraction of information itself – text, numbers, later images and sound – into binary form. It required faith that meaning could survive fragmentation. It required translating human intention into mathematics and timing, into electrical states and pulses of light racing through cables at near-light speed. Knowledge would no longer travel as voice, as written text, or as analogue signal alone, but as symbolic tokens, disassembled and reconstituted without loss.
This was not merely clever engineering, an expression of mankind’s intelligence and problem-solving capacities. It was a wager about the nature of order and the resilience of knowledge.
The intellectual atmosphere that made such a wager conceivable owed much to J. C. R. Licklider, a psychologist by training who wandered into computing and promptly began thinking about it in human terms. In his 1960 paper ‘Man-Computer Symbiosis’, he imagined machines not as calculating servants but as partners in thought. He envisioned interactive systems and shared problem-solving – cognition distributed between human and machine. It was a bold optimism: rational, procedural, confident that tools could elevate their users.
Even so, no one involved imagined how revolutionary this would prove to be. ARPANET was built for researchers, scientists, and military contractors – credentialed experts operating within trusted institutions. Its architects assumed hierarchy would persist naturally. Access would remain scarce. Authority would remain intact.
But the engineering was indifferent to existing orthodoxies. History has a habit of hiding its revolutions inside the limits of human experience.
Packet switching does not recognise rank. Digitisation does not privilege status. Once information is abstracted into packets, all packets are equal. Once a network is designed to route around failure, it also routes around authority. These were not ideological commitments. They were emergent properties of the Internet, a system designed to keep working no matter what. And most remarkably of all, it worked.
The first ARPANET node went live in late 1969. Soon, messages were being exchanged between universities. Files were shared. Then, almost as an afterthought, email emerged – not as a strategic objective, but as a convenience. People talked. They joked. They argued. Communication became social before anyone noticed that something profound had occurred.
What had been created was a communication system that did not require permission.
This was the first truly subversive unintended consequence.
For most of human history, public communication – speech to the many – was expensive, controlled and therefore governable. To speak to the public at scale required presses, pulpits, licences, studios, and editors. These choke points were not incidental. They were how elites exercised authority and governed reality. They determined what could be said, what counted as knowledge, and what was dismissed as error or heresy. Authority rested on narrative scarcity.
ARPANET abolished that scarcity without meaning to.
Once communication became cheap, decentralised, and reproducible at near-zero cost, the institutional monopoly on meaning began to erode. Not overnight, not cleanly, but irreversibly. Ordinary people – the demos – had historically been looked down upon by their elites and presumed incapable of sustained, responsible participation in the production of public meaning. Now, the public could speak, create, organise, document, and dispute without mediation.
At first, this development was welcomed. The early Internet was celebrated as a space of openness and democratic promise. The assumption, often unspoken, was that broader participation would converge on elite consensus. More voices, it was thought, would refine received wisdom rather than fracture it.
But instead of convergence and compliance, there was contestation. Instead of deference, there was scrutiny. Instead of passive consumption, there was user-generated content and argument. Political and media elites discovered that their authority had rested less on superior reasoning than on superior control of channels. Once those channels dissolved, elite legitimacy became unstable.
This is the second unintended consequence: the collapse of narrative monopoly.
Elite rule, in any domain, is never sustained by force alone. It depends on asymmetry – on some voices mattering more than others, on some interpretations circulating more widely, on disagreement remaining marginal or containable. The Internet flattened this structure. It did not make everyone equal, but it made everyone potentially audible.
The fear that followed was not primarily fear of falsehood. Elites have always tolerated lies when they served power. The more profound fear was twofold: the loss of well-worn and lucrative business models, and the spread of unauthorised interpretation – of truths spoken out of turn, of challenges not routed through approved procedures, of legitimacy contested in real time by people who, horror of horrors, had not been trained, credentialed, or licensed to do so.
Social media did not invent this condition; it intensified it. It placed the means of content creation and publication into the hands of millions and stripped institutions of their gatekeeping role. The result was not chaos in the theatrical sense, but something more corrosive for the status quo: permanent epistemic instability.
The response was swift and revealing. When outright censorship proved politically and technically difficult, management replaced prohibition. Speech would not be banned but ranked – by experts and by unelected, self-appointed ‘fact checkers’ and purveyors of ‘the truth’. Not silenced, but contextualised. Not forbidden, but framed hierarchically. Power, adapting as it always does, re-entered the system under new names.
Nowhere is this clearer than in the European Union’s approach to digital governance. Under the Digital Services Act and related frameworks, platforms are required to address ‘disinformation,’ ‘hate speech,’ and ‘harmful content’ – categories whose definitions remain deliberately elastic. These are not merely legal terms. They are instruments of narrative control, reasserting elite authority over meaning while maintaining the appearance of procedural neutrality.
Especially after Brexit and the first election of Donald Trump as President of the USA, elite prejudice about the public received its greatest confirmation. The public, it seemed, could not be trusted. The demos, now disastrously empowered, was regarded as deficient: vulnerable to manipulation, prone to hatred, incapable of discernment. Freedom, it turned out – at least for those who see themselves as society’s self-appointed rulers – had been granted too generously.
This logic is not confined to Europe. Western governments increasingly speak of the need to ‘protect democracy’ from speech – a formulation that would once have sounded absurd. It makes sense only if democracy is understood not as popular rule, but as elite-managed consent. When consent becomes unreliable, speech becomes dangerous.
The final irony appears when we observe where this reasoning leads in its purest form. In Iran, as we see so bloodily demonstrated today, the Internet has been switched off by the state. The justification is identical: public safety, national stability, social harmony. The technology born of decentralisation becomes, in authoritarian hands, a centralised kill switch. While the Mullahs in Iran use the cover of digital darkness to murder their citizens on the streets in their thousands, the would-be ‘Ayatollahs’ in the West, like Sir Keir Starmer, seek to shut down X, not to kill citizens but to silence them into deference.
Here, the unintended consequence completes its arc. A network designed to survive disruption becomes a weapon of absolute control. A system built to route around authority reveals, under sufficient coercion, how much authority can still be exerted.
And yet, the genie does not return to the bottle.
Shutdowns are temporary. Workarounds proliferate. Satellite links, VPNs, mesh networks. The demos, once silent, have learned not only to speak but to evade. Deference, once broken, does not regenerate easily. The knowledge that speech is possible – even if intermittently denied – changes political psychology permanently.
The engineers of ARPANET did not intend this. They were addressing a resilience problem. They succeeded. The system survived. What did not survive intact was the elite monopoly on meaning.
That is the true unintended consequence. And it is still unfolding.
What followed was not simply noise, nor merely disorder, but something rarer and more historically potent: the re-entry of the many into the work of interpretation. The demos, long managed through deference and mediation, discovered that they could speak back – not always wisely, not always well, but persistently, creatively, and at scale. Meaning ceased to be delivered from above and became something argued over in public, endlessly revised, fought for, abandoned, rediscovered.
This contestation is often described as a pathology. It is framed as confusion, misinformation, and a collapse of trust. And there is truth in this diagnosis. But it is only half the story. Contestation is also how new orders are born. It is how stagnant certainties are broken open. It is how human agency reasserts itself against systems that have grown too confident, too closed, too certain of their own benevolence.
Every expansion of expressive capacity has been greeted with fear. Printing produced heresy before it produced enlightenment. Mass literacy spread pamphlets before it spread constitutions. Photography destabilised authority before it documented injustice. The network is no different. It amplifies folly and brilliance alike. It reveals how uneven wisdom is – and how widely it is distributed.
We now stand at another threshold. Artificial intelligence promises to automate cognition itself, to compress expertise, to render non-human ‘judgment’ scalable and cheap. Some claim this will reduce the need for human intelligence altogether. History suggests something subtler. When tools expand capacity, they do not abolish agency; they unleash and displace it. They force humans to renegotiate what judgment, creativity, and responsibility mean under new conditions.
The networked contestation of meaning – messy, adversarial, unresolved – may yet prove to be the training ground for that renegotiation. It may cultivate forms of collective intelligence that no planner could design, no elite could authorise in advance. Not harmony, but dynamism. Not consensus, but movement.
The new dynamic unsettles power not simply because control has been weakened, but because outcomes can no longer be reliably predicted.
It is both exhilarating and terrifying. Fifty-seven years ago, who could have imagined that a modest defence research project, built on packets and redundancy, would help produce a world in which billions of people participate – however imperfectly – in the ongoing construction of reality?
Over half a century ago this month, ARPANET did not promise wisdom; it made wisdom’s proliferation possible. What humanity does with that possibility remains, gloriously and dangerously, unfinished.


