The Missing Link: Why Labour Needs a Theory of AI Power
This is not an academic question. It cuts to the heart of what kind of society we want to live in, and whether democracy itself will be equipped to shape the technologies that are already shaping us. It is a conversation the Labour Party urgently needs to lead—not with cautious triangulation or technocratic cheerleading, but with a serious, structural theory of AI power.
To speak of AI without a political analysis is like speaking of markets without class, or foreign policy without empire. The technology is not neutral. It is not a force of nature. It is a human project, built with assumptions, incentives, and interests embedded in every line of code. If Labour wants to govern in the interests of working people, it must understand AI not as a novelty, but as a new architecture of control.
What Kind of Power Are We Dealing With?
Across disciplines, theorists have been mapping the contours of AI power. The implications are sobering:
Algorithmic Governmentality: Legal theorist Antoinette Rouvroy and sociologist David Beer describe a shift from laws and norms to data-driven behavioural steering. AI systems don’t just process decisions—they subtly pre-empt and shape them, turning governance into a form of preemptive influence.
Surveillance Capitalism: Shoshana Zuboff's now-famous term describes the extraction of behavioural data from individuals and its repackaging into predictive products. The individual becomes a source of surplus value, and the feedback loops of targeted content begin to govern culture, economy, and attention.
Data Colonialism: Nick Couldry and Ulises Mejias extend the critique to a global level: just as historic empires extracted land and labour, digital empires extract data. These are not just economic systems; they are systems of global governance without democratic mandate.
Embedded Worldviews: Sheila Jasanoff reminds us that every technology reflects, and feeds back into, a set of social assumptions and worldviews. An AI system trained on data from a structurally unequal world will replicate and reinforce those inequalities, all while claiming objectivity.
Hidden Labour: The myth of AI as autonomous intelligence masks the vast human labour it depends on. From clickworkers labelling images to underpaid moderators cleaning up content, there is a global class of invisible workers whose exploitation underwrites the illusion of automation.
Affective Governance: AI does not only govern behaviour; it also governs emotion. By filtering, amplifying, and targeting content, these systems reshape how we feel, how we polarise, how we grieve, and how we hope. This is political power at the level of consciousness.
Taken together, these insights describe a technological regime that is not just a productivity tool, but a mode of governance. One that disciplines labour, rewires the public sphere, and centralises control in the hands of the few who own the infrastructure.
Why Labour Must Take This Seriously
Labour has so far approached AI with caution. The AI Opportunities Action Plan focuses on skills, public service reform, and responsible innovation. There is promise in Peter Kyle's reframing of AI as a public benefit, and Labour's policy as set out in the 2024 King's Speech shows encouraging signs of longer-term thinking. But the problem is not a policy vacuum; it is a framing failure.
The dominant view still treats AI as an economic challenge or a technological puzzle. Institutions like the Alan Turing Institute do valuable technical and ethical research, but this work has not yet reshaped the political centre of gravity. Under the previous Conservative government, this was epitomised by Rishi Sunak’s Frontier AI Taskforce and the 2023 Bletchley Declaration—a framework shaped by Silicon Valley narratives, focused on existential risk, light-touch regulation, and strategic partnerships with industry giants. While the Labour government has rightly shifted the tone towards public benefit and long-term stewardship, much of the foundational framing remains unchanged from the Conservative era. The rhetoric still often treats AI primarily as an economic opportunity rather than a political challenge. By contrast, the Labour Party could use its power to reframe AI as a tool for strengthening democracy, not just industry.
Crucially, the UK is not a passive observer in this space. It is already a site of contested AI power. Consider Palantir’s £330 million NHS data contract, granted with minimal public scrutiny, which risks entrenching private control over public health infrastructure. Or the predictive policing tools deployed by several UK forces—including the Met—whose flawed algorithms and racial bias have been widely documented. These are not theoretical risks. They are live examples of how AI is being embedded into systems of governance without democratic mandate.
And yet, much of Labour's current rhetoric still accepts the terms set by industry: AI as an economic opportunity to be "safely" harnessed, not a political formation to be critically contested. Risks are framed in technical terms, such as bias, misinformation, and lack of transparency, rather than in political ones: ownership, power concentration, legitimacy.
This is a category error. AI is not merely a “what” to be managed; it is a “who” to be interrogated: a set of actors, incentives, and infrastructural interests that increasingly mediate how public power operates. Without a sharper structural lens, Labour risks legitimising a status quo that deepens inequality, undermines public trust, and outsources sovereignty to private infrastructure owners.
To be clear: Labour does not need to be anti-technology. But it must be anti-concentration of power. And that means moving beyond safety discourse to systemic critique—beyond responsive regulation to proactive democratic design.
The Missing Link: A Democratic Theory of AI
So what would it mean to develop a democratic theory of AI? It would start with a few basic premises:
AI is not neutral: It encodes assumptions, priorities, and exclusions. Every model is trained on a history.
AI governs: It shapes decisions, allocates resources, and structures visibility. These are political acts.
AI benefits those who own the infrastructure: Data, compute, and talent are concentrated in a handful of companies. Without structural intervention, inequality will deepen.
Consent matters: The mass extraction of data from citizens has occurred without meaningful democratic deliberation.
Alternative futures are possible: Public models, cooperatively governed infrastructure, and mission-driven design offer real pathways forward.
From these premises, a Labour agenda could include:
Public Compute and Data Commons: Invest in publicly owned and accessible compute capacity and data infrastructures, ensuring that AI development is not monopolised.
Worker Rights in the Age of Automation: Guarantee algorithmic transparency, collective bargaining, and fair pay for the often-invisible human labour behind AI.
Democratised AI Governance: Establish citizen assemblies, union input, and civil society oversight in the shaping of AI policy.
Civic Infrastructure for the Digital Age: Fund open-source alternatives, ethical audit bodies, and public-interest R&D in AI.
Data Sovereignty and Trusts: Empower citizens to control how their data is used, shared, and monetised, via democratic data trusts.
AI for the Social Good: Redirect technological effort toward climate mitigation, healthcare, education, and the repair of social infrastructure.
The Labour Party has always been at its best when it has aligned institutional reform with moral vision. Just as post-war Labour built a national health service from the ruins of war, today’s party has the opportunity to build a public ethic of intelligence: one that treats AI not as a proprietary asset but as a shared civic capability.
But this requires a shift in tone and imagination. Not just regulating harms, but articulating the good. Not just mitigating disruption, but designing transformation.
It requires leaders who understand that AI is not just about innovation policy, but about constitutional design. That the question is not just what we can automate, but what we must refuse to automate. That legitimacy in the age of AI will depend on more than outcomes—it will depend on process, on consent, and on voice.
There are analogies here to other struggles: the right to unionise, the fight for public broadcasting, the campaign for digital rights. But this moment is different in scope and speed. If the left does not engage now, the architecture will be built without us.
Designing the Future
AI is not destiny. It is design. And design is political.
The left’s task is to make visible what power would rather keep hidden: the assumptions in the code, the workers in the loop, the values at stake. To refuse the framing of inevitability. To insist that democracy still has teeth.
This means stepping into the AI debate not just as critics or cheerleaders, but as architects of a different future. One where intelligence is not privatised. One where technology does not serve capital first, but care, justice, and truth.
Labour must act decisively now—not in some distant future when AI has become too entrenched to challenge, but in this crucial moment of technological transition. Developing a democratic theory of AI will not just safeguard the interests of the many, but will put Labour at the forefront of shaping a future in which the digital commons belongs to us all, not just the powerful few.