Three Policy Pieces
At some point I realised that, as an NPF rep, I should probably stop writing high-concept, philosophically inflected blog posts, and actually submit some policy.
So here they are: three submissions I made to the Labour Party’s National Policy Forum at 2am last night, presented here as a kind of triptych (or perhaps a Policy Sonata — three movements in dialogue).
Regulating in the Blur explores how we might build institutions capable of navigating ambiguity and change.
The Neuroinclusive Employment Charter reimagines the workplace through the lens of cognitive diversity.
Towards an AI Commons asks what it would mean to hold digital infrastructure in common — and design it for the public good.
Each piece stands on its own. But together, they circle something. Not a doctrine, exactly, but a sensibility: a way of thinking about systems that listens for tension as well as form. A kind of motif, returning in different keys.
The current NPF consultation period is open until June 8th. Make your submission here!
Regulating in the Blur: Building Institutions for Ambiguous AI
Summary
As artificial intelligence systems become more powerful, more generative, and more integrated into everyday decision-making, existing approaches to regulation are beginning to fail. Many current frameworks rely on binary categories—“safe vs unsafe,” “human vs machine”—that do not map cleanly onto the complex, probabilistic, and fluid nature of contemporary AI systems, especially large language models.
This submission argues that effective AI governance cannot rely on false clarity. Instead, we must build institutions capable of functioning within ambiguity: institutions that are reflexive, context-aware, pluralistic, and robust to uncertainty. This is not regulatory weakness, but democratic strength. By recognising ambiguity as a structural feature, not a bug, of AI systems, Labour can lead in designing regulatory architectures that are resilient, adaptive, and accountable.
This vision doesn’t replace existing regulatory goals like safety, fairness, or accountability. It makes them attainable in a world where AI is not a product but a moving target. This is governance as choreography, not command.
Areas for Exploration
1. Establish Reflexive Regulatory Infrastructure
- Create an AI Regulatory Body with a mandate to learn, not just enforce. This body should be capable of updating guidance, issuing temporary rulings, and evolving standards over time.
- Require post-deployment reviews of AI systems, especially in high-impact areas such as health, education, and criminal justice.
- Promote living legislation: AI laws should include sunset clauses, revision triggers, and built-in feedback loops.
2. Empower Contextual Decision-Making
- Provide discretionary authority to regulators to make contextual judgements based on expert review, not just static checklists.
- Embed impact interpretation panels in oversight bodies—diverse teams with legal, technical, and sociocultural expertise who can assess context-specific harms or disputes.
- Learn from the Competition and Markets Authority model, which blends statutory power with economic judgement.
3. Support Epistemic Pluralism in Governance
- Ensure non-technical voices—ethicists, educators, artists, community leaders—are present in AI oversight and advisory boards.
- Establish pluralist ethics panels to inform major public AI deployments.
- Support participatory technology assessments in key areas, modelled on citizens’ assemblies or deliberative mini-publics.
4. Standardise Procedural Safeguards
- Mandate red-teaming (adversarial testing) of high-impact models as part of pre- and post-market assessments.
- Require public algorithm registers for government-used AI and major private sector deployments in regulated domains.
- Introduce auditable logs and traceability requirements for training data, fine-tuning processes, and deployment conditions.
5. Protect Against Capture and Performance Governance
- Create clear boundaries between regulators and regulated entities to avoid policy capture by tech incumbents.
- Promote transparency as discipline, not theatre: publish regulatory deliberations, dissenting views, and rationales for decisions.
- Fund independent oversight institutions—akin to the OBR or NAO—with a mandate to scrutinise the broader AI ecosystem.
Why Now?
Generative AI is not merely a technology. It is an epistemic shift, blurring truth, agency, and authorship. Attempts to force it into old regulatory paradigms risk creating brittle frameworks that collapse under pressure, or worse, become tools of corporate self-regulation.
By embracing ambiguity as the terrain—not a failure—Labour can lead in building regulatory systems that are both principled and pragmatic. This is not a retreat from standards, but a recognition that standards must evolve with the systems they govern.
The institutions that will succeed in governing AI will not be those that promise false certainty. They will be those that can breathe.

The Neuroinclusive Employment Charter: A Framework for Inclusive Workplaces
Summary
Across the UK, millions of neurodivergent people — those with ADHD, autism, dyslexia, dyspraxia, and other cognitive differences — face significant barriers to employment, progression, and workplace wellbeing. Despite growing awareness of neurodiversity, the gap between rhetoric and reality remains wide. Many workplaces still rely on outdated norms around communication, productivity, and behavioural expectations that exclude or marginalise neurodivergent workers.
At the same time, there is a growing recognition that diversity of thought, cognition, and perspective is not only a matter of equity, but a source of collective strength. To build an economy that is innovative, fair, and fit for the future, we must reimagine our workplaces — from the ground up — to include all kinds of minds.
This proposal calls for the development of a nationally recognised framework of neuroinclusive workplace standards, created through consultation with neurodivergent people, trade unions, employers, and disability rights groups. The goal is to establish a voluntary but influential “Neuroinclusive Employment Charter”, which can inform procurement policy, collective bargaining, and Labour’s wider agenda on mental health, fair work, and inclusive growth.
Areas for Exploration
1. Recognising Neurodiversity as a Spectrum of Cognitive Styles
- Affirm the legitimacy of self-identification and self-advocacy.
- Promote strengths-based, non-medicalised frameworks for understanding neurodivergence.
- Centre lived experience in all stages of Charter development and implementation.
2. Setting Workplace Inclusion Standards
- Develop clear standards for sensory environments, communication protocols, and flexible workflows.
- Provide toolkits for employers on implementing low-cost, high-impact adaptations.
- Recommend the integration of neurodivergent-inclusive policies into HR processes and job design.
3. Building Reflective Management Practices
- Encourage training for managers on neurodivergence, psychological safety, and inclusive feedback.
- Promote supervisory practices that prioritise relational attunement, clear boundaries, and trust.
- Advocate for co-coaching and peer mentorship models.
4. Embedding Neuroinclusion in Collective Bargaining
- Support the inclusion of neurodivergent priorities in union-employer agreements.
- Recognise neurodivergent representatives in workplace forums and negotiations.
- Develop shared standards for reasonable adjustments in collaboration with trade unions.
5. Monitoring, Accountability and Learning
- Encourage voluntary adoption through a public recognition scheme.
- Support participatory evaluation through regular surveys and feedback mechanisms.
- Include neuroinclusion metrics in wider EDI reporting and strategy.
Why Now?
Neurodivergent people continue to face some of the highest structural barriers to meaningful, secure employment — from recruitment processes that penalise difference, to rigid working cultures that overlook individual needs. While awareness is increasing, there remains a lack of practical, evidence-based frameworks to guide employers and empower workers.
This is a moment of opportunity. The future of work is being reshaped by new technologies, changing attitudes to mental health, and evolving expectations around inclusion. A Neuroinclusive Employment Charter would place Labour at the forefront of this transition, turning values into standards, and lived experience into policy.
By leading on neuroinclusion, Labour can show what dignity at work truly means: not just protection from harm, but active support to thrive. This is a chance to reshape our economy in line with our values: fairer, more dynamic, and rooted in respect for every kind of mind.

Towards an AI Commons: Building a Democratic Future for Artificial Intelligence
Summary
Artificial Intelligence is becoming a core infrastructure of 21st-century life, shaping decisions in health, education, transport, and the workplace. This NPF submission invites Labour to explore a policy direction that treats AI not as a private commodity, but as a public good. What might it mean to develop a shared digital infrastructure — an AI Commons — that serves the whole of society?
This would involve publicly stewarded datasets, open and transparent models, shared compute resources, and nationally coordinated capabilities in AI safety and governance. It is a vision that encourages innovation while ensuring accountability, and fosters sovereignty alongside international collaboration. By framing AI as part of our collective future, we can begin building a more just and resilient digital society.
Areas for Exploration
1. Found a National AI Commons
- Establish publicly funded, democratically governed datasets for use in research, public sector innovation, and regulated private access.
- Support the development of open-source AI models to increase transparency, resilience, and innovation.
- Explore regulatory structures that ensure private models trained on public data contribute proportionally to the commons they draw from.
2. Create a National AI Compute Infrastructure
- Invest in sovereign compute capacity: a network of secure, energy-efficient data centres dedicated to AI research, education, and public sector use.
- Ensure equitable access to compute resources for universities, startups, and public interest projects.
- Frame compute not only as innovation infrastructure, but as a component of national security and resilience. In times of crisis — environmental, geopolitical, or economic — AI capability may prove vital to public response.
3. Democratise Access and Governance
- Involve civil society, academia, trade unions, and regional representatives in the governance of the AI Commons.
- Fund participatory research and deliberative forums to shape ethical guidelines, use cases, and boundaries.
- Develop a public register of high-risk AI systems in use across government and regulated sectors.
4. Embed AI Commons in Industrial Strategy
- Align AI investment with mission-driven innovation goals: net zero transition, health system transformation, and inclusive productivity.
- Require grantees of public R&D funding to commit to openness principles and reciprocal investment in shared infrastructure.
- Use the AI Commons to anchor UK leadership in safe, inclusive, and democratically governed AI globally.
5. Protect Workers and Promote Inclusion
- Ensure collective bargaining rights in any workplace deploying algorithmic management or monitoring.
- Use public AI resources to develop inclusive tools, such as accessibility enhancements and assistive technology.
- Recognise digital inclusion and data literacy as foundational public skills in the age of AI.
Why Now?
The UK faces a strategic choice. We can allow AI development to remain concentrated in the hands of a few large private actors — most based overseas — or we can assert democratic control over the knowledge, compute, and governance structures that will shape our collective future.
Just as public infrastructure underpinned the industrial revolution, and the NHS became a model of post-war social progress, so too can a National AI Commons provide the backbone for a just, sovereign, and forward-looking AI ecosystem.
Labour has always been at its best when it thinks big. This is the moment to do so again.
Author: Andrew Hopper, NPF representative for Labour International CLP