BLOG POST

What would have to be true for unions to be at their strongest in responding to AI?

By Nick Scott, Director, Trade Union AI, Unions 21 | 7 min


Can a union negotiate AI use with employers if it has no policy for how its own staff use AI, has never run AI training and has never asked members if they're using ChatGPT for workplace advice?

The answer is clearly yes. Many unions are doing exactly this. They’re tackling algorithmic management and ethical AI adoption with employers. Yet few have internal AI policies, staff training, or data on member behaviour.

However, is that union operating at its strongest in those negotiations? Are its capacity, credibility and expertise as high as they could be?

I don't think so. I believe that unions that want to be strongest in negotiating over AI must themselves go through AI transformation. Not just bargain about it – live it.

What does AI transformation look like? There are four critical dimensions unions need to work through.

1. People: it's not just skills, it's identity

Two-thirds of union staff who responded to a Unions 21 survey on AI haven't had any AI training. Yet three-quarters have experimented with AI. That's a problem for a technology that's weird, unreliable and ethically complicated.

But this isn't just about skills – it's about professional identity.

When I ran digital webinars in the past, a handful of digital specialists showed up. When I run AI webinars now? Hundreds attend – from finance, HR, operations, everywhere. Everyone shows up because AI discourse is personal. Digital transformation was about organisations (think Kodak, Blockbuster). AI discourse is about individuals – your job, your future.

And union staff face a uniquely complex challenge: navigating this shift whilst helping members navigate the same fears. How do you experiment with AI internally whilst organising for workers' protection without appearing hypocritical?

What would have to be true? Staff would need to believe that:

  • Their judgment, empathy and institutional knowledge remain irreplaceable (AI augments but doesn't replace this)

  • Using AI to strengthen their work doesn't undermine their advocacy for members affected by it

  • Their union will invest in their ongoing learning, not just one-off training (though that’s a start)

  • They have permission to experiment within clear boundaries

Without these conditions, staff experimentation happens in the shadows, without support or safeguards.

2. Process: from "if" to "how"

That shadow experimentation – “shadow AI” – is already happening. 60% of survey respondents either weren't sure if their union had an AI policy (32%) or were certain it didn't (28%). Meanwhile, the vast majority were using AI anyway. That's a governance gap.

The challenge? Democratic decision-making cultures aren't built for rapid technology adoption.

But perhaps that's not actually a problem – if we aim for responsible adoption rather than fast adoption.

When unions negotiate with employers, they push for transparency, accountability, non-discrimination, worker participation. Democratic structures enshrine these values. Unions shouldn't see themselves as slowing things down, but as building in safeguards that companies skip.

Yet too many internal conversations remain stuck on whether to engage with AI rather than how. The most common wish from staff? "That it would go away." We can't build new processes from that standpoint. Staff are already using AI. If we don't establish responsible adoption processes now, we will fall behind. Slow shouldn't mean standstill.

What would have to be true? Unions would need:

  • Processes focused on how to use AI responsibly, not whether to use it at all

  • Policies that create safe experimentation spaces rather than blanket restrictions

  • Cross-team learning mechanisms that allow discoveries to spread quickly

  • A willingness to redesign workflows entirely, not just speed up existing ones a little

The question isn't "should we use AI?" but "how do we use it in ways that reflect our values?"

3. Technology & data: control versus chaos

The old model for union technology – where IT controls all systems decisions and everything is approved from the centre – struggles with AI. AI rollout follows consumer patterns, not enterprise patterns.

AI tools have been added everywhere, without any consultation: embedded in Office 365, browsers and phones. It goes further: I’ve even talked to union organisers coding their own AI apps.

So, AI is already everywhere. But, unfortunately, not the most useful version of AI. In fact, I've met only one union with a concrete plan for rolling out paid Microsoft Copilot licences. Just one. The others are stuck in enterprise procurement hell: should we buy it? Who gets access? Meanwhile, staff use a pretty limited free version (often without training).

The second infrastructure challenge is data. One respondent described their CRM as “a mountain of information that’s neither use nor ornament”. Yet AI is fundamentally a data tool. Unions collect incredible data – grievances, bargaining records, membership engagement. But it's rarely structured for easy access and analysis.

What would have to be true? Unions would have developed:

  • Infrastructure supporting both coordination and experimentation simultaneously

  • A decision-making model where staff influence which tools get adopted

  • Clean, accessible data that people can actually use for analysis

  • Safe testing spaces for AI applications before wider rollout

Without this, you get either paralysis (no one can use anything) or chaos (everyone uses different tools with no coordination).

4. External environment: the threat of disintermediation

The world is shifting in ways that threaten union relevance.

Google's AI overviews are causing massive drops in click-throughs to union websites. Disinformation created using AI is likely already deployed in union elections. Startups are offering AI-facilitated case management services similar to those unions provide.

Here's what concerns me most: I haven't heard of a single union that knows how many members now ask ChatGPT the questions they'd previously ask their rep. Yet this is exactly the kind of task ChatGPT is being most heavily used for.

If members are quietly switching from "ask my rep" to "ask ChatGPT," unions are being disintermediated without even knowing it.

What would have to be true? Unions would be:

  • Actively monitoring how AI is changing member behaviour and expectations

  • Gathering intelligence about employers' AI capabilities (to spot negotiation opportunities)

  • Horizon scanning for where AI enables new competitors to union services

  • Doing things that would have been impossible without AI (AI-powered training at scale, mass listening exercises)

This isn't paranoia – it's environmental scanning. If you don't know how the landscape is shifting, you can't position yourself strategically.

The urgency (and the opportunity)

What took ten years with digital transformation needs to happen in two to three years with AI. If unions want to influence where, how and when AI is used in society – and they should – windows of opportunity will close.

But here's the optimistic part: unions that transform well will be better positioned to influence. They'll have better intelligence through more accessible data. They'll have lived experience of AI transformation, giving them credibility in workplace negotiations. They'll have identified where AI strengthens their work, freeing up resources for organising and impact.

The key is doing it in ways that align with union values – transparency, democracy, worker protection. This is where unions have an advantage: they can model how to do AI transformation ethically and democratically, not just quickly.


If your union is tackling these challenges, Unions 21 can help. We run workshops and consultancy programmes on managing and leading AI in unions. Get in touch: nick@unions21.org
