
Let’s get this out of the way early: AI is already in your workplace.
Whether your organisation has officially rolled it out, banned it, quietly tolerated it, or is pretending it doesn’t exist (bold strategy), people are already using AI tools to write emails, summarise documents, draft presentations and occasionally ask existential questions at 11pm.
So, the question for comms teams in 2026 isn’t “Should we use AI?”
It’s “How do we talk about it sensibly, safely and without triggering mass panic?”
Because handled well, AI can be a productivity booster, a creativity helper and a serious time-saver. Handled badly, it becomes a compliance nightmare wrapped in an anxiety spiral. And guess who sits right in the middle of that? Yep. Us.
A considered, sensible and realistic comms strategy around AI is pretty much essential in 2026.
For some companies, a separate, bespoke campaign might even be needed.
One of the biggest mistakes organisations make with AI is saying nothing.
When there’s no clear guidance, people fill the gaps themselves. They experiment quietly, copy/paste sensitive information into tools they shouldn’t, or assume leadership is either clueless or hiding something.
None of those outcomes is ideal.
Our role as comms professionals isn’t to be the AI police. It’s to help the organisation set clear expectations, reduce risk, and build confidence, without turning every message into a legal disclaimer with a pulse.
The tone we set early really matters. (As does our tone of voice in all important comms.) If AI is introduced as mysterious, risky or vaguely threatening, employees will respond with fear, resistance or covert usage that would impress the most secret of secret squirrels.
A healthier framing is this:
AI is a tool.
It’s not a replacement for people.
It’s not magic.
Nor is it something everyone must use all the time.
And crucially: how it can be used depends entirely on your organisation, your sector, your data sensitivity, and your regulatory environment. That nuance needs to be baked into your comms from day one.
This is where comms teams earn their keep.
Most people don’t want to misuse AI. They just don’t know where the lines are - especially when those lines differ between companies, roles and industries. Our job is to make those boundaries clear, practical and easy to remember.
A good starting point is to separate AI use into three buckets: generally OK, proceed with caution, and abso-bloody-lutely not.
Generally OK. These are low-risk, high-value uses. [Guru Disclaimer: always subject to your own policies] This is where AI shines as a co-pilot, not an author.
Proceed with caution. These uses often require clear rules, training, or approval. This is where comms and legal need to be aligned, and where “check before you use” should be a normal message, not a scary one.
Abso-bloody-lutely not. These are the red lines employees need to understand clearly. If these boundaries aren’t spelled out plainly, people will either guess or ignore them entirely.
This is where tone really matters. If your AI guidance reads like it was written by someone who hates both technology and joy, adoption will happen anyway. It’ll just happen quietly and under the radar.
A better approach is to keep guidance clear, practical and easy to remember. One simple rule of thumb worth communicating: if you wouldn’t paste it into a public forum, don’t paste it into an AI tool.

Let’s not dance around this. When organisations talk about AI, many employees start asking themselves one thing: “Is this going to replace me?” Ignoring that fear doesn’t make it go away. In fact, it usually makes it louder and fiercer. And considerably more widespread. Comms teams play a critical role in addressing this honestly and without making promises no one can keep.
The most effective reassurance doesn’t sound like: “AI will never affect jobs.” No one believes that.
It sounds more like honesty about what’s changing, what isn’t, and what support people can expect. Transparency builds far more trust than forced optimism.
If there are roles likely to change significantly, it’s better to acknowledge that and talk about reskilling pathways than pretend nothing is ever going to change.
In organisations doing this well, AI communication is:
Ongoing, not a one-off announcement: employees get updates as tools evolve, policies change, and lessons are learned.
Two-way, not broadcast-only: people can ask questions, raise concerns, and suggest improvements.
Role-aware: what’s appropriate for marketing might be completely inappropriate for legal or finance.
Consistent: leaders don’t say one thing while managers quietly discourage usage out of fear.
Grounded in reality: no hype, no doom, no paranoia. Just clear guidance and support.
AI isn’t neutral. It reflects the data it’s trained on, which means bias is a real concern, particularly in areas like recruitment, performance assessment or decision support.
Comms teams don’t need to become data scientists, but they do need to keep one principle front and centre: if employees feel AI is being used on them rather than with them, trust will evaporate very quickly.
Is there any good news here for comms teams? Yes, there is! Handled well, AI is actually a gift to internal comms teams.
It can help us draft, summarise and generally save time, just as it does for everyone else.
But our real value isn’t in using AI. It’s in helping everyone else use it wisely. We are the translator between technology, leadership, policy and people. That’s not a side role. That’s central.
AI in the workplace isn’t a future issue anymore. It’s a present one.
Comms teams don’t need to have all the answers, but they do need to ask the right questions, set the right tone, and create space for honest conversation.
Clear guidance beats silent fear.
Honest reassurance beats empty hype.
Human judgement still beats automation every time.
If your AI comms helps people feel informed, supported and trusted rather than monitored or replaceable, you’re doing it right.
And if you’d like help shaping that message without scaring the life out of everyone, well… you know where to find us.