In our last blog, we shared a general outlook on how to use AI well at work. In it, we touched on some examples of what is, and isn’t, appropriate use of AI. We thought this was important enough to expand on a little further, so… well… hi again!
We also wanted to run through some of the most commonly used platforms.
So, to recap: it really isn’t a question right now of whether people in your organisation are using AI. They are. Quietly. Sometimes badly. Occasionally with mild panic.
So, if your current AI approach is “we haven’t said anything yet, so hopefully nothing’s on fire”… congratulations, you’re normal. But in 2026, silence isn’t neutral. It’s risky. This is where internal comms earns its keep: translating rules, reducing fear, and stopping well-meaning employees from accidentally uploading half the company into a chatbot.
Let’s break it down.
AI is a tool, not a decision-maker, not a shortcut around judgement, and definitely not your new Head of People.
Used well, it saves time and improves clarity.
Used badly, it creates compliance headaches and trust issues that no one wants to own.
Your job as a comms pro isn’t to ban it or blindly cheerlead it. It’s to help set and communicate clear, human guardrails.
There are plenty of places where AI can make comms teams faster and saner, all without crossing any lines.
AI is genuinely useful for drafting first versions of content, especially when you’re staring at a blank page and questioning all your life choices. It’s great for turning messy notes into something readable, summarising long documents, or helping repurpose content across channels.
It can help internal comms teams test tone, sharpen headlines, simplify language, and create structure - particularly useful when you’re juggling campaigns, change comms and leadership updates all at once.
Used this way, AI doesn’t replace expertise. It frees you up to spend more time on strategy, stakeholder management, and actual thinking. Which, conveniently, is what we’re really paid for.
This is also where a comms health check becomes incredibly useful. If teams are leaning heavily on AI just to keep up, that’s often a sign the system itself is overloaded or unclear. Fix the system, and AI becomes a helper, not a crutch.
Here’s where most organisations start sweating.
AI should not be used as a dumping ground for confidential information, personal data, commercially sensitive material or anything that would make Legal physically flinch. That includes employee relations issues, performance data, customer information, or anything covered by regulatory obligations.
It also shouldn’t be used to generate external-facing content without proper review, or to automate decisions that affect people. If AI is influencing hiring, performance, pay, or disciplinary outcomes without human oversight, you’ve crossed from “helpful” into “problematic”.
And no, AI should not be used to impersonate leaders, fake employee voices, or generate messages that give the illusion of authenticity without the substance. People can smell that a mile off. Plus it’s just (morally) dodgy as all hell.
This is where comms strategy matters. The clearer your principles, tone of voice and governance, the easier it is to explain why some uses are fine and others aren’t.
Every organisation needs a short, plain-English list of red lines. Not a 40-page policy. Not something buried in a footnote on a Word doc in a filing cabinet somewhere. Just a simple “don’t do this” section that removes ambiguity.
Things like:
- Don’t paste confidential, personal or commercially sensitive information into public AI tools
- Don’t use AI to make or automate decisions about hiring, pay, performance or discipline
- Don’t publish AI-generated content externally without human review
- Don’t impersonate leaders or fake employee voices
If employees don’t know where the edge is, they’ll either freeze… or step right over it.
Good internal comms turns these rules into something understandable, not something people are scared to ask about. This is exactly where change comms skills come into play: explain the why, not just the rule.
One of the trickiest things about “AI guidance” is that employees aren’t using one tool. They’re using whatever’s easiest, fastest, already open in a browser tab, or mentioned on LinkedIn that morning.
From a comms perspective, lumping all AI together is a mistake. Different tools have different strengths, risks and best-use cases, and employees need help navigating that without becoming accidental compliance experiments.
Here’s a plain-English tour of the usual suspects.
ChatGPT is the one most people start with, because it’s flexible, fast and surprisingly good at turning chaos into something readable.
It’s good for:
- Drafting first versions of emails, articles and announcements
- Turning messy notes into something readable
- Summarising long documents and repurposing content across channels
- Acting as a thinking partner when you’re stuck
Where it gets risky:
- It will present wrong information just as confidently as right information
- Anything you paste in may leave your organisation’s control
- It doesn’t know your policies, your context or your regulatory obligations
Comms guidance angle: Great as a thinking partner and drafting assistant. Not a fact-checker, not a policy interpreter, and definitely not a dumping ground for sensitive information.
Copilot lives inside tools many organisations already use - Outlook, Word, Teams, PowerPoint - which makes it both powerful and slightly dangerous if expectations aren’t clear.
It’s useful for:
- Summarising long email threads and meetings
- Drafting documents and presentations inside the tools you already use
- Pulling together first drafts from existing files and conversations
Where it gets tricky:
- It can surface internal content people didn’t realise they had access to, so permissions matter
- Outputs inherit the quality of whatever material it’s drawing on
- Expectations about what it can and can’t see are often fuzzy
Comms guidance angle: Copilot is great when governance is clear. Comms teams should work closely with IT and Legal so employees understand what Copilot can access. And what it can’t.
This is also a great moment to reinforce tone of voice and quality standards, because Copilot will happily reproduce your worst-written emails at scale.
Gemini often appeals to people who live in Google Workspace and want help synthesising information or drafting content quickly.
It’s good for:
- Quick drafting for people who live in Google Workspace
- Synthesising and summarising information from multiple sources
- Exploring ideas and generating options
Where caution is needed:
- Like any generative tool, it can present unverified information convincingly
- It’s not a home for confidential or sensitive material
Comms guidance angle: Fine for inspiration and exploration. Not a replacement for verified sources, and not somewhere to park anything sensitive.
Creative and image-generation tools often get overlooked in AI guidance, until someone generates an image that raises eyebrows.
They’re useful for:
- Quick visuals and imagery for internal campaigns
- Mocking up creative concepts before briefing designers
The risks:
- Off-brand or inappropriate imagery slipping through
- Unclear rules on what’s acceptable for internal versus external use
- Outputs going live without anyone checking them
Comms guidance angle: Creative AI needs brand guardrails. Clear rules on where it can be used, how outputs are checked, and what’s acceptable for internal vs external use are essential.
This often ties neatly into broader campaign comms and brand strategy conversations.
Many organisations are now using AI baked into specialist platforms - HR systems, learning tools, analytics dashboards, customer support platforms.
These can be powerful, but they raise bigger questions around bias, transparency and decision-making.
Comms should be especially alert when AI is:
- Screening or ranking candidates
- Influencing performance, pay or disciplinary outcomes
- Shaping decisions about people without clear human oversight
Comms guidance angle: Employees need to know when AI is being used, how it supports decisions, and where human judgement still sits. Silence here breeds suspicion.
This is where change comms and employee engagement tools really matter, because trust is fragile in these spaces.
Rather than issuing a tool-by-tool manual that no one will read, many organisations do better with a few clear principles:
- Keep confidential, personal and sensitive information out of AI tools
- A human reviews everything before it’s sent, published or acted on
- AI supports decisions; people make them
- Be open about when and how AI has been used
Comms teams can reinforce these principles consistently across onboarding, campaigns, leadership messaging and everyday updates.
If employees are already using AI (and they are), then guidance that’s unclear, outdated or missing is a reputational and compliance risk.
Again, this is exactly the kind of thing a comms health check will surface:
- Where teams are leaning on AI because the system itself is overloaded
- Where guidance is unclear, outdated or missing altogether
- Where employees are quietly improvising their own rules
From there, a clear comms strategy and campaign approach can turn AI from a grey area into something people feel confident using responsibly. Because when people understand the tools, the rules and the reasons, AI stops being scary and starts being useful.
Tone matters more than the tool. If your AI guidance reads like it was written by someone who distrusts both technology and humanity, people will ignore it. Or worse, they’ll use AI secretly.
Better comms sound like: “Here’s where AI can save you time, here’s what to keep out of it, and here’s why.” Not: “Unauthorised AI use may result in disciplinary action.”
This is perhaps a great case for video comms. A short, human explainer from a trusted leader or comms voice will land far better than a dense PDF with footnotes. It brings the human touch to a topic that’s already mired in suspicion of non-human output.
AI doesn’t need banning. It needs framing.
Clear guidance, sensible boundaries and a tone that assumes good intent go a long way. And if your organisation is struggling to agree what’s OK, what’s risky and what’s off-limits, that’s not a tech problem, it’s a communication one.
A comms health check or strategy refresh can help you define those lines and explain them in a way people actually understand. And if you need any help with that, please holler!