AI in Internal Comms: The Do’s, Don’ts and “Absolutely Not in a Million Years”

In our last blog, we shared a general outlook on how to use AI well at work. In it, we touched on some examples of what is, and isn’t, appropriate use of AI. We thought this was important enough to expand on a little further, so… well… hi again!

While we’re here, we also wanted to run through some of the most commonly used platforms.

So, to recap: it really isn’t a question right now of whether people in your organisation are using AI. They are. Quietly. Sometimes badly. Occasionally with mild panic.

And if your current AI approach is “we haven’t said anything yet, so hopefully nothing’s on fire”… congratulations, you’re normal. But in 2026, silence isn’t neutral. It’s risky. This is where internal comms earns its keep: translating rules, reducing fear, and stopping well-meaning employees from accidentally uploading half the company into a chatbot.

Let’s break it down.

The Golden Rule of AI in Comms

AI is a tool, not a decision-maker, not a shortcut around judgement, and definitely not your new Head of People.

Used well, it saves time and improves clarity.
Used badly, it creates compliance headaches and trust issues that no one wants to own.

Your job as a comms pro isn’t to ban it or blindly cheerlead it. It’s to help set and communicate clear, human guardrails.

The Do’s: Where AI Actually Helps Internal Comms

There are plenty of places where AI can make comms teams faster and saner, all without crossing any lines.

AI is genuinely useful for drafting first versions of content, especially when you’re staring at a blank page and questioning all your life choices. It’s great for turning messy notes into something readable, summarising long documents, or helping repurpose content across channels.

It can help internal comms teams test tone, sharpen headlines, simplify language, and create structure - particularly useful when you’re juggling campaigns, change comms and leadership updates all at once.

Used this way, AI doesn’t replace expertise. It frees you up to spend more time on strategy, stakeholder management, and actual thinking. Which, conveniently, is what we’re really paid for.

This is also where a comms health check becomes incredibly useful. If teams are leaning heavily on AI just to keep up, that’s often a sign the system itself is overloaded or unclear. Fix the system, and AI becomes a helper, not a crutch.

The Don’ts: Where Things Get Risky Fast

Here’s where most organisations start sweating.

AI should not be used as a dumping ground for confidential information, personal data, commercially sensitive material or anything that would make Legal physically flinch. That includes employee relations issues, performance data, customer information, or anything covered by regulatory obligations.
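If your more technical colleagues want to make this concrete, even a crude scrubbing step before anything is pasted into a public tool helps the habit stick. Here’s a minimal, purely illustrative Python sketch; the function name, patterns and placeholders are our own invention, not a real PII detector, and no substitute for proper tooling or Legal’s input:

    import re

    # Illustrative patterns only. Real PII detection needs proper tooling
    # (and Legal's input); these names and regexes are our own invention.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    }

    def scrub(text: str) -> str:
        """Swap obvious personal data for placeholders before the text
        goes anywhere near a public AI tool."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
        return text

    draft = "Ask Priya (priya.shah@example.com, 07700 900123) about the ER case."
    print(scrub(draft))
    # -> "Ask Priya ([EMAIL REDACTED], [UK_PHONE REDACTED]) about the ER case."

It won’t catch everything (nothing this simple will), but it turns “be careful” from a policy line into a reflex.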

It also shouldn’t be used to generate external-facing content without proper review, or to automate decisions that affect people. If AI is influencing hiring, performance, pay, or disciplinary outcomes without human oversight, you’ve crossed from “helpful” into “problematic”.

And no, AI should not be used to impersonate leaders, fake employee voices, or generate messages that give the illusion of authenticity without the substance. People can smell that a mile off. Plus it’s just (morally) dodgy as all hell.

This is where comms strategy matters. The clearer your principles, tone of voice and governance, the easier it is to explain why some uses are fine and others aren’t.

The “Absolutely Not” List (Be Clear, Not Scary)

Every organisation needs a short, plain-English list of red lines. Not a 40-page policy. Not something buried in a footnote of a Word doc in a filing cabinet somewhere. Just a simple “don’t do this” section that removes ambiguity.

Things like:

  • Don’t upload sensitive or personal data into public AI tools
  • Don’t use AI to bypass approvals or controls
  • Don’t present AI output as fact without checking
  • Don’t let AI make final decisions about people

If employees don’t know where the edge is, they’ll either freeze… or step right over it.

Good internal comms turns these rules into something understandable, not something people are scared to ask about. This is exactly where change comms skills come into play: explain the why, not just the rule.

The AI Tool Landscape: Same Job Title, Very Different Personalities

One of the trickiest things about “AI guidance” is that employees aren’t using one tool. They’re using whatever’s easiest, fastest, already open in a browser tab, or mentioned on LinkedIn that morning.

From a comms perspective, lumping all AI together is a mistake. Different tools have different strengths, risks and best-use cases, and employees need help navigating that without becoming accidental compliance experiments.

Here’s a plain-English tour of the usual suspects.

• ChatGPT (OpenAI): The Swiss Army Knife

ChatGPT is the one most people start with, because it’s flexible, fast and surprisingly good at turning chaos into something readable.

It’s good for:

  • Drafting first versions of internal comms
  • Summarising long documents
  • Rewriting content in a clearer or more human tone
  • Brainstorming campaign ideas or structures

Where it gets risky:

  • People forget it’s not a source of truth
  • Public versions should never be fed confidential or personal data
  • It fibs with great conviction. Outputs sound confident even when they’re wrong

Comms guidance angle: Great as a thinking partner and drafting assistant. Not a fact-checker, not a policy interpreter, and definitely not a dumping ground for sensitive information.

• Microsoft Copilot: The Embedded Assistant

Copilot lives inside tools many organisations already use - Outlook, Word, Teams, PowerPoint - which makes it both powerful and slightly dangerous if expectations aren’t clear.

It’s useful for:

  • Drafting emails and documents using existing content
  • Summarising meetings or chat threads (beware of meetings with confidential subject matter!)
  • Creating presentations faster
  • Working securely if your tenant is configured properly

Where it gets tricky:

  • People assume “it’s Microsoft, so it must be safe”
  • Access permissions still matter
  • Outputs can amplify existing bad habits if the source material is messy

Comms guidance angle: Copilot is great when governance is clear. Comms teams should work closely with IT and Legal so employees understand what Copilot can access. And what it can’t.

This is also a great moment to reinforce tone of voice and quality standards, because Copilot will happily reproduce your worst-written emails at scale.

• Google Gemini: The Researcher

Gemini often appeals to people who live in Google Workspace and want help synthesising information or drafting content quickly.

It’s good for:

  • Early-stage research and ideation
  • Drafting outlines or summaries
  • Exploring different angles on a topic

Where caution is needed:

  • Same data risks as other public tools
  • Not all outputs are reliable or current
  • Can blur the line between “research” and “fact”

Comms guidance angle: Fine for inspiration and exploration. Not a replacement for verified sources, and not somewhere to park anything sensitive.

• Image & Creative AI (DALL·E, Midjourney, Adobe Firefly)

These tools often get overlooked in AI guidance, until someone generates an image that raises eyebrows.

They’re useful for:

  • Concept visuals
  • Campaign mood boards
  • Early creative exploration
  • Social or internal visuals (depending on licensing)

The risks:

  • Copyright and licensing confusion
  • Brand misuse or inconsistency
  • Generating images that unintentionally exclude or stereotype

Comms guidance angle: Creative AI needs brand guardrails. Clear rules on where it can be used, how outputs are checked, and what’s acceptable for internal vs external use are essential.

This often ties neatly into broader campaign comms and brand strategy conversations.

• Specialist AI Tools (HR, Analytics, Writing Assistants)

Many organisations are now using AI baked into specialist platforms - HR systems, learning tools, analytics dashboards, customer support platforms.

These can be powerful, but they raise bigger questions around bias, transparency and decision-making.

Comms should be especially alert when AI is:

  • Influencing people decisions
  • Ranking or scoring employees
  • Making recommendations that feel opaque

Comms guidance angle: Employees need to know when AI is being used, how it supports decisions, and where human judgement still sits. Silence here breeds suspicion.
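For the technically inclined, here’s what “human judgement still sits” can look like in practice. This is a toy sketch with made-up names, not anyone’s real HR system: the AI may recommend, but nothing is applied to a person’s record without a named approver.

    from dataclasses import dataclass
    from typing import Optional

    # A toy sketch of the principle: the AI may recommend, but nothing
    # happens to a person until a named human signs it off.
    @dataclass
    class Recommendation:
        employee_id: str
        action: str           # e.g. "flag for review"
        rationale: str        # the AI's stated reasoning, kept for transparency
        approved_by: Optional[str] = None

    def apply_recommendation(rec: Recommendation) -> None:
        if rec.approved_by is None:
            raise PermissionError(
                f"Recommendation for {rec.employee_id} has no human approver. "
                "Decisions about people require sign-off."
            )
        print(f"Applying '{rec.action}' for {rec.employee_id}, "
              f"approved by {rec.approved_by}")

    rec = Recommendation("E1042", "flag for review", "unusual absence pattern")
    rec.approved_by = "J. Smith, HR Business Partner"  # the human in the loop
    apply_recommendation(rec)

The design choice is the point: the system refuses to act without an approver, rather than politely suggesting one.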

This is where change comms and employee engagement tools really matter, because trust is fragile in these spaces.

The Big Message to Employees (Keep It Simple)

Rather than issuing a tool-by-tool manual that no one will read, many organisations do better with a few clear principles:

  • Use AI to support your work, not replace judgement
  • Don’t share anything you wouldn’t post publicly
  • Always review and sense-check outputs
  • Ask if you’re unsure; no one gets punished for checking
  • Remember: AI doesn’t understand context, culture or consequences

Comms teams can reinforce these principles consistently across onboarding, campaigns, leadership messaging and everyday updates.

Why This Belongs in Your Comms Strategy

If employees are already using AI (and they are), then guidance that’s unclear, outdated or missing is a reputational and compliance risk.

Again, this is exactly the kind of thing a comms health check will surface:

  • Are people confused about what tools are allowed?
  • Is guidance inconsistent across teams?
  • Are leaders saying one thing while tools quietly say another?

From there, a clear comms strategy and campaign approach can turn AI from a grey area into something people feel confident using responsibly. Because when people understand the tools, the rules and the reasons, AI stops being scary and starts being useful.

How to Talk About AI Without Sounding Like the Fun Police

Tone matters more than the tool. If your AI guidance reads like it was written by someone who distrusts both technology and humanity, people will ignore it. Or worse, they’ll use AI secretly.

Better comms sound like:

  • “Here’s how AI can help you”
  • “Here’s where we need to be careful”
  • “Here’s who to check with if you’re unsure”
  • “Here’s why these boundaries exist”

This is perhaps a great case for video comms. A short, human explainer from a trusted leader or comms voice will land far better than a dense PDF with footnotes. It brings the human touch to a topic that’s already mired in suspicion of non-human output.

The Bottom Line

AI doesn’t need banning. It needs framing.

Clear guidance, sensible boundaries and a tone that assumes good intent go a long way. And if your organisation is struggling to agree what’s OK, what’s risky and what’s off-limits, that’s not a tech problem, it’s a communication one.

A comms health check or strategy refresh can help you define those lines and explain them in a way people actually understand. And if you need any help with that, please holler!
