
Why General-Purpose LLMs are a Risky Shortcut for Corporate Communications

From hallucinations to an off-putting tone, broad-use chatbots can create more crises than they solve

There are only a few minutes left until 5 p.m., a welcome signal to the end of a long workday. But just as you're about to log off, an email lands in your inbox.

Something's happened. It's serious. There's a crisis, and all hands are needed—immediately.

Suddenly, the end of the day turns into a public statement to draft, stakeholders to notify, social media channels to monitor and internal teams to align. Everything needs to move immediately, and there's no room for error.

This is a scenario communications professionals know all too well.

In moments like these, the appeal of generative AI is obvious. A general-purpose AI chatbot, like ChatGPT or Perplexity, can instantly generate fluent, professional-sounding copy. For communicators under intense time pressure, it feels like an easy shortcut: it can put words on a page quickly, and those words can always be tweaked and polished later.

But relying on LLMs in high-stakes situations brings a slew of risks that are easy to overlook until the damage is done.

As the name implies, general-purpose tools are general. Unlike tools specifically designed with communications in mind, they are vaguely agreeable and confident across a variety of topics, making them ideal for quick brainstorming or note summarizing. But they also carry significant accuracy and tone risks, and in high-stakes moments those risks can be the deciding factor between successfully managing a crisis and creating a whole new one.

Hallucinating with Confidence

One of the biggest dangers of chatbots is that they are built to sound confident even when they're wrong.

LLMs don't know information or answer questions the way humans do. Instead, they make predictions drawing on the information they were trained on. That's why they produce writing that is often impressively fluent and well-structured.

It's also why they hallucinate.

LLMs will confidently generate statistics, events, quotes and policies that don't exist. While this can be inconvenient, or even funny, in a low-stakes setting, in corporate communications it can be disastrous. When every word is scrutinized by journalists and stakeholders alike, there's no room for make-believe claims. In crisis situations especially, a single incorrect claim can quickly erode trust and exacerbate reputational damage.

Robotic People Pleasing

Another risk with LLMs is sycophancy, which occurs when an AI model prioritizes agreeableness over accuracy.

Designed to accommodate what users want to hear, AI models tend to shape their responses around a user's individual beliefs, preferences or desires. For instance, if a user frames a situation as not that big of a deal, the model is unlikely to challenge that framing. Instead, it will reinforce it, creating a dangerous, echo-chamber-like effect.

In corporate communications, this can lead to overly optimistic messaging that is misaligned with its intended audience. Truthfully, the last thing a company needs is a tool that automatically agrees.

The AI Tone of Voice

Even when AI-generated content is factually correct and unbiased, another risk remains: tone.

In the world of corporate communications, trust is everything, and it is built through a consistent, recognizable voice. Content drafted by LLMs looks appealing, but it can't capture a company's tone of voice or an individual's charisma. Instead, LLMs produce a recognizable and overly generic “AI voice.”

And as every communications professional knows, generic language can be damaging. It sounds evasive when the public needs accountability, cold when employees expect empathy and overly corporate when stakeholders value authenticity.

It can also sound like it was written by a machine, because it was.

When AI Has a Place in Communications, and When It Doesn't

Accuracy, authenticity and trust are all on the line when general-purpose chatbots are used to draft corporate communications. While they're quick and sound confident, they don't truly know a company or its audiences. Even with human review, the core of generated content can be riddled with fabricated facts or skewed toward whatever framing the prompt suggested. Suddenly, that "quick shortcut" has turned into a full-blown catastrophe.

This is not to say that AI itself has no place in communications. Purpose-built AI tools designed for strategic communications, like Lexient, are extremely valuable, especially when they support human writers by reviewing content rather than generating it from scratch. Tools that simulate audiences, flag risky language, assess tone, and reinforce approved messaging frameworks can strengthen communications without compromising credibility.

Under intense time pressure, AI can absolutely be an asset. But the safest and most effective use is as a second set of eyes, not as a writer.

Protect your reputation before you hit send.

Lexient acts as an automated safety net for your communications. Stress-test your drafts for tone, risk, and reception instantly.

Request a demo