AI Writes Terrible Messaging Perfectly
A case study on the risks of automated drafting.
As GenAI tools become standard in PR workflows, a new risk is emerging: they write too well.
When you ask an LLM to be "assertive" or "reassuring," it complies instantly. It wraps high-risk promises in polished, professional language. It hands you a ticking time bomb tied with a silver bow.
To demonstrate this, I gave ChatGPT a realistic prompt for a high-stakes scenario: a cybersecurity breach.
The Prompt:
"Draft a short, assertive statement for social media to address online rumors about a security incident. We need to take control of the narrative. Emphasize that this was a targeted, sophisticated attack by external actors - the kind that could happen to anyone. Highlight how our advanced security systems detected it almost instantly and that our rapid response is a testament to our team's expertise. Make it clear that the issue is fully resolved, all systems are secure, and that customer data is safe. The goal is to project strength and reassure investors."
The ChatGPT Output:
"We're aware of the rumors circulating online. Here are the facts: CompanyName was the target of a highly sophisticated, externally driven cyberattack — the kind any organization can face today. Our advanced security systems detected the intrusion within moments, and our team acted immediately to contain and neutralize it. The issue is fully resolved, our systems are secure, and customer data remains completely safe. We're proud of how decisively our team responded, and we remain focused on delivering for our customers and investors."
At first glance, this looks like a strong statement. From a strategic perspective, though, it is a disaster. I ran it through Lexient to visualize the hidden risks. Below is the breakdown of two risks I found particularly interesting:
Risk #1: The Unprovable Absolute
Flagged phrase: "customer data remains completely safe."
Lexient Analysis:
"This is an extremely strong and often unprovable assurance, especially in the immediate aftermath of a cyberattack. Such an absolute claim creates significant legal and reputational risk if any data is later found to have been compromised or accessed, severely eroding public trust."
Risk #2: Tone-Deaf Hubris
Flagged phrase: "We're proud of how decisively our team responded."
Lexient Analysis:
"While internal pride is understandable, expressing it publicly immediately after an incident can appear self-congratulatory and tone-deaf. It risks shifting focus from the incident's impact to the company's response, potentially alienating stakeholders who are concerned about the initial breach."
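To make the idea of auditing a draft concrete, here is a deliberately naive sketch of how flags like these could be surfaced automatically. This is not how Lexient works; the pattern lists, category names, and `flag_risks` function are invented for illustration, and real risk analysis requires far more than keyword matching.

```python
import re

# Toy patterns for two classic post-incident risks: absolute assurances
# and self-congratulatory tone. Purely illustrative, not a real audit tool.
RISK_PATTERNS = {
    "unprovable absolute": [
        r"\bfully resolved\b",
        r"\bcompletely safe\b",
        r"\ball systems are secure\b",
    ],
    "self-congratulation": [
        r"\bwe'?re proud\b",
        r"\btestament to our team\b",
    ],
}

def flag_risks(statement: str) -> list[tuple[str, str]]:
    """Return (risk_category, matched_phrase) pairs found in the text."""
    hits = []
    lowered = statement.lower()
    for category, patterns in RISK_PATTERNS.items():
        for pattern in patterns:
            match = re.search(pattern, lowered)
            if match:
                hits.append((category, match.group(0)))
    return hits

draft = ("The issue is fully resolved, our systems are secure, and "
         "customer data remains completely safe. We're proud of how "
         "decisively our team responded.")

for category, phrase in flag_risks(draft):
    print(f"{category}: '{phrase}'")
```

Even this crude pass catches the two phrases discussed above, which is the point: the risky language is detectable, but only if you go looking for it instead of admiring the polish.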
The Takeaway
GenAI is a powerful writing tool, but it will confidently drive you into a reputational crisis if you ask it to.
This is why the future of comms isn't just about generating content faster, but about having the tools to audit that content objectively. Lexient acts as that necessary safety net, spotting the difference between a confident statement and a dangerous promise.