Will Your Stakeholders Still Trust You If They Know You’re Using AI?
If your audience found out you used AI to write, would they lose trust or give you a high five?
After my recent session at a PR conference in Jacksonville, an attendee asked me a simple but revealing question about my work:
“Did you use AI for that?”
It wasn’t aggressive or skeptical. It was cautious. It wasn’t an accusation; I just felt the weight of the question.
I’ve been in enough of these conversations to know that for many stakeholders, AI still lives in a gray zone. The word raises real questions about trust, ethics, and control, and for communications professionals, those questions are no longer theoretical.
This is now part of your work.
Your Responsibility
Whether you’re in a PR agency, a city government office, a university comms team, or a nonprofit managing donor relationships, you’re responsible for how people perceive the messages you create. And that includes whether those messages were shaped, drafted, or even partially informed by AI tools.
The tools themselves aren’t the problem. How you use them, and how your stakeholders feel about that use, absolutely is.
No matter how advanced we get, no matter how many guardrails we put in place, that question will always be there.
Cautiously Optimistic
I’m not an early adopter. I’m careful, and I work with leaders who are, too. Trust is our shared baseline, and when it erodes, even a little, everything else becomes harder.
I don’t just rely on the platforms' security settings or privacy promises. I build my own layers. I mask personal and company names. I avoid specific tools altogether when I don’t trust their intentions or business models. And I document those decisions, not just for myself, but for the people I work with. People deserve to know where the boundaries are.
The more layers of protection you can give yourself and your stakeholders, the better.
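If you’re curious what one of those layers can look like in practice, here’s a minimal sketch of the kind of masking step I’m describing. It’s written in Python with made-up names and placeholders purely for illustration; your own list, tools, and process will look different.

```python
# Minimal sketch of masking sensitive details before a prompt ever reaches an AI tool.
# All names and identifiers below are hypothetical, for illustration only.

MASK_MAP = {
    "Jordan Ellis": "[CLIENT_NAME]",            # hypothetical person
    "Riverbend Health": "[ORG_NAME]",           # hypothetical organization
    "jordan.ellis@riverbend.org": "[EMAIL]",    # hypothetical email address
}

def mask(text: str) -> str:
    """Replace sensitive names and identifiers with neutral placeholders."""
    for real, placeholder in MASK_MAP.items():
        text = text.replace(real, placeholder)
    return text

draft = "Draft a donor update from Jordan Ellis at Riverbend Health."
print(mask(draft))
# -> Draft a donor update from [CLIENT_NAME] at [ORG_NAME].
```

The point isn’t the code. The point is that the sensitive details never leave your side of the line in the first place.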
But even when the inputs are protected and the workflow is smart, it’s not always enough. You still have to account for perception. It doesn’t matter how strategic your prompt is: if someone reads something that feels flat, lazy, or inauthentic, and then finds out it was “AI-assisted,” trust takes a hit.
More Content at Speed Isn’t the Answer
“AI slop” is a new term floating around right now. It refers to low-effort, AI-generated content that’s flooding the internet and making its way into professional communications.
When someone sees a press release, a fundraising appeal, or an org-wide announcement that feels generic, they start to wonder if anyone’s really listening. That’s a dangerous place to be, especially when your role as a communicator is to maintain the humanity in the message.
PR and comms folks own that space. Relating to your publics is the work.
I don’t think AI use will cause most stakeholders to walk away, but careless AI use might. What’s “careless” depends on your relationship with the people on the other end. You have to understand their expectations, their level of AI literacy, and their emotional readiness. You need to know where your ethical line is and be able to explain it clearly when asked.
What does your team consider off-limits for AI use?
Do you mask sensitive information in prompts?
Have you turned off training permissions?
Can you confidently explain to a stakeholder how a piece of AI-assisted content was produced and why it made the final cut?
Even the most cautious and ethical use of AI carries risk. A tool that was safe last month might be acquired by another company next quarter. Suddenly, everything you input is owned by someone new. You only have to look at the 23andMe acquisition and what happened to the data people submitted in good faith to understand how quickly control can change hands.
No, you don’t need to panic. And you certainly don’t need to stop using AI. But you do need to own your choices, and you need to communicate them with the same clarity you bring to other parts of your work.
The big question isn’t going away, and neither is the responsibility that comes with answering it.
Key Takeaways
The question isn’t whether you’re using AI, but how, and whether your stakeholders can trust that use.
Protective layers matter. Don’t rely solely on platform settings. Mask inputs, choose tools carefully, and document decisions.
Perception is part of the equation. Even ethical AI use can erode trust if your output feels lazy or inauthentic.
AI should serve your strategy, not replace your judgment. Use it to create more space for human connection.
Be ready to explain your stance. Stakeholders need clarity, purpose, and accountability.