Everyone’s Using GenAI, Few Admit It
Why workers lie about using AI and what it says about your culture, not your tools.
In my MBA course on business process improvement, I introduced a new approach to streamlining process discovery. Instead of spending hours reviewing documentation, we could use generative AI tools like ChatGPT to summarize, synthesize, and surface patterns, enabling more comprehensive analysis in less time.
One student raised a concern: “Isn’t it unethical to load a company’s documents into ChatGPT?” I asked, “What makes it unethical, especially if we disclose our AI use in the engagement contract?” The student replied without hesitation, “It’s still unethical.” I turned to the rest of the class and asked, “What do the rest of you think?” The room went quiet.
This class was rarely silent, but no one spoke. We moved on. Later, I wondered if something deeper was at play. Perhaps it wasn’t just about ethics. Maybe it was the discomfort surrounding GenAI usage in professional settings. A strong opinion can shut down a conversation. Or maybe students were already using AI tools and felt unsafe saying so. That silence lingered.
In today’s Dispatch, I step back from that moment in the classroom to examine what it reveals about the workplaces we are leading. The challenge is not just how people are using GenAI, but why so many feel they cannot talk about it.
The big picture
Something strange is happening in offices around the world. AI tools like ChatGPT, Google Gemini, and Claude are everywhere, but honesty about their use is nowhere to be found.
A sweeping 2025 KPMG global study revealed that 57% of employees admit to hiding their GenAI use at work, passing off generated content as their own. Nearly half knowingly violate company policies, and most don’t even verify the output before using it. This isn’t fringe behavior. It’s becoming the norm.
What’s emerging isn’t just a more efficient workforce; it’s a workforce operating under the radar. Employees are quietly integrating GenAI into their workflows without approval, oversight, or accountability. In many cases, their managers don’t know it’s happening. Their peers don’t know either. The deliverables look polished, the productivity seems strong, but the process is hidden.
And that’s the real problem. Not the tools themselves, but the silence around them. This concealment is intentional, widespread, and normalized. It erodes trust, and it makes it impossible for leaders to manage risk, set expectations, or even understand how work is getting done.
Why it matters
We’ve entered a new phase of digital transformation, one defined less by what GenAI can do and more by whether people are honest about how they use it. And that shift is quietly reshaping workplace culture. Trust is fraying as colleagues conceal automation. Security risks are mounting as sensitive data flows into public tools without oversight. And in the absence of clear norms, invisible advantages are emerging for those who quietly cut corners while others play by the rules.
The numbers make it plain: risky behavior is common and guardrails are rare. A global KPMG–University of Melbourne survey of 48,000 workers across 47 countries found that:
57% conceal their AI use.
48% upload sensitive data to public tools.
66% don’t verify AI-generated work.
Among younger workers, the numbers are even higher. Meanwhile, a 2024 Pluralsight survey found that 79% of tech professionals and 91% of executives exaggerate their AI knowledge. One-third say shadow AI use is now common on their teams.
The technology is powerful. But the deeper challenge is cultural: AI is quietly redefining what it means to contribute, collaborate, and be accountable. When work is automated in secret, it creates gaps in understanding between employees and managers, between teammates, even between effort and outcome. Trust erodes not because AI is being used, but because no one’s sure how, when, or by whom. And when the process becomes invisible, so does the foundation of trust.
Why people lie, and why it makes sense
The widespread dishonesty around AI isn’t about laziness or cheating. It’s structural. AI is often encouraged for speed but penalized when acknowledged. With unclear policies and mixed signals, workers learn that staying silent is safer than being honest.
Research points to four key drivers of this concealment:
Fear of falling behind: Half of employees worry they’ll lose ground if they don’t use AI.
Unclear rules: Fewer than half say their employer has clear policies on AI use.
Lack of training: Only 47% have received formal guidance or education on using AI tools responsibly.
Perception penalty: Work done with AI is still viewed as less original, less human, and less legitimate.
We’ve created a culture where using AI feels smart, but admitting it feels shameful. The work is valued, but the method is suspect, which pushes people to hide how they get things done. And being honest doesn’t help. Research from the University of Arizona shows that disclosing AI use consistently reduces trust, even when the results are strong.
Students trusted professors 16% less when told AI graded their work.
Investors trusted companies 18% less when ads revealed AI involvement.
Clients trusted graphic designers 20% less when they admitted using AI tools.
But concealment backfires too. Getting caught using AI without disclosure leads to even deeper trust losses, creating a double bind: be honest and lose credibility, or stay quiet and risk a crisis if found out.
Incentives are broken. Policy is lagging. And workers are left navigating a system that quietly encourages dishonesty, even while punishing transparency.
The real risks
When AI use becomes widespread but invisible, governance fails. Leaders lose the ability to manage risk, enforce policy, or understand how work actually gets done. Four risks stand out:
Compliance breakdowns: Sensitive data is flowing into public models without guardrails.
Invisible inequity: Some employees get ahead by secretly automating, while others grind.
Quality concerns: With no oversight or version control, AI errors go undetected until they cause real damage.
Team dysfunction: One in five workers says AI is reducing their workplace collaboration.
The result is a hidden layer of activity across the organization, where productivity rises, but transparency, consistency, and accountability quietly erode.
What leaders need to do
This is a leadership problem, not a tooling problem. The real issue is the lack of clear guidance, shared norms, and psychological safety. Employees navigate mixed signals alone, and most choose silence over judgment. The research points toward five responses:
Write clear AI policies that focus on responsible use, not prohibition.
Provide vetted AI tools so employees don’t default to risky platforms.
Train everyone to use AI effectively and ethically, not just the tech-savvy.
Build psychological safety so employees feel safe disclosing AI use.
Recognize AI transparency as a leadership issue, not an individual failing.
The goal isn’t to prevent AI use. It’s to prevent dishonesty, dysfunction, and data breaches from becoming cultural defaults. Banning AI won’t stop its use; it’ll only make it more secretive.
The bottom line
We’re not in a phase of AI adoption. We’re in a phase of AI concealment. Tools like ChatGPT and Copilot are already embedded in daily work, often out of sight from managers and teams. This hidden use signals a breakdown in trust, policy, and communication, where employees feel safer staying silent than being honest.
The real question for leaders is not how widely AI is used, but why people feel they cannot talk about it. That is a cultural issue, not a technical one. If leaders create clarity, safety, and openness around AI, it can strengthen collaboration and accountability. If they don’t, it will deepen mistrust and erode the very culture they are trying to protect.

