5 Comments
Ilir Hasko

Such a timely post, and I couldn’t agree more. Just this weekend, I read about the potential erosion of trust in companies like Deloitte, which included GenAI hallucinations in a report prepared for the Australian government.

A question I often pose to those using AI for productivity is this:

“Am I using AI to handle the legwork, or am I outsourcing my critical thinking?”

If it's the latter, we may think that we’re creating efficiencies, but instead we’re creating blind spots. Offloading thinking to current AI tools can slow individual growth, weaken professional development and judgment, and quietly introduce ‘workslop’ into systems that rely on trust and rigor.

I believe we’ll come to see AI not as a builder of knowledge, but as a supporting block, one that helps us advance our own critical thinking rather than replace it.

Of course, this all changes if (or when) we achieve AGI — and at that point, we’ll need to revisit everything.

Note: In all ‘AI use’ transparency, I used AI to “polish” and “align the tone” of my originally written comment :)

Timothy Chester

Ilir, this is such a thoughtful comment, and I appreciate the clarity of your framing. That distinction between offloading legwork and outsourcing critical thinking is everything. We’re not just trying to drive efficiency; we’re stewarding trust, shaping culture, and modeling how tools should serve judgment, not replace it.

You’re right that the real danger is subtle. It’s not some dramatic collapse but a quiet seepage of “workslop,” as you aptly put it: unverified claims, shallow synthesis, and eroded professional development. When this becomes normalized, rigor is the casualty. And once rigor falters, the foundation of institutional credibility follows.

Your closing point is especially important. Whether or not AGI arrives, we already have a responsibility to train ourselves and others to think with the machine, not like it. That means building a culture where transparency is rewarded, discernment is taught, and AI is treated as a catalyst for better thinking, not a shortcut to avoid it. Thanks for modeling that, even in your note about polishing with AI. That kind of disclosure is the leadership we need.

ToxSec

100%. Studies show this, yet the stigma remains. Let’s just admit we use it, and maybe try to either use it less or put our effort into more than just copying and pasting. Scaffolding is fine, but I want to feel the emotions of the writer.

Shari King

This was good! We just had a Terry Staff training on AI on Friday, and I feel much better about using it now: both knowing the best way to use it and knowing I’m allowed to.

Timothy Chester

Thanks, Shari. That’s exactly the kind of shift I hope we start to see more broadly: not just training people on the mechanics of AI, but giving them cultural permission to talk about it out loud.

Too often, the fear isn’t that the tools won’t work; it’s that someone will look sideways at you for using them. That’s a governance failure. If we want trust, we have to make the rules explicit, the training universal, and the culture safe enough for people to say, “Yes, I used AI to help with this.” Otherwise, we create shadow systems and false expectations.

Terry’s session sounds like a great step. Clarity creates confidence. And when people feel confident and authorized, they start making better, more open decisions. That’s when AI becomes a force for alignment, not just acceleration.