Hiya,
Here's a sobering statistic from a new study commissioned by Slack: Nearly half of workers globally are uncomfortable telling their managers they use AI at work. They fear being seen as lazy, incompetent, or worse - cheating.
The timing is significant. OpenAI is about to launch AI agents that can control your computer. Microsoft is embedding AI throughout Office. Every tech company is racing to define AI's future. But they might be missing something crucial: You can't drive bottom-up innovation when people are afraid to admit how they're using the technology.
Beyond the Numbers
The data tells an interesting story. While companies pour billions into AI capabilities, worker adoption has plateaued at 33% in the US. But that's just reported usage. The real number is likely much higher - people are using AI; they're just not talking about it.
Even more telling: Many workers report AI is actually increasing their workload, not reducing it. This isn't just about resistance to change or poor implementation. Something deeper is happening.
The Mental Model Problem
Organizational psychologists call it "cognitive entrenchment" - when expertise in one way of working actually hinders adaptation to new methods. It's why expert taxi drivers initially rejected GPS, insisting their mental maps were better. They weren't wrong about their expertise; they just couldn't see past it.
AI requires an even bigger cognitive leap than GPS.
We're not just asking people to learn new tools - we're asking them to fundamentally reimagine their relationship with work:
From memorizing information → Understanding how to prompt and verify
Old Way: A marketing analyst memorizes key statistics, like ad benchmarks or competitor data, to make decisions during campaign planning.
New Way: The analyst uses AI to retrieve benchmarks with a simple query (e.g., “What’s the average CPC for retail ads this year?”) and then cross-checks the results against trusted sources to ensure accuracy (sketched in code just after these examples).
From sequential workflows → Parallel AI-human collaboration
Old Way: A product manager writes a project brief, sends it for approval, and waits for feedback before moving to the next stage.
New Way: The product manager drafts the brief with AI, refining ideas while sharing the draft with the team for immediate input. The AI and the team work in parallel, speeding up the process.
From "perfect, then publish" → Iterative refinement
Old Way: A graphic designer creates a final ad design and submits it for review, aiming for perfection before sharing.
New Way: The designer uses AI tools like MidJourney or Canva AI to generate rough drafts quickly. Feedback is gathered early, and the AI helps refine multiple iterations, resulting in a better final product with less wasted time.
From following processes → Designing AI-human systems
Old Way: A finance team manually tracks expenses and follows rigid processes to prepare monthly reports.
New Way: The team sets up an AI system that automates expense tracking, flags anomalies, and generates reports. The team’s role shifts to designing and overseeing the AI workflow to ensure it aligns with business goals (a second sketch below shows this in miniature).
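To make the first shift concrete, here's one way the prompt-then-verify habit might look in code. This is a minimal sketch rather than anyone's actual tooling: ask_model() is a placeholder for whatever LLM client you use, and the benchmark figure is invented for illustration. The point is simply that the model's answer gets checked against a trusted source before it drives a decision.

```python
# Sketch of the "prompt, then verify" habit from the first shift above.
# ask_model() is a placeholder for whatever LLM client you actually use;
# the benchmark figure is invented purely for illustration.
import re

TRUSTED_BENCHMARKS = {
    # Figures your team has already vetted, e.g. from an internal BI report.
    "retail_avg_cpc_usd": 0.95,
}

def ask_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned answer here."""
    return "The average CPC for retail ads this year is about $1.20."

def extract_dollar_amount(text: str) -> float | None:
    """Pull the first $X.XX figure out of the model's answer, if any."""
    match = re.search(r"\$(\d+(?:\.\d+)?)", text)
    return float(match.group(1)) if match else None

def verified_benchmark(question: str, trusted_key: str, tolerance: float = 0.15) -> dict:
    """Ask the model, then cross-check its figure against a trusted source."""
    answer = ask_model(question)
    claimed = extract_dollar_amount(answer)
    trusted = TRUSTED_BENCHMARKS.get(trusted_key)
    if claimed is None or trusted is None:
        return {"status": "needs_human_review", "answer": answer}
    gap = abs(claimed - trusted) / trusted
    status = "ok" if gap <= tolerance else "disagrees_with_trusted_source"
    return {"status": status, "model_value": claimed, "trusted_value": trusted}

if __name__ == "__main__":
    print(verified_benchmark(
        "What's the average CPC for retail ads this year?",
        trusted_key="retail_avg_cpc_usd",
    ))  # flags the discrepancy instead of silently trusting the model
```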
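The fourth shift can be sketched the same way, under similar assumptions: a plain list of expense records with invented figures, a deliberately crude "well above the category average" anomaly rule, and a draft_summary() stub standing in for the model that would draft the monthly report. The team's work becomes choosing the rules and reviewing what gets flagged, not keying in numbers.

```python
# Sketch of an AI-assisted expense workflow from the fourth shift above.
# The anomaly rule is intentionally simple, and draft_summary() stands in
# for whatever model would draft the report. All figures are invented.
from statistics import mean

EXPENSES = [
    {"category": "travel", "amount": 420.00, "memo": "Client visit, Berlin"},
    {"category": "travel", "amount": 380.00, "memo": "Client visit, Lyon"},
    {"category": "travel", "amount": 450.00, "memo": "Regional sales meeting"},
    {"category": "travel", "amount": 1980.00, "memo": "Conference, last minute"},
    {"category": "software", "amount": 49.00, "memo": "Design tool seat"},
    {"category": "software", "amount": 52.00, "memo": "Design tool seat"},
]

def flag_anomalies(expenses, threshold=2.0):
    """Flag expenses more than `threshold` times their category average."""
    by_category = {}
    for e in expenses:
        by_category.setdefault(e["category"], []).append(e["amount"])
    averages = {cat: mean(vals) for cat, vals in by_category.items()}
    return [e for e in expenses if e["amount"] > threshold * averages[e["category"]]]

def draft_summary(expenses, anomalies) -> str:
    """Stand-in for an LLM call that drafts the monthly report text."""
    total = sum(e["amount"] for e in expenses)
    return (
        f"Monthly spend: ${total:,.2f}. "
        f"{len(anomalies)} item(s) flagged for human review."
    )

if __name__ == "__main__":
    flagged = flag_anomalies(EXPENSES)
    print(draft_summary(EXPENSES, flagged))
    for item in flagged:
        print("Review:", item["memo"], f"${item['amount']:,.2f}")
```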
Each shift requires unlearning deeply ingrained habits. And we're expecting this to happen while people feel they need to hide their AI use.
The Innovation Valley
We've faced similar challenges before.
Spreadsheets transformed business not just because of formula syntax, but because they changed how we think about data relationships. The internet wasn't revolutionary as a digital brochure - its power emerged when we grasped its interactive potential.
The creative arts show this pattern even more clearly. Early filmmakers just recorded theatre plays - it took decades to discover cinema's unique language of close-ups, montage, and parallel action. Photography's impact wasn't just in capturing reality, but in liberating painters from that obligation. By taking over representational art, photography freed artists to explore abstraction and expression in entirely new ways.
That's what makes the current situation so fascinating. AI isn't just a tool to do existing tasks better - like photography freeing art from pure representation, it could liberate human thinking from routine cognitive tasks. But that won't happen if people are afraid to admit they're using it.
This is the crucial difference: Those earlier transformations happened gradually, with plenty of room for experimentation and failure. The current AI revolution is happening at breakneck speed, with contradictory pressures:
Leadership wants bold innovation
Workers fear judgment
Everyone's struggling with basics
Productivity can't drop while we figure out the above
It's like asking someone to learn a new dance while insisting they never miss a step.
Creating Safe Spaces for Experimentation
This is why traditional training approaches fall short. Teaching AI features is like teaching camera settings without composition. The real learning - the mental model shift - happens through experimentation.
In my company, Novela, we see this clearly in our simulations. When given a safe environment to experiment, people relish the opportunity. They play repeatedly to try out new variations of a strategy, to fail without repercussions, and then to succeed by learning from those failures.
The key elements aren't technological, they're psychological:
Safety to fail without consequences
Permission to question assumptions
Time to develop new thinking patterns
Support in restructuring workflows
Naturally, we're building all of the above into our new products at Novela. We're developing AI skills simulations that will enable users to experiment with new tools and workflows to tackle everyday work challenges.
Crucially, we are asking them to partner with our AI co-pilot, Ela, to navigate these challenges. She will provide instant feedback, and our simulation engine will mimic real-world market dynamics. The approach can scale to scenarios across multiple industries and disciplines, and the simulations can be revisited over time to keep skills sharp.
These simulations aren't a magical solution to the problem - of course, there isn't one. However, we are working with some excellent partners to bring them to life as part of ongoing corporate education programs.
💭 Final Thought: The Path Forward
The companies that will succeed with AI aren't just investing in tools - they're investing in a mental model transformation. They understand that before you can get the big breakthroughs, you need to create space for small experiments, failed attempts, and new ways of thinking.
Because right now, half the workforce doesn't feel that safety. And that's not just stopping them from admitting AI use - it's stopping them from using AI better.
Curious about building AI confidence in your team? Contact me to explore Novela's interactive AI simulations!