
With great power comes great responsibility

What businesses should really be worried about with AI


Last week I met with a brilliant business to talk about AI.

The senior leader I was speaking to was genuinely on it. He understood what AI can do, and his organisation had already rolled out some impressive functionality. Their CRM was using AI for things like summarisation, next best action, and background automation. We talked about chatbots, personalisation, and what might be possible next.

Then I asked about his teams.

“Yeah,” he said. “They’re using it too. Some of them are really getting into ChatGPT now.”

So I asked another question.

Had they had any AI training or awareness sessions for their people?

The answer was simple.

“Not for the ChatGPT stuff. Just for the big things.”

And that’s the moment the conversation shifted.

As we talked through some of the risks, I could see the penny drop.

“Surely that’s just common sense,” he said.

You would think so. But the horror stories are already out there.

Because what many organisations are missing isn't the technology. It's what's happening quietly, day to day, much closer to the front line.

People are using AI off their own bat. And I don't mean that in a negative way. I'm all for people using AI to make their working lives easier. If you're a team leader looking for ideas for a sales incentive, ChatGPT can be a brilliant thinking partner.

But here’s the bit that often gets overlooked.

Do your people actually understand what not to put into AI tools?

Are you confident that no one in your organisation is pasting company information, customer data, or commercially sensitive material into an unsecured generative AI product?

Do they understand the biases these tools can carry?

Do they know what hallucinations are, and that AI can sound incredibly confident while being completely wrong?

For many leaders, the honest answer to those questions is, “I’m not entirely sure.”

And that's not because they're negligent or careless. It's because AI has moved far faster than most organisations have been able to keep up with.

Recent research suggests that while the majority of people in businesses are now using AI in some form, only a small proportion have received any meaningful training. Where training does exist, it's often focused on technical teams, senior leaders, or specific tools that have been formally rolled out.

The further you move down the organisation, towards managers, frontline teams, and individuals just trying to do a good job, the less understanding there often is.

And that’s where the real risk sits.

This isn't about banning AI. That's not going to work now anyway! It's about acknowledging that what feels like "common sense" to some people simply isn't obvious to everyone, especially when no one has ever stopped to explain it.

It’s also not rocket science. But without some shared understanding and sensible guardrails, organisations can very quickly find themselves in hot water. Data protection. GDPR. Bias. Trust. Reputation.

None of these risks come from people trying to do the wrong thing. They come from people trying to be helpful, efficient, and proactive without the context they need to use AI responsibly.

AI has enormous potential. That part is undeniable.

But with that power comes responsibility, not just at leadership level, but across the whole organisation.

If your teams are already using AI, and they almost certainly are, the most responsible thing you can do isn't to panic or over-engineer solutions. It's to make sure people understand how these tools work, where the risks are, and when human judgement still matters.

That’s where the real value is.

Check out my AI Skills page for more info.