
People Leaders – Please Stop Using AI Like This at Work

I’ve just come off the phone with a company that reached out to book their leaders onto my AI & People Leaders course.

They also gave me permission to share their story because, in their words, they are desperately trying to stop this from happening again.

What they described was a genuine HR nightmare. One that started with good intentions, poor understanding, and the wrong use of AI at work.


An internal investigation uncovered emails sent from Team Leaders to frontline employees. These emails had clearly been written using generative AI and copied straight across without enough thought, context, or human judgement.

They were full of the usual giveaways.

The outcome?

Employees were able to challenge decisions and even raised a formal grievance, with claims of discrimination and concerns about GDPR and data handling. Perhaps most damaging of all, trust and credibility were completely lost.

To make matters worse, the information shared with the employee was factually wrong.

So the message was not only cold and impersonal, it was also incorrect.


There is a huge cost to losing employee trust, and it's the part organisations often underestimate.

When people leaders hide behind AI generated responses, especially in sensitive situations like flexible working requests, performance conversations, or wellbeing matters, employees notice immediately.

They do not feel supported.

They do not feel listened to.

They do not feel psychologically safe.

They feel dismissed.

And once trust is broken, it is incredibly hard to rebuild.

The Bigger Issue at Play Is AI Use Without Governance

Generative AI is one of the first technologies where individuals have exactly the same access as organisations.

Right now, people are using it without:

  • Understanding its limitations

  • Knowing where it can confidently get things wrong

  • Recognising legal, ethical, or emotional risk

  • Any clear guardrails from their employer

What I am seeing repeatedly is this:

Companies are investing in training their technical teams, while leaving their non-technical people to figure it out on their own.

That means people leaders are making judgement calls using tools they do not fully understand, often in moments that require empathy, nuance, and accountability.

A Real Example (Shared With Permission)

Below is an example of the type of email that caused significant issues for this organisation. I will let you read it and form your own view.

Dear Jayne Doe,

Thank you for your flexible working request — we have reviewed the details provided. After consideration of operational requirements and business needs — we are unable to approve the request at this time.

The role requires consistent availability during core hours to maintain service delivery, team collaboration, and overall productivity. While we recognise the importance of flexibility and employee wellbeing — approving this arrangement would have a negative impact on performance and team coverage.

This decision has been made in line with the Flexible Working Policy and relevant statutory criteria. We appreciate that this may be disappointing — however, this decision is final based on current business requirements. You may submit a further request in the future should circumstances change.

Kind regards,

On the surface, it looks professional.

In reality, it is generic, defensive, emotionally tone-deaf, and clearly not written with the human on the other end in mind.

AI Is Not the Problem, Nor Should It Go Unused: AI at Work Is Being Used in the Wrong Way

AI can be a powerful support tool for leaders, but it should never replace thinking, empathy, or responsibility.

If people leaders are going to use AI, they need:

  • Clear boundaries on when and how it should be used

  • Training that focuses on judgement, not just prompts

  • Confidence to co-design communication rather than copy and paste it

  • Accountability for the impact of what they send

Because when AI is used badly, it does not just create risk.

It breaks trust.

And broken trust is far more expensive to fix than doing this properly in the first place.


For dates on my next open course for AI & People Leaders, or to book a course for your teams, email me at Ceri@ceri-davies.com and I'll get back to you with all of the details.



 
 
 
