Employers are increasingly facing the reality that their staff are using AI at work. The inescapable spread of such tools brings not only tricky new challenges for employers, but new risks to which even legislators in the EU are paying attention. Employers without a response are exposed to serious risk. A recent study by Software AG concluded that half of all computer-based workers use AI tools, and 46 per cent of that contingent said they would continue to do so even if ordered not to.

This quantitative data confirms what for many is true anecdotally: regardless of an employer's policies and tools, employees are now using AI for work purposes in almost every workplace. This is a major escalation of the longstanding problem of "Shadow IT", the use of unauthorised software or hardware by staff. As AI's capabilities grow, the attraction of unsanctioned use is only likely to increase.

For employers, the risks of such unchecked usage are considerable. Even approved AI tools, vetted and secured by IT departments, carry established legal risks; those risks are only multiplied when use is covert:

  • Legal liability: Anyone who has used AI tools is aware of their (at least current) limitations: their propensity for hallucinations and error. Yet their generally high standard of output can lull the unvigilant user into blindly reusing flawed material. Employees relying on AI tools without appropriate training and safeguards risk producing work that compromises quality and can lead to wider reputational damage and liability. Generic AI tools are also unlikely to produce output that matches an organisation's particular style and approach, creating potentially jarring inconsistencies. And as awareness of AI grows, more customers will realise they can use online tools to check whether they are being served AI-generated content, leading to losses of trust and, at worst, allegations that the product or service is not as advertised, or even in breach of contract.

  • Data protection: Unregulated use of insecure AI tools risks the wider disclosure of sensitive company information and the personal data of clients, customers, and colleagues. This presents a real danger of breaching data protection regulations and risking significant fines and reputational damage. 

  • Cyber crime: Similarly, unsanctioned AI tools carry the risk not only of sensitive information being mistakenly disclosed, but of that information being actively exploited for cyber crime. Bad-faith actors who gain access to it may have the means to blackmail not only the compromised organisation, but its clients, customers, and employees too. Leakage of sensitive information can also provide training data for ever more convincing phishing and ransomware attacks.

  • IP: Employees feeding work data into an AI tool could be inadvertently training it, shaping its knowledge base and thus sharing company IP with the tool's other users on an industrial scale. The danger works both ways: there is the reciprocal prospect of employees, through AI tools, inadvertently using IP that does not belong to them and exposing their organisation to legal challenge.


All of these risks can be reduced by ensuring staff are trained in the safe use of AI and are guided by clear policies. This is increasingly a necessity, regardless of an organisation's endorsed level of usage of such tools.

The need for awareness raising and training is being recognised by legislators too. The EU's AI Act, with effect from 2 February 2025, not only imposes strict limits on certain uses of AI in the workplace but also requires those within organisations using AI to have sufficient technical knowledge to understand how it can be used safely. This knowledge is termed "AI literacy" and encompasses the ability to assess the opportunities and risks of AI usage. While these obligations will of course only apply to UK companies operating in or selling to the EU, they illustrate a growing recognition of the risks employers face. And, as the UK may face pressure from across the Atlantic to deregulate its technology sector, this may well be a problem employers have to manage themselves.

If you would like any advice on AI usage in your workplace, or details of the AI training and employment policies we can offer, please get in touch.
