
The rise of the digital employee: Navigating employment law risks in the age of agentic AI

A profound transformation is on the horizon for the workplace with the rise of agentic AI, or a digital workforce working side by side with humans. As businesses use agentic AI and embrace digital employees to enhance efficiency and reduce costs across the organisation, a critical question arises: how do we reconcile the autonomous, often opaque nature of agentic AI with the fundamental principles that underpin employment law, particularly when deployed in HR decision-making?

The reality of the digital workforce

These AI agents, or digital employees, are much more than chatbots; they are artificial intelligence systems capable of accomplishing specific goals with limited supervision, mimicking human decision-making to solve problems in real time. They can operate with considerable autonomy and without direct human supervision, adapting their approach based on changing circumstances and learning from outcomes.

This is now reality in some businesses, where digital employees are sitting alongside human employees in the organisation chart and are considered a parallel workforce rather than a tool.

The foundation: fairness, transparency and reasonableness

The core principles of fairness, reasonableness and transparency in employer decision-making are fundamental to employment law frameworks across the globe. These principles are not merely aspirational - they are baked into the legal tests that determine employees' rights and employers' duties. In the UK, for instance, the Employment Rights Act 1996 requires employers to act reasonably and carry out a fair process when dismissing an employee. Similar requirements exist across numerous jurisdictions.

These frameworks assume that human decision-makers can articulate their reasoning, that processes can be scrutinised and that employees can understand - and challenge - the basis for decisions affecting them. The implied term of mutual trust and confidence between employer and employee in many of these legal frameworks serves as the lynchpin of the employment relationship, requiring transparency and open communication.

The Black Box problem

Herein lies the fundamental tension with agentic AI. When digital employees are deployed, for example, to make decisions concerning the employment lifecycle - from recruitment and performance management to allocation of work, promotion decisions and terminations - the "black box" nature of their operations runs contrary to the principles of procedural fairness and transparency.

Consider a practical scenario: an employee brings an unfair dismissal claim to an employment tribunal. Traditionally, the employer would present witnesses - typically HR professionals and line managers - who can explain the decision-making process, the rationale for the dismissal and the procedural steps followed. But where agentic AI has made or substantially influenced the decision, who attends as a witness? How does an employer defend the claim when the decision-making process involves complex algorithms and machine learning patterns that even the system’s designers may struggle to fully explain?

The success of a defence in employment litigation claims often relies heavily on a paper trail and documentary evidence supporting actions taken and decisions made. Where agentic AI or machine learning technology drives decisions, accountability and decision-making processes become extraordinarily difficult to evidence in a manner that satisfies legal scrutiny.

The discrimination dilemma

The risk of discrimination and bias when deploying agentic AI may be more acute than with other forms of AI. Traditional AI systems, whilst potentially problematic, typically produce relatively predictable outputs based on their training data and programming. Agentic AI, however, does not always generate predictable results because it can develop new decision-making patterns as it learns and adapts, potentially creating discriminatory outcomes that were not present during initial system design or testing.

This poses a significant legal challenge. Many legal frameworks, such as the Equality Act 2010 in the UK, impose strict liability on employers for discriminatory outcomes of AI tools, even when discrimination was not intended. The legislation holds employers accountable for the impact of their decisions, not merely their intentions. An employer cannot simply argue that they delegated decision-making to an AI system and therefore bear no responsibility for discriminatory results.

The autonomous learning capabilities of agentic AI mean that bias can emerge and evolve over time, potentially in ways that diverge significantly from the system's original parameters. An algorithm might, for instance, begin to favour certain demographic groups based on patterns it identifies in performance data, without any human having programmed such preferences - and, if appropriate checks and balances are not in place, potentially without anyone noticing until a pattern of discriminatory decision-making has already occurred.

The indispensable human element

Current AI tools (including agentic AI tools) require some element of human supervision to mitigate the risks of discrimination and bias. This is not dissimilar to how a second human review layer is used in moderation processes and appeal mechanisms for traditional HR procedures. 

Moreover, the legal landscape is evolving to address these concerns directly. In Europe, the EU AI Act imposes specific requirements around automated decision-making, and similar frameworks are emerging globally, including in the US. Various jurisdictions are introducing regulation specifically targeting AI deployment in employment contexts, recognising the unique vulnerabilities and power imbalances inherent in the employment relationship.

Striking the balance: Practical recommendations

Whilst the challenges associated with agentic AI need to be carefully considered, there are also significant opportunities for improving efficiency, boosting productivity, reducing administrative burdens and even enhancing fairness by removing certain forms of human bias from decision-making. 

The challenge is to harness these benefits whilst mitigating legal risks and maintaining robust compliance frameworks. Here are some key steps that organisations considering agentic AI deployment as part of employment and HR decision-making should take:

  • Prepare comprehensive risk assessments that evaluate legal, ethical, regulatory and operational implications across all jurisdictions in which they operate. These assessments should specifically address how the technology aligns - or conflicts - with local employment law principles.
     
  • Implement meaningful human oversight at critical decision points: particularly for high-stakes employment decisions such as dismissals, redundancy selection or significant disciplinary actions. This oversight should be genuine and substantive, not merely a rubber-stamping exercise.
     
  • Adopt transparency mechanisms as essential: employees should understand when AI is being used in employment decisions and have access to meaningful information about how such systems operate. Documentation processes must be enhanced to create the evidentiary trail necessary to defend employment claims, including records of human oversight and intervention.
     
  • Regular auditing of AI systems for discriminatory outcomes should become standard practice: with particular attention to how agentic systems may be evolving their decision-making patterns over time.
     
  • Invest in training for HR professionals and managers: so they understand both the capabilities and limitations of agentic AI, ensuring they can effectively supervise these digital employees when needed.
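
To make the auditing recommendation concrete, here is a minimal sketch of one common screening heuristic - the "four-fifths rule" - applied to a hypothetical decision log. This is an illustration only, not a compliance tool: the data, function names and 0.8 threshold are assumptions introduced for this example, and a real audit would be far broader.

```python
# Minimal sketch of a periodic bias audit over an AI system's decision log.
# The "four-fifths rule" is a screening heuristic: a group's selection rate
# below 80% of the highest group's rate warrants closer investigation.
# All data and names below are hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected: bool) tuples from the decision log."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical log: group A selected 40% of the time, group B only 20%.
log = [("A", True)] * 40 + [("A", False)] * 60 + \
      [("B", True)] * 20 + [("B", False)] * 80
print(adverse_impact_flags(log))  # → {'A': False, 'B': True}
```

A check like this only surfaces a pattern for human investigation; it does not establish or rule out unlawful discrimination, and because agentic systems adapt over time, it needs to be re-run regularly rather than once at deployment.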

The rise of agentic AI in the workplace may be inevitable, but its integration into employment decision-making must be managed thoughtfully.


Tags

agentic ai, ai, employment & culture