3 minute read

Issues for Boards: Managing AI risk in 2025 - updated guidance

Just two years ago, generative AI was a novelty. Today it’s everywhere. Early AI discussions centred on legal topics such as copyright; now we see a surge in AI-specific laws. For boards, managing AI is no longer just good practice – it’s a legal requirement. New regulations are making some best practices mandatory.  

In 2023, members of our global TMT team laid out seven “rules” for dealing with AI. In theme 3 of our 2025 Issues for Boards publication, they assess how far these rules remain true. Here is a summary of their conclusions:

1| 2023: Don’t trust it – at least not yet. 2025: Getting better, but you still need to check 

AI's accuracy has improved, but it still produces plausible yet strikingly incorrect answers, making it risky to rely on without human oversight. Read more: UK – The LinksAI English law benchmark (Version 2)

2| 2023: Don’t tell it anything private or confidential. 2025: Risk remains but can be mitigated if you build your own system 

To mitigate the risk of disclosing confidential information, organisations should either develop internal AI systems or, when using externally hosted AI systems, implement thorough due diligence procedures and obtain contractual assurances about data usage.

3| 2023: Make sure third parties don’t rely on its output. 2025: Regulation and reputation mean that organisations must take a degree of responsibility for outputs 

Organisations will need to take more responsibility for AI outputs, manage their AI processes, comply with emerging regulations, and disclose when the output is AI generated, while ensuring failures are monitored and addressed.

4| 2023: Be alert that some intellectual property issues are unresolved. 2025: This remains the case, and data enforcement is increasing 

The use of copyrighted materials to train AI has led to significant litigation, and uncertainty persists about how IP laws apply to AI. With data enforcement increasing, it is crucial that legal, technology, and marketing teams collaborate to protect IP rights and avoid infringement.

5| 2023: Be careful about bias and discrimination. 2025: That advice remains. Ensuring data integrity is key to successful deployment of AI 

As AI’s use becomes more pervasive, so do the risks of bias and discrimination, which require close cooperation between technology, procurement, and legal teams to address. Data provenance and integrity, along with proper training and testing, are essential for the successful adoption of AI.

6| 2023: Consider outsourcing rules. 2025: Equal focus on supply chain and staff 

Focus is shifting from establishing terms and rules that keep AI use compliant with procurement and outsourcing requirements, towards the longer-term impacts of increasing AI adoption on staffing: the balance between AI as an assistant and AI as a replacement for human resources. The near-term priority remains training, so that the organisation can fully benefit from the technology and staff can adapt effectively to this new form of hybrid working.

7| 2023: Be ready for regulation. 2025: Focus on managing risks 

Regulation now in place is compelling organisations at each level of the AI supply chain to adopt appropriate proactive controls. AI use cases can materialise across the business and central functions, so specialists from affected areas must be involved early to assess the risks for their area, using tools from legal, compliance, and risk teams to avoid management log-jams.

Additionally, regulations impose AI literacy obligations on organisations: staff must be trained on AI's impacts, management must be aware of AI risks, and decisions must be properly informed. Boards must be briefed on strategic AI issues and the associated legal, operational and ethical risks and mitigants.

Issues for boards and GCs to consider:

Today’s AI landscape demands that organisations do more than simply adopt powerful tools. They must build robust governance and training structures, maintain thorough oversight, and engage in continuous risk management. Ensuring that your organisation can explain how it uses AI and mitigates its risks is now a legal necessity.

Read the full article: Issues for Boards 2025: Managing AI risk in 2025: updated guidance

Further reading: Updated AI Toolkit - Ethical, safe, lawful: A toolkit for AI projects 

