
Scaling AI to drive value: Addressing the challenges

As we progress through 2024, interest in and adoption of AI continue to soar. In May, the OECD reported that since its AI principles were first adopted in 2019, venture capital investment in generative AI startups has increased nine-fold, demand for AI skills has soared by 130%, and the share of large firms in the OECD using AI has nearly doubled. The latest McKinsey global survey on the state of AI suggests that GenAI adoption is not only spiking but starting to generate value, particularly for those businesses that focus on using the technology to solve specific problems.

However, all businesses rolling out AI need to give due consideration to practical limitations and to the associated legal and reputational risks. Failure to do so is not just a wasted investment; it carries a real risk of harming performance and reputation.

AI adoption 2023–24

McKinsey has confirmed that in the past year there has been a dramatic increase in AI adoption worldwide.

While 65% of respondent organisations in McKinsey's global survey are regularly using GenAI in at least one business function (a 50% increase in the last 10 months), the average organisation using GenAI is doing so in two functions. This is most commonly in marketing and sales and in product and service development, through a mix of off-the-shelf generative AI capabilities, significantly customised models and models developed in-house.

Business readiness 

Businesses that are succeeding in driving value from AI are focusing on using it to address specific pain points and to add tangible value, identifying clear use cases aimed at improving productivity and efficiency (for example in supply chains), enhancing customer service or creating new revenue streams.

As we saw in the early days of the blockchain hype, businesses need to avoid starting with the technology and searching for a problem for it to solve. Indeed, it is the question of “why”, the proof of value, and determining the right KPIs to measure that value to the business that matter more than the use case itself.

Many are now realising that AI implementation and risk management require a holistic approach. Indeed, we recommend that senior executives take the lead in advancing the implementation of all forms of AI in their organisations rather than relegating it to their IT departments or junior workforce.

The holistic approach includes legal, risk and compliance playing important and strategic roles throughout the process and overseeing its governance and controls – all while preserving a human touch in its implementation and maintaining careful attention to ethical considerations. 

Addressing the challenges – practical and legal

We examine five key practical challenges, and the related legal and risk management issues, that need to be addressed for the successful implementation of AI in business operations:

01 Strategy 

  • Roadmap: Organisations can run into difficulties where they lack a clear AI strategy with use cases aligned with their wider business objectives. They should define a clear strategic roadmap for both implementing and scaling AI, including specific goals, performance metrics and a governance framework for ongoing monitoring and evaluation (a minimal illustrative sketch follows this list). 
     
  • Risk allocation in AI investment: The digitalisation trend, with more mergers and consolidation across industries, is driving increased convergence – and with it greater merger control risk. Foreign investment rules can also affect investments in AI as critical infrastructure, depending on the nature of the investment. With regulators increasingly intervening in transactions, forward planning to address risk allocation in tech deal-making is crucial.  
     
  • Navigating regulation: A strategic approach is also required for organisations to navigate the evolving body of law and regulation emerging across the globe to address the safe deployment of AI – most notably in China, the EU and the US. Compliance with existing and emerging regulation is imperative to maintaining trust.
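
By way of illustration only, the sketch below shows one lightweight way to capture such a roadmap as structured data, pairing each use case with an accountable owner, target KPIs and known risk areas. The use cases, owners and metrics are hypothetical, and the format is an assumption rather than any recommended standard.

```python
# Illustrative only: making AI use cases, owners and KPIs explicit so pilots
# can be tracked against the business value they are meant to deliver.
# All names, metrics and risks below are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class UseCase:
    name: str
    business_goal: str          # the "why" behind the use case
    owner: str                  # accountable senior sponsor, not just IT
    kpis: dict[str, float]      # metric name -> target value
    risks: list[str] = field(default_factory=list)


roadmap = [
    UseCase(
        name="Supplier invoice triage",
        business_goal="Cut invoice processing time in the supply chain",
        owner="COO",
        kpis={"avg_processing_hours": 4.0, "error_rate_pct": 1.0},
        risks=["data protection", "third-party model dependency"],
    ),
    UseCase(
        name="Customer service assistant",
        business_goal="Improve first-contact resolution",
        owner="Head of Customer Operations",
        kpis={"first_contact_resolution_pct": 75.0},
        risks=["hallucination", "reputational"],
    ),
]

for uc in roadmap:
    print(f"{uc.name}: goal='{uc.business_goal}', owner={uc.owner}, KPIs={uc.kpis}")
```

Keeping the roadmap in a reviewable, structured form also gives governance functions a single artefact to monitor and evaluate against.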

02 Data and technical infrastructure

  • Infrastructure needs: Establishing the robust infrastructure needed to accommodate AI systems can be a major issue for the many companies still relying on legacy systems. The time-consuming work of preparing and integrating the vast amounts of high-quality data required to train GenAI can also create an implementation bottleneck. Businesses need to consider solutions to these practical issues – such as modern cloud AI solutions – at the outset.
     
  • Data protection: AI-powered targeted marketing may breach the myriad data protection rules spreading around the world. Companies need to budget for and ensure that required notices are sent to customers, that consents are obtained or opt-outs provided, that data processing is fair and impartial (with further explanations of data use available), and that multi-language customer hotlines and resources are provided to address transparency requirements (a minimal illustrative sketch of consent filtering follows this list). 
     
  • IP infringement: There is much focus on AI-specific regulation; from a practical perspective, however, the more pressing issue is the treatment of IP with respect to AI inputs (the data used for training) and outputs (which may reproduce original works). IP issues are already creating real litigation risks, which need to be mitigated (for example through IP infringement indemnities) or understood and accepted as reasonable commercial risks. 
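
To illustrate the data protection point above, the minimal Python sketch below filters out customer records that lack a valid marketing consent or have opted out before they are used to drive AI-powered targeted marketing. The record format and field names are hypothetical assumptions; real compliance also requires notices, a lawful basis, retention limits and auditability under the applicable rules.

```python
# A minimal, hypothetical consent filter applied before customer data feeds
# AI-powered targeted marketing. A later opt-out always overrides consent.
from dataclasses import dataclass
from typing import Iterable, Iterator


@dataclass
class CustomerRecord:
    customer_id: str
    marketing_consent: bool   # explicit consent captured at collection time
    opted_out: bool           # subsequent opt-out must always win


def eligible_for_ai_marketing(records: Iterable[CustomerRecord]) -> Iterator[CustomerRecord]:
    """Yield only records that may be used for targeted-marketing models."""
    for record in records:
        if record.marketing_consent and not record.opted_out:
            yield record


customers = [
    CustomerRecord("c-001", marketing_consent=True, opted_out=False),
    CustomerRecord("c-002", marketing_consent=True, opted_out=True),
    CustomerRecord("c-003", marketing_consent=False, opted_out=False),
]
print([c.customer_id for c in eligible_for_ai_marketing(customers)])  # ['c-001']
```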

03 Security 

  • Cyber and data breach: Increasingly sophisticated – and increasingly AI-enabled – attacks that lead to the theft of proprietary or personal customer data can result in serious reputational damage, as well as hefty regulatory fines and ensuing litigation, including potential class actions.  
     
  • Defences: Protecting sensitive data while leveraging AI requires robust security measures, and balancing innovation with privacy is critical to managing legal and reputational risk. Given that criminals can exploit AI developments for nefarious ends, organisations must continuously evolve their defence mechanisms, both technologically and culturally. 
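
As one illustrative defensive control, the short sketch below redacts obvious personal data (email addresses and phone-like numbers) from text before it is passed to an external AI service. The patterns are deliberately simple assumptions; production controls would combine proper PII detection, access controls and encryption.

```python
# Illustrative redaction of obvious personal data before text is sent to an
# external AI service. The regular expressions are intentionally simple.
import re

EMAIL = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def redact(text: str) -> str:
    """Replace obvious email addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    text = PHONE.sub("[REDACTED_PHONE]", text)
    return text


prompt = "Customer Jane Doe (jane.doe@example.com, +44 20 7946 0958) reports a failed payment."
print(redact(prompt))
# Customer Jane Doe ([REDACTED_EMAIL], [REDACTED_PHONE]) reports a failed payment.
```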

04 Supply chain

  • Third party risk: There is a direct correlation between the rise of AI and operational risk, which increases further with the third-party dependence prevalent in the supply chains that come with buying in AI technology and capability.
     
  • Financial services: AI adoption is high in financial services, the industry with the most regulatory guidance on the adoption of AI. Adoption often involves establishing new servicing arrangements with AI companies. Managing operational risks across supply chains is a regulatory as well as a reputational imperative – for example, the EU’s new Digital Operational Resilience Act (DORA) adds rules for financial institutions and their critical third-party tech providers.

05 Talent

  • War for talent: The need for staff with AI expertise and hybrid skills, drawn from a limited talent pool, is a challenge for all businesses and is creating a war for talent among big companies. In media and other creative industries, striking the right balance between saving human effort on content creation and employing legions of regulatory- and culturally-savvy teams to moderate content is another modern business dilemma requiring strategic HR decision-making.  
     
  • Employee risk: AI is driving a workplace revolution with a surge in the use of chatbots or large language models (LLMs), such as ChatGPT, for work purposes. A lack of bespoke rules means employers must be alive to the risks as they negotiate a matrix of existing employment laws when AI starts to impact employee roles.
     
  • Training: Employees experimenting with AI tools need guardrails. Organisations need to develop quality training that meets the needs of employees across the entire business, ensures human supervision and maintains the critical mindset needed to tackle GenAI hallucination risk. 
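
As a minimal illustration of such a guardrail, the sketch below routes model outputs to a human reviewer unless they cite an approved internal source. The routing rule and source list are hypothetical assumptions; real policies would be set with legal, risk and compliance and embedded in training.

```python
# Illustrative human-in-the-loop guardrail: outputs without a citation to an
# approved source are routed to a human reviewer rather than used directly.
APPROVED_SOURCES = ("internal-kb://", "https://docs.example.com/")


def needs_human_review(model_output: str) -> bool:
    """Flag output for review unless it cites at least one approved source."""
    return not any(source in model_output for source in APPROVED_SOURCES)


draft = "Our policy allows refunds within 60 days."
if needs_human_review(draft):
    print("Route to human reviewer before use.")
else:
    print("Approved source cited; still spot-check periodically.")
```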

Looking ahead

Addressing these AI-related challenges and managing the related risks will be a recurring theme for tech-driven businesses for years to come. It requires a strategic approach, and it takes focus, investment, meticulous planning – and tenacity – to see AI pilots through to scale. 

And whilst experimenting with the power of machines, businesses still need to stay human-centred, actively engaging with their employees and taking them along on the AI journey. All in all, this entails a cultural revolution for many businesses if they are to be ready to leverage AI’s advantages. 

Tags

ai