In a recent interview with Nasdaq Trade Talks, our US TMT partner Ieuan Jolly discussed the importance of AI policies and governance programs in the age of GenAI.

We are in the middle of an AI revolution right now

Ever since ChatGPT lit up the internet in November 2022, companies can’t stop talking about artificial intelligence. Take this earnings season so far: references to AI on calls with investors are already up 77% from last year. In five months, ChatGPT accelerated past 100 million users; it took Google five years to reach the same number with Gmail.

Competitive pressures, and the advantages AI optimization confers, are reshaping business processes and transforming entire industries. Companies are dedicating more resources to developing, procuring, or acquiring AI tools and strategies to keep pace. This momentum is unstoppable right now, and it is not just the tech companies that are embracing AI.

AI is transforming industries

We are working with clients in many traditional sectors that are using and integrating AI into their digital transformation strategies: for example, in the automotive and logistics sector with autonomous driving and connected cars; in healthcare with connected devices and wearable technologies; and in digital marketing and ecommerce, where AI is providing a turbo boost.

Consumer engagement

One of the most fascinating effects of GenAI is its ability to increase engagement and capture users’ attention. In this information age, attention is the currency of the economy, and these systems are competing for yours as a consumer. Screen time keeps growing longer each day, and now we have a tool that will keep you engaged for hours.

The need for guardrails

The creators of these chatbots realize that they are playing with dynamite. These tools can have hugely powerful effects on society, for good and for ill.

We are aware of a version of ChatGPT, not available to the public, that developers used as a testing ground, asking questions with the potential for life-changing catastrophes on a global scale. For example, you could ask: “What was the molecular composition that gave rise to the 1918 Spanish flu? How could I re-develop that to create a pandemic?” Another question capable of being answered: “How do you create a bomb?”

The big challenge with many of these capabilities is that, just as AI enables them without significant human intervention, their answers can be executed at a speed humans may not be able to monitor or react to quickly enough.

The threat to individual rights

AI systems also threaten individual rights. For example, algorithms that moderate content on social media platforms can unfairly restrict free speech and influence public debate. These algorithms rely on massive sets of personal data, and the collection, processing, and storage of that data frequently violate data protection rights.

Algorithmic bias can perpetuate existing structures of inequality in societies and lead to discrimination. The developers who create these tools realize that guardrails need to be installed to limit the harmful effects and are calling for regulation.

The legal framework governing AI is advancing

We see AI-specific laws being promulgated and enacted all over the world. There is currently no U.S. federal legislation specifically designed to regulate the use of AI; rather, AI systems and their output are governed by existing regulations and frameworks, including data protection, consumer protection, IP, employment, and market competition laws.

How to approach self-regulation and AI governance

Self-regulation and governance look different at every company and present a complex challenge: businesses need to address legal, technical, ethical (transparency, fairness, accountability, responsibility) and societal considerations. Mitigating that risk might involve the following steps:

  • Creating ethical guidelines – that provide instructions to developers on how to develop and use AI systems and embed privacy by design. Companies can promote transparency and explainability of GenAI models by providing clear information to users about how the models work, their limitations, and potential biases. This may include disclosing the training data used, the evaluation metrics, and the model architecture, as well as providing explanations of how the model generates content.

  • Developing data use policies – that govern the types of data that can be used to train GenAI models. These policies may require explicit consent from users for certain data usage, prohibit the use of certain types of data, such as sensitive personal information, for model training, or mandate data anonymization where appropriate. Implementing data hygiene standards becomes even more critical when using large language models (LLMs), which puts more pressure on companies to ensure that data is stored, filtered, and protected for use with AI.

  • Developing data hygiene standards – data, which can include a company’s transaction records, analytics, code, and other proprietary information, is the backbone of any AI model because it is what algorithms learn patterns from and use to make predictions. Data deletion standards and data quality controls, ensuring the data is properly formatted, organized (cleansed and categorized), and relevant for training AI models, not only reduce the risks associated with using LLMs but also significantly reduce storage size and costs (see the filtering sketch after this list).

  • Ensuring human oversight – to help identify and mitigate potential biases, inaccuracies, or ethical concerns in the outputs of GenAI models before they are published or shared.

  • Engaging third party auditors – to conduct audits of GenAI systems to ensure compliance with AI governance programs and help build trust with consumers and stakeholders.
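
To make the data-use and data-hygiene points above concrete, here is a minimal Python sketch of the kind of redaction step a data use policy might mandate before records enter a GenAI training corpus. The regex patterns, placeholder labels, and function names are illustrative assumptions, not a production PII pipeline; a real implementation would rely on a vetted PII-detection tool and legal review.

```python
import re

# Hypothetical redaction patterns for common identifiers; a production
# pipeline would use a vetted PII-detection library, not ad-hoc regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(record: str) -> str:
    """Replace likely personal identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        record = pattern.sub(f"[{label} REDACTED]", record)
    return record

def build_training_corpus(raw_records: list[str]) -> list[str]:
    """Scrub every record and drop any that are empty after cleaning."""
    cleaned = (scrub(r).strip() for r in raw_records)
    return [r for r in cleaned if r]

if __name__ == "__main__":
    raw = [
        "Customer jane.doe@example.com called from 555-867-5309 about order 42.",
        "   ",
    ]
    print(build_training_corpus(raw))
    # ['Customer [EMAIL REDACTED] called from [PHONE REDACTED] about order 42.']
```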

Given how fast these technologies move, some companies are simply not able to implement formal policies. Some are instead launching a GenAI pilot program: building a cross-functional governance committee and then creating a risk scale around use cases. A designated set of "beta testers" can then receive requests for potential use cases and execute against that risk stratification, as in the sketch below.
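
As a rough illustration of how such a use-case risk scale might be encoded, the Python sketch below defines a three-tier scale and a routing rule for incoming requests. The tiers, criteria, and dispositions are assumptions made for illustration; an actual governance committee would define its own.

```python
from enum import Enum
from dataclasses import dataclass

class Risk(Enum):
    LOW = 1     # e.g. internal brainstorming on public data
    MEDIUM = 2  # e.g. customer-facing drafts with human review
    HIGH = 3    # e.g. anything touching personal or regulated data

@dataclass
class UseCase:
    name: str
    touches_personal_data: bool
    customer_facing: bool

def stratify(use_case: UseCase) -> Risk:
    """Illustrative criteria only; real ones come from the committee."""
    if use_case.touches_personal_data:
        return Risk.HIGH
    if use_case.customer_facing:
        return Risk.MEDIUM
    return Risk.LOW

def route(use_case: UseCase) -> str:
    """Map the assessed risk tier to a disposition for beta testers."""
    return {
        Risk.LOW: "approved for beta testers",
        Risk.MEDIUM: "approved with mandatory human review",
        Risk.HIGH: "escalate to governance committee",
    }[stratify(use_case)]

print(route(UseCase("marketing copy draft", False, True)))
# approved with mandatory human review
```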

Guardrails around the output are also important: for example, clearly labelling output as generated by GenAI and putting limitations on what outputs can be used for. Equally important is documenting and recording the inputs for specific use cases alongside the corresponding outputs they generate, as in the logging sketch below.
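
One minimal way to implement both guardrails, labelling output as AI-generated and pairing each recorded input with its output, is an audit-log wrapper like the Python sketch below. The label text, file name, and record fields are assumptions chosen for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

AI_LABEL = "[AI-generated content]"  # assumed label text, set by policy

def label_output(text: str) -> str:
    """Prepend a clear GenAI provenance label to model output."""
    return f"{AI_LABEL} {text}"

def log_interaction(use_case: str, prompt: str, output: str,
                    path: str = "genai_audit.jsonl") -> None:
    """Append a structured record pairing each input with its output."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,
        "prompt": prompt,
        "output": output,
        # Fingerprint of the output, useful for later verification.
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

output = label_output("Here is a draft product description...")
log_interaction("marketing copy draft",
                "Write a product description for X", output)
```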

Overlaying human review, such as subject-matter experts (SMEs) on particular models and outputs, is important for ensuring a level of quality control: spotting inaccuracies and identifying biases that may creep in.

Overall value of AI governance

Given the legal risks associated with utilizing generative AI, an AI governance framework establishes a defensible position. Both regulators and plaintiffs’ lawyers actively seek opportunities to pursue legal action against companies they perceive as adopting AI in a reckless manner.

Establishing an AI governance program significantly reduces the risk of deploying AI systems that may unexpectedly cause harm, while also enabling companies to effectively counter claims that their AI tools were implemented without adequate consideration and oversight.  

Likewise, the adoption of AI increasingly entails potential reputational risk. However, the proactive implementation of an AI governance program serves as a powerful indicator to customers and employees alike that the organization conscientiously contemplates the responsible deployment and diligent monitoring of AI tools. This not only enhances trust and confidence in the organization but also demonstrates a commitment to ethical and responsible AI practices.

Finally, because AI models are only as good as the data they are trained on, data governance becomes imperative, particularly when using LLMs, both to avoid risk and to minimize the exorbitant costs associated with storing and processing large unstructured data sets.