Artificial intelligence has taken the business and social world by storm since ChatGPT broke onto the scene in late 2022. One year on, this FT Big Read article describes the phenomenon that our global tech team are seeing across geographies and sectors – businesses moving from exploration to monetisation of AI.

While the article focusses on the US$889 billion advertising industry, most of the underlying legal and regulatory issues are sector-agnostic. Below are some key points to consider as you move AI from pilot to pivot.

01| Shadow IT poses potentially costly cyberthreat

“Shadow IT” was prevalent in 2023: employees were so keen to keep pace with peers that they were unilaterally downloading, installing and using AI plug-ins that had not been tested and cleared by internal legal and compliance teams. 

In bypassing usual tech governance protocols, companies’ most proactive staff open the door not to untapped millions in revenue but to losses of similar sums, as cyber-criminals exploit the numerous vulnerabilities that come with such free-to-download tools.

We are increasingly training organisations on how to avoid modern hacks, the theft of proprietary data (or, perhaps worse, customer data), class actions and regulatory fines. A 10-second download may take 10 months or more to fix…

02| Targeted marketing conflicts with customers’ desire for privacy 

To get the best from AI tools, advertising companies and other organisations possessing treasure troves of customer data will want to leverage individuals’ profiles to create more impactful content. 

Unless customers have consented upfront, though, commercial objectives may breach the myriad data privacy rules spreading around the world. Profiling activities typically require uplifted information notices to be sent to customers, consents to be obtained or opt-outs provided, data processing to be fair and impartial, and further explanations of data use to be made available. Companies often don’t budget for multi-language customer hotlines, nor do they assign resources to dealing with the AI “transparency paradox”, when deploying this cutting-edge tech.

We are tracking these governance issues across borders, where multinationals are challenged most.

03| Convergence creates antitrust risks 

As the FT article points out, antitrust risks emerge when the in-house marketing teams of tech leaders and big consultancies potentially transform into commercially viable marketing enterprises in their own right.

These issues will deepen with more mergers and consolidation between industries, as is the trend with digitalisation. Companies that want to lead in this area must plan ahead, as competition regulators have become more aggressive and have been inserting themselves into transactions where their intervention would have been unexpected a few years ago. 

Whether in the real world or the virtual world, we are increasingly advising new clients, as well as new and existing clients in new markets, as the web of antitrust risk envelops the digital landscape.

04| Content creation versus moderation

AI hallucination risk remains, and giving customers free rein to post on the sky-high billboards described in the article adds to the need for content filtering. While social media marketing can turn products into overnight sensations in markets like mainland China, strict censorship rules mean that organisations face additional regulatory risk if they do not control marketing output.

Striking the right balance between saving human capital on content creation and employing legions of regulatory- and culturally savvy teams for content moderation will be a business dilemma that increasingly surfaces in 2024. Our employment specialists understand AI’s legal and ethical impacts and can support today’s strategic HR decision-making.

Recurring themes

These issues and others will keep recurring in tech-driven business. See further discussion of the impact of AI in our latest Tech Legal Outlook and this Linklaters Tech Insights blog.