
Key implications of Hong Kong’s new SFC circular on GenAI Language Models

Hong Kong’s Securities and Futures Commission (SFC) has issued new guidance on the use of GenAI for corporations licensed under its rules, including private equity firms, asset managers and hedge funds. Its GenAI circular (the “AI Circular”), together with its accompanying Appendix, sets out the SFC’s view of the key risks and advocates a risk-based approach when implementing AI language models.

The compliance impact of this guidance is expected to be widely felt across the industry and throughout the AI third-party supply chain. Licensed corporations should begin implementation planning immediately, particularly for high-risk use cases.

Scope and approach

Effective immediately, the AI Circular applies to licensed corporations offering services or functionality provided by AI language models, or by third-party products based on AI language models, in relation to their ‘regulated activities’ (such as dealing in securities or futures contracts, providing automated trading services, or dealing in over-the-counter derivative products), regardless of whether the AI language model is developed by the licensed corporation or by an external service provider.

The AI Circular recommends that licensed corporations adopt a risk-based approach to implementing AI language models, commensurate with the materiality of the impact and risk presented by specific use cases of GenAI language models. It also sets out a non-exhaustive list of “high-risk” use cases (identified as providing investment recommendations, investment advice or investment research to investors or clients) for which extra risk mitigation should be adopted.
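To make the risk-based approach concrete, the Python sketch below shows one way a compliance team might triage GenAI use cases before deployment. This is a minimal, illustrative sketch only: the use-case names, the HIGH_RISK_USE_CASES set and the requires_enhanced_mitigation helper are our own assumptions rather than terms defined in the AI Circular; only the three high-risk examples come from the SFC’s non-exhaustive list.

from enum import Enum, auto

class UseCase(Enum):
    INVESTMENT_RECOMMENDATIONS = auto()  # named as high-risk in the AI Circular
    INVESTMENT_ADVICE = auto()           # named as high-risk in the AI Circular
    INVESTMENT_RESEARCH = auto()         # named as high-risk in the AI Circular
    INTERNAL_DRAFTING = auto()           # illustrative lower-risk example
    MEETING_SUMMARIES = auto()           # illustrative lower-risk example

# The Circular's high-risk list is non-exhaustive, so this set should be
# kept under regular review rather than treated as fixed.
HIGH_RISK_USE_CASES = {
    UseCase.INVESTMENT_RECOMMENDATIONS,
    UseCase.INVESTMENT_ADVICE,
    UseCase.INVESTMENT_RESEARCH,
}

def requires_enhanced_mitigation(use_case: UseCase) -> bool:
    """Return True where extra measures are expected, e.g. model
    validation, human-in-the-loop review and robust output testing."""
    return use_case in HIGH_RISK_USE_CASES

For example, requires_enhanced_mitigation(UseCase.INVESTMENT_ADVICE) returns True, which in practice would route that use case into the enhanced controls described under the core principles below.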

Core principles for risk mitigation

The AI Circular sets out four core principles for managing the risks of implementing and using AI language models:

1. Senior management responsibilities

The SFC emphasises that a licensed corporation’s senior management must have in place sufficient resources, effective policies, procedures and internal controls, and “suitably qualified and experienced” governance and oversight throughout the ‘lifecycle’ of an AI language model (from design, implementation, training and testing to management, validation, approval, use and decommissioning).

Where certain functions are delegated to group companies, the licensed corporation itself remains responsible for overall compliance. The SFC has cautioned against over-reliance on AI language models in agentic AI workflows (where an AI language model is incorporated into software programs with the ability to make step-by-step plans), where it is typically difficult to build in sufficient human supervision and intervention capabilities.

2. AI model risk management

When designing, customising or improving an AI language model, a licensed corporation is required to have adequate validation processes, with the staff who develop the model segregated from those who perform validation, approval and ongoing review and monitoring, to test on a regular basis for potential issues surrounding hallucination, bias, drift and AI deception, among other quality issues.

High-risk use cases are subject to enhanced risk mitigation measures, including conducting model validation, ongoing review and monitoring to improve information accuracy; having a human-in-the-loop to address hallucination risk; and implementing robust output testing procedures. In addition, licensed corporations adopting high-risk uses are required to comply with notification requirements (see below).
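By way of illustration, a human-in-the-loop control for a high-risk use case might look like the minimal Python sketch below. The Draft type, the reviewer_approves input and the release/reject behaviour are hypothetical assumptions for this sketch; in practice the decision would come from a suitably qualified reviewer via a review workflow, not a plain boolean.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    prompt: str
    model_output: str
    approved: bool = False

def human_in_the_loop_gate(draft: Draft, reviewer_approves: bool) -> Optional[Draft]:
    """Hold high-risk GenAI output until a qualified human signs it off.

    Returns the approved draft for release, or None if it is rejected,
    so unreviewed model output can never reach investors or clients.
    """
    if reviewer_approves:
        draft.approved = True
        return draft
    return None  # rejected drafts are never released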

3. Cybersecurity and data risk management

Licensed corporations are required to keep on top of current and emerging cybersecurity developments and evolving threats relating to AI language models, such as adversarial attacks (e.g., tricking the AI into overriding system prompts, malicious code, jailbreak attacks) and theft or leakage of user information (including confidential information and personal data).

Effective cybersecurity policies, procedures and internal controls need to be in place to manage the associated cybersecurity risks, including having measures to promptly identify cybersecurity intrusions and, where appropriate, suspend the use of an AI language model.
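As a purely illustrative sketch of the kind of controls contemplated here, the Python below screens incoming prompts for crude injection patterns and includes a circuit breaker for suspending model use after a confirmed intrusion. The patterns, the model_suspended flag and the call_model stub are all hypothetical assumptions; real deployments would rely on layered, adaptive defences rather than a keyword list.

import re

# Crude, illustrative patterns only; real defences are layered and adaptive.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (the|your) system prompt", re.IGNORECASE),
]

model_suspended = False  # circuit breaker, flipped on a confirmed intrusion

def call_model(user_input: str) -> str:
    """Stand-in for the real AI language model call (hypothetical)."""
    return f"[model response to: {user_input}]"

def handle_request(user_input: str) -> str:
    if model_suspended:
        # use of the model can be suspended where appropriate
        return "AI service temporarily suspended."
    if any(p.search(user_input) for p in SUSPICIOUS_PATTERNS):
        # log and escalate instead of answering a suspected injection
        return "Request declined and logged for security review."
    return call_model(user_input)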

Licensed corporations should ensure they comply with the existing Circular on Data Risk Management, as well as the PCPD’s Artificial Intelligence: Model Personal Data Protection Framework.

4. Third-party provider risk management

Licensed corporations should thoroughly evaluate third-party service providers during the selection process to ensure that they align with the licensed corporation’s standards. Third-party contracts should clearly define the responsibilities shared between licensed corporations and service providers concerning cybersecurity and data protection.

An assessment should be made of the licensed corporation’s level of dependence on third-party AI language models, the potential material adverse impact on the licensed corporation in the event of a third-party-induced breach of applicable data privacy or IP laws, and whether appropriate risk management measures can be put in place. Where appropriate, licensed corporations should seek indemnity protection for heightened risks of breach of laws.

Notification requirements for high-risk AI language model uses

Licensed corporations adopting high-risk AI language model uses are required to comply with notification requirements regarding significant changes in the nature of their business and services under the Securities and Futures (Licensing and Registration) (Information) Rules.

These include notifying the SFC of any significant changes in the nature of their business and the types of service they provide, and discussing their plans with the SFC at the business planning and development stage to avoid potential adverse regulatory implications.

Key takeaways

When using AI technologies, licensed corporations will need to consider:

  • Undertaking gap assessments between the AI Circular’s requirements and existing policies, procedures and governance frameworks, and setting up AI risk assessment frameworks (if not already in place). 
     
  • Maintaining a proper AI governance framework and oversight mechanisms throughout the whole lifecycle of an AI language model, including monitoring its use cases as they evolve. 
     
  • Ensuring senior management and staff are appropriately trained on AI governance requirements. Senior management remain responsible for management and supervision.
     
  • Conducting ongoing due diligence on new and existing third-party vendors to ensure that they are subject to appropriate contractual obligations (e.g., targeted security measures, as well as indemnities for recourse, if needed). 
     
  • Identifying high-risk use cases and applying additional, tailored measures to mitigate the relevant risks (e.g., human-in-the-loop review). 
     
  • More broadly, monitoring emerging AI risks and adapting existing processes to address them. 

Looking ahead

The SFC has indicated it will take a "pragmatic approach" in assessing compliance, but licensed corporations should ensure that they have properly documented their processes and procedures. These enhanced compliance measures will involve additional costs, and licensed corporations should consider reviewing their compliance budgets.

Companies, regardless of whether they are licensed by the SFC, will undoubtedly feel the impact of this AI Circular. Non-licensed third-party providers of AI technology solutions or services can expect to see flow-down obligations and added cooperation responsibilities when contracting with licensed corporations, to help those corporations satisfy the AI Circular’s requirements.


Tags

ai, fintech