
Singapore’s Model AI Governance Framework for Generative AI 2024 – a framework on how organisations can best deploy AI

Singapore’s Infocomm Media Development Authority (IMDA) has published a Model AI Governance Framework for Generative AI, enhancing its earlier model governance framework covering ‘traditional’ AI with a more comprehensive approach aimed at addressing the new risks arising from the use of generative AI.

The GenAI Framework provides a useful governance framework to guide organisations in their development or deployment of GenAI. It makes several recommendations which may require AI developers and companies to shoulder more responsibility around GenAI, with goals of safety, accountability, transparency and security that are consistent themes echoing similar principles in the existing Traditional AI Framework.

There are also a number of policy recommendations which, if adopted, could result in a shift from Singapore’s current soft-law, voluntary regime towards the more AI-specific, hard-law approach being followed in other major jurisdictions.

Key points for organisations and developers

The GenAI Framework presents several “dimensions of trust” for organisations to consider when fostering a trusted AI ecosystem, together with helpful recommendations for AI developers and organisations deploying and using generative AI.

Accountability underpins a trusted AI ecosystem and ensures that every player in the AI development chain takes responsibility towards end-users.

  • Organisations should establish clear processes and procedures for every stakeholder to ensure accountability across the AI lifecycle.
  • Responsibility should be allocated across the full AI ecosystem where possible; where new or unanticipated issues arise, indemnities and insurance should be considered to cover end-users.

Policy recommendation: Existing laws may need to be updated to ensure that emerging risks from the use of AI can be addressed. AI-specific laws have already been adopted in China, in the European Union and, at State level, in the US.

An AI ecosystem's foundation lies in data quality and transparency

  • More detail is provided on how AI developers should deploy privacy-enhancing technologies, such as anonymisation techniques, in the development of AI models, or undertake data quality control measures to ensure that the datasets used to train AI models remain of high quality.

Policy recommendation: Policymakers should better articulate how existing data protection laws apply to generative AI and develop approaches to resolve difficult intellectual property issues such as the use of copyright material in training datasets.

Trusted and safe development and deployment 

  • Transparency must be ensured in development processes, for example by disclosing information, akin to “food labels”, on the data used, the training infrastructure for the AI model, evaluation results, safety measures undertaken, risks and limitations of the model, the intended use of the model and user data protection policies.
  • Safety best practices should also be implemented across the AI development lifecycle - for instance, fine-tuning AI models with human feedback to better align the models with human preferences and values and reduce potentially harmful output from AI (like hallucinations). 

Policy recommendation: Governments need greater transparency for higher-risk models, especially those with national security or societal implications. In addition, there is a need to develop a comprehensive safety evaluation framework for models.

Incident reporting

  • Organisations are encouraged to establish precise processes enabling swift incident and vulnerability monitoring and reporting. 
  • Depending on the circumstances, certain incidents could trigger notification requirements to the public and/or the regulator.

Policy recommendation: The legal reporting requirements in the EU AI Act for high-risk AI systems are cited as an example. This may be an indicator of future reporting requirements which would supplement existing incident reporting obligations under local law. Read our thoughts on the EU AI Act for further insight.

External validation through third-party testing provides transparency and builds greater trust and credibility with end-users 

  • Third-party testing should be conducted in a standardised way, with common benchmarks and methodologies, and should be carried out by independent and qualified testers (e.g. through an accreditation system).

Creating and maintaining secure AI systems lies at the heart of fostering a trusted AI ecosystem 

  • AI developers should employ a “security by design” mindset across the AI system’s development life cycle and adopt security concepts to ensure that systems are not vulnerable to attack or misuse. 
  • Safeguards which can be employed include input filters to detect unsafe prompts, and digital forensic tools to reconstruct a cybersecurity incident carried out using generative AI. 

Content provenance and AI-generated output: transparency about AI-generated content is needed to tackle the rise of harmful content and misinformation generated by AI

  • Transparency can be achieved through technical solutions, such as digital watermarking and cryptographic provenance. 
  • Organisations should also ensure that any use of AI-generated content, including with partners such as content publishers or social media sites, supports the embedding and display of digital watermarks and provenance details, ideally in a standardised manner. 

Safety and alignment research and development

Policy recommendation: As safety techniques and evaluation tools have limitations, it is important to keep pace with emerging AI risks by accelerating, and cooperating globally on, research and development in AI model safety and alignment.

AI for the public good: AI should ultimately be designed and deployed to benefit the public

Policy recommendation: This can take the form of increased cooperation between governments and international organisations to support each other and to democratise access to AI. The framework also emphasises workforce upskilling and sustainable growth, including reducing the carbon footprint of AI.

Signalling future changes?

There are signposts in the GenAI Framework which may signal future changes to Singapore law as the technology evolves - for example, with respect to fostering greater transparency to the government on higher-risk AI models, and the need to update existing laws to address emerging risks from the use of AI. Organisations may also need to notify the public or the Singapore Government where there is a “severe AI incident”. 

These references signal a potential shift away from Singapore’s current voluntary compliance regime (based on the use of ‘soft law’ such as frameworks and principles). It remains to be seen if or how the Singapore Government will approach these legal issues and whether it will take a leaf from the approach of other leading nations in enacting legislation specifically addressing the use of AI.
