IMDA’s approach to AI governance and policy with Denise Wong
Welcome to our AI podcast series. Each episode features business leaders from across the telecoms, media and technology (TMT) industry who discuss their AI-related insights, and what AI means to them, their organisation and the industry as a whole.
In this episode, Analysys Mason’s Paul Jevons, Director and expert in tech-enabled transformation and David Abecassis, Partner and expert in regulation and digital markets, talk to Denise Wong, Assistant Chief Executive for Data Protection and Innovation at the Infocomm Media Development Authority (IMDA).
IMDA is a statutory board under the Singapore Ministry of Communications and Information. It leads Singapore's digitalisation journey by developing a vibrant digital economy and an inclusive digital society.
Paul, David and Denise discuss:
- the Model AI Governance Framework for Generative AI
- opportunities for Singapore’s AI ecosystem
- characteristics of emerging AI risks and the need for governance
- the need for global collaboration on AI opportunities and risks
- data protection and cross-border data flows
- leveraging AI for the public good and creating sustainable long-term impact.
Find out more about Analysys Mason's AI-related research and consulting services here.
Transcript
Paul Jevons:
Hello and welcome to this Analysys Mason podcast series dedicated to the topic of artificial intelligence (AI). My name is Paul Jevons and I'm a director at Analysys Mason. During this series of podcasts, I'll be joined by business leaders from across the TMT landscape to hear their thoughts and gather their insights on AI, and we'll be exploring what it means to them, their organisation and the industry. Hello and welcome to this episode of the Analysys Mason podcast series dedicated to the topic of artificial intelligence. Today I'm delighted to be joined by Denise Wong and David Abecassis. Denise is the Assistant Chief Executive for Data Protection and Innovation at the Singapore Infocomm Media Development Authority. She's in charge of architecting Singapore's digital future in AI governance through digital policy and regulation. Denise will be talking to David, our partner at Analysys Mason, who focuses on internet policy and digital platforms. Welcome to you both, and I'm going to hand straight over to David to start the conversation.
David Abecassis:
Thank you, Paul. Denise, great to see you. Welcome to this podcast; it's great to see you again after a few months. The last time I saw you was in Kyoto, after you'd just been on stage with Nick Clegg and specialists in artificial intelligence. So it's great to have you here. Perhaps just as we start the conversation, it'd be great to hear you describe your role at IMDA in Singapore, and then perhaps tell us what's going on with AI in Singapore. It's been very active and it'll be great to hear it from you.
AI activities in Singapore
Denise Wong:
Thank you so much, David, and it's so good to see you again too. So just a little bit about myself and my role with the Infocomm Media Development Authority: I'm the assistant chief executive there, and what I do day to day is really look after and oversee AI governance policy. A big part of that work has been to develop responsible AI frameworks, and to consider what they mean globally but also within our national jurisdiction. Concurrently, I'm the deputy commissioner of the Personal Data Protection Commission, and in that regulatory role I oversee personal data protection policy, regulation and enforcement. And so on to your question about Singapore being busy: indeed, it's been a really busy year, and one of the things we've been doing is looking at model governance for generative AI. We recently launched the Model AI Governance Framework for GenAI, which I'll just call the MGF for short. It's really a comprehensive framework that pulls together different aspects of governance, and we've had a lot of conversations internationally.
David, you've been in some of those conversations, and we pulled that together into a thematic approach. The framework comprises nine dimensions that need to be looked at holistically, as no single intervention will be a silver bullet. So we pulled that together for international views, and the idea was that the MGF would speak to key stakeholders, for example policy makers, industry as well as the research community, and have a discussion on how each of these stakeholders can do their part to build a trusted ecosystem. So for example, on data, policy makers can clarify the use of personal data for GenAI, since personal data often operates within the context of existing legal regimes. A key focus of the MGF is actually on trusted development and deployment, because governments need to work closely with industry to set out safety best practices for model developers as well as application deployers.
These sit on top of the large language models, focusing especially on the areas of development, evaluation and disclosure. There's also some discussion within the MGF on safety and alignment research and development, because we do need to work with the R&D and research communities to look at the cutting edge of the science in terms of model safety mechanisms. I'll just round off by saying that the MGF actually builds on our existing AI governance foundation, because it builds on work that we've done since 2019, 2020, when we looked at the model governance framework for traditional AI and what good governance looks like.
We've also done other work: last year there was the AI Verify toolkit and the AI Verify open-source foundation. We put together a GenAI discussion paper, which was actually the conceptual foundation for this current MGF. And we also put together an evaluation catalogue, which looks at safety baselines and testing of GenAI models. And so we released the MGF for GenAI for international views; that consultation closed recently. I would say we've received insightful and positive feedback overall, and we're going through that quite carefully right now to see how we can refine the framework for release quite soon.
David Abecassis:
Fantastic. So lots of points to pick up on. I'm just going to try and pick two or three of them in the following questions. One thing that you explained is that the MGF for generative AI is an update of work that you started, related to earlier versions or earlier flavours of AI, in 2019, 2020. As a related question, I wanted to ask you what you see as really tangible developments today around GenAI that you see having an impact already, and which ones you think may be having an impact more medium term, which you're preparing for now with the MGF for GenAI. A related point is whether you can explain, perhaps in a few words, how the GenAI dimension in the updated MGF really differs from previous versions of the governance framework that you had in Singapore.
Denise Wong:
Thanks, David. That's a really broad question; I appreciate it very much. I guess just quick observations on some key trends. There's been a lot of conversation on this already, and I think we definitely see a technical and technological evolution with GPTs and the use of GPUs. I think GPTs, with their ability to generate human-like text, have pretty much revolutionised this field. Natural language processing, content creation, even coding: all of that is now in the realm of the possible. Now, with Singapore's strong emphasis on technology and innovation, we're trying our best to leverage some of these advancements to enhance service delivery in sectors such as finance, the legal sector, healthcare and education. And we are observing that companies and organisations across different sectors are adopting large language models, utilising GPTs to meet their operational and business needs. We think that this technology has great potential, and it really brings about new opportunities to boost productivity.
For example, it can streamline interactions and make things more efficient and more user-friendly, and it's really led to a heightened demand for the technology overall. The widespread availability of this technology has also democratised access to these new AI capabilities. It's been really good at bringing AI into the collective awareness of the mainstream, not just of the tech community, and it has really democratised AI and brought collective awareness to the general public. And in Singapore we think that this has been really useful in inspiring innovation across different sectors. Startups and established companies alike can integrate some of these technologies to improve customer service and service delivery, and to respond to queries, for example through the use of chatbots. And we see, of course, early signs of new technology and new applications, for example in medicine, with great potential to lead to better healthcare outcomes, and the same thing in sectors such as finance.
I guess your question was also about where the new risks, or the exacerbated risks, are. In the model governance framework for traditional AI, I think we already recognised some of the risks in the model. And our approach overall is really to put in place key guardrails in order to address some of these key risks, so that we create a trusted ecosystem which will allow maximal innovation. The GenAI technology, while having that transformative potential, exacerbates some of the risks that traditional AI posed. And it also adds new ones, for example because of the foundational nature of GenAI: sometimes it's quite difficult to parse apart what's built on top and what's foundational.
And of course the generative nature of it also means that it's creating new content in a way that could be unbounded from what was put in and what the previous use cases were. So these are some of the characteristics that we think may lead to some of the new risks that are discussed, such as hallucination, the black-box nature of it, and the way the ideas of reproducibility and repeatability are challenged by this new technology. These are some of the new risks that we discussed in the GenAI discussion paper and then reflected within the model governance framework.
David Abecassis:
That's very interesting. I have a few more questions on risks a bit later, but I wanted to take one step back and go back to the opportunity for Singapore. So Singapore obviously is a business hub, a digital hub, a research hub. You explained really well the distinction between the various roles in the GenAI landscape, from foundation models all the way to users. How do you see the role of the regulatory framework in enabling, supporting and maybe constraining some of what these companies, which are either already based in Singapore or thinking of being based in Singapore, are trying to do with AI? You mentioned safety for users. Is there an angle also for developers of models and other actors in the value chain that you're trying to address with the regulatory framework?
Regulatory approach in Singapore
Denise Wong:
The answer is yes. I think we are looking at this really end-to-end, for the entire AI development life cycle. And as I mentioned earlier, I think the beauty but also the challenging aspect of this technology is that it's foundational in nature, so it's really about shared accountability across the entire life cycle, and there are different actors along that chain. The regulatory approach that we take, as mentioned, is really about addressing it through principled approaches but also with practical tools, and we think that you can balance the risk while giving industry space to innovate. So what we try and do is provide advisory guidelines where needed. We put out a practical toolkit, the AI Verify toolkit, to allow companies to test AI systems, for example for relevant biases; this also helps people who are developing AI systems. So we think that there is a role for everyone in that chain, and all the stakeholders hold the shared accountability, but there must also be a practical way for them to demonstrate their accountability.
So in that way, I would say there is something for everyone, and the model governance framework was meant to pull that together. I also wanted to touch a little bit on testing and evaluation as an area where we've been focusing our efforts. This goes back to 2022, when we rolled out AI Verify; I've mentioned that already. It's really an AI governance testing framework and software toolkit for companies. Last year we open-sourced AI Verify and launched it under the AI Verify Foundation. We started with seven corporate members, and that was really to harness the collective expertise of the global open-source community. It's still a minimum viable product; it's not perfect by any means. Today the foundation has over 120 corporate members, and the toolkit has grown quite a bit through contributions from the open-source community.
And that's a sort of practical contribution to the idea of testing and evaluation for safety. We also set up a GenAI evaluation sandbox and, within it, launched an evaluation catalogue. The idea there was slightly different: it was really about experimentation, and we are working together with companies, again along the three archetypes. We have the model developers, we have the users, the application deployers, and then we have the third-party testers, who come into the sandbox to collaboratively build practical evaluation capabilities and benchmarks. It's a contribution to the body of knowledge in this nascent field of testing for GenAI. So in Singapore we want to create that trusted ecosystem, while also recognising that this field is extremely nascent. There has to be the spirit of innovating, testing and trying things out in a safe regulatory environment. And that also allows us to contribute to the conversation globally on what trustworthy GenAI looks like.
Role of different stakeholders
David Abecassis:
So it's really interesting: there is a role for Singapore on the global stage in this space, and it's clear literally on the global stage when you are at IGF with global players. But it's also clear in terms of the approach that you've taken. You mentioned the MGF was put out for consultation globally, and presumably you got responses from governments and other policy makers elsewhere in the world. The AI Verify Foundation, you mentioned, has 120 corporate members; many of these are multinational corporations with a link to Singapore, but not Singaporean companies per se. How do you see this global cooperation going, and in particular, what scope do you see for countries and regulators coming together in this space to generate a framework that has global reach and global relevance?
Denise Wong:
Yeah, absolutely. I think you hit the nail on the head. This is a global technology, an issue without a passport: the technology and the issues transcend borders, and we do have to have international collaboration on these AI issues, both the opportunities as well as the guardrails that we need to put in place. We've been actively contributing and having conversations with our international counterparts, and I'm very heartened by the types of conversations that we've seen in various international forums, IGF being one of them. We see them in other places throughout the UN and the OECD; many conversations there. We do have to be patient with the process of convergence, because it's a very nascent technology and I think every country is still getting to grips with what the issues are. The technology itself is evolving daily or weekly, but we do see some promising signs of convergence regionally.
For example, there is the ASEAN Guide on AI Governance and Ethics, from the Association of Southeast Asian Nations. That regional guide was recently endorsed by the ASEAN digital ministers at the fourth ASEAN Digital Ministers' Meeting in February of this year, and that is a milestone marking ASEAN's concerted effort to put out a regional guideline on AI governance. It's a useful starting point for consistency. It provides a collective approach in terms of principles, as well as some practical steps to support the trusted design, development and deployment of responsible AI systems. The conversation is still ongoing, but it is a useful anchor to coalesce policy and practical approaches.
And as I said, globally these conversations are happening as well on interoperability and alignment, on platforms like the OECD, for example, in which we participate actively, as many countries do. These are always useful in helping alignment to be reached. We are also participants in the Forum of Small States. The Digital Forum of Small States was launched in 2022 with AI as a key pillar, and through it, I think the countries involved hope to foster increased inclusivity and bring different countries' perspectives into that global conversation around AI, because it's important that all views are represented; the different cultures and the different national contexts are extremely important for this type of technology.
David Abecassis:
So I love that phrase you used, 'an issue without a passport', and I think different people will have different views about the implications of that, but I really like the term. What it makes me think about is the issue of data protection. You explained that you're the deputy commissioner of the Personal Data Protection Commission in Singapore, so you are effectively, I guess, the guarantor of people's privacy and personal data protection in Singapore specifically. And when we talk about cross-border issues, obviously the issue of cross-border data flows, particularly for personal data, is a sensitive one. What do you see as the interplay between personal data and GenAI? How do you think about this, given that you have this privileged position, or maybe this difficult position, of having to see both sides?
Denise Wong:
That's an excellent and a difficult question indeed. I wouldn't call myself a guarantor, but certainly I'm part of the regulatory team. And you're right that this is about data. We know that data is a big part of model training, especially for GenAI models that are trained on vast amounts of data, and it's a difficult issue. I won't pretend to have all the answers; I think data protection authorities around the world are grappling with precisely this question. We do know that data governance is going to be extremely important: understanding how these models ingest data, including personal data, and what protections and safeguards can be put in place. We recently put out an advisory guideline, but I must say this is not for generative AI; this is for machine-learning systems, traditional AI systems.
In it, we applied some of the data protection principles to AI model training, and we explained to industry how some of the exceptions that we have in place in our Personal Data Protection Act can be used to facilitate model training, along with some recommendations and best practices on how consumers' personal data can be protected. We are now thinking about these issues with GenAI, but there is a level of difficulty and complexity that I feel many data protection authorities are grappling with. So it's still early days on the data protection front, but indeed the idea of trusted data governance is critical in the entire GenAI model development story.
Risks and challenges
David Abecassis:
Yes. I think that goes back to what you were saying around emerging risks and maybe new types of risks. Some of the risks that we hear about increasingly are really around scams and deepfakes and misinformation, which, in a year like 2024 with a lot of elections in many countries in the world, is clearly a very pressing issue. You mentioned that Singapore, through the Forum of Small States, is also acting to try and get different voices into the conversation around AI governance. That segues into my next question, which was really around how you envision the role of AI, in Singapore but also in some of these partner nations that you're working with, in solving some of the challenges that we are already seeing in society, in demographics and so forth.
Clearly the Singapore government has been very proactive in addressing questions of productivity, questions of how to handle an ageing population. It'd be great to hear your vision for a positive future with AI in Singapore and elsewhere. I think all of us would very much like to benefit from the experience that very advanced countries like Singapore can bring.
Denise Wong:
Well, I think we're definitely still learning, but we definitely believe in the transformative potential of AI, as you said. I would highlight that we put out our National AI Strategy 2.0 last year. In that report we made very clear our wider belief that AI should be for the public good, and that's an important thing for us. It should be harnessed in a sustainable way that creates positive impact through the creation of new opportunities, better jobs, and safer, more meaningful connections. And that touches on some of the themes that you raised about productivity and automation. We do think that there'll be benefits in social sectors like healthcare and education, and that we can do this in a way that is sustainable.
The NAIS 2.0 identifies 15 actions that Singapore will undertake to support our ambitions over the next three to five years. Among other things, this includes actions to support our aspirations for Singapore to have the necessary trusted environment for AI innovation. We do think this will help position us as a credible leader, as well as a preferred site for AI development, deployment and adoption for countries and companies all around the world.
David Abecassis:
That's great. What would be great is to hear your message to companies, innovators, researchers who are working in this space and who want to have an impact and be able to operate in a sustainable responsible manner in AI. What's your pitch for them to come to Singapore to do these things that they want to do?
Denise Wong:
Well, I will have to think about that one a little bit, but I think that's a great question. I would say that in Singapore we are very interested in deep-diving into and understanding what companies want to do, in supporting development at any stage of the AI life cycle, and in working with companies hand in hand to figure out what trusted development and building means. We've done that already with some model developers, who are working with local champions and end users to, for example, figure out what a good and safe HR chatbot looks like. We don't think we know all the answers, and we don't pretend to, but we do think that it's important to apply and to concretise some of these guidelines into practical steps.
And we do think that there are learnings and insights to be gained from these practical use cases and sandboxes, and that we can then share our experiences and insights with other companies and with other countries. My broader message, I guess, is that no one has all the right answers, so we all need to have conversations across all stakeholders and share our knowledge, learning lessons from other areas. Interoperability by design is extremely important, and we should have that conversation about taxonomy, about principles, about practice, about alignment and about convergence as early on as possible. It's very heartening that we are beginning to do that, and indeed are already doing it.
David Abecassis:
Thank you so much. That's a great pitch. So my last question: there's a lot being written and a lot being recorded, like this podcast, on AI, and you are living and breathing the topic every day. Do you have recommendations for two or three pieces of content that everybody should be listening to, reading or watching?
Denise Wong:
Oh, you're right. There is a lot of content out there, but I think a lot of it has been very good. I do think that it's important to keep an open mind and look at material from leading thinkers in the field across different jurisdictions, across the entire world. It is important to have that global lens, to understand the different cultural and regional concerns that different jurisdictions, different cultures, different groups might have. There are many good think tanks out there that are also putting out pieces, and we definitely keep track of those, as well as international organisations. I guess as regulators and as an agency, we also naturally pay attention to emerging issues of concern, but we are also continually looking at new opportunities and at how to support industry. And I would encourage everyone to also think about sectoral applications. It's really become quite a specialist field, and so taking the time to understand the technical issues behind the tech, behind the glitz and glamour, is quite an important thing.
David Abecassis:
That's great. I mean, I have to say on our side, internally, we are looking at tools that we can develop and use. As you say, getting under the skin of what the technology is intended to do, can do, cannot do, is not as simple as it sounds and it's something that we're grappling with every day as well. Denise, thank you so much for your time. It was a really interesting conversation. I hope that it was also interesting for you to be here and share your views.
Paul Jevons:
Thank you both very much for a fascinating conversation. It was particularly interesting to hear about the very holistic view that you are taking of the topic rather than a particular aspect, which is fascinating and I'm sure our listeners will find that particularly interesting and very valuable. So David, thank you for the questions. But Denise, thank you very much for your time. Really interesting conversation. Thank you both.
Denise Wong:
Thank you so much, David. Thank you, Paul.
Paul Jevons:
If you would like to automatically receive future episodes, please do subscribe to the Analysys Mason podcasts. We also welcome your comments, feedback and reviews. Thank you very much for listening.