Google DeepMind’s vision: the future of AI with Jennifer Beroshi
Welcome to our AI podcast series. Each episode features business leaders from across the telecoms, media and technology (TMT) industry who discuss their AI-related insights, and what AI means to them, their organisation and the industry as a whole.
In this episode, Analysys Mason’s Paul Jevons, Director and expert in tech-enabled transformation, and David Abecassis, Partner and expert in strategy, regulation and policy, talk with Jennifer Beroshi, Head of Policy Strategy and Development at Google DeepMind.
DeepMind launched in 2010 and took an interdisciplinary approach to building general AI systems. The research lab brought together new ideas and advances in machine learning, neuroscience, engineering, mathematics, simulation and computing infrastructure, along with new ways of organising scientific endeavours.
Jennifer references the Seoul AI Summit, which took place on 21 and 22 May 2024, co-hosted by the governments of the Republic of Korea and the United Kingdom.
In this episode, Paul, David and Jennifer discuss:
- DeepMind and the broader Google ecosystem
- AI applications, systems and specialised use cases
- artificial general intelligence (AGI)
- language models and interfacing with AI
- AI policy regulation and legislation
- safety, risks and AI open models
- the interplay between systems and people
- a future enhanced by AI.
Find out more about Analysys Mason's AI-related research and consulting services here.
Transcript
Paul Jevons:
Hello and welcome to this Analysys Mason podcast series dedicated to the topic of artificial intelligence (AI). My name is Paul Jevons and I'm a director at Analysys Mason. During this series of podcasts, I'll be joined by business leaders from across the TMT landscape to hear their thoughts and gather their insights on AI. We'll be exploring what it means to them, their organisation and the industry.
Hello and welcome to this episode of the Analysys Mason AI podcast series. Today, I am delighted to be joined by Jennifer Beroshi, Head of Policy Strategy and Development at DeepMind, the artificial intelligence research laboratory acquired by Google in 2014. I'm also joined by David Abecassis, a partner at Analysys Mason who focuses on strategy and regulation. So welcome to you both.
David Abecassis:
Thank you, Paul, and really good to have you, Jen. So to kick us off, do you want to tell us a few words about yourself, your background and what your role involves in DeepMind?
Jennifer Beroshi:
Absolutely. So as you just heard, I head up part of our policy team that looks after policy, strategy and development, and I've been at Google DeepMind for the past 5 years. Before that, it's fair to say I've spent most of my career working on tech policy issues. So I've been within the broader Google family for close to 15 years, have gotten to work in a variety of different places and geographies, which have kept it interesting. I've worked in California, spent some time in Washington DC, in South Africa, based out of London, working on Europe, Middle East and Africa, what we call EMEA-wide issues.
And yes, have gotten the pleasure of really diving into the world of AI policy and the impact that we hope it will have on society in the past 5 years. Within the policy team, our focus is on making sure that we are driving our policy positions and building a good evidence base for the work that we do on the advocacy front.
Overview of DeepMind
David Abecassis:
That sounds fantastic, thanks. I hope we talk about some of these issues in detail today. Can you tell us a little bit about DeepMind? I think people will have heard a lot about it in the news, about some of the products and experiments that have been in the public domain, but it'd be great to understand what DeepMind does today, how it fits within the broader Google ecosystem now and, perhaps given that you've been in that Google ecosystem for a while, how you see DeepMind being different in driving some of these innovations in AI?
Jennifer Beroshi:
Yes. Let me start by sharing a little bit more about DeepMind's history. So it's an AI lab that was founded in London in 2010 by three co-founders: Demis Hassabis, who's our CEO to this day, Mustafa Suleyman, and Shane Legg. And there were a couple of things that made this place, in my opinion, pretty unique from the beginning. The first is the fact that from its origins, the mission was very much focused on AGI, artificial general intelligence. And this is developing AI systems that can, as we say, outperform humans at most economically productive tasks. So creating really advanced, really general systems. The lab was focused on this at a time when almost no-one else took that concept seriously, so it seemed something very, very far out. As you can see, from the beginning the ambition was very high and it was very future-focused.
Now, another differentiator is that back in 2010, many people said that if you were building a new AI lab, it had to be based in Silicon Valley in order to be successful. Demis saw things differently. So our leadership thought that there was actually no better place than London to build a place like this. And it's fair to say that to this day, we remain very rooted in this identity, even as the number of employees has grown, and even as we've become increasingly global in our scale and have opened offices all over the world.
But I do think that there's this ethos, this thesis, that the thriving and diverse community that you have in the UK and in London specifically would be really foundational to help build the teams that you have working on this type of groundbreaking technology.
Now, DeepMind was acquired by Google in 2014, and that of course has been transformational: being able to make use of Google resources and draw on the vision of Google leadership, truly bringing the impact of that research in AI to a completely different scale and bringing the ability to deploy a lot of that technology in products that impact billions of people on a daily basis.
Impact of DeepMind within Google ecosystem
David Abecassis:
And to make it available to Google's customers as well. I think Thomas Kurian recently made that point around how customers of Google Cloud can avail themselves of some of these innovations.
Jennifer Beroshi:
Yes, absolutely. So you see this being brought, as you said, to consumers, to other businesses. Again, I think the scale is massive, and that's something that we've been very excited about. And maybe I'll say a little bit more about DeepMind's mission and how we structure our activities. But the DeepMind mission is to solve intelligence, to advance science and benefit humanity, which is again, pretty ambitious, pretty broad, and I would say that that encompasses three main types of activities today.
The first one is advancing the field of AI research. So again, continuing to really drive what is state of the art in terms of AI, building on a long history of scientific breakthroughs.
Then there's using AI to advance scientific discovery. So thinking about how you leverage that technological work in order to then drive impact in discrete domains and some foundational areas of science and really try to unlock productivity, scientific discovery on a scale that you would not be able to do without AI.
And then the third bucket is that, of course, because we sit within the broader ecosystem of Google, we're also increasingly powering the next wave of products and applications that can help unlock creativity, learning, productivity, whether you're a consumer that's just using online tools, again, on a day-to-day basis, or whether you're a large business. And in the past months, you will have seen some milestones of this work because we introduced the Gemini models, which are now sitting under the hood of an increasing number of Google applications. I think it's fair to say a fairly diversified portfolio.
David Abecassis:
And indeed a very broad mission. And I think if we sort of maybe dig into a couple of those. The first part of the mission as you described it was to advance the field of AI research, and I think it's fair to say that DeepMind was one of the most prolific producers of research papers, I guess, not academic but research papers, for many years in that space. And as you said, you were very ahead of the curve back in the early 2010s on that front, so it'll be useful to understand whether that's still a core part of how DeepMind works as an AI lab.
And the second question that I have on the scientific discovery. So one of the products or one of the initiatives, it's not I guess a product, but one of the initiatives that are in the news very regularly is AlphaFold, which is, as I understand it, and it's not really my domain of expertise, but an AI-enabled tool that helps predict the 3D shape of proteins in a way that wasn't easily doable with previous tools. And it'll be useful to understand how you see some of these advances, then get into the real economy as it were, and also how you choose what you get involved in. Is it a question of trying to identify the problems that are well-suited to those techniques, or is it about taking a problem and then seeing what AI can do to contribute to solving some of these problems? So what comes first, I guess, it would be super interesting to hear your view on that.
Progression from research to real-world impact
Jennifer Beroshi:
I think as you noted, it's incredible that in the past years, we have seen that progression from having something identified at a very early stage of research make its way increasingly to the point where it's powering products, it's powering tools, and it's already having a fairly significant impact out in the world. So I'm continuously impressed by the pace of progress that we've seen in this field.
As you said, it was just some years ago that DeepMind was mostly recognised for the prowess of its AI work in the form of various research papers (which were certainly impressive), and also for the way its AI systems performed on games. So you had systems like AlphaGo and AlphaZero, AI systems that demonstrated astonishing capabilities at the time for their ability to come up with gameplay strategies that surpassed human abilities.
And really the interesting thing is that games, you could think of as just the proving ground for these general-purpose systems, and there's a number of characteristics about games that make them great environments to train these types of AI systems. What do you have in games? You have things like well-defined rules. You have very clear metrics for how you need to operate, and you also have clear markers of whether you've been successful, or whether you've won, or whether you've lost a game.
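The characteristics Jennifer lists here map neatly onto the standard reinforcement-learning notion of an environment. Purely as an editorial illustration (a toy sketch, not how AlphaGo or AlphaZero were built), a 'game' with well-defined rules, a clear reward signal and an unambiguous end state can be expressed like this:

```python
# Toy sketch of why games make convenient training environments:
# explicit rules, a reward signal and a clear win/loss end state.
# Illustrative only; not how AlphaGo or AlphaZero were built.
import random


class GuessingGame:
    """A trivial game: guess a hidden number within a fixed turn budget."""

    def __init__(self, low: int = 1, high: int = 100, max_turns: int = 7):
        self._bounds = (low, high)
        self.max_turns = max_turns

    def reset(self) -> tuple[int, int]:
        self.low, self.high = self._bounds
        self.secret = random.randint(self.low, self.high)
        self.turns = 0
        return (self.low, self.high)  # observation: the currently known bounds

    def step(self, guess: int) -> tuple[tuple[int, int], float, bool]:
        """Apply the rules; return (observation, reward, done)."""
        self.turns += 1
        if guess == self.secret:
            return (self.low, self.high), 1.0, True    # clear success marker
        if self.turns >= self.max_turns:
            return (self.low, self.high), -1.0, True   # clear failure marker
        if guess < self.secret:
            self.low = guess + 1
        else:
            self.high = guess - 1
        return (self.low, self.high), 0.0, False       # game continues


# A simple 'agent' policy: binary search over the remaining interval.
env = GuessingGame()
(low, high), reward, done = env.reset(), 0.0, False
while not done:
    (low, high), reward, done = env.step((low + high) // 2)
print("final reward:", reward)
```

The point of the sketch is simply that the rules, the reward and the win/loss signal are all explicit, which is what makes games convenient proving grounds for learning agents.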
And then I think it's fair to say for a while people were wondering, this is all very impressive to see this performance at games, but maybe someday there will be a real-world impact. And I would argue, we've already seen that, again, on a really tremendous scale. We indeed made that leap from ‘proof of concept’ to ‘how can you use these types of insights in real-world applications?’.
You mentioned AlphaFold. So AlphaFold indeed is a system that we launched in 2020, so under 4 years ago. And what it does is leverage this type of AI architecture called the transformer in order to predict the shape that protein structures are going to take.
Now, it turns out this is a critical thing for scientists to know because these structures are so closely linked to their function in the human body and in many other different types of organism. And so this is just something that unlocks a whole different type of applications that can be beneficial, whether you're talking about applications that can help improve our health, drug discovery, whether you're talking about things like creating new materials, I mean the range is truly incredible.
What AlphaFold did is it essentially allowed us to declare that this decades-old challenge known in the scientific community as the 'protein folding problem' was solved. So we essentially enabled a degree of progress that had been taking experimental biologists many, many years, taking something that had until then been prohibitively expensive and helping democratise it at scale. So not only did we develop this tool, which we open-sourced, we also used it to predict hundreds of millions of protein structures and host those on a public database online for people to then go and download.
Again, this is something that we did in 2020, and we've already seen massive impact and reach. We have seen 1.7 million users in close to 200 countries use this database, and we are hearing increasing numbers of reports from people who say this is truly helping them accelerate their research and discover things that would've taken them... I mean, some of these researchers spend decades trying to derive a protein structure. And they're saving massive amounts of time just by using this tool.
I think that it also really matters from a global good perspective, in terms of global, public social benefit. What you're getting here is a tool that you can then put in the hands of researchers who, in many instances, don't have many funds or much infrastructure and are doing important things like researching neglected diseases, things that can really benefit a massive share of the world's population. By putting this tool into their hands, we're really hopeful that it will help level that playing field a little bit in terms of access to scientific tools. So far, we're really encouraged by what we've seen. We're encouraged by the uptake that we've seen, including in the Global South and developing countries.
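For readers who want to see what 'hosting predictions on a public database' looks like in practice, the sketch below pulls one predicted structure from the AlphaFold Protein Structure Database. It is a minimal, hedged illustration: the endpoint path, the response fields and the example UniProt accession (P69905, human haemoglobin subunit alpha) are assumptions based on the publicly documented API and may change.

```python
# Illustrative sketch of fetching a predicted structure from the public
# AlphaFold Protein Structure Database. Endpoint path, response schema and
# the example accession are assumptions and may differ from the live API.
import requests

ACCESSION = "P69905"  # human haemoglobin subunit alpha (example)
URL = f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}"

resp = requests.get(URL, timeout=30)
resp.raise_for_status()
entries = resp.json()            # typically a list with one entry per model
if isinstance(entries, dict):    # be defensive about the exact shape
    entries = [entries]

for entry in entries:
    # Print whatever scalar metadata the API returns rather than assuming keys.
    print({k: v for k, v in entry.items() if isinstance(v, (str, int, float))})
    pdb_url = entry.get("pdbUrl")  # assumed field name for the structure file
    if pdb_url:
        structure = requests.get(pdb_url, timeout=60).text
        with open(f"AF-{ACCESSION}.pdb", "w") as fh:
            fh.write(structure)
        print(f"saved predicted structure for {ACCESSION}")
```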
David Abecassis:
I would love to get back to this point maybe a bit later because I think this notion of how to think about the digital divide in the context of AI, which requires huge amounts of research, huge amounts of resources in order to create, but then could potentially be used quite broadly is super interesting. But maybe we can get back to that in a minute.
I just wanted to get back to this notion of those specialised uses or use cases that AlphaFold and AlphaGo epitomise. How reusable are these approaches? So you mentioned transformers. I think what we are seeing now is the development of something much broader but potentially less precise, less immediately useful around language, but that people can interact with much more naturally, in a very kind of broad and shallow type of application. And that to me, is in stark contrast with the very specialised, incredibly effective applications that you described around... I mean games obviously is what you call the proving ground, but also in medical research. How do you see these two dimensions interacting? I mean you mentioned AGI at the start of the conversation. Do you think that AGI is effectively these two things coming together? How do you think about this?
Broad and narrow AI systems
Jennifer Beroshi:
I think you put your finger on something that's really important for us to consider, and it's this interplay between the broad and the narrow AI systems. And again, it's not necessarily that one is better than the other. It's just that in the near term, they're specialised at different things, and I think it's really important for us to think of how they fit together.
As you mentioned, a broad AI system seeks to generalise, so it seeks to take in all sorts of different data and information points and allow you to plug it into all sorts of different applications, and in essence just help unlock, I would say, such a broad range of uses that you can think of targeting it at all sorts of different applications. And that in itself is incredibly powerful. But of course, in tandem, we're seeing very powerful and very, very useful narrow AI systems like we have with AlphaFold. Again, they're targeted at specific domains. Oftentimes, they're fine-tuned off of a specialised set of data, and again, that makes them really useful for those purposes.
Of course, I would say in the past couple of years, the big blockbuster consumer moment that we had for AI was the launch of ChatGPT, which really showed I would say, how on a day-to-day basis, anyone could go, interact with one of these broad general systems and use it for all sorts of purposes. And that's obviously incredibly powerful. It's incredibly powerful again because of the ability to use that in so many different domains, but also the accessibility of the interface, the chatbot.
But I think we're only, of course, scratching the surface of what these general systems can do. One thing that you're going to see going forward, I think, is that these general systems, over time, are also going to get the ability to draw on and use the narrow systems. So again, the narrow systems themselves will be incredibly valuable. You are going to have applications that are going to build specialised AI systems in fields that are going to be as important as healthcare and critical infrastructure. They're going to help us do amazing things.
But the really interesting thing is the development in tandem of these generalised systems that will come to use the narrow systems as tools. So you might be interacting with the chatbot, and eventually, you could imagine that you could ask it to say, "Go consult or go make use of another AI system that is specialised in a specific function." Again, whether it's protein structure prediction, the generation of content, or any number of other given things. And the different things that will allow you to do will be really significant. In essence, it will allow you to outsource so much of what you do to one of these general AI systems, which could serve as a common interface of sorts to all sorts of different tools that you could find.
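To make the 'general interface calling narrow tools' idea concrete, here is a deliberately simplified editorial sketch of that dispatch pattern. The tool names, the keyword-based router and the placeholder functions are hypothetical illustrations, not a Google DeepMind API; real systems typically let the general model itself decide which tool to invoke (often called function calling or tool use).

```python
# Schematic sketch of a general assistant routing requests to narrow,
# specialised systems ("tools"). All names here are hypothetical placeholders.
from typing import Callable


def predict_protein_structure(request: str) -> str:
    # Stand-in for a narrow, specialised system (e.g. structure prediction).
    return f"<predicted 3D structure for request: {request!r}>"


def generate_image(request: str) -> str:
    # Stand-in for a narrow image-generation system.
    return f"<image generated for request: {request!r}>"


def answer_with_general_model(request: str) -> str:
    # Stand-in for the broad, general language model.
    return f"<general-model answer to: {request!r}>"


# Registry of narrow systems the general interface can draw on.
TOOLS: dict[str, Callable[[str], str]] = {
    "protein": predict_protein_structure,
    "image": generate_image,
}


def assistant(user_request: str) -> str:
    """Very crude dispatch: use a specialised tool if one clearly applies,
    otherwise fall back to the broad, general model."""
    lowered = user_request.lower()
    for keyword, tool in TOOLS.items():
        if keyword in lowered:
            return tool(user_request)
    return answer_with_general_model(user_request)


print(assistant("Predict the protein structure for sequence MVLSPADKTN"))
print(assistant("Summarise the main themes of this podcast episode"))
```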
David Abecassis:
That's super interesting and I think that goes to one of the questions that I had, which was the extent to which language as a product, as opposed to proteins or game actions and so on, how language is special. And the hypothesis that I have and that seems to kind of gel quite well with what you described is the fact that language is the mechanism through which humans are, through communication, making heterogeneous activities and systems work together. And that's how we operate on a day-to-day basis.
We take information from many systems that are not designed to work together, and then we action them in some way that's mediated by our own skills and training and mission in order to make something that works more generally. So that seems to gel quite well with what you explained. That sort of also points to I think something else that's special about DeepMind, which is this intersection between computer science and neuroscience. I don't know whether you want to say a few words about this. Obviously, the example that you gave with AlphaFold, and I think there are other examples around the effectiveness of new antibiotics and you talked about materials, and so on. So a lot of those examples are very deeply connected to life sciences, and I wonder how that... I mean, obviously, that interplays with your mission, but I wonder if that's also part and parcel of your ‘DNA’ as it were, as an organisation.
Jennifer Beroshi:
Yes. So let me start with the beginning of your question. I think you're absolutely right that language is somewhat special. Language is how we as humans interact with the world, how we communicate with each other, how we make sense of the vast arrays of information that we have to make use of in our day-to-day activities. And we do see that that means that when you have these language-based, powerful general AI systems, they do have a special resonance, both because they allow us to do so many different types of things and, again, because we can use these language chatbots as interfaces to then do other things for us or make use of other modalities of AI systems for us.
And it also means, again, that we do things like anthropomorphise these systems. We ascribe to them sometimes human-like characteristics. I think that that's also something, that in some contexts, can help make them more useful and more valuable to us, but it's also something that we need to think through carefully. It has so many implications for how we relate to tools and how we relate to each other.
Now, I think it's really important for us as an AI lab and as a society to be really trying to map forward and think about both the capabilities that these AI systems have today and how they're going to improve and develop. And again, the ability to essentially interface with them in terms of language is a really important dimension. But there are a couple of other things that you touched on. So there is this question of different modalities. How does the fact that an AI system might operate in language and have language outputs, text outputs, differ from, for example, the fact that they're getting increasingly good at doing things like generating images and generating code, and that increasingly a single model might be multimodal? So I know that's a mouthful, but essentially, you get these models that can on their own operate across these different modalities. You might feed them images, and they might output a different image. Or you might feed them an image, and they might output text. And this has massive implications.
David Abecassis:
Can I ask you? So on this multimodality, because what you described is an interaction between the more broad general models and the more vertical models in the context of, I guess, agents. So the ability for the user to interface with a broader range of agents through something general, but then the agents themselves are more specialised. If you think of multimodality as we start to see it in some of the products that are coming to market, do you think this is a product decision, or do you think it's intrinsically linked to how these systems are built? In other words, are models multimodal because it's useful to package all of this in one product, or is there something intrinsically multimodal to how these models are built that maybe aligns with the five senses that we as humans have?
Jennifer Beroshi:
Yeah. So I think it's a little bit of both of these aspects. So in the near term, for example, what you have seen is you have seen specialised models, so models that specialise in specific domains like image generation. And so far, a model that has been built for that purpose, say DALL·E, or Imagen, which is the one that we have at Google, will be better, I would say, than general models at generating images, again because it has been so specialised. But we definitely see value in building multimodal models. So a model like Gemini is getting increasingly better over time at its image outputs as well. And we think that that's fundamentally something worth investing in, because when you tie together the modalities, it helps them... You need to think of the outputs and also of the inputs that the model is able to draw on. So it helps it become better at these types of cross-reasoning abilities that we ultimately want these models to have in order to make them useful.
You mentioned agency. Agency is absolutely another one of these capabilities that we think are really important to be thinking about and how this is going to evolve over time. What this is going to mean for our day-to-day interactions with these systems, and also again, many of the implications that these raise for how we want to be building frameworks for our interactions with them. So models will increasingly be able to execute actions on our behalf to make use of other tools. This makes them incredibly useful and it also means that we need to think of how we keep those actions that they might take aligned with our interests and that we maintain a sufficient line of oversight over them.
Another capability that we haven't mentioned is that these models are getting increasingly better at what we call their context window, the context window referring to the amount of information that they're able to process at any given time. And you could think of this as a sort of memory that we're plugging into these models, and that makes them so much more useful at processing large amounts of information and really distilling insights for us. But again, I think it has a massive set of implications when you think about how some of these capabilities might play out. Again, people might be using these tools for purposes that we do not want them to. Eventually, it might superpower some of those uses of concern.
For example, one of our most recent models that we have launched, Gemini 1.5 Pro, has an incredible context window, so it's able to process so many documents as input at the same time, which I have found useful as a policy professional. I give it things like the EU AI Act and the US Executive Order on AI. As you know, these are not exactly the most succinct documents. They're hundreds of pages long. They're very dry. They're very detailed, and I have read them. But on the fly, when I need to, I can actually use this tool to help me find things within these documents in a matter of seconds, and synthesise insights from these documents. And again, it's incredible to just think, "This is how it's helping in my day-to-day." But the proof of concept is there: you can imagine that as we get more of these systems, and you can plug in literally hundreds of scientific papers, they are going to be able to make use of these capabilities to really package up insights, and increasingly maybe find some novel insights. And that's going to be incredibly useful for us.
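As an illustration of the long-context workflow Jennifer describes, the sketch below feeds two long policy documents to a long-context model and asks a question across both. It assumes the google-generativeai Python SDK, a GOOGLE_API_KEY environment variable, the 'gemini-1.5-pro' model name and hypothetical local text files; any of these may differ from what is currently available.

```python
# Sketch of the long-context workflow described above: feed whole policy
# documents to a long-context model and ask a question across them.
# SDK, model name and file paths are assumptions and may have changed.
import os
from pathlib import Path

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

# Hypothetical local copies of the documents mentioned in the conversation.
documents = [Path("eu_ai_act.txt"), Path("us_executive_order_on_ai.txt")]
corpus = "\n\n---\n\n".join(p.read_text() for p in documents)

question = (
    "Compare how these two texts treat general-purpose or dual-use "
    "foundation models, citing the relevant articles or sections."
)

# Both documents are passed in a single request, relying on the model's
# long context window rather than on chunking or retrieval.
response = model.generate_content([corpus, question])
print(response.text)
```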
The future of AI capabilities
David Abecassis:
Yeah, in terms of pattern recognition and bringing together the language side with the insights. And I think that's a good segue into my next question, which is around the pace of development and where you see things going. If I take the example you just gave on how Gemini 1.5 is helping you with your work, and I relate to this because that's exactly the challenge I face day-to-day: you read everything, every line of it, and so you're able to maybe detect when something's not quite right. You're also able to ask the right questions because this is very much an assistant.
I think one of the dangers of the, and that's another mouthful, but the ‘anthropomorphisation’ of language models is to also ascribe agency and values and maybe a degree of will to those agents. And what I find difficult to appreciate is the speed at which those models will be able to self-check and to work independently in a way that's going to be more useful to you and I, but also ultimately to other people in the real economy. How do you think about this progress? What should we expect in the next 2 years, 5 years, 10 years, and is it even possible to answer this question?
Jennifer Beroshi:
I think the progress that we're going to see in the next couple of years is going to be incredible. What we've seen so far is that, of course, people have been working on these AI systems for a very long time. It's not something new. It might have just dawned in the mainstream public consciousness, but there's been a lot of work going on here on the research side, and on the application development side, for decades. But I do think the pace of development is getting faster. That's because, again, with some of these models, we're seeing the application of what we call scaling laws. The models are scaling very fast and they're improving very quickly. I think that in the next 2 years, we're going to see them get so much better, to a degree that is really going to surprise us.
I think you will see that if you think of where we were just 2 years ago, where a lot of these things were still fairly abstract and you didn't have a tool like ChatGPT that had broken into the public consciousness and been used by so, so many millions of people. I think we'll see massive progress.
I think you're getting at something very important, which is that in tandem with that capabilities development and with those systems getting more powerful, we need to keep up in terms of our understanding of uses, what those capabilities mean, and the limitations of these systems. So the question is, how do we make sure that we're using them as useful tools for us to really augment our activities and domains that are important and valuable to us, but also remaining cognizant of where they might not be always suitable, or where they might be suitable but only within a given boundary.
These AI systems, again, they do still have what we call hallucinations. They make up facts, which means that you don't want to rely on them in all sorts of critical situations. You mentioned that, again, some of us policy people, we have read our text, and so it is good to check when something does or does not sound completely right. I think we're going to have to do that for a while yet, to be thoughtfully cross-referencing information or thinking of, "What is an inappropriate domain of use," or, "If I use AI, in what type of way should I be using it?"
Earlier you actually mentioned the need for a multidisciplinary approach, and that actually has been really core to how we think about it from the Google DeepMind perspective. It's the idea that in order to figure all of these things out, you need to involve different types of expertise, bringing in domains that would have been considered non-traditional when you think about technology development historically, so bringing in communities of experts including social scientists, philosophers, and other experts from the specific domains that you might be thinking of deploying an AI application in.
You mentioned AlphaFold, and indeed at Google DeepMind we are far from claiming that we are experts in biology. But when we were launching that specialised tool, we had to go and consult biologists, both to consider how we make that tool useful and to think through some of the risks, biology being a very sensitive domain. And I think that's an approach that we're going to have to take again as we consider all types of different systems. And you want to be thoughtful in bringing in those perspectives as you're developing the systems. And again, I think the next 2 years, to your point earlier, are a critical time to make sure we're doing that. We cannot wait 2 years for that to become a systematic practice across AI development.
David Abecassis:
I think that's right. And I think that's something that's very visible and very present, I think, when we look at, in particular, some of the policy and regulatory initiatives that are out there. You mentioned the EU AI Act. That's a piece of legislation that's been in the making for at least 5 years and somehow had to be updated to keep up with the developments.
What's your view on... So you said, and I think we see this from many different factors, that the progress in the next couple of years is going to be extremely fast and potentially discontinuous with what we can see today. I think the things that we see in our spaces, for example, the large amount of investment that's going into new data centres and new compute capacity, and the focus on designing this capacity so that it has as little impact on carbon footprint as possible. For example, I'm working on a project in New Zealand, which is trying to reuse some hydroelectric power that's currently used by an aluminium smelter to power an AI training facility. And that's super interesting. And there's this momentum, I guess, that's been built through the confluence of research and development hitting a certain point, general consciousness of LLMs with the launch of ChatGPT in the minds of the public and investors, and so forth, and all of the investment that's going into the facilities and the assets that are required to keep progressing. So it's really the confluence of all these factors.
And it seems to me that when we think about policy and regulation, we're sort of trying to keep up, and it's really hard to do that. I have a lot of sympathy for policy makers and lawmakers who are trying to keep up in a way that's sort of broadly satisfactory to the various stakeholders. If we look at this specifically, and I think there's a link to what you were saying around the multidisciplinary nature of the development there. If we look at what UNESCO put forward, for example, in terms of AI ethics, and you mentioned philosophers, I think there's a perception that we need to take a step back and look at broader principles in order to accommodate that pace of development.
And then the difficulty for policymakers and regulators to say, "Well, okay, we've got all these principles. How do we translate them into something that's practical, pragmatic, and actually effective?" If you look at what's going on now and what you have to grapple with on a daily basis, somebody who's part of this driving engine in the industry, what do you think is looking at the right things? What do you think is perhaps more challenging? How do you think about this interplay between development and policy and regulation today, and how do you think that should evolve?
Jennifer Beroshi:
Oh, boy. Yeah. I think this is a massive challenge, and it's a massive challenge for all of us, those of us who are doing policy. You have policy and governance from the industry perspective and for policymakers too. Because, as you say, the pace of progress is so fast. So I think that there's a challenge here for all of us. There's a high trajectory of progress, and there's a high degree of uncertainty. And so then the question you want to ask yourself is, how should we approach... What's the right approach in the face of that high uncertainty? How should we guide our decision-making?
I think some of that requires, again, thinking from a first-principles perspective. How do you operate in the face of uncertainty? How do you try to incentivise good uses and minimise bad uses? And how do you try to do this in a way that's really thinking ahead and reflecting the ways, again, the unexpected ways that technology is going to develop in the future?
I'll try to touch on a couple of different things that I think might be useful. I think that from the policy perspective, it's useful for regulators and legislators to think about a couple of different types of things that you want to be thinking through. And I think it is worth saying. First and foremost, it's worth thinking through how we really maximise the opportunity of AI. So we've been talking about these capabilities and these incredible applications, but I think it's also easy to think, "Okay, if you have these things, it might just all happen naturally." And I think that's not quite enough. Both because, again, there are significant questions of which domains you apply these tools for, how do you generate demand and momentum and investment in the right types of domains? How do you direct energy towards pro-social impact? I think that's really important, and I think policymakers have a really important role to play in that, and it sometimes gets a little bit overlooked.
I think that it's also worth thinking about establishing clarity on the applicability of existing regulation for AI harms. And again, that's also not trivial. A lot of people sometimes think, "Oh, just because something is enabled by AI or because it touches on AI, all of a sudden that means that we don't have existing regulatory frameworks to deal with it." And again, in the large majority of cases, that's not true. Just because you have a type of harm or a type of misuse that happens by using AI, that does not magically put it outside the coverage of existing regulation. So a lot of the time, just establishing that certainty is really useful.
I think it's also worthwhile for governments to think through like, "Are there any gaps and are there any gaps in specific domains that are worth plugging, that are worth making sure we bridge?" But I think that there's also a joint challenge here for governments, again, in building the preparedness for the rapid improvement of capabilities. And I think that in some instances that will require bringing in some new and well-scoped types of governance frameworks. In some instances, that might mean that you ask AI developers to make sure that they're testing their most powerful models or models that have special attributes and domains that we might consider particularly sensitive or particularly risky or that are released in a specific way.
There is, for example, a growing debate about models that are released with openly available model weights, what we often, as a shorthand, call open-source models. That's the type of model release where you essentially don't get to reverse the release decision. Once you release these models out into the world, people will get to take them and build on them. And that has amazing implications for good, but you can also think of ways that it increases the risk. And that's the approach that you've seen: this general approach of thinking through some of these particularly targeted requirements and governance frameworks for some of these more advanced models is what you're seeing an increasing amount of policy attention towards.
I mean, you mentioned the UNESCO principles, but you also have the White House, which drew together a number of companies in making a set of commitments in July of 2023, echoed shortly afterwards by the G7 with its code of practice. Again, trying to articulate, "Where do we need this additional tier of safeguards?" We're starting with industry largely identifying and trying to develop areas where we want to build best practice. Which I think is right, because the science of safety and responsibility for this AI technology is fairly nascent. And I think something you want to avoid is locking in specific types of requirements or thinking that something is a solution when actually we might have a better solution in 2 years, in 5 years. But I think it's important that we start thinking about these things now. And I am very encouraged by how that debate itself has completely taken off. And I would say it's night and day compared to where we were even just a couple of years ago, taking these issues really seriously and thinking through how we may want to prepare as a society.
David Abecassis:
Super interesting, I think. I was thinking as you were talking through these points, and you mentioned that quite a few of the things that we might want to do with AI will feed into processes, industries, sectors, which already have their own regulatory framework. I mean, if we think about medical sciences and drug development, of course, the AI itself doesn't develop the drug. There's a whole supply chain behind the discovery of a new protein that will get to a released drug, and that doesn't change, right?
Jennifer Beroshi:
Mm-hmm.
David Abecassis:
I guess what can change is the pace at which some of this innovation can happen, and how we harness this potential for a much-increased pace in one part of the value chain to get the best out of these innovations. But I think that point is super well-made, and I think medical research is also one area where there are well-established risk and ethics frameworks, so what can we learn from those?
You just mentioned open-source models, and as you said, there's an emerging debate there. The NTIA in the US had a consultation on open-source models and the potential for those open-source models to be used as what's called dual-use technology, and therefore you have potentially military applications and various other aspects that are of relevance to safety and national security. I was reading through some of the submissions from various parties to that consultation, and what strikes me is that everybody's making good points. So proponents of a more safety-first approach are saying, "Well, there's not a lot of scope for us to understand the impact of some of these releases. The explainability of these models is currently very limited. And we don't want to sort of open the door to the stable and let the horse bolt before we fully understand what's going on." So that's one approach.
And then you have another approach which says, "Well, closed models are vulnerable to hacking, to reverse engineering." I think Google, I don't know if it was DeepMind or another part of Google AI that showed that you could reverse engineer the weights in some publicly available models, and therefore, some of the proponents of open-source model releases tell us, "Well, if it's all open-source, we can identify problems and risks and bugs and so on much earlier."
It seems to me that we are at the stage where certainly, when we look at large language models, the usefulness of these models for harmful, real-life applications might be quite limited versus the alternative technologies that are there. But in 2 years' time, we'd probably be in a different position. I'm just curious to understand, how you're thinking about this, and how... Because literally, as you said, you can't unrelease a model that you've opened up. And so what's Pandora's box and what's fundamental research? You could probably look up the fundamental design of a nuclear reactor online. But obviously, building one is totally a different story. Where are we on that curve, and how should we think about this safety angle in the context of open models?
Jennifer Beroshi:
Yes. Our general approach, what we try to do, is to take a scientific approach to building AI safely. So what does the scientific approach mean for us? It means that you should be guided by the scientific method. You should be guided by rigorous engineering practices. And you should be trying to build an evidence base that helps guide your decision-making. It also means that we think you should be appropriately cautious. So that means, again, when you're operating in a high-uncertainty context, when the potential risks are really high, you should correspondingly adopt best practices and be really thoughtful before you put the technology out into the world.
You touched on this question that is really important for us, which is, when you consider risk, when we think about risk, it's not an easy science. It's not easy to say... You cannot simply say, "Oh, is this AI system risky or not?" The truth is, oftentimes, the answer will be, "Well, it's risky in certain domains or certain contexts or without a certain set of safeguards," but it's difficult to have one answer that will just solve the question for you.
And one really important dimension to consider, and we bake this into our ethics assessments for our technology, is we try to think through, "Okay, what is the marginal risk here?" So when you're introducing a new AI system onto the market, "What is the differential risk? What does it enable you to do that you would otherwise not be able to do?" And I think that's a really important dimension to think through, because, of course, again, some of these AI systems, they carry risks, but then there are other tools or other things out in the world that already enable that risk. The classic question when you're considering something like a self-driving car. A self-driving car might result in casualties, but if you compare that to the baseline of human drivers, I mean human drivers create a lot of casualties very, very tragically.
And when we're talking about risks that come from tools like these large language models, you will hear people talking about questions like, "Well, what if it helps you create something? Something we don't want to see. What if it helps you build a biological weapon or a chemical weapon?" And I think it's relevant here to say, "Okay, what are the types of information that you can already access through the internet?" Right?
David Abecassis:
Yeah.
Jennifer Beroshi:
Exactly. That is a relevant data point. We should really be thinking about what the differential risk is, again. And you should consider, I think, in that assessment both, again, what type of system you're putting out in the world and what domain it is likely to help people in, and also what the release modality is. So how are people going to be able to access this given AI capability? And this is where I think there are a couple of things that make open-source, or open models as we call them, different.
What makes them different is, as we've already discussed, first of all, that the release decision is irreversible. So if you make available the model weights, again, it's very different from releasing an AI system through, say, an API. You can cut access to an API. You cannot cut access to model weights that you've released and that people have downloaded onto their computers to build on.
Another difference is that people can build off of these open-weight systems to a massive degree. They can fine-tune them and, again, build completely different applications. And most of the time, I'd like to think, this will be amazing for innovation and the development of new applications. People are going to put these things to way more ingenious uses than we could have thought of on our own. There's also a relevant safety aspect in that as well, in that usually you want people to be able to look at AI systems, scrutinise them and stress test them in a fairly broad way.
Historically, we have seen with software, there's so much value that you get from having an open mechanism for people to tell you about vulnerabilities in your systems. And I say this also as someone who has... I worked on open-source issues for many years in my past, so I'm passionate about these issues. I think you have to recognise that there is here, a relevant risk dimension to consider with open-source models.
And as these models get more and more powerful, we're going to have to grapple with this question: "Okay, to what degree? We think they're going to fuel a lot of innovation, but at what point will you just have an amount of risk that becomes greater than the corresponding benefit that you would get?" And we don't have the answer. I've been racking my brain on the question of, "Where do you set that threshold?" We can't set it on our own. What you need is to have a debate about this and say, "Okay, how do we as a society generate... What evidence do we need? Again, how do we fund more research? How do we get more information about how these tools are being used? And then how do we enable alternative mechanisms for people to access these types of capabilities safely, if you're going to say there are some systems that you will not release openly?" These are all very important questions for us to grapple with.
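The irreversibility point Jennifer makes is easiest to see in code. The sketch below loads an openly released checkpoint locally, assuming the Hugging Face transformers library and the licence-gated 'google/gemma-2b' checkpoint ID; both are illustrative assumptions rather than a statement of Google DeepMind's release policy. Once the weights are on disk, inference runs without any provider-side switch, in contrast to API-mediated access.

```python
# Sketch of the difference between open-weight and API-mediated release.
# Assumes the Hugging Face transformers library and the licence-gated
# "google/gemma-2b" checkpoint ID; both are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-2b"

# Once these weights are downloaded, inference runs entirely locally: there
# is no server-side switch the developer can flip to withdraw access, which
# is the irreversibility point made in the conversation. Downstream users
# can also fine-tune the weights into completely different applications.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

inputs = tokenizer(
    "Explain what a protein structure prediction model does.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# By contrast, a model reached only over an API (HTTPS plus an API key) can
# be rate-limited, filtered or switched off by its provider at any time, so
# that release decision remains reversible.
```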
David Abecassis:
I think everybody's grappling with this, and I think some of the proponents for open models are effectively saying, "We are going to have to answer all these questions as to how to think about risk or to assess it, how to manage it." And part of the rationale behind releasing some of these models today is to say, "Well, let's force ourselves to do that." So I don't know if it's a good idea or not, but I can sort of see the logic. And in a way, doing it at this relatively earlier stage of development is probably less risky than doing it later, but it depends on where we are and where we are going.
Jennifer Beroshi:
Yes, and I should say, at Google DeepMind we have released open models ourselves, in part because we see value in having this conversation. Again, we don't think many of these incredibly significant risks are going to manifest in the models that you have today, the state of the art of today, but we think the risk profile might significantly increase in the next couple of years. And so you want to be releasing models today, trying to build best practice for what it looks like to release models openly today, and then start this conversation so that we have the time to get prepared as a society in due course. So you're not scrambling then to rush after a horse-
David Abecassis:
That has bolted.
Jennifer Beroshi:
That has bolted out of the barn. Thank you.
David Abecassis:
It's interesting. I think that covers quite a few of the points I wanted to address on the risk. But I wanted to go back to the analogy you made with self-driving cars. And I think when we started thinking about autonomous cars or self-driving cars about 10 years ago from a policy perspective, one of the questions was, "At what point does risk management in the automotive industry move from being a combination of product safety and insurance and personal liability to a pure product safety question?" In other words, at what point does the human agency reduce to such an extent that it's entirely on the manufacturer?
And it seems to me that we are having the same discussions here, and some of the solutions to high-risk uses in some of the regulatory frameworks that are emerging involve clearly identifying individuals who have a specific responsibility within a specific use case. How do you think about this? Because I think some of the potential gains that... And I would love to then move on to that, because I think that's the positive aspect of these developments. But as AI-enabled systems and tools and automation gather pace, some of the benefits come from being able to do stuff that humans can't do. And so how do we think about this sort of interplay between systems and people? And how do you think, as a policy person involved in product development at the forefront of this industry, about what's right to retain at a sort of human level, human scale, I guess, versus what should be opened up a little bit more broadly? And is it a risk-based approach as we've discussed? Are there clear rules that you think are emerging?
Jennifer Beroshi:
Yeah. I think that's a great question. I think that we're still grappling with a lot of this. The question yes, is like, "What should remain something that we do versus something that we essentially get an AI system or a machine to do?" And I think the answer... A couple of thoughts on that. I think part of the answer is that in many instances, the answer is going to be highly domain-specific. And I think in many instances, I do believe that the party with the most accountability will be the party that is deploying a given AI system. And that's because they will know the context of the deployment. They will know the audience that the AI system is being deployed for. They will know the types of ways that people will be interacting with the system. And again, they will have that knowledge that you need in order to create that accountability from the policy perspective.
At the same time, again, there's still quite a bit that we can do upstream, as we call it, in the AI lifecycle. I do think that there's a relevant question for all of us who are working on AI development, the development of large AI models, to think of, "Okay, but what are the safeguards and measures that we want to be implementing upstream that are perhaps more general, but that will scale? They will scale across the systems. What are the types of values, as we call them, that we want to see reflected in these systems?" And we have to grapple with that. And a lot of those, again, I would say, we're starting to... We have been documenting, and we have been publishing in our own research, how we've been approaching that question. But I think we need a much bigger conversation about, again, when it gets pretty philosophical, what types of decisions might we be willing to outsource to a machine versus keeping with us as humans?
And there's a lot of research that should be done here. My colleagues at Google DeepMind, just very recently, two weeks ago, published a paper that is hundreds of pages long, because that's how thorough an investigation it is, on the question of advanced AI assistants. So again, as these AI systems increasingly act as our agents or as our assistants, what are the types of impacts that we are comfortable with them having versus not? We just need a lot more research there. And then we also need to have this conversation. And I don't think we have one set answer.
David Abecassis:
Yeah. One of the risks there is, can we keep up on the democratic side of things with the technical development, I guess?
Jennifer Beroshi:
Yeah. Absolutely. It's a very difficult challenge to resolve. Again, making sure AI development reflects and is respectful of the different cultural values, the different equities of the many, many people that it's going to impact. I mean, you have a tiny number of people, comparatively, working on AI development when you think about the billions of people it is going to impact. So we certainly need more mechanisms to get that broader participation in, to make sure that we're scaling the ability to take in insights and allow greater... In some ways, it'll be greater customisation and greater controls of a lot of these tools that people are using on a day-to-day basis, so that they're able to use them in a way that is suitable for their lives.
David Abecassis:
In some ways, I think that mirrors a lot of international global governance questions. What do we commonly across many countries maybe at the UN, or what do we agree to have as common values, and then, these points of divergence, where can they be built in to have multiple value systems that reflect maybe the differences between different people, different countries, different value systems?
Jennifer Beroshi:
Yes. The global conversation is absolutely critical. So you have AI, a global technology. It's going to diffuse across borders. It already has, and this impact is really global. On the one hand, again, you want to be thoughtful about how you reflect all of these different value systems. And what you also don't want to have, as you're thinking about things like regulatory frameworks, is a regulatory framework in one country and then perhaps, again, less responsible AI developers just go to a different country and ultimately reach the same number of people.
And I think that this conversation has been taking place more and more in the past couple of years. Again, now you have seen this taken up by fora that include the OECD, the G7 and the UN. I think it's now receiving the right amount of attention, which I find very heartening. You had a safety summit, was it here in the UK? And you have another one that's coming up in South Korea in just a matter of months. And that again, I think, shows that people are assigning it the right level of priority, to at least try to think of, "Okay, how do we think about the governance of advanced AI systems and how do we grapple with it? How do we align on some joint principles?"
I think this question of interoperability in policy frameworks is going to be absolutely key as you have new institutions coming up, and we see countries already setting up things like safety institutes. We have the UK AI Safety Institute here. We have another one that has been set up in the US. We have a couple of other countries, including Canada, that are looking at creating their own. This is great, creating the capacity for considering some of these questions around testing standards for AI systems. But how do you make sure that they all work together? They're now starting a discussion about things like mutual recognition principles, which I think is really great. But this is absolutely a key priority in terms of where the policy conversation needs to go in the next months and years.
David Abecassis:
I think that's going to keep us busy for years to come.
Jennifer Beroshi:
They will keep us busy for a long time.
David Abecassis:
And just to finish on risks, is there a helpful way in which you're thinking about the types of risks? So we've touched upon quite a few different things, but if you wanted to maybe address the characteristics or taxonomy of risks that you're thinking about that could be...
Jennifer Beroshi:
Right, yes. Usually, we think about risks that fall into the domains of misuse of AI and AI accidents, so, again, uses of AI that you might not want to see even if they're not happening for malicious purposes. And we also think about longer-term risks that are very specific to highly advanced systems that become increasingly autonomous and increasingly able to operate independently.
And again, from our perspective, we don't see any contradiction between these domains. You just have to think about all of them, about how you might be contributing to them or how you can help avert them, and build those safeguards into systems at the level that might be appropriate, both AI models and AI applications, and really think about that proactively. There are certain high-risk domains that I would say I spend quite a bit of time thinking about. I think the impacts on privacy and data protection are really critical, and I think we should be thinking about, and monitoring against, potential surveillance uses.
I think this is especially important once you get models that are increasingly multimodal, as we were discussing, and that can make use of these longer context windows, because that would allow someone to do things like put video feeds into an AI system and just extract insights from them at scale.
I also get pretty worried about non-consensual sexual imagery, which sadly is already an issue with the technology that we have today, and about the way that that might be deployed at a much larger scale. I think we need to think about the impact on scams and misinformation at scale. I personally think that misinformation in the near term has been a little bit overhyped. That doesn't mean we shouldn't be paying close attention to it; of course we should be thinking about it. It's an elections year, and we all know that this is highly significant. But there is also the question of evaluating these risks against the risks that already exist in how people interpret information. I do think we should keep that in mind and try to be level-headed about it.
In the longer term, there are certainly potential risks from how AI systems will enable real hyper-personalisation at scale, so misinformation that might be customised to you and persistent, meaning it tracks you and follows you over time. We touched on the need to make sure that we embed different definitions of equity into these AI systems, and that they're helping us make decisions fairly. I think this will be really important.
I do think it's worth thinking about longer-term risks from AI as well. I do worry, in the longer term, about an AI that could take actions that we do not control, that could act in what it would define as its own interests. Those might not be aligned with human values, and that could result in real harm to people. This is why I think it's such a priority for us to advance safety research and practice that addresses near-term and long-term harms at the same time. So really what we want to do is be prepared for what AI systems might do in the future, and make sure we're really manifesting what we want to see.
David Abecassis:
Maybe just to finish, and this is something that I personally am very interested in: if we envision what a positive future enhanced by AI might look like, I think we can bring into this the potential advances that you mentioned in science and medical sciences and so on. We can look at things like accessibility and how multimodal agents could help people who have specific disabilities, for example. And we can also think about enfranchisement and engagement in the Global South and across the digital divide, both within our individual countries and across the world. What's the vision there for a positive AI-enabled future? Is there something that we can distil in a few words?
Jennifer Beroshi:
If I were to distil it in a few words, I would say we see AI as this ultimate tool for humanity. It is the ultimate multiplier for our ingenuity, so it's going to enable us to accelerate science and innovation in almost every domain, and that's what's incredible about it. As it plays out over time, I do believe this is going to be one of those transformations for society that is as significant, if not more significant, than the Industrial Revolution. It might take a while for us to see that adoption, but then it's going to have an incredibly transformative impact.
And I spend a lot of my time thinking about, "How do you measure risk? How do you make sure this is done safely? How do we advance the science of safety?" But don't get me wrong. I am in this field because I think that if we get this right, and we should get this right, it's going to be incredible. I think AlphaFold already gave us a taste of what the potential is, but it was just the beginning.
If you look at just biology, AI can help us understand diseases so much better. It can help us deliver things like personalised medicine and develop therapies that have much lower clinical failure rates. And if you really make drug discovery cost 10 times less, if you think through just the ramifications of that, the ability to bring the benefits of medicine to the whole world, it's truly incredible. And we do think that, over time, some of these generalised tools, like these large language models, could become scientific assistants in their own right.
They could help us discover insights that we are so far from even conceiving of at the moment. There are lots of areas that, at Google DeepMind, we tend to think of as root-node questions: types of insights in these big fields like physics, biology and mathematics where, if you unlock one scientific insight, it helps you unlock hundreds more. That's what we're hoping to do, and that's what I'm excited about. In our society, we have lots of big challenges. Big challenges like climate change. How do we develop more sustainable materials? How do we deal with disease? How do we deal with an ageing population? These are challenges where we almost feel that we need to develop the best tools we possibly can to tackle them, and AI is that great tool.
David Abecassis:
Thank you, Jen. That's great. Your enthusiasm is very contagious. I'm super happy that you were able to make it today.
Jennifer Beroshi:
Thank you.
David Abecassis:
I had a closing question; I don't know if you've thought about this. A lot of stuff is being written and released on AI today, and it's kind of hard, when you're looking at this field, to really know what's most impactful, most important to keep abreast of. If you had two or three recommendations for things that all listeners should really look into, should really be familiar with, what would they be?
Jennifer Beroshi:
There's so much great content about AI being produced these days, which I think is really wonderful. First and foremost, I'll say that if any listeners haven't done it already, it's really worthwhile to spend time with some of these actual chatbot tools and to get a sense of what they can do. They're continuously being updated; they improve by the month. So perhaps someone might have tried them before, but they're now truly very good. And it's very useful just to ask questions, get a feel for how different prompts work, ask these models to summarise documents or to generate images where they can. It's truly useful to stay on top of how these capabilities are developing.
And as I mentioned, I myself will ask these models to generate ideas or summarise documents that I would already be interacting with in my day-to-day work. There are lots of little ways in which you can find that they're helpful for daily activities.
If I'm allowed to make a small plug-
David Abecassis:
Please do.
Jennifer Beroshi:
... for the things we've been working on in our team. On the one hand, Google DeepMind has a blog where we publish our most noteworthy research papers and also updates when we launch new technologies. We try to do that in a way that is accessible and, hopefully, compelling, so I would recommend our blog as a good source of insights into our work.
My team specifically, the Policy Team, has our own little Substack that we call Google DeepMind AI Policy Perspectives. There we look specifically at developments at the intersection of AI research, ethics research and policy work, and try to highlight the developments that are most interesting and relevant from our perspective. So that's also something I'd encourage people to look into.
And then there's an increasing number of podcasts that are so thoughtful about thinking through the societal implications of AI and what we might want to give further thought to. I think Ezra Klein has a couple of now-seminal episodes on AI.
David Abecassis:
AI series.
Jennifer Beroshi:
Yeah, which I think you already know. He's great at taking a step back, thinking through that broader perspective and also pointing people to a whole additional range of diverse content that they can dive into.
David Abecassis:
That's right. I'm a big fan. Jen, thank you so much for joining us.
Jennifer Beroshi:
Thank you.
David Abecassis:
It's been a fascinating conversation. And yeah, I hope we'll have a chance to have you again.
Jennifer Beroshi:
Thank you so much.
Yeah, this was great. Really appreciate it.
David Abecassis:
Thank you.
Paul Jevons:
Jen, David, that was fascinating, and thank you so much for the time. If you would like to automatically receive future episodes, please do subscribe to the Analysys Mason podcast. We also welcome your comments, feedback and reviews. Thank you very much for listening.