How Vodafone is leveraging AI for competitive advantage
In this episode, Analysys Mason's Paul Jevons, Director and expert in tech-enabled transformation, and Scott Petty, Chief Technology Officer at Vodafone Group, discuss how AI is affecting Vodafone and its customers.
Topics covered in this podcast
- How GenAI can enhance productivity, drive business transformation, and foster innovation in product and service development
- The opportunities and challenges that AI brings
- AI governance and ethics
- The benefits of AI for Vodafone’s customers
- The role and importance of partnerships with large language model (LLM) solution providers
Find out more about Analysys Mason's AI-related research and consulting services here.
Read the associated articles: The adoption of AI will be ineffective without the right operating models, Scenarios for GenAI and telecoms in 2033 and AI series: who will be the GenAI winners and losers in the TMT industry?
Transcript
Paul Jevons:
Hello and welcome to this Analysys Mason podcast series dedicated to the topic of artificial intelligence. My name is Paul Jevons, and I am a Director at Analysys Mason. During this series of podcasts, I will be joined by business leaders from across the TMT landscape to hear their thoughts and gather their insights on AI. And we'll be exploring what it means to them, their organisation and the industry.
And today I am delighted to be joined by Scott Petty, who is the Chief Technology Officer for Vodafone Group. Scott has an extensive career in technology and innovation leadership and is currently responsible for the Vodafone Group technology organisation with over 30 000 employees and is also a member of the Vodafone Group Executive Committee. So, Scott, welcome.
Scott Petty:
Thanks, Paul. Great to be here.
Paul Jevons:
Jumping straight in: there has been a lot of media attention on the topic of AI overall, but what does AI actually mean to an organisation like Vodafone?
Scott Petty:
Great question, Paul. We've actually been working on AI since 2016, 2017, mostly in the machine learning area and in predictive AI. So using AI capabilities to predict what's going to happen in our networks and for planning. A lot of the recent interest is really around generative AI, which, in some ways, puts a human interface on AI capabilities and makes it really compelling for people to interact with them.
We think about AI really in three key areas for us as an organisation. The first one is internal: offering a set of tools that make our people more productive, and many of the people listening to this podcast will have experienced some of those. Microsoft Copilot, interfacing into Teams or Microsoft Word, helps you write content faster and be more productive, and I think deploying that tooling across the organisation is important to help people get benefits from generative AI.
The area we have focused on most is business transformation: taking generative AI capabilities, and AI overall, and using them to transform processes within the organisation to make us much, much more effective. That might be in customer care, creating super agents that interface with customers at a much higher level and with much greater quality. A good example is Portugal, where we recently launched a new proactive support capability monitoring 700 000 CPEs; we use that to inform our agents of the Wi-Fi status and of what is happening when a customer calls them, so we're leveraging AI capabilities to help manage our customer experience better.
And then thirdly, we have focused on how generative AI can help us build new products and services that will create new sources of revenue growth for us as an organisation.
Challenges in deploying AI at Vodafone
Paul Jevons:
OK. Obviously there are many dimensions, in particular for an organisation as multifaceted as Vodafone, but there are lots of issues and challenges. What are some of the challenges that need to be addressed when looking at how best to deploy AI?
Scott Petty:
I'll start with the technical challenges and then move on to the more difficult ones. Technically, AI is all about data. It's all about the quality of your data, and having your data in one place, in a data ocean, that you can point a large language model (the main basis of generative AI) at, so that it can be trained and give accurate answers for the information you are trying to use to interface with either customers or people internally. And that's not an easy problem to solve. We have literally thousands of applications, with data in all sorts of formats in all sorts of places, and getting that into a common data model and a common data ocean is a really important piece of work, because it is really the accuracy of that data that defines how accurate the large language model will be, and whether it is prone to some of the things you read about, like hallucinations or difficulty coming up with correct answers.
At the privacy and security level, bias and fairness is a really important topic: making sure that the large language models you leverage can't be unduly influenced to give answers that we would be unhappy about if a human spoke to us that way. It's really important to have the guardrails and security in place to manage that. Privacy is equally important, probably more important than in our traditional IT systems: making sure we comply with privacy legislation and things like GDPR.
And then, ultimately, making sure that our usage of generative AI in our applications is ethical. We have been working with Oxford University since 2022, and we built an AI framework to help us make sure we had the right governance and rules in place: that we not only delivered technically good generative AI solutions, but that they were fair and unbiased, that we could measure our privacy capabilities and, more importantly, that we could explain the answers our generative AI applications gave to a user.
It's really the accuracy of that data that defines how accurate the large language model will be. – Scott Petty
Paul Jevons:
Great, thank you. You touched on it in terms of the customer interface, but how are customers going to benefit from all this innovation and technology? And is it a near-term thing, or is it more mid-to-long term before customers really see some benefit?
Scott Petty:
No, I think it's actually here today, and many of the use cases I'll talk about are live across Vodafone; I think that's true of many organisations. Customers having a direct interface to generative AI is maybe the near-term step, as opposed to what we're doing today. Because of all the things we're trying to learn about large language models (are they accurate? Are they biased? Have we got the guardrails in place?) we're using a model called "humans in the loop". In this model, we use generative AI technologies really as a co-pilot or an assistant that helps a human do their job better.
So think about a call centre agent. When you call into the call centre, they have to read through all of your call history and the information from the times you've called before, and they have to go and search around their FAQs to gather a whole set of information. Generative AI is a super tool for summarising that data, understanding the customer's intent and looking up our technical databases to give better and smarter answers. That lets the agent focus on the quality of the interaction, the empathy of the call and making sure the answers they are giving are accurate. So you're probably using generative AI technology today when you talk to a call centre agent; you just don't realise it, because we're not yet ready to have a direct human interface into that.
The next step, as we get more comfortable with generative AI, is to use what we call "humans on the loop". Humans are not involved in every interaction, but they sit back and monitor the generative AI capabilities, making sure that it is always giving accurate answers and that there is no bias and there are no hallucinations in the way it answers questions. You'll see that in applications like FAQ or Super Search: you log into our website looking for information on an iPhone, and our chatbot and generative AI give you a really human-like interface to the answers they return, but there is still a human monitoring that.
Eventually, we'll get to "human out of the loop," where generative AI is managing the entire interaction with the customer and is learning from those interactions and developing its capabilities. That's probably a little bit further off.
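The three oversight stages Scott describes can be sketched as a simple routing gate: the AI always drafts, and the stage only changes how much a human sees before or after the customer does. This is a minimal, illustrative sketch; all names and the stubbed model call are assumptions for the example, not Vodafone's actual system.

```python
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "agent approves every draft before the customer sees it"
    ON_THE_LOOP = "answers go out directly; a human monitors and audits"
    OUT_OF_LOOP = "fully automated end to end"

# Record of answers sent without per-interaction review, for human monitors.
audit_log: list = []

def draft_answer(question: str) -> str:
    """Stand-in for a real generative-AI call."""
    return f"Suggested answer to: {question}"

def handle(question: str, mode: Oversight, reviewed_ok: bool = False) -> str:
    draft = draft_answer(question)
    if mode is Oversight.IN_THE_LOOP:
        # The AI output is only a co-pilot suggestion; a human must approve it.
        return draft if reviewed_ok else "escalated to a human agent"
    if mode is Oversight.ON_THE_LOOP:
        # Sent straight to the customer, but logged for human monitoring.
        audit_log.append((question, draft))
        return draft
    return draft  # OUT_OF_LOOP: no human involvement
```

Moving from one stage to the next then becomes a policy decision (flip the mode for a given application) rather than a rewrite, which matches the gradual progression described above.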
Importance of partnerships
Paul Jevons:
OK, brilliant. That's really clear, and it's quite an interesting evolution; you've clearly described how that journey will unfold over time. If I change tack slightly, I wanted to cover one other angle: Vodafone itself recently announced a major partnership with Microsoft. I wanted to get your perspective on whether partnerships are a necessary or central part of really exploiting AI well, or whether it's a case-by-case thing. Really, a very general view on the importance of the role of partnerships in this.
Scott Petty:
Sure, let me explain our view on LLMs first; I think that will help answer the question. Broadly, there are three ways an organisation could go about using large language models and building its own generative AI capability. You can use a public LLM, which is what you're using when you use ChatGPT: you log in, you ask a question, and every piece of data you give to that agent or chatbot is used to retrain the model and continue to enhance the LLM. That's great for public services, but it's not really great for private or public companies that want to make sure they meet their privacy requirements, because you're taking customer data and giving it to an external model to train on, and you're not quite sure what is going to happen with that data afterwards.
The second model, and the one we prefer, is a hybrid LLM model. You take an enterprise LLM from a hyperscaler or another developer investing in large language models, and you apply it to your own secure container of data. That means the LLM is trained on your data and answers from your data, but your data is not used to retrain the public LLMs that are available to everyone on the internet. That way, your data stays secure and private.
We happen to believe that training an LLM is a very large, complex and expensive exercise, and therefore for the vast majority of applications that we are looking at, leveraging that investment from a hyperscaler or large organisation on top of their cloud technology makes a lot of sense.
However, we think it's really important to have choice in the LLMs that you apply. So, whilst we've announced the partnership with Microsoft, we also work very closely with Google; in fact, our data ocean is based on Google Cloud Platform (GCP), and we're using Microsoft's LLM technology inside secured containers to access and run LLM functions on that data ocean. But equally, we use Google's Gemini capabilities and some open-source LLMs in the cloud to test and understand the different performance levels, the different accuracies and, of course, the different cost models of each of those LLMs.
The third option is to actually build your own LLM: take an open-source LLM or write the code yourself, build your own AI infrastructure, buy a whole set of NVIDIA GPUs, put them in your data centre and train specifically for that model. Our view is that this industry is far too early in its evolution, and it would be very difficult for a telco to keep pace with the speed of development of LLMs and of AI infrastructure while building and running all of that itself. The speed of change at the hyperscalers means that we really believe in the hybrid enterprise model, applied to our own secure data, and hence those partnerships become critical in that model.
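The hybrid model Scott prefers can be sketched as a small retrieval-plus-generation loop: the model is only ever shown documents pulled from your own secure store, and nothing flows back to retrain a public model. Everything here is a toy stand-in (the document store, the keyword "retrieval" and the stubbed enterprise model call), not Vodafone's actual architecture.

```python
# Hypothetical private data store; in practice this would be the data ocean.
private_docs = {
    "wifi": "Router for customer 42 is online; firmware is up to date.",
    "billing": "Customer 42 was last invoiced on 1 March.",
}

def retrieve(question: str) -> list[str]:
    # Toy keyword match standing in for a real vector search over the store.
    words = set(question.lower().split())
    return [doc for key, doc in private_docs.items() if key in words]

def enterprise_llm(prompt: str) -> str:
    # Stub for a hosted enterprise model invoked inside a secured container;
    # the prompt, and the customer data in it, is not used for retraining.
    return "Answer grounded in:\n" + prompt

def answer(question: str) -> str:
    context = "\n".join(retrieve(question)) or "No relevant internal data."
    return enterprise_llm(f"Context:\n{context}\n\nQuestion: {question}")
```

Because the model choice is isolated behind one function, swapping in a different provider's LLM, as described above with Microsoft, Gemini and open-source models, only changes that one stub, which is one reason keeping choice in LLMs is practical in this pattern.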
We think that it's really important to have choice in the LLM models that you apply. – Scott Petty
Paul Jevons:
Scott, that was really clear. We've come to the end of this session, but I want to say thank you very much indeed for sharing those insights. Obviously this is an ever-changing landscape, and it will be interesting to see how it evolves, but thank you very much for your time and for sharing your views on the topic.
Scott Petty:
Thanks, Paul. Great to be here.
Paul Jevons:
If you would like to automatically receive future episodes, please do subscribe to the Analysys Mason podcasts. We also welcome your comments, feedback and reviews. Thank you very much for listening.