MWC24 showed that transport network operators have an opportunity to support their customers’ use of AI

08 March 2024 | Research

Simon Sherrington

Article | PDF (2 pages) | Transport Network Strategies


"Initial thinking about the impact of AI on telecoms transport networks focused on the potential to reduce costs, but it is becoming clear that there may also be an opportunity for operators to support their customers’ AI usage."


Mobile World Congress 2024 (MWC24) was ostensibly a mobile-orientated event, with multiple demonstrations of the latest gadgets (this year including handsets, sunglasses, VR headsets, laptops, automated cars, robots and even electric flying cars) and high visibility of the leading mobile chipset, device and RAN vendors. However, it was also well-attended by the leading providers of the optical, switching and routing platforms that make up the hidden transport layer behind mobile networks.

Transport infrastructure providers were somewhat quieter than in 2023 from a press perspective. Nonetheless, optical systems vendors demonstrated that they are working towards delivering on the range and capacity promises that they made last year. There were also some new product announcements and segment routing over IPv6 (SRv6) slicing demonstrations. However, the most interesting discussions focused on AI and what it could mean for telecoms transport networks.

AI will enable operators to make their transport networks more efficient

It was impossible to escape the AI acronym at MWC24. Indeed, AI has become the new buzzword; in the field of telecoms transport networks, it typically refers to machine learning (ML) and analytics, perhaps combined with some form of automation.

MWC24 showed that the technologies and tools that are available to help operators to run their networks more efficiently are beginning to evolve. Product examples and demonstrations highlighted the potential to use AI to improve:

  • energy efficiency, by identifying items of equipment or ports on transport equipment that can be turned off temporarily to reduce energy usage (a minimal sketch of this idea follows the list)
  • cost efficiency, by turning routes on and off in order to minimise routing, interconnect and peering costs
  • performance management, by analysing the real-time performance of microwave networks and correlating poor performance or outages on links with likely causes
  • process efficiency, by analysing historical faults within the transport network and predicting the likely causes and locations (both geographic and within the technology stack) of new faults in complex, multi-vendor networks.
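To make the first bullet concrete, the sketch below shows how port-utilisation telemetry could be screened to flag candidate ports for temporary power-down. The port names, thresholds and utilisation figures are invented for illustration and are not drawn from any vendor product or demonstration at MWC24.

```python
# Hypothetical sketch: flag transport ports whose recent utilisation is low
# enough that they could be candidates for temporary power-down.
# Port names, thresholds and telemetry values are illustrative only.

from statistics import mean

# Recent utilisation samples per port (fraction of capacity, hypothetical data)
port_utilisation = {
    "core-rtr-1:eth0/1": [0.62, 0.58, 0.66, 0.71],
    "core-rtr-1:eth0/2": [0.03, 0.02, 0.04, 0.01],
    "agg-sw-7:eth1/4":   [0.18, 0.22, 0.15, 0.12],
}

AVG_THRESHOLD = 0.05   # average utilisation below which a port counts as idle
PEAK_THRESHOLD = 0.10  # peak utilisation must also stay low

def power_down_candidates(samples_by_port):
    """Return ports whose average and peak utilisation are both low."""
    candidates = []
    for port, samples in samples_by_port.items():
        if mean(samples) < AVG_THRESHOLD and max(samples) < PEAK_THRESHOLD:
            candidates.append(port)
    return candidates

print(power_down_candidates(port_utilisation))
# ['core-rtr-1:eth0/2']
```

In practice, an operator would combine rules like these with traffic forecasts and protection constraints before switching anything off; the point is simply that the underlying analytics become tractable once telemetry from the transport layer is available.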

Development projects are now underway to use large language models (LLMs) and generative AI to help engineers to troubleshoot issues within networks by providing suggested fixes based on textual questions. These tools will draw on data from historical problem analysis, plus the large corpus of operating manuals provided with network equipment, to give users a natural language interface to identify suitable remediation strategies. Feedback mechanisms will also enable the models to learn and improve their recommendations over time. Once specific patterns are well understood and fixes can be identified with very high confidence, it is anticipated that some problems will eventually be fixed without human intervention.
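A minimal sketch of the retrieval step behind such an assistant is shown below: an engineer's question is ranked against snippets from equipment manuals and past fault reports, and the best matches would then be passed to an LLM to draft a suggested fix. The corpus snippets and the query are invented for illustration; the TF-IDF approach is just one simple way to implement retrieval.

```python
# Hypothetical sketch of the retrieval step behind a natural-language
# troubleshooting assistant: rank snippets from manuals and past fault
# reports against an engineer's question. Snippets and query are invented;
# a production system would feed the top matches to an LLM.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "High pre-FEC bit error rate on a coherent line port usually indicates "
    "fibre attenuation or a dirty connector; clean and re-measure.",
    "Microwave link outages correlated with rainfall suggest inadequate "
    "fade margin; consider adaptive modulation settings.",
    "Repeated LOS alarms after a maintenance window often point to a "
    "mis-patched cross-connect.",
]

query = "line port showing rising bit errors after heavy rain"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus + [query])
scores = cosine_similarity(doc_vectors[-1:], doc_vectors[:-1]).ravel()

# Show the most relevant snippets first
for score, snippet in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {snippet[:60]}...")
```

The same corpus-plus-query pattern applies whether retrieval uses TF-IDF, embeddings or a vector database; logging which suggested fixes engineers accept provides the feedback loop described above.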

Operators will also need to support their customers’ AI usage

It is clear that operators will also need to evolve their transport networks to cope with their customers’ growing use of AI. This presents both a challenge and an opportunity.

A lot depends on where AI functions will be located and where data analysis will take place. Vendors have debated where compute intelligence best resides for as long as the internet has existed. At one end of the spectrum, it has been proposed that all the compute power should be held in the network and that people should have dumb terminals (even dumb computers). At the other end, it has been imagined that networks would be largely dumb and that all the intelligence would reside in devices. The reality is that intelligence has gone everywhere. The same debate is now happening in the context of AI, and the outcome looks likely to be the same: AI will go everywhere, in big data centres, in edge locations, in transport network equipment such as routers and in end-user devices. As if to illustrate this, Fujitsu, NTT, Nvidia and Red Hat used MWC24 to announce a solution for AI at the edge, Qualcomm announced its AI Hub (a library of AI models designed for on-device deployment) and HPE’s CEO positioned his company’s acquisition of Juniper Networks as integral to HPE’s ability to support accelerated compute across thousands of locations.

Understanding the data flows and how they need to be managed is important in the context of AI in telecoms transport networks. Key questions to answer include the following.

  • Where will model training take place?
  • Where will the final developed models reside?
  • How will data get to where it needs to be, and how much will be moving about?
  • How will compute resources be geographically distributed?
  • How time-critical or latency-sensitive will the data flows be at each stage?
  • How time-critical or latency-sensitive will AI-dependent applications be?

Operators need to decide whether they will simply provide the connectivity to support the movement of traffic needed for AI training and decision-making, or whether they will also deploy additional services. Even if they focus exclusively on connectivity, opportunities will emerge to create new business models built around low-latency, very-high-availability connections for very large data flows, perhaps provided on a temporary basis. This suggests a role for high-security slicing through the transport layer. It might make sense for operators to focus on selling wholesale services, or to enable monetisation using APIs that are accessed by AI service providers.
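To illustrate what API-based monetisation of such a connection might look like, the sketch below shows an AI service provider ordering a temporary, low-latency, high-availability transport slice. There is no standardised API assumed here: the endpoint, parameters, site identifiers and authentication scheme are all invented for illustration and do not correspond to any real operator interface.

```python
# Purely illustrative sketch: an AI service provider orders a temporary,
# low-latency, high-availability transport slice from an operator via an API.
# The endpoint, parameters and token are hypothetical; no real operator
# interface or industry standard is implied.

import requests

slice_request = {
    "endpoints": ["dc-frankfurt-03", "edge-munich-17"],  # hypothetical site IDs
    "bandwidth_gbps": 400,           # sized for a bulk training-data transfer
    "max_latency_ms": 5,
    "availability_target": 0.99999,
    "encryption": "layer1",          # in-flight encryption at the optical layer
    "duration_hours": 72,            # temporary: torn down after the transfer
}

response = requests.post(
    "https://api.example-operator.net/v1/transport-slices",  # hypothetical URL
    json=slice_request,
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
)
response.raise_for_status()
print(response.json().get("slice_id"))
```

Whether an interface like this would be exposed directly to AI service providers, or only to wholesale partners who package it into higher-level services, is exactly the business-model question raised above.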

It was quite clear at MWC24 that many telecoms players are now thinking about these issues, galvanised by the sudden emergence of LLMs and generative AI as serious business tools. Discussing what this means for transport network traffic, transport network capabilities and the revenue opportunities for transport network operators will be one of the key themes in Analysys Mason’s Transport Network Strategies programme in the coming few months.


Author

Simon Sherrington

Research Director, expert in fibre infrastructure and sustainability