AI regulation is gaining the strength to contain AI risk
The AI world has been developing in a regulatory ‘wild west’: the industry has been able to flourish under limited constraints. But as use cases emerge and take-up of increasingly capable AI tools spreads, regulation becomes vital. While internet and tech companies have been able to embrace the dynamism of a ‘move fast, break things’ philosophy, the same is not true of telecoms networks. As critical infrastructure, these networks must prioritise resilience and reliability, which acts as a brake on reckless innovation. Regulatory frameworks are now coming into force to strike a balance between dynamic innovation and a safe, fair playing field for developers and users.
Transcript
Hi, my name's James Allen. I head up our regulatory practice, and I mostly work for regulators and operators talking to regulators.
What is the current regulatory landscape for AI in the TMT industry?
The current regulatory landscape for AI in the TMT industry is developing quite strongly. There is, at the moment, very little in the way of specific restrictions on telecoms operators in relation to AI. But there are moves afoot in both the US and the EU for general AI regulation, of which the EU AI Act is just one.
Will future regulatory frameworks address ethical concerns related to the use of AI in the TMT industry?
Ethical concerns will almost certainly be an issue in the regulation of AI. You can see this in at least two ways. The first is the sustainability of the AI models themselves: it takes enormous amounts of power to run the billion-dollar supercomputers needed to train them.
And the second is rights preservation. For example, avoiding discrimination against individuals will be an issue, and people using AI systems will need to show that they are not contravening those rights.
When providing telecoms services, will there be a need to protect consumers?
There will be a need to protect consumers interacting with AI systems of various kinds, and telecoms will only be part of that, but it will be part of that. So if, for example, you're interacting with a chatbot in relation to, I don’t know, signing up for a new contract, there will be a need to make sure that you are not being ripped off when you do that. So, yes.
How can companies ensure that their AI-driven processes are fair, transparent and comply with ethical standards?
Making sure that your processes comply with the relevant standards is going to require oversight. In principle, it is no different from existing process requirements such as quality assurance. There will just be an extra dimension related to the use of AI, concerning the data used to train the models, the models themselves, and the responsibility chain, if you like, running all the way from whoever provides you with these things to the outcomes faced by consumers.
What is your one key message to the TMT industry right now?
One key message to the industry would be that there is a conflict, if you like, between ‘move fast, break things’, which is the headspace of much of the software industry where these major AI developments are coming from, and the telecoms world. Telecoms is a critical infrastructure world: if it breaks, people could get hurt. And regulators will not like it if ‘move fast, break things’ means that people get hurt. So, in general, let's try not to break things.
Bringing human intelligence to AI