Data and AI guru Richard Benjamins argues that intelligent regulation boosts innovative use of GenAI, and explains the pros and cons of three main approaches to leveraging the tech
Until March 2024, Richard Benjamins was Chief Responsible AI Officer at Telefónica and founder of its AI for Society and Environment initiative, where he played a pivotal role in shaping Telefónica’s approach to the ethical use of AI. He is co-founder and CEO of OdiseIA, the Spanish observatory for the ethical and social impacts of AI, founder and Co-CEO of responsible AI startup RAIght.ai, and chair of EIT Digital’s Supervisory Board.
Benjamins was in conversation with Mobile Europe‘s editor, Annie Turner, at our most recent virtual Telco to techco conference. This article briefly highlights only some of the topics we covered – watch the whole session on video now.
We started by discussing GenAI models. Benjamins said he doesn’t think there is much point in telcos creating their own large language models (LLMs), as there are a number of LLMs available that “can be perfectly used by telecommunication operators and any business around the world”.
He thinks a good way to think about LLMs is that they are like the public cloud – “in that most companies across the world will use those services on the cloud. That doesn’t mean that they can’t be profitable or innovative, because they will run [their business] on top of that.”
He warned, “It’s very hard for an individual organisation to keep pace with those big companies that invest a huge amount of money in this technology. If a telco wants to…build such a model by themselves, they need to be aware that they need a huge amount of resources and a huge amount of skills to be better. Otherwise, you have your own language model, but it’s worse in all aspects – security, safeguarding, costs, etc – compared to others.”
Three ways to leverage LLMs
Benjamins went into detail about the three main ways for an enterprise to take an existing model and adapt it to its own data and documents for specific use cases. He discussed the potential benefits and risks of each, including the amount of investment and effort they require. The three approaches are: extended prompting, in which APIs play a key role; retrieval augmented generation (RAG); and fine-tuning an existing large model with the enterprise’s own documentation.
He concluded, “Companies have to experiment [to find] which is better, but shouldn’t forget that extended prompting is the best way to start and to test.”
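To make the distinction concrete, here is a minimal sketch of extended prompting – the approach Benjamins recommends starting with – using the OpenAI Python client as a stand-in for any hosted LLM API. The model name, the document snippet and the question are illustrative assumptions, not details from the session.

```python
# A minimal sketch of "extended prompting": enterprise documents are pasted
# into the prompt at call time, so the model itself is never retrained.
# Assumes the openai package is installed and OPENAI_API_KEY is set; the
# model name, context and question below are illustrative only.
from openai import OpenAI

client = OpenAI()

# In a real deployment this context would come from internal documentation.
enterprise_context = (
    "Tariff FAQ: Plan X includes 50GB of data per month. "
    "Roaming within the EU is included at no extra charge."
)

question = "Does Plan X charge extra for EU roaming?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any hosted chat model would work here
    messages=[
        {"role": "system",
         "content": "Answer using only the context provided."},
        {"role": "user",
         "content": f"Context:\n{enterprise_context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```

RAG automates the same pattern by retrieving relevant snippets from a document index for each query, while fine-tuning bakes the documents into the model’s weights – which is why it demands the most investment and effort of the three.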
Small can be good too
He is a fan of small language models for certain use cases, explaining, “a large language model can have north of 200 billion parameters; a small language model only has in the order of 4 billion, [which] is still [such] a huge amount that we cannot even imagine what it means. But in terms of consumption and in terms of energy, the resources needed and the cost of development, it is much cheaper.”
Benjamins talked about the growing use of GenAI by individuals at work and highlighted some of the dangers, such as those around data privacy. He thinks all use of AI needs awareness and training, as well as transparency – stating in a document, say, that it was largely produced by GenAI. He noted, “In the end you are always accountable as a person for what you send out, even if it’s 100% generated with GenAI.”
Generates info rather than retrieves
He added, “Do a read through of what the system generates, and check whether there are [any] bias elements and, if you detect them, remove them rather than just sending out everything as it comes back. Also, you have to check the facts, because Generative AI, as the word says, generates information rather than retrieves information…oftentimes they are in sync, but sometimes that’s where the hallucinations come [in]. It generates things that could be true, but actually are not true. So you also have to check that and take accountability for that.”
Regulation does not squash innovation
Regulation is sometimes seen as the opposite of innovation – if we have one, the other is stifled – and the debate about this has been particularly heated concerning GenAI. Benjamins said, “The trade-off between regulation, being responsible, and innovation, I think, is largely an overhyped discussion that is used anywhere, at any occasion, without a lot of depth.”
“Let me explain,” he continued. “I think there are many business benefits for being responsible with AI. The first is that investors require it more and more [as part of looking into companies’ Environmental, Social and Governance policies].”
He acknowledged that in the current geopolitical situation there is strong pushback on any regulation, even concerning climate. He said, “I think self-regulation in this respect is becoming even more important in the coming years, even though I think regulation will catch up later again.”
Next, Benjamins argued that employees and customers demand ever more responsible behaviour from companies, and do not focus only on profits but on the company’s impact on society and the environment. Some studies found that more than “65% of employees look at those things, especially if we think about AI and data. And there is an enormous competition for talent. It is very, very hard to get the right talent.” Going about things responsibly makes it easier to attract and keep talent, according to Benjamins.
Thirdly, a highly persuasive argument from Benjamins is that, “If you have a governance model, you have things in place that detect potential problems early on. There is a system in place that can do that with roles, responsibilities, etc. That means that some people, some companies, start to innovate more and faster because…if you have this safety net, people feel safe…it’s like a sandbox. They can experiment, because they know if they do something they shouldn’t, somebody will catch it. And that drives a culture of innovation, because people feel secure in a place where they can innovate. So these are all reasons why having governance and guardrails in place also gives creative people freedom.”
Telecoms in pole position
“I’m very pleased to say that the telecommunications industry is leading in terms of responsible use of AI,” Benjamins states.
He points to the GSMA Responsible AI Maturity Roadmap, launched last September and based on input from many operators, including Telefónica, which helps operators benchmark their progress, understand the whole picture and follow recommendations on how to advance. For example, the highest level, 5, requires a governance model with roles and functionalities to ensure all AI systems are registered and analysed for risks.
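As a sketch of what “registered and analysed for risks” can look like in practice, the record below shows one plausible shape for an AI-system register entry. The fields, risk levels and example system are this article’s assumptions for illustration, not the GSMA roadmap’s actual schema.

```python
# An illustrative AI-system register entry of the kind a governance model
# might maintain. Field names and risk levels are assumed for illustration;
# they are not taken from the GSMA Responsible AI Maturity Roadmap.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    owner: str                  # the accountable role, not just a team
    purpose: str
    risk_level: str             # e.g. "minimal", "limited", "high"
    uses_personal_data: bool
    last_risk_review: date
    mitigations: list[str] = field(default_factory=list)

register = [
    AISystemRecord(
        name="Churn prediction model",
        owner="Head of Customer Analytics",
        purpose="Flag subscribers likely to leave within 90 days",
        risk_level="limited",
        uses_personal_data=True,
        last_risk_review=date(2024, 9, 1),
        mitigations=["annual bias audit", "human review of retention offers"],
    ),
]

# A governance function can then query the register, for example to find
# systems whose risk analysis is more than a year old.
overdue = [r for r in register
           if (date.today() - r.last_risk_review).days > 365]
print([r.name for r in overdue])
```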
He adds, “Being responsible can include regulation, or it can be self-regulation, or it can be international recommendations, like OECD, or Unesco or the Council of Europe – there are many international recommendations out there.”
Mind gyms
In something of an intriguing aside, Benjamins said that as tech has taken the physical labour out of so many daily tasks, we now need to make an effort to keep fit, such as by going to the gym. Benjamins thinks we will need “mind gyms” as we rely more and more on GenAI to do certain types of mental heavy lifting…