If data is a telco’s lifeblood, does it matter who gets a stake in its heart?
Danielle Royston, CEO of TelcoDR, is possibly the industry’s most outspoken champion of the public cloud in building 5G networks.
The cloud has its critics among mobile operators, with technology strategists at BT and T-Mobile both voicing their concerns this week about whether the public cloud should be a ‘core’ competence.
However, Royston’s confidence in the telco cloud is unshakeable, so we asked what lies behind that faith.
What kinds of time and money savings does a mobile operator like Telenor make by working with Google Cloud?
I think Telenor is saying that it’s bringing Google Cloud in and using it for as much as it can, says Royston.
Has Telenor lost control of its core?
The telco is moving its IT systems to Google Cloud, using the hyperscaler’s artificial intelligence (AI) and machine learning (ML) tools to get insights from its data, and developing new solutions. I presume Telenor will be making the most of Google Cloud talent to deliver on this last point. Orange and Google signed a similar deal in July last year to ‘accelerate the transformation of Orange’s IT infrastructure and the development of future cloud services.’ It is easy to announce a strategic partnership, which is why we’ve seen so many. I’ve not noticed a huge amount of movement since, though, so it’ll be interesting to see how Telenor’s cloud transformation unfolds and how quickly.
How does the arrangement compare to the pact between Vodafone and VMware, over its Telco Cloud?
The two are totally different. The Vodafone and VMware pact is very specific: it’s about network workloads running in proprietary data centres. It’s not about moving workloads to the public cloud. Vodafone is using cloud native and virtualization principles for network workloads, whereas the Telenor deal is broader, covering IT, the creation of new software and some network workloads, and it uses the public cloud.
What are the cloud options for a telco?
Telcos can opt for a number of approaches. First, private cloud, which involves using cloud native design principles to build software applications, but running the workloads in a proprietary or non-public cloud environment. As everyone who’s familiar with TelcoDR will know, this is not the advised option for telcos. It’s twice as expensive as doing it ‘the old way’ on premise.
They can also go for the hybrid cloud approach. As the name suggests, this involves using cloud native design principles to build software applications, but running some workloads in a proprietary or non-public cloud environment and some in a public cloud. So, we are talking AWS, Azure, or GCP.
Again, this is not an efficient route to take. On premise workloads are twice as expensive and they add operational complexity. The approach forces people into non-optimal technical decisions in an attempt to make everything work cohesively. It’s the worst of both worlds.
Fake clouds!
Now we get onto the public cloud options. Telcos can choose to work with one public cloud vendor: AWS, Azure, or GCP. Note: there are no other clouds. Despite #fakeclouds like Oracle or IBM trying to convince telcos otherwise, the facts speak for themselves. Just compare the capital expenditure of these ‘clouds’ to the big three: it’s not even close. The benefits of public cloud include the ability to run workloads on compute power managed by one of the three hyperscalers, all of which have spent years and massive sums investing in infrastructure and building out software services. Telcos can attain reductions of 50 per cent or more in the total cost of ownership of their workloads if they use the public cloud properly.
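To put a number on that claim, here is a minimal back-of-the-envelope sketch, assuming a hypothetical $10m annual on-premise TCO and taking Royston’s two-to-one cost ratio at face value:

```python
# Illustrative arithmetic only: the dollar figure is a made-up example,
# and the two-to-one cost ratio is Royston's own rule of thumb from above.
on_prem_tco = 10_000_000            # assumed annual on-premise TCO, in USD
public_cloud_tco = on_prem_tco / 2  # "on premise workloads are twice as expensive"

saving = on_prem_tco - public_cloud_tco
print(f"Annual saving: ${saving:,.0f} ({saving / on_prem_tco:.0%} of on-premise TCO)")
# -> Annual saving: $5,000,000 (50% of on-premise TCO)
```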
The last chance
The final option for telcos is the multi-vendor public cloud model. So, it’s the same as above, but using more than one of the big three hyperscalers. I would not do this either. Some think that this gives you greater resiliency, or negotiating power with your public cloud vendor. What it actually does is give you an impossible-to-manage operational nightmare. Do not complicate what is already a complex operation just for negotiating leverage. You’ll have more power with your public cloud vendor if you consolidate your budget and become a bigger customer for them.
What about the data that has to be converted?
All of a telco’s software needs to be refactored or rewritten for the public cloud. Most of this software was designed decades ago, when the public cloud didn’t exist. If telcos are going to do it right, and not just move their age-old problems around from one place to the next, if they want to save big on Capex/Opex, and if they want to do new things with the technology – they’re going to have to rewrite almost everything. Telcos should settle in for this task: it is not for the faint of heart; Tier 1 telcos take about four to five years to make this move.
Which are the biggest pains in the “aas”?
The biggest problem with moving to the public cloud isn’t a technical one – despite what some telcos think, says Royston. It’s a cultural one. Moving to the public cloud involves a change in the culture of a telco from being network-centric, where everything is about the network, to being customer-centric.
Being network-centric may have worked ten or fifteen years ago, when we used our phones to call and text and the industry was all about ducts and poles. Now, customers expect more. Digital-first and cloud players are delivering on this – and making big profits as a result. Telcos must change. They have to be willing to orient the business around driving the net promoter score (NPS) into the high 60s; that’s where the competition is. It will not be enough just to be better than the next mobile network operator. You need to be as good as Amazon and Apple.
In terms of the challenges concerning application areas, a good way to look at it is to ask: what are the things that are easy to revert if you don’t like the public cloud – the ‘two-way doors’? And what are the ‘one-way doors’, the changes that aren’t easy to revert? To be honest, it’s all a pain in the “aas” to rewrite. But when I talk with strategy officers, I put it into three categories of difficulty: easiest, medium and hardest.
Three degrees of difficulty
Easiest: these are the applications that aren’t mission critical. They might be older and bound up with technology debt; in other words, they’re applications on the decline. Telcos can start to build cloud talent muscle by experimenting with non-critical apps first, getting the hang of pay-by-use billing, for instance, without the risk. Categorise workloads into easy to move, harder to move and hardest to move. Start with the easy ones and work your way up.
Medium: start to expose a hyperscaler tech stack at the edge. This is harder to do and is more of a one-way door, as the investment is too big to revert later. Telcos that follow this path must be pretty sure about the cloud.
Hardest: telcos can use their data to get insights which can double ARPU and send the NPS into the high 60s. Only the most advanced telcos will be able to do this, and only after they’ve done stages one and two and changed their culture as well. You can’t jump straight to stage three.
Time is of the essence
Every telco’s biggest enemy is time, says Royston. Some telcos are already working on stages one and two, whereas others I talk to still think private cloud is the same as public cloud! There’s a lot of learning still to be done.
How does the cloud provider recondition the data so much more quickly?
The cloud provider doesn’t do anything to the data; it’s up to the telco to do the work. The cloud provider simply offers the world’s best tools for anyone to use, from the very smallest telco to the world’s largest. It levels the playing field, offering access to awesome software on a pay-by-usage basis. This is the kind of stuff that historically only the biggest and most successful telcos could access, with vendors locking out smaller players.
The cloud providers offer things like databases, artificial intelligence (AI) software, machine learning (ML) software and analytics software, which any telco can mix and match, experiment with or deploy broadly. It’s instantly scalable, resilient and turnkey, and you can add capacity as you need it. This takes away all of the hard work that was traditionally associated with getting new ideas off the ground. It’s a developer’s paradise.
Telcos no longer have to waste time and resources trying, for instance, to persuade Procurement to ask Finance to order them a server or two to install something on. Instead, they can spin up a server, a database, a service, anything, within hours. This creates the environment for telcos to focus on what matters: their subscribers.
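As a concrete (if hypothetical) illustration, here is what ‘spin up a server’ looks like with the AWS SDK for Python, boto3; the region, AMI ID and tag values below are placeholders, and credentials are assumed to be configured already:

```python
# A minimal sketch of launching a virtual server with boto3 instead of
# raising a procurement request. All identifiers here are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-north-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "project", "Value": "cloud-pilot"}],  # hypothetical tag
    }],
)

print("Launched:", response["Instances"][0]["InstanceId"])
```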
The data dilemma
When telcos grew by acquisition, they amassed endless incompatible data systems inherited from the smaller companies they absorbed. Would they be unsuitable for a cloud data conversion – or would their jumbled systems make them a perfect client for AWS, Azure and Google?
This is a problem all telcos have: loads of technical debt from decades of acquisitions, customisations and integrations. The public cloud is not a panacea for this problem. However, it does (finally!) make it possible – and cost effective – to solve it.
It should be the telcos themselves who solve this problem. One of the most valuable things they have is their data. Today, much of that is siloed, sitting across different systems, on premise, and inaccessible, meaning telcos are unable to use analytics, machine learning models, or AI to mine valuable insights from it.
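As a sketch of the kind of insight mining Royston has in mind, the snippet below runs an ad-hoc ARPU query with Google Cloud’s BigQuery client once the siloed data has been consolidated; the project, dataset, table and column names are all hypothetical:

```python
# A sketch of ad-hoc analytics over consolidated subscriber data using
# BigQuery. No servers to order, no capacity to plan; billing is per query.
from google.cloud import bigquery

client = bigquery.Client()  # assumes credentials and a default project are set

query = """
    SELECT region, AVG(monthly_revenue) AS arpu
    FROM `my-telco.analytics.subscribers`   -- hypothetical consolidated table
    GROUP BY region
    ORDER BY arpu DESC
"""

for row in client.query(query).result():
    print(f"{row.region}: {row.arpu:.2f}")
```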
Would you put your lifeblood in the ether?
If data is a company’s lifeblood, why give it to a vampire that lives in the cloud?
If a telco transfers its data to a public cloud vendor, the vendor doesn’t use the telco’s data. If AWS or Google used a telco’s data, it would be colossally detrimental to their business: they’d lose all trust. In fact, the opposite is true. Think of all the governments and public sector organisations, the banks and financial services, the healthcare providers, the schools and colleges: all of these parties trust the hyperscalers with their data.
The public cloud vendors have several layers of trust to ensure their users, and their end-users and customers, feel secure. These include contracts, biometric security, automatic logging of anything that touches a user’s data, and the ability to request those logs at any time. They also offer users the ability to bring their own encryption key (BYOE), and use custom chips to ensure that when instructions come from other machines, they come from within the data centre. This makes it impossible to insert a thumb drive or attach an external hard drive.
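As one concrete (and hypothetical) instance of customer-managed encryption, the sketch below encrypts and decrypts a record with AWS KMS via boto3; the key alias is a placeholder, the key material never leaves the service, and every use of the key can be logged for audit:

```python
# A sketch of encrypting data under a customer-managed key, one example of
# the key-control features described above. The key alias is a placeholder.
import boto3

kms = boto3.client("kms", region_name="eu-west-1")

ciphertext = kms.encrypt(
    KeyId="alias/telco-subscriber-data",  # hypothetical customer-managed key
    Plaintext=b"subscriber record",
)["CiphertextBlob"]

plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"subscriber record"
```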
A breach would destroy any cloud provider’s business, and as such, they take security and privacy very, very seriously. So, in short: no, they are not giving it away to the vampires!