
    Is multi-cloud a sensible strategy for the reasons you think?


    Annie Turner mulls multi-cloud through the lens of the Pentagon spending up to $9 billion with four cloudcos

    The Pentagon put multi-cloud firmly in the spotlight when it awarded contracts collectively worth up to $9 billion for its Joint Warfighting Cloud Capability (JWCC) in December 2022. The JWCC is the multi-cloud successor to the Joint Enterprise Defense Infrastructure (JEDI) – the IT modernisation project awarded solely to Microsoft Azure in 2019 that was supposed to run for 10 years. Its remit was to build a massive, common commercial cloud for the Department of Defense (DoD).

    The choice of provider was controversial from the start. AWS started legal proceedings almost immediately, claiming the award was influenced by President Trump’s very public dislike of Amazon and its founder Jeff Bezos. Other parties expressed concerns about such a big contract going to a single provider, hence JEDI was officially terminated in July 2021 and a multi-cloud approach was taken this time.

    JWCC takes over from JEDI

    The JWCC contracts went to the world’s three largest cloud providers – Alphabet’s Google, Amazon Web Services (AWS) and Microsoft – plus Oracle. The separate contracts will run until 2028 and provide the DoD with “enterprise-wide, globally available cloud services across all security domains and classification levels”, according to the official announcement.

    The idea behind having more than one supplier for government agencies to choose from is that it will help keep prices down and spur innovation. Also, few organisations are more concerned about security, resilience and scale than the DoD and, on the face of it, multi-cloud ticks all those boxes and diversifies risk. But does multi-cloud really deliver?

    The failings of failover

    Ross Brown, SVP of Partners and Services at Oracle, tweeted in December 2021, when the JWCC was still a mote in the Pentagon’s eye: “Failure is inevitable, planning for it shouldn’t be held back because of an anti-customer strategy to hold their systems hostage by artificially high egress and inter region transfer costs to spur single cloud development models.”

    In other words, if one cloud fails, organisations need to have failover to another. What Brown perhaps somewhat disingenuously calls “anti-customer strategy” could also be seen as each cloudco differentiating its offerings with different network architectures and attributes, and varied storage, security and Platform-as-a-Service capabilities. Presumably the government agencies covered by the contract will choose the cloud platform that best meets their needs.

    Opponents of failover argue that, among other things, it imposes an immense burden on application developers, and that running everything in parallel just in case is horrifically expensive, time-consuming and wasteful. One approach to making failover as straightforward as possible is to stick to the lowest common denominator of cloud offerings, but of course this also minimises the advantages and innovation on offer.
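
    To see why, consider the simplest possible version of cross-cloud failover: health-check the primary deployment and route to a secondary if it looks down. The Python sketch below uses hypothetical endpoint URLs; everything it omits – replicating data, keeping two deployments in step, reconciling the providers’ differing APIs – is where the real burden lies.

    ```python
    import urllib.request
    import urllib.error

    # Hypothetical health-check endpoints for the same service deployed
    # on two different clouds (illustrative URLs only).
    PRIMARY = "https://app.primary-cloud.example/healthz"
    SECONDARY = "https://app.secondary-cloud.example/healthz"

    def healthy(url: str, timeout: float = 2.0) -> bool:
        """Return True if the endpoint answers HTTP 200 within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except (urllib.error.URLError, TimeoutError):
            return False

    def active_endpoint() -> str:
        """Route to the primary cloud, falling back if it looks down."""
        return PRIMARY if healthy(PRIMARY) else SECONDARY
    ```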

    Some industry commentators argue that regulators’ enthusiasm for failover stems from a poor understanding of how big public cloud platforms work. Gartner’s Distinguished VP Analyst, Lydia Leong, thinks cloud failover is “almost always a terrible idea” and outlines her reasons in this blog. She likens insisting on failover to another cloud to forcing a commercial airline to maintain a backup fleet of aircraft from a different manufacturer in case a software glitch grounds its main fleet.

    Leong argues, “The huge cost and complexity of a multi-cloud implementation is effectively a negative distraction from what you should actually be doing that would improve your uptime and reduce your risks, which is making your applications resilient to the types of failure that are actually probable.”
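
    Her alternative is cheaper and better targeted: harden applications against the failures that actually dominate, such as transient errors and single-zone outages, within one cloud. A minimal illustrative sketch of one such measure – retrying flaky calls with exponential backoff and jitter – might look like this:

    ```python
    import random
    import time

    def call_with_retries(fn, attempts: int = 5, base_delay: float = 0.2):
        """Retry a flaky call with exponential backoff and full jitter,
        guarding against the transient failures that actually dominate."""
        for attempt in range(attempts):
            try:
                return fn()
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of retries; surface the real error
                # Sleep 0..(base * 2^attempt) seconds to avoid thundering herds.
                time.sleep(random.uniform(0, base_delay * 2 ** attempt))
    ```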

    Reinforcing dominance

    Yulia Gontar, Strategic Growth Executive at Super Protocol, doesn’t think inoperability is the main worry regarding the JWCC, so much as “the threats it may open up.” Super Protocol is built as a massive ecosystem of interoperable solutions and services, with the aim of decentralising cloud computing and giving it back to the community, enabling any party to communicate with any party securely, using confidential computing everywhere (see below).

    Gontar says the Pentagon’s contract will reinforce the immense market dominance of the companies involved. Already, the four cloudcos chosen by the Pentagon control over two-thirds of the global market and just two of them – AWS and Microsoft Azure – account for over 60% of it, according to Gartner and others.

    On the other hand, we have other US government departments, the European Commission and regulators the world over on a mission to curb Big Tech’s overweening market power, which is seen as stifling competition and innovation. For example, on the day this article was completed, the US Department of Justice sued Google over its dominance of digital advertising, signalling its intention to break up the company’s advertising business to counter that dominance.

    Central problem of centralisation

    As well as so much being in so few hands, there is also the issue of central control. “These large public cloud providers have a lot of servers and data centres distributed all over the world but they are all interconnected to one closed platform…and have some central authority that decides what can and cannot be done,” Gontar says.

    On the day this article was completed, Microsoft Azure suffered an outage, potentially affecting millions of people around the globe who couldn’t access applications like Teams and Outlook. At the time of writing it wasn’t clear how many people had been affected, but CNN reported that Microsoft had identified a network connectivity issue with devices across its wide area network, which affected connectivity between clients on the internet and Azure, and between services in data centres.

    A kick up the breaches

    Outages aren’t the only concern about centralisation. Nowadays even the largest data breaches no longer attract the headlines and outrage they used to; instead, they are regarded as a regrettable but unavoidable fact of life. Nor are data breaches only caused by cyberattacks. The deliberate data leak perpetrated by whistle-blower Edward Snowden had nothing to do with cloud, but it underlines what a sitting duck massive, centralised caches of data can be.

    Nor are all leaks deliberate, Gontar points out. In summer 2022, it was reported that details about more than 1 billion Chinese citizens were leaked from the Shanghai Police’s repository on Alibaba cloud, which is part of the Chinese government’s private security network. The cache was offered on a cybercrime forum for 10 Bitcoins, the equivalent then of about $200,000.

    Likewise, “The Microsoft data leak in 2022 was due to the misconfiguration of a server,” she adds. More than 65,000 companies had their data exposed because an endpoint was publicly accessible over the internet without requiring proper authentication. This would seem to undermine a key selling point of cloud: that even if one tenant’s data or other resources are breached, every other tenant remains insulated.
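
    The misconfiguration class is depressingly simple. As an illustration only – not Microsoft’s actual setup – a one-function audit can flag an endpoint that hands data to a request carrying no credentials at all:

    ```python
    import requests  # third-party: pip install requests

    def exposed_without_auth(url: str) -> bool:
        """True if an endpoint serves data to a request with no credentials.

        A correctly configured endpoint should answer 401 or 403 here;
        a 200 means anyone on the internet can read it.
        """
        resp = requests.get(url, timeout=5)  # deliberately no auth header
        return resp.status_code == 200
    ```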

    Yet researchers made a frightening discovery about Microsoft Azure in August 2021, described in Protocol magazine in summer 2022: “They reported gaining access to databases in thousands of customer environments, or tenants, including those of numerous Fortune 500 companies. This was possible because the cloud runs on shared infrastructure – and as it turns out, that can uncover some shared risks that cloud providers thought were solved problems.” And risks that cloud users thought were solved, too.

    Fortunately, those who hacked Microsoft’s Cosmos DB service were not cybercriminals, but researchers from Wiz, a cloud security start-up. They called the vulnerability ChaosDB. According to Shir Tamari, Head of Research at Wiz, a cross-tenant flaw like ChaosDB is “the most severe vulnerability that could be found in a cloud service provider”.

    So far, there has not been a multi-tenancy cyberattack – or at least not one that has been made public – but that could change. A cross-tenant vulnerability was also discovered in Oracle Cloud in September 2022 by some of the same researchers. This weakness would have allowed an attacker to gain read/write access to other customers’ disks, and was mostly caused by a lack of permissions verification in an API for storage expansion.
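
    The bug class is easy to sketch. The code below is purely illustrative, not Oracle’s actual API: the vulnerable endpoint trusts a caller-supplied volume identifier without checking that the volume belongs to the caller’s tenant, while the fixed version refuses anything the caller doesn’t own.

    ```python
    # Illustrative only. Each volume record carries the tenant that owns it.
    VOLUMES = {"vol-1": {"tenant": "alice"}, "vol-2": {"tenant": "bob"}}

    def attach_volume_vulnerable(caller_tenant: str, volume_id: str) -> str:
        # BUG: no ownership check -- any tenant can attach any volume.
        return f"attached {volume_id}"

    def attach_volume_fixed(caller_tenant: str, volume_id: str) -> str:
        volume = VOLUMES.get(volume_id)
        if volume is None or volume["tenant"] != caller_tenant:
            raise PermissionError("volume not found or not owned by caller")
        return f"attached {volume_id}"
    ```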

    Zero-trust approach

    Obviously, security is top of mind for the Pentagon and in 2022, ahead of awarding the $9 billion contracts, the DoD announced it would adopt a zero-trust strategy, which it defines as an “evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets, and resources. At its core, ZT assumes no implicit trust is granted to assets or users based solely on their physical or network location.”
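
    In practice that means authenticating every request on its own merits rather than trusting anything that arrives from ‘inside’ the network. A deliberately minimal sketch – using a shared HMAC key for brevity, where a real deployment would issue per-user or per-workload asymmetric credentials – looks like this:

    ```python
    import hashlib
    import hmac

    # Illustrative shared key only; real zero-trust deployments issue
    # per-user or per-workload asymmetric credentials.
    SIGNING_KEY = b"example-signing-key"

    def authorise(request_body: bytes, signature_hex: str) -> bool:
        """Zero trust in miniature: each request must prove itself.

        Nothing is inferred from the caller's IP address or network
        segment -- an unsigned request from 'inside' is still rejected.
        """
        expected = hmac.new(SIGNING_KEY, request_body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature_hex)
    ```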

    ZT relies on general-purpose computing, which requires confidential computing as the baseline. Confidential computing is technology that isolates and encrypts data while it is being processed, through exclusive control of encryption keys. Data has been protected by encryption at rest (in storage or databases) and in transit for years, but not during processing, or runtime.
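
    The gap is easy to demonstrate. In the conventional model sketched below, using the Python cryptography library, data is encrypted at rest and in transit but must be decrypted into ordinary memory before anything can compute on it – the step confidential computing moves inside a hardware-isolated enclave:

    ```python
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    key = Fernet.generate_key()
    f = Fernet(key)

    ciphertext = f.encrypt(b"patient record")  # protected at rest and in transit

    # To compute on the data, a conventional system must first decrypt it,
    # leaving plaintext in ordinary memory. Confidential computing runs
    # this step inside a hardware-isolated, encrypted enclave instead.
    plaintext = f.decrypt(ciphertext)
    result = plaintext.upper()  # processing happens on exposed plaintext
    ```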

    Confidential computing makes the data itself, and the tech used to protect it, invisible and unknowable to anything and anybody else, including the cloud provider. It is intended to inspire greater confidence about how well data in the public cloud is protected, but it is not universally available nor uniformly deployed, and lacks standards. Work to address these issues is underway in the Confidential Computing Consortium, but AWS, which has about 40% market share, is conspicuous by its absence.

    Confidential computing offers a way to secure data in the public cloud as required by regulations like Europe’s General Data Protection Regulation and the US’ Health Insurance Portability and Accountability Act.

    Gontar concedes that the cloudcos awarded the JWCC Pentagon contracts already offer confidential computing in a sense, but argues that “because they are so large and centralised, with a very long history of developing infrastructure, they would not be able to transform their whole global infrastructure into this kind of confidential continuity quickly and it is not yet in place [holistically]”.

    She also looks ahead to the potential of the metaverse being largely controlled and run on a handful of platforms, and says this means “huge scale personal data, which combines the real and the virtual worlds, including data about behaviours of people in a digital environment. This will pose a significant, even larger threat to people’s privacy and identity if breached.”

    Gontar’s view is that the only way to overcome the potential threats of these big trends is to ensure people own their decentralised, digital identities, and indeed governments, including the US, are moving in that direction. “They have understood and are at the stage of piloting decentralised identity projects. If identities are owned by people themselves and are verifiable and trustworthy, then mass attacks will not happen and the national threat would be much lower,” she says.

    “Unless you become decentralised and include open source you will be exposed to these vulnerabilities and it’s just a question of time before a data leak happens, accidental or malicious.”

    Why trusting nobody is the best option

    Gontar is not arguing for private cloud in preference to public cloud because “with private cloud, you still have massive amounts of sensitive data, which is still vulnerable to attack or a leak or breach without decentralisation”. Super Protocol’s world view is a trustless and permissionless cloud infrastructure where there is no central control and any party can interoperate with any other party, so long as both sides agree. They just develop their own solution and use decentralised IDs.

    In a decentralised, confidential computing cloud, although no-one ‘trusts’ anybody else, parties can work together because they don’t need to share the underlying data, which speeds things up. For example, if a government agency wants to interact with a small business, and a citizen wants to use the services that business supplies on behalf of the government, the firm must verify the user is who they claim to be. In a decentralised environment running open source, this can be done without recourse to the government for documentation.
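
    A rough sketch of that mechanism, assuming an Ed25519-signed credential with an illustrative payload: the issuer signs once and publishes its public key, and any verifier can then check the credential offline, with no call back to the issuer and no central database of citizens to breach.

    ```python
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The issuer (e.g. a government agency) signs the credential once and
    # publishes its public key; the payload and names are illustrative.
    issuer_key = Ed25519PrivateKey.generate()
    credential = b'{"subject": "did:example:123", "over_18": true}'
    signature = issuer_key.sign(credential)
    issuer_public_key = issuer_key.public_key()  # published, widely cached

    # A business verifies the credential offline -- no call back to the
    # issuer, no central database of citizens to breach.
    try:
        issuer_public_key.verify(signature, credential)
        print("credential accepted")
    except InvalidSignature:
        print("credential rejected")
    ```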

    She says, “The whole infrastructure is being developed for everybody, at scale and… is much more advantageous than closed, centralised cloud providers and their markets, and the economies they create and impose on the whole world at the moment.”