Why cloud IT infrastructure demand continues to fluctuate as 2019 draws to a close

Demand for computer servers, disk storage systems, and networking hardware deployed within enterprise hybrid cloud environments remains strong. Moreover, investment in non-cloud on-premises infrastructure seems assured by the CIO and CTO need to deliver superior security and compliance with IT regulatory requirements in several key industries.

According to the latest worldwide market study by International Data Corporation (IDC), vendor revenue from sales of IT infrastructure products for cloud environments — including public and private cloud — declined 10.2 percent year-over-year in the second quarter of 2019 (2Q19), reaching $14.1 billion.

Cloud IT infrastructure market development

IDC also lowered its forecast for total spending on cloud IT infrastructure in 2019 to $63.6 billion, down 4.9 percent from last quarter's forecast and changing from expected growth to a year-over-year decline of 2.1 percent.

Vendor revenue from hardware infrastructure sales to public cloud environments in 2Q19 was down 0.9 percent compared to the previous quarter (1Q19) and down 15.1 percent year over year to $9.4 billion.

This segment of the market continues to be highly impacted by demand from a handful of hyperscale cloud service providers, whose spending on IT infrastructure tends to have significant upward and downward swings. That ongoing fluctuation creates volatility for the IT infrastructure vendors.

After a strong performance in 2018, IDC expects the public cloud IT infrastructure segment to cool down in 2019 with spending reaching $42 billion — that's a 6.7 percent decrease from 2018. Although it will continue to account for most of the spending on cloud IT environments, its share will decrease from 69.4 percent in 2018 to 66.1 percent in 2019.

In contrast, spending on private cloud IT infrastructure has shown more stable growth since IDC started tracking sales of IT infrastructure products in various deployment environments. In the second quarter of 2019, vendor revenues from private cloud environments increased 1.5 percent year-over-year reaching $4.6 billion. IDC expects spending in this segment to grow 8.4 percent year-over-year in 2019.


Overall, the IT infrastructure industry is at a crossroads in terms of product sales to cloud versus traditional IT environments. In 3Q18, vendor revenues from cloud IT environments climbed over the 50 percent mark for the first time, but have fallen below this important tipping point since then.

In 2Q19, cloud IT environments accounted for 48.4 percent of vendor revenues. For the full year 2019, spending on cloud IT infrastructure will remain just below the 50 percent mark at 49 percent.

Longer-term, however, IDC expects that spending on cloud IT infrastructure will grow steadily and will sustainably exceed the level of spending on traditional IT infrastructure in 2020 and beyond.

Of the three technology segments in cloud IT environments, Ethernet switches are forecast to grow in 2019, while compute platforms and storage platforms are expected to decline.

Ethernet switches are expected to grow by 13.1 percent, while spending on storage platforms will decline by 6.8 percent and compute platforms by 2.4 percent. Compute will remain the largest category of cloud IT infrastructure spending at $33.8 billion.

Sales of IT infrastructure products into traditional (non-cloud) IT environments declined 6.6 percent year-over-year in 2Q19. For the full year 2019, worldwide spending on traditional non-cloud IT infrastructure is expected to decline by 5.8 percent, as the technology refresh cycle that drove market growth in 2018 winds down this year.

By 2023, IDC expects that traditional non-cloud IT infrastructure will only represent 41.8 percent of total worldwide IT infrastructure spending — that's down from 52 percent in 2018. This share loss and the growing share of cloud environments in overall spending on IT infrastructure is common across all regions.

Most regions grew their cloud IT infrastructure revenues in 2Q19. Middle East & Africa was the fastest growing at 29.3 percent year-over-year, followed by Canada at 15.6 percent. Other growing regions in 2Q19 included Central & Eastern Europe (6.5 percent), Japan (5.9 percent), and Western Europe (3.1 percent).

Cloud IT infrastructure revenues declined year-over-year in Asia-Pacific excluding Japan (APeJ) by 7.7 percent, Latin America by 14.2 percent, China by 6.9 percent, and the USA by 16.3 percent.

Outlook for cloud IT infrastructure investment

Long-term, IDC expects spending on cloud IT infrastructure to grow at a five-year compound annual growth rate (CAGR) of 6.9 percent, reaching $90.9 billion in 2023 and accounting for 58.2 percent of total IT infrastructure spend.

Public cloud data centres will account for 66 percent of this amount, growing at a 5.9 percent CAGR. Spending on private cloud infrastructure will grow at a CAGR of 9.2 percent.
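As a sanity check on these projections, the relationship between a base-year figure, a compound annual growth rate, and an end-year figure is simple compounding. A minimal sketch follows; note the implied 2018 base is derived from the article's own numbers, not a figure IDC quotes directly:

```python
def cagr_project(base, rate, years):
    # Compound a base value forward at an annual growth rate.
    return base * (1.0 + rate) ** years

# Implied 2018 cloud IT spend, derived (an assumption, not quoted by IDC)
# from the 2019 forecast of $63.6bn and the expected 2.1 percent decline:
base_2018 = 63.6 / (1 - 0.021)                  # roughly $65.0bn
spend_2023 = cagr_project(base_2018, 0.069, 5)  # five years at a 6.9% CAGR
print(round(spend_2023, 1))                     # roughly 90.7, close to IDC's $90.9bn
```

The small gap against IDC's published $90.9 billion is consistent with rounding in the reported percentages.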

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

Businesses stung by highly convincing Office 365 voicemail scam

Keumars Afifi-Sabet

31 Oct, 2019

Cyber criminals are stealing the login credentials of Microsoft Office 365 users using a phishing campaign that tricks victims into believing they’ve been left voicemail messages.

In the last few weeks, there’s been a surge in the number of employees being sent malicious emails alleging they have a missed call and voicemail message, along with a request to log in to their Microsoft accounts.

The phishing emails also contain an HTML file, which varies slightly from victim to victim; the most recent messages observed even include a genuine audio recording, researchers with McAfee Labs have discovered.

Users are sent fake emails that inform them of a missed call and a voicemail message

When loaded, this HTML file redirects victims to a phishing website that appears to be virtually identical to the Microsoft login prompt, where details are requested and ultimately stolen.

“What sets this phishing campaign apart from others is the fact that it incorporates audio to create a sense of urgency which, in turn, prompts victims to access the malicious link,” said McAfee’s senior security researcher Oliver Devane.

“This gives the attacker the upper hand in the social engineering side of this campaign.”

This Office 365 campaign has made great efforts to appear legitimate, such as designing the phishing site to resemble the Microsoft login page. Another trick the scammers use to appear genuine is prepopulating victims’ email addresses into the phishing site and requesting only the password.

The phishing site appears virtually identical to the actual Microsoft login prompt and preloads victims’ emails

Users are presented with a successful login message once the password is provided, and are then redirected to the office.com login page.

Researchers found three different phishing kits being used to generate the malicious websites: Voicemail Scmpage 2019, Office 365 Information Hollar, and a third unbranded kit without attribution.

The first two kits aim to gather users’ email addresses, passwords, their IP addresses and location data. The third kit uses code from a previous malicious kit targeting Adobe users in 2017, the researchers said, and it’s likely the old code has been reused by a new group.

A wide range of employees across several industries, from middle management to executive level, have been targeted, although the predominant victims are in the financial and IT services fields. There’s also evidence to suggest several high-profile companies have been targeted.

McAfee has recommended as a matter of urgency that all Office 365 users implement two-factor authentication (2FA). Moreover, enterprise users have been urged to block .html and .htm attachments at the email gateway level so this kind of attack never reaches the end user.
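The gateway advice above can be approximated in a few lines. This is a minimal, hypothetical sketch using Python's standard email library to flag messages carrying .html or .htm attachments; a real gateway rule (in Exchange Online or a secure email gateway product) would be configured in that product rather than written by hand:

```python
import email
from email import policy

BLOCKED_EXTENSIONS = {".html", ".htm"}  # extensions McAfee advises blocking

def has_blocked_attachment(raw_message: bytes) -> bool:
    """Return True if any attachment's filename ends in a blocked extension."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    for part in msg.walk():
        filename = part.get_filename()
        if filename and any(filename.lower().endswith(ext)
                            for ext in BLOCKED_EXTENSIONS):
            return True
    return False
```

A gateway would quarantine or strip such messages rather than deliver them; the function above only detects them.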

“We urge all our readers to be vigilant when opening emails and to never open attachments from unknown senders,” the researchers added. “We also strongly advise against using the same password for different services and, if a user believes that his/her password is compromised, it is recommended to change it as soon as possible.”

The use of audio in this campaign points to a greater tenacity among cyber fraudsters, who are adopting more sophisticated social engineering techniques. For example, earlier this year artificial intelligence (AI) combined with voice technology was used to impersonate a business owner and fool his subordinate into wiring £200,000 to a hacker’s bank account.

HCL and Google partner on new cloud business

Bobby Hellard

31 Oct, 2019

HCL Technologies and Google Cloud have announced the launch of a cloud business unit to support enterprise cloud adoption.

This is a dedicated business group within HCL that will be supported by engineering and business teams from Google Cloud.

HCL, an India-based IT and consultancy company that spun out of the R&D division of HCL Enterprises in 1991, said it currently has more than 1,300 professionals trained on the Google Cloud Platform (GCP) and plans to expand this capacity to more than 5,000 specialists in the near future.

Enterprise customers will receive support in areas like containerisation, hybrid and multi-cloud, the companies claimed.

Google Cloud CEO Thomas Kurian said: “The cloud is at the heart of innovation and digital transformation for enterprises, and it unlocks new opportunities for them to tackle their most important challenges.”

“Through our partnership with HCL, we can help organizations deploy Google Cloud broadly and at scale, and move their most critical, data-intensive workloads to GCP.”

Under this joint investment, customers will be able to migrate SAP workloads to the Google Cloud Platform, deploy hybrid and multi-cloud environments with Google Cloud’s Anthos, and adopt AI and machine learning services in areas such as e-commerce, supply chain and marketing.

“HCL and Google have a deep and long-standing relationship, and this new business unit is a strategic step forward in our partnership,” said C Vijayakumar, CEO of HCL.

“I am confident that the Google Cloud Business Unit will accelerate the execution of digital transformation of global organisations as well as incubate new IP and solutions that will redefine the market.”

The unit will also provide details on using Anthos, data and analytics, AI, collaboration with G Suite and more.

This renewed investment in Google’s cloud arm has dented its parent’s overall earnings, with the tech giant missing analysts’ estimates for its third quarter by about $1.7 billion.

Digital Realty to acquire Interxion for $8.4bn in biggest data centre deal ever

Digital Realty has announced it is to acquire Interxion in an $8.4 billion (£6.52bn) transaction which is touted as the largest data centre deal in history.

Interxion, a provider of European colocation data centre services, will join the Digital Realty team to create a ‘leading pan-European data centre presence’. The companies estimate the combined entity will cover more than two-thirds of GDP in Europe, with Interxion currently operating 53 carrier- and cloud-neutral facilities across 11 European countries.

The companies claim various benefits from the transaction: building on Digital Realty’s record in hyperscale development, with its associated benefits for enterprise customers, as well as solving the public and hybrid cloud architectural requirements of a global customer base.

Digital Realty’s CEO, A. William Stein, will serve as chief executive of the combined company, while Interxion CEO David Ruberg will head up the EMEA business.

The deal beats Digital Realty’s own record when it purchased DuPont Fabros for $7.6bn in 2017.

“This strategic and complementary transaction builds upon Digital Realty’s established foundation of serving market demand for colocation, scale and hyperscale requirements in the Americas, EMEA and Asia Pacific and leverages Interxion’s European colocation and interconnection expertise, enhancing the combined company’s capabilities to enable customers to solve for the full spectrum of data centre requirements across a global platform,” said Stein in a statement.

Consolidation of the market was to be expected, according to analyst firm Synergy Research. In a note at the beginning of this year, the company said it expected to see ‘a lot more’ data centre M&A over the coming five years. Analysis of the transactions over the past two years saw a significant dip in overall value in 2018 compared with 2017, despite a greater number of deals.


10 years of DevOps: With the hype cycle moving on – what’s next?

This year marks 10 years since the term DevOps was first coined, during a now legendary presentation at a Toronto tech conference.

Anyone who’s seen the 1999 film Fight Club will know—the first rule of Fight Club is: you don’t talk about Fight Club. All those years ago, IT professionals weren’t part of Fight Club—but a small number formed their own underground DevOps club. But even though the first rule of DevOps Club is, “always talk about DevOps,” it’s taken a decade to catch on in a significant way.

Finally, DevOps doesn’t seem so underground. And like many actually-good-for-you approaches, the small movement of IT professionals pursuing a better vision for operations management is being consumed by the vendor hype cycle. Capitalising on this, many vendors have plastered their websites with DevOps SEO terms to market a multitude of dubious DevOps-in-a-box solutions.

Those truly familiar with the process will already know—DevOps needs to be initiated internally. Its success doesn’t lie in third-party solutions; it lies in people and culture shifts. Despite many companies today claiming they’re “doing DevOps,” plenty still experience the very problems that agile ways of working are meant to solve. In SolarWinds' 2018 IT Trends Report, IT pros named inadequate infrastructure and organisational strategy as the top two barriers to achieving optimal IT performance.

The good news is, hype cycles around trendy IT terms and solutions inevitably come to an end. In the next few years, when vendors and commentators have shifted their attention to the next buzzword, DevOps will be given room to grow organically and reach its true potential.

The waiting game

Once industry commentators have had their field day and moved on from DevOps, the developers who created the concept for themselves in the first place can own it once again. It’s these developers who, from firsthand experience, know about the IT challenges and goals today’s enterprises face. Once DevOps is aligned to address these common problems, it’ll become a concept you can better apply in IT environments.

All enterprises must understand, however, DevOps—real DevOps—will take time. Developers and Ops work in fundamentally different ways, which could spark arguments and disagreements in the first days, even months, of DevOps working. Management should hold their nerve and let these initial difficulties run their course—because once they have, these teams will be well on the way to DevOps nirvana.

DevOps is a process made by technologists, for technologists—external pressures will only hinder the process. Let’s take virtualisation as a case in point. Virtualisation was once the “cool” new concept every enterprise wanted to adopt. Like the situation today, vendors were putting pressure on data centre professionals to prematurely introduce virtualisation with subpar software and servers. Technologists resisted and waited until they were ready—these wholesale technological shifts aren’t simply “bolt-on” solutions. Now we can’t think of IT operations without virtualisation.

Despite its uniquely transformative potential, the adoption of DevOps is ultimately following the same blueprint as many new approaches before it. There have been successful early adopters, and increasingly (some) enterprises have taken up the charge. This suggests we’re moving closer to the hype decline phase, where even late adopters may finally get to enjoy the same longer weekends and faster change rates as the early adherents of a decade ago.

The new standard

But where should DevOps be for you? And what would that look like?

In the next five years, traditional operations teams will discover that at least a few DevOps practices help solve the complex problems thrown up by new technologies. Better still, it’s not necessary to accept the full dogma of DevOps or Agile to see the systems benefits of these new ways of working.

The benefits of these remodeled IT environments are only growing in popularity. A 2018 survey of 2,400 developers and general IT professionals, conducted by the Cloud Native Computing Foundation, found the use of serverless technology had grown 22% in one year. As enterprises look to reap the benefits of digital transformation and cloud native technologies, the adoption rate of DevOps will only grow.

It’s not surprising that cloud native and serverless environments are where DevOps has shown significant early results. Automation of standard tasks and the breaking down of data silos are precisely what’s needed to speed up delivery time and render critical problems easier to solve. Once enterprises begin to witness the impact of DevOps in these high-visibility scenarios, they’re likely to see value and want to extend DevOps ways of working into other areas of IT.

As adoption picks up, IT professionals and developers will increasingly expect all of their tools and technologies to be fully compatible with DevOps. The runaway success of tools like Gradle, Git, and Jenkins reflects the growing number of IT professionals who are prioritising tools that speed up deployment times and facilitate the collaborative ways of working fundamental to DevOps.

It’s not too soon for vendors to start catering to DevOps—in fact, it’s high time they did. The annual Google Cloud Accelerate State of DevOps report analyses data from thousands of IT pros to provide a health check of the DevOps industry. This year’s report found the self-reported number of what Google classes as “elite DevOps performers” has almost tripled, now at 20% of all organisations.

All of these organisations are in part identified by tools that complement how their teams work. The rise of DevOps isn’t just inevitable—it’s evident. Vendors need to stop using “DevOps” as SEO juice and meaningfully incorporate DevOps-friendly features, especially into established enterprise technologies.

Give and take

Bringing about the future of DevOps will require IT professionals to make some unnatural adjustments. They need to learn and embrace automated deployment pipelines and learn at least the basics of code. But fundamentally, IT professionals need to come to terms with their own reservations about automation and experiment. That’s the path to understanding the value that DevOps brings.

Many hype-weary IT professionals might feel threatened by the automated solutions accompanying DevOps, and understandably so. But on checking with peers, most discover that even the automation techniques of DevOps alone present greater career opportunities. Increasingly, IT professionals are becoming accustomed to automating their “work” and realising the skills benefits of doing so. Some even report they’d never go back to waterfall operations. Perhaps it’s simply the reduction of overnight and weekend maintenance windows.

The hype cycle engulfing DevOps is finally subsiding, and this is a great thing. One by one, developers and operations engineers are discovering DevOps isn’t yet another overlay requirement like ISO, but a set of culture and process changes that realigns IT to meet the goals of today’s businesses. When we see a vendor slide mention CDV (Continuously Delivered Value™) we’ll know we’re on the way. IT teams have always understood this—and who doesn’t like faster time to market, greater innovation, and being associated with services users appreciate?

Read more: DevOps learnings: Why every successful marriage requires a solid foundation


Google’s cloud remains on a solid course – even if Alphabet earnings missed expectations

Alphabet posted earnings that missed analyst expectations – yet as the company’s ‘other revenues’ bucket continues to grow, the word from executives on Google Cloud remained positive.

Other revenues, of which Google Cloud is a part – the company still does not show its full hand – reached $6.42 billion (£5.01bn) for Q319, an increase of 38.5% year on year and a rise of almost 4% from the previous quarter.

Profit for the overall business declined 23%, with the earnings of $10.12 per share falling well below Wall Street expectations of $12.42. However, total revenues of $40.5bn were seen as positive, with advertising revenues up 17% from this time last year.

Google Cloud’s highlights in Q3 were varied and legion. In terms of product and footprint, the company continued its European expansion with a launch in Poland last month, while the release of Dataproc on Kubernetes in the same month bolstered Google’s position in container management at the enterprise level. On the partnership front, deals were struck in August with VMware, extending the companies’ collaboration, as well as with enterprise blockchain provider Cypherium.

Alphabet CEO Sundar Pichai was keen to evangelise the gains made by Google Cloud, particularly noting ‘customer momentum across multiple areas under [Google Cloud CEO] Thomas [Kurian’s] leadership’ to analysts.

Pichai elaborated on how Google Cloud customers fit in to other emerging areas when fielding an analyst question around quantum computing, an area in which Google declared ‘supremacy’ last month. “This is an important tool in the arsenal,” said Pichai. “While quantum will take many years to really start making a difference, we want to be at the cutting edge of driving it.

“I do think over time for sure, we do see a lot of interest from Cloud customers, particularly in cutting-edge verticals about quantum computing – so that’s an area where I think [we] will participate in as a business,” added Pichai.

Analysts had previously been asking Google to disclose specific figures for its Cloud business. In Q1, Goldman Sachs analyst Heather Bellini posed that very question, only to get a committed non-committal in response. This is understandable; as each of the cloud infrastructure giants counts its beans differently, specific numbers may invite an apples-versus-oranges comparison. Microsoft continues to give Azure revenues as growth percentages rather than an exact number, while AWS – which does give specifics – hit almost $9bn in its most recent quarter.

You can read the full Alphabet earnings release here.


Cloud investments dent Google’s Alphabet earnings

Bobby Hellard

29 Oct, 2019

Google parent Alphabet’s quarterly earnings were dented by heavy investment in its cloud computing business.

The tech giant missed analysts’ estimates for third-quarter profit by about $1.7 billion, though it beat revenue estimates by about $175 million.

Google is the world’s leading provider of internet search, advertising and video services, and Google Cloud is a key segment of its overall business. Currently, however, its cloud operation is a distant third to rivals AWS and Microsoft’s Azure.

The company has said it will continue to spend on cloud, AI and consumer hardware as it looks to compete in these “new areas”.

“Our businesses delivered another quarter of strong performance, with revenues of $40.5 billion, up 20% versus the third quarter of 2018 and up 22% on a constant currency basis,” said Ruth Porat, CFO of Alphabet and Google. “We continue to invest thoughtfully in talent and infrastructure to support our growth, particularly in newer areas like Cloud and machine learning.”

Net income in Q3 was $7.1 billion, or $10.12 a share, down from $9.2 billion, or $13.06 a share, in the same period a year earlier, the company reported on Monday. According to data compiled by Bloomberg, analysts expected $12.35 a share.

Google has been building data centres, buying equipment and recruiting engineers and salespeople to support its cloud unit. CEO Thomas Kurian was hired at the end of 2018 from Oracle to help in this regard.

Quarterly results for the other major cloud providers have shown the opposite: cloud computing has boosted revenue. Leading the way, AWS has recorded increased earnings annually for the last five years.

Establishing itself in second place, Microsoft has invested heavily in Azure, making a number of shrewd acquisitions this year, and in April announced that Azure’s 73% growth had helped push Microsoft to a $1 trillion market cap.

“In many of these areas we are the new entrant and we create competition, and sometimes the competitive pressures can lead to concerns from others,” CEO Sundar Pichai said.

How to avoid the big upcoming cloud storage problem – which could run you down

When organisations migrate to the cloud they have an application problem: deciding which apps to migrate, in what order, and which ones to reconfigure as cloud-native.

Once in the cloud they have a data problem: budgets that are flat or in decline, and data volumes that are growing exponentially.

Where people go wrong is thinking ONLY about the application problem in advance. All too often when we cross the road we look left or right when we should be looking both left AND right.

It is wrong to think of cloud as a commodity. Cloud price wars have eased and the pace of decline in prices for cloud compute and storage has slowed almost to a halt. The reason for this shift is market maturity, with people having more faith in the cloud model than they once did. 451 Research analysts have said that the cloud has not yet become a commodity and as such, the cloud market is "not highly price-sensitive" at the moment, despite businesses wanting to get the best deals they possibly can.

CIOs are often overly focused on the cost of compute, even though compute prices are no longer falling as fast as they did during the height of the price war. They should also be focusing on the cost of object and block storage. Storage prices may have more scope to fall than compute prices, but if you’re being charged per unit of data and your data is growing exponentially, then you have a problem.

Few, if any, organisations are throwing away any of their old data and new data is being added at an exponential rate – a rate that will only increase with 5G and IoT. This exponential explosion in the volume of data is a real problem.

Many of us are some way down the cloud path. Most of the initial gains that we experienced from moving to the cloud came from the low hanging fruit. Such gains came from transformational projects that could deliver immediate improvements in service or reductions in cost, or that addressed the most immediate challenges at hand.

Typically, though, we put off the biggest challenges, those that would require either organisational transformation, including interdepartmental collaboration and structural reform, or technological transformation, including re-engineering or refactoring applications from the ground up. 

For many organisations, the easy gains have already been realised and the real challenges lie ahead.

Indeed, many of the easy gains came from virtualised applications that could easily be ‘lifted and shifted’ to the cloud and connected to cloud-based block storage. Now, with budgets that are flat or in decline and data volumes that continue to grow, there is a looming crisis relating not only to the ongoing cost of data storage, but also to the cost of both ingress and egress (ingress being the cost of moving data and applications into the cloud; egress the cost of moving anything out of the cloud, or even between regions).
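The squeeze is easy to model. Below is a minimal sketch of how exponentially growing data interacts with a flat budget; all prices and growth rates here are illustrative assumptions, not any provider's actual rates:

```python
def yearly_costs(tb_start, annual_growth, price_per_tb_month,
                 egress_tb_per_year, egress_price_per_tb, years):
    """Projected annual bill for growing storage plus a flat egress workload."""
    costs, tb = [], tb_start
    for _ in range(years):
        costs.append(tb * price_per_tb_month * 12
                     + egress_tb_per_year * egress_price_per_tb)
        tb *= 1 + annual_growth
    return costs

# 100 TB today, data doubling roughly every two years (~41% annual growth),
# with hypothetical prices of $23/TB-month storage and $90/TB egress:
bills = yearly_costs(100, 0.41, 23, 50, 90, 5)
```

Against a flat budget, every year in that list is worse than the last, which is exactly the "looked only left" trap described below.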

Things should be fine for those that ‘looked both ways’ and ensured that such costs were calculated in advance and built into the business case. However, those that ‘only looked left’ will have been hit from the right by unexpected costs that are outpacing the growth of their budgets. Indeed, all too many CIOs have gone from being unintelligent in their use of data in legacy environments to unintelligent in their use of data in the cloud.

If you, like many, have been overly focused on the cost of infrastructure and compute, the easy savings have all been realised already, your data (and with it your cost) has started growing exponentially, and you’re locked in by egress charges, then you’ve got a big problem. Even if you are using existing commercial infrastructure or commodity cloud services to cap infrastructure costs, if your data use is unintelligent or your data is growing fast – both of which are true in many organisations – then your costs will be spiralling in the wrong direction.

The only way out is to 1) rationalise your data and 2) find an intelligent longer-term solution for your data storage that doesn’t lock you in to a single cloud provider and doesn’t include egress or ingress charges.

Thankfully there are multi-cloud storage solutions, such as HPE’s new Cloud Volumes service, that use AI to help you manage your storage intelligently, provide a direct link both to your own on-premises systems and to all the major public cloud providers, and are free of ingress and egress charges (once you’ve bitten the bullet and met the initial one-off charge from your current cloud provider for moving any existing data onto the new platform).


Microsoft beats AWS to $10bn JEDI contract: Defining multi-cloud and analysing administrative influence

Analysis The announcement from the Department of Defense (DoD) on Friday that Microsoft had been awarded the long-delayed $10 billion JEDI (Joint Enterprise Defense Infrastructure) cloud computing contract elicited surprise from many in the industry.

The release confirming the contract to Microsoft made for interesting reading. Transparency was the name of the game: the award was ‘conducted in accordance with applicable laws and regulations’, it ‘cleared review by the GAO and Court of Federal Claims’, with all bids ‘treated fairly and evaluated consistently with the solicitation’s stated evaluation criteria.’

Yet one paragraph up, the DoD notes that the award ‘continues [its] strategy of a multi-vendor, multi-cloud environment… as the department’s needs are diverse and cannot be met by any single supplier.’

Arguably the biggest point of discussion around the entire procurement focused on the single or multi-cloud approach. Writing for this publication in August, David Friend, CEO of cloud storage provider Wasabi, made his opinions on multi-cloud clear.

“We will see the cloud market become increasingly decentralised in the years to come, as more specialist vendors spring up to meet specific customer needs at better prices,” wrote Friend. “We just have to hope the JEDI contract doesn’t feed the giant at the expense of the competition being able to grow.”

The contract award has potentially done that, though perhaps not in the way Friend intended. Noting Amazon’s market leadership in cloud infrastructure, an argument can be made that, on business terms, giving this award to a strong, entrenched second player – as Microsoft is – would facilitate a continued competitive market.

The question remains, however: is this single cloud or multi-cloud? AWS has been running the CIA’s cloud for the better part of half a decade; confirmation arrived in February 2015 that it was running on ‘final operational capability.’

Cloud pundit Bill Mew sees it as the latter given AWS’ other commitment – but criticised the procurement process. “A lot of people were arguing that it should be a multi-cloud bid and therefore open up to a number of different competitors,” Mew told CloudTech. “The DoD argued the reverse – we need one supplier simply because we need the level of tight integration and security.

“I totally buy that if that’s their argument – but then why are they not going to the same supplier the CIA have?” Mew added. “There are going to be hundreds of other government sector contracts coming up. You have to think at an overall strategic level within government – is the single cloud approach the one we’re taking or are we actually going to ascribe a multi-cloud approach where we want a healthy market? And if so, why didn’t we set out right at the outset what the interoperability standards are within that environment?”

Rumour and conjecture have been rife regarding the process behind the contract award. Around the time Oracle’s initial legal challenge over its exit from the process was dismissed, President Trump announced he was looking into the contract, citing – as reported by CNBC – “tremendous complaints from other companies.” According to the same publication on Saturday, former secretary of defence James Mattis claims in a new book that President Trump told him to ‘screw Amazon’ out of the contract.

Mew argues that, should AWS challenge this award – the Washington Post cites one legal analyst who said it was a ‘virtual guarantee’ – its case will be ‘far stronger’ than Oracle’s.

“I’m normally somebody who trusts the system, but there’s already been so much of a mess in terms of this procurement, and we have an administration here who have shown themselves to be not entirely unopen to bias,” he said. “One has to have a level of cynicism. I think it will all come out in time.”

One other fact to consider is around the contract itself. $10 billion is a naturally eye-catching number – Microsoft noted in its financials last week ‘material growth’ in $10 million Azure deals – but the contract has plenty of wiggle room. DoD official communications note a two-year base contract period with $1m guaranteed.

“This is an enormous vouch of credibility for Microsoft and Azure – there’s no taking away from how important this is to them,” said Mew. “However, if you look at the contract, it doesn’t mandate that $1bn is spent every year, it is a very flexible framework.”

A statement from AWS read: “We’re surprised about this conclusion. AWS is the clear leader in cloud computing, and a detailed assessment purely on the comparative offerings would clearly lead to a different conclusion. We remain deeply committed to continuing to innovate for the new digital battlefield where security, efficiency, resiliency, and scalability of resources can be the difference between success and failure.”

When asked about plans to appeal, AWS had not responded by the time of publication.


Hosting online banking in the public cloud a ‘source of systemic risk’ amid rising IT failures

Keumars Afifi-Sabet

28 Oct, 2019

The financial services industry is not doing enough to mitigate a rising volume of IT failures, spurred on by a reluctance to upgrade legacy technology, a parliamentary inquiry has found.

Regulators, such as the Financial Conduct Authority (FCA), are also not doing enough to clamp down on management failures within UK banks, which often use cost or difficulty as “excuses” not to make vital upgrades to legacy systems.

With online banking rising in popularity, the severity of system failures and service outages has also seen an “unacceptable” rise, according to findings published by the House of Commons’ Treasury Select Committee.

The report concluded that the impact of these failures ranges from inconvenience to customer harm, and even to threats to a business’ viability. The lack of consistent and accurate recording of data on such incidents is also concerning.

“The number of IT failures that have occurred in the financial services sector, including TSB, Visa and Barclays, and the harm caused to consumers is unacceptable,” said the inquiry’s lead member Steve Baker MP.

“The regulators must take action to improve the operational resilience of financial services sector firms. They should increase the financial sector levies if greater resources are required, ensure individuals and firms are held to account for their role in IT failures, and ensure that firms resolve customer complaints and award compensation quickly.

“For too long, financial institutions issue hollow words after their systems have failed, which is of no help to customers left cashless and cut-off. And for too long, we have waited for a comprehensive account of what happened during the TSB IT failure.”

MPs launched this inquiry to examine the cause behind such incidents, reasons for their frequency, and what regulators can do to mitigate the damage.

As the report identified, TSB’s IT meltdown during 2018 is the most prominent example of an online banking outage in recent years.

The incident, which lasted several days, was caused by a major transfer of 1.3 billion customer records to a new IT system. A subsequent post-mortem analysis by IBM showed the bank had not carried out rigorous enough testing.

TSB has not been the only institution to suffer banking outages: figures compiled by the consumer watchdog Which? show that customers of major banks suffered 302 outage incidents in the last nine months of 2018. In another prominent incident, NatWest, RBS and Ulster Bank were hit by website outages in August this year.

Beyond the work banks must do to ensure their systems are resilient, the MPs found that regulators must do far more to hold industry giants to account when failures do occur. Poor management and short-sightedness, for example, are key reasons why regulators must intervene to ensure banks aren’t exposing customers to risk due to legacy systems.

When companies embrace new technology, poor management of the transitions required is one of the major causes of IT failure, the report added, with time and cost pressures leading banks to “cut corners”.

Banks themselves, moreover, must adopt an attitude that ensures robust procedures are in place when incidents do occur, treating them not as a possibility but as a probability.


Meanwhile, the use of third-party providers has also come under scrutiny, with the select committee urging regulators to highlight the risks of using services such as cloud providers.

The report highlighted Bank of England statistics showing that a quarter of major banks’ activity, and a third of payment activity, is hosted on the public cloud. This means banks and regulators must think about the implications of concentrating operations in the hands of just a few platforms.

The risks to services of a major operational incident at cloud providers like Amazon Web Services (AWS) or Google Cloud Platform (GCP) could be significant, with the market posing a “systemic risk”. There is, therefore, a case for regulating these cloud service providers to ensure high standards of operational resilience.

The report listed a number of suggestions for mitigating the risk of concentration, but conceded the market is already concentrated and there was “probably nothing the Government or Regulators can do” to reduce this in the short term.

Some measures, such as establishing channels of communication with suppliers during an incident, and building applications that can substitute a critical supplier with another, could go towards mitigating damage.
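The substitution measure the committee describes is, in essence, an abstraction-layer pattern: the application talks to a common storage interface rather than to one vendor’s SDK, so a failing supplier can be swapped or failed over. A minimal, illustrative sketch in Python follows; the class names are hypothetical and `InMemoryProvider` merely stands in for a real cloud client:

```python
from abc import ABC, abstractmethod


class StorageProvider(ABC):
    """Common interface a critical supplier must satisfy."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryProvider(StorageProvider):
    """Hypothetical stand-in for a real cloud provider SDK."""

    def __init__(self, fail: bool = False):
        self.fail = fail  # flip to simulate an outage at this supplier
        self.store = {}

    def put(self, key: str, data: bytes) -> None:
        if self.fail:
            raise ConnectionError("provider unavailable")
        self.store[key] = data

    def get(self, key: str) -> bytes:
        if self.fail:
            raise ConnectionError("provider unavailable")
        return self.store[key]


class FailoverStorage(StorageProvider):
    """Writes replicate to every provider; reads fall back down the list."""

    def __init__(self, providers):
        self.providers = providers

    def put(self, key: str, data: bytes) -> None:
        for p in self.providers:
            try:
                p.put(key, data)
            except ConnectionError:
                pass  # tolerate an outage at one supplier during a write

    def get(self, key: str) -> bytes:
        for p in self.providers:
            try:
                return p.get(key)
            except ConnectionError:
                continue  # try the next supplier in the list
        raise ConnectionError("all providers unavailable")
```

With two providers configured, a read still succeeds after the primary goes down, which is the substitution behaviour the report argues applications should be built for. A production version would also need consistency and reconciliation logic that this sketch omits.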

“This call for regulation and financial levies is a step in the right direction towards holding banks accountable for their actions,” said Ivanti’s VP for EMEA Andy Baldin.

“Some calls to action have already been taken to restrict how long banking services are allowed to be down for without consequence, such as last year’s initiative to restrict maximum outage time to two days. However, the stakes are constantly increasing and soon even this will become unacceptable.

“Banks must adopt new processes and tools that leverage the very best of the systems utilised in industries such as military and infrastructure. These systems have the capability to reduce the two-day maximum to a matter of minutes in the next few years – working towards a new model of virtually zero-downtime.”