How the cloud wars are beginning to enter a new phase

Opinion Imagine a business where the product keeps getting better, with more storage space and higher video quality, yet prices keep falling steadily. It’s happening in cloud computing, and this ‘race to zero’ has people in the industry deeply worried.

The current leader, Amazon Web Services (AWS), has slashed its prices an astounding 44 times in the past six years. Amazon’s strategy, it seems, is a low-cost, high-volume model, and it appears to be working: research company Gartner reports that AWS stores twice as much customer data as the next seven leading public cloud companies. AWS profits in 2016 grew over 120% and sales grew over 60%, extinguishing any assumption that AWS was a loss-leader product for Amazon as a whole.

Amazon is betting that, at prices this low, enterprises will treat cloud services the way shoppers treat products at Walmart: with everything so cheap, you keep adding more to your cart. The other leading cloud providers, such as Microsoft and Google, are keeping up and matching Amazon’s prices, in what Gartner analyst Raj Bala describes as aggressive, even ‘punitive’, pricing.

Aaron Levie, CEO of Box, has predicted that storage will be free and unlimited in the future. If the current race to lower prices continues, his prediction looks likely to come to pass. It also means that cloud companies have to build in functionality beyond storage alone. From security improvements to converged storage, where data storage is combined with compute, cloud companies are looking for related, integrated services they can charge for.

In the race to differentiate, Microsoft rolled out a hybrid cloud in April 2016 that integrates with its subscription-based Office 365 program. Satya Nadella, Microsoft’s CEO, explained how this integration works. “If a certain calculation needs massive resources, the transaction can be handed off to its cloud products to provide a sort of turbo charge,” he said. This strategy could prove very successful: many people already use Office at work, and the ease of integration might be the incentive companies need to choose Azure over AWS.

Google’s Cloud Platform is strong in big data analytics, but it also has constraints: it is unable to integrate with existing data platforms, making it harder for late adopters to convert to the service. As Kurt Marko explains, it can be a ‘poor choice for cloud laggards and organisations looking for a place to offload legacy virtual infrastructure and applications’.

In March this year, IBM announced it was rolling out more cloud offerings in an effort to compete with AWS and to make the point that storage alone is not what enterprises really need from the cloud. One study predicts 85% of enterprise businesses will move to multi-cloud architectures by 2018. IBM’s new products use Watson to help clients manage and leverage data stored across multiple clouds.

One analyst, Rodney Nelson at Morningstar Inc., questioned how effective IBM’s rollout would be, noting that “the major cloud vendors are flooding the market with new features on a consistent basis…all have moderate to major leads on IBM in that regard.” IBM’s most competitive idea may be shifting its focus from offering cloud services to offering services that integrate with all cloud providers.

The intensely competitive cloud computing environment is volatile. New companies, such as the Chinese firm Alibaba, are throwing their proverbial hats into the ring, while current competitors are being pushed out: telecom companies like Verizon and AT&T are losing market share because they have not made the expensive yet impactful infrastructure investments and innovations the market demands.

Many factors will shape the final outcome of the cloud computing industry: the businesses racing to implement and then harness the cloud’s potential, the CIOs actually choosing the cloud providers, the business people learning how to use big data and cloud computing for business intelligence, and the companies fighting for the biggest piece of the pie and investing millions to stay competitive. It’s impossible to predict what that outcome will be.

Keynote: #DevOps and Cloud Craftsmanship | @CloudExpo @CAinc #AI #DX

Five years ago development was seen as a dead-end career; now it’s anything but, with an explosion in mobile and IoT initiatives increasing the demand for skilled engineers. But apart from having a ready supply of great coders, what constitutes true ‘DevOps Royalty’? It will be the ability to craft resilient architectures, with supportability and security everywhere across the software lifecycle. In his keynote at @DevOpsSummit at 20th Cloud Expo, Jeffrey Scheaffer, GM and SVP, Continuous Delivery Business Unit at CA Technologies, will share his vision of true ‘DevOps Royalty’ and how achieving it will take a new breed of digital cloud craftsman, architecting new platforms with a new set of tools. He will also present a number of important insights and findings from a recent cloud and DevOps study, outlining the synergies high-performance teams are exploiting to gain significant business advantage.


Tencent announces data centre expansion including first European sites

Chinese Internet firm Tencent has announced expansion plans for its data centres, opening five new facilities across three continents by the end of this year.

The first of the new data centres, in Silicon Valley, officially opened its doors earlier this week, with future sites planned in Frankfurt, Moscow, Mumbai, and Seoul. The company added that the facilities will be used to serve ‘online games, online finance, video and other Internet-related industries’.

Tencent aims to take the overall number of its overseas data centres to eight with this expansion. The company already has sites in Hong Kong, Singapore and Toronto, as well as operating more than a dozen data centres in mainland China.

“We want to enhance our overseas cloud capability to meet the rising demand from companies around the world as they look for fast, reliable, secure and cost-effective services during the global expansion and migration to the cloud era,” said Rita Zeng, vice president of Tencent Cloud in a statement. “I am confident that we can meet their needs with our technical capability, global network, as well as experience accumulated in serving the massive user-base in our home market.”

The move is similar in focus to that of Alibaba, another Chinese vendor with serious cloud aspirations. The eCommerce provider announced in November last year its plans to open new data centres in Germany – also in Frankfurt – the Middle East, Australia, and Japan by the end of the year.

Last month, Tencent gave another indication of its focus by announcing its cloud services would be beefed up with GPU accelerators from NVIDIA, with the wider aim of giving customers machine learning and natural language processing capabilities by combining traditional CPUs with graphics processing units. The vast majority of leading cloud players are NVIDIA customers, including Amazon Web Services (AWS), Microsoft, Google, and IBM.

CollabNet Named “Bronze Sponsor” of @CloudExpo | @CollabNet #DX #DevOps

SYS-CON Events announced today that CollabNet, a global leader in enterprise software development, release automation and DevOps solutions, will be a Bronze Sponsor of SYS-CON’s 20th International Cloud Expo®, taking place June 6-8, 2017, at the Javits Center in New York City. CollabNet offers a broad range of solutions with the mission of helping modern organizations deliver quality software at speed. The company’s latest innovation, the DevOps Lifecycle Manager (DLM), supports Value Stream Mapping for the development and operations tool chain by offering DevOps Tool Chain Integration and Traceability; DevOps Tool Chain Orchestration; and DevOps Insight and Intelligence. CollabNet also offers traditional application lifecycle management (ALM) for the enterprise through its TeamForge product.


Four nines and failure rates: Will the cloud ever cut it for transactional banking?

Banks looking to take their cloud plans to the next level are likely to have returned to the drawing board following the latest Amazon Web Services outage, which disrupted the online activities of major organizations from Apple to the US Securities and Exchange Commission. One estimate suggests US financial services companies alone lost $160 million – in just four hours. It’s been a timely reminder that any downtime is too much in an always-on digital economy, certainly for financial services.

The sobering point is that AWS was still delivering within the terms of its service-level agreement (SLA). This promises 99.99% service and data availability (otherwise known as “four nines” availability). This may be good enough for a lot of things, but it won’t do for banking. 

Over a year, that 0.01% scope for unavailability equates to nearly an hour of unplanned outage (roughly 53 minutes) – and that’s on top of any planned downtime for maintenance or updates. Combine the two and the total service loss across a 12-month period grows considerably. It’s hardly a recommendation for banks to move critical, live data into the cloud – however compelling the business drivers.

Banks need five nines (99.999%) service and data availability – the level aimed for on their own premises. That’s a downtime tolerance of little more than five minutes per year. And public cloud services are not set up to match that. It would be uneconomical.
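For reference, these downtime budgets follow directly from the SLA percentage: the allowed downtime is simply the unavailability fraction multiplied by the length of the year. A minimal sketch of that arithmetic is below – the figures are the standard conversion, not any provider’s published numbers.

```python
# Annual downtime budget implied by an availability target.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_budget_seconds(availability: float) -> float:
    """Maximum downtime per year, in seconds, allowed by an availability target."""
    return (1.0 - availability) * SECONDS_PER_YEAR

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    minutes = downtime_budget_seconds(availability) / 60
    print(f"{label} ({availability:.3%}): {minutes:.1f} minutes of downtime per year")
```

Run as written, this prints roughly 526 minutes (about 8.8 hours) per year for three nines, 53 minutes for four nines, and 5.3 minutes for five nines.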

The end doesn’t justify the means

Moving real-time transactions into the cloud is the final frontier for traditional regulated financial institutions. And there’s no question that they need, and want, to do this. It’s vastly more cost-efficient, and it’s the only way they can hope to compete with nimble financial upstarts, whose agility owes everything to being able to crunch huge numbers at high speed using someone else’s top-of-the-range server farms.

Financial authorities such as the UK’s Financial Conduct Authority have already accepted the cloud, which on the face of it gives banks the green light to be more ambitious. But not really, because the guidance issued doesn’t bridge the reality gap traditional banks need to get across: service levels remain inadequate for anything beyond data archiving or disaster recovery.

In data archiving and backup applications, the cloud’s appeal hinges on its cost-efficiency, scalability and durability. But durability should not be confused with availability. Even if data is tightly safeguarded, and can be brought back online efficiently after a system crash or other crisis, this adds no value in a live-data scenario. If there is any chance that at some point access may be interrupted, the other merits of cloud don’t matter in this context.

And that’s why banks haven’t made the final leap to using cloud in a production environment – because these otherwise very viable on-demand data centres can’t offer them the very high availability assurances they need.

Lost market opportunity

So banks are stuck. The inability to move core systems and live data into the cloud is costing them competitively in lost market opportunity.

If they could make the leap, it would pave the way for advanced customer analytics, intelligent service automation, complex stock correlations, and predictive fraud detection: data-intensive applications that demand massive computing power – at a scale that their proprietary data centres simply can’t deliver.

But AWS and other mainstream cloud infrastructure providers have designed their services and service level agreements to meet the needs of the majority: where the risk of interrupting a morning’s business, social feeds or even hedge fund activity, though costly, is at least partly offset by huge infrastructure savings.

Remaining open to new options

Banks absolutely need to be more ambitious and creative in their use of the cloud. Their future differentiation depends on having access to the same computing power, speed and flexible resources as their more nimble, less risk-averse competitors. But they are not going to make the transition until the service levels they rely on for core systems can be delivered.

Inadequate service levels are a significant stumbling block, but lessons will be learnt each time a high-profile cloud service is compromised. In the meantime, barriers to what banks need to do can be overcome. Solving the data availability issue comes down to the way data is synchronized between sites (e.g. primary servers and secondary data centres), so that live data is always available in more than one place at the same time. It sounds impossible, but it isn’t.

Achieve this (and at WANdisco we have) and the nines will take care of themselves.
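To make that idea concrete, here is a toy sketch of the general principle: a write is only acknowledged once it has been applied at more than one site, so live data is never held in a single place. This is purely an illustration of synchronous multi-site replication under invented names, not a description of WANdisco’s actual technology.

```python
# Toy illustration only: acknowledge a write once it exists at multiple sites,
# so the data stays readable even if one site goes down. Invented names,
# not a real replication protocol.

class Site:
    """An in-memory stand-in for one data centre."""
    def __init__(self, name):
        self.name = name
        self.store = {}

    def apply(self, key, value):
        self.store[key] = value
        return True  # acknowledge the write

class ReplicatedStore:
    """Acknowledges a write only after enough sites have applied it."""
    def __init__(self, sites, required_acks=2):
        self.sites = sites
        self.required_acks = required_acks

    def write(self, key, value):
        acks = sum(1 for site in self.sites if site.apply(key, value))
        return acks >= self.required_acks

    def read(self, key):
        # Any site holding the key can serve the read, because every
        # acknowledged write already exists at multiple sites.
        for site in self.sites:
            if key in site.store:
                return site.store[key]
        return None

primary = Site("primary-dc")
secondary = Site("secondary-dc")
store = ReplicatedStore([primary, secondary], required_acks=2)

if store.write("account:42", "balance=1000"):
    print(store.read("account:42"))  # still readable if either site is lost
```

In practice the hard problems are ordering, conflict resolution and latency across wide-area links, which is where the real engineering effort lies.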

[session] @VMware to Present at @CloudExpo | @CSWolf #VM #AI #ML #DX

Building a cross-cloud operational model can be a daunting task. Per-cloud silos are not the answer, but neither is a fully generic abstraction plane that strips out capabilities unique to a particular provider. In his session at 20th Cloud Expo, Chris Wolf, VP & Chief Technology Officer, Global Field & Industry at VMware, will discuss how successful organizations approach cloud operations and management, with insights into where operations should be centralized and when it’s best to decentralize.


[session] @JuniperNetworks to Present at @CloudExpo | @BethGage #AI #DX

The age of Digital Disruption is evolving into the next era – Digital Cohesion, an age in which applications securely self-assemble and deliver predictive services that continuously adapt to user behavior. Information from devices, sensors and applications around us will drive services seamlessly across mobile and fixed devices/infrastructure. This evolution is happening now in software defined services and secure networking. Four key drivers – Performance, Economics, Interoperability and Trust – will shape the way users, service providers and the industry create the foundation for the next networking era.


Announcing @HDScorp to Exhibit at @CloudExpo New York | #Cloud #Storage

SYS-CON Events announced today that Hitachi Data Systems, a wholly owned subsidiary of Hitachi LTD., will exhibit at SYS-CON’s 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City.
Hitachi Data Systems (HDS) will be featuring the Hitachi Content Platform (HCP) portfolio. This is the industry’s only offering that allows organizations to bring together object storage, file sync and share, cloud storage gateways, and sophisticated search and analytics to create a tightly integrated, simple and smart cloud-storage solution. HCP provides massive scale, multiple storage tiers, powerful security, cloud capabilities, and multitenancy with configurable attributes for each tenant, all backed by legendary Hitachi reliability. Designed to preserve data for long durations, HCP carries built-in data protection mechanisms and is designed to fluidly evolve as storage technologies change.


[session] @Akamai to Present #IoT and #FogComputing | @ThingsExpo #AI #DX

With billions of sensors deployed worldwide, the amount of machine-generated data will soon exceed what our networks can handle. But consumers and businesses will expect seamless experiences and real-time responsiveness. What does this mean for IoT devices and the infrastructure that supports them? More of the data will need to be handled at – or closer to – the devices themselves.


Nutanix “Platinum Sponsor” of @CloudExpo NY & CA | @Nutanix #DevOps #DX

DevOps is often described as a combination of technology and culture. Without both, DevOps isn’t complete. However, applying the culture to outdated technology is a recipe for disaster; as response times grow and connections between teams are delayed by technology, the culture will die. A Nutanix Enterprise Cloud has many benefits that provide the needed base for a true DevOps paradigm.
