Gartner warns of competitive landscape shifting on IaaS

Analyst house Gartner has warned that the infrastructure as a service (IaaS) market is in a “state of upheaval”, even as global spending is expected to grow almost 33% in 2015.

According to VP and distinguished analyst Lydia Leong, customers are getting great value out of cloud IaaS, but 2014 was a “year of reckoning” for many IaaS providers, some of whom have been left high and dry after realising their current strategies were failing them.

Leong said: “Cloud IaaS can now be used to run most workloads, although not every provider can run every type of workload well. Cloud IaaS is not a commodity.

“Although in theory cloud IaaS has very little lock-in, in truth cloud IaaS is not merely a matter of hardware rental, but an entire data centre ecosystem as a service,” she added. “The more you use its management capabilities, the more value you will receive from the offering, but the more you will be tied to that particular service offering.”

Richard Davies, CEO of ElasticHosts, argues that despite the growth in IaaS, recent figures from IBM, McKinsey, Google and Gartner show users are still over-provisioning by up to 50% – and that this number could rise further.

“By billing on usage, rather than capacity, users can be charged for the actual capacity they use, rather than the amount they provision,” he said. “As companies grow their investments in public cloud infrastructure, being able to reduce these costs by 50% will make a significant difference to the bottom line.”

Gartner urges buyers to be “extremely cautious” when selecting providers. According to the analyst house’s Magic Quadrant for infrastructure as a service, Amazon Web Services (AWS) dominates, followed by Microsoft’s and Google’s offerings. When other researchers look at the figures – Synergy Research immediately springs to mind – AWS is clearly out in front, with Microsoft in clear second place.

Gartner predicts global spending on IaaS will reach almost $16.5 billion (£10.65bn) in 2015.

Will the next generation of Linux Containers knock load balancers off kilter?

Modern IT infrastructure needs to be highly flexible as the strain on servers, sites and databases grows and shrinks throughout the day. Cloud infrastructure is meant to make scaling simple by effectively outsourcing and commoditising your computing capacity so that, in theory, you can turn it on and off like a tap. However, most approaches to provisioning cloud servers are still based around the idea that you have fixed-size server “instances”, offering you infrastructure in large blocks that must each be provisioned and then configured to work together. This means your infrastructure scaling is less like having a handy tap and more like working out how many bottles of water you’ll need.

There are traditional approaches to ensure all these individual instances work efficiently and in unison (so that those bottles of water don’t run dry or go stagnant); one of the more popular tools for cloud capacity management today is the load balancer. In fact, load balancers are quite often bought alongside your cloud infrastructure. The load balancer sits in front of your servers and directs traffic efficiently to your various cloud server instances. To continue the analogy, it makes sure everyone drinks their fill from the bottles you’ve bought, using each bottle equally, and no one is turned away thirsty. If your infrastructure undergoes more load than you have instances to handle, then the load balancer makes an API call to your cloud hosting provider and more servers are bought and added to the available instances in the cluster. Each instance is a fixed size and you start more of them, or shut some down, according to need. This is known as horizontal scaling.
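
To make that mechanism concrete, the sketch below shows what the scale-out decision loop might look like in Python, assuming a hypothetical cloud_api client and a hypothetical get_average_load() metric; real providers expose this through their own auto-scaling APIs rather than hand-rolled scripts.

```python
# Minimal sketch of horizontal scaling behind a load balancer.
# `cloud_api` and its methods are hypothetical stand-ins for a
# provider's real SDK; thresholds and sizes are illustrative only.
import time
import cloud_api  # hypothetical provider SDK

SCALE_UP_THRESHOLD = 0.75    # add an instance above 75% average load
SCALE_DOWN_THRESHOLD = 0.25  # remove one below 25% average load
MIN_INSTANCES = 2

def autoscale(cluster_id):
    while True:
        instances = cloud_api.list_instances(cluster_id)
        load = cloud_api.get_average_load(cluster_id)  # 0.0 - 1.0

        if load > SCALE_UP_THRESHOLD:
            # Buy a whole new fixed-size instance and register it with
            # the load balancer -- the "large blocks" problem in action.
            new = cloud_api.create_instance(cluster_id, size="4GB")
            cloud_api.register_with_load_balancer(cluster_id, new)
        elif load < SCALE_DOWN_THRESHOLD and len(instances) > MIN_INSTANCES:
            victim = instances[-1]
            cloud_api.deregister_from_load_balancer(cluster_id, victim)
            cloud_api.destroy_instance(victim)

        time.sleep(60)  # re-evaluate every minute
```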

Existing virtualisation technology also allows individual server instances to be scaled vertically after a reboot. A single instance can be resized, on reboot, to accommodate increased load. This would be like going from a small bottle of water to a 5 gallon demijohn when you know that load will increase. However, frequently rebooting a server is simply not an option in today’s world of constant availability, so most capacity management is currently done by adding servers, rather than resizing them.
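
For comparison, a vertical resize under traditional virtualisation might look something like the following sketch, again using a hypothetical cloud_api client; the key point is the stop/resize/start cycle, which is exactly the downtime most production systems cannot afford.

```python
# Hypothetical sketch of vertical scaling with traditional virtualisation:
# the instance must be stopped and rebooted to take on its new size.
import cloud_api  # hypothetical provider SDK

def resize_with_reboot(instance_id, new_size="8GB"):
    cloud_api.stop_instance(instance_id)              # downtime starts here
    cloud_api.set_instance_size(instance_id, new_size)
    cloud_api.start_instance(instance_id)             # downtime ends on boot
```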

However, there are many challenges with this traditional horizontal scaling approach of running multiple server instances behind a load balancer. Having to spin up extra servers to handle spikes in load means greater complexity for those who have to manage the infrastructure, greater cost from having to scale up by an entire server at a time, and poor performance when load changes suddenly and extra servers cannot be started quickly enough. Because computing power is provisioned in these large steps while load varies dynamically and continuously, enterprises are frequently paying to keep extra resources on standby just in case a load spike occurs. For example, if you have an 8GB traditional cloud server which is only running 2GB of software at present, you will still be paying for 8GB of provisioned capacity. Industry figures show that typical cloud servers may have 50% or more of expensive but idle capacity on average over a full 24/7 period.
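
The cost of that headroom is easy to quantify. The toy calculation below uses made-up prices to show how paying for an 8GB instance that averages 2GB of real usage translates into idle spend; only the arithmetic matters.

```python
# Illustrative arithmetic only: prices and sizes are hypothetical.
provisioned_gb = 8
average_used_gb = 2
price_per_gb_hour = 0.01  # hypothetical price
hours_per_month = 24 * 30

capacity_bill = provisioned_gb * price_per_gb_hour * hours_per_month
usage_bill = average_used_gb * price_per_gb_hour * hours_per_month
idle_share = 1 - average_used_gb / provisioned_gb

print(f"Capacity-based bill:    ${capacity_bill:.2f}/month")
print(f"Usage-based bill:       ${usage_bill:.2f}/month")
print(f"Idle capacity paid for: {idle_share:.0%}")  # 75% in this example
```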

The latest developments in the Linux kernel present an interesting alternative to this approach. New kernel capabilities, specifically namespaces and control groups, have enabled the recent rise of containerisation for Linux cloud servers in competition with traditional virtualisation. Container-based isolation, as used by Linux Containers (LXC), Docker and Elastic Containers, means that server resources can be fluidly apportioned to match the load on the instance as it happens, ensuring cost-efficiency by never over- or under-provisioning. Unlike traditional virtualisation, containerised Linux cloud servers are not booted at a fixed size; instead, individual servers grow and shrink dynamically and automatically according to load while they are running.
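
Control groups are what make that live resizing possible: a container's resource limits are just values in the host's cgroup filesystem and can be rewritten while the container runs. The sketch below assumes a cgroup v1 memory hierarchy mounted at the usual path and a hypothetical container group name; exact paths differ between hosts and cgroup versions.

```python
# Minimal sketch: raise a running container's memory limit by writing
# to its cgroup (cgroup v1 layout assumed; paths vary by host).
CGROUP_ROOT = "/sys/fs/cgroup/memory"   # typical cgroup v1 mount point
CONTAINER_GROUP = "lxc/webserver01"     # hypothetical container group

def set_memory_limit(container_group, limit_bytes):
    path = f"{CGROUP_ROOT}/{container_group}/memory.limit_in_bytes"
    with open(path, "w") as f:
        f.write(str(limit_bytes))       # takes effect immediately, no reboot

# Grow the container from 2GB to 6GB while it keeps serving traffic.
set_memory_limit(CONTAINER_GROUP, 6 * 1024**3)
```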

Naturally, there are certain provisos to this new technology. Firstly, as it currently stands, a Linux host can only run Linux-based cloud servers. Also, the benefit of dispensing with a load balancer entirely applies mainly to servers that can scale within the resources of a single large physical host. Very large systems that need to scale beyond this will still require load-balanced clustering, but can also still benefit from vertical scaling of all of the servers in that cluster.

Vertical scaling of containerised servers can therefore handle varying load with no need to pre-estimate requirements, write API calls or, in most cases, configure a cluster and provision a load balancer. Instead, enterprises simply pay for the resources they use, as and when they use them. Going back to our analogy, this means you simply turn the tap on at the Linux host’s reservoir of resources. This is a giant leap forward in commoditising cloud computing and takes it closer to true utilities such as gas, electricity and water.

Dockerize Networking | @DevOpsSummit [#DevOps #Docker #Containers]

While Docker continues to be the darling of startups, enterprises and IT innovators around the world, networking continues to be a real mess. Indeed, managing the interaction between Docker containers and networks has always been fraught with complications. Without automation in networking, the vision of running Docker at scale and letting IT run the same apps unchanged on the laptop, in the data center or on any cloud cannot be realized.
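
As a rough illustration of the moving parts involved, the sketch below uses the docker-py SDK to create a user-defined bridge network and attach containers to it; the image and network names are hypothetical, and this covers only single-host networking, which is precisely why multi-host automation remains the hard part.

```python
# Rough sketch using the docker-py SDK (names are hypothetical).
# This handles single-host networking only; wiring containers across
# hosts and into existing data-centre networks is where the mess begins.
import docker

client = docker.from_env()

# User-defined bridge network so containers can resolve each other by name.
net = client.networks.create("appnet", driver="bridge")

db = client.containers.run("redis:latest", name="db", network="appnet", detach=True)
web = client.containers.run("nginx:latest", name="web", network="appnet", detach=True)

print([c.name for c in client.containers.list()])
```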

OpenStack does some soul searching, finds its core self

Bryce: ‘OpenStack will power the planet’s clouds’

The OpenStack Foundation has announced new interoperability and testing requirements, as well as enhancements to the software’s implementation of federated identity, which the Foundation’s executive director Jonathan Bryce says will take the open source cloud platform one step closer to world domination.

OpenStack’s key pitch, beyond being able to spin up scalable compute, storage and networking resources fairly quickly, is that OpenStack-based private clouds should be able to burst into the public cloud or other private cloud instances if need be. That kind of capability is essential if the platform is to take on the likes of AWS, VMware and Microsoft, but implementations have so far been quite basic.

But for that kind of interoperability to happen you need three things: the ability to federate the identity of a cloud user so permissions and workloads can port over to whatever platforms are being deployed on (and to ensure those workloads are secure); a definition of what vendors, service providers and customers can reliably call core OpenStack, so they can all expect a standard collection of tools, services and APIs to be found in every distribution; and a way to test the interoperability of OpenStack distributions and appliances.
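
In practice that portability ambition boils down to being able to point the same tooling at any compliant cloud. Below is a minimal sketch using the openstacksdk library, assuming two clouds named “private” and “public-burst” are defined in a local clouds.yaml; the names and placeholder IDs are hypothetical and the burst logic is deliberately naive.

```python
# Minimal sketch with openstacksdk: the same code drives two OpenStack
# clouds defined in clouds.yaml ("private" and "public-burst" are
# hypothetical names), which is the interoperability promise in miniature.
import openstack

private = openstack.connect(cloud="private")
public = openstack.connect(cloud="public-burst")

def burst_target(threshold=100):
    """Naive burst decision: overflow to the public cloud once the
    private cloud is running more than `threshold` servers."""
    running = len(list(private.compute.servers()))
    return public if running >= threshold else private

conn = burst_target()
server = conn.compute.create_server(
    name="worker-01",
    image_id="<image-uuid>",       # placeholder IDs; look these up per cloud
    flavor_id="<flavor-uuid>",
    networks=[{"uuid": "<network-uuid>"}],
)
```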

To that end, the Foundation announced a new OpenStack Powered interoperability testing programme, so users can validate the interoperability of their own deployments as well as gain assurances from vendors that clouds and appliances branded as “OpenStack Powered” meet the same requirements. About 16 companies already have either certified cloud platforms or appliances available on the OpenStack Marketplace as of this week, and Bryce said there’s more to come.

The latest release of OpenStack, Kilo, also brings a number of improvements to federated identity, making it much easier to implement as well as more dynamic in terms of workload deployment. Bryce said that over 30 companies have committed to implementing federated identity (which has been available since the Icehouse release) by the end of this year – meaning the OpenStack cloud footprint just got a whole lot bigger.

“It has been a massive effort to come to an agreement on what we need to have in these clouds, how to test it,” Bryce said. “It’s a key step towards the goal of realising an OpenStack-powered planet.”

The challenge is, as the code gets bulkier and as groups add more services, joining all the bits and making sure they work together without one component or service breaking another becomes much more complex. That said, the move marks a significant milestone for the DefCore group, the internal committee in charge of setting base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. The group have been working for well over a year on developing a standard definition of what a core OpenStack deployment is.

Webcast: Scalable Analytics Using Amazon S3 & Snowflake | @CloudExpo [#Cloud]

DoubleDown Interactive, a provider of online free-to-play casino games, needs to process huge amounts of streaming event data from their games and make that data rapidly available to data scientists and analysts. Hear how Amazon S3 and Snowflake’s Elastic Data Warehouse have made it possible for DoubleDown to deploy a reliable, scalable data architecture for their analytics needs.
You’ll learn how Amazon S3 and Snowflake technology help address key challenges in building data pipelines and processing data and how DoubleDown combines Amazon Kinesis, S3, and Snowflake to ingest, store, and process their data faster, more easily, and less expensively.
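
For a sense of what such a pipeline looks like in code, here is a heavily simplified sketch: events go into an Amazon Kinesis stream via boto3, land in S3, and are then loaded into Snowflake with a COPY INTO statement through the Python connector. The stream, bucket, stage and table names are all hypothetical.

```python
# Heavily simplified pipeline sketch; all names are hypothetical and
# error handling, batching and credentials management are omitted.
import json
import boto3
import snowflake.connector

# 1. Ingest: push a game event into a Kinesis stream.
kinesis = boto3.client("kinesis")
event = {"player_id": 42, "action": "spin", "bet": 10}
kinesis.put_record(
    StreamName="game-events",
    Data=json.dumps(event),
    PartitionKey=str(event["player_id"]),
)

# 2. Store: a separate consumer batches records into S3 objects,
#    e.g. s3://game-events-raw/2015/05/... (not shown here).

# 3. Load: bulk-copy the staged S3 files into Snowflake for analysis.
conn = snowflake.connector.connect(user="analyst", password="...", account="acme")
conn.cursor().execute(
    "COPY INTO raw_events FROM @game_events_stage FILE_FORMAT = (TYPE = 'JSON')"
)
```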

TD Bank uses cloud as catalyst for cultural change in IT

Peacock said TD Bank is using the cloud as a way to help catalyse cultural change at the firm

North American retail banking outfit TD Bank is using OpenStack, alongside a range of other open source cloud technologies, to help catalyse cultural change as it looks to reduce costs and technology redundancy, explained TD Bank group vice president of engineering Graeme Peacock.

TD Bank is one of Canada’s largest retail banks, having divested many of its investment banking divisions over the past ten years while buying up smaller American retail banks in a bid to offer cross-border banking services.

Peacock, who was speaking at the OpenStack Summit in Vancouver this week, said TD Bank is in the midst of a massive transition in how it procures, deploys and consumes technology. The bank aims to have about 80 per cent of its estate of 4,000 applications moved to the cloud over the next five years.

“If they can’t build it on cloud they need to get my permission to obtain a physical server. Which is pretty hard to get,” he said.

But the company’s legacy of acquisition over the past decade has shaped the technology and systems in place at the bank, as well as the IT culture and the way those systems and technologies are managed.

“Growing from acquisition means we’ve developed a very project-based culture, and you’re making a lot of transactional decisions within those projects. There are consequences to growing through acquisition – TD is very vendor-centric,” he explained.

“There are a lot of vendors here and I’m fairly certain we’ve bought at least one of everything you’ve ever made. That’s led to the landscape that we’ve had, which has lots of customisation. It’s very expensive and there is little reuse.”

Peacock said much of what the bank wants to do is fairly straightforward: moving off highly customised, expensive equipment and services and onto more open, standardised commodity platforms. OpenStack is but one infrastructure-centric tool helping the bank deliver on that goal (it is using it to stand up an internal private cloud). But to reach its goals the company also has to deal with other legacies of its recent string of acquisitions, including the fact that its development teams are still quite siloed.

In order to standardise and reduce the number of services the firm’s developers use, the bank created an engineering centre in Manhattan and selected a team of engineers and developers (currently numbering 30, but expected to hit roughly 50 by the end of the year) spread between Toronto and New York City, all focused on helping it embrace a cloud-first, slimmed-down application landscape.

The centre and the central engineering team work with other development teams and infrastructure specialists across the bank, collecting feedback through fortnightly Q&As and feeding that back into the solutions being developed and the platforms being procured. Solving developer team fragmentation will ultimately help the bank move forward on this new path sustainably, he explained.

“When your developer community is so siloed you don’t end up adopting standards… you end up with 27 versions of Softcat. Which we have, by the way,” he said.

“This is a big undertaking, and one that has to be continuous. Business lines also have to move with us to decompose those applications and help deliver against those commitments,” he added.

Blue Box Builds Enterprise Edition | @CloudExpo @BlueBox [#Cloud]

Blue Box has launched a new enhanced version of its Blue Box Cloud product, targeting enterprise IT buyers who have high demands with regard to performance and service levels.
Blue Box Cloud Enterprise edition complements the standard edition of Blue Box Cloud. The Enterprise Edition features Dell PowerEdge R630 servers running the Intel® Xeon® processor E5-2600 v3 family with hyper-threading. Blue Box Cloud Enterprise servers offer 96 physical cores and 96 hyper-threads, for an effective 192 cores each. This configuration also offers double the RAM and storage for the hypervisor. Dell also provides the servers for object storage, running OpenStack Swift. Block storage, based on OpenStack Cinder, is provided by Nimble Storage with its feature-rich, high-performance Adaptive Flash platform.

Architecting with @CloudFoundry By @RagsS | @DevOpsSummit [#DevOps]

As the world moves from DevOps to NoOps, application deployment to the cloud ought to become a lot simpler. However, applications have been architected with much tighter coupling than they need to be, which makes deployment in different environments and migration between them harder. The microservices architecture, which is the basis of many new-age distributed systems such as OpenStack and Netflix, is at the heart of CloudFoundry – a complete developer-oriented Platform as a Service (PaaS) that is IaaS agnostic and supports vCloud, OpenStack and AWS.
In his session at 16th Cloud Expo, Raghavan Srinivas, an Architect/Developer Evangelist for EMC, will discuss the microservices architecture in detail and how to architect applications for the cloud in general and CloudFoundry in particular. He will harness the power of dependency injection that Spring provides to pick a variety of data sources and services.
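
One concrete piece of that loose coupling is how Cloud Foundry hands bound services to an application at runtime through the VCAP_SERVICES environment variable, so the code never hard-wires a data source. Below is a small Python sketch of reading that binding; the service name and credential keys are illustrative, and the talk itself uses Spring’s dependency injection to achieve the same effect on the JVM.

```python
# Small sketch: resolve a bound data source from Cloud Foundry's
# VCAP_SERVICES environment variable instead of hard-coding it.
# The "user-db" service name and credential keys are illustrative.
import json
import os

def bound_credentials(service_name):
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for instances in services.values():
        for instance in instances:
            if instance.get("name") == service_name:
                return instance["credentials"]
    raise LookupError(f"No bound service named {service_name!r}")

creds = bound_credentials("user-db")
db_uri = creds.get("uri")  # e.g. postgres://user:pass@host:5432/users
print(f"Connecting to {db_uri}")
```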

IBM, Deloitte to jointly develop risk management, compliance solutions

IBM and Deloitte are partnering to use big data for compliance in financial services

IBM and Deloitte are partnering to develop risk management and compliance solutions for the financial services sector, the companies said this week.

The partnership will see Deloitte offer up its financial services and risk management consulting expertise to help IBM develop a range of cloud-based risk management services that combine the technology firm’s big data analytics and Watson-as-a-Service cognitive computing platform.

Deloitte will also work with joint customers to help integrate the solutions into their organisation’s technology landscape.

“The global enterprise risk management domain is undergoing significant transformation, and emerging technologies like big data and predictive analytics can be used to address complex regulatory requirements,” said Tom Scampion, global risk analytics leader at Deloitte UK. “We are excited to be working with IBM to apply their market leading technologies and platforms to enable faster, more insightful business decisions. This alliance aims to completely re-frame and re-shape the risk space.”

“Financial services firms are under tremendous pressure, which has forced them to spend the majority of their IT budgets addressing regulatory requirements.  There is an opportunity to transform the approach organizations are taking and leverage the same investments to go beyond compliance and deliver real business value,” said Alistair Rennie, general manager of analytics solutions, IBM. “Combining [Deloitte’s] knowledge with our technology will provide our clients with breakthrough capabilities and deliver risk and regulatory intelligence in ways previously not possible.”

Deloitte’s no stranger to partnering with large incumbents to bolster its appeal to financial services clients. Last year the company partnered with SAP to help develop custom ERP platforms based on HANA for the financial services sector, and partnered with NetSuite to help it target industry verticals more effectively.

Why Hybrid Cloud Is Tough to Manage By @DerekCollison | @CloudExpo [#Cloud]

We’ve now entered the proclaimed “Year of the Hybrid Cloud,” during which more than 65 percent of enterprise IT organizations say they will commit to hybrid cloud technologies, according to IDC FutureScape for Cloud. Hybrid cloud is attractive because organizations believe they can achieve greater levels of scalability and cost-effectiveness by using a combination of in-house IT resources and public cloud environments tailored to their unique needs.
But it’s important for enterprises to understand that this new approach to IT architecture presents several significant management problems. In fact, hybrid cloud can result in greater IT complexity and cost, since each environment brings its own tools for deployment, management and monitoring, and companies will have to hire more employees who specialize in each type of cloud technology.
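
A common way to contain that tool sprawl is to wrap each environment behind one thin internal interface, so only the adapters know about provider-specific SDKs. The sketch below is illustrative only, with hypothetical onprem_api and publiccloud_api clients standing in for the real on-premises and public cloud APIs.

```python
# Illustrative sketch of hiding per-environment tooling behind one
# interface; `onprem_api` and `publiccloud_api` are hypothetical SDKs.
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    @abstractmethod
    def deploy(self, app_name: str) -> str: ...

    @abstractmethod
    def status(self, app_name: str) -> str: ...

class OnPremAdapter(CloudAdapter):
    def deploy(self, app_name):
        import onprem_api                      # hypothetical in-house SDK
        return onprem_api.provision_vm(app_name)

    def status(self, app_name):
        import onprem_api
        return onprem_api.vm_state(app_name)

class PublicCloudAdapter(CloudAdapter):
    def deploy(self, app_name):
        import publiccloud_api                 # hypothetical provider SDK
        return publiccloud_api.launch(app_name)

    def status(self, app_name):
        import publiccloud_api
        return publiccloud_api.describe(app_name)

def deploy_everywhere(app_name, adapters):
    # The calling code stays identical regardless of which clouds are in play.
    return {type(a).__name__: a.deploy(app_name) for a in adapters}
```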
