Category Archives: Infrastructure as a Service

CIF: Enterprises still struggling with cloud migration

UK enterprises are still struggling with their cloud migrations, the CIF research shows

The latest research published by the Cloud Industry Forum (CIF) suggests over a third of UK enterprise IT decision-makers believe cloud service providers could have better supported their migration to the cloud.

The CIF, which polled 250 senior IT decision-makers in the UK earlier this year to better understand where cloud services fit into their overall IT strategies, said it is clear UK businesses are generally satisfied with their cloud services and plan to use more of them. But 35 per cent of those polled also said their companies still struggle with migration.

“The transition to cloud services has, for many, not been as straightforward as expected. Our latest research indicates that the complexity of migration is a challenge for a significant proportion of cloud users, resulting in unplanned disruption to the business,” said Alex Hilton, chief executive of the Cloud Industry Forum.

“There may be a case that cloud service providers need to be better at either setting end user expectations or easing the pain of migration to their services. But equally, it’s important that end users equip themselves with enough knowledge about cloud to be able to manage it and ensure that the cloud-based services rolled out can support business objectives, not hinder them.”

Piers Linney, co-chief executive of Outsourcery, said the research highlights the need for providers to develop a “strong integrated stack of partners.”

“IT leaders looking for a provider should first assess their existing in-house skills and experience to understand how reliant they will be on the supplier to ensure a smooth transition. Equally, cloud suppliers need to be more sensitive to their customers’ requirements and tailor their service to the level of support needed for successful cloud adoption,” he said.

“The most critical factor is for IT leaders to really get under the bonnet of their potential cloud provider, make sure that they have a strong and highly integrated stack of partners and a proven track record of delivery for other customers with needs similar to their own.”

iomart buys cloud consultancy SystemsUp for £12.5m

iomart is buying IT consultancy SystemsUp for an estimated £12.5m

UK cloud service provider iomart announced it has entered into a deal to acquire IT consultancy SystemsUp, which specialises in designing and delivering cloud solutions, for up to £12.5m.

The deal will see iomart pay £9m in an initial cash consideration for the London-based consultancy with a contingent consideration of up to £3.5m depending on performance.

iomart said the move would broaden its cloud computing expertise. SystemsUp designs and delivers solutions made to run on Google, AWS and Microsoft public clouds among other platforms, and specialises in public sector cloud strategies.

“The market for cloud computing is becoming incredibly complex and the demand for public cloud services is increasing at pace,” said Angus MacSween, chief executive of iomart. “With the acquisition of SystemsUp, iomart has broadened its ability to engage at a strategic level and act as a trusted advisor on cloud strategy to organisations wanting to create the right blend of cloud services, both public and private, to fit their requirements.”

While iomart offers its own cloud services, the company seems to recognise the need to build up skills in a range of other platforms; it said SystemsUp will remain an “impartial, agnostic, expert consultancy.”

Peter Burgess, managing director of SystemsUp, said: “We have already built up a significant reputation and expertise in helping organisations use public cloud to drive down IT costs and improve efficiency. As part of iomart we can leverage their award winning managed services offerings to deepen and widen our toolset to deliver a broader set of cloud services, alongside continuing to deliver the strategic advice and deployment of complex large public and private sector cloud projects.”

The move comes six months after iomart’s last acquisition, when the company announced it had bought ServerSpace, a rival cloud service provider, for £4.25m.

Alibaba announces partner programme to boost cloud efforts

Alibaba’s partner programme will help it expand internationally

Alibaba’s cloud division Aliyun has launched a global partnership programme aimed at bolstering global access to its cloud services.

The company’s Marketplace Alliance Program (MAP) will see it partner with large technology and datacentre operators, initially including Intel, Singtel, Meraas, Equinix and PCCW among others, to help localise its cloud computing services and grow its ecosystem.

“The new Aliyun program is designed to bring our customers the best cloud computing solutions by partnering with some of the most respected technology brands in the world. We will continue to bring more partners online to grow our cloud computing ecosystem,” said Sicheng Yu, vice president, Aliyun.

Raejeanne Skillern, general manager of cloud service provider business at Intel, said: “For years Intel and Alibaba have collaborated on optimizing hardware and software technology across the data center for Alibaba’s unique workloads. As a partner in Aliyun’s Marketplace Alliance Program, Intel looks forward to continuing our collaboration to promote joint technology solutions that are based on Intel Architecture, specifically tailored to the rapidly growing market of international public cloud consumers.”

The move is part of Alibaba’s efforts to rapidly expand its presence internationally. This year the company opened its first datacentre in the US, and just last week announced Equinix would offer direct access to its cloud platform globally. The company, often viewed as the Chinese Amazon, also plans to set up a joint venture with Meraas in Dubai that specialises in systems integration with a focus on big data and cloud-based services.

Equinix to offer direct access to Alibaba’s cloud service

Equinix will offer direct links to Alibaba’s cloud

Equinix has signed an agreement with Alibaba that will see the American datacentre incumbent provide direct access to the Chinese ecommerce firm’s cloud computing service.

The deal will see Equinix add Aliyun, Alibaba’s cloud computing division, to its growing roster of cloud services integrated with its cloud interconnection service, and offer direct access to Aliyun’s IaaS and SaaS offerings in both Asia and North America.

Equinix said it is aiming the offering primarily at large multinationals looking to expand their infrastructure into Asia.

“Our multi-national enterprise customers are increasingly asking for access to the Aliyun cloud platform, as they deploy cloud-based applications across Asia,” said Chris Sharp, vice president of cloud innovation, Equinix.

“By providing this access in two strategic markets, we’re empowering businesses to build secure, private clouds, without compromising network and application performance,” Sharp said.

Sicheng Yu, vice president of Aliyun said: “Aliyun is very excited about our global partnership with Equinix, who not only has a global footprint of cutting-edge datacentres, but has also brought together the most abundant cloud players and tenants in the cloud computing ecosystem on its Equinix Cloud Exchange platform. Connecting the Equinix ecosystem with our Aliyun cloud services on Cloud Exchange will provide customers with the best-of-breed choices and flexibility.”

The move will see Equinix expand its reach in Asia, a fast-growing market for cloud services, and comes just one week after Equinix announced it would bolster its European footprint with the TelecityGroup merger.

Containers ready for primetime, Rackspace CTO says

John Engates was in London for the Rackspace Solve conference

Linux containers have been around for some time but only now is the technology reaching a level of maturity enterprise cloud developers are comfortable with, explained John Engates, Rackspace’s chief technology officer.

Linux containers have been all the rage over the past year, and Engates told BCN the volume of the discussion is only likely to increase as the technology matures. But the technology is still young.

“We tried to bring support for containers to OpenStack around three or four years back,” Engates said. “But I think that containers are finally ready for cloud.”

One of the projects Engates cited to illustrate this is Project Magnum, a young sub-project within OpenStack that builds on Heat to provision Nova instances on which application containers run. Magnum adds container-native capabilities (such as support for different scheduling techniques), effectively enabling users and service providers to offer containers-as-a-service, and improves the portability of containers between different cloud platforms.

“While containers have been around for a while they’ve only recently become the darling of the enterprise cloud developers, and part of that is because there’s a growing ecosystem out there working to build the tools needed to support them,” he said.

A range of use cases for Linux containers has emerged over the years – as a transport method, as a way of quickly deploying and porting apps between different sets of infrastructure, and as a way of standing up a cloud service that offers greater billing granularity (more accurate, more efficient usage) – but the technology is still maturing and has suffered from a lack of tooling. Doing anything like complex service chaining is still challenging with existing tools, but that’s improving.

Beyond LXC, one of the earliest Linux container projects, there are now CoreOS, Docker, Mesos, Kubernetes and a whole host of container-related technologies bringing the microservices / OS-‘light’ architecture, as well as deployment scheduling and cluster management tools, to market.
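To give a flavour of how approachable that tooling has become, here is a minimal sketch using the Docker SDK for Python (the docker package, which is not referenced in the article); the image and command are purely illustrative, and it assumes a local Docker daemon is running.

```python
# Minimal sketch using the Docker SDK for Python (pip install docker).
# Assumes a local Docker daemon; the image and command are illustrative.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a short-lived container, capture its output, then clean it up.
output = client.containers.run(
    "alpine:latest",
    ["echo", "hello from a container"],
    remove=True,
)
print(output.decode().strip())
```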

“We’re certainly hearing more about how we can help support containers, so we see it as pretty important from a service perspective moving forward,” he added.

Skyscape, DeepSecure strike cloud data compliance deal

Skyscape is partnering with DeepSecure to bolster its security cred

Cloud service provider Skyscape is partnering with DeepSecure in a move the companies said would help public sector cloud users meet their compliance needs.

DeepSecure traditionally sells to the police, defence and intelligence sectors and provides secure data sharing and data management services as well as cybersecurity systems, and the partnership will see Skyscape offer its customers DeepSecure’s suite of data sharing and security services.

The move could give Skyscape, which already heavily targets the public sector, a way in with some of the more heavily regulated clients (security-wise) there.

“We’re delighted to announce our partnership with DeepSecure, a likeminded company with a significant track record when it comes to helping organisations share data securely,” said Simon Hansford, chief executive of Skyscape Cloud Services.

“DeepSecure is certainly a good cultural fit for us as a fellow UK sovereign SME that specialises in delivering secure digital services to the UK public sector.  The firm also shares our commitment to offering a consumption-based pricing model for its security services, which aligns with our own pay-as-you-go model for our full catalogue of assured cloud services,” Hansford said.

OpenStack does some soul searching, finds its core self

Bryce: ‘OpenStack will power the planet’s clouds’

The OpenStack Foundation has announced new interoperability and testing requirements, as well as enhancements to the software’s implementation of federated identity, which the Foundation’s executive director Jonathan Bryce says will take the open source cloud platform one step closer to world domination.

OpenStack’s key pitch, beyond being able to spin up scalable compute, storage and networking resources fairly quickly, is that OpenStack-based private clouds should be able to burst into public clouds or other private cloud instances if need be. That kind of capability is essential if the platform is to take on the likes of AWS, VMware and Microsoft, but implementations have so far been quite basic.

But for that kind of interoperability to happen you need three things: the ability to federate the identity of a cloud user, so permissions and workloads can port over to whatever platforms are being deployed on (and so those workloads remain secure); a definition of what vendors, service providers and customers can reliably call core OpenStack, so they can all expect a standard collection of tools, services and APIs to be found in every distribution; and a way to test the interoperability of OpenStack distributions and appliances.

To that end, the Foundation announced a new OpenStack Powered interoperability testing programme, so users can validate the interoperability of their own deployments as well as gain assurances from vendors that clouds and appliances branded as “OpenStack Powered” meet the same requirements. About 16 companies already have either certified cloud platforms or appliances available on the OpenStack Marketplace as of this week, and Bryce said there’s more to come.

The latest release of OpenStack, Kilo, also brings a number of improvements to federated identity, making it much easier to implement as well as more dynamic in terms of workload deployment. Bryce said that over 30 companies have committed to implementing federated identity (which has been available since the Icehouse release) by the end of this year – meaning the OpenStack cloud footprint is about to get a whole lot bigger.
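To illustrate what that interoperability promise looks like from the user’s side, here is a minimal sketch using the openstacksdk Python library (not mentioned in the article) to talk to two OpenStack Powered clouds through the same API; the cloud names are hypothetical entries in a local clouds.yaml, and the federated identity configuration itself lives in Keystone rather than in this client code.

```python
# Minimal sketch using openstacksdk (pip install openstacksdk).
# "private-cloud" and "public-burst" are hypothetical clouds.yaml entries;
# the same client code should work against any OpenStack Powered deployment.
import openstack

for cloud_name in ("private-cloud", "public-burst"):
    conn = openstack.connect(cloud=cloud_name)
    print(f"Servers in {cloud_name}:")
    for server in conn.compute.servers():
        print(f"  {server.name} ({server.status})")
```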

“It has been a massive effort to come to an agreement on what we need to have in these clouds, how to test it,” Bryce said. “It’s a key step towards the goal of realising an OpenStack-powered planet.”

The challenge is that, as the code gets bulkier and as groups add more services, joining all the bits and making sure they work together without one component or service breaking another becomes much more complex. That said, the move marks a significant milestone for the DefCore group, the internal committee in charge of setting base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. The group has been working for well over a year on developing a standard definition of what a core OpenStack deployment is.

Microsoft jumps into the data lake

At the company’s annual Build conference this week Microsoft unveiled, among other things, an Azure Data Lake service, which the company is pitching as a hyperscale big data repository for all kinds of data.

The data lake concept is a fairly new one, the gist of it being that data of varying types and structures is created at such a high velocity and in such large volumes that it’s prompting a necessary evolution in the applications and platforms required to handle that data.

It’s really about being able to store all that data in a volume-optimised (and cost-efficient) way that maintains the integrity of that information when you go to shift it someplace else, whether that be an application, an analytics platform or a data warehouse.

“While the potential of the data lake can be profound, it has yet to be fully realized. Limits to storage capacity, hardware acquisition, scalability, performance and cost are all potential reasons why customers haven’t been able to implement a data lake,” explained Microsoft’s product marketing manager, Hadoop, big data and data warehousing Oliver Chiu.

The company is pitching the Azure Data Lake service as a means of running Hadoop and advanced analytics using Microsoft’s own Azure HDInsight, as well as Revolution R Enterprise and Hadoop distributions developed by Hortonworks and Cloudera.

It’s built to support “massively parallel queries” so information is discoverable in a timely fashion, and to handle high volumes of small writes, which the company said makes the service ideal for Internet of Things applications.
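As a rough illustration of that small-writes pattern, here is a sketch using the azure-datalake-store Python package (not referenced in the article) to append small IoT-style records to a Data Lake Store account; the credentials, store name and path are placeholders, and the exact SDK surface should be checked against Microsoft’s documentation.

```python
# Rough sketch using the azure-datalake-store package
# (pip install azure-datalake-store). Credentials, store name and path
# are all placeholders.
import json
from azure.datalake.store import core, lib

token = lib.auth(
    tenant_id="<tenant-id>",
    client_id="<client-id>",
    client_secret="<client-secret>",
)
adls = core.AzureDLFileSystem(token, store_name="<my-datalake-store>")

# One small JSON record per device reading -- the high-volume, small-write
# workload the article associates with Internet of Things scenarios.
reading = {"device": "sensor-001", "temp_c": 21.5}
with adls.open("/iot/readings/sensor-001/2015-05-01.json", "ab") as f:
    f.write((json.dumps(reading) + "\n").encode("utf-8"))
```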

“Microsoft has been on a journey for broad big data adoption with a suite of big data and advanced analytics solutions like Azure HDInsight, Azure Data Factory, Revolution R Enterprise and Azure Machine Learning. We are excited for what Azure Data Lake will bring to this ecosystem, and when our customers can run all of their analysis on exabytes of data,” Chiu explained.

Pivotal is also among a handful of vendors seriously bought into the concept of data lakes. However, although Chiu alluded to cost and performance issues associated with the data lake approach, many enterprises aren’t yet at a stage where the variety, velocity and volume of data their systems ingest are prompting a conceptual change in how that data is perceived, stored or curated; in a nutshell, many enterprises are still too siloed – not least in how they treat data.

AWS a $5bn business, Bezos claims, as Amazon sheds light on cloud revenue

Amazon publicly shed light on AWS revenues for the first time

Amazon reported first quarter 2015 sales revenues of $22.7bn, an increase of 15 per cent year on year from $19.7bn, and quarterly cloud revenues of $1.57bn. This is the first time the e-commerce giant has publicly disclosed AWS revenues.

North America saw the bulk of Amazon’s sales growth, with revenue swelling 24 per cent to $13.4bn and operating income increasing 79 per cent to $517m. Outside North America, revenues actually decreased 2 per cent to $7.7bn (excluding the $1.3 billion year-over-year unfavourable foreign exchange impact, revenue growth was 14 per cent).

The company reported that AWS revenue grew close to 50 per cent to $1.57bn in Q1 2015, with operating income increasing 8 per cent to $265m and an operating margin of 16.9 per cent.

“Amazon Web Services is a $5 billion business and still growing fast — in fact it’s accelerating,” said Jeff Bezos, founder and chief executive of Amazon.

“Born a decade ago, AWS is a good example of how we approach ideas and risk-taking at Amazon. We strive to focus relentlessly on the customer, innovate rapidly, and drive operational excellence. We manage by two seemingly contradictory traits: impatience to deliver faster and a willingness to think long term.”

Brian Olsavsky, vice president and chief financial officer of global consumer business, said that excluding the favourable impact from foreign exchange, AWS segment operating income decreased 13 per cent. But speaking to journalists and analysts this week Olsavsky reiterated that the company was very pleased with the results, and that it would “continue deploying more capital there” as it expands.

AWS has dropped its prices nearly 50 times since it began selling cloud services nearly a decade ago, and this past quarter alone has seen the firm continue to add new services to the ecosystem – though intriguingly, Olsavsky refused to directly answer questions on the sustainability of the cloud margins moving forward. This quarter the company announced unlimited cloud storage plans, a marketplace for virtualised desktop apps, a machine learning service and a container service for EC2.

AWS bolsters GPU-accelerated instances

AWS is updating its GPU-accelerated cloud instances

Amazon has updated its family of GPU-accelerated instances (G2) in a move that will see AWS offer up to four times more GPU power at the top end.

At the tail end of 2013 AWS teamed up with graphics processing specialist Nvidia to launch the Amazon EC2 G2 instance, a GPU-accelerated instance specifically designed for graphically intensive cloud-based services.

Each Nvidia Grid GPU offers up to 1,536 parallel processing cores and gives software-as-a-service developers access to higher-end graphics capabilities, including fully-supported 3D visualisation for games and professional services.

“The GPU-powered G2 instance family is home to molecular modeling,  rendering, machine learning, game streaming, and transcoding jobs that require massive amounts of parallel processing power. The Nvidia Grid GPU includes dedicated, hardware-accelerated video encoding; it generates an H.264 video stream that can be displayed on any client device that has a compatible video codec,” explained Jeff Barr, chief evangelist at AWS.

“This new instance size was designed to meet the needs of customers who are building and running high-performance CUDA, OpenCL, DirectX, and OpenGL applications.”

The new g2.8xlarge instance, available in US East (Northern Virginia), US West (Northern California), US West (Oregon), Europe (Ireland), Asia Pacific (Singapore), and Asia Pacific (Tokyo), offers four times the GPU power of standard G2 instances, including: 4 GB of video memory and the ability to encode either four real-time HD video streams at 1080p or eight real-time HD video streams at 720p; 32 vCPUs; 60 GiB of memory; and 240 GB (2 x 120) of SSD storage.
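For illustration, here is a minimal sketch using boto3, the AWS SDK for Python (not referenced in the article), to launch a single g2.8xlarge instance in US East; the AMI ID and key pair name are placeholders rather than values from the announcement.

```python
# Minimal sketch using boto3 (pip install boto3); assumes AWS credentials
# are configured locally. The AMI ID and key pair name are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")  # US East (Northern Virginia)

instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",      # placeholder: a GPU-ready AMI with NVIDIA drivers installed
    InstanceType="g2.8xlarge",   # the new top-end, four-GPU instance size
    KeyName="my-keypair",        # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print("Launched:", instances[0].id)
```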

GPU virtualisation is still fairly early on in its development but the technology does open up opportunities for the cloudification of a number of niche applications in pharma and engineering, which have a blend of computational and graphical requirements that have so far been fairly difficult to replicate in the cloud (though bandwidth constraints could still create performance limitations).