Category Archives: Datacentre

Equinix announces sixth London datacentre

Equinix has announced five new datacentres globally in the past month

Datacentre giant Equinix has announced the launch of its sixth London-based International Business Exchange (IBX) datacentre.

Equinix said the datacentre, LD6, will offer customers the ability to leverage its cloud interconnection service – which lets users create private network links to Microsoft Azure, Amazon Web Services (AWS) and Google Cloud services among others.

The company said the $79m facility, which is located in Slough, is extremely energy efficient (LEED gold-accredited), and utilises mass air cooling technology with indirect heat exchange and 100 per cent natural ventilation.

It measures 236,000 square feet (8,000 square metres) and has capacity for 1,385 cabinets, with the ability to add another 1,385 cabinets in phase two of the facility’s development. Once phase two is complete, the Equinix London Slough campus will provide more than 388,000 square feet (36,000 square metres) of colocation space interconnected by more than a thousand dark fibre links.

“LD6 is one of the most technically advanced datacentres in the UK. It has been designed to ensure that we can continue to provide state-of-the-art colocation for our current and future customers,” said Russell Poole, managing director, Equinix UK. “This latest addition to our thriving London campus sets new standards in efficiency and sustainability.”

The facility is among five new datacentres announced last month. Equinix announced plans in March to roll out new state-of-the-art datacentres in New York, Singapore, Melbourne and Toronto.

DigitalOcean drops into Frankfurt

DigitalOcean is among a number of US-based incumbents moving into Germany

In a bid to tap further into the European market, DigitalOcean has expanded its presence in Germany with a new datacentre in Frankfurt.

The dev-focused cloud provider already has a presence in Amsterdam and last year partnered with Equinix to make its cloud platform available in one of the company’s London-based Tier III datacentres.

“We’re here to give our full support to developers throughout the world by offering a simple, ideal cloud solution and infrastructure experience for hosting applications,” said DigitalOcean co-founder and chief exec Ben Uretsky. “Innovative companies in Germany deserve the best tools possible in order to continue to grow and succeed.”

The company also said it wanted to appeal to local companies with strong data residency requirements, a common theme among cloud providers throwing their weight into the German market.

Germany – particularly Berlin – has a big startup scene, but putting the datacentre in Frankfurt means it can benefit from being close to a number of large fibre interconnections. Other US-based incumbents to move into Frankfurt over the past few months include IBM (which also partners with Equinix) and AWS.

The facility is DigitalOcean’s third datacentre in Europe and tenth globally.

The cloud beyond x86: How old architectures are making a comeback

x86 is undeniably the king of datacentre compute architecture, but there’s good reason to believe old architectures are making a comeback

When you ask IT pros to think of cloud, the first thing that often comes to mind is web-delivered, meter-billed virtualised compute (and increasingly storage and networking) environments which, today, tend to imply an x86-centric stack built to serve almost any workload. But anyone watching this space closely will see x86 isn’t the only kid on the block, with SPARC, ARM and Power all vying for a large chunk of the scale-out market as enterprises seek to squeeze more power out of their cloud hardware. What will the cloud stack of tomorrow look like?

Despite the dominance of x86 in the datacentre, it is difficult to ignore the noise vendors have been making over the past couple of years around non-x86 architectures like ARM (ARM), SPARC (Oracle) and Power (IBM). It’s easy to understand why: simply put, the cloud datacentre market is now the dominant server market, with enterprises looking to consume more software as a service and outsource more of their datacentre operations than ever before.

Sameh Boujelbene, director of server research at Dell’Oro Group, says over 50 per cent of all servers will ship to cloud service providers by 2018, and the size of the market (over $40bn annually by some estimates) creates a massive opportunity for new – and in some cases old – non-x86 vendors aiming to nab a large chunk of it.

The nature and number of workloads is also changing. The number of connected devices sending or requesting data that needs to be stored or analysed, along with the number and nature of workloads processed by datacentres, will more than double in the next five years, Boujelbene explains. This increase in connected devices and workloads will drive the need for more computing capacity and more physical servers, while driving exploration of more performant architectures to support this growing workload heterogeneity.
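As that heterogeneity spreads, deployment tooling increasingly has to branch on the CPU architecture a workload lands on. As a rough illustration only (the mapping below is a simplified sketch of the architecture families discussed here, not something drawn from any vendor's tooling), a script might normalise the machine string the operating system reports:

```python
import platform

# Illustrative mapping of raw machine strings to the broad
# architecture families discussed in the article; not exhaustive.
ARCH_FAMILIES = {
    "x86_64": "x86",
    "amd64": "x86",
    "aarch64": "ARM",
    "arm64": "ARM",
    "ppc64": "Power",
    "ppc64le": "Power",
    "sparc64": "SPARC",
}

def arch_family(machine=None):
    """Return the broad architecture family for a machine string.

    If no string is given, fall back to the local machine as
    reported by platform.machine().
    """
    machine = machine or platform.machine()
    return ARCH_FAMILIES.get(machine.lower(), "unknown")

print(arch_family("ppc64le"))  # -> Power
```

A deployment pipeline could use such a check to pick an architecture-specific package repository or container image tag.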

This article appeared in the March/April edition of BCN Magazine.

But it’s also important to recognise how migration to the cloud is impacting the choice of server form factors, choice of server brand and the choice of CPU architecture from the datacentre or cloud service provider perspective. Needless to say, cloud service providers have to optimise their datacentre efficiency at every turn.

“Generally, they are moving from general purpose servers to workload-optimised servers,” Boujelbene explains. “We see cloud accounts going directly to white box servers shipped to them by ODMs, not only to cut costs but also because ODMs allow customisation; traditional server OEMs such as Dell, HP and IBM simply didn’t want to provide customised servers a few years ago.”

Boujelbene sees big opportunities for alternatives to x86 such as ARM, SPARC or Power because they can provide better performance for specific types of workloads, and Intel is reacting to that trend by making customised CPUs available to some large cloud accounts. The company has about 35 customised CPU SKUs, and growing, and late last year won a pretty large contract to supply Amazon Web Services, the largest and most established of the public cloud providers, with custom Intel Xeon E5-2666 v3 (Haswell) processors.

Others in the ecosystem – some long expected to join the fray, others less so – are being enticed to get involved. Mobile chip incumbent Qualcomm announced plans in November last year to enter the server chip market with its own ARM-based offerings at some point over the next two years; the company believes the market represents a $15bn opportunity over the next five years.

And about a month before the Qualcomm announcement, HP unveiled what it called the first “enterprise-grade ARM-based server” in its Moonshot range – the first to support ARM’s v8 architecture. Around the same time, Dell’s chief executive officer and founder Michael Dell intimated to a room of journalists that his company, a long-time Intel partner, would not be opposed to putting ARM chips in its servers.

SPARC and Power are both very compelling options when it comes to high I/O data analytics – where they are notably more performant than commodity x86. ARM’s key selling points have more to do with the ability to effectively balance licensing, design and manufacturing flexibility with power efficiency and physical density, though the company’s director of server programmes Jeff Underhill says other optimisations – being driven by cloud – are making their way to the CPU level.

“Cloud infrastructure by its very nature is network and storage-centric. So it is essential it can handle large numbers of simultaneous interactions efficiently, optimising for aggregate throughput rather than just focusing on the outright performance of a single server. Solutions with integrated high performance networking, as well as storage and domain-specific accelerators augmenting their general processor capabilities, offer significantly improved throughput versus traditional general purpose approaches,” Underhill says.

Underhill explains that servers are actually becoming more specialised, though there is and will continue to be a need for general-purpose servers and architectures to support them.

“The really interesting thing to look at is the area where networking and server technologies are converging towards a more scalable, flexible and dynamic ‘infrastructure’. Servers are becoming more specialised with advanced networking and storage capabilities mixed with workload specific accelerators,” he says, adding that this is pushing consolidation of an increasing number of systems (particularly networking) onto the SoC.

Hedging Their Bets

Large cloud providers – those with enough resource to write their own software and stand up their own datacentres – are the primary candidates for making the architectural shift in the scale-out market, given the cost-prohibitive nature of making such a move (and the millions of dollars in potential cost savings if it can be pulled off well).

It’s no coincidence Google, Facebook and Amazon have, with varying degrees of openness, flirted with the idea of shifting their datacentres onto ARM-based or other chips. Google for instance is one of several service providers steering the direction of the OpenPower Foundation (Rackspace is another), a consortium set up by IBM in December 2013 to foster cross-industry open source development of the Power architecture.

Power, which for IBM is the core architecture underlying its high-end servers and mainframes as well as its more recently introduced cognitive computing-as-a-service platform Watson, is being pitched by the more than 80 consortium members as the cloud and big data architecture of choice. Brad McCredie, IBM fellow and vice president of IBM Power Systems Development and president of the OpenPower Foundation, says there is a huge opportunity for the Power architecture to succeed because of barriers in how technology cost and performance at the CPU level is scaling.

“If you go back five or six years, when the base transistor was scaling so well and so fast, all you had to do was go to the next-gen processor to get those cost-to-performance takedowns you were looking for. The best thing you could do, all things considered or remaining equal, was hop onto the next-gen processor. Now, service providers are not getting those cost take-down curves they were hoping for with cloud, and a lot of cloud services are run on massive amounts of older technology platforms.”

The result is that technology providers have to pull on more and more levers – like adding GPU acceleration or enabling GPU virtualisation, or enabling FPGA attachment – to get cost-to-performance to come down; that is driving much of the heterogeneity in the cloud – different types of heterogeneity, not just at the CPU level.

There’s also a classic procurement-related incentive for heterogeneity among providers: a diversity of suppliers spreads risk and increases competitiveness in the cloud, which is another good thing for cost-to-performance too.

While McCredie says it’s still early days for Power in the cloud, and that Power is well suited to a particular set of data-centric workloads, he acknowledges it’s very hard to stay small and niche on the one hand and continue to drive down cost-to-performance on the other. The Foundation is looking to drive at least 20 to 30 per cent of the scale-out market, which – considering x86 has about 95 per cent of that market locked up – is fairly ambitious.

“We have our market share in our core business, which for IBM is in the enterprise, but we also want share in the scale-out market. To do that you have to activate the open ecosystem,” he says, alluding to the IBM-led consortium.

It’s clear the increasingly prevalent open source mantra in the tech sector is spreading to pretty much every level of the cloud stack. For instance Rackspace, which participates in both OpenStack and the Open Compute Project – open source cloud software and hardware projects respectively – is actively working to port OpenStack over to the Power architecture, with the goal of having OpenStack running on OpenPower / Open Compute Project hardware in production sometime in the next couple of years. It’s that kind of open ecosystem that McCredie says is essential in cloud today and, critically, such openness need not come at the cost of loose integration or a consequent performance tax.

SPARC, which has its roots in financial services, retail and manufacturing, is interesting in part because it remains a fairly closed ecosystem and largely ends up in machines finely tuned to very specific database workloads. And despite incurring losses for several years following its acquisition of Sun Microsystems, the architecture’s progenitor, Oracle’s hardware business mostly bucked the trend experienced by most high-end server vendors throughout 2014 and continues to do so.

The company’s 2015 Q2 saw its hardware systems grow 4 per cent year on year to roughly $717m, with the SPARC-based Exalogic and SuperCluster systems achieving double-digit growth.

“We’ve actually seen a lot of customers that have gone from SPARC to x86 Linux now very strongly come back to SPARC Solaris, in part because the technology has the audit and compliance features built into the architecture, they can do one-click reporting, and because the virtualisation overhead with Solaris on SPARC is much lower when compared with other virtualisation platforms,” says Paul Flannery, senior director of EMEA product management in Oracle’s server group.

Flannery says openness and heterogeneity don’t necessarily lead to the most performant outcome. “The complexity of having multiple vendors in your stack and then having to worry about the patching, revision labels of each of those platforms is challenging. And in terms of integrating those technologies – the fact we have all of the databases and all of the middleware and the apps – to be able to look at that whole environment.”

Robert Jenkins, chief executive officer of CloudSigma, a cloud service provider that recently worked with Oracle to launch one of the first SPARC-as-a-Service platforms, says that ultimately computing is still very heterogeneous.

“The reality is a lot of people don’t get the quality and performance that they need from public cloud because they’re jammed through this very rigid framework, and computing is very heterogeneous – which hasn’t changed with cloud,” he says. “You can deploy simply, but inefficiently, and the reality is that’s not what most people want. As a result we’ve made efforts to go beyond x86.”

He says the company is currently hashing out a deal with a very large bank that wants to use the latest SPARC architecture as a cloud service – so without having to shell out half a million dollars per box, which is roughly what Oracle charges, or migrate off the architecture altogether, which is costly and risky. Besides capex, SPARC is well suited to be offered as a service because the kinds of workloads that run on the architecture tend to be more variable or run in batches.

“The enterprise and corporate world is still focused on SPARC and other older specialised architectures, mainframes for instance, but it’s managing that heterogeneous environment that can be difficult. Infrastructure as a service is still fairly immature, and combined with the fact that companies using older architectures like SPARC tend not to be first movers, you end up in this situation where there’s a gap in the tooling necessary to make resource and service management easier.”

Does It Stack Up For Enterprises?

Datacentre modernisation during the 90s entailed, among other things, a transition away from expensive mainframes running Unix workloads towards lower-cost commodity x86 machines running Linux or Microsoft-based software packages on bare metal. For many large enterprises, much of the 2000s then focused on virtualising the underlying hardware platforms in a bid to make them more elastic and more performant. Those hardware platforms were overwhelmingly x86-based.

But many of those same enterprises refused to go “all-in” on virtualisation or x86, maintaining multiple compute architectures to support niche workloads that ultimately weren’t as performant on commodity kit; financial services and the aviation industry are great examples of sectors where one can still find plenty of workloads running on 40-50 year old mainframe technology.

Andrew Butler, research vice president focusing on servers and storage at Gartner and an IT industry veteran, says the same trend is showing up in the cloud sector, along with, to some extent, the same challenges.

“What is interesting is that you see a lot of enterprises claiming to move wholesale into the cloud, which speaks to this drive towards commoditisation in hardware – x86 in other words – as well as services, features and decision-making more generally. But that’s definitely not to say there isn’t room for SPARC, Power, mainframes or ARM in the datacentre, despite most of those – if you look at the numbers – appearing to have had their day,” Butler says.

“At the end of the day, in order to be able to run the workloads that we can relate to, delivering a given amount of service level quality is the overriding priority – which in the modern datacentre primarily centres on uptime and reliability. But while many enterprises were driven towards embracing what at the time was this newer architecture because of flexibility or cost, performance in many cases still reigns supreme, and there are many pursuing the cloud-enablement of legacy workloads, wrapping some kind of cloud portal access layer around a mainframe application for instance.”

“The challenge then becomes maintaining this bi-modal framework of IT, and dealing with all of the technology and cultural challenges that come along with all of this; in other words, dealing with the implications of bringing things like mainframes into direct contact with things like the software defined datacentre,” he explains.

A senior datacentre architect working at a large American airline who insists on anonymity says the infrastructure management, technology and cultural challenges alluded to above are very real. But they can be overcome, particularly because some of these legacy vendors are trying to foster more open exposure of their APIs for management interfaces (easing the management and tech challenge), and because ops management teams do get refreshed from time to time.

What seems to have a large impact is the need to ensure the architectures don’t become too complex, which can occur when old legacy code takes priority simply because the initial investment was so great. This also makes it more challenging for newer generations of datacentre specialists coming into the fold.

“IT in our sector is changing dramatically but you’d be surprised how much of it still runs on mainframes,” he says. “There’s a common attitude towards tech – and reasonably so – in our industry that ‘if it ain’t broke don’t fix it’, but it can skew your teams towards feeling the need to maintain huge legacy code investments just because.”

As Butler alluded to earlier, this bi-modality isn’t particularly new, though there is a sense among some that the gap between all of the platforms and architectures is growing when it comes to cloud, owing to the expectations people have around resilience and uptime but also ease of management, power efficiency, cost, and so forth. He says that with IBM’s attempts to gain mindshare around Power (in addition to developing more cloudy mainframes), ARM’s endeavour to do much the same around its processor architecture and Oracle’s cloud-based SPARC aspirations, things are likely to remain volatile for vendors, service providers and IT’ers for the foreseeable future.

“It’s an incredibly volatile period we’re entering, where this volatility will likely last seven years, possibly up to a decade, before it settles down – if it settles down,” Butler concluded.

VMware, Telstra bring virtualisation giant’s public cloud to Australia

Telstra and VMware are bringing the virtualisation incumbent’s public cloud service to Australia

VMware announced it is partnering with Telstra to bring its vCloud Air service to Australia.

VMware said the initial VMware vCloud Air deployment in Australia is hosted out of an unspecified Telstra datacentre.

“We continue to see growing client adoption and interest as we build out VMware vCloud Air with our newest service location in Australia,” said Bill Fathers, executive vice president and general manager, Cloud Services Business Unit, VMware.

“VMware’s new Australia service location enables local IT teams, developers and lines of business to create and build their hybrid cloud environments on an agile and resilient IT platform that supports rapid innovation and business transformation,” Fathers said.

Last July VMware made a massive push into the Asia Pacific region, inking deals with SoftBank in Japan and China Telecom in China to bring its public cloud service to the area. But the company said it was adding an Australian location in a bid to appeal to users that have strict data residency requirements.

Duncan Bennet, vice president and managing director, VMware A/NZ, added: “Australian businesses will have the ability to seamlessly extend applications into the cloud without any additional configuration, and will have peace of mind, knowing this IT infrastructure will provide a level of reliability and business continuity comparable to in-house IT. It means businesses can quickly respond to changing business conditions, and scale IT up and down as required without disruption to the overall business.”

Telstra has over the past couple of years inked a number of partnerships with large enterprise IT incumbents to strengthen its position in the cloud segment. It was one of the first companies to sign up to Cisco’s Intercloud programme last year, and earlier this month announced a partnership with IBM that will see the Australian telco offer direct network access to SoftLayer cloud infrastructure to local customers.

Alibaba throws its weight behind ARM architecture standards

Alibaba is joining the Linaro Enterprise Group, an organisation which aims to eliminate software fragmentation within ARM-based environments

Chinese e-commerce and cloud giant Alibaba announced it has joined the Linaro Enterprise Group (LEG), a group of over 200 engineers working on consolidating and optimising open source software for the ARM architecture.

Linaro runs a number of different ARM-based initiatives aimed at cultivating software standards for ARM chips for networking, mobile platforms, servers and the connected home. It mainly targets upstream development but also aims to coordinate work that helps reduce “costly low level fragmentation.”

More recently the organisation launched a working group focused specifically on developing software standards for ARMv8-A 64-bit silicon, an architecture a number of server vendors and ODMs have started adopting in their portfolios in a bid to test the ARM-based cloud server market.

Alibaba, which operates six cloud datacentres – mostly in China – and recently expanded to the US, said it will collaborate with a range of companies within LEG to optimise the ARMv8-A software platforms.

“Alibaba Group’s infrastructure carries the world’s largest e-commerce ecosystem, in addition to China’s leading cloud services,” said Shuanlin Liu, chief architect of Alibaba Infrastructure Service.

“We need the best technical solutions as we step into the DT (data technology) era. Hence, we’re investing heavily in the innovation of a wide range of technologies, including the ARM architecture. We will continue to work closely with partners to accelerate the development and growth of the ecosystem,” Liu said.

Alibaba said the move may help it deliver cloud services that have been workload-optimised right down to the chip, and help lower TCO; lower energy usage and higher density are two leading characteristics driving interest in ARM for cloud datacentres. But due in part to x86’s dominance in the datacentre there is a conspicuous lack of ARM-based software standards and workloads – a gap LEG was set up to address.

“As one of the world’s largest cloud operators, Alibaba is continually pushing technology boundaries to efficiently deploy new services at a massive scale,” said Lakshmi Mandyam, director, server systems and ecosystems, ARM. “Their collaboration with the ARM ecosystem will accelerate and expand open source software choices for companies wishing to deploy ARMv8-A based servers. We welcome Alibaba’s participation in Linaro and the new dimension it will bring to an already vibrant community.”

The past couple of years have seen a number of large cloud service providers flirt with the prospect of switching to the ARM architecture within their datacentres, most notably Amazon. The latest move signals Alibaba is interested in moving in that direction, or at least wants to signal to vendors that it’s willing to do so, but it may be a while before we see the cloud giant roll out ARM-based servers within its datacentres.

UK MoD launches dedicated private cloud for internal apps

The UK MoD is using a hosted private cloud for internal shared services apps

The UK’s Ministry of Defence (MoD) Information Systems and Services (ISS) has deployed a private cloud based in CGI’s South Wales datacentre, which is being used to host internal applications for the public sector authority.

The ISS said it received Approval to Operate for the new Foundation Application Hosting Environment (FAHE), which is hosted as a private cloud instance in CGI’s facilities, and that the first applications have successfully transitioned onto the new platform.

The hosting environment was procured through the G-Cloud framework, the UK government’s cloud-centric procurement framework, and the contract will run for at least two years.

“FAHE provides the foundation of our Applications Services approach and a future-proofed platform for secure application hosting. Our vision is that ISS will be the Defence provider of choice for applications development, hosting, and management,” said Keith Jefferies, ISS Programmes, EMPORIUM deputy head, UK Ministry of Defence.

“FAHE is the first delivery contract under the broader banner of the Applications Programme and we have selected CGI on their ability to deliver a secure environment coupled with a flexible commercial model that allows us to rapidly up and down-scale in line with future demand,” Jefferies said.

Steve Smart, UK vice president of space, defence, national and cyber security at CGI, said: “MOD ISS is taking an important step towards delivering the Government’s vision of using flexible cloud services. The CGI platform is compliant with Defence and pan-Government ICT strategies and architectures. It will provide multi-discipline services from the most appropriate source with the agility and cost of industry best practice.”

The move comes just a few months after the MoD contracted with Ark to design a new state-of-the-art datacentre in Corsham, Wiltshire, a move that will allow the department to decommission its Bath facility and save on energy and operations costs.

Fujitsu partners with Equinix on Singapore cloud datacentre

Fujitsu has opened its third cloud datacentre in Singapore this week

Fujitsu has set up another datacentre in Singapore this week amidst what it sees as increasing demand for cloud services in Singapore and neighbouring countries in the Asia-Pacific region.

The datacentre, hosted in Equinix’s western Singapore facility, will host Fujitsu’s portfolio of cloud services and offer a number of new connectivity features “currently under development” that would allow enterprises to federate with other cloud platforms.

The recently announced datacentre is Fujitsu’s third in Singapore, and the company already operates over 100 worldwide; its cloud services are hosted from six datacentres globally.

The company said it chose to add another datacentre in Singapore because of its strategic location and attractiveness to large multinational firms.

“In recent years, companies increasingly are embracing cloud services as a platform to support the accelerating pace of business in Asia. In particular, because of its low level of natural disaster related risk and its position as an international network hub with reliable broadband network lines, Singapore is often chosen as the location for integrated systems operations by many companies that are pursuing multinational business expansion,” the company said in a statement.

Fujitsu is the latest cloud vendor to view Singapore as a relatively untapped market for cloud services. This week CenturyLink, which recently expanded its managed services presence in China, added public cloud nodes to one of its Singapore datacentres.

Apart from locally established multinationals and the booming financial services sector, the Singapore Government has also signalled it is looking to invest more both in using cloud services and in growing usage of cloud platforms in the region.

According to Parallels, local SMBs are also hopping onto cloud platforms at a reasonable pace. The firm projects the SMB cloud services market in Singapore will hit $916m in 2017, a three-year CAGR of 21 per cent.
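Working backwards from those figures gives a quick sanity check using the standard compound annual growth rate formula (the implied 2014 base below is derived arithmetic, not a figure Parallels published): a market hitting $916m in 2017 after three years of 21 per cent compound growth implies a base of roughly $517m in 2014.

```python
# Back out the implied 2014 market size from the 2017 projection
# using the compound-growth formula: future = base * (1 + rate) ** years
projection_2017 = 916.0   # $m, from the article
cagr = 0.21               # 21 per cent, from the article
years = 3

implied_2014_base = projection_2017 / (1 + cagr) ** years
print(round(implied_2014_base))  # -> 517 ($m, approximately)
```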

OpenPower members reveal open source cloud tech mashups

OpenPower members have been busy creating open source server specs based on the Power8 architecture

OpenPower members have been busy creating open source server specs based on the Power8 architecture

OpenPower Foundation members pulled the curtain back on a number of open source cloud datacentre technologies including the first commercially available OpenPower-based server, and the first open server spec that combines OpenStack, Open Compute and OpenPower architectures.

Members of the open source hardware community, which IBM – the community’s founding organisation – said now numbers over 110 organisations, revealed a number of joint hardware initiatives falling under the OpenPower umbrella.

The Foundation announced the first OpenPower-based servers, developed by Taiwanese ODM Tyan (TYAN TN71-BP012), a variant of those IBM recently said it would add to its SoftLayer datacentres. The servers will be commercially available in the second half of 2015.

IBM and Wistron also revealed an OpenPower-based server using GPU and networking technology from Nvidia and Mellanox, respectively, which is being aimed at high performance compute workloads.

The foundation also announced the first server spec and motherboard mock-up combining the design concepts of the Facebook-led open source hardware project, Open Compute, with OpenStack and OpenPower technologies, an initiative Rackspace – among other service providers with a vested interest in all three open source projects – was keen to bring to fruition.

“Collaborating across our open development communities will accelerate and broaden the raw potential of a fully open datacentre. We have a running start together and look forward to technical collaboration and events to engage our broader community,” said Corey Bell, chief executive officer of the Open Compute Project.

In an interview with BCN earlier this month, Brad McCredie, IBM fellow, vice president of IBM Power Systems Development and president of the OpenPower Foundation, said there is a big opportunity for Power to succeed in the market, and that IBM hopes to claim up to 30 per cent of the scale-out market in a matter of years.

Ken King, general manager of OpenPower Alliances at IBM, said: “OpenPower started off as an idea that immediately resonated with our technology partners to strengthen their scale-out implementations like analytics. Now, OpenPower is fundamental to every conversation IBM is having with clients — from HPC to scale-out computing to cloud service providers. Choice, freedom and better performance are strategic imperatives guiding customers around the globe, and OpenPOWER is leading the way.”

CenturyLink expands public cloud in APAC

CenturyLink is expanding its public cloud platform in Singapore

American telco CenturyLink has expanded its public cloud platform to Singapore in a bid to cater to growing regional demand for cloud services.

CenturyLink, which recently expanded its managed services presence in China and its private cloud services in Europe and the UK, is adding public cloud nodes to one of its Singapore datacentres.

“The launch of a CenturyLink Cloud node in Singapore further enhances our position as a leading managed hybrid IT provider for businesses with operations in the Asia-Pacific region,” said Gery Messer, CenturyLink managing director, Asia Pacific.

“We continue to invest in the high-growth Asia-Pacific region to meet increasing customer demand,” Messer said.

The company said it wants to cater to what it sees as growing demand for cloud services in the region, citing Frost & Sullivan figures showing the Asia-Pacific region spent almost $6.6bn on public cloud services last year. The firm predicts annual cloud services spending in the region will exceed $20bn by 2018.

The move also comes as the Singapore Government looks to invest more in cloud services and to grow usage of cloud platforms in the region.

Last year the Infocomm Development Authority of Singapore (IDA) said it was working with Amazon Web Services to trial a data-as-a-service project the organisations believe will help increase the visibility of privately held data sets.

The agency also signed a Memorandum of Intent with AWS under which the cloud provider will offer US$3,000 in usage credits to the first 25 companies to sign up to the pilot, to go towards the cost of hosting their dataset registries or datasets.

The IDA has also announced similar partnerships with Pivotal and Red Hat in the past.

DataCentred adds ARM 64-bit to OpenStack cloud

DataCentred is adding ARM-based OpenStack services to its public cloud portfolio

Manchester-based cloud services provider DataCentred has added ARM AArch64-based servers to its OpenStack-based public cloud platform, a product of its recently announced partnership with Codethink. The company’s head of cloud services told BCN the move responds to customer demand for running ARM-based workloads in the cloud.

As part of the move, the ARM AArch64 architecture, which allows 32-bit and 64-bit processes to execute alongside one another, will be added to the company’s OpenStack-based public cloud offering; the company said it will run the platform on HP M400 ARM hardware, giving customers access to Intel and ARM architectures side by side within a single OpenStack environment.
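A rough sketch of how mixed x86/ARM scheduling is typically exposed to users of an OpenStack cloud (image and flavour names here are hypothetical, and DataCentred has not published its exact configuration): the operator tags images with an `architecture` property, and the Nova scheduler’s image-properties filtering places each instance on a host of the matching architecture.

```shell
# Hypothetical names throughout -- a sketch of mixed x86/ARM scheduling
# in an OpenStack cloud, not DataCentred's published configuration.

# Upload an AArch64 image, tagged so the scheduler knows its architecture.
openstack image create ubuntu-arm64 \
  --file ubuntu-arm64.qcow2 \
  --disk-format qcow2 \
  --property architecture=aarch64

# Boot it; with image-properties filtering enabled, Nova places the
# instance on a host whose reported architecture matches the image tag.
openstack server create demo-arm64 \
  --image ubuntu-arm64 \
  --flavor m1.small \
  --network private

# x86 workloads run in the same tenancy, from an x86_64-tagged image.
openstack server create demo-x86 \
  --image ubuntu-x86_64 \
  --flavor m1.small \
  --network private
```

The point of the design is that both architectures sit behind the same OpenStack APIs, so the choice of Intel or ARM reduces to which image a customer boots.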

DataCentred said the move will help drive down both the cost of datacentre operation and the cost of virtualised instances within a customer’s service framework.

“We are thrilled to be the first OpenStack public cloud operator to feature 64-bit ARM instances. This breakthrough is testament to the considerable skill and expertise of our OpenStack cloud development team. This is probably the first example of Moonshot AArch64 running in Europe outside of HP’s development labs, and certainly the first example of generally available Moonshot backed AArch64 instances in an OpenStack public cloud anywhere in the world,” said Mike Kelly, chief executive and founder of DataCentred.

“We know that ARM themselves are pleased to hear of this development, as a real world deployment. OpenStack is one of the big success stories for Open Source software, and is likely to be the environment through which enterprise migrates, in a vendor neutral way, to take advantage of elastic cloud compute,” Kelly added.

Matt Jarvis, head of cloud computing at DataCentred told BCN there’s currently a scarcity of ARM in the cloud.

“This deployment is driven by customer demand – we have both new customers who want to access ARM64 on-demand, and existing customers who we’ve been talking to about proof of concept ARM workloads for some time,” Jarvis said.

“There is significant interest from the worldwide community of technology companies currently working with ARM hardware to have on-demand access to development platforms, along with specific vertical market interest in ARM as part of a longer term technical strategy targeting reduction in operating cost due to power savings,” he added.

ARM-based compute remains fairly scarce in the cloud world, though it’s clear OpenStack incumbents are looking to bring the software platform to architectures beyond x86: Oracle is looking to marry SPARC and OpenStack, while IBM and Rackspace are both working towards getting the open source software platform running on OpenPower.

DataCentred said it plans to move the Moonshot-powered cloud service into production sometime later this year.