Category Archives: Infrastructure as a Service

Microsoft jumps into the data lake

At the company’s annual Build conference this week, Microsoft unveiled, among other things, an Azure Data Lake service, which it is pitching as a hyperscale big data repository for all kinds of data.

The data lake concept is a fairly new one, the gist of it being that data of varying types and structures is created at such a high velocity and in such large volumes that it’s prompting a necessary evolution in the applications and platforms required to handle that data.

It’s really about being able to store all that data in a volume-optimised (and cost-efficient) way that maintains the integrity of that information when you shift it someplace else, whether that be an application, an analytics engine or a data warehouse.

“While the potential of the data lake can be profound, it has yet to be fully realized. Limits to storage capacity, hardware acquisition, scalability, performance and cost are all potential reasons why customers haven’t been able to implement a data lake,” explained Oliver Chiu, Microsoft’s product marketing manager for Hadoop, big data and data warehousing.

The company is pitching the Azure Data Lake service as a means of running Hadoop and advanced analytics using Microsoft’s own Azure HDInsight, as well as Revolution R Enterprise and Hadoop distributions developed by Hortonworks and Cloudera.

It’s built to support “massively parallel queries” so information is discoverable in a timely fashion, and to handle high volumes of small writes, which the company said makes the service ideal for Internet of Things applications.

“Microsoft has been on a journey for broad big data adoption with a suite of big data and advanced analytics solutions like Azure HDInsight, Azure Data Factory, Revolution R Enterprise and Azure Machine Learning. We are excited for what Azure Data Lake will bring to this ecosystem, and when our customers can run all of their analysis on exabytes of data,” Chiu explained.

Pivotal is also among a handful of vendors seriously bought into the data lake concept. However, although Chiu alluded to cost and performance issues associated with the approach, many enterprises aren’t yet at a stage where the variety, velocity and volume of data their systems ingest are prompting a conceptual change in how that data is perceived, stored or curated; in a nutshell, many enterprises are still too siloed – not least in how they treat data.

AWS a $5bn business, Bezos claims, as Amazon sheds light on cloud revenue

Amazon publicly shed light on AWS revenues for the first time

Amazon reported first quarter 2015 sales revenues of $22.7bn, an increase of 15 per cent year on year from $19.7bn, and quarterly cloud revenues of $1.57bn. This is the first time the e-commerce giant has publicly disclosed AWS revenues.

North America saw the bulk of Amazon’s sales growth, with revenue swelling 24 per cent to $13.4bn and operating income increasing 79 per cent to $517m. Outside North America, revenues actually decreased 2 per cent to $7.7bn (excluding the $1.3 billion year-over-year unfavourable foreign exchange impact, revenue growth was 14 per cent).

The company was pleased to report, for the first time, that AWS revenue grew close to 50 per cent to $1.57bn in Q1 2015, with operating income increasing 8 per cent to $265m and a 16.9 per cent operating margin.

“Amazon Web Services is a $5 billion business and still growing fast — in fact it’s accelerating,” said Jeff Bezos, founder and chief executive of Amazon.

“Born a decade ago, AWS is a good example of how we approach ideas and risk-taking at Amazon. We strive to focus relentlessly on the customer, innovate rapidly, and drive operational excellence. We manage by two seemingly contradictory traits: impatience to deliver faster and a willingness to think long term.”

Brian Olsavsky, vice president and chief financial officer of the global consumer business, said that excluding the favourable impact from foreign exchange, AWS segment operating income decreased 13 per cent. But speaking to journalists and analysts this week, Olsavsky reiterated that the company was very pleased with the results, and that it would “continue deploying more capital there” as the business expands.

AWS has dropped its prices nearly 50 times since it began selling cloud services nearly a decade ago, and this past quarter alone has seen the firm continue to add new services to the ecosystem – though intriguingly, Olsavsky refused to directly answer questions on the sustainability of the cloud margins moving forward. This quarter the company announced unlimited cloud storage plans, a marketplace for virtualised desktop apps, a machine learning service and a container service for EC2.

AWS bolsters GPU-accelerated instances

AWS is updating its GPU-accelerated cloud instances

Amazon has updated its family of GPU-accelerated instances (G2) in a move that will see AWS offer up to four times more GPU power at the top end.

At the tail end of 2013 AWS teamed up with graphics processing specialist Nvidia to launch the Amazon EC2 G2 instance, a GPU-accelerated instance specifically designed for graphically intensive cloud-based services.

Each Nvidia Grid GPU offers up to 1,536 parallel processing cores, giving software-as-a-service developers access to higher-end graphics capabilities including fully supported 3D visualisation for games and professional services.

“The GPU-powered G2 instance family is home to molecular modeling, rendering, machine learning, game streaming, and transcoding jobs that require massive amounts of parallel processing power. The Nvidia Grid GPU includes dedicated, hardware-accelerated video encoding; it generates an H.264 video stream that can be displayed on any client device that has a compatible video codec,” explained Jeff Barr, chief evangelist at AWS.

“This new instance size was designed to meet the needs of customers who are building and running high-performance CUDA, OpenCL, DirectX, and OpenGL applications.”

The new g2.8xlarge instance, available in US East (Northern Virginia), US West (Northern California), US West (Oregon), Europe (Ireland), Asia Pacific (Singapore) and Asia Pacific (Tokyo), offers four times the GPU power of standard G2 instances, including: 4 GB of video memory per GPU and the ability to encode either four real-time HD video streams at 1080p or eight at 720p; 32 vCPUs; 60 GiB of memory; and 240 GB (2 x 120) of SSD storage.
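Those published numbers make the scale-up easy to sanity-check. The sketch below is purely illustrative (it is not an AWS API); the g2.2xlarge figures are included for comparison and should be checked against AWS’s own documentation:

```python
# Illustrative tally of the published G2 specs -- not an AWS API.
# The g2.2xlarge row is an assumption for comparison; verify against
# AWS's instance-type documentation.
G2_INSTANCES = {
    #              GPUs  vCPUs  RAM_GiB  SSD_GB
    "g2.2xlarge": (1,    8,     15,      60),
    "g2.8xlarge": (4,    32,    60,      240),
}

CUDA_CORES_PER_GRID_GPU = 1536  # per Nvidia Grid GPU, as quoted above

def total_cuda_cores(instance_type: str) -> int:
    """Aggregate parallel-processing cores across an instance's GPUs."""
    gpus, _vcpus, _ram, _ssd = G2_INSTANCES[instance_type]
    return gpus * CUDA_CORES_PER_GRID_GPU

for name in G2_INSTANCES:
    print(f"{name}: {total_cuda_cores(name)} CUDA cores")
```

At 1,536 cores per Grid GPU, the four-GPU g2.8xlarge tops out at 6,144 CUDA cores, against 1,536 for the single-GPU g2.2xlarge.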

GPU virtualisation is still fairly early on in its development but the technology does open up opportunities for the cloudification of a number of niche applications in pharma and engineering, which have a blend of computational and graphical requirements that have so far been fairly difficult to replicate in the cloud (though bandwidth constraints could still create performance limitations).

Google boosts cloud-based big data services

Google is bolstering its big data services

Google announced a series of big data service updates to its cloud platform this week in a bid to strengthen its growing portfolio of data services.

The company announced the beta launch of Google Cloud Dataflow, a Java-based service that lets users build, deploy and run data processing pipelines for tasks like ETL, analytics, real-time computation and process orchestration, while abstracting away infrastructure chores like cluster management.

The service is integrated with Google’s monitoring tools and the company said it’s built from the ground up for fault-tolerance.

“We’ve been tackling challenging big data problems for more than a decade and are well aware of the difference that simple yet powerful data processing tools make. We have translated our experience from MapReduce, FlumeJava, and MillWheel into a single product, Google Cloud Dataflow,” the company explained in a recent blog post.

“It’s designed to reduce operational overhead and make programming and data analysis your only job, whether you’re a data scientist, data analyst or data-centric software developer. Along with other Google Cloud Platform big data services, Cloud Dataflow embodies the kind of highly productive and fully managed services designed to use big data, the cloud way.”
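The programming model Dataflow exposes is essentially a pipeline of chained transforms handed off to a managed runner. The service itself launched with a Java SDK; the toy Python sketch below only illustrates that pipeline-of-transforms idea, with `|` chaining and a trivial in-process runner standing in for Dataflow’s managed execution:

```python
# Toy illustration of the pipeline-of-transforms model -- NOT the
# actual Dataflow SDK (which shipped as a Java SDK). `|` chains
# transforms; run() is a plain in-process loop standing in for the
# managed runner that Dataflow provides.
class Pipeline:
    def __init__(self, source):
        self.source = list(source)
        self.transforms = []

    def __or__(self, transform):
        self.transforms.append(transform)
        return self

    def run(self):
        data = self.source
        for transform in self.transforms:
            data = transform(data)
        return data

def Map(fn):
    return lambda data: [fn(x) for x in data]

def Filter(pred):
    return lambda data: [x for x in data if pred(x)]

# ETL-style example: parse, filter out bad records, normalise.
events = (
    Pipeline(["user:3", "user:-1", "user:7"])
    | Map(lambda s: int(s.split(":")[1]))
    | Filter(lambda n: n >= 0)
    | Map(lambda n: n * 2)
)

print(events.run())  # [6, 14]
```

The point of the real service is that the same declarative chain can be executed in batch or streaming mode without the author managing clusters.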

The company also updated BigQuery, Google’s cloud SQL service, adding row-level permissioning for data protection, raising the ingestion limit to 100,000 rows per second, and announcing the service’s availability in Europe.
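For a sense of what that streaming limit means in practice, here is a hypothetical client-side batching sketch; `send_batch` is a stand-in rather than a real BigQuery client call, and production code would also throttle and retry:

```python
import itertools

# Hypothetical batching sketch: split a row stream into fixed-size
# batches so that, sent at one batch per second, ingest stays at or
# under a per-second row ceiling (100,000 rows/s per the article).
# send_batch() is a stand-in, not a real BigQuery client call.
MAX_ROWS_PER_SECOND = 100_000

def batched(rows, batch_size=MAX_ROWS_PER_SECOND):
    """Yield successive lists of at most batch_size rows."""
    it = iter(rows)
    while True:
        batch = list(itertools.islice(it, batch_size))
        if not batch:
            return
        yield batch

def send_batch(batch):
    # Real code would call the streaming-insert API here and sleep
    # between calls to respect the quota; we just count rows.
    return len(batch)

rows = ({"id": i} for i in range(250_000))
sizes = [send_batch(b) for b in batched(rows)]
print(sizes)  # [100000, 100000, 50000]
```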

Google has largely focused its attention on other areas of the stack of late. The company has been driving its container scheduling and deployment initiative Kubernetes quite hard, as well as its hybrid cloud initiatives (with Mirantis and VMware). It also recently introduced a log analysis service for Google Cloud and App Engine users.

Rackspace taps former VeriSign, Red Hat exec to lead strategy, product engineering

Rackspace has appointed Scott Crenshaw to the role of senior vice president of strategy and product. Crenshaw, who formerly hails from VeriSign, will oversee the company’s corporate strategy, business development, and product and engineering portfolio.

Crenshaw most recently served as senior vice president of products at VeriSign, where he led the development of the company’s new products and services. Before that he served as vice president of strategy and chief marketing officer at Acronis, a data backup and recovery solutions provider, and spent a number of years at Red Hat, where he served as vice president and general manager of the cloud business unit.

He also holds a number of patents related to subscription service provision and monitoring.

“We are excited to have someone of Scott’s caliber and experience joining our team,” said Rackspace president and chief executive officer Taylor Rhodes.

“Throughout his career, Scott has established a strong track record of developing winning strategies, managing and growing unique product offerings and working collaboratively with colleagues and customers. Scott will work closely with our marketing, sales, support and other critical functions to drive compelling product offerings and the best customer experience in the industry.”

Crenshaw said: “I am thrilled to be a part of this talented team at such an exciting moment in the company’s history.”


The cloud beyond x86: How old architectures are making a comeback

x86 is undeniably the king of datacentre compute architecture, but there’s good reason to believe old architectures are making a comeback

When you ask IT pros to think of cloud, the first thing that often comes to mind is web-delivered, meter-billed virtualised compute (and increasingly storage and networking) environments which, today, tend to imply an x86-centric stack built to serve almost any workload. But anyone watching this space closely will see x86 isn’t the only kid on the block, with SPARC, ARM and Power all vying for a large chunk of the scale-out market as enterprises seek to squeeze more power out of their cloud hardware. What will the cloud stack of tomorrow look like?

Despite the dominance of x86 in the datacentre it is difficult to ignore the noise vendors have been making over the past couple of years around non-x86 architectures like ARM (ARM), SPARC (Oracle) and Power (IBM), but it’s easy to understand why: simply put, the cloud datacentre market is currently the dominant server market, with enterprises looking to consume more software as a service and outsource more of their datacentre operations than ever before.

Sameh Boujelbene, director of server research at Dell’Oro Group, says over 50 per cent of all servers will ship to cloud service providers by 2018, and the size of the market (over $40bn annually by some estimates) creates a massive opportunity for new, and in some cases old, non-x86 vendors aiming to nab a large chunk of it.

The nature and number of workloads is also changing. The number of connected devices sending or requesting data that needs to be stored or analysed, along with the number and nature of workloads processed by datacentres, will more than double in the next five years, Boujelbene explains. This increase in connected devices and workloads will drive the need for more computing capacity and more physical servers, while driving exploration of more performant architectures to support this growing workload heterogeneity.

This article appeared in the March/April edition of BCN Magazine.

But it’s also important to recognise how migration to the cloud is impacting the choice of server form factors, choice of server brand and the choice of CPU architecture from the datacentre or cloud service provider perspective. Needless to say, cloud service providers have to optimise their datacentre efficiency at every turn.

“Generally, they are moving from general purpose servers to workload optimised servers,” Boujelbene explains. “We see cloud accounts going directly to white box servers shipped by ODMs, not only to cut costs but also because ODMs allow customisation; traditional server OEMs such as Dell, HP and IBM simply didn’t want to provide customised servers a few years ago.”

Boujelbene sees big opportunities for alternative architectures to x86 such as ARM, SPARC or Power because they provide better performance to run specific types of workloads, and Intel is reacting to that trend by making customised CPUs available to some large cloud accounts. The company has about 35 customised CPU SKUs, and growing, and late last year won a pretty large contract to supply Amazon Web Services, the largest and most established of the public cloud providers, with custom Intel Xeon E5-2666 v3 (Haswell) processors.

Others in the ecosystem, some long expected to join the fray and others less so, are being enticed to get involved. In November last year mobile chip incumbent Qualcomm announced plans to enter the server chip market with its own ARM-based offerings at some point over the next two years; the company believes the market represents a $15bn opportunity over the next five years.

And about a month before the Qualcomm announcement HP unveiled what it called the first “enterprise-grade ARM-based server” in its Moonshot range – the first to support ARM’s v8 architecture. Around the same time, Dell’s chief executive officer and founder Michael Dell intimated to a room of journalists that his company, a long-time Intel partner, would not be opposed to putting ARM chips in its servers.

SPARC and Power are both very compelling options when it comes to high I/O data analytics – where they are notably more performant than commodity x86. ARM’s key selling points have more to do with the ability to effectively balance licensing, design and manufacturing flexibility with power efficiency and physical density, though the company’s director of server programmes Jeff Underhill says other optimisations – being driven by cloud – are making their way to the CPU level.

“Cloud infrastructure by its very nature is network and storage-centric. So it is essential it can handle large numbers of simultaneous interactions efficiently optimising for aggregate throughput rather than just focusing on the outright performance of a single server. Solutions with integrated high performance networking, as well as storage and domain specific accelerators augmenting their general processor capabilities, offer significantly improved throughput versus traditional general purpose approaches,” Underhill says.

Underhill explains that servers are actually becoming more specialised, though there is and will continue to be a need for general-purpose servers and architectures to support them.

“The really interesting thing to look at is the area where networking and server technologies are converging towards a more scalable, flexible and dynamic ‘infrastructure’. Servers are becoming more specialised with advanced networking and storage capabilities mixed with workload specific accelerators,” he says, adding that this is pushing consolidation of an increasing number of systems (particularly networking) onto the SoC.

Hedging Their Bets

Large cloud providers – those with enough resource to write their own software and stand up their own datacentres – are the primary candidates for making the architectural shift in the scale-out market because of the cost prohibitive nature of making such a move (and the millions of dollars in potential cost-savings if it can be pulled off well).

It’s no coincidence Google, Facebook and Amazon have, with varying degrees of openness, flirted with the idea of shifting their datacentres onto ARM-based or other chips. Google for instance is one of several service providers steering the direction of the OpenPower Foundation (Rackspace is another), a consortium set up by IBM in December 2013 to foster cross-industry open source development of the Power architecture.

Power, which for IBM is the core architecture underlying its high-end servers and mainframes as well as its more recently introduced cognitive computing as a service platform Watson, is being pitched by the more than 80 consortium members as the cloud and big data architecture of choice. Brad McCredie, IBM fellow, vice president of IBM Power Systems Development and president of the OpenPower Foundation, says there is a huge opportunity for the Power architecture to succeed because of barriers in how technology cost and performance at the CPU level is scaling.

“If you go back five or six years, when the base transistor was scaling so well and so fast, all you had to do was go to the next-gen processor to get those cost-to-performance takedowns you were looking for. The best thing you could do, all things considered or remaining equal, is hop onto the next-gen processor. Now, service providers are not getting those cost take-down curves they were hoping for with cloud, and a lot of cloud services are run on massive amounts of older technology platforms.”

The result is that technology providers have to pull on more and more levers – like adding GPU acceleration or enabling GPU virtualisation, or enabling FPGA attachment – to get cost-to-performance to come down; that is driving much of the heterogeneity in the cloud – different types of heterogeneity, not just at the CPU level.

There’s also a classic procurement-related incentive for heterogeneity among providers. The diversity of suppliers means spreading that risk and increasing competitiveness in the cloud, which is another good thing for cost-to-performance too.

While McCredie says that it’s still early days for Power in the cloud, and that Power is well suited to a particular set of data-centric workloads, he acknowledges it’s very hard to stay small and niche on one hand and continue to drive down cost-to-performance on the other. The Foundation is looking to drive at least 20 to 30 per cent of the scale-out market, which – considering x86 has about 95 per cent of that market locked up – is fairly ambitious.

“We have our market share in our core business, which for IBM is in the enterprise, but we also want share in the scale-out market. To do that you have to activate the open ecosystem,” he says, alluding to the IBM-led consortium.

It’s clear the increasingly prevalent open source mantra in the tech sector is spreading to pretty much every level of the cloud stack. For instance Rackspace, which participates in both OpenStack and the Open Compute Project, open source cloud software and hardware projects respectively, is actively working to port OpenStack over to the Power architecture, with the goal of having OpenStack running on OpenPower / Open Compute Project hardware in production sometime in the next couple of years. It’s that kind of open ecosystem McCredie says is essential in cloud today and, critically, such openness need not come at the cost of loose integration or a consequent performance tax.

SPARC, which has its roots in financial services, retail and manufacturing, is interesting in part because it remains a fairly closed ecosystem and largely ends up in machines finely tuned to very specific database workloads. And although Oracle’s hardware business incurred losses for several years following its acquisition of Sun Microsystems, the architecture’s progenitor, it mostly bucked the downward trend experienced by most high-end server vendors throughout 2014 and continues to do so.

The company’s 2015 Q2 saw hardware systems revenue grow 4 per cent year on year to roughly $717m, with the SPARC-based Exalogic and SuperCluster systems achieving double-digit growth.

“We’ve actually seen a lot of customers that have gone from SPARC to x86 Linux now very strongly come back to SPARC Solaris, in part because the technology has the audit and compliance features built into the architecture, they can do one click reporting, and because the virtualisation overhead with Solaris on SPARC is much lower when compared with other virtualisation platforms,” says Paul Flannery, senior director EMEA product management in Oracle’s server group.

Flannery says openness and heterogeneity don’t necessarily lead to the most performant outcome. “The complexity of having multiple vendors in your stack and then having to worry about the patching, revision labels of each of those platforms is challenging. And in terms of integrating those technologies – the fact we have all of the databases and all of the middleware and the apps – to be able to look at that whole environment.”

Robert Jenkins, chief executive officer of CloudSigma, a cloud service provider that recently worked with Oracle to launch one of the first SPARC-as-a-Service platforms, says that ultimately computing is still very heterogeneous.

“The reality is a lot of people don’t get the quality and performance that they need from public cloud because they’re jammed through this very rigid framework, and computing is very heterogeneous – which hasn’t changed with cloud,” he says. “You can deploy simply, but inefficiently, and the reality is that’s not what most people want. As a result we’ve made efforts to go beyond x86.”

He says the company is currently hashing out a deal with a very large bank that wants to use the latest SPARC architecture as a cloud service – so without having to shell out half a million dollars per box, which is roughly what Oracle charges, or migrate off the architecture altogether, which is costly and risky. Besides capex, SPARC is well suited to be offered as a service because the kinds of workloads that run on the architecture tend to be more variable or run in batches.

“The enterprise and corporate world is still focused on SPARC and other older specialised architectures, mainframes for instance, but it’s managing that heterogeneous environment that can be difficult. Infrastructure as a service is still fairly immature, and combined with the fact that companies using older architectures like SPARC tend not to be first movers, you end up in this situation where there’s a gap in the tooling necessary to make resource and service management easier.”

Does It Stack Up For Enterprises?

Whereas datacentre modernisation during the 90s entailed, among other things, a transition away from expensive mainframes running Unix workloads towards lower-cost commodity x86 machines running Linux or Microsoft-based software packages on bare metal, for many large enterprises, much of the 2000s focused on virtualising the underlying hardware platforms in a bid to make them more elastic and more performant. Those hardware platforms were overwhelmingly x86-based.

But many of those same enterprises refused to go “all-in” on virtualisation or x86, maintaining multiple compute architectures to support niche workloads that ultimately weren’t as performant on commodity kit; financial services and the aviation industry are great examples of sectors where one can still find plenty of workloads running on 40-50 year old mainframe technology.

Andrew Butler, research vice president focusing on servers and storage at Gartner and an IT industry veteran, says the same trend is showing up in the cloud sector, as well as to some extent the same challenges.

“What is interesting is that you see a lot of enterprises claiming to move wholesale into the cloud, which speaks to this drive towards commoditisation in hardware – x86 in other words – as well as services, features and decision-making more generally. But that’s definitely not to say there isn’t room for SPARC, Power, mainframes or ARM in the datacentre, despite most of those – if you look at the numbers – appearing to have had their day,” Butler says.

“At the end of the day, in order to be able to run the workloads that we can relate to, delivering a given amount of service level quality is the overriding priority – which in the modern datacentre primarily centres on uptime and reliability. But while many enterprises were driven towards embracing what at the time was this newer architecture because of flexibility or cost, performance in many cases still reigns supreme, and there are many pursuing the cloud-enablement of legacy workloads, wrapping some kind of cloud portal access layer around a mainframe application for instance.”

“The challenge then becomes maintaining this bi-modal framework of IT, and dealing with all of the technology and cultural challenges that come along with all of this; in other words, dealing with the implications of bringing things like mainframes into direct contact with things like the software defined datacentre,” he explains.

A senior datacentre architect working at a large American airline who insists on anonymity says the infrastructure management, technology and cultural challenges alluded to above are very real. But they can be overcome, particularly because some of these legacy vendors are trying to foster more open exposure of their APIs for management interfaces (easing the management and tech challenge), and because ops management teams do get refreshed from time to time.

What seems to have a large impact is the need to ensure the architectures don’t become too complex, which can occur when old legacy code takes priority simply because the initial investment was so great. This also makes it more challenging for newer generations of datacentre specialists coming into the fold.

“IT in our sector is changing dramatically but you’d be surprised how much of it still runs on mainframes,” he says. “There’s a common attitude towards tech – and reasonably so – in our industry that ‘if it ain’t broke don’t fix it’, but it can skew your teams towards feeling the need to maintain huge legacy code investments just because.”

As Butler alluded to earlier, this bi-modality isn’t particularly new, though there is a sense among some that the gap between all of the platforms and architectures is growing when it comes to cloud, due to the expectations people have around resilience and uptime but also ease of management, power efficiency, cost, and so forth. He says that with IBM’s attempts to gain mindshare around Power (in addition to developing more cloudy mainframes), ARM’s endeavour to do much the same around its processor architecture and Oracle’s cloud-based SPARC aspirations, things are likely to remain volatile for vendors, service providers and IT’ers for the foreseeable future.

“It’s an incredibly volatile period we’re entering, where this volatility will likely last between seven years and possibly up to a decade before it settles down – if it settles down,” Butler concluded.

Microsoft unveils Hyper-V containers, nano servers

Microsoft has unveiled Hyper-V containers and nano servers

Microsoft has unveiled a number of updates to Windows Server including Hyper-V containers, which are essentially Docker containers embedded in Hyper-V VMs, and nano servers, a slimmed down Windows server image.

Microsoft said Hyper-V containers are ideal for users that want virtualisation-grade isolation, but still want to run their workloads within Docker containers in a Windows ecosystem.

“Through this new first-of-its-kind offering, Hyper-V Containers will ensure code running in one container remains isolated and cannot impact the host operating system or other containers running on the same host,” explained Mike Neil, general manager for Windows Server, Microsoft in a recent blog post.

“In addition, applications developed for Windows Server Containers can be deployed as a Hyper-V Container without modification, providing greater flexibility for operators who need to choose degrees of density, agility, and isolation in a multi-platform, multi-application environment.”

Windows Server Containers will be enabled in the next release of Windows Server, which is due to be demoed in the coming weeks; the move makes good on Microsoft’s commitment to make the Windows Server ecosystem (including Azure) Docker-friendly.

The company also unveiled what it’s calling nano servers, a “purpose-built OS” that is essentially a stripped down Windows Server image optimised for cloud and container workloads. Nano servers can be deployed onto bare metal, and because Microsoft has removed a great deal of code they boot and run more quickly.

“To achieve these benefits, we removed the GUI stack, 32 bit support (WOW64), MSI and a number of default Server Core components. There is no local logon or Remote Desktop support. All management is performed remotely via WMI and PowerShell. We are also adding Windows Server Roles and Features using Features on Demand and DISM. We are improving remote manageability via PowerShell with Desired State Configuration as well as remote file transfer, remote script authoring and remote debugging. We are working on a set of new Web-based management tools to replace local inbox management tools,” the company explained.

“Because Nano Server is a refactored version of Windows Server it will be API-compatible with other versions of Windows Server within the subset of components it includes. Visual Studio is fully supported with Nano Server, including remote debugging functionality and notifications when APIs reference unsupported Nano Server components.”

The move is a sign Microsoft is keen to keep its on-premise and cloud platforms ahead of the technology curve, and is likely to appeal to .NET developers who are attracted to some of the benefits of containers while wanting to stay firmly within a Windows world in terms of the tools and code used. The company also said it is working with Chef to ensure nano servers work well with Chef’s DevOps tools.

Alibaba throws its weight behind ARM architecture standards

Alibaba is joining the Linaro Enterprise Group, an organisation which aims to eliminate software fragmentation within ARM-based environments

Chinese e-commerce and cloud giant Alibaba announced it has joined the Linaro Enterprise Group (LEG), a group of over 200 engineers working on consolidating and optimising open source software for the ARM architecture.

Linaro runs a number of different ARM-based initiatives aimed at cultivating software standards for ARM chips for networking, mobile platforms, servers and the connected home. It mainly targets upstream development but also aims to coordinate work that helps reduce “costly low level fragmentation.”

More recently the organisation launched a working group focused specifically on developing software standards for ARMv8-A 64-bit silicon, an architecture a number of server vendors and ODMs have started adopting in their portfolios in a bid to test the ARM-based cloud server market.

Alibaba, which operates six cloud datacentres – mostly in China – and recently expanded to the US, said it will collaborate with a range of companies within LEG to optimise the ARMv8-A software platforms.

“Alibaba Group’s infrastructure carries the world’s largest e-commerce ecosystem, in addition to China’s leading cloud services,” said Shuanlin Liu, chief architect of Alibaba Infrastructure Service.

“We need the best technical solutions as we step into the DT (data technology) era. Hence, we’re investing heavily in the innovation of a wide range of technologies, including the ARM architecture. We will continue to work closely with partners to accelerate the development and growth of the ecosystem,” Liu said.

Alibaba said the move may help it deliver cloud services that are workload-optimised right down to the chip, and help lower total cost of ownership; lower energy usage and higher density are two leading characteristics driving interest in ARM for cloud datacentres. But due in part to x86’s dominance in the datacentre there is a conspicuous lack of ARM-based software standards and workloads, which is the gap LEG aims to close.

“As one of the world’s largest cloud operators, Alibaba is continually pushing technology boundaries to efficiently deploy new services at a massive scale,” said Lakshmi Mandyam, director, server systems and ecosystems, ARM. “Their collaboration with the ARM ecosystem will accelerate and expand open source software choices for companies wishing to deploy ARMv8-A based servers. We welcome Alibaba’s participation in Linaro and the new dimension it will bring to an already vibrant community.”

The past couple of years have seen a number of large cloud service providers flirt with the prospect of switching to ARM architectures within their datacentres, most notably Amazon. The latest move signals Alibaba is interested in moving in that direction, or at least wants to signal to vendors that it is willing to do so, but it may be a while before we see the cloud giant roll out ARM-based servers in its datacentres.

Telstra to offer SoftLayer cloud access to Australian customers

Telstra and IBM are partnering to offer access to SoftLayer infrastructure

Telstra and IBM have announced a partnership that will see the Australian telco offer access to SoftLayer cloud infrastructure to customers in Australia.

Telstra said that with the recent opening of IBM cloud datacentres in Melbourne and Sydney, the company will be able to expand its presence in the local cloud market by offering Australian businesses more choice in locally available cloud infrastructure services.

As part of the deal the telco’s customers will have access to the full range of SoftLayer infrastructure services including bare metal servers, virtual servers, storage, security services and networking.

Erez Yarkoni, who serves as both chief information officer and executive director of cloud at Telstra, said: “Telstra customers will be able to access IBM’s hourly and monthly compute services on the SoftLayer platform, a network of virtual data centres and global points-of-presence (PoPs), all of which are increasingly important as enterprises look to run their applications on the cloud.”

“Telstra customers can connect to IBM’s services via the internet or with a simple extension of their private network. By adding the Telstra Cloud Direct Connect offering, they can also access IP VPN connectivity, giving them a smooth experience between our Next IP network and their choice of global cloud platforms,” Yarkoni said.

Mark Brewer, general manager, IBM Global Technology Services Australia and New Zealand, said: “Australian businesses have quickly realised the benefits of moving to a flexible cloud model to accommodate the rapidly changing needs of business today. IBM Cloud provides Telstra customers with unmatched choice and freedom of where to run their workloads, with proven levels of security and high performance.”

Telstra already partners with Cisco on cloud infrastructure and is a flagship member of the networking giant’s Intercloud programme, but the company hailed its partnership with IBM as a key milestone in its cloud strategy, and one that may help bolster its appeal to business customers in the region.

EU data protection authorities rubber-stamp AWS’ data processing agreement

EU data protection authorities have rubber-stamped AWS’ data protection practices

The group of European Union data protection authorities, known as the Article 29 Working Party (WP29), has approved AWS’ Data Processing Agreement, which the company said should help reassure customers that it applies high standards of security and privacy in handling their data, whether that data is moved within or outside the EU.

Amazon said its inclusion of standardised model clauses within its customer contracts, and the WP29’s signoff of its contract, should help give customers more confidence in how it treats their data.

“The security, privacy, and protection of our customers’ data is our number one priority,” said Werner Vogels, chief technology officer, Amazon.

“Providing customers a DPA that has been approved by the EU data protection authorities is another way in which we are giving them assurances that they will receive the highest levels of data protection from AWS. We have spent a lot of time building tools, like security controls and encryption, to give customers the ability to protect their infrastructure and content.”

“We will always strive to provide the highest level of data security for AWS customers in the EU and around the world,” he added.

AWS already boasts a number of highly regulated clients in the US and Europe, and has made strides to appease security- and data-sovereignty-conscious customers. The company is certified to ISO 27001, SOC 1, 2 and 3 and PCI DSS Level 1, is approved to provide its services to a number of banks in Europe, and is working with the CIA to build a massive private cloud platform.

More recently AWS added another EU region, based in Frankfurt; it already operates one in Dublin.

The rubber-stamping seems to have come as welcome news to some Members of the European Parliament, who have for the past few years been actively working on data protection reform in the region.

“The EU has the highest data protection standards in the world and it is very important that European citizens’ data is protected,” said Antanas Guoga, Member of the European Parliament.

“I believe that the Article 29 Working Party decision to approve the data processing agreement put forward by Amazon Web Services is a step in the right direction. I am pleased to see that AWS puts an emphasis on the protection of European customer data. I hope this decision will also help to drive further innovation in the cloud computing sector across the EU,” Guoga added.