Tag archive: architecture

The cloud beyond x86: How old architectures are making a comeback

x86 is undeniably the king of datacentre compute architecture, but there’s good reason to believe old architectures are making a comeback

When you ask IT pros to think of cloud, the first thing that often comes to mind is web-delivered, meter-billed virtualised compute (and increasingly storage and networking) environments, which today tend to imply an x86-centric stack built to serve almost any workload. But anyone watching this space closely will see x86 isn’t the only kid on the block, with SPARC, ARM and Power all vying for a large chunk of the scale-out market as enterprises seek to squeeze more power out of their cloud hardware. What will the cloud stack of tomorrow look like?

Despite the dominance of x86 in the datacentre, it is difficult to ignore the noise vendors have been making over the past couple of years around non-x86 architectures like ARM (ARM), SPARC (Oracle) and Power (IBM), and it’s easy to understand why: simply put, the cloud datacentre market is currently the dominant server market, with enterprises looking to consume more software as a service and outsource more of their datacentre operations than ever before.

Sameh Boujelbene, director of server research at Dell’Oro Group, says over 50 per cent of all servers will ship to cloud service providers by 2018, and the size of the market (over $40bn annually by some estimates) creates a massive opportunity for new – and in some cases old – non-x86 vendors aiming to nab a large chunk of it.

The nature and number of workloads is also changing. The number of connected devices sending or requesting data that needs to be stored or analysed, along with the number and nature of workloads processed by datacentres, will more than double in the next five years, Boujelbene explains. This increase in connected devices and workloads will drive the need for more computing capacity and more physical servers, while driving exploration of more performant architectures to support this growing workload heterogeneity.

This article appeared in the March/April edition of BCN Magazine.

But it’s also important to recognise how migration to the cloud is impacting the choice of server form factors, choice of server brand and the choice of CPU architecture from the datacentre or cloud service provider perspective. Needless to say, cloud service providers have to optimise their datacentre efficiency at every turn.

“Generally, they are moving from general purpose servers to workload-optimised servers,” Boujelbene explains. “We see cloud accounts going directly to white box servers shipped straight by ODMs, not only to cut costs but also because ODMs allow customisation; traditional server OEMs such as Dell, HP and IBM simply didn’t want to provide customised servers a few years ago.”

Boujelbene sees big opportunities for alternative architectures to x86 such as ARM, SPARC or Power because they provide better performance to run specific types of workloads, and Intel is reacting to that trend by making customised CPUs available to some large cloud accounts. The company has about 35 customised CPU SKUs, and growing, and late last year won a pretty large contract to supply Amazon Web Services, the largest and most established of the public cloud providers, with custom Intel Xeon E5-2666 v3 (Haswell) processors.

Others in the ecosystem – some long expected to join the fray, others less so – are being enticed to get involved. Mobile chip incumbent Qualcomm announced in November last year that it plans to enter the server chip market with its own ARM-based offerings within the next two years, a market the company believes represents a $15bn opportunity over the next five years.

And about a month before the Qualcomm announcement, HP unveiled what it called the first “enterprise-grade ARM-based server,” its Moonshot range – the first to support ARM’s v8 architecture. Around the same time, Dell’s chief executive officer and founder Michael Dell intimated to a room of journalists that his company, a longtime Intel partner, would not be opposed to putting ARM chips in its servers.

SPARC and Power are both very compelling options when it comes to high I/O data analytics – where they are notably more performant than commodity x86. ARM’s key selling points have more to do with the ability to effectively balance licensing, design and manufacturing flexibility with power efficiency and physical density, though the company’s director of server programmes Jeff Underhill says other optimisations – being driven by cloud – are making their way to the CPU level.

“Cloud infrastructure by its very nature is network and storage-centric. So it is essential it can handle large numbers of simultaneous interactions efficiently, optimising for aggregate throughput rather than just focusing on the outright performance of a single server. Solutions with integrated high performance networking, as well as storage and domain-specific accelerators augmenting their general processor capabilities, offer significantly improved throughput versus traditional general purpose approaches,” Underhill says.

Underhill explains that servers are actually becoming more specialised, though there is and will continue to be a need for general-purpose servers and architectures to support them.

“The really interesting thing to look at is the area where networking and server technologies are converging towards a more scalable, flexible and dynamic ‘infrastructure’. Servers are becoming more specialised with advanced networking and storage capabilities mixed with workload specific accelerators,” he says, adding that this is pushing consolidation of an increasing number of systems (particularly networking) onto the SoC.

Hedging Their Bets

Large cloud providers – those with enough resources to write their own software and stand up their own datacentres – are the primary candidates for making the architectural shift in the scale-out market, because of the cost-prohibitive nature of such a move (and the millions of dollars in potential cost savings if it can be pulled off well).

It’s no coincidence Google, Facebook and Amazon have, with varying degrees of openness, flirted with the idea of shifting their datacentres onto ARM-based or other chips. Google for instance is one of several service providers steering the direction of the OpenPower Foundation (Rackspace is another), a consortium set up by IBM in December 2013 to foster cross-industry open source development of the Power architecture.

Power, which for IBM is the core architecture underlying its high-end servers and mainframes as well as its more recently introduced cognitive-computing-as-a-service platform Watson, is being pitched by the more than 80 consortium members as the cloud and big data architecture of choice. Brad McCredie, IBM fellow and vice president of IBM Power Systems Development and president of the OpenPower Foundation, says there is a huge opportunity for the Power architecture to succeed because of barriers in how technology cost and performance at the CPU level is scaling.

“If you go back five or six years, when the base transistor was scaling so well and so fast, all you had to do was go to the next-gen processor to get those cost-to-performance takedowns you were looking for. The best thing you could do, all things considered or remaining equal, was hop onto the next-gen processor. Now, service providers are not getting those cost take-down curves they were hoping for with cloud, and a lot of cloud services are run on massive amounts of older technology platforms.”

The result is that technology providers have to pull on more and more levers – like adding GPU acceleration or enabling GPU virtualisation, or enabling FPGA attachment – to get cost-to-performance to come down; that is driving much of the heterogeneity in the cloud – different types of heterogeneity, not just at the CPU level.

There’s also a classic procurement-related incentive for heterogeneity among providers. A diversity of suppliers spreads risk and increases competitiveness in the cloud, which is another good thing for cost-to-performance too.

While McCredie says it’s still early days for Power in the cloud, and that Power is well suited to a particular set of data-centric workloads, he acknowledges it’s very hard to stay small and niche while continuing to drive down cost-to-performance. The Foundation is looking to take at least 20 to 30 per cent of the scale-out market, which – considering x86 has about 95 per cent of that market locked up – is fairly ambitious.

“We have our market share in our core business, which for IBM is in the enterprise, but we also want share in the scale-out market. To do that you have to activate the open ecosystem,” he says, alluding to the IBM-led consortium.

It’s clear the increasingly prevalent open source mantra in the tech sector is spreading to pretty much every level of the cloud stack. For instance Rackspace, which participates in both OpenStack and the Open Compute Project – open source cloud software and hardware projects respectively – is actively working to port OpenStack over to the Power architecture, with the goal of having OpenStack running on OpenPower / Open Compute Project hardware in production sometime in the next couple of years. It’s that kind of open ecosystem McCredie says is essential in cloud today and, critically, such openness need not come at the cost of loose integration or a consequent performance tax.

SPARC, which has its roots in financial services, retail and manufacturing, is interesting in part because it remains a fairly closed ecosystem and largely ends up in machines finely tuned to very specific database workloads. Oracle’s hardware business incurred losses for several years following its acquisition of Sun Microsystems, the architecture’s progenitor, yet throughout 2014 it largely bucked the downward trend experienced by most high-end server vendors, and continues to do so.

The company’s Q2 2015 results saw hardware systems revenue grow 4 per cent year on year to roughly $717m, with the SPARC-based Exalogic and SuperCluster systems achieving double-digit growth.

“We’ve actually seen a lot of customers that have gone from SPARC to x86 Linux now very strongly come back to SPARC Solaris, in part because the technology has the audit and compliance features built into the architecture, they can do one click reporting, and because the virtualisation overhead with Solaris on SPARC is much lower when compared with other virtualisation platforms,” says Paul Flannery, senior director EMEA product management in Oracle’s server group.

Flannery says openness and heterogeneity don’t necessarily lead to the development of the most performant outcome. “The complexity of having multiple vendors in your stack and then having to worry about the patching, revision labels of each of those platforms is challenging. And in terms of integrating those technologies – the fact we have all of the databases and all of the middleware and the apps – to be able to look at that whole environment.”

Robert Jenkins, chief executive officer of CloudSigma, a cloud service provider that recently worked with Oracle to launch one of the first SPARC-as-a-Service platforms, says that ultimately computing is still very heterogeneous.

“The reality is a lot of people don’t get the quality and performance that they need from public cloud because they’re jammed through this very rigid framework, and computing is very heterogeneous – which hasn’t changed with cloud,” he says. “You can deploy simply, but inefficiently, and the reality is that’s not what most people want. As a result we’ve made efforts to go beyond x86.”

He says the company is currently hashing out a deal with a very large bank that wants to use the latest SPARC architecture as a cloud service – so without having to shell out half a million dollars per box, which is roughly what Oracle charges, or migrate off the architecture altogether, which is costly and risky. Besides capex, SPARC is well suited to be offered as a service because the kinds of workloads that run on the architecture tend to be more variable or run in batches.

“The enterprise and corporate world is still focused on SPARC and other older specialised architectures, mainframes for instance, but it’s managing that heterogeneous environment that can be difficult. Infrastructure as a service is still fairly immature, and combined with the fact that companies using older architectures like SPARC tend not to be first movers, you end up in this situation where there’s a gap in the tooling necessary to make resource and service management easier.”

Does It Stack Up For Enterprises?

Whereas datacentre modernisation during the 90s entailed, among other things, a transition away from expensive mainframes running Unix workloads towards lower-cost commodity x86 machines running Linux or Microsoft-based software packages on bare metal, for many large enterprises, much of the 2000s focused on virtualising the underlying hardware platforms in a bid to make them more elastic and more performant. Those hardware platforms were overwhelmingly x86-based.

But, many of those same enterprises refused to go “all-in” on virtualisation or x86, maintaining multiple compute architectures to support niche workloads that ultimately weren’t as performant on commodity kit; financial services and the aviation industry are great examples of sectors where one can still find plenty of workloads running on 40-50 year old mainframe technology.

Andrew Butler, research vice president focusing on servers and storage at Gartner and an IT industry veteran, says the same trend is showing up in the cloud sector, as well as, to some extent, the same challenges.

“What is interesting is that you see a lot of enterprises claiming to move wholesale into the cloud, which speaks to this drive towards commoditisation in hardware – x86 in other words – as well as services, features and decision-making more generally. But that’s definitely not to say there isn’t room for SPARC, Power, mainframes or ARM in the datacentre, despite most of those – if you look at the numbers – appearing to have had their day,” Butler says.

“At the end of the day, in order to be able to run the workloads that we can relate to, delivering a given amount of service level quality is the overriding priority – which in the modern datacentre primarily centres on uptime and reliability. But while many enterprises were driven towards embracing what at the time was this newer architecture because of flexibility or cost, performance in many cases still reigns supreme, and there are many pursuing the cloud-enablement of legacy workloads, wrapping some kind of cloud portal access layer around a mainframe application for instance.”

“The challenge then becomes maintaining this bi-modal framework of IT, and dealing with all of the technology and cultural challenges that come along with all of this; in other words, dealing with the implications of bringing things like mainframes into direct contact with things like the software defined datacentre,” he explains.

A senior datacentre architect working at a large American airline who insists on anonymity says the infrastructure management, technology and cultural challenges alluded to above are very real. But they can be overcome, particularly because some of these legacy vendors are trying to foster more open exposure of their APIs for management interfaces (easing the management and tech challenge), and because ops management teams do get refreshed from time to time.

What seems to have a large impact is the need to ensure the architectures don’t become too complex, which can occur when old legacy code takes priority simply because the initial investment was so great. This also makes it more challenging for newer generations of datacentre specialists coming into the fold.

“IT in our sector is changing dramatically but you’d be surprised how much of it still runs on mainframes,” he says. “There’s a common attitude towards tech – and reasonably so – in our industry that ‘if it ain’t broke don’t fix it’, but it can skew your teams towards feeling the need to maintain huge legacy code investments just because.”

As Butler alluded to earlier, this bi-modality isn’t particularly new, though there is a sense among some that the gap between all of the platforms and architectures is growing when it comes to cloud due to the expectations people have on resilience and uptime but also ease of management, power efficiency, cost, and so forth. He says that with IBM’s attempts to gain mindshare around Power (in addition to developing more cloudy mainframes), ARM’s endeavour to do much the same around its processor architecture and Oracle’s cloud-based SPARC aspirations, things are likely to remain volatile for vendors, service providers and IT’ers for the foreseeable future.

“It’s an incredibly volatile period we’re entering, where this volatility will likely last seven years, possibly up to a decade, before it settles down – if it settles down,” Butler concludes.

AEC firm K&A moves from private to public cloud, saves 40% in costs

Khatib & Alami moved onto iland’s public cloud platform this year

Global architectural design and project management firm Khatib & Alami (K&A) has moved from a private cloud platform onto a public cloud, which the company said has led to a 40 per cent reduction in IT operations management spend.

K&A, which was set up in 1964 and has offices in the Middle East, Africa, Western Europe and North America, offers a range of architectural and engineering services.

The company originally moved to deploy its internal applications on a private cloud platform hosted in iland’s datacentre in London, which it did in order to consolidate its IT environments.

At the time the company also experimented with public cloud platforms, but preferred to maintain its private cloud deployment. However, while it’s difficult to pin down an exact figure at which private and public cloud platforms are equal in cost, the company’s corporate IT manager Mohamed Saad said the public cloud option began to make more sense as the company’s growth began to outpace its ability to scale efficiently, both in terms of technology and personnel.

“The hardware was becoming too restrictive because we weren’t able to scale up. We would have had to purchase more hardware and then deploy that and add more virtual servers with capacity for additional processing power. We would also have needed to employ the maintenance staff that went along with purchasing more hardware. Then we’d have to maintain all this equipment,” he explained.

“All of the maintenance and management headaches and the fact we needed rapid scalability helped us come to the decision that having our own private cloud infrastructure was just too much of a hassle.”

“What’s more, iland’s public cloud was considerably more economical than using our own equipment. We’re getting close to 35 to 40 per cent cost savings with iland’s cloud. iland now hosts all of our mission critical applications, allowing us to focus our IT efforts on activities that drive our business forward,” he added.

A More Practical View of Cloud Brokers

The conventional view of cloud brokers misses the need to enforce policies and ensure compliance

During a dinner at VMworld organized by Lilac Schoenbeck of BMC, we had the chance to chat about cloud and related issues with Kia Behnia, CTO at BMC. Discussion turned, naturally I think, to process. That could be because BMC is heavily invested in automating and orchestrating processes. Despite the nomenclature (business process management), for IT the focus is on operational process automation, though eventually IT will have to raise the bar and focus on the more businessy aspects of IT and operations.

Alex Williams postulated the decreasing need for IT in an increasingly cloudy world. On the surface this generally seems to be an accurate observation. After all, when business users can provision applications a la SaaS to serve their needs do you really need IT? Even in cases where you’re deploying a fairly simple web site, the process has become so abstracted as to comprise the push of a button, dragging some components after specifying a template, and voila! Web site deployed, no IT necessary.

While from a technical difficulty perspective this may be true (and if we say it is, it is only for the smallest of organizations), there are many responsibilities of IT that are simply overlooked and, as we all know, underappreciated for what they provide, not the least of which is being able to understand the technical implications of regulations and requirements like HIPAA, PCI-DSS, and SOX – all of which have some technical aspect to them and need to be enforced, well, with technology.

See, choosing a cloud deployment environment is not just about “will this workload run in cloud X”. It’s far more complex than that, with many more variables that are often hidden from the end-user, a.k.a. the business peoples. Yes, cost is important. Yes, performance is important. And these are characteristics we may be able to gather with a cloud broker. But what we can’t know is whether or not a particular cloud will be able to enforce other policies – those handed down by governments around the globe and those put into writing by the organization itself.

Imagine the horror of a CxO upon discovering an errant employee with a credit card has just violated a regulation that will result in Severe Financial Penalties or worse – jail. These are serious issues that conventional views of cloud brokers simply do not take into account. It’s one thing to violate an organizational policy regarding e-mailing confidential data to your Gmail account, it’s quite another to violate some of the government regulations that govern not only data at rest but in flight.

A PRACTICAL VIEW of CLOUD BROKERS

Thus, it seems a more practical view of cloud brokers is necessary; a view that enables such solutions to consider not only performance and price, but also the ability to adhere to and enforce corporate and regulatory policies. Such a data center-hosted cloud broker would be able to take these very important factors into consideration when making decisions regarding the optimal deployment environment for a given application. That may be a public cloud, it may be a private cloud – it may be a dynamic data center. The resulting decision (and options) are not nearly as important as the ability for IT to ensure that the technical aspects of policies are included in the decision-making process.

And it must be IT that codifies those requirements into a policy that can be leveraged by the broker and ultimately the end-user to help make deployment decisions. Business users, when faced with requirements for web application firewalls in PCI-DSS, for example, or ensuring a default “deny all” policy on firewalls and routers, are unlikely to be able to evaluate public cloud offerings for their ability to meet such requirements. That’s the role of IT, and even wearing rainbow-colored cloud glasses can’t eliminate the very real and important role IT has to play here.
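
To make that concrete, here is a minimal sketch of the kind of policy-aware decision logic such a broker might apply once IT has codified regulatory and corporate requirements as named technical controls. The provider names, capability sets, costs and scoring weights below are hypothetical illustrations, not a description of any particular broker product.

```python
# Illustrative sketch only: a policy-aware cloud broker decision, using
# hypothetical provider capability data. A real broker would draw this
# information from audited provider attestations, not a hard-coded list.

from dataclasses import dataclass, field


@dataclass
class Provider:
    name: str
    cost_per_hour: float            # indicative compute cost
    perf_score: float               # higher is better (synthetic benchmark)
    capabilities: set = field(default_factory=set)


# Technical controls IT has codified from regulatory and corporate policy,
# e.g. PCI-DSS calls for a web application firewall and default-deny firewalls.
PCI_DSS_CONTROLS = {"web_application_firewall", "default_deny_firewall", "encryption_at_rest"}
CORPORATE_CONTROLS = {"eu_data_residency"}


def eligible(provider: Provider, required_controls: set) -> bool:
    """A provider is only considered if it can enforce every required control."""
    return required_controls <= provider.capabilities


def choose_deployment(providers, required_controls, perf_weight=0.5):
    """Rank compliant providers by a simple cost/performance trade-off."""
    candidates = [p for p in providers if eligible(p, required_controls)]
    if not candidates:
        raise RuntimeError("No environment can enforce the required policies; "
                           "deploy to the controlled local environment instead.")
    # Lower cost and higher performance both improve the score.
    return max(candidates,
               key=lambda p: perf_weight * p.perf_score - (1 - perf_weight) * p.cost_per_hour)


if __name__ == "__main__":
    providers = [
        Provider("public-cloud-a", 0.12, 7.5, {"web_application_firewall", "default_deny_firewall"}),
        Provider("public-cloud-b", 0.18, 8.0, PCI_DSS_CONTROLS | {"eu_data_residency"}),
        Provider("private-dc",     0.30, 6.0, PCI_DSS_CONTROLS | CORPORATE_CONTROLS),
    ]
    choice = choose_deployment(providers, PCI_DSS_CONTROLS | CORPORATE_CONTROLS)
    print(f"Deploy to: {choice.name}")
```

The point isn’t the scoring arithmetic; it’s that compliance acts as a hard gate – an environment that can’t enforce a required control never makes it into the price/performance comparison at all.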

The role of IT may be changing, transforming, but it is in no way being eliminated or decreasing in importance. In fact, given the nature of today’s environments and threat landscape, the importance of IT in helping to determine deployment locations that at a minimum meet organizational and regulatory requirements is paramount to enabling business users to have more control over their own destiny, as it were.

So while cloud brokers currently appear to be external services, often provided by SIs with a vested interest in cloud migration and the services they bring to the table, ultimately these beasts will become enterprise-deployed services capable of making policy-based decisions that include the technical details and requirements of application deployment along with the more businessy details such as costs.

The role of IT will never really be eliminated. It will morph, it will transform, it will expand and contract over time. But business and operational regulations cannot be encapsulated into policies without IT. And for those applications that cannot be deployed into public environments without violating those policies, there needs to be a controlled, local environment into which they can be deployed.



Lori MacVittie is a Senior Technical Marketing Manager, responsible for education and evangelism across F5’s entire product suite.

Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.

She is the author of XAML in a Nutshell and a co-author of The Cloud Security Rules.

 


BIG-IP Solutions for Microsoft Private Cloud

Five of the top six services critical to cloud are application delivery services, and all are available with F5 BIG-IP.


The big news at MMS 2012 was focused on private cloud and Microsoft’s latest solutions in the space with System Center 2012. Microsoft’s news comes on the heels of IBM’s latest foray with its PureSystems launch at its premier conference, IBM Pulse.

As has become common, while System Center 2012 addresses the resource most commonly associated with cloud of any kind – compute – and the means by which operational tasks can be codified, automated, and integrated, it does not delve too deeply into the network, leaving that task to its strategic partners.

One of its long-term partners is F5, and we take the task seriously. The benefits of private cloud are rooted in greater economies of scale through broader aggregation and provisioning of resources, as well as its ability to provide for flexible and reliable applications that are always available and rely on many of these critical services. Applications are not islands of business functionality, after all; they rely upon a multitude of network-hosted services such as load balancing, identity and access management, and security services to ensure a consistent, secure end-user experience from anywhere, from any device. Five of the top six services seen as most critical to cloud implementations in a 2012 Network World Cloud survey are infrastructure services, all of which are supported by the application delivery tier.

The ability to consistently apply policies governing these aspects of every successful application deployment is critical to keeping the network aligned with the allocation of compute and storage resources. Without the network, applications cannot scale, reliability is variable, and security is compromised through fragmentation and complexity. The lack of a unified infrastructure architecture reduces the performance, scale, security and flexibility of cloud computing environments, both private and public. Thus, just as we ensure the elasticity and operational benefits associated with a more automated and integrated application delivery strategy for IBM, so have we done with respect to a Microsoft private cloud solution.

BIG-IP Solutions for Microsoft Private Cloud

BIG-IP solutions for Microsoft private cloud take advantage of key features and technologies in BIG-IP version 11.1, including F5’s virtual Clustered Multiprocessing™ (vCMP™) technology, iControl®, F5’s web services-enabled open application programming interface (API), administrative partitioning and server name indication (SNI). Together, these features help reduce the cost and complexity of managing cloud infrastructures in multi-tenant environments. With BIG-IP v11.1, organizations reap the maximum benefits of conducting IT operations and application delivery services in the private cloud. Although these technologies are generally applicable to all cloud implementations – private, public or hybrid – we also announced Microsoft-specific integration and support that enables organizations to ensure the capability to extend automation and orchestration into the application delivery tier for maximum return on investment.

F5 Monitoring Pack for System Center
Provides two-way communication between BIG-IP devices and the System Center management console. Health monitoring, failover, and configuration synchronization of BIG-IP devices, along with customized alerting, Maintenance Mode, and Live Migration, occur within the Operations Manager component of System Center.

The F5 Load Balancing Provider for System Center
Enables one-step, automated deployment of load balancing services through direct interoperability between the Virtual Machine Manager component of System Center 2012 and BIG-IP devices. BIG-IP devices are managed through the System Center user interface, and administrators can custom-define load balancing services.

The Orchestrator component of System Center 2012
Provides F5 traffic management capabilities and takes advantage of workflows designed using the Orchestrator Runbook Designer. These custom workflows can then be published directly into System Center 2012 service catalogs and presented as a standard offering to the organization. This is made possible using the F5 iControl SDK, which gives customers the flexibility to choose a familiar development environment such as the Microsoft .NET Framework programming model or Windows PowerShell scripting (a minimal illustrative sketch follows below).
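
For a rough sense of what such automation looks like programmatically, here is a minimal sketch in Python against the later iControl REST interface, showing the kind of one-step pool provisioning a workflow might trigger. The hostname, credentials, member addresses and endpoint path are assumptions for illustration only; the integrations described above use the SOAP-based iControl SDK from .NET or Windows PowerShell rather than this exact approach.

```python
# Illustrative sketch only: creating a BIG-IP load balancing pool over the
# iControl REST interface from Python. Endpoint path, credentials and member
# addresses are assumed for illustration; adapt to your environment.

import requests

BIGIP = "https://bigip.example.com"   # hypothetical management address
AUTH = ("admin", "changeme")          # hypothetical credentials


def create_pool(name: str, members: list[str], monitor: str = "http") -> None:
    """Create an LTM pool and attach members given as 'address:port' strings."""
    payload = {
        "name": name,
        "monitor": monitor,
        "members": [{"name": m} for m in members],
    }
    resp = requests.post(
        f"{BIGIP}/mgmt/tm/ltm/pool",  # assumed iControl REST collection path
        json=payload,
        auth=AUTH,
        verify=False,                 # lab-only: skip TLS verification
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    # A runbook step could call this after Virtual Machine Manager brings up
    # new guests that should sit behind the pool.
    create_pool("web_pool", ["10.0.0.10:80", "10.0.0.11:80"])
```

In the supported integrations this step is driven from the System Center side – the Load Balancing Provider and Orchestrator runbooks call iControl on the administrator’s behalf – rather than through hand-written scripts like this one.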

 

[Diagram: F5 BIG-IP solutions for Microsoft private cloud]

Private cloud – as an approach to IT operations – calls for transformation of datacenters, leveraging a few specific strategic points of control, to aggregate and continuously re-allocate IT resources as needed in such a way as to make software applications more like services that are always on and secured across users and devices. Private cloud itself is not a single, tangible solution today. Today it is a solution comprised of several key components, including power/cooling, compute, storage and network, management and monitoring tools and the software applications/databases that end users need.

We’ve moved past the hype of private cloud and its potential benefits. Now organizations need a path, clearly marked, to help them build and deploy private clouds.

That’s part of F5’s goal – to provide the blueprints necessary to build out the application delivery tier to ensure a flexible, reliable and scalable foundation for the infrastructure services required to build and deploy private clouds.

Availability

The F5 Monitoring Pack for System Center and the F5 PRO-enabled Monitoring Pack for System Center are now available. The F5 Load Balancing Provider for System Center is available as a free download from the F5 DevCentral website. The F5 integration with the Orchestrator component of System Center 2012 is based on F5 iControl and Windows PowerShell, and is also free.

