Category archive: Infrastructure as a Service

The Six Myths of Hybrid IT

It is time to dispel some hybrid cloud myths

Many companies face an ongoing dilemma: how to get the most out of legacy IT equipment and applications (many of which host mission-critical systems such as ERP and accounting/payroll), while taking advantage of the latest technological advances to keep the company competitive and nimble.

The combination of cloud and third-party datacentres has caused a shift in the way we approach building and maintaining our IT infrastructure. A best-of-breed approach previously meant a blending of heterogeneous technology solutions into an IT ecosystem. It now focuses on the services and technologies that remain on-premises and those that ultimately will be migrated off-premises.

A hybrid approach to IT infrastructure enables internal IT groups to support legacy systems while retaining the flexibility to optimise service delivery and performance through third-party providers. Reconciling resources leads to improved business agility, more rapid delivery of services, exposure to innovative technologies, and increased network availability and business uptime, without having to make the budget case for CAPEX investment. Despite these benefits, however, a blended on-premises and off-premises operating model is fraught with misconceptions and myths, perpetuating a “what-if?” mentality that often stalls innovation and business initiatives.

Here are the facts behind some of the most widespread hybrid IT myths:

Myth #1: “I can do it better myself.”

If you’re in IT and not aligned with business objectives, you may eventually find yourself out of a job. The hard truth is that you can’t be better at everything. Technology is driving change so rapidly that almost no one can keep up.

So while it’s not always easy to say “I can’t do everything as well as someone else can,” it’s perfectly acceptable to stick to what you’re good at and then evaluate other opportunities to evolve your business: in this case, outsourcing select IT functions where doing so delivers improved capabilities and value. Let expert IT outsourcing providers do what they do best, managing IT infrastructure 24/7/365, while you concentrate on the IT strategy that keeps your business competitive and strong.

Myth #2: “I’ll lose control in a hybrid IT environment.”

A functional IT leader responsible for infrastructure that management wants to outsource may fear for his or her team’s jobs. In practice, the day-to-day management of the company’s infrastructure might be better served off-premises, freeing the IT leader to focus on the strategy and direction of the IT functions that differentiate the business, in order to stay ahead of fast-moving market innovation and customer demands.

In the early days of IT, it was one size fits all. Today, an IT leader has more control than ever. For example, you can buy a service that comes with little management and spin up resources using embedded API interfaces. The days when you bought a managed service and had no control over, or visibility into, it are gone. With the availability of portals, plug-ins and platforms, internal teams retain control whether they want their environment managed by a third party or want to manage it outright on their own.
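To make the point about API-driven control concrete, here is a minimal sketch of what provisioning a resource through such an interface looks like. The endpoint, field names and the `build_provision_request` helper are invented for illustration; real provider APIs differ, but the shape is similar: a small machine-readable description of what you want, including who manages it.

```python
import json

def build_provision_request(name: str, cpus: int, ram_gb: int, managed: bool) -> str:
    """Build the JSON body for a hypothetical 'create instance' API call."""
    payload = {
        "name": name,
        "cpus": cpus,
        "ram_gb": ram_gb,
        # The same API can flip between provider-managed and self-managed,
        # which is the "control" point made above.
        "management": "provider" if managed else "self",
    }
    return json.dumps(payload)

body = build_provision_request("web-01", cpus=2, ram_gb=4, managed=False)
print(body)
```

Because the request is just data, the same call works from a provider portal, a plug-in, or your own internal tooling.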

Myth #3: “Hybrid IT is too hard to manage.”

Do you want to differentiate your IT capabilities as a means to better support the business? If you want to manage it on your own, you need the people and processes in place to do so. The alternative is to partner with a service provider offering multiple off-premises options and a more agile operating model than doing it all yourself. Many providers bundle management interfaces, orchestration, automation and portals with their offerings, giving IT complete transparency and granular control over the outsourced solution. These portals are also API-enabled, so they can be integrated with any internal tools you have already invested in and provide end-to-end visibility into the entire hybrid environment.

Myth #4: “Hybrid IT is less secure than my dedicated environment.”

In reality, today’s IT service providers are likely to achieve greater compliance than your business could on its own. Staying constantly diligent and compliant can require a dedicated team of internal IT security professionals to manage day-to-day security concerns. Instead, it makes sense to let a team of external experts worry about data security and bring a “lessons-learned” approach to your company’s security practice.

There are cases where insourcing makes sense, especially when it comes to the business’ mission-critical applications. Some data should absolutely be kept as secure and as close to your users as possible. However, outsourced infrastructure is becoming increasingly secure because providers focus exclusively on the technology and how it enables their users. For example, most cloud providers will encrypt your data and hand the key only to you. As a result, secure integration of disparate solutions is quickly becoming the rule rather than the exception.

Myth #5: “Hybrid IT is inherently less reliable than the way we do it now.”

Placing computing closer to users and, in parallel, spreading it across multiple locations results in a more resilient application than hosting it in a single, fixed location. In fact, the more mission-critical the application, the more you should spread it across multiple providers and locations. For example, if you build an application for the cloud, its availability does not depend on any single component being up. This “shared-nothing” approach to infrastructure and application design not only makes critical applications more available, it also adds a level of scalability that traditional in-house-only approaches cannot match.
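The availability argument above can be sketched with back-of-the-envelope arithmetic. This assumes site failures are independent, which is an idealisation (correlated failures do happen), and the 99.9 per cent figure is an illustrative number, not a quoted SLA.

```python
def combined_availability(per_site: float, sites: int) -> float:
    """Probability that at least one of `sites` independent copies is up."""
    return 1 - (1 - per_site) ** sites

# One location at 99.9% availability vs. the same app spread over two
# independent locations: the combined downtime shrinks multiplicatively.
single = combined_availability(0.999, 1)   # ~0.999    (about 8.8 h downtime/year)
dual   = combined_availability(0.999, 2)   # ~0.999999 (about 32 s downtime/year)
print(f"{single:.6f} {dual:.6f}")
```

Each extra independent location multiplies the remaining unavailability by the per-site failure probability, which is why spreading mission-critical applications pays off so quickly.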

Myth #6: “This is too hard to budget for.”

Today’s managed service providers can handle budgeting as well as reporting on your behalf. Again, internal IT can own this, empowering it to recommend whether to insource or outsource a particular piece of infrastructure based on the needs of the business. However, in terms of registration, costs and other considerations, partnering with a third-party service can become a huge value-add for the business.

Adopting a hybrid IT model lowers the risk to your IT resources and the business they support. You don’t have to make huge investments all at once. You can start incrementally, picking the options that help you in the short term and, as you gain experience, jumping in with both feet later. Hybrid IT lets you evolve your infrastructure as your business needs change.

If IT and technology have taught us anything, it’s that you can’t afford to let fear prevent your company from doing what it must to remain competitive.

Written by Mike Bennett, vice president global datacentre acquisition and expansion, CenturyLink EMEA

Alibaba takes aim at AWS, Google, Microsoft, pours $1bn into global cloud rollout

Alibaba is pouring $1bn into its cloud division to support global expansion

Alibaba announced plans this week to plough $1bn into its cloud computing division, Aliyun, in a bid to expand the company’s presence and establish new datacentres internationally. The move may give it the scale it needs to compete more effectively with the likes of Amazon and Google.

The company currently operates five datacentres in China and Hong Kong, and earlier this year it set up a datacentre in Silicon Valley aimed at local startups and Chinese multinational corporations.

The $1bn in additional investment will go towards setting up new cloud datacentres in the Middle East, Singapore, Japan and in various countries across Europe.

“Aliyun has become a world-class cloud computing service platform that is the market leader in China, bearing the fruits of our investment over the past six years. As the physical and digital are becoming increasingly integrated, Aliyun will serve as an essential engine in this new economy,” said Daniel Zhang, chief executive officer of Alibaba Group.

“This additional US$1 billion investment is just the beginning; our hope is for Aliyun to continually empower customers and partners with new capabilities, and help companies upgrade their basic infrastructure. We want to enable businesses to connect directly with consumers and drive productivity using data. Ultimately, our goal is to help businesses successfully transition from an era of information technology to data technology,” Zhang said.

The company said it also plans to use the funds to expand its partnerships through its recently announced Marketplace Alliance Program, which sees it partnering with large tech and datacentre operators – initially Intel, Singtel, Meeras, Equinix and PCCW, among others – to help localise its cloud computing services and grow its ecosystem.

If anything, the investment confirms Alibaba’s intent to grow well beyond Asia and displace other large public cloud providers like AWS, IBM and Google, which already boast significant global scale.

Springs.io: Containers are great, but need to be simple to adopt

Springs.io offers containers as a service

Containers are all the rage but everyone does them differently

ElasticHosts founder Richard Davies launched another container-focused venture this week – Springs.io, a pay-as-you-use cloud service targeted primarily at Linux developers. Davies told BCN that while the buzz around Docker and other container technologies is encouraging, their uptake will ultimately depend on how providers balance simplicity with performance.

Springs.io is a spinoff of Davies’ other venture, ElasticHosts, which also uses Linux containers but operates more like a traditional infrastructure-as-a-service provider (customers subscribe to the service for set periods).

“We have been listening to the market and what we are hearing is that people are craving simplicity,” Davies said. “They just want to be able to sign up to a service without having to choose instance sizes or worry about over-paying, just as you would with your gas or electricity.”

The benefit containers offer over traditional virtualisation platforms is that they scale more closely in line with the resources an application needs, and they scale much more quickly. But most cloud services are provisioned in fixed virtual and/or physical increments that can only scale by adding or subtracting fixed-size VMs, hardware or both. This, Davies said, leads to massive waste in terms of asset utilisation (for the provider) and cost (for the consumer). It’s a ‘lose-lose’.
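The waste argument can be illustrated with a rough calculation comparing a fixed-size VM, billed for its full capacity, against usage-based billing that tracks actual load. The prices and load profile below are invented numbers for illustration only, not Springs.io's or anyone's actual rates.

```python
VM_CPUS = 4                    # fixed VM size (always billed in full)
PRICE_PER_CPU_HOUR = 0.05      # hypothetical rate, same for both models

# A bursty 8-hour load profile: mostly idle with a mid-day spike.
hourly_load_cpus = [0.5, 0.5, 1.0, 3.5, 4.0, 2.0, 1.0, 0.5]

vm_cost     = VM_CPUS * PRICE_PER_CPU_HOUR * len(hourly_load_cpus)  # pay for capacity
usage_cost  = sum(hourly_load_cpus) * PRICE_PER_CPU_HOUR            # pay for actual use
utilisation = sum(hourly_load_cpus) / (VM_CPUS * len(hourly_load_cpus))

print(f"fixed VM: ${vm_cost:.2f}, usage-based: ${usage_cost:.2f}, "
      f"utilisation: {utilisation:.0%}")
```

With this profile the fixed VM is only about 41 per cent utilised, yet the customer pays for all of it; that gap is the "lose-lose" Davies describes.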

Springs.io uses the same underlying technologies as Docker and other Linux containers (cgroups, namespaces), but whereas those are mostly application containers focused on simplifying portability and managing dependencies in microservices architectures, Springs.io offers operating system containers focused on usage-based billing and reactive auto-scaling.

Where most container-as-a-service providers lean heavily on scripting for scaling and deployment, or run containers within virtual machines, Springs.io supplies auto-scaling for load straight out of the box and requires no user API calls or JSON-juggling for management or monitoring.
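A reactive auto-scaler of the kind described above is, at heart, a small control loop. The sketch below is a hypothetical illustration of the idea, not Springs.io's actual implementation; the thresholds and step sizes are invented.

```python
def rescale(current_cpus: int, utilisation: float,
            min_cpus: int = 1, max_cpus: int = 16) -> int:
    """One tick of a reactive scaling loop: return the new CPU allocation."""
    if utilisation > 0.80 and current_cpus < max_cpus:
        return current_cpus * 2                   # under pressure: scale up fast
    if utilisation < 0.20 and current_cpus > min_cpus:
        return max(min_cpus, current_cpus // 2)   # mostly idle: scale down gently
    return current_cpus                           # within the comfortable band

print(rescale(2, 0.95))   # busy: 2 -> 4
print(rescale(8, 0.05))   # idle: 8 -> 4
print(rescale(4, 0.50))   # steady: stays at 4
```

Because the provider runs this loop on the user's behalf, scaling needs no API calls from the customer, which is the "out of the box" point being made.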

“Docker is exciting to developers struggling with shipping applications; we believe Springs.io is exciting to devops and sysadmins that want simple scaling for a reasonable price,” Davies said.

HP, CenturyLink buddy-up on hybrid cloud

CenturyLink and HP are partnering on hybrid cloud

HP and CenturyLink announced a deal this week that will see HP resell CenturyLink’s cloud services to its partners as part of the HP PartnerOne programme.

As part of the deal HP customers will have access to the full range of CenturyLink services, which are built using HP technology, including managed hosting, colocation, storage, big data and cloud.

“CenturyLink solutions, powered by HP, provide compelling value for organizations seeking hybrid IT solutions,” said James Parker, senior vice president, partner, financial and international, at CenturyLink. “CenturyLink complements the HP portfolio with a breadth of hybrid solutions for enterprises, offering customers the ability to choose the services that make the most sense today, while retaining the flexibility to evolve as business demands shift.”

HP said the move will help CenturyLink expand its reach to new customers, while HP exploits new opportunities to build hybrid cloud solutions for existing customers.

“As businesses map out a path to the cloud, they need flexibility in how they consume and leverage IT services,” said Eric Koach, vice president of sales, Enterprise Group, central region, HP.

“HP cloud, software and infrastructure solutions help CenturyLink and HP enable clients to build, manage and secure a cloud environment aligned with their strategy, across infrastructure, information and critical applications,” Koach said.

Since splitting up, HP has bifurcated its partner programmes into the PartnerOne programme for service providers and the Helion PartnerOne programme, the latter of which largely includes service providers building solutions on top of OpenStack or Cloud Foundry.

DataCentred ARM-based OpenStack cloud goes GA

DataCentred is moving its ARM-based OpenStack cloud into GA

It has been a big week for ARM in the cloud, with Manchester-based cloud services provider DataCentred announcing that its ARM AArch64-based OpenStack public cloud platform is moving into general availability. The move comes just days after OVH announced it would roll out an ARM-based cloud platform.

The company is running the platform on HP M400 ARM servers, and offering customers access to Intel and ARM architectures alongside one another within an OpenStack environment.

The platform, a product of its partnership with Codethink originally launched in March, comes in response to increasing demand for ARM-based workload support in the cloud, according to DataCentred’s head of cloud services Mark Jarvis.

“The flexibility of OpenStack’s architecture has allowed us to make the integration with ARM seamless. When users request an ARM-based OS image, it gets scheduled onto an ARM node; aside from this the experience is identical to requesting x86 resources. Our early adopters have provided invaluable testing and feedback, helping us get to the point where we’re confident about stability and support,” Jarvis explained.

“The platform is attracting businesses who are interested in taking advantage of the cost savings the lower-power chips offer as well as developers who are targeting ARM platforms. Developers are particularly interested because virtualised ARM is an incredibly cost-effective alternative to deploying physical ARM hardware on every developer’s desk,” he added.

The company said the ARM architecture also offers environmental and space-saving benefits, because ARM chips can be deployed at higher density and require less power to run than more conventional x86 chips.

Mike Kelly, founder and chief executive of DataCentred, didn’t comment on customer numbers or revenue figures but stressed that the move demonstrates the company has successfully commercialised OpenStack on ARM.

“The market currently lacks easy to use 64-bit ARM hardware and DataCentred’s innovation provides customers with large scale workloads across many cores. Open source software is the future of computing and the General Availability of DataCentred’s new development will make our services even more attractive to price-sensitive and environmentally-aware consumers,” Kelly said.

DataCentred isn’t alone in the belief that ARM has a strong future in the cloud. The move comes the same week French cloud and hosting provider OVH announced plans to add Cavium ARM-based processors to its public cloud platform by the end of next month.

The company, an early adopter of the Power architecture for cloud, said it will add Cavium’s flagship 48 core 64-bit ARMv8-A ThunderX workload-optimized processor to its RunAbove public cloud service.

Google Cloud adds Microsoft support as Windows Server 2003 reaches EOL

Google made Windows Server support generally available this week

Making good on commitments made in December last year, Google has announced general availability of Windows Server on the Google Cloud Platform. The move comes the same week Windows Server 2003 reached its end of life.

“Making sure Google Cloud Platform is the best place to run your workloads is our top priority, so we’re happy that today Windows Server on Google Compute Engine graduates to General Availability, joining the growing list of OSes we support. We’re also introducing several enhancements for Windows Server users,” the company said in a statement on its cloud blog.

“With its graduation to General Availability, Windows Server instances are now covered by the Compute Engine SLA. Windows Server users can now easily deploy a server running Active Directory or ASP.NET using the Cloud Launcher, and can securely extend their existing infrastructure into Google Cloud Platform using VPN.”

Google also said customers that purchase GCP support packages can get architectural and operational support for their Windows Server deployments on its cloud platform. And with Microsoft ceasing support for Windows Server 2003, Google is looking to lure Microsoft developers by committing to support migration to more current Windows Server releases (2008, 2012).

In December last year the company announced it would begin offering Microsoft license mobility for the Google Cloud Platform, enabling existing Microsoft server application users to bring their own licenses and apps – SQL Server, SharePoint, Exchange – from on-premises to the cloud, without incurring any additional fees.

The move to expand support for the Microsoft ecosystem is likely to come as welcome news to the fairly sizeable .NET crowd: Microsoft commands a 32.8 per cent share of all public web server infrastructure, according to W3Techs.

IBM, Mubadala joint venture to bring Watson cloud to MENA

IBM is bringing Watson to the Middle East

IBM is teaming up with Abu Dhabi-based investment firm Mubadala Development Company to create a joint venture, based in Abu Dhabi, that will deliver IBM’s cloud-based Watson service to customers in the Middle East and North Africa (MENA) region.

The companies will set up the joint venture through Mubadala’s subsidiary, Injazat, which will be the sole provider of the Watson platform in the region.

The companies said the move will help create an ecosystem of MENA-based partners, software vendors and startups developing new solutions based on the cognitive compute platform.

“Bringing IBM Watson to the region represents the latest major milestone in the global adoption of cognitive computing,” said Mounir Barakat, executive director of ICT at Aerospace & Engineering Services, Mubadala.

“It also signals Mubadala’s commitment to bringing new technologies and spurring economic growth in the Middle East, another step towards developing the UAE as a hub for the region’s ICT sector,” Barakat said.

Mike Rhodin, senior vice president of IBM Watson said Mubadala’s knowledge of the local corporate ecosystem will help the company expand its cognitive compute cloud service in the region.

IBM has enjoyed some Watson wins in the financial services, healthcare and utilities sectors, but the company has been fairly quiet about how much the division rakes in. Over the past year it has made strides to expand the platform in the US, Africa and Japan, and recently made a number of strategic acquisitions in software automation to boost Watson’s appeal in customer engagement and health services.

EMEA cloud infrastructure spending swells 16% in Q1 2015

Spending on cloud as a proportion of overall IT expenditure is growing at healthy rates

Cloud-related IT infrastructure spending in the EMEA region grew 16 per cent year on year to reach $1.01bn in the first quarter of this year, representing just under 20 per cent of overall IT infrastructure spend, according to analyst house IDC.

Spending on IT infrastructure (servers, disk storage and Ethernet switches) for public cloud accounts for about 8 per cent of the overall spend, and private cloud about 11 per cent; the firm previously estimated that growth in public cloud spending would outpace private cloud spending by nearly 10 percentage points (25 and 16 per cent, respectively).

Michal Vesely, research analyst, European infrastructure at IDC, said expenditure in Western Europe was fuelled mainly by public cloud and large-scale datacentre installations.

“Private cloud expenditure, especially on premises, on the other hand, is more directly connected to regular IT investments by enterprises,” he explained. “Private cloud spending saw a slower pace as users assess their storage, as well as integrated and hyperconverged systems, strategies. Once decisions are made, we expect another major push in the forthcoming period.”

The firm also said unstable macroeconomic conditions in Southern and Western Europe haven’t adversely impacted spending trends, although on-premises deployments seem to be growing at a slower rate, in part due to an increased shift to cloud. According to the analyst house this shift is in full swing: in April the firm forecast that cloud will make up nearly half of all IT infrastructure spending within four years.

OVH adds ARM to public cloud

OVH has launched an ARM-based public cloud service just 8 months after going to market with a Power8-based cloud platform

French cloud and hosting provider OVH said this week it will add Cavium ARM-based processors to its public cloud platform by the end of next month. The move comes just 8 months after the company added the Power8 architecture to its cloud arsenal.

The company said it will add Cavium’s flagship 48-core 64-bit ARMv8-A ThunderX workload-optimised processor to its RunAbove public cloud service.

“This deployment is an example of OVH.Com’s leadership in delivering latest industry leading technologies to our customers,” said Miroslaw Klaba, vice president of research & development at OVH.

“With RunAbove ThunderX based instances, we can offer our users breakthrough performance at the lowest cost while optimizing the infrastructure for targeted compute and storage workloads delivering best in class TCO and user experience.”

OVH, which serves 700,000 customers from 17 datacentres globally, said it wanted to offer a more diversified technology stack and cater to growing demand for cloud-based high performance compute workloads, and drop the cost per VM.

“Cloud service operators are looking to gain the benefits and flexibility of end to end virtualization while managing dynamically changing workloads and massive data requirements,” said Rishi Chugh, director of marketing at Cavium. “ThunderX based RunAbove instances provide exceptional processing performance and flexibility by integrating a tremendous amount of IO along with targeted workload accelerators for compute, security, networking and storage – at the lowest cost per VM for RunAbove – into a power, space and cost-optimized form factor.”

OVH is among just a handful of cloud service providers offering a variety of cloud compute platforms beyond x86. Late last year the company launched a cloud service based on IBM’s Power8 processor architecture, an open source architecture tailored specifically for big data applications, and OpenStack.

But while cloud compute is becoming more heterogeneous there are still far fewer workloads being created natively for ARM and Power8, which are both quite young, than x86, so it will likely take some time for asset utilisation (and the TCO) rates to catch up with where x86 servers are today.

AWS and Chef cook up DevOps deal

Chef is moving onto the AWS Marketplace

IT automation specialist Chef and AWS announced a deal this week that will see Chef’s flagship offering made available via the AWS Marketplace, a move the companies said would help drive DevOps uptake in the cloud.

Tools like Chef and Puppet, which use an intermediary service to help automate a company’s infrastructure, have grown increasingly popular with DevOps personnel in recent years, particularly given not just the growth but the heterogeneity of cloud today. And with DevOps continuing to spread – by 2016 nearly a quarter of the largest enterprises globally will have adopted a DevOps strategy, according to Gartner – it’s clear both AWS and Chef see a huge opportunity to onboard more users to the former’s cloud service.

As one might expect, the companies touted the ability to use Chef to migrate workloads off-premises and into AWS without losing the code developed to automate lower-level services.

Though Chef and Puppet can both be deployed on, and automate, AWS cloud resources, the Chef/AWS deal will see Chef gain one-click deployment and more prominent placement in the AWS Marketplace catalogue of available services.

“Chef is one of the leading offerings for DevOps workflows, which engineers and developers depend on to accelerate their businesses,” said Dave McCann, vice president, AWS Marketplace. “Our customers want easy-to-use software like Chef that is available for immediate purchase and deployment in AWS Marketplace. This new partnership demonstrates our focus on offering low-friction DevOps tools to power customers’ businesses.”

Ken Cheney, vice president of business development at Chef said: “AWS’s market leadership in cloud computing, coupled with our expertise in IT automation and DevOps practices, brings a new level of capabilities to our customers. Together, we’re delivering a single source for automation, cloud, and DevOps, so businesses everywhere can spend minimal calories on managing infrastructure and maximise their ability to develop the software driving today’s economy.”