Category archive: Infrastructure as a Service

AWS goes hipster, plans pop-up shop in London

AWS is opening a pop-up shop in London following other openings in San Francisco and New York City

Amazon Web Services has announced plans to take its AWS Pop-up Loft programme to London in early September in a bid to reach out to local UK startups.

The temporary shops will be a place where developers, engineers and entrepreneurs can come to learn about and get trained on the company’s services, meet clients, and receive guidance on cloud migration.

The company has opened similar pop-up shops in San Francisco and New York City, but the most recently announced shop, which is due to open September 10, is the company’s first crack at one outside the US.

“The UK is a hotbed of innovation and London is one of the main places where we see talented, ambitious entrepreneurs coming together to test ideas and start new businesses that leverage cloud computing,” said Werner Vogels, chief technology officer and vice president, Amazon.com.

“With the AWS Pop-up Loft in London we will be bringing together a host of AWS resources, and some of the brightest and most creative minds in the industry, to help startups across the UK. We look forward to working alongside the next generation of UK businesses and helping them to reach their full potential,” Vogels said.

Intel and Chef will also be supporting the pop-up shop.

Patrick Bliemer, managing director, Intel Northern Europe said: “The startup community is a fundamental driver of technology innovations fuelling the rapid growth of the digital services economy. Intel is excited to be working closely with AWS on the AWS Pop-up Loft program to help enable environments around the world where users have access to the tools and expert guidance they need to bring new ideas and innovations to market.”

Alibaba to set up cloud datacentre, HQ in Singapore

Alibaba is adding a datacentre in Singapore, where it will also place its international HQ

Alibaba’s cloud computing division Aliyun revealed plans to set up a datacentre in Singapore, where it also plans to base its overseas business headquarters.

The Singapore datacentre, its seventh globally, will host the company’s growing suite of cloud services and link up with its existing datacentres in Beijing, Hangzhou, Qingdao, Hong Kong, Shenzhen, and Silicon Valley.

“The cloud datacentre in Singapore is a key milestone in our strategy to help businesses of all sizes innovate and scale, wherever they are based, and however they choose to grow,” said Sicheng Yu, vice president of Aliyun. “Aliyun offers a unique combination of services for success in the cloud, including high-volume cloud-based transaction support and quality assurance for cloud computing services.”

Singapore will also be home to the company’s international headquarters, where its global business outside of China will be managed.

Aliyun claims demand for its cloud services is growing at a whopping 82 per cent, with revenues from its cloud services more than doubling year on year. The company said it has over 1.8 million cloud customers as of June this year.

Last month Aliyun’s parent Alibaba announced plans to plough $1bn into its cloud computing division, which could give it the scale it needs to compete more effectively with the likes of Amazon and Google. In addition to the Singapore datacentre, which is scheduled to go live in September this year, the company also plans to add cloud datacentres in the Middle East, Japan, and in various countries in Europe as part of that investment.

At the time the company said it also plans to use the funds to expand its partnerships through its recently announced Marketplace Alliance Program, a move that sees it partnering with large tech and datacentre operators, initially including Intel, Singtel, Meeras, Equinix and PCCW among others to help localise its cloud computing services and grow its ecosystem.

Fujitsu, Red Hat partner on OpenStack-based private clouds

Red Hat and Fujitsu are partnering to develop OpenStack converged infrastructure solutions

Fujitsu and Red Hat have jointly developed a dedicated solution to simplify the creation of OpenStack private clouds.

Primeflex for Red Hat OpenStack is a converged compute and storage offering that combines Fujitsu’s server technology with Red Hat Enterprise Linux OpenStack Platform software, backed by Fujitsu’s professional services outfit.

The companies said the OpenStack-based converged offering will speed up cloud deployment.

Harald Bernreuther, director global infrastructure solutions at Fujitsu said: “Primeflex for Red Hat OpenStack can underpin any organisation’s plan to transform their business model by leveraging cloud computing. By opting for an OpenStack-based solution, organisations can run new cloud-scale workloads while also optimising costs.

“Primeflex for Red Hat OpenStack extends the philosophy of cost optimisation, through simplifying system maintenance and consolidating technology updates across the entire system stack, all the way from the underlying hardware through to the operating system,” Bernreuther said.

Red Hat said there is value in driving strong integration between software and hardware in the cloud space.

“OpenStack is a rapidly-growing, open source cloud infrastructure platform that is cost-effective, open, flexible and highly scalable,” said Radhesh Balakrishnan, general manager, OpenStack, Red Hat.

“We are excited about Fujitsu’s offering based on Red Hat Enterprise Linux OpenStack Platform to deliver private cloud infrastructure solutions and we look forward to continuing the collaboration to provide customers with an innovative cloud platform for digital business initiatives,” he said.

Red Hat isn’t the only OpenStack vendor boosting its converged infrastructure strategy as of late. In July Mirantis unveiled plans to work with a range of vendors, initially Dell and Juniper, to deliver OpenStack-based converged infrastructure solutions for enterprises.

IBM announces Linux mainframe app development cloud

IBM is trying to keep mainframes relevant in the cloud era

IBM is open sourcing a large set of Linux mainframe code and launching the LinuxONE Developer Cloud, a cloud-based platform for developers to create applications for a Linux server based on the mainframe.

The LinuxONE Developer Cloud, which will be deployed in select IBM datacentres globally, will provide developers access to a cloud-based development, piloting and testing environment for Linux-based mainframe workloads.

The move coincides with the company’s launch of a portfolio of Linux mainframe services, called LinuxONE, that IBM says are optimised to run cloud-native workloads like Dockerized apps and NoSQL databases.

“Fifteen years ago IBM surprised the industry by putting Linux on the mainframe, and today more than a third of IBM mainframe clients are running Linux,” said Tom Rosamilia, senior vice president, IBM Systems.

“We are deepening our commitment to the open source community by combining the best of the open world with the most advanced system in the world in order to help clients embrace new mobile and hybrid cloud workloads. Building on the success of Linux on the mainframe, we continue to push the limits beyond the capabilities of commodity servers that are not designed for security and performance at extreme scale,” Rosamilia said.

As part of the move the company is contributing tens of thousands of lines of code to the recently created Open Mainframe Project, formed by the Linux Foundation to optimise Linux deployments on mainframes.

“Linux on the mainframe has reached a critical mass such that vendors, users and academia need a neutral forum where they can work together to advance Linux tools and technologies and increase enterprise innovation,” said Jim Zemlin, the Linux Foundation executive director.

“The Open Mainframe Project is a direct response to the demands of Linux users and the supporting open source ecosystem to address unique features and requirements built into mainframes for security, availability and performance,” Zemlin said.

Rackspace to add AWS to Fanatical Support services

Rhodes: “We are positioned to become the dominant service provider for these cloud platforms”

Rackspace is currently developing a Fanatical Support offering for AWS customers, the company’s latest move aimed at shifting its business towards managed cloud services.

Speaking about the company’s second quarter financial results earlier this week Rackspace president and chief exec Taylor Rhodes said the company plans to extend its Fanatical Support and managed cloud services to the AWS platform “later this year.”

“As I’ve advised you in earlier calls, we don’t expect significant revenue for managed services on other cloud providers in 2015, but we’re excited about the prospects for this business. We estimate that the addressable market is in the multiple billions of dollars annually and is growing in the high double digits,” Rhodes explained in a call with analysts and journalists.

“Because of our scale and reputation for Fanatical Support, we are positioned to become the dominant service provider for these cloud platforms,” he said.

The news comes about a month after Rackspace announced it would extend its Fanatical Support services to Microsoft Azure’s public and private cloud infrastructure. The company said customers will be able to buy either bundled Azure infrastructure with support, or just support services. The offerings will be available first in the US, with plans for an international rollout “through early 2016.”

The Six Myths of Hybrid IT

Bennett: It is time to debunk some hybrid cloud myths

Many companies face an ongoing dilemma: How to get the most out of legacy IT equipment and applications (many of which host mission-critical applications like their ERP, accounting/payroll systems, etc.), while taking advantage of the latest technological advances to keep their company competitive and nimble.

The combination of cloud and third-party datacentres has caused a shift in the way we approach building and maintaining our IT infrastructure. A best-of-breed approach previously meant a blending of heterogeneous technology solutions into an IT ecosystem. It now focuses on the services and technologies that remain on-premises and those that ultimately will be migrated off-premises.

A hybrid approach to IT infrastructure enables internal IT groups to support legacy systems with the flexibility to optimise service delivery and performance through third-party providers. Reconciling resources leads to improved business agility, more rapid delivery of services, exposure to innovative technologies, and increased network availability and business uptime, without having to make the budget case for CAPEX investment. However, despite its many benefits, a blended on-premises and off-premises operating model is fraught with misconceptions and myths — perpetuating a “what-if?” type of mentality that often stalls innovation and business initiatives.

Here are the facts behind some of the most widespread hybrid IT myths:

Myth #1: “I can do it better myself.”

If you’re in IT and not aligned with business objectives, you may eventually find yourself out of a job. The hard truth is that you can’t be better at everything. Technology is driving change so rapidly that almost no one can keep up.

So while it’s not always easy to say “I can’t do everything as well as someone else can,” it’s perfectly acceptable to stick to what you’re good at and then evaluate other opportunities to evolve your business. In this case, that means outsourcing select IT functionality where you can realise improved capabilities and value for your business. Let expert IT outsource providers do what they do best, managing IT infrastructure for companies 24/7/365, while you concentrate on IT strategy to keep your business competitive and strong.

Myth #2: “I’ll lose control in a hybrid IT environment.”

A functional IT leader with responsibility over infrastructure that management wants to outsource may fear the loss of their team’s jobs. Instead, the day-to-day management of the company’s infrastructure might be better served off-premises, allowing the IT leader to focus on the strategy and direction of the IT functions that differentiate their business in order to stay ahead of fast-moving market innovation and customer demands.

In the early days of IT, it was one size fits all. Today, an IT leader has more control than ever. For example, you can buy a service that comes with little management and spin resources up using embedded API interfaces. The days when you bought a managed service and had no control, or visibility, over it are gone. With the availability of portals, plug-ins and platforms, internal resources have more control, whether they want their environment managed by a third party or want to manage it outright on their own.

Myth #3: “Hybrid IT is too hard to manage.”

Do you want to differentiate your IT capabilities as a means to better support the business? If you do want to manage it on your own, you need to have the people and processes in place to do so. An alternative is to partner with a service provider offering multiple off-premises options and a more agile operating model than doing all of it on your own. Many providers bundle management interfaces, orchestration, automation and portals with their offerings, which provides IT with complete transparency and granular control over your outsourced solution. These portals are also API-enabled to ensure these tools can be integrated into any internal tools you have already invested in, and provide end-to-end visibility into the entire hybrid environment.

Myth #4: “Hybrid IT is less secure than my dedicated environment.”

In reality, today’s IT service providers are likely more compliant than your business could ever achieve on its own. To be constantly diligent and compliant, a company may need to employ a team of internal IT security professionals to manage day-to-day security concerns. Instead, it makes sense to let a team of external experts worry about data security and provide a “lessons-learned” approach to your company’s security practice.

There are cases where insourcing makes sense, especially when it comes to the business’ mission-critical applications. Some data should absolutely be kept as secure and as close to your users as possible. However, outsourced infrastructure is increasingly becoming more secure because providers focus exclusively on the technology and how it enables their users. For example, most cloud providers will encrypt your data and hand the key to you only. As a result, secure integration of disparate solutions is quickly becoming the rule, rather than the exception.

Myth #5: “Hybrid IT is inherently less reliable than the way we do it now.”

Placing computing closer to users and, in parallel, spreading it across multiple locations, will result in a more resilient application than if you had it in a fixed, single location. In fact, the more mission-critical the application becomes, the more you should spread it across multiple providers and locations. For example, if you build an application for the cloud you’re not relying on any single component being up in order for the application to remain available. This “shared nothing” approach to infrastructure and application design not only makes your critical applications more available, it also adds a level of scalability that is not available in traditional in-house only approaches.

Myth #6: “This is too hard to budget for.”

Today’s managed service providers can perform budgeting as well as reporting on your behalf. Again, internal IT can own this, empowering it to recommend whether to insource or outsource a particular aspect of infrastructure based on the needs of the business. However, in terms of registration, costs, and other considerations, partnering with a third-party service can become a huge value-add for the business.

Adopting a hybrid IT model lowers the risk of your IT resources and the business they support. You don’t have to make huge investments all at once. You can start incrementally, picking the options that help you in the short term and, as you gain experience, allow you the opportunity to jump in with both feet later. Hybrid IT lets you evolve your infrastructure as your business needs change.

If IT and technology have taught us anything, it’s that you can’t afford to let fear prevent your company from doing what it must to remain competitive.

Written by Mike Bennett, vice president global datacentre acquisition and expansion, CenturyLink EMEA

Alibaba takes aim at AWS, Google, Microsoft, pours $1bn into global cloud rollout

Alibaba is pouring $1bn into its cloud division to support global expansion

Alibaba announced plans this week to plough $1bn into its cloud computing division, Aliyun, in a bid to expand the company’s presence and establish new datacentres internationally. The move may give it the scale it needs to compete more effectively with the likes of Amazon and Google.

The company currently operates five datacentres in China and Hong Kong, and earlier this year set up a datacentre in Silicon Valley aimed at local startups and Chinese multinational corporations.

The $1bn in additional investment will go towards setting up new cloud datacentres in the Middle East, Singapore, Japan and in various countries across Europe.

“Aliyun has become a world-class cloud computing service platform that is the market leader in China, bearing the fruits of our investment over the past six years. As the physical and digital are becoming increasingly integrated, Aliyun will serve as an essential engine in this new economy,” said Daniel Zhang, chief executive officer of Alibaba Group.

“This additional US$ 1 billion investment is just the beginning; our hope is for Aliyun to continually empower customers and partners with new capabilities, and help companies upgrade their basic infrastructure. We want to enable businesses to connect directly with consumers and drive productivity using data. Ultimately, our goal is to help businesses successfully transition from an era of information technology to data technology,” Zhang said.

The company said it also plans to use the funds to expand its partnerships through its recently announced Marketplace Alliance Program, a move that sees it partnering with large tech and datacentre operators, initially including Intel, Singtel, Meeras, Equinix and PCCW among others to help localise its cloud computing services and grow its ecosystem.

The investment, if anything, confirms Alibaba’s intent to grow well beyond Asia and displace other large public cloud providers like AWS, IBM and Google, which already boast significant global scale.

Springs.io: Containers are great, but need to be simple to adopt

Springs.io offers containers as a service

Containers are all the rage but everyone does them differently

ElasticHosts founder Richard Davies launched another container-focused venture this week – Springs.io, a pay-as-you-use cloud service targeted primarily at Linux developers. Davies told BCN that while the buzz around Docker and other container technologies is encouragingly high, their uptake will ultimately depend on how providers balance simplicity with performance.

Springs.io is a spinoff of Davies’ other venture, ElasticHosts, which also uses Linux containers but operates more like a traditional infrastructure as a service provider (customers need to subscribe to the service for set periods).

“We have been listening to the market and what we are hearing is that people are craving simplicity,” Davies said. “They just want to be able to sign up to a service without having to choose instance sizes or worry about over-paying, just as you would with your gas or electricity.”

The benefit containers offer over traditional virtualisation platforms is that they scale more closely in line with the resources an application needs, and they scale much more quickly. But most cloud services are provisioned in fixed virtual and/or physical increments that can only scale by adding or subtracting fixed-size VMs or hardware or both. This, Davies said, leads to massive amounts of waste in terms of asset utilisation (for the provider) and cost (for the consumer). It’s a ‘lose-lose’.
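To make the waste argument concrete, here is a back-of-the-envelope sketch in Python. All prices and utilisation figures are illustrative assumptions, not Springs.io or ElasticHosts rates; the point is only the shape of the comparison between provisioning fixed-size VMs for peak load and paying for resources actually consumed.

```python
# Illustrative comparison of fixed VM provisioning vs usage-based billing.
# Every number below is a made-up assumption for the sake of the example.

def fixed_vm_cost(peak_cores: int, cores_per_vm: int,
                  price_per_vm_hour: float, hours: int) -> float:
    """Cost when you must provision whole VMs sized for peak demand."""
    vms_needed = -(-peak_cores // cores_per_vm)  # ceiling division
    return vms_needed * price_per_vm_hour * hours

def usage_based_cost(core_hours_used: float, price_per_core_hour: float) -> float:
    """Cost when billing tracks actual consumption, as containers allow."""
    return core_hours_used * price_per_core_hour

hours = 24 * 30          # one month
peak_cores = 10          # brief daily peak
avg_cores = 2.5          # average utilisation over the month

vm_cost = fixed_vm_cost(peak_cores, cores_per_vm=4,
                        price_per_vm_hour=0.20, hours=hours)
container_cost = usage_based_cost(avg_cores * hours, price_per_core_hour=0.05)

print(f"fixed VMs:   ${vm_cost:.2f}")      # pays for 3 idle-most-of-the-time VMs
print(f"usage-based: ${container_cost:.2f}")
```

Under these assumed numbers the fixed fleet runs at roughly 21 per cent utilisation, which is the kind of gap Davies is pointing at.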

Springs.io uses the same underlying technologies as Docker and other Linux containers (cgroups, namespaces), but whereas those are mostly application containers focused on simplifying portability and managing dependencies in microservices architectures, Springs.io offers operating system containers with a focus on usage-based billing and reactive auto-scaling.

Where most container as a service providers lean heavily upon scripting for scaling and deployment, or run containers within virtual machines, Springs.io supplies auto-scaling for load straight out of the box and requires no user API calls or JSON-juggling for management or monitoring.
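The reactive auto-scaling idea, adjusting a container's allocation from observed load rather than from user API calls, can be sketched as a simple control loop. This is a toy illustration under assumed thresholds; the article does not describe Springs.io's actual mechanism.

```python
# Toy reactive auto-scaler: grow or shrink a container's CPU allocation
# based on observed utilisation, within provider-imposed bounds.
# The thresholds, step sizes and bounds are illustrative assumptions.

def next_allocation(current_cores: float, utilisation: float,
                    min_cores: float = 0.5, max_cores: float = 16.0) -> float:
    """Return the new core allocation given utilisation in [0, 1]."""
    if utilisation > 0.8:      # running hot: scale up by half
        target = current_cores * 1.5
    elif utilisation < 0.3:    # mostly idle: scale down by a third
        target = current_cores / 1.5
    else:                      # comfortable band: leave alone
        target = current_cores
    return max(min_cores, min(max_cores, target))

# A burst of load grows the allocation; quiet periods shrink it again.
alloc = 1.0
for util in [0.9, 0.95, 0.5, 0.1, 0.1]:
    alloc = next_allocation(alloc, util)
    print(f"utilisation {util:.0%} -> {alloc:.2f} cores")
```

Because the loop reacts to measured load, the user never issues a scaling call; that is the contrast with script-driven scaling the paragraph above describes.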

“Docker is exciting to developers struggling with shipping applications; we believe Springs.io is exciting to devops and sysadmins that want simple scaling for a reasonable price,” said Davies.

HP, CenturyLink buddy-up on hybrid cloud

CenturyLink and HP are partnering on hybrid cloud

HP and CenturyLink announced a deal this week that will see HP resell CenturyLink’s cloud services to its partners as part of the HP PartnerOne programme.

As part of the deal HP customers will have access to the full range of CenturyLink services, which are built using HP technology, including managed hosting, colocation, storage, big data and cloud.

“CenturyLink solutions, powered by HP, provide compelling value for organizations seeking hybrid IT solutions,” said James Parker, senior vice president, partner, financial and international, at CenturyLink. “CenturyLink complements the HP portfolio with a breadth of hybrid solutions for enterprises, offering customers the ability to choose the services that make the most sense today, while retaining the flexibility to evolve as business demands shift.”

HP said the move will help CenturyLink expand its reach to new customers, with HP exploiting new opportunities to build hybrid cloud solutions for existing customers.

“As businesses map out a path to the cloud, they need flexibility in how they consume and leverage IT services,” said Eric Koach, vice president of sales, Enterprise Group, central region, HP.

“HP cloud, software and infrastructure solutions help CenturyLink and HP enable clients to build, manage and secure a cloud environment aligned with their strategy, across infrastructure, information and critical applications,” Koach said.

Since splitting up, HP has bifurcated its partner programmes into the PartnerOne programme for service providers and the Helion PartnerOne programme, the latter of which largely includes service providers building solutions on top of OpenStack or Cloud Foundry.

DataCentred ARM-based OpenStack cloud goes GA

DataCentred is moving its ARM-based OpenStack cloud into GA

It has been a big week for ARM in the cloud, with Manchester-based cloud services provider DataCentred announcing that its ARM AArch64-based OpenStack public cloud platform is moving into general availability. The move comes just days after OVH announced it would roll out an ARM-based cloud platform.

The company is running the platform on HP M400 ARM servers, and offering customers access to Intel and ARM architectures alongside one another within an OpenStack environment.

The platform, a product of its partnership with Codethink originally launched in March, comes in response to increasing demand for ARM-based workload support in the cloud, according to DataCentred’s head of cloud services Mark Jarvis.

“The flexibility of OpenStack’s architecture has allowed us to make the integration with ARM seamless. When users request an ARM based OS image, it gets scheduled onto an ARM node and aside from this the experience is identical to requesting x86 resources. Our early adopters have provided invaluable testing and feedback helping us to get to a point where we’re confident about stability and support,” Jarvis explained.
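The behaviour Jarvis describes, routing a request to an ARM or x86 node based on the OS image chosen, is the kind of image-property filtering OpenStack's scheduler supports. A simplified model of that idea is sketched below; the host names, image names and default are hypothetical examples, not DataCentred's actual configuration.

```python
# Simplified model of architecture-aware scheduling: an image carries an
# architecture property, and only hosts matching it remain candidates.
# Host and image definitions below are hypothetical examples.

def filter_hosts(hosts: list, image: dict) -> list:
    """Return names of hosts whose architecture matches the image's."""
    wanted = image.get("architecture", "x86_64")  # assumed default
    return [h["name"] for h in hosts if h["architecture"] == wanted]

hosts = [
    {"name": "node-x86-1", "architecture": "x86_64"},
    {"name": "node-arm-1", "architecture": "aarch64"},
    {"name": "node-arm-2", "architecture": "aarch64"},
]

arm_image = {"name": "ubuntu-arm64", "architecture": "aarch64"}
x86_image = {"name": "ubuntu-amd64"}  # no property: falls back to x86_64

print(filter_hosts(hosts, arm_image))  # ARM nodes only
print(filter_hosts(hosts, x86_image))  # x86 nodes only
```

From the user's side nothing else changes, which matches the "identical to requesting x86 resources" experience in the quote: the image choice alone steers placement.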

“The platform is attracting businesses who are interested in taking advantage of the cost savings the lower-power chips offer as well as developers who are targeting ARM platforms. Developers are particularly interested because virtualised ARM is an incredibly cost-effective alternative to deploying physical ARM hardware on every developer’s desk,” he added.

The company said the ARM architecture also offers environmental and space-saving benefits, because the chips can be deployed at higher density and require less power to run than more conventional x86 chips.

Mike Kelly, founder and chief executive of DataCentred, didn’t comment on customer numbers or revenue figures but stressed the move demonstrates the company has successfully commercialised OpenStack on ARM.

“The market currently lacks easy to use 64-bit ARM hardware and DataCentred’s innovation provides customers with large scale workloads across many cores. Open source software is the future of computing and the General Availability of DataCentred’s new development will make our services even more attractive to price-sensitive and environmentally-aware consumers,” Kelly said.

DataCentred isn’t alone in the belief that ARM has a strong future in the cloud. The move comes the same week French cloud and hosting provider OVH announced plans to add Cavium ARM-based processors to its public cloud platform by the end of next month.

The company, an early adopter of the Power architecture for cloud, said it will add Cavium’s flagship 48 core 64-bit ARMv8-A ThunderX workload-optimized processor to its RunAbove public cloud service.