Archivo de la categoría: Google Cloud Platform

Google adds to its Cloud Platform as vendors compete with AWS Lambda

Google has expanded its public cloud infrastructure for developers, Cloud Platform, with a new service that allows app writers to set up functions that can be triggered in response to events. The new Google Cloud Functions has drawn comparisons with the Lambda offering from Amazon Web Services (AWS).

The service was not announced to the public, but news filtered out after documentation began to appear on Google’s web site, offering advice to developers. According to the briefing notes, Google Cloud Functions is a ‘lightweight, event-based, asynchronous’ computing system that can be used to create small, single-purpose functions that respond to cloud events without the need to manage servers or a runtime environment. Access to the service is available to anyone who fills out a form on the web site.

Google’s answer to AWS Lambda is the latest attempt to catch up with AWS by filling in the omissions in its own service. In September 2015 BCN reported how Google’s Cloud Platform is being sped up by the addition of four new content delivery networks, with CloudFlare, Fastly, Highwinds Network and Level 3 Communications adding to Google’s network of 70 points of presence in 33 countries as part of a new Google CDN Interconnect programme.

Google has also bolstered its cloud offering with new networking, containerisation and price cuts, BCN reported in November 2015. Google has also recruited VMware cofounder Diane Greene to lead all of its cloud businesses, as reported last year.

Google Cloud Functions run as Node.js modules and are written in JavaScript. A function could be set up to react to, say, events in a user’s Google Cloud Storage, such as the appearance of an unwanted type of picture file or title. The service also works with webhooks, which helps speed up programming and code maintenance.
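To make the model concrete, the following is a minimal sketch of what such an event-triggered Node.js module might look like. The function name, the shape of the event object and the return value are all illustrative assumptions, not the actual alpha-era Cloud Functions API; the point is that the platform, rather than a server the developer manages, invokes the exported function when something happens in storage.

```javascript
// Illustrative sketch only: the exact Cloud Functions signature and event
// shape are assumptions here. The idea is that the platform calls the
// exported function with event data when, e.g., a file lands in a
// Cloud Storage bucket -- no server to manage, just the function.
function inspectUpload(event) {
  const name = (event && event.name) || '';
  // Flag picture files of an unwanted type, e.g. BMP uploads.
  if (name.toLowerCase().endsWith('.bmp')) {
    return { action: 'reject', file: name };
  }
  return { action: 'accept', file: name };
}

// Exported so the hosting platform (or a test) can invoke it.
module.exports = { inspectUpload };
```

In this sketch the single-purpose nature of the function is the key design point: it does one small job per event and holds no state between invocations.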

The prices for Cloud Functions were not listed, as the service is still in alpha.

Meanwhile a new start-up, Iron.io, has raised $11.5 million in venture capital to develop its own answer to Lambda and Cloud Functions. Microsoft is also rumoured to be developing its own version of Cloud Functions for Azure, according to a report in Forbes.

Google’s new autoscaling aims to offer instants gratification

Google is to give users more detailed and tightly controlled management of their virtual machines through a new autoscaling feature.

Announced on Google’s own blog, the Google Compute Engine Autoscaler aims to help managers exert tighter control over all the billable components of their virtual machine infrastructure, such as processing power, memory and storage. The rationale is to give its customers tighter control of the costs of all the ‘instances’ (virtual machines) running on Google’s infrastructure and to ramp up resources more effectively when demand for computing power soars.

The new feature allows Google Compute Engine users to specify the machine properties of their instances, such as the number of CPUs and the amount of RAM, on virtual machines running Linux or Windows Server. Cloud computing systems subject to volatile workload variations will no longer face escalating costs and performance ceilings as the platform brings greater scalability, Google promised.

“Our customers have a wide range of compute needs, from temporary batch processing to high-scale web workloads. Google Cloud Platform provides a resilient compute platform for workloads of all sizes enabling our customers with both scale out and scale up capabilities,” said a joint statement from Google Compute Engine Product Managers Jerzy Foryciarz and Scott Van Woudenberg.

Spiky traffic, caused by sudden popularity, flash sales or unexpected mood swings among customers, can confront operations teams with millions of requests per second. Autoscaler makes the complex process of handling these spikes simpler, according to Google’s engineers.

Autoscaler will dynamically adjust the number of instances in response to load conditions and remove virtual machines from the portfolio when they become a needless expense. It will scale from zero to millions of requests per second in minutes without the need to pre-warm, Google said.
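The general idea behind this kind of utilization-target autoscaling can be sketched in a few lines. This is an illustration of the concept only, not Google’s actual algorithm: given the current number of instances and their average utilization, pick the instance count that would bring utilization back to the configured target, clamped to a sensible range.

```javascript
// Conceptual sketch of target-based autoscaling (not Google's actual
// implementation): choose the number of instances needed to bring
// average utilization back toward the configured target.
function recommendedInstances(current, avgUtilization, targetUtilization, maxInstances) {
  // e.g. 4 instances at 90% utilization with a 60% target:
  // ceil(4 * 0.9 / 0.6) = 6 instances recommended.
  const desired = Math.ceil(current * (avgUtilization / targetUtilization));
  // Never scale below one instance or above the configured ceiling.
  return Math.min(Math.max(desired, 1), maxInstances);
}
```

Run periodically against fresh load metrics, a rule like this both adds instances when traffic spikes and retires them when they become a needless expense.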

In another related announcement, Google is to make 32-core virtual machines (VMs) available. This offering is aimed at customers with industrial-scale computing loads and storage-intensive projects, such as graphics rendering. Three variations of 32-core VMs are now on offer. The Standard offering has 32 virtual CPUs and 120 GB of memory. The High Memory option provides 32 virtual CPUs and 208 GB of memory, while the High-CPU offering provides 32 virtual CPUs and 28.8 GB of memory.

“During our beta trials, 32-core VMs have proven very popular with customers running many different workloads, including visual effects rendering, video transcoding, large MySQL and Postgres instances,” said the blog.

Hortonworks buys SequenceIQ to speed up cloud deployment of Hadoop

SequenceIQ will help boost Hortonworks’ position in the Hadoop ecosystem

Hortonworks has acquired SequenceIQ, a Hungary-based startup delivering infrastructure agnostic tools to improve Hadoop deployments. The company said the move will bolster its ability to offer speedy cloud deployments of Hadoop.

SequenceIQ’s flagship offering, Cloudbreak, is a Hadoop as a Service API for multi-tenant clusters that applies some of the capabilities of Blueprint (which lets you create a Hadoop cluster without having to use the Ambari Cluster Install Wizard) and Periscope (autoscaling for Hadoop YARN) to help speed up deployment of Hadoop on different cloud infrastructures.

The two companies have partnered extensively in the Hadoop community, and Hortonworks said the move will enhance its position among a growing number of Hadoop vendors.

“This acquisition enriches our leadership position by providing technology that automates the launching of elastic Hadoop clusters with policy-based auto-scaling on the major cloud infrastructure platforms including Microsoft Azure, Amazon Web Services, Google Cloud Platform, and OpenStack, as well as platforms that support Docker containers. Put simply, we now provide our customers and partners with both the broadest set of deployment choices for Hadoop and quickest and easiest automation steps,” Tim Hall, vice president of product management at Hortonworks, explained.

“As Hortonworks continues to expand globally, the SequenceIQ team further expands our European presence and firmly establishes an engineering beachhead in Budapest. We are thrilled to have them join the Hortonworks team.”

Hall said the company also plans to contribute the Cloudbreak code back to the Apache Foundation sometime this year, though whether it will do so as part of an existing project or as a standalone one has yet to be decided.

Hortonworks’ bread and butter is in supporting enterprise adoption of Hadoop and bringing the services component to the table, but it’s interesting to see the company commit to feeding the Cloudbreak code – which could, at least temporarily, give it a competitive edge – back into the ecosystem.

“This move is in line with our belief that the fastest path to innovation is through open source developed within an open community,” Hall explained.

The big data M&A space has seen further consolidation over the past few months, with Hitachi Data Systems acquiring big data and analytics specialist Pentaho and Infosys paying $200m for Panaya.