Virtual Machines vs #Containers at @DevOpsSummit New York [#DevOps #Docker #Microservices]

There is no question that the cloud is where businesses want to host data. Until recently, hypervisor virtualization was the most widely used method in cloud computing. Lately, though, virtual containers have been gaining in popularity, and for good reason. In the debate between virtual machines and containers, the latter has been seen as the new kid on the block – and, like other emerging technologies, has had some initial shortcomings. However, the container space has evolved drastically since coming onto the cloud hosting scene over 10 years ago. So, what has changed?
In his session at 16th Cloud Expo, Tenko Nikolov, founder and CEO of Kyup, will discuss the security, speed, scalability, cost and outlook for the future of container cloud hosting.


Google drops its cloud pricing – again – and continues to follow Moore’s Law


It’s not the first time this has happened, and it almost certainly won’t be the last: Google has slashed the prices of its Compute Engine instance types, in a bid to keep its pricing in line with Moore’s Law.

Moore’s Law – the theory, set out in 1965, that processing power doubles roughly every two years – just keeps on going. Google appears just as committed to the principle today as it was when Compute Engine was launched in November 2013. Back in March last year, Google SVP Urs Holzle argued pricing was too complex, adding that “it seems you need a PhD to figure out the best options.”

The most recent announcements from the search giant include reducing prices of virtual machines by up to 30%. In reality, that headline drop only applies to instances with the ‘micro’ classification; elsewhere it’s a 20% reduction for standard machines, 15% for high-memory and small machines, and 5% for high-CPU machines.
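
As a back-of-the-envelope illustration of how those tiers play out – the baseline prices below are invented for the example, not Google’s actual rate card – the arithmetic is simple:

```python
# Hypothetical hourly baseline prices (USD) -- illustrative only,
# not Google's actual rate card at the time of the cuts.
baseline = {
    "micro": 0.013,
    "standard": 0.070,
    "high-memory": 0.082,
    "small": 0.035,
    "high-cpu": 0.088,
}

# Percentage reductions reported in the announcement.
cuts = {
    "micro": 0.30,
    "standard": 0.20,
    "high-memory": 0.15,
    "small": 0.15,
    "high-cpu": 0.05,
}

for machine_type, price in baseline.items():
    new_price = price * (1 - cuts[machine_type])
    print(f"{machine_type}: ${price:.3f}/hr -> ${new_price:.3f}/hr")
```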

As Holzle notes in a blog post announcing the price cuts, altogether, the historic price cuts have reduced VM prices by more than half. “When combined with our automatic discounts, per-minute billing, no penalties for changing machine types, and no need to enter into long-term fixed-price commitments, it’s easy to see why we’re leading the industry in price and performance,” he wrote.

Except, according to the analysts, that’s not really the case. The most recent Gartner infrastructure as a service (IaaS) Magic Quadrant, published earlier this month, shows Amazon Web Services (AWS) and Microsoft pulling further ahead of Google. Synergy Research, which has also released many reports digging into the numbers, comes to a similar conclusion.

Google is not, however, resting on its laurels when it comes to other cloud innovations; Google Cloud Storage Nearline, a faster-response storage service, and the Google Cloud Bigtable database (analysed later in this round-up) were both launched recently.

Bridgwater College Passes Desktop Efficiency and Continuity Test With 2X by Parallels

“2X will remain at the core of our ICT delivery well into the future.” – Dave Foster, Head of IT Services, Bridgwater College

Company Overview

Bridgwater College in Somerset is a highly successful multi-site provider of education and training, boasting Beacon status and an Ofsted Outstanding rating. It […]


What to Do When You Forget a File

Worry even less about forgetting a file. In January, I blogged about using the new File Manager in Parallels Access to retrieve a presentation file that you had mistakenly left at home, and I’ve since received several comments that this scenario is pretty common – and that the Parallels Access solution is a welcome and easy fix. […]


Why the cloud wars are good for you: Google’s NoSQL database and the battle with Microsoft and Amazon


Google is creeping up your data stack. In response to Microsoft’s recent Azure DocumentDB announcement, Google has released Cloud BigTable into the wild. Cloud BigTable is a managed NoSQL database service based on a version of BigTable used internally for more than a decade. Last week Google announced the database would be made available to the masses, and could even be accessed through the Apache HBase API (take that, Hadoop vendors!). It’s a big play in the war for control of computing workloads running in the cloud.
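
Since Cloud BigTable is exposed through the Apache HBase API, HBase-style client code can, in principle, point at it unchanged. Here is a minimal sketch of the wide-column read/write pattern using the Python happybase library; the host name and table are invented for the example, and in practice the connection would go through an HBase-compatible gateway:

```python
import happybase

# Connect to an HBase-compatible Thrift endpoint (hypothetical host).
connection = happybase.Connection(host="bigtable-gateway", port=9090)

table = connection.table("events")  # assumes the table already exists

# Wide-column write: one row key, cells addressed as column-family:qualifier.
table.put(b"user123#2015-05-21", {
    b"clicks:page": b"/pricing",
    b"clicks:referrer": b"google.com",
})

# Key-value read: retrieval is by row key, not by arbitrary predicates.
row = table.row(b"user123#2015-05-21")
print(row[b"clicks:page"])
```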

Sure, Google’s announcement could be viewed as yet another volley in the cloudy game of thrones, but it’s more than that. There are two reasons it’s interesting:

  1. Shifting battlefields – big players are moving up the stack to provide greater value and chase higher margins
  2. Full circle – this is NoSQL coming full circle, from research paper to a full service offering

Making stacks from the stack

There are only three companies that have a chance of long-term success in the mass-market cloud infrastructure business. No prizes for guessing the names: Amazon, Microsoft and Google. Amazon is the clear leader. Microsoft is making huge investments which, so far, have the Redmond-based giant out ahead of Google too. The bets are big and the stakes are high. The reality is that most companies are moving to the cloud; it’s only a matter of time, and of which infrastructure player they choose to invest with.

Nobody generates their own electricity at home; it’s a utility. Cloud infrastructure should be the same. As profit margins flatten for cloud offerings, the major players are looking elsewhere for big data dollars. That’s what Google’s announcement is all about. The search behemoth wants to gobble up more of the big data stack.

In the beginning, cloud was just basic physical infrastructure. In recent years, vendors have been adding more and more of what you need to run an application. If you want to run infrastructure on Google, Amazon or Microsoft today, there is less and less you need to build yourself.

So how does this arms-race impact our friendly neighbourhood IT decision maker? Right now it’s all good. There are more options and the fierce competition is forcing down prices. However, buyer beware – many of the services and platforms are far more niche than the providers would have you believe (see below), while at the same time locking you into the vendor’s technology stack.

Full circle: From research paper to product

Many of the important software innovations of the past decade are based on published papers describing Google’s infrastructure. Hadoop is based on two key pieces of research Google published in 2003 and 2004 on its file system (GFS) and its MapReduce implementation. Other examples of research that spawned popular open source software projects include Chubby (ZooKeeper), Dremel (Drill), and BigTable (HBase and Cassandra).

HBase was initially developed at Powerset – a company later acquired by Microsoft – to power a natural language search system. Facebook built Cassandra to power its Inbox search feature. Both HBase and Cassandra use a data model inspired by BigTable, which is why they are being compared to Google’s new offering.

Fast forward seven years and the thing that inspired people to build these open source software projects is now a service you can use. And to take advantage of it you don’t need to build the software that Google uses. In fact you don’t even have to run a product that emulates it. You can really use Google’s Bigtable to power your own applications.

As my friend and former colleague Matt Asay pointed out: “Google has finally given enterprises a clear reason to prefer Google over its cloudy alternatives: The chance to scale and run like Google.”

Are you going to need a BiggerTable?

Organisations that are interested in Google Cloud BigTable have already decided this type of data model is right for their application. The offering is competitive with products from DataStax and the Hadoop distribution vendors that support HBase. While some advanced customers will choose to manage their own infrastructure, many will be happy to let someone else take care of the details, especially if that someone is Google.

Cloud BigTable is a database with a very narrow set of features. It is a wide column store with a simple key-value query model. Like Cassandra and HBase, Cloud BigTable is limited by:

  • A complex data model that presents a steep learning curve to developers, slowing the rate of new application development
  • A lack of features such as an expressive query language (key-value only), integrated text search, native secondary indexes and aggregations – features which, collectively, enable organisations to build more functional applications faster (see the sketch after this list)
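
To make the second limitation concrete: with a key-value-only query model, anything beyond lookup by row key has to be simulated in application code. The sketch below – reusing the hypothetical happybase-style setup from earlier, with an invented index table – shows what maintaining a “secondary index” by hand looks like:

```python
import happybase

connection = happybase.Connection(host="bigtable-gateway", port=9090)
events = connection.table("events")
by_referrer = connection.table("events_by_referrer")  # hand-rolled index table

def put_event(row_key: bytes, referrer: bytes, page: bytes) -> None:
    """Write the event, then mirror it into a manually maintained index
    table -- the store itself offers no native secondary indexes."""
    events.put(row_key, {b"clicks:referrer": referrer, b"clicks:page": page})
    # Index row key = referrer + original key, so a prefix scan finds matches.
    by_referrer.put(referrer + b"#" + row_key, {b"idx:key": row_key})

def find_by_referrer(referrer: bytes):
    """Emulate 'WHERE referrer = ?' with a row-key prefix scan."""
    for _key, data in by_referrer.scan(row_prefix=referrer + b"#"):
        yield data[b"idx:key"]
```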

Competition conquers complexity

This is a story about cloud infrastructure warfare and, in a way, we all win. In the insanely competitive cloud market, prices are dropping as quickly as capabilities are expanding. As we’ve seen in the mobile industry over the past decade, incredible competition drives incredible innovation.

It’s clear the future of databases is primarily in the cloud. MongoDB is designed for cloud deployments and is incredibly popular on AWS, and Google Cloud Platform already offers hosted MongoDB. We also think that a big part of removing complexity is finding software an organisation can standardise on. No one wants to deal with half a dozen databases; they want standards that combine the best parts of the various niche data tools.

To achieve this, the big players are throwing huge money at infrastructure and services. Google, Amazon and Microsoft will continue to search for more areas of big data where they can provide value in the market. Ultimately this will lower barriers to entry for new products and services.

Before the year is out, I’d expect there will be even more vendors trying to creep up your big data stack. That’s good for all of us.

Google, OpenStack target containers as Project Magnum gets first glimpse

Otto, Collier and Parikh demoing Magnum at the OpenStack Summit in Vancouver this week

Google and OpenStack are working together to use Linux containers as a vehicle for integrating their respective cloud services and bolstering OpenStack’s appeal to hybrid cloud users.

The move follows a similar announcement earlier this year in which pure-play OpenStack vendor Mirantis and Google committed to integrating Kubernetes with the OpenStack platform.

OpenStack chief operating officer Mark Collier said the platform needs to embrace heterogeneous workloads as it moves forward, with both containers and bare metal solidly on the agenda for future iterations.

To that end, the foundation revealed Magnum, which in March became an official OpenStack project. Magnum builds on Heat to produce Nova instances on which to run application containers, and it creates native capabilities (such as support for different scheduling techniques) that enable users and service providers to offer containers-as-a-service.

“As we think about Magnum and how that can take container support to the next level, you’ll hear more about all the different types of technologies available under one common set of APIs. And that’s what users are looking for,” Collier said. “You have a lot of workloads requiring a lot of different technologies to run them at their best, and putting them all together in one platform is a very powerful thing.”

Google’s technical solutions architect Sandeep Parikh and Magnum project leader Adrian Otto (an architect at Rackspace) were on hand to demo a Kubernetes cluster deployment in both Google Compute Engine and the Rackspace public cloud using the exact same code and Keystone identity federation.

“We’ve had container support in OpenStack for some time now. Recently there’s been NovaDocker, which is for containers we treat as machines, and that’s fine if you just want a small place to put something,” Otto said.

Magnum uses the concept of a bay – where the orchestration layer goes – which Otto said can be used to drive pretty much any Linux container technology, whether it’s Docker, Kubernetes or Mesos.
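
As a rough sketch of what that looks like to an operator – the endpoint, token and attribute values below are hypothetical, though the two-step baymodel/bay flow reflects Magnum’s v1 resources – creating a Kubernetes bay over the REST API might run along these lines:

```python
import requests

# Hypothetical Magnum endpoint and token; in a real deployment both would
# come from Keystone authentication and the service catalogue.
MAGNUM = "http://magnum.example.com:9511/v1"
HEADERS = {"X-Auth-Token": "<keystone-token>", "Content-Type": "application/json"}

# Step 1: a baymodel describes *how* to build a bay -- image, flavor, and
# which container orchestration engine (Docker Swarm, Kubernetes or Mesos).
baymodel = requests.post(f"{MAGNUM}/baymodels", headers=HEADERS, json={
    "name": "k8s-baymodel",
    "coe": "kubernetes",           # the orchestration layer for the bay
    "image_id": "fedora-atomic",   # hypothetical Glance image
    "flavor_id": "m1.small",
    "keypair_id": "demo-key",
    "external_network_id": "public",
}).json()

# Step 2: a bay is the running cluster itself; Magnum drives Heat to boot
# Nova instances and wire them together under the chosen orchestrator.
requests.post(f"{MAGNUM}/bays", headers=HEADERS, json={
    "name": "k8s-bay",
    "baymodel_id": baymodel["uuid"],
    "node_count": 2,
})
```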

“This gives us the ability to offer a hybrid approach. Not everything is great for private cloud, and not everything is great for public [cloud],” Parikh said. “If I want to run a highly available deployment, I can now run my workload in multiple places and if something were to go down the workload will still stay live.”

Death of ‘On Average’ By @Schmarzo | @CloudExpo [#BigData #IoT #M2M]

People and organizations are accustomed to relying upon “on average” guidelines to manage their lives and businesses. But “on average” guidelines are severely flawed. In reality, you are either spending too much or too little on your individual customers; you are wasting money on over-served customers and leaving money on the table with under-served customers.

However, big data changes that paradigm. We should reject running our businesses on “on average” rules of thumb. We now have the detailed data and the analytic capability to understand behaviors, tendencies, propensities, characteristics, trends and patterns at the individual level – whether those individuals are humans or machines – and we can leverage these insights to make improved business decisions.
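
As a toy illustration of the difference – the figures are fabricated purely to show the mechanics – a flat retention budget derived from the average treats every customer identically, while an individual-level model allocates spend by expected value at risk:

```python
# Made-up customer data: annual value and modelled churn propensity.
customers = {
    "alice": {"annual_value": 1200, "churn_risk": 0.05},
    "bob":   {"annual_value": 300,  "churn_risk": 0.60},
    "carol": {"annual_value": 900,  "churn_risk": 0.40},
}

total_budget = 150.0

# "On average" rule of thumb: everyone gets the same retention spend.
flat = total_budget / len(customers)

# Individual-level rule: spend in proportion to expected value at risk.
value_at_risk = {name: c["annual_value"] * c["churn_risk"]
                 for name, c in customers.items()}
total_risk = sum(value_at_risk.values())

for name in customers:
    targeted = total_budget * value_at_risk[name] / total_risk
    print(f"{name}: flat ${flat:.0f} vs targeted ${targeted:.0f}")
```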


EMC-Windstream Partnering to Drive Cloud Solutions | @CloudExpo [#Cloud]

In their general session at 16th Cloud Expo, Michael Piccininni, Global Account Manager – Cloud SP at EMC Corporation, and Mike Dietze, Regional Director at Windstream Hosted Solutions, will review next generation cloud services, including the Windstream-EMC Tier Storage solutions, and discuss how to increase efficiencies, improve service delivery and enhance corporate cloud solution development.
Speaker Bios
Michael Piccininni is Global Account Manager – Cloud SP at EMC Corporation. He has been engaged in the technology industry for more than 15 years in business development and management roles. He supports several of EMC’s most strategic Cloud Service Provider relationships, including Windstream. His organization focuses on identification, solution development and joint go-to-market execution.


LendingTree Debuts Small Business Loan Marketplace

CHARLOTTE, N.C., Dec. 3, 2014 /PRNewswire/ — LendingTree, the nation’s leading online loan marketplace, today announced the official launch of its small business loan marketplace, as it continues to expand its offerings. The company’s network of small business lenders already includes traditional, alternative, peer-to-peer and specialty finance lenders, bringing forth a multitude of funding options for small and mid-sized businesses. 


eBay chief cloud engineer: ‘OpenStack needs to do more on scalability, upgradability’

eBay aims to move 100 per cent of its ebay.com service onto OpenStack

OpenStack has improved by leaps and bounds in the past four years, but it still leaves much to be desired in terms of upgradability and manageability, according to Subbu Allamaraju, eBay’s top cloud engineer.

Allamaraju, who was speaking at the OpenStack Summit in Vancouver this week, said the ecommerce giant is a big believer in open source tech when it comes to building out its own internal, dev-and-test and customer-facing services.

In 2012, when the company – a 100 per cent KVM and OVS shop – started looking at OpenStack, it decided to deploy on around 300 servers. Now it has deployed nearly 12,000 hypervisors on 300,000 cores, spanning 15 virtual private clouds in 10 availability zones.

“In 2012 we had virtually no automation; in 2014 we still needed to worry about configuration drift to keep the fleet of hypervisors in sync. In 2012, there was also no monitoring,” he said. “We built tools to move workloads between deployments because in the early years there was no clear upgrade path.”

eBay has about 20 per cent of its customer-facing website running on OpenStack, and as of the holiday season this past year processed all PayPal transactions on applications deployed on the platform. The company also hosts significant amounts of data – Allamaraju claims eBay runs one of the largest Hadoop clusters in the world at around 120 petabytes.

But he said the company still faces concerns about deploying at scale, and about upgrading, adding that in 2012 eBay had to build a toolset just to migrate its workloads off the Essex release because no clear upgrade path presented itself.

“In most datacentres, cloud is only running in part of it, but we want to go beyond that. We’re not there yet and we’re working on that,” he said, adding that the company’s goal is to go all-in on OpenStack within the next few years. “But at meetings we’re still hearing questions like ‘does Heat scale?’… these are worrying questions from the perspective of a large operator.”

He also said data from recent user surveys suggests that manageability, and in particular upgradeability – long held to be a significant barrier to OpenStack adoption – are still huge issues.

“Production deployments went up, but 89 per cent are running a code base at least six months old, 55 per cent of operators are running a year-old code base, and 18 per cent are running code bases older than 12 months,” he said. “Lots of people are coming to these summits, but the data suggests many are worried about upgrading.”

“This is an example of manageability missing in action. How do you manage large deployments? How do you manage upgradeability?”