Category Archives: Opinion

PRISM Scandal Generates Renewed Interest in Non-US Cloud Providers

Guest Post by Mateo Meier, founder of Swiss hosting provider Artmotion

Businesses are voting with their feet in light of the recent PRISM scandal. Until recently the US had been considered the leading destination for cloud services, with its vast infrastructure and innovative service offerings, but the leaks have sparked panic among many business owners and are driving demand for non-US cloud providers.

What concerns many most are the wide-ranging implications of using US-controlled cloud services such as AWS, Azure and Dropbox. As a result, businesses are now turning to Switzerland and other secure locations for their data hosting needs.

Swiss ‘private’ hosting companies are seeing huge growth because privacy in Switzerland is enshrined in law. As the country is outside of the EU, it is not bound by pan-European agreements to share data with other member states, or worse, the US. Artmotion, for example, has witnessed 45 per cent growth in revenue amid this new demand for heightened privacy.

Until now, the PRISM scandal has focused on the privacy of the individual, but the surveillance undertaken by the NSA and Britain’s own GCHQ has spurred corporate concern about the risks of using US-based cloud providers to host data. It is especially troubling for businesses handling sensitive data, such as banks or large defence and healthcare organisations with ‘secret’ research and development needs.

Before PRISM, the US was at the forefront of the cloud computing industry, and companies worldwide flocked to take advantage of the scalability of cloud hosting, as well as the potential cost savings it offered.

However, the scandal has unearthed significant risks to business data, and to that of customers. With US cloud service providers, the government can request business information under the Foreign Intelligence Surveillance Act (FISA) without the company in question ever knowing its data has been accessed.

For businesses large and small, data vulnerabilities and the threat of industrial espionage via US hosting sites present real security and privacy risks, and that is causing genuine fear. Business owners worry that, by using US-based systems, private information could end up in front of prying eyes.

The desire for data privacy has therefore seen a surge in large corporations turning to ‘Silicon’ Switzerland to take advantage of the country’s renowned privacy culture. Here they can host data without fear of it being accessed by foreign governments.


Mateo Meier, founder of Artmotion, spent the early stages of his career in the US before returning home to Switzerland to start the company. Founded in early 2000, Artmotion provides highly bespoke server solutions to an international client base.

Measurement, Control and Efficiency in the Data Center

Guest Post by Roger Keenan, Managing Director of City Lifeline

To control something, you must first be able to measure it. This is one of the most basic principles of engineering. Once there is measurement, there can be feedback. Feedback creates a virtuous loop in which the output changes to better track the changing input demand. Improving data centre efficiency is no different. If efficiency means meeting the organisation's demands for lower energy consumption, better utilisation of assets and faster response to change requests, then the very first step is to measure those things and use the measurements to provide feedback, and thereby control.

So what do we want to control? We can divide it into three areas: the data centre facility, the use of compute capacity, and the communications between the data centre and the outside world. The balance of importance among these will differ from organisation to organisation.

There are all sorts of data centres, ranging from professional colocation facilities to the server-cupboard-under-the-stairs found in some smaller enterprises. Professional data centre operators focus hard on the energy efficiency of the total facility. The most common measure of energy efficiency is PUE, defined originally by the Green Grid organisation. It is simple: the energy going into the facility divided by the energy used to power the electronic equipment. Although it is often abused (a nice example is the data centre that powered its facility lighting over PoE, Power over Ethernet, thus making the lighting part of the ‘electronic equipment’), PUE is widely understood and used worldwide. It provides visibility and focus for the process of continuous improvement, and it is easy to measure at facility level, as it only needs monitors on the mains feeds into the building and on the UPS outputs.
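To make the arithmetic concrete, here is a minimal sketch of that facility-level calculation, assuming metered kWh readings from the mains feeds and the UPS outputs (the figures and variable names are illustrative, not a real monitoring API):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: energy into the facility divided by
    the energy used to power the electronic (IT) equipment."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative monthly readings in kWh
mains_feeds = [120_000, 118_500]   # monitors on the mains feeds into the building
ups_outputs = [78_000, 76_400]     # monitors on the UPS outputs

print(f"PUE = {pue(sum(mains_feeds), sum(ups_outputs)):.2f}")  # PUE = 1.54
```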

Power efficiency can be managed at multiple levels: at the facility level, at the cabinet level and at the level of ‘useful work’. This last is difficult to define, let alone measure, and there are various working groups around the world trying to decide what ‘useful work’ means. It may be compute cycles per kW, revenue generated within the organisation per kW, or application run time per kW, and it may be different for different organisations. Whatever it is, it has to be properly defined and measured before it can be controlled.
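Purely as an illustration of the kind of metric being discussed, the sketch below computes one candidate measure, application run time per kWh; the figures are invented and, as noted above, each organisation would have to settle on its own definition:

```python
# Hypothetical figures for one month
app_runtime_hours = 4_200        # application run time delivered
facility_energy_kwh = 238_500    # total energy into the facility over the same month

# One possible 'useful work' metric: run time delivered per unit of energy
useful_work = app_runtime_hours / facility_energy_kwh
print(f"{useful_work:.4f} application-hours per kWh")  # about 0.0176
```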

DCIM (data centre infrastructure management) systems provide a way to measure the population and activity of servers and particularly of virtualised machines.  In large organisations, with potentially many thousands of servers, DCIM provides a means of physical inventory tracking and control.  More important than the question “how many servers do I have?” is “how much useful work do they do?”  Typically a large data centre will have around 10% ghost servers – servers which are powered and running but which do not do anything useful.  DCIM can justify its costs and the effort needed to set it up on those alone.
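As a rough sketch of how DCIM measurements might be used to find those ghost servers, the snippet below flags machines whose reported utilisation has stayed near zero; the record layout and thresholds are assumptions rather than any particular DCIM product's export format:

```python
from dataclasses import dataclass

@dataclass
class ServerSample:
    name: str
    avg_cpu_pct: float          # average CPU utilisation over the measurement window
    network_mb_per_day: float   # average network traffic over the same window

def find_ghost_servers(samples, cpu_threshold=2.0, net_threshold=10.0):
    """Return the names of powered-on servers that appear to do no useful work."""
    return [s.name for s in samples
            if s.avg_cpu_pct < cpu_threshold and s.network_mb_per_day < net_threshold]

inventory = [
    ServerSample("web-01", 35.0, 1200.0),
    ServerSample("legacy-07", 0.4, 2.1),    # candidate ghost server
    ServerSample("batch-03", 12.5, 340.0),
]
print(find_ghost_servers(inventory))  # ['legacy-07']
```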

Virtualisation brings its own challenges. Virtualisation has taken us away from the days when a typical server ran at 10-15% utilisation, but we are still a long way from most data centres operating efficiently with virtualisation. Often users will over-specify server capacity for an application, using more CPUs, memory and storage than they really need, just to be on the safe side and because they can. Users see the data centre as a sunk cost: it’s already there and paid for, so we might as well use it. This creates ‘VM sprawl’. The way out of this is to measure, quote and charge. If a user is charged for the machine time used, that user will think more carefully about wasting it and about piling contingency allowance upon contingency allowance ‘just in case’, which is what leads to inefficient stranded capacity. And if the user is given a real-time quote for the costs before committing to them, they will think harder about how much capacity is really needed.
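A minimal sketch of the ‘measure, quote and charge’ idea follows, using made-up internal rates; the point is simply that showing the user an estimated cost before provisioning makes over-specification visible:

```python
# Illustrative internal chargeback rates; a real cost model would come
# from the organisation's own accounting.
RATE_PER_VCPU_MONTH = 18.00
RATE_PER_GB_RAM_MONTH = 4.50
RATE_PER_GB_DISK_MONTH = 0.10

def monthly_quote(vcpus: int, ram_gb: int, disk_gb: int) -> float:
    """Estimate the monthly cost of a requested VM before it is committed."""
    return (vcpus * RATE_PER_VCPU_MONTH
            + ram_gb * RATE_PER_GB_RAM_MONTH
            + disk_gb * RATE_PER_GB_DISK_MONTH)

# An over-specified 'just in case' request versus what the application needs
print(f"8 vCPUs, 64 GB RAM, 500 GB disk: {monthly_quote(8, 64, 500):.2f}/month")
print(f"2 vCPUs,  8 GB RAM, 100 GB disk: {monthly_quote(2, 8, 100):.2f}/month")
```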

Data centres do not exist in isolation.  Every data centre is connected to other data centres and often to multiple external premises, such as retail shops or oil rigs.  Often those have little redundancy and may well not operate efficiently.  Again, to optimise efficiency and reliability of those networks, the first requirement is to be able to measure what they are doing.  That means having a separate mechanism at each remote point, connected via a different communications network back to a central point.  The mobile phone network often performs that role.
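As a sketch of that separate measurement channel, the probe below takes a reading at a remote site and reports it to a central collector over whatever independent link is available, such as a mobile data connection; the URL, payload and site name are illustrative assumptions.

```python
import json
import time
import urllib.request

COLLECTOR_URL = "https://monitoring.example.com/readings"  # hypothetical central point

def read_local_metrics() -> dict:
    """Placeholder for whatever the remote probe can measure locally
    (link status, power draw, temperature, and so on)."""
    return {"site": "retail-shop-042", "link_up": True, "timestamp": time.time()}

def report(metrics: dict) -> None:
    """Send one reading to the central point over the out-of-band network."""
    body = json.dumps(metrics).encode("utf-8")
    request = urllib.request.Request(COLLECTOR_URL, data=body,
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request, timeout=10)

if __name__ == "__main__":
    report(read_local_metrics())
```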

Measurement is the core of all control and efficiency improvement in the modern data centre.  If the organisation demands improved efficiency (and if it can define what that means) then the first step to achieving it is measurement of the present state of whatever it is we are trying to improve.  From measurement comes feedback.  From feedback comes improvement and from improvement comes control.  From control comes efficiency, which is what we are all trying to achieve.

Roger Keenan, Managing Director of City Lifeline

Roger Keenan joined City Lifeline, a leading carrier neutral colocation data centre in Central London, as managing director in 2005.  His main responsibilities are to oversee the management of all business and marketing strategies and profitability. Prior to City Lifeline, Roger was general manager at Trafficmaster plc, where he fully established Trafficmaster’s German operations and successfully managed the $30 million acquisition of Teletrac Inc in California, becoming its first post-acquisition Chief Executive.

Do You Know the Top Threats to Cloud Security?

Where computing goes, trouble follows — in the form of hackers, disgruntled employees, and plain old destructive bugs. And as computing is moving to the Cloud (it says so right there in our logo!) that’s where some of the newest threats are emerging.

The Cloud Security Alliance has identified The Notorious Nine (registration required), the top nine cloud computing threats for 2013:

1. Data breaches
2. Data loss
3. Account and traffic hijacking
4. Insecure interfaces and APIs
5. Denial of service attacks
6. Malicious insiders
7. Cloud “abuse” (using the power of the cloud to crack passwords)
8. Lack of due diligence
9. Shared technology platforms leading to shared vulnerabilities


Let’s Hope Not: Least Favorite 2013 Prediction is “Hacking-as-a-Service”

Among all the pundit predictions for the coming year in cloud computing, the one that caught my eye was this one by BusinessInsider’s Julie Bort, in an article entitled “5 Totally Odd Tech Predictions That Will Probably Come True Next Year”:

1. Bad guys start offering “hacking as a service”

Security company McAfee says that criminal hackers have begun to create invitation-only forums requiring registration fees. Next up, these forums could become some sort of black-market software-as-a-service. Pay a monthly fee and your malware is automatically updated to the latest attack. Don’t pay, and it would be a shame if something happened to your beautiful website …

HaaS? Let’s hope not.

More End-of-Year Predicting: 7 ‘Half-Baked Ideas’ on Where Cloud Will Take Us in 2013 and Beyond

As we mentioned earlier, a lot of people are publishing “lists of” this and that for the year we’re about done with, and predicting what’s to come in 2013. As a follow-up to his first such list, Joe McKendrick at Forbes now offers us “7 ‘Half-Baked Ideas’ on Where Cloud Will Take Us in 2013 and Beyond”. My favorite: “Cloud increasingly recognized as a ‘green’ enabler”. I’ve never bought the notion that depending on the massive scale of cloud data centers is anything but more energy efficient: less travel, more energy-lean devices (mobile in particular), less built environment for offices, and fewer underutilized, energy-sucking servers sitting in small offices and departments.

Read the whole list and see which you think are more than “half-baked”.

That Time of the Year: Everyone Has a List and Predictions

It’s getting to be the time of year when the “lists of” come out, recalling the best (or worst, depending) of this and that, usually followed by pundit predictions for the coming year. Cloud Computing is no different so this particular form of holiday cheer is starting to appear. We’ll try to find and point to the ones worth spending some time with.

First up: Joe McKendrick at Forbes on “7 Predictions for Cloud Computing in 2013 that Make Perfect Sense”. Notice we’re no longer shackled to the nice round number 10: he found 7, so that’s what he offered us. They range from “More hosted private clouds” to “Cloud as a defining term fades”. I think the most intriguing one is “Cloud and mobile becoming one”.

Read the post for the full list and all his reasoning and details.