Location, location, location: The changing face of data centres

(c)iStock.com/baranozdemir

One of the rare occasions on which the subject of data centres has made it into the mainstream news came fairly recently, when Facebook built a new data centre in Sweden. It was a precious moment when our industry – which, despite being responsible for so much cool stuff in the world, is hardly considered glamorous by the average Joe – was seen on the likes of Gawker and the Mail Online.

Of course, the Facebook factor is a strong reason for this public interest – but there is something inherently… cool… about building a data centre in such a remote location (not to mention the green credentials). But Facebook isn’t doing this for the PR; it sees real potential for savings in this region.

The Node Pole – a brand name given to the Luleå region in Sweden where Facebook built its data centre – had a significant presence at this year’s Data Centre World conference.

It certainly offers an interesting proposition, one that the likes of Facebook and Google have eagerly bought into.

Touted as the perfect place to house a data centre, Luleå has a number of advantages of which the region is rightly proud: bountiful supplies of cheap renewable energy from the numerous hydro-electric power stations in the area, an average temperature of -1.3 °C and, not least, a local University of Technology that can provide skilled workers.

Indeed, Facebook states that most of the 90-odd permanent staff at its Luleå data centre are from the local region.

The enthusiasm in the area is strongly evident. At a presentation by the Node Pole, we were shown a promotional video in which the mayor of Luleå spoke with real passion about how the region and technology – particularly ‘big data’ – are ideally suited.

Facebook seems happy – its first data centre in the region is currently achieving a PUE of 1.08, with the company revealing plans to build another.
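For readers unfamiliar with the metric, PUE (power usage effectiveness) is simply the ratio of the total energy a facility draws to the energy that actually reaches the IT equipment:

PUE = total facility energy / IT equipment energy

A PUE of 1.08 therefore means that for every kilowatt consumed by the servers themselves, only around 0.08 kW goes on cooling, power distribution and other overheads – a figure that free cooling in a sub-zero climate makes far easier to achieve.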

Of course, there is a reason why the idea of choosing the ideal location for a data centre – rather than settling for the nearest suitable location – is only recently being realised, and that is the proliferation of high-speed, low-latency internet connectivity. As Senator Ted Stevens might have said, big data needs big tubes.

This is all well and good for the Facebooks and Googles of the world, but what about the smaller businesses that aren’t able to invest hundreds of millions in a data centre? Good news: more colocation centres are popping up in the area, now that internet speeds have caught up with the needs of businesses. Hydro66 is one of the latest companies to see an opportunity there and boasts the “world’s first 100% hydroelectric powered colocation centre”.

London, Paris and Amsterdam have traditionally been Europe’s major colocation hubs for large enterprises because of their access to high-speed, low-latency network connectivity – but this could be about to change.

Mirantis joins Cloud Foundry to improve OpenStack PaaS integration

Mirantis has joined Cloud Foundry in a move aimed at improving integration between Cloud Foundry and OpenStack

Pure-play OpenStack vendor Mirantis is joining the Cloud Foundry Foundation in a bid to help drive integration between the two open source platforms.

OpenStack has gained strong momentum in recent years, with vendors like HP and IBM building fully fledged portfolios based on the technology; according to 451 Research, OpenStack revenue will hit $3.3bn by 2018.

And as far as open source platform-as-a-service projects go, Cloud Foundry seems to have gained the lion’s share of vendor buy-in.

“As the pure-play OpenStack company, Mirantis is focused on making OpenStack the best way to build a private cloud and enable software development,” said Alex Freedland, Mirantis co-founder and chairman.

“Part of that vision is making it as simple as possible to deploy and manage technologies higher ‘up the stack’ – like Cloud Foundry, which has become a very popular PaaS for developer productivity on top of OpenStack. We believe that OpenStack serves the market best by supporting the most popular PaaS solutions and giving enterprise customers maximum choice, rather than prescribing a specific PaaS.”

Sam Ramji, chief executive officer of Cloud Foundry, said: “Mirantis and the OpenStack community are doing important work at the infrastructure level of the stack. We’re looking forward to their contributions to optimise OpenStack and Cloud Foundry and empower developers to build their applications for the cloud – quickly and easily.”

In October last year Mirantis secured $100m in series B investment, which has no doubt put the company in a strong position to double down on industry partnerships.

Midokura doubles down on OpenStack

Midokura is joining OpenStack as a sponsor

Networking startup Midokura has announced it is joining the OpenStack Foundation as a corporate sponsor in a bid to further its agenda within the open source cloud software community.

The company, which focuses on developing network virtualisation capabilities and helped develop the initial OpenStack networking tools (Quantum, since renamed Neutron), said it wanted to join the OpenStack community in a more formal capacity so it could push for a number of initiatives near and dear to its heart.

That agenda includes the formation of a working group to create OpenStack Foundation test certification standards, as well as certification for developers.

Efforts are already underway at the Foundation to come to some consensus around what components need to be included in a distribution to be considered “core OpenStack”, but some believe those efforts need to be broadened to appeal to and include ISVs developing for the platform.

“Midokura has been involved with the OpenStack project as an originator of the OpenStack Quantum plug-in, which has since been renamed as Neutron. We have continued code contributions in every release since Bexar,” said Adam Johnson, vice president of business, Midokura. “By doubling down on Midokura’s commitment to OpenStack as a corporate sponsor, we look forward to further promoting the ideals behind the OpenStack Foundation, which align with our own.”

Midokura’s MidoNet network virtualisation technology replaces the default Open vSwitch-based networking deployed by OpenStack Neutron with its own overlay.

Lauren Sell, vice president of marketing at the OpenStack Foundation, said: “The OpenStack community wants choices when it comes to scale-out networking. Midokura has demonstrated its commitment to the OpenStack community by delivering an open-source MidoNet option that users can consume in an open, collaborative model. We’re excited about Midokura joining the Foundation as a corporate sponsor, and we look forward to working together to make the project stronger.”

Service Automation Creates New Silos By @FusionLayer | @CloudExpo [#Cloud]

Service providers have traditionally organized the management and operation of different technologies into several teams with very specific domain knowledge. These teams have been staffed with specialists looking after routing, network services, servers, virtualization, storage area networks, security and various other technology domains. Over time, these functional teams have had a tendency to develop into loosely tied silos.

The Internet of Things: Where hope tends to triumph over common sense

The Internet of Things is coming. But not anytime soon.

The excitement around the Internet of Things (IoT) continues to grow, and even more bullish predictions and lavish promises will be made about and on behalf of it in the coming months. 2015 will see the hype reach its “peak oil” moment, in the form of increasingly outlandish predictions and plenty of over-enthusiastic venture capital investments.

But the IoT will not change the world in 2015. It will take at least 10 years for the IoT to become pervasive enough to transform the way we live and work, and in the meantime it’s up to us to decode the hype and figure out how the IoT will evolve, who will benefit, and what it takes to build an IoT network.

Let’s look at the predictions that have been made for the number of connected devices. The figure of 1 trillion has been used several times by a range of incumbents and can only have been arrived at using a very, very relaxed definition of what a “connected thing” is. Of course, if you’re willing to include RFID tags in your definition this number is relatively easy to achieve, but it doesn’t do much to help us understand how the IoT will evolve. At Ovum, we’re working on the basis of a window of between 30 billion and 50 billion connected devices by 2020. The reason for the large range is that there are simply too many factors at play to be any more precise.

Another domain where enthusiasm appears to be comfortably ahead of common sense is the discussion of the volume of data that the IoT will generate. Talk of an avalanche of data is nonsense. There will be no avalanche; instead we’ll see a steadily rising tide of data that will take time to become useful. When building IoT networks, the “data question” is one of the things architects spend a lot of time thinking and worrying about. In truth, the creators of IoT networks are far more likely to be disappointed that their network is taking far longer than expected to reach the scale of deployment necessary to produce the volumes of data they had boasted about to their backers.

Even the question of who will make money out of the IoT, and where they will make it, is being influenced too much by hope and not enough by common sense. The future of the IoT does not lie in the connected home or in bracelets that count your steps and measure your heartbeat. The vast majority of IoT devices will not beautify our homes or help us with our personal training regime. Instead, they will be put to work performing very mundane tasks like monitoring the location of shipping containers, parcels and people. The “Industrial IoT”, which spans manufacturing, utilities, distribution and logistics, will make up by far the greatest share of the IoT market. These devices will largely remain unseen by us, most will be of an industrial grey colour, and only a very small number of them will produce data that is of any interest whatsoever outside a very specific and limited context.

Indeed, the “connected home” is going to be one of the biggest disappointments of the Internet of Things, as its promoters learn that the ability to change the colour of your living room lights while away on business doesn’t actually amount to a “life-changing experience”. That isn’t to say that our homes won’t be increasingly instrumented and connected – they will. But the really transformational aspects of the IoT lie beyond the home.

There are two other domains where the IoT will deliver transformation, but over a much longer timescale than enthusiasts predict. In the automotive world, cars will become increasingly connected and increasingly smart, but it will take over a decade before the majority of cars in use can boast the levels of connectivity and intelligence we are now seeing in experimental form. The other domain that will be transformed over the long term is healthcare, where the IoT will provide us with the ability to monitor and diagnose conditions remotely, and enable us to deliver increasingly sophisticated healthcare services well beyond the boundaries of the hospital or the doctor’s surgery.

But again, we are in the earliest stages of research and experimentation, and proving that some of the ideas are practical, safe and beneficial enough to merit broader roll-out will take years, not months. The Internet of Things will transform the way we understand our environment as well as the people and things that exist within it, but that transformation will barely have begun by 2020.

Gary Barnett is Chief Analyst, Software with Ovum and also serves as the CTO for a non-profit organisation that is currently deploying what it hopes will become the world’s biggest urban air quality monitoring network.

New report shows MongoDB to be leader of the NoSQL database pack

Picture credit: Garrett Heath/Flickr

A report from United Software Associates (USAIN) has found MongoDB to be top of the pile of NoSQL database providers in benchmark testing.

The research tested three leading products – Cassandra, Couchbase and MongoDB – using the Yahoo! Cloud Serving Benchmark (YCSB). USAIN wanted to assess the durability of each, working on the theory that most applications should prioritise durability over raw performance because they cannot accept data loss. The databases were put through the wringer in three configurations: throughput optimised, durability optimised, and balanced.
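To give a sense of what “durability optimised” means in practice for MongoDB, the knob being turned is the write concern: whether the server acknowledges a write as soon as it is applied in memory, or only once it has been committed to the on-disk journal. The report doesn’t publish its client code, so the snippet below is just an illustrative pymongo sketch; the connection string, database name and the “usertable” collection (YCSB’s default table name) are assumptions.

```python
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017")  # assumed local test instance
db = client["ycsb"]

# Throughput-optimised: acknowledge as soon as the primary applies the write
# in memory, without waiting for the journal.
fast = db.get_collection("usertable", write_concern=WriteConcern(w=1, j=False))

# Durability-optimised: acknowledge only after the write is committed to the
# on-disk journal, so an acknowledged write survives a server crash.
durable = db.get_collection("usertable", write_concern=WriteConcern(w=1, j=True))

fast.insert_one({"_id": "user1", "field0": "x" * 100})
durable.insert_one({"_id": "user2", "field0": "x" * 100})
```

Cassandra and Couchbase expose analogous durability settings, which is what the durability-optimised runs exercise; the trade-off is the same in each case – waiting for disk costs throughput.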

In the throughput-optimised configuration, MongoDB hit 160,719 operations per second on workload A (50% read, 50% update), ahead of Cassandra (134,839) and Couchbase (106,638). On workload B (95% read, 5% update), MongoDB again came out on top with 196,498, ahead of Couchbase (187,798) and Cassandra (144,455).

With durability optimised, however, MongoDB soared ahead on workload A, with 31,864 ops a second compared with Cassandra (6,289) and Couchbase (1,236). It was a similar story on workload B, with MongoDB (114,455) ahead of Cassandra (54,864) and Couchbase (18,201).

For the balanced configuration there was no equivalent setting for Couchbase, so it sat those tests out. Again, MongoDB performed more strongly than Cassandra on workload A (114,245 against 77,676) and workload B (183,152 against 71,643).

The overall conclusion from USAIN was that, perhaps not surprisingly, MongoDB provided greater performance in every test, in some instances by as much as 25 times. Even in Couchbase’s default, throughput-optimised setting, MongoDB outperformed it. The reason USAIN gave for this disparity was the approach the two databases take to write conflicts: MongoDB handles them in the database, while Couchbase instructs app developers to detect and handle conflicts in their own code, meaning additional round trips to retry updates.
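To make that difference concrete, here is a minimal sketch – illustrative only, not code from the report – of a server-side atomic update in MongoDB using pymongo, with the application-side read-modify-write retry loop that a client-resolved model requires shown as pseudocode in the comments (the store client in those comments is hypothetical).

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local test instance
counters = client["demo"]["counters"]

# MongoDB resolves concurrent writers on the server: $inc is applied
# atomically, so the application never needs to detect or retry conflicts.
counters.update_one({"_id": "page_views"}, {"$inc": {"count": 1}}, upsert=True)

# A database that pushes conflict handling into application code instead
# forces a compare-and-swap loop along these lines (hypothetical client API):
#
#   while True:
#       doc, version = store.get("page_views")
#       doc["count"] = doc.get("count", 0) + 1
#       if store.replace("page_views", doc, expected_version=version):
#           break  # success; on a version mismatch the replace fails and we retry
```

Every failed replace in the second model is an extra round trip to the database, which is the overhead the report points to.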

Of course, it’s horses for courses. Back in June 2014, Couchbase released its own benchmark report, this time from Thumbtack Technology, which put its database at the top of the pile ahead of MongoDB and DataStax, the commercial steward of Cassandra. It’s also worth noting that USAIN is listed on a MongoDB partner page here.

You can take a look at the full report here (email required).

Telstra to offer SoftLayer cloud access to Australian customers

Telstra and IBM are partnering to offer access to SoftLayer infrastructure

Telstra and IBM have announced a partnership that will see the Australian telco offer access to SoftLayer cloud infrastructure to customers in Australia.

Telstra said that, with the recent opening of IBM cloud data centres in Melbourne and Sydney, it will be able to expand its presence in the local cloud market by offering Australian businesses more choice in locally available cloud infrastructure services.

As part of the deal, the telco’s customers will have access to the full range of SoftLayer infrastructure services, including bare metal servers, virtual servers, storage, security services and networking.

Erez Yarkoni, who serves as both chief information officer and executive director of cloud at Telstra, said: “Telstra customers will be able to access IBM’s hourly and monthly compute services on the SoftLayer platform, a network of virtual data centres and global points-of-presence (PoPs), all of which are increasingly important as enterprises look to run their applications on the cloud.”

“Telstra customers can connect to IBM’s services via the internet or with a simple extension of their private network. By adding the Telstra Cloud Direct Connect offering, they can also access IP VPN connectivity, giving them a smooth experience between our Next IP network and their choice of global cloud platforms,” Yarkoni said.

Mark Brewer, general manager, IBM Global Technology Services Australia and New Zealand, said: “Australian businesses have quickly realised the benefits of moving to a flexible cloud model to accommodate the rapidly changing needs of business today. IBM Cloud provides Telstra customers with unmatched choice and freedom of where to run their workloads, with proven levels of security and high performance.”

Telstra already partners with Cisco on cloud infrastructure and is a flagship member of the networking giant’s Intercloud programme, but the company hailed its partnership with IBM as a key milestone in its cloud strategy – one that may help bolster its appeal to business customers in the region.

Windows 9 Users Outraged They Won’t Get Windows 10 For Free

Despite the positive reaction so far to Windows 10, Microsoft isn’t out of the gate just yet—by announcing that the free upgrade to Windows 10 will only be available to Windows 7 and 8 users, they’ve frustrated all of their users who took a risk on Windows 9. Here […]

The post Windows 9 Users Outraged They Won’t Get Windows 10 For Free appeared first on Parallels Blog.

What’s On the Other Side of Your Screen?

How do you like your workspace? Are you tired of it? What would you change about where you sit, your view and your location? Think big. Dare to dream, because Parallels Access can make your dream real – today. No, I’m not kidding. I speak from experience. During a recent business trip to Sydney, Australia, I needed to […]

The post What’s On the Other Side of Your Screen? appeared first on Parallels Blog.

Getting Real (Apps) Inside the Cloud By @ABridgwater | @CloudExpo [#Cloud]

After what feels like an interminable cycle of media frenzy, hype and hysteria, the practical elements of real-world cloud implementations are starting to become better documented.
But what is really different in the cloud?
How do software applications behave, live, interact and interconnect inside the cloud? Where do cloud architectures differ so markedly from their predecessors that we need to learn a new set of mechanics – and when do we start to refer to software programmers themselves as a new breed of “cloud programmers”?
