20 metros generate three fifths of global colocation revenues, says Synergy Research

According to the latest data from Synergy Research, five metros make up more than a quarter of the worldwide colocation market, and the top 20 account for 59% of worldwide retail and wholesale colocation revenues.

London, New York, Shanghai, Tokyo and Washington were the top five metros, with Chicago, Dallas, Frankfurt, Silicon Valley and Singapore comprising the top 10. Amsterdam, Atlanta, Beijing, Hong Kong, Los Angeles, Paris, Phoenix, Seattle, Sydney, and Toronto were in the top 20.

On the vendor side, in Q3 Equinix was the market leader by revenue in eight of the top 20 metros. Digital Realty will lead in five more once its acquisition of the DuPont Fabros operations has been reflected in a full quarter of results. Other vendors in the mix include 21Vianet, China Telecom, and CyrusOne.

“While we are seeing reasonably robust growth across all major metros and market segments, one number that jumps out is the wholesale growth rate in the Washington/Northern Virginia metro area,” said John Dinsdale, a chief analyst and research director at Synergy.

“It is by far the largest wholesale market in the world and for it to be growing at 20% is particularly noteworthy.

“The broader picture is that data centre outsourcing and cloud services continue to drive the colocation market, and the geographic distribution of the world’s corporations is focusing the colocation market on a small number of major metro areas,” Dinsdale added.

Earlier this month, Synergy Research argued that data centre mergers and acquisitions in 2017 surpassed the figures for 2015 and 2016 combined, with 48 transactions closing last year compared with 45 across the previous two years.

Dropbox files confidentially for IPO – reports

Cloud storage provider Dropbox has filed confidentially for an IPO, according to a Bloomberg report.

According to the report, Goldman Sachs and JPMorgan will lead the potential listing, with Dropbox aiming to list in the first half of this year. The company’s most recent valuation, in 2014, was $10 billion.

The move, alongside Spotify’s bid to go public at the very start of this year, marks the first of 2018’s anticipated tech IPOs. Writing in October, venture capitalist Fred Wilson said that 2018 and 2019 would be ‘bumper years’ for tech IPOs ‘assuming the markets behave’. Dropbox regularly made pundits’ lists of potential tech IPOs this year, including this MarketWatch piece from New Year’s Eve.

It was the better part of 11 years ago that Drew Houston posted his app idea to Hacker News with the tagline ‘throw away your USB drive’. Initial feedback from users was positive – apart from one user who insisted it was ‘trivial’ to build such a system in Linux – and the product officially launched in September 2008. A series A funding round of $6 million followed in November, with total funding, including credit lines, exceeding $2 billion up to series D.

Major rivals include Microsoft’s OneDrive and Box – although the two companies were at pains to say they weren’t direct competitors – while Houston has often retold the story of how Steve Jobs was interested in acquiring Dropbox, with Jobs reportedly seeing the company as a feature rather than a product.

Unlike Box, whose focus on the enterprise was clear pretty much from its inception, Dropbox took longer to make its move. The company made its most concerted push into the enterprise market in November 2015 with Dropbox Enterprise, although by this time it already had Dropbox for Business with a major user base. Earlier that year, Dropbox said its total number of users had exceeded 400 million.

In more recent times, the company hit a $1 billion annual run rate at the start of last year, becoming one of only five software as a service (SaaS) providers to do so as well as being the fastest. In comparison, it took Salesforce approximately 10 years to hit this figure; by August 2017, the original SaaS king passed the $10 billion run rate.

More than anything else, Dropbox continually refused to be rushed or drawn into IPO discussions. Houston always insisted that Dropbox would move when the time was right. As Recode put it, the joke was that the company was set to go public in the fourth quarter of that year… every year.

In the annual Forbes Cloud 100 list – first published in 2016 to showcase the hottest private cloud companies, privately owned rather than ‘private cloud’, of course – Dropbox secured second place in both 2016 and 2017. As this publication put it earlier this week when Veeam – another company on the list, and another in no rush to go public – issued its financial results, the list has always been an excellent barometer of the best private companies that could be ready to take the next step.

If all goes to plan, Dropbox will become the first Y Combinator-backed company to go public.

As cloud infrastructure becomes more complex, security struggles with it

As more organisations get deeper into their cloud initiatives, their infrastructures become more complex – yet according to new research from WinMagic, security and compliance are struggling to keep up.

The study, conducted by Viga, polled more than 1,000 IT decision makers and found that while an overwhelming 98% of respondents say they use the cloud in some capacity – with on average half of a company’s infrastructure being cloud-based – security is lacking in comparison. Only one in three respondents said their data was at least partially encrypted in the cloud, while a greater percentage (39%) admitted they did not have unbroken security audit trails across VMs in the cloud.

Despite these failings, security remains, as it always has done, the biggest concern about cloud-based workloads. 58% cited security specifically as their largest issue, followed by protecting sensitive data from unauthorised access (55%) – which amounts to pretty much the same thing – and the increased complexity of infrastructure (44%).

The report also finds that the concept of shared responsibility is – again – far from universally understood among IT decision makers. One in five said they thought sole responsibility for the compliance of data stored on cloud services rested with the vendor, while only 39% correctly identified themselves as ultimately responsible.

Each cloud provider differs, of course – although that does not quite excuse the 20% in the survey who believed they were covered by their vendor’s SLA – but to illustrate, AWS outlines it thus. The vendor, according to this document, is responsible for security ‘of’ the cloud – compute, storage, networking – while the customer is responsible for security ‘in’ the cloud, such as customer data, applications, and identity and access management.
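To make that split concrete, here is a minimal sketch – with illustrative layer names rather than an exhaustive AWS taxonomy – of who secures what under the model described above:

```python
# A minimal sketch of the shared responsibility split described above.
# The layer names are illustrative, not an exhaustive AWS taxonomy.

RESPONSIBILITY = {
    # Security "of" the cloud -- handled by the provider
    "compute": "provider",
    "storage": "provider",
    "networking": "provider",
    # Security "in" the cloud -- handled by the customer
    "customer data": "customer",
    "applications": "customer",
    "identity and access management": "customer",
}

def who_secures(layer: str) -> str:
    """Return which party is responsible for securing a given layer."""
    return RESPONSIBILITY.get(layer.lower(), "unknown -- check your provider's documentation")

if __name__ == "__main__":
    for layer in ("storage", "customer data"):
        print(f"{layer}: secured by the {who_secures(layer)}")
```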

“The simple fact is that businesses must get the controls in place to manage their data, including taking the strategic decision that anything they would not want to see in the public domain must be encrypted,” said Mark Hickman, WinMagic chief operating officer.

Why 2018 will be a year of innovation and the ‘cloud on edge’

During much of 2017, it was possible to read many articles predicting the end of cloud computing in favour of edge computing. However, there is also the view that edge computing and cloud computing are extensions of one another. In other words, the two technological models are expected to work together. Cloud computing therefore has much life in it yet.

With the increasing use of artificial intelligence, machine learning, biometric security and sensors to enable everything from connected and autonomous vehicles to facial and iris recognition in smartphones such as Apple’s 10th anniversary iPhone X, questions are also arising about whether Big Brother is taking a step too far into our private lives. Will the increasing use of body-worn video cameras, sensors and biometrics mean that our every daily movement will be watched? That’s a distinct possibility, and it will concern many people who like to guard their lives like Fort Knox.

Arguably, though, the use of biometrics on smartphones isn’t new. Some Android handsets have been using iris recognition for a while now. Yet, with the European Union’s General Data Protection Regulation (GDPR) now less than five months away at the time of writing, the issue of privacy and how to protect personal data is on everyone’s lips. However, for innovation to occur there must sometimes be a trade-off: some of today’s mobile technologies rely upon location-based services to indicate our whereabouts and determine our proximity to points of interest, while machine learning is deployed to learn our habits to make life easier.

Looking ahead

So, even Santa has been looking at whether innovation will reside in the cloud or the edge in 2018. He thinks his sleigh might need an upgrade to provide autonomous driving. Nevertheless, he needs to be careful, because Rudolph and his fellow reindeer might not like being replaced by a self-driving sleigh. Yet, to analyse the data and the many opportunities that will arise from autonomous vehicles as time marches on, he thinks that much of the data analysis should be conducted at the edge.

By conducting the analysis at the edge, it becomes possible to mitigate some of the effects of latency, and there will be occasions when connected and autonomous vehicles will need to function without any access to the internet or to cloud services. The other factor often cited – and the reason an increasing number of people argue that innovation will lie in edge computing – is that the further away your datacentre is located, the more latency and packet loss traditionally tend to increase. Beyond a certain point, real-time data analysis becomes impossible to achieve.

Foggy times

However, the myriad initiatives that have emerged over the past few years to connect devices together – edge computing, fog computing and cloud computing among them – have created much confusion. They are often hard to understand if you are looking at the IT world from the outside. You could therefore say we live in foggy times, because new terms are being bounced around that often relate to old technologies given a new badge to enable future commercialisation.

I’ve nevertheless no doubt that autonomous vehicles, personalised location-aware advertising and personalised drugs – to name but a few innovations – are going to radically change the way organisations and individuals generate and collect data, the volumes of data we collect, and how we crunch this data. Without doubt, they will also have implications for data privacy. The received wisdom, when faced with vast new amounts of data to store and crunch, is therefore to run it from the cloud. Yet that may not be the best solution, and organisations should consider all the possibilities out there in the market – some of which may not emanate from the large vendors, because smaller companies are often touted as the better innovators.

Autonomous cars

Autonomous cars, according to Hitachi, will create around 2 petabytes of data a day. Connected cars are also expected to create around 25 gigabytes of data per hour. Now consider that there are currently more than 800 million cars in the USA, China and Europe. So, if there were 1 billion cars in the near future, with about half of them fully connected and each used for an average of 3 hours per day, around 37,500,000,000 gigabytes of data would be created every day.
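As a quick sanity check on that back-of-the-envelope figure, the arithmetic works out as follows – a simple sketch using only the assumptions stated above:

```python
# Back-of-envelope check of the figure above, using the stated assumptions:
# 1 billion cars, half of them connected, 25 GB generated per hour of driving,
# and an average of 3 hours on the road per day.

cars_total = 1_000_000_000
connected_share = 0.5
gb_per_hour = 25
hours_per_day = 3

gb_per_day = cars_total * connected_share * gb_per_hour * hours_per_day
print(f"{gb_per_day:,.0f} GB per day")                     # 37,500,000,000 GB per day
print(f"~ {gb_per_day / 1e9:.1f} exabytes per day (decimal units)")
```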

If, as expected, most new cars are autonomous by the mid-2020s, that number will look insignificant. Clearly, not all of that data can instantaneously be shipped back to the cloud without some level of data verification and reduction. There must be a compromise, and that is what edge computing can offer in support of technologies such as autonomous vehicles.

Storing the ever-increasing amount of data is going to be a challenge from a physical perspective. Data size does matter, of course, and with it comes the financial question of cost per gigabyte. So, for example, while electric vehicles are being touted as the flavour of the future, power consumption is bound to increase. So too will the need to ensure that personal or device-created data doesn’t fall foul of data protection legislation.

Data acceleration

Yet, while much of the data from connected and autonomous vehicles will need to be transmitted to a cloud service for deeper analysis, back-up, storage and data-sharing with an ecosystem of partners – from vehicle manufacturers to insurers – some of the data still needs to be able to flow to and from the vehicles. In this case, to mitigate the effects of network and data latency, there may be a need for data acceleration with solutions such as PORTrockIT.

Unlike edge computing, where data is analysed close to its source, data acceleration permits the back-up, storage and analysis of data at speed and at distance, using machine learning and parallelisation to mitigate packet loss and latency. By accelerating data in this way, it becomes possible to alleviate much of the pain organisations feel when moving data over long distances. CVS Healthcare is but one organisation that has seen the benefits of taking such an approach.

The company’s issues were as follows: back-up RPO and RTO; 86ms latency over the network (>2,000 miles); 1% packet loss; a 430GB daily backup that never completed across the WAN; a 50GB incremental backup taking 12 hours to complete; operating outside its RTO SLA – an unacceptable commercial risk; an OC12 pipe (roughly 600Mbps); and excess Iron Mountain costs.

To address these challenges, CVS turned to a data acceleration solution, the installation of which took only 15 minutes. As a result, the original 50GB back-up fell from 12 hours to 45 minutes – a 94% reduction in back-up time. This enabled the organisation to complete its full daily back-up of 430GB in less than four hours, and in the face of a calamity it could perform a complete disaster recovery in less than five hours.
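For those who like to check the arithmetic, the quoted figures stack up roughly as follows – a simple sketch based on the numbers above rather than anything supplied by CVS:

```python
# Quick check of the back-up figures quoted above.

before_minutes = 12 * 60   # original 50GB incremental back-up: 12 hours
after_minutes = 45         # accelerated back-up: 45 minutes

reduction = 1 - after_minutes / before_minutes
print(f"Reduction in back-up time: {reduction:.1%}")     # ~93.8%, i.e. the quoted 94%

# Effective throughput implied by the accelerated 50GB incremental back-up
gb = 50
throughput_mbps = gb * 8 * 1000 / (after_minutes * 60)   # gigabytes -> megabits per second
print(f"Implied sustained throughput: ~{throughput_mbps:.0f} Mbps")
```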

Amongst other things, the annual cost-savings created by using data acceleration amounted to $350,000. Interestingly, CVS Healthcare is now looking to merge with Aetna, and so it will most probably need to roll this solution out across both merging entities.

Any reduction in network and data latency can lead to improved customer experiences. However, with the possibility of large congested data transfers to and from the cloud, latency and packet loss can have a considerable negative effect on data throughput. Without machine intelligence solutions, the effects of latency and packet loss can inhibit data and back-up performance.

Data value

Moving away from healthcare and autonomous vehicles and back to GDPR, the trouble is that there are too many organisations that collate, store and archive data without knowing its true value. Jim McGann, VP Marketing & Business Development at Index Engines, says most organisations find it hard to locate personal data on their systems or in paper records.

This issue makes it impossible to know whether the data can be kept, modified, permanently deleted or rectified – making it harder to comply with GDPR, and, I would argue, harder to know whether the data can legitimately be used to drive innovation. So, instead of being able to budget for innovation, organisations in this situation may find themselves spending a significant amount of money on fines rather than on developing the business.

He explains: “Much of this is very sensitive and so many companies don’t like to talk on the record about this, but we do a lot of work with legal advisory firms to enable organisations with their compliance.” Index Engines, for example, completed some work with a Fortune 500 electronics manufacturer that found that 40% of its data no longer contained any business value. So, the company decided to purge it from its datacentre.

Limited edge

Organisations are therefore going to need infrastructure that provides a limited level of data computation and data sieving at the edge – perhaps in expanded base stations – with the refined data then shipped back to or from the cloud. This may, for example, involve a hybrid cloud-edge infrastructure. Does this solve everything? Not quite! Some fundamental problems remain, such as the need to think about how to move vast amounts of data around the world – especially if it contains encrypted personal data.

More to the point, for innovation to lie anywhere, it’s going to continue to be crucial to consider how to get data to users at the right time, and to plan now how to store the data well into the future.

Veeam on target for billion dollar status after posting latest financial results

Cloud backup software provider Veeam says it is “well-poised” to become a billion dollar software company by the end of 2018 after posting $827 million (£611m) in total bookings revenue for 2017.

The Swiss-based company said it secured more $500,000-plus deals in 2017 than in the previous six years combined, taking its total number of customers to 282,000. Deals of $1 million and above grew 500% in 2017.

Customers include more than half (57%) of the Forbes Global 2000, the company added, while client wins in 2017 included The Ameritas Life Insurance Company and Industrial Scientific. Speaking of Forbes, 2017 also saw Veeam placed in the Forbes Cloud 100 list of top private cloud companies for the first time, ranked #27. The company added more than 1,000 employees last year and says 2018 will be the year it makes its people “a major priority”.

“Organisations across the globe are dealing with massive data sprawl, and the need to ensure availability of data and applications across a complex multi-cloud environment has never been greater,” said Peter McKay, Veeam president and co-CEO in a statement. “Veeam continues to grow at double-digit rates as legacy competitors experience a decline.

“Our leadership and momentum in delivering availability for any app, any data, across any cloud has us well poised to be a billion dollar software company by the end of 2018,” McKay added.

With these excellent results – allied to a $600m-plus year in 2016 – the company has been under the microscope of investors for some time. As far back as 2015, Veeam was telling the media it planned to stay private. Ratmir Timashev, co-founder and then-CEO, told CNBC that staying private would give the company “the ability to execute without pressure from external investors.” The company is also rare in the space in not being propped up by venture capital rounds, although it does have a strategic partnership in place with Insight Venture Partners, which acquired a minority stake in Veeam in 2013.

The cloud tech M&A position is an interesting one right now. This time last year, received wisdom – according to Byron Deeter of Bessemer Venture Partners – was that the IPO space was running a little dry with greater emphasis on acquisitions as a result. This changed with a healthy 2017 for cloudy IPOs, including MongoDB, Cloudera, Okta, and MuleSoft. Speaking to this publication in June after securing $46 million in series D funding, Dan Phillips, co-founder of CloudHealth Technologies, noted the market was “showing some positive signs”, outlining the company’s future plans did involve going public.

Of the inaugural Forbes Cloud 100 list in 2016, 10 companies did not make it to the 2017 edition because they had ‘graduated’, as Forbes put it, to sale or IPO. Cloudera was in the top 10 that year, alongside AppDynamics, acquired by Cisco in March. The Forbes Cloud 100 is therefore always an interesting benchmark of companies that are ripe on the vine – even if Veeam’s only plan right now is to grow into a billion dollar company.

Cost savings primary driver of digital transformation – so why are CIOs not measuring it?

A new report from the Cloud Industry Forum (CIF) and Ensono argues that while the vast majority of organisations are undertaking digital transformation initiatives – led by various members of the C-suite – KPIs are not aligning with objectives.

The study, conducted by Vanson Bourne, polled 200 IT and business decision makers in the UK, finding the usual disparity between IT and business. For instance, 22% of respondents on the business side said they believed their organisation had completed its digital transformation initiative, compared with only 10% on the IT side.

Saving costs was considered the primary driver for digital transformation, cited by 72% on the IT side and 68% on the business side – yet many of the metrics organisations use to measure the success of their initiatives do not focus on cost-saving. 53% on the business side said they used costs as a metric, but other areas such as customer satisfaction (52%), profitability (49%) and customer retention (45%) are also highly cited.

36% of all respondents said their CTO was ‘responsible’ for their company’s digital transformation, compared with 28% for the CIO and 26% for the CEO. 34% of respondents said their CIO was a ‘key driver’, compared with 32% and 30% for the CEO and CTO respectively.

“Digital transformation is fundamentally about business transformation. It is about seeing change – facilitated by technology and hybrid IT – as a revenue generator rather than a cost reduction function,” said Simon Ratcliffe, principal consultant at Ensono.

“Primarily, it needs to be seen as an opportunity for growth – growth through innovation and the delivery of the best service, product and experience to customers and through finding new and quicker routes to market,” Ratcliffe added. “The focus on cost savings is outdated and will negate transformation efforts, limiting its scope and impact.

“This could ultimately have longer-term implications for the business in the digital era.”

The blurred lines between IT and business make for an interesting case in point. Earlier this week, a report from Interoute – focusing on respondents from the IT side – found that more than half of IT leaders in the UK were struggling to secure boardroom approval for digital transformation objectives.

Cloud-based services heading towards security – and the importance of cloud disaster recovery

What is the most crucial aspect of business to focus on today? Finance? Logistics? Developers? It’s really hard to say, but the truth is that any employee can be replaced by someone (or something) else at any time – not least when IT infrastructure, and the data it is meant to protect, is being managed badly.

Nowadays, ever more companies rely heavily on IT and new solutions. Unfortunately, while such an approach does have many advantages, it’s a double-edged sword. Let me show you a simple example: if an organization’s IT infrastructure goes down and there’s practically nobody capable of fixing it, the system won’t keep running itself. Business shuts down, and you lose data, time, money and reputation. That is why it is vital to have managed IT security services in place. If you also implement a disaster recovery and business continuity strategy, that’s even better for your organization and its security.

For many businesses, ensuring that these solutions are in place continues to be not only the most critical process, but also the most challenging. Enterprises with large IT infrastructure quite often struggle to back up their data and implement complex database redundancy systems. In addition, cloud-based services such as backup and disaster recovery services have recently become more complex. This has created a new set of problems, but on the other hand it does create new opportunities for businesses to rethink traditional IT practices.

In short, the cloud has drastically changed the way companies approach business continuity and disaster recovery. At the same time, it has created an opportunity for an organization’s approach to IT as a whole to evolve and improve. Yet many IT organizations aren’t prepared to fully invest in cloud strategies. As an entrepreneur, you need to ask yourself how you can effectively integrate cloud architecture into on-premises infrastructure, and whether cloud backup and DR are better for you than legacy services.

How can the cloud play different roles in disaster recovery?

The goal of cloud disaster recovery is to provide an organization with a means of recovering data in the unfortunate event of hardware or software failure. Such a catastrophe could be more likely than you would expect in the case of an inappropriately configured and managed system. Disaster recovery requires a response that is not only agile but also fast. The cloud can play a powerful role in your disaster recovery strategy, because it is a reliable location for storing the most up-to-date copy of your company’s files.

Cloud disaster recovery provides many benefits compared to traditional architecture. Value is added by the ability of cloud-based solutions to store data somewhere remote, separated from the original information, and by recovery speeds greater than those available via traditional tape. Typically, cloud providers charge for storage in a pay per use model, based solely on the used capacity, bandwidth or the site of the server.

The cloud also allows businesses to recover all critical IT systems and data quickly, without incurring the expense of a second physical data center, which most small and medium-sized companies simply can’t afford. Moreover, implementing a disaster recovery strategy in the cloud can actually bring extra savings. It reduces the need for data center space, IT infrastructure and resources, which is why disaster recovery is often cited as one of the most important business use cases for the cloud.

It’s worth remembering that cloud-based disaster recovery isn't the perfect solution for every business. All advantages and disadvantages need to be clearly understood before implementation. Security is definitely a major concern in this respect and, as clouds can only be accessed via the Internet, bandwidth requirements also need to be adjusted.

Use best practices for cloud-based backup

Implementing backup in an as-a-service model, based on your priorities, can help strengthen your organization’s data protection strategy without increasing IT staff workload or raising the budget significantly. Using the cloud in this way means renting the capacity to store the daily backup of your data for as long as you need. Additionally, cloud backups often include the software and hardware necessary to protect an organization’s data, including applications for Exchange and SQL Server. One other thing I should mention is that cloud data backup services are mostly used for non-critical data. Traditional backup remains a better option for enterprises with large IT infrastructure and critical data that indisputably require a short recovery time, since there are physical limits to how much data can be moved over a network in a given amount of time.
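To illustrate that last point about network limits, a rough calculation shows how quickly transfer times balloon as data sets grow. The link speeds and the 80% sustained utilisation below are hypothetical examples, not figures from any particular vendor:

```python
# Rough transfer-time calculator: how long a restore (or back-up) of a given
# data set would take at various hypothetical sustained link speeds.

def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Hours needed to move data_gb over a link_mbps connection at a given utilisation."""
    megabits = data_gb * 8 * 1000            # gigabytes -> megabits
    return megabits / (link_mbps * efficiency) / 3600

for data_gb in (500, 5_000, 50_000):         # 500 GB, 5 TB, 50 TB
    for link_mbps in (100, 1_000):           # 100 Mbps and 1 Gbps links
        print(f"{data_gb:>7,} GB over {link_mbps:>5,} Mbps: "
              f"{transfer_hours(data_gb, link_mbps):7.1f} hours")
```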

The cloud ensures business continuity

Today, technology plays a critical role in every organization. Businesses need a comprehensive strategy for continuity and disaster recovery, and that need can play a big part in the decision to adopt cloud services. For today’s data-intensive enterprises, a cloud-based business continuity approach can be crucial in reducing the risk of system outage and data loss from IT disruptions, while also helping you to take your strategy to the next level. After all, you don’t want your business continuity plans to be lost in a disaster. The cloud is therefore a natural choice for enterprises that have already implemented a business continuity strategy.

The cloud gives companies data backup, server failover, and the ability to maintain an additional disaster recovery site far from the primary location. An effective backup and disaster recovery strategy ensures that data is always available, reduces storage costs, protects you against financial loss, preserves operational efficiency and increases staff productivity. You will need to choose the solution that fits your business best, taking into account your company’s specific requirements, the criticality and value of your data, the financial impacts and your recovery objectives.

This post is brought to you by Comarch.

Google Cloud launches low-cost preemptible GPUs

Google has announced the launch of GPUs attached to preemptible VMs, offering a 50% discount – but with a catch.

As with preemptible VMs more generally – first announced in 2015, with prices significantly lowered in August last year – the resources can be used for a maximum of 24 hours, and Google Compute Engine can shut them down with a 30-second warning. These instances are therefore best suited to distributed, fault-tolerant workloads – hence the substantial discount.

Users will be able to attach NVIDIA K80 and NVIDIA P100 GPUs to preemptible VMs for $0.22 and $0.73 per GPU hour respectively.
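As a rough illustration of what the discount means in practice, the sketch below treats on-demand pricing as simply double the preemptible rate implied by the 50% discount – an approximation for comparison purposes, not Google’s official price list:

```python
# Cost comparison using the preemptible per-GPU-hour prices quoted above.
# On-demand is approximated as double the preemptible rate (the stated ~50% discount).

PREEMPTIBLE_PER_GPU_HOUR = {"K80": 0.22, "P100": 0.73}

def job_cost(gpu: str, gpus: int, hours: float, preemptible: bool = True) -> float:
    """Estimated cost in dollars for a batch job on the given GPU type."""
    rate = PREEMPTIBLE_PER_GPU_HOUR[gpu]
    if not preemptible:
        rate *= 2          # approximate on-demand rate implied by the 50% discount
    return gpus * hours * rate

# Example: a 16-GPU batch job running for 10 hours (within the 24-hour preemptible limit)
for gpu in ("K80", "P100"):
    print(f"{gpu}: preemptible ${job_cost(gpu, 16, 10):.2f} "
          f"vs on-demand ~${job_cost(gpu, 16, 10, preemptible=False):.2f}")
```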

Google adds that the preemptible GPUs will be “a particularly good fit for large-scale machine learning and other computational batch workloads as customers can harness the power of GPUs to run distributed batch workloads at predictably affordable prices.”

As is always the way with these announcements, Google rolled out a happy customer – in this instance, healthcare technology provider Silicon Therapeutics. “Preemptible GPU instances from GCP give us the best combination of affordable pricing, easy access and sufficient scalability,” said CSO Woody Sherman. “Preemptible GPU instances have advantages over the other discounted cloud offerings we have explored, such as consistent pricing and transparent terms.

“This greatly improves our ability to plan large simulations, control costs and ensure we get the throughput needed to make decisions that impact our projects in a timely fashion,” added Sherman.

You can find out more in a blog post here.

SolarWinds acquires Loggly to strengthen its cloud portfolio

SolarWinds, an IT management software provider, has announced the acquisition of software as a service (SaaS) firm Loggly to deepen its cloud software engineering and analytics expertise.

Loggly, founded in 2009, offers a SaaS-based, unified log monitoring and log analytics product. The company has received six funding rounds in total, the most recent being a series D of $11.5 million (£8.5m) in June 2016. It was named in a 2015 report from Skyhigh Networks as one of the fastest growing cloud services, based on anonymised data from more than 15 million global enterprise users.

The acquisition will see Manoj Chaudhary, CTO and VP engineering at Loggly, and Vito Salvaggio, VP product, join SolarWinds as leaders in engineering and product, while other members of the development, operations, support, sales and marketing teams will also transition.

“Rapidly visualising vast amounts of data through log analytics is absolutely critical to solving many problems in today’s diverse, complex cloud-application and microservices environments,” said Christoph Pfister, executive vice president of products at SolarWinds. “Adding Loggly to our industry-leading portfolio will empower customers to accelerate their time-to-insight and solve problems faster, with our usual, disruptive affordability.

“Building on these strengths, we will continue investing in Loggly to innovate and extend its value to customers, while integrating its capabilities with our other cloud offerings to address even broader needs,” added Pfister.

SolarWinds already has a log monitoring product in place in the form of Papertrail. The company added that it will continue investing to innovate and enhance Loggly, and that the acquisition ‘advances SolarWinds’ strategy to deliver comprehensive, simple, and disruptively affordable full-stack monitoring solutions built upon a common, seamlessly integrated, SaaS-based platform’.

Financial terms of the deal were not disclosed.