
Dropbox becomes available for Windows Phone and tablets, builds on Microsoft partnership

(c)iStock.com/hocus-focus

Dropbox has announced that its app is now available on Windows Phone devices and Windows tablets, supporting Windows RT, Windows 8.1, and Windows Phone 8.0.

Windows now completes the set of mobile options for Dropbox, with the cloud storage product already available on Android phones, Kindle Fire, iPhones, iPads and BlackBerry phones.

The news comes as both companies look to solidify the partnership they established in November last year. At the time this publication questioned the decision, given Microsoft’s strong play with its own cloud storage product, OneDrive, which now offers unlimited storage to Office 365 subscribers.

Yet beneath the surface it makes more sense; Dropbox will look to gain a foothold in the enterprise market, while Microsoft looks for assurances that it can ‘play nicely’ with other vendors. The announcement could also be seen as a timely one given Box’s imminent IPO.

Picture credit: Dropbox

This isn’t the only recent announcement Dropbox has made. Earlier this week the San Francisco-based firm announced the acquisition of CloudOn, a mobile cloud collaboration tool.

“We’re taking the next step toward our vision of reimagining docs – by joining the Dropbox team,” CloudOn wrote in a blog post. “Our companies share similar values, are committed to helping people work better, and together we can make an even greater impact.”

At the time of the CloudOn acquisition, pundits questioned the Microsoft and Dropbox partnership. Forbes columnist Ben Kepes wrote: “[Office and Dropbox] is one partnership that never quite gelled in my mind – Microsoft has, after all, its own file sharing solution. With this acquisition, one wonders whether Redmond will consider its cosy relationship with Dropbox.”

Today’s announcement seems to change that. You can read the Dropbox blog post here.

IBM’s cloud revenue hits $7bn in 2014, but still plenty of work to do

(c)iStock.com/claudiodivizia

IBM has released its Q4 and full year financial figures for 2014, showing net income of $15.8bn, down 7% year on year, and revenue from continuing operations down 6%, but total cloud revenue of $7bn for 2014, up 60%.

It’s not an unexpected result: like SAP, IBM is aiming to shift its revenues towards the cloud and away from legacy on-premises software, at the expense of its overall profits.

Martin Schroeter, IBM senior vice president and chief financial officer, said in prepared remarks: “We once again had strong performance in our strategic imperatives that address the market shifts in data, cloud, and engagement. We’re continuing to shift our investments and resources to our strategic imperatives and solutions that address our clients’ most critical issues.”

For the fourth quarter, pre-tax income from continuing operations was flat compared to Q4 2013, while net income was at $5.5bn, down 11%, and revenue was at $24.1bn, down 12% overall but down only 2% adjusting for divested businesses and currency.

Looking at software specifically, security software grew at a double digit rate, with IBM bringing its analytics, big data, mobile and cloud capabilities in line with security to address the market opportunity of cyberthreats. Software as a service (SaaS) offerings were up nearly 50%.

2014 was a major year for IBM in terms of bolstering its cloud play – and with a $7bn cloud business delivered, it has so far succeeded. Back in March, the Armonk firm announced it had moved $1bn of resources and investments into cloud. In July, IBM celebrated the first year of its acquisition of SoftLayer by offering a series of features linking up the IaaS provider with Watson.

Alongside that there is Bluemix, which Ovum analyst Gary Barnett described as “the next step in the transformation of IBM’s cloud offering”, as well as a partnership with SAP towards the end of last year bringing together IBM’s cloud with SAP’s HANA database.

“Our strategic direction is clear and compelling, and we have made a lot of progress,” Schroeter said. “We have been successful in shifting to the higher value areas of enterprise IT. The strong revenue growth in our strategic imperatives confirms that, as does the overall profitability of our business.

“We expect the industry to continue to shift,” he added.

Cookie Jam teams with Couchbase to power and quickly scale Facebook’s most popular game

Picture credit: Cookie Jam/YouTube

Social Gaming Network (SGN) has partnered with NoSQL database provider Couchbase to rapidly scale and maintain uptime for its Cookie Jam game, which boasts over 5 million users worldwide.

The match-3 puzzle game was named by social network Facebook as its most popular game of 2014, with Facebook head of games partnerships EMEA Bob Slinn telling the Guardian it was “an amazing success story.” Part of that success can be put down to Couchbase, which itself has more than doubled in size over the past year.

The clear challenge, as is often the case for social and mobile hits, is how to anticipate a huge spike and scale quickly enough to avoid downtime. Regular readers of this publication will note a similar pattern with Heroku after it powered social network Ello, which was described as an “overnight success.”

As a result, SGN tested the software to its limits and found good results, according to Couchbase CEO Bob Wiederhold. “Certainly with gaming companies, we can see huge spikes,” he tells CloudTech. “It’s hard to predict when they’re going to happen, and so that’s why for social gaming companies in particular, it’s the hits business – you can have very rapid unexpected downloads and usage of product.”

If there is an unexpected jump, then SGN can add additional nodes in a matter of minutes. “That’s an industry vertical that needs to be able to very rapidly scale without any problems, and that’s one of the reasons Couchbase has been so successful in the social and mobile gaming space,” Wiederhold explains.
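As an illustration of the capacity arithmetic behind that kind of rapid scaling, here is a minimal sketch; the per-node throughput and spike figures are hypothetical, not numbers from SGN or Couchbase:

```python
import math

def nodes_needed(peak_ops_per_sec: int, ops_per_node: int, headroom: float = 0.2) -> int:
    """Estimate cluster size for a given load, keeping spare headroom."""
    required = peak_ops_per_sec * (1 + headroom)
    return math.ceil(required / ops_per_node)

# Steady state: 50,000 ops/sec on nodes rated at 20,000 ops/sec each
baseline = nodes_needed(50_000, 20_000)   # 3 nodes
# A viral spike to 400,000 ops/sec
spike = nodes_needed(400_000, 20_000)     # 24 nodes
print(f"add {spike - baseline} nodes to absorb the spike")
```

The point is that a shared-nothing cluster lets capacity be added in roughly linear increments, which is why nodes can be bolted on in minutes rather than the application tier being re-architected.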

One of the key challenges facing Couchbase is identifying exactly where the bottlenecks are. “Oftentimes it has very little to do with the database,” Wiederhold says. “When these social gaming companies are scaling for the first time, they run into previously unknown issues, so we work with them closely to diagnose problems and remove those bottlenecks so they can scale without ever going down.

“These problems are usually not catastrophic in the sense that it causes the game to go down,” he adds. “We’re very proud of the fact that we work with many of these gaming companies where they’ve gone through hyper-viral growth and they’ve never gone down even for a minute, but behind the scenes they’re looking at a lot of different things.”

Wiederhold praised SGN for getting its testing and due diligence sorted. “[Some companies] like one product or another, but they never put it through its paces, never stress test it, never test to see if we get some crazy spike of users, and those are the companies that we typically see get into trouble and call us after the fact and try and figure out how to retrofit their game to use Couchbase,” he says.

“SGN has a lot of experience with Couchbase now on multiple games, so that’s not the situation with them.”

For the future, Wiederhold stresses a continued move towards disrupting the legacy database market of Oracle, SAP, IBM et al, with expansion into Asia part of Couchbase’s plan. “We are regularly replacing Oracle in particular, because it has the biggest footprint in the market,” he says.

“That is happening, and probably 60% of the enterprise deployments for mission critical, business critical apps are replacing legacy relational technologies.”

SAP financial figures offer few surprises, expects cloud transformation by 2018

(c)iStock.com/vertigo3D

SAP has released its latest financial figures, and they hold few surprises: cloud subscriptions up, software licenses down, overall numbers ticking over, and a pledge that cloud subscription revenue will exceed software license revenue by 2018.

The company has again shifted its long term goals, with a 2017 operating profit target now of €6.3bn to €7bn, from a previous total of €7.7bn – and it’s all due to the company’s aggressive shift towards cloud.

The full year figures weren’t anything to write home about, but it’s all going in the right direction. Non-IFRS operating profit stood at €5.64bn for 2014, an uptick of 3% on the year before, while total revenue stood at €17.58bn, an increase of 4%.

The meatiest figures however were in the software and support sections. Cloud subscriptions and support went up 45% year on year to €1.1bn, while traditional software numbers were down 3%, to €4.4bn.


“We had exceptional growth in our cloud business and have significantly lifted the total of cloud backlog and non-IFRS deferred cloud revenue to more than €3 billion,” commented Luka Mucic, SAP chief financial officer. “This is committed business that will drive strong cloud growth in the future.

“We expect cloud subscriptions to exceed software license revenue in 2018,” he added. “At that time SAP expects to reach a scale in its cloud business that will clear the way for accelerated operating profit expansion.”

Back in October this publication mused that SAP was doing a little better financially than its two main competitors in the race to the cloud, Oracle and IBM – not to mention the likes of Salesforce breathing down its neck. SAP’s recent acquisitions, including software firm Concur Technologies, as well as its ambition to move towards an agile startup mentality, show this.

The German tech giant gained almost 8,000 employees over the course of the year, doubling down on its commitment to increasing its workforce despite fears over job cuts last year. Yet the pushing back of its operating profit targets shows there’s still a lot of work to do.

You can read the full financial statement here.

Disaster recovery experts dig down into Azure cloud outages over past 12 months

(c)iStock.com/JasonDoiy

The majority of Microsoft’s service errors in the first quarter of 2014 were advisory, while there were significantly more service interruptions in the following three quarters, according to analysis carried out by CloudEndure.

The figures, taken from Azure’s Service Health Dashboard across last year, saw three full service interruptions in Q1, a whopping 28 in Q2, 16 in Q3 and zero in the final quarter. Q1 also produced the highest number of errors (259), yet the lowest number of partial service interruptions (88), compared to 134, 129 and 127 in the other three quarters.
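Totting up the interruption figures quoted above, a quick sketch using only the numbers from CloudEndure’s dashboard analysis:

```python
# Azure Service Health interruptions in 2014, as quoted from CloudEndure's analysis
full_interruptions    = {"Q1": 3,  "Q2": 28,  "Q3": 16,  "Q4": 0}
partial_interruptions = {"Q1": 88, "Q2": 134, "Q3": 129, "Q4": 127}

total_full = sum(full_interruptions.values())        # 47 across the year
total_partial = sum(partial_interruptions.values())  # 478 across the year
worst_quarter = max(full_interruptions, key=full_interruptions.get)  # "Q2"
print(total_full, total_partial, worst_quarter)
```

Framed that way, Q2 stands out starkly: more than half of the year’s full service interruptions fell in a single quarter.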

The analysis came about after Azure suffered two debilitating outages last year; one in August, and one in November, which was caused by storage blob front ends going into an infinite loop – a process which went undetected during testing.

Picture credit: CloudEndure

John Dinsdale, chief analyst at Synergy Research, told CloudTech the outage was “really not good”, and Microsoft’s response was “an awful lot less than stellar.” This came after research which found Microsoft had taken the clear second place – behind AWS, naturally – in the global cloud infrastructure market.

Not surprisingly, Americas West had the most outages overall (114), followed by Americas East (98) and Europe West (91), although Europe West and Americas North Central had the most full service interruptions (5). By service, compute had by far the most errors (135), followed by SQL databases (124), virtual machines (64), websites (61) and storage (55).

Despite these figures, CloudEndure is quick to point out that statistics should be taken with a pinch of salt. “Planning the location of your app based on the historical number of errors and performance issues is probably not the best approach”, a blog post reads. “While cloud provider issues are important, you should keep in mind that the top reason for application downtime remains human error.”

It’s certainly the reason Joyent’s servers unexpectedly went down back in May – and it’s good advice to follow here.

Mature Asia Pacific public cloud services to climb to $7.4bn in 2015, Gartner claims

(c)iStock.com/BrianAJackson

Gartner has been gazing into its crystal ball again – and this time the analyst house claims that public cloud services in the mature Asia Pacific (APAC) market will reach $7.4 billion (£4.86bn) this year, and hit $11.5bn (£7.56bn) by 2018.

The analysts assess the ‘mature’ APAC as comprising Australia, Japan, New Zealand, Singapore and South Korea. Cloud management, storage, and software as a service (SaaS) will be among the strongest growers in the interim period, as more enterprise and government users jump onto cloud services.

Ed Anderson, research vice president at Gartner, argues “consistent and stable growth” will occur through 2018, because of the five countries’ relatively advanced tech profiles and solid telecommunications infrastructure.

Gartner sees the majority (52.5%) of the APAC public cloud services market coming through cloud advertising. SaaS will be at 21.5%, infrastructure as a service (IaaS) at just under 10%, business process as a service (BPaaS) at 9.2%, cloud management and security services at 4%, and platform as a service (PaaS) at 3%.

CloudTech has covered the Asia Pacific market in detail in the past, with eyes cast over the Indian and Chinese markets respectively. Neither country makes Gartner’s list of mature APAC cloud players, and with good reason; both countries are at a crossroads in their technological development.

The latest report from the Asia Cloud Computing Association (ACCA) saw Japan as clear number one, followed by New Zealand, Australia, Singapore, Hong Kong, and South Korea. These top six countries were defined as ‘ever-ready leaders’. China and India, in comparison, ranked 11th and 13th out of 14 nations respectively, and were described as ‘steady developing’ countries.

“We are anticipating a seismic data revolution once information access in Asia becomes universally cheap, powerful, and available,” the report notes. “And we believe the knowledge economy and cloud computing is the next great leveller for the region, poised to help accelerate the momentum around trade and economic integration in Asia.”

The full Gartner report (subscription required) can be found here.

Google announces Cloud Monitoring in beta for Cloud Platform and AWS customers

(c)iStock.com/zmeel

Google has announced its Cloud Monitoring service, which tracks usage and uptime for Cloud Platform and Amazon Web Services customers, is now in beta availability.

The announcement comes just days after the search giant released Cloud Trace, which allows developers to create reports on their app’s performance issues by finding traces of slow requests.

First announced at Google I/O last year, Cloud Monitoring enables users to get a wide variety of information, from metrics and dashboards on Platform usage, to functionality tests for uptime, latency and error rates for performance, and receiving alerts when security incidents occur.

Developers can access figures for Google App Engine, Compute Engine, Cloud Pub/Sub and Cloud SQL, while the service also features native integration with MySQL, Nginx, Apache, MongoDB, and RabbitMQ, among others.

The service also puts together a series of ‘overall health’ dashboards, which can incorporate application or business statistics using custom metrics, to create aggregate views of environments and systems.

Picture credit: Google

Back in May, Google announced the acquisition of intelligent monitoring service Stackdriver. The latest update to Cloud Monitoring shows the integration of Stackdriver’s technology into the Cloud Platform product. Google Cloud Platform product manager Dan Belcher confirmed in a blog post the plan to continue integrating the rest of Stackdriver into Cloud Platform, with the aim of “providing a unified monitoring solution for Google Cloud Platform, Amazon Web Services and hybrid customers.”

If there are any problems with Google Cloud Platform’s performance, it’ll be a rare event, according to figures published earlier this week by benchmarking provider CloudHarmony. Google Cloud DNS had a 100% record in 2014, while Compute Engine suffered 78 outages for an overall downtime of 3.35 hours, Cloud Storage had eight outages, and Google App Engine went down for just over six minutes last year.

Google’s push towards creating a better user experience for its cloud products continues to gain pace; alongside this announcement, and Cloud Trace, the company has been pushing regular price cuts for its infrastructure, as well as offering startups $100,000 in Cloud Platform credits to help get their companies off the ground.

You can find out more about Google Cloud Monitoring here.

Analyst report shows where the value lies for buying a cloud computing solution

(c)iStock.com/RomoloTavani

A report from infrastructure-as-a-service performance monitoring analyst Cloud Spectator has found that Amazon EC2 offers “significant” cost advantages over a long-term investment.

The report finds different wins for different providers, with SoftLayer remaining the least expensive provider for larger Windows offerings and Microsoft Azure the least expensive block storage offering, while CenturyLink and Rackspace block storage was the most expensive yet could “provide more value to the right type of users.”

In all, 10 IaaS providers were vetted – AWS, CenturyLink, DigitalOcean, Google, HP, Joyent, Microsoft Azure, Rackspace, SoftLayer and Verizon – with Cloud Spectator analysing a wide variety of metrics.

The CSPs were ranked over sub-hour, hourly, monthly, yearly, and three-year contracts, with Amazon, Joyent and Rackspace having the most wide-ranging payment options, yet only three providers – Google, Microsoft and Rackspace – provide sub-hour billing.
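Billing granularity matters most for short-lived workloads. A sketch of why, using hypothetical rates rather than figures from the report:

```python
import math

def billed_cost(runtime_minutes: int, rate_per_hour: float, increment_minutes: int) -> float:
    """Cost when usage is rounded up to the provider's billing increment."""
    increments = math.ceil(runtime_minutes / increment_minutes)
    return increments * increment_minutes * rate_per_hour / 60

# A 10-minute batch job at a hypothetical $0.60/hour rate
hourly_billed = billed_cost(10, 0.60, 60)      # charged a full hour: $0.60
per_minute_billed = billed_cost(10, 0.60, 1)   # charged 10 minutes:  $0.10
```

Under hourly billing the same ten-minute job costs six times as much, and over many short-lived instances that gap compounds – which is why billing increments are worth weighing alongside the headline hourly rate.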

IBM and Joyent were the only providers assessed who offer a 100% SLA. CloudTech readers will be aware of how that stance has backfired on companies in the past, although SoftLayer came through 2014 with a clean bill of health according to recent CloudHarmony metrics. Joyent, by comparison, suffered 19 outages at an availability of 99.9945%.

The research found a huge price spike for HP’s hourly 2x-large Linux virtual machines, with Amazon consistently having the lowest price for small to medium VMs. For Windows it was a similar story: Amazon holding the lowest prices for small to medium VMs, with SoftLayer performing better for large and 2x-large VMs.

These figures may not be too surprising; Amazon prides itself on price cuts, while SoftLayer introduced its move towards price drops last year against its traditional policy. EMEA head Jonathan Wisler told this publication at the time that “we don’t want to have a race to the bottom, but we need to make sure we’re competitive in the marketplace.”

The report also delves into yearly and three-year price comparisons, as well as block storage and data transfer.

“The cloud infrastructure industry is changing constantly,” the report notes. “Although other benefits are associated with cloud infrastructure, the low upfront costs and inexpensive payments for servers attract a large segment of customers and are always mentioned as major incentives for cloud adoption.”

This is an important point to consider, particularly given recent research from KPMG which argued cost shouldn’t be the primary driver for choosing a cloud solution, yet the Cloud Spectator report adds: “It is important to understand provider pricing in this industry to make informed decisions on IT spending optimisation.”

You can take a look at the full report (registration required) here.

AWS, Google, SoftLayer score highly in ranking of most reliable cloud providers

(c)iStock.com/sanfel

Cloud benchmarking provider CloudHarmony has updated its metrics, and found AWS, Google and SoftLayer to be among the most reliable public cloud providers in 2014.

The figures, which can be found on its service status page here, saw Amazon’s S3 register 23 outages across nine regions, resulting in 2.69 hours of downtime across the year, while Amazon EC2 clocked up 12 outages, resulting in just over two hours of downtime.

Google’s Cloud DNS had a 100% record, while Cloud Storage suffered eight outages at an availability of 99.9996%; App Engine suffered just one outage, and Compute Engine had 66 outages for 99.982% availability.

The most downtime of all the cloud providers analysed fell to Aruba Cloud, whose Cloud Storage facility clocked up a whopping 407 outages across five regions, and a total of 67.85 hours down. Microsoft’s Azure Virtual Machines suffered a total of 103 outages and 42.94 hours of downtime, with a 365-day availability of 99.937%, while ElasticHosts and Internap AgileCLOUD also clocked up more than 30 hours down over the course of the year.
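Note that cumulative downtime hours (summed across regions and services) and an availability percentage measure different things, so the figures above don’t convert one-to-one. For a single instance, though, the arithmetic is simple; a quick sketch:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def annual_downtime_hours(availability_pct: float) -> float:
    """Downtime implied by an availability percentage over one year."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

def availability_pct(downtime_hours: float) -> float:
    """Availability implied by a given number of downtime hours in one year."""
    return (1 - downtime_hours / HOURS_PER_YEAR) * 100

# 99.937% availability implies roughly 5.5 hours of downtime for one instance,
# far less than 42.94 hours, which is why summed multi-region figures
# and per-service percentages shouldn't be compared directly.
print(round(annual_downtime_hours(99.937), 2))
print(round(availability_pct(42.94), 2))
```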

IBM’s SoftLayer had a clean bill of health, scoring 100% across Object Storage, CDN and DNS, while Rackspace nearly achieved the same, suffering 26 outages on Cloud Servers.

Whether it’s an unexpected outage, such as Microsoft’s – or planned downtime, as Verizon attempted this weekend – there is no getting away from the cold hard facts of an SLA. Yet according to a senior executive at cloud services provider Claranet, SLAs don’t give quite enough insight.

“The vast majority of SLAs don’t really get to the heart of what’s important to customers – or, at the very least, fall short of guaranteeing what customers really need and expect, beyond uptime and availability,” explained Paul Marland, director of account management.

“The industry tends to measure against technical metrics, but it’s important to remember that it’s the end user’s actual experience that counts.”

As CloudHarmony CEO Jason Read acknowledges, not every outage could be recorded. Yet while this gives a good idea of the state of play, choosing a cloud solution requires far more due diligence, from price to how it will fit into your business.

Verizon Cloud goes out in planned maintenance, aims for seamless updates going forward

(c)iStock.com/LindaJoHeilman

Over the weekend, Verizon’s cloud service, Verizon Cloud, was offline as it looked to add ‘seamless upgrade functionality as well as other customer-facing updates.’

The maintenance period was put in place to improve the service and to ensure further updates go ahead without any hitches for customers. The telco giant warned the fixes could take up to 48 hours, but the work was completed after 40, with Verizon taking the bizarre step of issuing a press release to announce it had been done.

“The seamless upgrade functionality allows Verizon to conduct major system upgrades without interrupting service or limiting infrastructure capacity,” the release states. “Traditionally, updates have been made via rolling maintenance and other methods.

“Many cloud vendors require customers to set up virtual machines in multiple zones or upgrade domains, which can increase the cost and complexity. Additionally, those customers must reboot their virtual machines after maintenance has occurred.

“Verizon eliminates these requirements, since virtually all maintenance and upgrades to Verizon Cloud will now happen in the background with no impact to customers,” it adds.

Verizon customer Kenn White tweeted his way through the outage, some of his posts carrying a rather scathing undertone, until all was finally resolved.

Verizon, like various other telecoms providers, has made a concerted push towards cloud services in recent years, having bought Terremark in 2011 for $1.4bn. The company launched its software store Verizon Cloud Marketplace back in November.

While one particular customer was less than happy, two days of pain in a planned outage seems like a far better idea than aiming for 100% uptime before the inevitable happens, as happened with Mimecast. So long as Verizon keeps up its end of the bargain, that is.