Cloud downtime cost £45m over five years, IWGCR claims

It's a potentially alarming finding from the International Working Group on Cloud Computing Resiliency (IWGCR): a combined 568 hours of downtime at 13 major cloud providers has cost £45.8 million ($71.7 million) in lost business since 2007.

In the report, entitled “Downtime statistics of current cloud solutions”, IWGCR analysed the 13 providers, including Amazon, Microsoft and PayPal, and worked out that on average, cloud services were unavailable for 7.5 hours per year.

Turning it around, that means cloud services were available 99.917% of the time (roughly 7.5 hours of downtime set against the 8,760 hours in a year) – a far cry from the oft-feted figure of 99.999% availability, which allows for only about five minutes of downtime per year.

There has been plenty of scepticism about the supposed ‘five nines’ over the years, and this report may go some way towards providing concrete evidence that, for now, 99.999% availability is a myth.

To put it into context, nPhaseOne notes that with ‘five nines …

Taking In-Memory NoSQL to the Next Level

Guest Post by Ofer Bengal, Co-Founder & CEO, Garantia Data


Ofer Bengal

Ofer Bengal has founded and led companies in data communications, telecommunications, Internet, homeland security and medical devices.

Today Garantia Data is launching the first in-memory NoSQL cloud that promises to change the way people use Memcached and Redis. I think this is a great opportunity to examine the state of these RAM-based data stores and to suggest a new, highly-efficient way of operating them in the cloud.

Challenges of Cloud Computing

Memcached and Redis are increasingly adopted by today’s web applications to scale out the data tier and significantly improve application performance (in many cases a 10x improvement over a standard RDBMS implementation). However, cloud computing has created new challenges in the way scaling and application availability should be handled, and using Memcached and Redis in their simple form may not be enough to cope with them.

Memcached

It’s no secret that Memcached does wonders for websites that need to quickly serve up dynamic content to a rapidly growing number of users. Facebook, Twitter, Amazon and YouTube all rely heavily on Memcached to help them scale out; Facebook handles millions of queries per second with Memcached.
But Memcached is not just for giants. Any website concerned with response time and user-base growth should consider Memcached for boosting its database performance. That’s why over 70% of all web companies, the majority of which are hosted on public and private clouds, currently use Memcached.

Local Memcached is the simplest and fastest caching method because you cache the data in the same memory as the application code. Need to render a drop-down list faster? Read the list from the database once, and cache it in a local HashMap. Need to avoid the performance-sapping disk thrashing of an SQL call to repeatedly render a user’s personalized web page? Cache the user profile and the rendered page fragments in the user session.
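
To make the idea concrete, here is a minimal sketch of local caching in Python; the names are illustrative, and load_profile_from_db is a hypothetical stand-in for the slow SQL call:

```python
import time

# In-process cache: lives in the same memory as the application code.
_cache = {}

def load_profile_from_db(user_id):
    # Hypothetical stand-in for a slow SQL query.
    return {"id": user_id, "name": "..."}

def get_user_profile(user_id, ttl=300):
    """Return a cached profile if it is still fresh, otherwise hit the database."""
    entry = _cache.get(user_id)
    if entry is not None and time.time() - entry[1] < ttl:
        return entry[0]                       # cache hit: no database round-trip
    profile = load_profile_from_db(user_id)   # cache miss: slow path
    _cache[user_id] = (profile, time.time())
    return profile
```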

Although local caching is fine for web applications that run on one or two application servers, it simply isn’t good enough when the data is too big to fit in the application server’s memory space, or when the cached data is updated and shared by users across multiple application servers and user requests. In such cases, user sessions are not bound to a particular application server, and local caching may end up providing a low hit ratio and poor application performance.

Distributed Memcached improves on local caching by enabling multiple application servers to share the same cache cluster (a minimal client sketch follows the list below). Although the Memcached client and server code is rather simple to deploy and use, Distributed Memcached suffers from several inherent deficiencies:

  • Lack of high availability – When a Memcached server goes down, the application’s performance suffers because all data queries are now addressed to the RDBMS, which provides much slower response times. When the problem is fixed, it can take anywhere from a few hours to several days until the recovered server becomes “hot” with updated objects and fully effective again. In more severe cases, where session data is stored in Memcached without persistent storage, losing a Memcached server may force users to be logged out or flush their shopping carts (on ecommerce sites).
  • Failure hassle – The operator needs to point all clients at the replacement server and wait for it to “warm up”. Operators sometimes add temporary slave servers to their RDBMS to offload the master server until Memcached recovers.
  • Scaling hassle – When the application dataset grows beyond the current Memcached resource capacity, the operator needs to scale out by adding more servers to the Memcached tier. However, it is not always clear when exactly this point has been reached, and many operators scale out in a rush only after noticing degradation in their application’s performance.
  • Scaling impact on performance – Scaling Memcached out (or in) typically causes partial or entire loss of the cached dataset, resulting, again, in degraded application performance.
  • Manpower – Operating Memcached efficiently requires manpower to monitor, optimize and scale when required. In many web companies these tasks fall to expensive developers or devops engineers.
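
For context, here is a minimal sketch of a distributed Memcached client in Python using the pymemcache library; the hostnames are hypothetical placeholders. Each key is hashed to exactly one server in the pool, which is precisely why losing a server sends that key’s queries back to the RDBMS:

```python
from pymemcache.client.hash import HashClient

# Keys are hashed across the pool; each key lives on exactly one server.
# (Hostnames are hypothetical placeholders.)
client = HashClient([
    ("cache1.example.com", 11211),
    ("cache2.example.com", 11211),
])

client.set("user:42:profile", b'{"name": "Jim"}', expire=300)
profile = client.get("user:42:profile")  # returns None on a cache miss
```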

Amazon has tried to simplify the use of Memcached by offering ElastiCache, a cloud-based value-added service where the user does not have to install Memcached servers but instead rents VMs (instances) pre-loaded with Memcached (at a cost higher than plain instances). However, ElastiCache does not address any of the Memcached deficiencies mentioned above. Furthermore, ElastiCache scales out by adding a complete EC2 instance to the user’s cluster, which is a waste of money for users who only require another GB or two of Memcached. With this model, ElastiCache misses delivering on the true promise of cloud computing – “consume and pay only for what you really need” (as with electricity, water and gas).

Redis

Redis, an open source, in-memory, key-value NoSQL database, began ramping up in 2009 and is now used by Instagram, Pinterest, Digg, GitHub, Flickr, Craigslist and many others. It has an active open source community, sponsored by VMware.

Redis can be used as an enhanced caching system alongside RDBMS, or as a standalone database.
Redis provides a completely new set of data types built specifically for serving modern web applications in an ultra-fast, more efficient way (a short sketch follows the list below). It solves some of Memcached’s deficiencies, especially when it comes to high availability, by providing replication capabilities and persistent storage. However, it still suffers from the following drawbacks:

  • Failure hassle – There is no auto-failover mechanism; when a server goes down, the operator still needs to activate a replica or rebuild one from persistent storage.
  • Scalability – Redis is still limited to a single master server, and although cluster management capability is being developed, it probably won’t be simple to implement and manage and will not support all Redis commands, making it incompatible with existing deployments.
  • Operations – Building a robust Redis system requires strong domain expertise in the nuances of Redis replication and data persistence, and building a Redis cluster will be rather complex.
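
To make the data-type point concrete, here is a minimal redis-py sketch (assuming a Redis server on the local default port; keys and values are illustrative). A sorted set keeps a leaderboard ordered by score on the server side, something Memcached’s flat key/value model cannot express, while a plain key with a TTL covers the classic expiring-cache case:

```python
import redis

# Assumes a Redis server listening on the local default port.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Sorted set: a leaderboard kept ordered by score on the server side.
r.zadd("leaderboard", {"alice": 4200, "bob": 3100, "carol": 5150})
top_three = r.zrevrange("leaderboard", 0, 2, withscores=True)

# Plain key with a TTL doubles as a Memcached-style cache entry.
r.setex("user:42:profile", 300, '{"name": "Jim"}')
```
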
The Evolution of Caching

A new cloud service that will change the way people use Memcached and Redis

Imagine connecting to an infinite pool of RAM and drawing as much Memcached or Redis memory as you need at any given time, without ever worrying about scalability, high availability, performance, data security or operational issues; and all this with the click of a button (ok, a few buttons). Now imagine paying only for the GBs you use rather than for full VMs, at a rate similar to what you pay your cloud vendor for plain instances. Welcome to the Garantia Data In-Memory NoSQL Cloud!

By “In-Memory NoSQL Cloud” I mean an online, cloud-based, in-memory NoSQL data-store service that takes the burden of operating, monitoring, handling failures and scaling Memcached or Redis off the application operator’s shoulders. Here are my top six favorite features of such a service, now offered by Garantia Data:

  • Simplicity – Operators no longer need to configure and maintain nodes and clusters. The standard Memcached/Redis clients are simply pointed at the service’s DNS endpoint (see the sketch just after this list), and from that moment on all operational issues are taken care of automatically by the service.
  • Infinite scalability – The service provides an infinite pool of memory with true auto-scaling (out or in) to the precise size of the user’s dataset. Operators don’t need to monitor eviction rates or performance degradation in order to trigger scale-out; the system constantly monitors both and adjusts the user’s memory size to meet performance thresholds.
  • High availability – Built-in automatic failover makes sure data is preserved under all circumstances. Local persistent storage of the user’s entire dataset is provided by default, while in-memory replication can be configured with a mouse click. In addition, there is no data loss whatsoever when scaling out or in.
  • Improved application performance – Response time is optimized through constant monitoring and scaling of the user’s memory. Several techniques that efficiently evict unused and expired objects are employed to significantly improve the hit ratio.
  • Data security – For operators concerned about hosting their dataset in a shared-service environment, Garantia Data makes full encryption of the entire dataset a key element of its service.
  • Cost savings – Garantia Data frees developers from handling data integrity, scaling, high availability and Memcached/Redis version-compliance issues. Additional savings come from paying only for the GBs consumed rather than for complete VMs (instances). The service follows the true spirit of cloud computing, enabling memory consumption to be paid for much like electricity, water or gas: you “only pay for what you really consume”.
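
As an illustration of the “Simplicity” point above, switching a standard client to such a service would look something like this; the endpoint name is a hypothetical placeholder, not an actual Garantia Data address:

```python
import redis

# The only change from a self-hosted setup is the hostname the standard
# client points at (hypothetical service endpoint shown).
r = redis.Redis(host="redis-12345.garantiadata.example.com", port=6379)
r.set("greeting", "hello")
```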

We recently concluded a closed beta trial with 20 participating companies in which all these features were extensively tested and verified – and they worked! So this is no longer just a concept; it’s real, and it’s going to change the way people use Memcached and Redis. Am I excited today? Absolutely!


Does Microsoft’s Surface tablet launch offer anything new?

Microsoft’s tablet finally surfaced yesterday. I was discussing it with my son, and he said the whole thing reminded him of the movie The Sixth Sense, with Microsoft playing the Bruce Willis part: walking around, wondering what’s going on, trying to solve people’s problems, and not realizing they’re dead. People’s computing problems are directly attributable to a company that isn’t alive and aware of how the world now works.

So I looked at the press event and the product video, and I’m trying to figure out why people are excited. Ok, yes, there’s another tablet offering out there, and it’s from a company that in theory can go toe-to-toe with Apple in this space, or sell at a loss for a decade if it can’t (see Xbox profits and market share).

And it’s a Windows offering for those who’ve been wanting …

Amazon S3 Based, Full Control Online Storage Solution

The most popular online storage use case nowadays combines a web interface, desktop clients for PC and Mac and mobile clients with team-folder support and peer-to-peer file sharing. Many online storage services do this. The market niche lies in providing this functionality while giving customers full control of the whole infrastructure stack, from bottom to top.


Can the Cloud Move to the Mainstream in Health Care?

Imagine Jim, who has a CT scan and is then diagnosed with a stroke at the hospital. Although the on-call neurologist is at another hospital, outside the hospital system, he accesses Jim’s radiology images on his mobile device in real time. Upon review, the doctor learns that Jim had an aneurysm months before. He’s able to call a neurosurgery colleague who is making rounds at the same hospital, and they review Jim’s images together on a laptop and tablet using cloud-based technology.
The cloud allows these two doctors to begin assessing Jim before ever examining him, improving efficiency and possibly saving his life. When Dr. Smith meets Jim, he examines him and then discusses a treatment plan. He uses his tablet to explain the radiology images visually, making the abstract 2D CT images more real with 3D imagery that shows Jim what has happened to him.


The private cloud strikes back

Having read JP Rangaswami’s argument against private clouds (and his obvious promotion of his own version of cloud), I have only this to say: he’s looking for oranges in an apple tree. His entire premise is that enterprises are wholly concerned with cost and sharing risk, which couldn’t be further from the truth.

Yes, cost is indeed a factor, as is sharing risk, but a bigger and more important factor facing the enterprise today is agility and flexibility – something the monolithic, leviathan-like enterprise IT systems of today definitely do not provide.

He then jumps from cost to the social enterprise as if there were a causal relationship when, in fact, they are two separate discussions. I don’t doubt that if you are a consumer-facing (not just customer-facing) organization, it’s best to get on that social enterprise bandwagon, but if your main …

Public or Private Cloud: Which Is Best for Application Marketplaces?

Cloud computing offers almost limitless possibilities for innovation and growth. It also provides fodder for endless debates about the pros and cons of hosted and on-premise software deployments.
In one corner, you have the advocates of hosting software in the public cloud who argue that it offers better flexibility and scalability. In the other corner, you have proponents of private, on-premise deployments who counter that this approach is more secure and offers greater control.
The growing popularity of cloud service marketplaces – application stores that offer a range of cloud-based software and services – is adding a new dimension to these familiar arguments. Which type of deployment, public or private cloud, is best for marketplaces like these?
It’s an important question, since deciding where to host an application store can impact the entire ecosystem for cloud-based solutions, from the big-name brands that operate marketplaces, to developers who create applications to sell in them, to the end-user customers who come to rely on these solutions.


Cloud Encryption Best Practices

Cloud encryption keeps coming up as one of the hottest topics for enterprises migrating to the cloud. IT departments are constantly pushed to cut costs and use compute resources more efficiently, so cloud computing is the natural evolution; yet at the same time, enterprises cannot compromise on cloud security. Cloud encryption should be high on the list, as it segregates and “hides” your data from other virtual entities hosted on the same physical cloud infrastructure.
What’s my cloud provider’s encryption approach?
Cloud data security and cloud encryption come in many shapes and forms. While some cloud providers offer the encryption service themselves, some provide a “shopping list” of cloud encryption companies, and others provide both. But which one is best for your needs?
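
One common approach, shown here as a minimal sketch using Python’s cryptography library (not a recommendation of any particular provider’s method), is client-side encryption: data is encrypted before it ever reaches the provider, so co-tenants on the same physical infrastructure only see ciphertext. Key management, the hard part, is elided here:

```python
from cryptography.fernet import Fernet

# Generate and keep the key yourself (ideally in your own key-management
# system) rather than storing it alongside the data in the cloud.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"sensitive customer record")  # what the cloud stores
plaintext = f.decrypt(ciphertext)                     # only possible with the key
assert plaintext == b"sensitive customer record"
```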


The Value of an End-to-End Cloud Computing Operating System

“The productization of Big Data will be an interesting trend to track, and I think we’ll start to see some significant investment in this area over the coming months,” noted Scott Sneddon, Vyatta’s Director of Cloud Solutions, in this exclusive Q&A with Cloud Expo Conference Chair Jeremy Geelan. “We at Vyatta think this trend is exciting,” Sneddon continued, “because these kinds of new ventures will always need powerful and creative networking and security solutions.”
Cloud Computing Journal: Agree or disagree? – “While the IT savings aspect is compelling, the strongest benefit of cloud computing is how it enhances business agility.”
Scott Sneddon: Whether you’re a mature company or an emerging business, time to market is critical to success. Rapid deployment of network infrastructure or a new product line always requires capital. The companies that win optimize their cash flow and keep plenty of it on hand to seize opportunities. That opportunity could be a critical executive hire, an undervalued acquisition target or a necessary engineering build-out. In any case, cloud computing, when executed correctly, can directly affect how nimbly companies react to market opportunities.

