Category archive: Redis

Garantia Data Offers First Redis Hosting on Azure

Garantia Data, a provider of in-memory NoSQL cloud services, today announced the availability of its Redis Cloud and Memcached Cloud database hosting services on the Windows Azure cloud platform. Garantia Data’s services will provide the thousands of developers who run their applications on Windows Azure with virtually infinite scalability, high availability, high performance and zero management in just one click.

Garantia is currently offering its Redis Cloud and Memcached Cloud services free of charge to early adopters in the US-East and US-West Azure regions.

Used by both enterprise developers and cutting-edge start-ups, Redis and Memcached are open source, RAM-based, key-value memory stores that provide significant value in a wide range of important use cases. Garantia Data’s Redis Cloud and Memcached Cloud are reliable and fully-automated services for running Redis and Memcached on the cloud – essentially freeing developers from dealing with nodes, clusters, scaling, data-persistence configuration and failure recovery.

“We are happy to be the first to offer the community a Redis architecture on Windows Azure,” said Ofer Bengal, CEO of Garantia Data. “We have seen great demand among .Net and Windows users for scalable, highly available and fully-automated services for Redis and Memcached. Our Redis Cloud and Memcached Cloud provide exactly the sort of functionality they need.”

“We’re very excited to welcome Garantia Data to the Windows Azure ecosystem,” said Rob Craft, Senior Director of Cloud Strategy at Microsoft. “Services such as Redis Cloud and Memcached Cloud give customers the production, workload-ready services they can use today to solve real business problems on Windows Azure.”

Redis Cloud scales seamlessly and infinitely, so a Redis dataset can grow to any size while supporting all Redis commands. Memcached Cloud adds a storage engine and full replication capabilities to standard Memcached. Both provide true high availability, including instant failover with no human intervention. In addition, they run a dataset on multiple CPUs and use advanced techniques to maximize performance for any dataset size.

Garantia Brings Redis Cloud to Heroku, AppFog, AppHarbor

Garantia Data, a provider of in-memory NoSQL cloud services, today announced the availability of its Redis Cloud database hosting service on the Heroku, AppFog and AppHarbor platforms over AWS. Garantia Data’s new Redis Cloud add-ons will provide the hundreds of thousands of developers who run their applications on these platforms with an infinitely scalable, highly available, high-performance and zero-management Redis solution in just one click.

Used by both enterprise developers and cutting-edge start-ups, Redis is an open source, RAM-based, key-value memory store that provides significant value in a wide range of important use cases. Garantia Data’s Redis Cloud is a fully-automated service for running Redis on the cloud – completely freeing developers from dealing with nodes, clusters, scaling, data-persistence configuration and failure recovery.

“Redis Cloud has been running in a private beta on Amazon EC2 since January and in a free, public beta since June, and we survived several node failures and three AWS outages without losing any customer data,” said Ofer Bengal, CEO of Garantia Data. “After successfully navigating these events, we are now 100 percent confident that our service is fully reliable and ready for PaaS environments. Heroku, AppFog and AppHarbor developers will now be able to enjoy the powerful benefits that our solution can bring to their critical databases, while gaining more time to focus on building the best possible applications.”

Redis Cloud is the only solution that scales seamlessly and infinitely, so a Redis dataset can grow to any size while supporting all Redis commands. It provides true high-availability, including instant failover with no human intervention. In addition, it runs a dataset on multiple CPUs and uses advanced techniques to maximize performance for any dataset size. Redis Cloud add-ons let developers create multiple databases in a single plan, each running in a dedicated process and in a non-blocking manner.

“We’re very excited to welcome Garantia Data to the AppFog ecosystem,” said Lucas Carlson, CEO and founder of AppFog. “Redis Cloud is exactly the sort of production, workload-ready service that our customers have been demanding. As huge fans of Redis, we feel that Redis Cloud’s robust performance and complete feature set make it one of the best NoSQL DB-as-a-Service options out there. We can’t wait to see what developers create with Redis Cloud and AppFog!”

“We’re excited to welcome Garantia Data’s Redis Cloud into the AppHarbor add-on catalog,” said Michael Friis, co-founder of AppHarbor. “Redis is becoming a critical component for many .NET developers and is used by prominent .NET-powered web properties like StackOverflow.”

“We’ve seen Redis become an integral part of modern web applications, in part because of its amazing performance and flexibility,” said Glenn Gillen, Engineering Manager for Heroku Add-ons. “We’re excited to include Redis Cloud in the Heroku Add-ons marketplace so our customers can take advantage of its highly available and scalable solution in the quickest and simplest way possible.”

Garantia Data is currently offering Redis Cloud free of charge to early adopters of its Heroku, AppFog and AppHarbor add-ons. The company will demonstrate Redis Cloud and its new PaaS add-ons at Booth #332 during AWS re:Invent, November 27-29 in Las Vegas.


AppFog Adds Redis, RabbitMQ Support Across Cloud Providers

AppFog today announced support for Redis and RabbitMQ, two of the most in-demand and widely used solutions for developing enterprise-class, web scale applications.

Used by both enterprise developers and those building cutting-edge start-up technologies, Redis is an open source, RAM-based, key-value memory store that provides significant value in a wide range of important use cases. The popular and powerful NoSQL database has become a coding staple for developers worldwide and is depended on for scalability by companies whose websites serve massive numbers of customers and users. Redis has also been the most-requested feature among developers. Used by companies ranging from GitHub to Blizzard and from StackOverflow to Flickr, Redis has become a best practice for anyone looking to build solutions with excellent performance.

“Redis has become a required go-to tool for developers looking to solve performance issues in their applications,” said Krishnan Subramanian, Founder and Principal Analyst at Rishidot Research. “Interestingly, the recommendation is to add Redis to your stack and take advantage of it in areas where your existing database falls short. As a critical component of many highly performant stacks, Redis is rapidly becoming the standard for memory-based key-value stores.”

RabbitMQ is an open source enterprise message broker that enables robust and easy-to-use messaging for applications. The message queuing software supports a wide range of languages, platforms and third-party services. Backed by VMware, RabbitMQ is used by a huge number of developers and companies to build robust and reliable applications.
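
As a rough illustration of the kind of messaging RabbitMQ enables, here is a minimal publisher sketch in Python using the pika client; the broker address, queue name and message body are assumptions for the example, not anything from the announcement:

    import pika

    # Connect to a RabbitMQ broker (assumed to be running on localhost)
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Declare a queue and publish a message to it via the default exchange
    channel.queue_declare(queue="tasks")
    channel.basic_publish(exchange="", routing_key="tasks", body="process order #42")

    connection.close()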


Garantia Testing asks “Does Amazon EBS Affect Redis Performance?”

The Redis mavens at Garantia Data decided to find out whether EBS really slows down Redis when used across various AWS platforms.

Their testing and conclusions answer the question: Should AOF be the default Redis configuration?

We think so. This benchmark clearly shows that running Redis over various AWS platforms using AOF with a standard, non-RAIDed EBS configuration doesn’t significantly affect Redis’ performance. If we take into account that Redis professionals typically tune their redis.conf files carefully before using any data persistence method, and that newbies usually don’t generate loads as large as the ones we used in this benchmark, it is safe to assume that this performance difference is almost negligible in real-life scenarios.
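
For reference, AOF persistence is controlled by a couple of redis.conf directives, and the same settings can be applied at runtime. A minimal sketch using the redis-py client against an assumed local instance (not the exact configuration used in the benchmark):

    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Enable append-only-file persistence (equivalent to "appendonly yes" in redis.conf)
    r.config_set("appendonly", "yes")

    # fsync the AOF once per second -- the usual trade-off between durability and throughput
    r.config_set("appendfsync", "everysec")

    print(r.config_get("appendonly"))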

Read the full post for all the details.


Benchmarking Redis on AWS: Is Amazon PIOPS Really Better than Standard EBS?

The Redis experts at Garantia Data did some benchmarking in the wake of Amazon’s announcement of Provisioned IOPS (PIOPS) for EBS.

Their conclusion:

After 32 intensive tests with Redis on AWS (each run in 3 iterations, for a total of 96 test iterations), we found that neither regular instances nor EBS-optimized instances performed better with Amazon’s PIOPS EBS for Redis. According to our results, the right standard EBS configuration can provide equal if not better performance than PIOPS EBS, and should actually save you money.

Read the full post for details and graphs.


Redis/Memcached: Even Modest Datasets Can Enjoy the Speediest Performance

A fairly technical post over at Garantia Data’s blog relates the results of a recent benchmark of the effects of cloud infrastructure on Memcached and Redis datasets:

Redis and Memcached were designed from the ground-up to achieve the highest throughput and the lowest latency for applications, and they are in fact the fastest data store systems available today. They serve data from RAM, and execute all the simple operations (such as SET and GET) with O(1) complexity.

However, when run over cloud infrastructure such as AWS, Redis or Memcached may experience significant performance variations across different instances and platforms, which can dramatically affect the performance of your application.

Read the full post.


Taking In-Memory NoSQL to the Next Level

Guest Post by Ofer Bengal, Co-Founder & CEO, Garantia Data


Ofer Bengal

Ofer Bengal has founded and led companies in data communications, telecommunications, Internet, homeland security and medical devices.

Today Garantia Data is launching the first in-memory NoSQL cloud that promises to change the way people use Memcached and Redis. I think this is a great opportunity to examine the state of these RAM-based data stores and to suggest a new, highly-efficient way of operating them in the cloud.

Challenges of Cloud Computing

Memcached and Redis are being increasingly adopted by today’s web applications, which use them to scale out their data tier and significantly improve application performance (in many cases a 10x improvement over a standard RDBMS implementation). However, cloud computing has created new challenges in the way scaling and application availability should be handled, and using Memcached and Redis in their simple form may not be enough to cope with these challenges.

Memcached

It’s no secret Memcached does wonders for websites that need to quickly serve dynamic content to a rapidly growing number of users. Facebook, Twitter, Amazon and YouTube rely heavily on Memcached to help them scale out; Facebook handles millions of queries per second with Memcached.
But Memcached is not just for giants. Any website concerned with response time and user-base growth should consider Memcached for boosting its database performance. That’s why over 70% of all web companies, the majority of which are hosted on public and private clouds, currently use Memcached.

Local Memcached is the simplest and fastest caching method because you cache the data in the same memory as the application code. Need to render a drop-down list faster? Read the list from the database once, and cache it in a Memcached HashMap. Need to avoid the performance-sapping disk thrashing of an SQL call made repeatedly to render a user’s personalized web page? Cache the user profile and the rendered page fragments in the user session.
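
As a minimal sketch of that cache-aside pattern in Python with the pymemcache client, the drop-down-list example might look like this; the database helper, query and key name are hypothetical, and the cache is assumed to run locally alongside the application:

    import json
    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))

    def get_country_list(db):
        # Cache-aside: try Memcached first, fall back to the database on a miss
        cached = cache.get("country_list")
        if cached is not None:
            return json.loads(cached)
        rows = db.query("SELECT name FROM countries ORDER BY name")  # hypothetical DB helper
        countries = [row["name"] for row in rows]
        cache.set("country_list", json.dumps(countries), expire=600)  # cache for 10 minutes
        return countries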

Although local caching is fine for web applications that run on one or two application servers, it simply isn’t good enough when the data is too big to fit in the application server’s memory space, or when the cached data is updated and shared by users across multiple application servers and user requests. In such cases, user sessions are not bound to a particular application server. Using local caching under these conditions may end up providing a low hit ratio and poor application performance.

Distributed Memcached improves on local caching by enabling multiple application servers to share the same cache cluster (a minimal client-side sketch follows the list below). Although the Memcached client and server code is rather simple to deploy and use, Distributed Memcached suffers from several inherent deficiencies:

  • Lack of high availability – When a Memcached server goes down, the application’s performance suffers because all data queries are now addressed to the RDBMS, which provides much slower response times. When the problem is fixed, it can take between a few hours and several days until the recovered server becomes “hot” with updated objects and fully effective again. In more severe cases, where session data is stored in Memcached without persistent storage, losing a Memcached server may force users to log out or flush their shopping carts (on e-commerce sites).
  • Failure hassle – The operator needs to point all clients at the replacement server and wait for it to “warm up”. Operators sometimes add temporary slave servers to their RDBMS to offload the master server until Memcached recovers.
  • Scaling hassle – When the application dataset grows beyond the current Memcached resource capacity, the operator needs to scale out by adding more servers to the Memcached tier. However, it is not always clear when exactly this point has been reached and many operators scale out in a rush only after noticing degradation in their application’s performance.
  • Scaling impact on performance – Scaling out (or in) Memcached typically causes partial or entire loss of the cached dataset, resulting, again, in degradation of the application’s performance.
  • Manpower – Operating Memcached efficiently requires manpower to monitor, optimize and scale when required. In many web companies these tasks are carried out by expensive developers or DevOps engineers.
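
The client-side sketch mentioned above: with pymemcache, sharing a cache cluster across application servers is just a matter of giving every client the same server list, and the client hashes each key to one of the nodes. The addresses below are made up for illustration:

    from pymemcache.client.hash import HashClient

    # Every application server is configured with the same list of Memcached nodes;
    # keys are hashed client-side to pick the node that stores each item.
    servers = [("10.0.0.1", 11211), ("10.0.0.2", 11211), ("10.0.0.3", 11211)]
    client = HashClient(servers)

    client.set("user:1000:profile", '{"name": "Alice"}', expire=300)
    print(client.get("user:1000:profile"))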

Amazon has tried to simplify the use of Memcached by offering ElastiCache, a cloud-based, value-added service where the user does not have to install Memcached servers but instead rents VMs (instances) pre-loaded with Memcached (at a cost higher than plain instances). However, ElastiCache does not address any of the Memcached deficiencies mentioned above. Furthermore, ElastiCache scales out by adding a complete EC2 instance to the user’s cluster, which wastes money for users who only need one or two more GBs of Memcached. With this model, ElastiCache misses delivering on the true promise of cloud computing – “consume and pay only for what you really need” (just as for electricity, water and gas).

Redis

Redis, an open source, key-value, in-memory NoSQL database, began ramping up in 2009 and is now used by Instagram, Pinterest, Digg, GitHub, Flickr, Craigslist and many others. It has an active open source community, sponsored by VMware.

Redis can be used as an enhanced caching system alongside an RDBMS, or as a standalone database.
Redis provides a completely new set of data types built specifically for serving modern web applications in an ultra-fast and more efficient way (a brief sketch of these data types follows the list below). It solves some of the Memcached deficiencies, especially when it comes to high availability, by providing replication capabilities and persistent storage. However, it still suffers from the following drawbacks:

  • Failure hassle – There is no auto-failover mechanism; when a server goes down, the operator still needs to activate a replica or rebuild one from persistent storage.
  • Scalability – Redis is still limited to a single master server, and although cluster management capability is being developed, it probably won’t be simple to implement and manage, and it will not support all Redis commands, making it incompatible with existing deployments.
  • Operations – Building a robust Redis system requires strong domain expertise in Redis replication and data persistence nuances, and building a Redis cluster will be rather complex.
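
The data-type sketch referred to above: a few of the Redis structures that go beyond plain key-value, shown with the redis-py client against an assumed local instance (key names and values are illustrative):

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # String: plain key-value with atomic counters
    r.set("page:home:views", 0)
    r.incr("page:home:views")

    # Hash: a user profile stored as field-value pairs
    r.hset("user:1000", mapping={"name": "Alice", "plan": "free"})

    # List: a capped activity feed
    r.lpush("feed:user:1000", "logged in")
    r.ltrim("feed:user:1000", 0, 99)

    # Sorted set: a leaderboard ordered by score
    r.zadd("leaderboard", {"alice": 1500, "bob": 900})
    print(r.zrevrange("leaderboard", 0, 2, withscores=True))
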
The Evolution of DB Caching

A new cloud service that will change the way people use Memcached and Redis

Imagine connecting to an infinite pool of RAM and drawing as much Memcached or Redis memory as you need at any given time, without ever worrying about scalability, high availability, performance, data security or operational issues – and all this with the click of a button (OK, a few buttons). Now imagine paying only for the GBs you use rather than for full VMs, and at a rate similar to what you pay your cloud vendor for plain instances. Welcome to the Garantia Data In-Memory NoSQL Cloud!

By In-Memory NoSQL Cloud I mean an online, cloud-based, in-memory NoSQL data-store service that takes the burden of operating, monitoring, handling failures and scaling Memcached or Redis off the application operator’s shoulders. Here are my top 6 favorite features of such a service, now offered by Garantia Data:

  • Simplicity – Operators no longer need to configure and maintain nodes and clusters. The standard Memcached/Redis clients are simply pointed at the service DNS, and from that moment on all operational issues are taken care of automatically by the service (a minimal connection sketch follows this list).
  • Infinite scalability – The service provides an infinite pool of memory with true auto-scaling (out or in) to the precise size of the user’s dataset. Operators don’t need to monitor eviction rates or performance degradation in order to trigger scale-out; the system constantly monitors those and adjusts the user’s memory size to meet performance thresholds.
  • High availability – Built-in automatic failover makes sure data is preserved under all circumstances. Local persistent storage of the user’s entire dataset is provided by default, while in-memory replication can be configured with a mouse click. In addition, there is no data loss whatsoever when scaling out or in.
  • Improved application performance – Response time is optimized through continuous monitoring and scaling of the user’s memory. Several techniques that efficiently evict unused and expired objects are employed to significantly improve the hit ratio.
  • Data security – For operators who are concerned about hosting their dataset in a shared service environment, Garantia Data offers full encryption of the entire dataset as a key element of its service.
  • Cost savings – Garantia Data frees developers from handling data integrity, scaling, high availability and Memcached/Redis version compliance issues. Additional savings come from paying only for the GBs consumed rather than for complete VMs (instances). The service follows the true spirit of cloud computing, enabling memory consumption to be paid for much like electricity, water or gas: you “only pay for what you really consume”.
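
The connection sketch mentioned in the list: from the application’s point of view, using such a service is just a matter of pointing a standard client at the DNS endpoint and credentials the service issues. The endpoint, port and password below are hypothetical placeholders, not actual service values:

    import redis

    r = redis.Redis(
        host="redis-12345.us-east-1.example-service.com",  # hypothetical service DNS endpoint
        port=12345,                                         # hypothetical port assigned by the service
        password="service-issued-password",
    )

    r.set("hello", "world")
    print(r.get("hello"))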

We have recently concluded a closed beta trial with 20 participating companies where all these features were extensively tested and verified – and it worked fine! So this is not a concept anymore, it’s real and it’s going to change the way people use Memcached and Redis! Am I excited today? Absolutely!