Can We Finally Find the Database Holy Grail? | Part 3

In the first post in this three-part series I talked about the need for distributed transactional databases that scale out horizontally across commodity machines, in contrast to traditional transactional databases that employ a "scale-up" design. Simply adding more machines is a quicker, cheaper, and more flexible way of increasing database capacity than forklift upgrades to giant steam-belching servers. It also promises continuous availability and geo-distributed operation.
The second post in this series provided an overview of the three historical approaches to designing distributed transactional database systems, namely: 1) Shared Disk designs (e.g., Oracle RAC), 2) Shared Nothing designs (e.g., the Facebook MySQL implementation), and 3) Synchronous Commit designs (e.g., Google F1). All of them have advantages over traditional client-server database systems, but each has serious limitations around cost, complexity, dependence on specialized infrastructure, and workload-specific performance trade-offs. I noted that we are very excited about a recent innovation in distributed database design, introduced by NuoDB's technical founder Jim Starkey. We call the concept Durable Distributed Cache (DDC), and I want to spend a little time in this third and final post explaining what it is, with a high-level overview of how it works.