The Big Data and Cloud “movements” have acted as catalysts for tremendous growth in fit-for-purpose databases. Along with this growth, we see a new set of challenges in how we access the data through our business-critical applications. Let’s take a brief look at the evolution of these data access methods (and why we are in the mess we are in today).
Back in the ’80s, the development of relational databases brought with it a standardized SQL language that could be easily implemented within mainframe applications to query and manipulate data. These relational database systems supported transactions very reliably through what was called “ACID” compliance (Atomicity, Consistency, Isolation, and Durability). They provided a highly structured method of dealing with data and were very dependable, but ACID compliance also brought along lots of overhead. Hence a downfall: they were not optimized to handle large transaction requests, nor could they handle huge volumes of transactions. To counteract this, we made significant performance and throughput enhancements within data connectivity drivers that lit a fire under SQL speeds and connectivity efficiencies.
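To make the ACID guarantee concrete, here is a minimal sketch of atomicity using Python’s built-in `sqlite3` module (an illustrative stand-in, not one of the mainframe-era systems the article describes): a money transfer that fails partway through is rolled back entirely, so the database never exposes a half-finished state.

```python
import sqlite3

# In-memory database with two hypothetical accounts for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute(
            "UPDATE accounts SET balance = balance - 70 WHERE name = 'alice'"
        )
        # Simulate a crash after the debit but before crediting bob
        raise RuntimeError("simulated mid-transfer failure")
except RuntimeError:
    pass

# Atomicity: the debit was rolled back, so both balances are unchanged
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50}
```

The same all-or-nothing behavior is what ACID databases enforce at scale, and it is exactly the bookkeeping (locks, logs, forced writes) that adds the overhead described above.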