While Big Data has been thought of as large stores of data at rest, it can also be about data in motion.
Of the three “V’s” of Big Data – volume, variety, velocity (we’d add “value” as a fourth) – velocity has been the unsung “V.” With the spotlight on Hadoop, the popular image of Big Data is large petabyte stores of unstructured data – the first two V’s.
“Fast Data” refers to processes that require lower latencies than would otherwise be possible with optimized disk-based storage. Fast Data is not a single technology, but a spectrum of approaches that process data that might or might not be stored. It could encompass event processing, in-memory databases, or hybrid data stores that optimize cache with disk.
Fast Data is nothing new, but because of the cost of memory, it was traditionally restricted to a handful of extremely high-value use cases.
Teradata, the doyen of the Big Data set, has a new purpose-built appliance for SAS high-performance analytics that uses an in-memory approach for hyper-fast results.
In other words, it distributes complex analytics in parallel across a vast pool of memory looking for patterns in large volumes of data.
It reportedly whittled what would normally have been a 167-hour project in financial risk analysis at some Wall Street bank or another down to 84 seconds.
Teradata claims other customers can expect as much and expects it to “kick competitive butt.” It claims IBM, Oracle and SAP, which have their own in-memory systems, “lack the foundational analytics.”
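The divide-across-memory idea can be sketched in a few lines. This is an illustrative toy, not Teradata’s or SAS’s actual implementation, and the pattern-counting “kernel” is a made-up stand-in for real risk analytics (a real system would shard the data across many nodes’ RAM, not threads in one process):

```python
from concurrent.futures import ThreadPoolExecutor

def count_pattern(chunk, pattern):
    # Stand-in for an analytics kernel scanning one in-memory chunk.
    return sum(1 for row in chunk if pattern in row)

def parallel_scan(rows, pattern, workers=4):
    # Split the in-memory dataset into chunks and scan them
    # concurrently -- the distribute-analytics-across-memory
    # idea described above, in miniature.
    size = max(1, len(rows) // workers)
    chunks = [rows[i:i + size] for i in range(0, len(rows), size)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(lambda c: count_pattern(c, pattern), chunks))
```

Because every chunk lives in memory, the only cost is the scan itself; that, scaled to a vast memory pool and real analytics, is where speedups like 167 hours down to 84 seconds come from.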
Recently Google announced a significant price increase for use of its App Engine Platform-as-a-Service. The increase itself was not a huge surprise. Google had been making noises that something like this was in the offing for a number of months. But the size of the increase shocked the Web development and cloud applications community. For most users, the cost of using the Google runtime environment effectively increased by 100% or more.
A huge online backlash ensued. For its part, Google put off the increase by a month and moderated some of the increases. But the whole incident brought many nagging doubts about the cloud to the surface. Said one poster on one of the many threads that lit up the Google Groups forums after the increase:
“I like so many of us have spent a lot of time learning app engine – I have been worried like so many that using …
Over the last few years, a lot of progress has been made toward virtualizing many of the traditional, network-centric appliances that used to be hardware-only. Why are some companies still resistant to this software-based approach? Is it because that’s the way it has always been, or are the networking geeks simply less virtualization-savvy than some of their cohorts in the other technology silos? It recalls the days when VoIP was first introduced and the resistance that some old-school, traditional telephony engineers fueled. Some of them accepted it; others retired. The point is that virtualization makes sense, and those who accept it will be much the better for it.
With the dynamic today moving towards private and public cloud offerings, the virtual appliance marketplace will most certainly continue to grow and mature. There are many reasons why this makes a lot of sense.
Take a look at the time it takes to implement a physical network appliance. Let’s use an application delivery controller – or load balancer, if you prefer that term. How long does it take to implement a physical box in an existing environment? Between ordering the units (they usually come in pairs), shipping and installing, it takes some time. The cables need to be run, the box racked, stacked, powered on and provisioned. We have been doing this for years, and it used to be standard operating procedure. That works well enough in your own data center, but what about a public cloud offering? Sorry, you don’t own that infrastructure. How about downloading a virtual appliance and spinning up a VM instead? You still have to provision the unit, but there are far fewer moving parts going that route. Cloud or not, it makes sense either way: fewer infrastructure requirements for power, rack space, cabling and so on.
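For a sense of why the software route has fewer moving parts, the core of what an application delivery controller does, spreading requests across back-end servers, fits in a few lines of Python. This is a bare round-robin sketch with made-up backend names, not a stand-in for a full ADC (no health checks, SSL offload or persistence):

```python
import itertools

class RoundRobinBalancer:
    """Minimal sketch of a software load balancer's core:
    rotate incoming requests across a pool of backends."""

    def __init__(self, backends):
        # cycle() yields the backends in order, forever.
        self._pool = itertools.cycle(backends)

    def route(self, request):
        # Pick the next backend and hand it the request.
        backend = next(self._pool)
        return backend, request
```

Standing this up is a download and a VM boot; its hardware equivalent is the ordering, racking and cabling described above.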
There are some other tangible benefits as well. From a refresh perspective, it just makes sense to upgrade a virtual appliance with a newer image – or to add memory – rather than perform a hardware forklift upgrade every five years (with potentially more downtime required). The ability to shrink or grow a virtual appliance is one of the things that sets it apart: we don’t have to repurchase anything other than license keys and annual service contracts. Regrettably, those won’t go away. On top of all that, the flexibility to move your virtual appliances along with your data from one environment to another is key. We will see more and more network-centric appliances become virtualized. There will most assuredly always be some physical boxes the network folks can get their hands on, but those will be for access purposes only.
The companies/manufacturers/network-engineers who don’t embrace this trend could quickly find themselves behind the eight ball. Analog phones anyone?
Independent industry security expert Gunnar Peterson provides the analysis and decision support that will enable you to make an informed choice when evaluating Security Gateways.
The guide describes security architecture capabilities, common business use cases, and deployment considerations. Upon registration you will receive access to the white paper and a customizable technical RFP matrix.
The cloud has raised the bar and changed the game for software development. However, the current software development paradigms are fundamentally introverted and rooted in a siloed approach to development. The cloud provides an opportunity for the emergence of a silo-free and elastic programming paradigm that is built to automatically scale and enable inside-out integration at its core.
In their session at the 10th International Cloud Expo, Ash Massoudi, CEO & Co-Founder of NextAxiom Technology, and Sandy Zylka, VP Products & Technology and a Co-Founder of NextAxiom, will discuss the seven defining characteristics of this elastic programming paradigm and their implications.
Business standards and compliance services provider SAI Global is benefiting from a strategic view of IT enabled disaster recovery.
When we started to get into DR, we handled it from an IT point of view and it was very much like an iceberg. We looked at the technology and said, “This is what we need from a technology point of view.” As we started to get further into the journey, we realized that there was so much more that we were overlooking.
We were working with the businesses to go through what they had, what they didn’t have, what we needed from them to make sure that we could deliver what they needed. Then we started to realize it was a bigger project.
Google finally introduced its long-trumpeted cloud-based Google Drive Tuesday hours before Apple released its Q2 results.
Drive happens to compete with iCloud, and Apple’s results, which could have been, shall we say, edgy, turned out to be over-the-top.
Drive also competes with Microsoft’s SkyDrive, Dropbox, Box, Amazon’s Cloud Drive and SugarSync.
Google says Drive users will get 5GB of free online storage for videos, photos, songs, files and PDFs. They can upload, create, edit, view, sync and store files; share them by way of different rights; collaborate on them and get notifications; and search by word, owner, even some images to a point (Drive recognizes scanned files). Access is from anywhere on PCs, Macs and Android devices (Gmail, iPhones, iPads, Chrome OS and Linux to come).
Traditional middleware is not suitable for the Cloud; it’s simply too complex and it doesn’t provide a pay-per-use charge model.
In their general session at the 10th International Cloud Expo, Ash Massoudi, CEO & Co-Founder of NextAxiom Technology, and Sandy Zylka, VP Products & Technology and a Co-Founder of NextAxiom, will discuss a new middleware paradigm for developing and integrating applications: metered, virtualized middleware. The virtualized attribute allows you to easily build and run new applications in the cloud reusing existing on-premise application functionality.
The metered attribute has two aspects: it provides a usage-based charge model for the middleware itself, and it provides a built-in, metered charge model, much like electricity, for your new applications.
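The electricity analogy reduces to a simple calculation: meter units of usage and bill only for what is consumed. Here is a minimal sketch with a hypothetical per-unit rate and free tier; these numbers are invented for illustration and are not NextAxiom’s actual pricing:

```python
def metered_charge(usage_units, rate_per_unit, free_tier=0):
    # Electricity-style billing: pay only for units consumed
    # beyond any free allowance. Rates here are hypothetical.
    billable = max(0, usage_units - free_tier)
    return billable * rate_per_unit
```

For example, 1,200 middleware calls at a notional $0.001 per call with 200 free calls would bill 1,000 calls; zero usage bills nothing, which is the pay-per-use property the excerpt says traditional middleware lacks.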
In the early years of cloud computing, the idea was just to get there – to start achieving some of the promised efficiencies. But now, as cloud initiatives mature, the focus has turned to ensuring data security and privacy – no small feat, given the range of threats and global regulations organizations encounter.
In their session at the 10th International Cloud Expo, George Gerchow, Director of VMware’s Center for Policy & Compliance, and Tom McAndrew, Executive Vice President of Professional Services at Coalfire Systems, will cover the security and privacy challenges in a cloud environment.