Solarflare Bares Thoughts on Cloud Overhead

Faster. I said “faster.” This could be the motto of Solarflare Communications and Bruce Tolley, its VP of Technical, Solutions and Partner Marketing. As databases inexorably grow and get distributed throughout clouds, the need for speed does not go away, even as overhead works against it.
Solarflare was at the recent Red Hat Summit, and as part of my continuing series of brief interviews, I asked Bruce a few questions.
Roger: What sort of performance increases can your customers experience?
Bruce: By leveraging the industry standard for server I/O virtualization (SR-IOV), we can present not a handful but hundreds of virtual network interfaces, as well as multiple physical network interfaces, to the network OS.
By combining SR-IOV with PCI passthrough or DirectPath I/O, we can get back to pretty close to bare-metal performance with a Tier 1 application that has been virtualized. This is possible because we are able to bypass a lot of the overhead of the hypervisor and give the application direct access to the network.
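To make the SR-IOV point a bit more concrete, here is a minimal sketch of how virtual functions get exposed on a Linux host. The interface name and VF count are placeholders I chose for illustration, and the actual passthrough step (a KVM hostdev assignment or DirectPath I/O on ESX) depends on the hypervisor rather than on anything Solarflare-specific.

```python
# Minimal sketch, assuming a Linux host with an SR-IOV-capable adapter.
# The interface name and VF count below are placeholders for illustration.
from pathlib import Path

IFACE = "eth2"     # hypothetical adapter port name
NUM_VFS = 8        # hypothetical number of virtual functions to expose

device = Path(f"/sys/class/net/{IFACE}/device")

# sriov_totalvfs reports how many VFs the NIC and driver can provide.
total = int((device / "sriov_totalvfs").read_text())
if NUM_VFS > total:
    raise SystemExit(f"{IFACE} supports at most {total} VFs")

# Writing to sriov_numvfs instantiates the VFs as separate PCI functions.
# Each VF can then be assigned directly to a guest (KVM hostdev,
# DirectPath I/O, etc.), bypassing the hypervisor's software switch.
(device / "sriov_numvfs").write_text(str(NUM_VFS))
print(f"Enabled {NUM_VFS} virtual functions on {IFACE}")
```

Once the VFs exist as their own PCI functions, each one can be handed directly to a guest, which is what takes the hypervisor's software data path out of the picture.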
Roger: How important is real-time versus accessing archived, historical information to your customers?
Bruce: This is really a question about the scale and size of the data that needs to be processed, and the time that elapses for all the steps to complete before you get your answer. For many of the Web 2.0 companies that have exascale data sets, big data processing is very much like batch processing, where the time it takes to get the answer is measured in minutes.
Many enterprise customers have smaller data sets and need real-time answers for use cases such as risk analysis and compliance, where the answer is needed in seconds or less. The open-source big data community is delivering a number of real-time tools and platforms to address this need.
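As a rough illustration of the distinction Bruce is drawing, a batch job scans the whole data set and answers in minutes, while a real-time path keeps only a recent window of events so a figure like current exposure can be returned in well under a second. The sketch below is hypothetical Python of my own, not any particular open-source tool.

```python
# Hypothetical sketch of the two latency profiles: a batch pass over the
# whole data set versus a sliding window that answers in sub-second time.
import time
from collections import deque

def batch_exposure(events):
    """Scan the full data set; on very large inputs this takes minutes."""
    return sum(e["exposure"] for e in events)

class SlidingWindowExposure:
    """Keep only the last window_s seconds of events so the current
    exposure can be reported immediately."""
    def __init__(self, window_s=5.0):
        self.window_s = window_s
        self.events = deque()   # (timestamp, exposure) pairs
        self.total = 0.0

    def add(self, exposure):
        now = time.monotonic()
        self.events.append((now, exposure))
        self.total += exposure
        # Drop events that have aged out of the window.
        while self.events and now - self.events[0][0] > self.window_s:
            _, old = self.events.popleft()
            self.total -= old

    def current(self):
        return self.total
```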
Roger: What does the term Big Data therefore mean to you?
Bruce: Solarflare develops software and hardware for 10GbE and 40GbE networking, including server adapters. Our customers use our 10GbE products to build utility compute grids that support Hadoop, Cloudera, and Greenplum analytics.
They can also use our complete portfolio of precision-time and network monitoring/packet capture solutions to instrument the performance of those grids for internal purposes. We also help our customers build OpenStack clouds with Linux KVM or VMware ESX when they want to run Big Data analytics in the cloud.
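For a flavor of what instrumenting a grid looks like at the packet level, the sketch below captures frames on one node and stamps their arrival times. It is Linux-only, needs root, uses a placeholder interface name, and relies on the host clock rather than the hardware timestamps a precision-time adapter would supply.

```python
# Linux-only sketch; needs root. Interface name is a placeholder, and the
# timestamps come from the host clock, not the adapter's hardware clock.
import socket
import struct
import time

ETH_P_ALL = 0x0003   # capture frames of every protocol
IFACE = "eth2"       # hypothetical interface carrying grid traffic

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
sock.bind((IFACE, 0))

for _ in range(10):
    frame, _addr = sock.recvfrom(65535)
    ts = time.time()
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    print(f"{ts:.6f}  {src.hex(':')} -> {dst.hex(':')}  "
          f"type=0x{ethertype:04x}  len={len(frame)}")
```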
