SYS-CON Events announced today that Silanis, the world’s leading electronic signature provider, will exhibit at SYS-CON’s 14th International Cloud Expo®, which will take place on June 10–12, 2014, at the Javits Center in New York City, New York.
Since the company was founded in 1992, Silanis software has automated business transactions that require secure, compliant and enforceable e-signatures. Recognized as the enterprise market leader, Silanis processes more than 600 million e-signed transactions annually – more than any other e-signature vendor. These transactions represent billions of dollars’ worth of regulated business processes taking place 24/7 around the globe, from insurance applications and consumer loans to federal procurement contracts. Its customers are leading organizations in their respective fields, including four of North America’s top 10 banks, eight of the top 15 insurers and the entire US Army, among others.
Monthly Archives: April 2014
Teaching an Old Dog New Tricks: Blending Cloud and Desktop Applications
In 2013, Adobe took the bold step of moving from a perpetual licensing model to a subscription-based model to sell its creative software. The world is becoming more connected, collaborative and mobile, and Adobe wanted to make the shift to meet those needs. The process began in 2012, when Adobe launched the first phase of Creative Cloud, which provided Adobe’s full set of creative applications as a membership. In the year since, Adobe built the infrastructure to store and share files in the cloud, and acquired Behance, a thriving social network used by millions of people to connect, showcase their work, and discover other talent. The result for users is a blend of cloud computing and desktop applications: customers will continue to install and use the creative applications on their desktop just as they always have, but the apps will increasingly be part of a larger creative process centered on Creative Cloud.
Maybe the Cloud Can Help Secure the Internet
As recent events have confirmed once again, no single company, organization or government is up to the task of securing the Internet. The never-ending cat and mouse game of exploits chasing vulnerabilities continues. The stunning Heartbleed discovery has shaken the online security establishment to the core. Claims of security and privacy for many Web servers were patently false.
We all know a chain is only as strong as its weakest link, and the unintended back-door information leak that is Heartbleed has undoubtedly allowed countless secrets to escape from secure servers, albeit as random pieces of a puzzle to be reassembled by the hacker. It will likely go down in history as the most widespread compromise of online services since the advent of the Web. Why? Because we now conduct an unprecedented number of so-called “secure” communications over SSL in every facet of commerce, government and the social web.
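To make the “unintended back door” concrete: Heartbleed came down to a missing bounds check in OpenSSL’s TLS heartbeat handling, where the server echoed back as many bytes as the request *claimed* to contain rather than as many as it actually sent. A minimal Java sketch of the vulnerable pattern and its fix (the method and buffer names here are illustrative, not OpenSSL’s):

```java
import java.util.Arrays;

public class HeartbleedSketch {
    // Vulnerable pattern: trust the length field in the request and copy
    // that many bytes out of process memory. If claimedLength exceeds the
    // real payload, adjacent memory (keys, passwords, session data) leaks.
    static byte[] vulnerableEcho(byte[] memory, int payloadOffset, int claimedLength) {
        // No check that claimedLength matches the actual payload size.
        return Arrays.copyOfRange(memory, payloadOffset, payloadOffset + claimedLength);
    }

    // Fixed pattern: reject any request whose claimed length exceeds the
    // bytes actually received, as the OpenSSL patch does.
    static byte[] patchedEcho(byte[] memory, int payloadOffset,
                              int actualLength, int claimedLength) {
        if (claimedLength > actualLength) {
            return new byte[0]; // silently discard the malformed heartbeat
        }
        return Arrays.copyOfRange(memory, payloadOffset, payloadOffset + claimedLength);
    }
}
```

With a two-byte payload but a claimed length of six, `vulnerableEcho` hands back four extra bytes of whatever sat next to the payload in memory, which is exactly the “random pieces of a puzzle” the attacker collects.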
MapDB: The Agile Java Database
MapDB is an Apache-licensed open source database specifically designed for Java developers. The library uses the standard Java Collections API, making it totally natural for Java developers to use and adopt, while scaling database size from GBs to TBs. MapDB is very fast and supports an agile approach to data, allowing developers to construct flexible schemas to exactly match application needs and tune performance, durability and caching for specific requirements.
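The “standard Java Collections API” claim is the heart of MapDB’s pitch: a persistent store is handed to you as an ordinary `Map`. The sketch below shows that programming model using an in-heap `ConcurrentSkipListMap` as a stand-in so it runs without the MapDB jar; the MapDB calls in the comment reflect the circa-2014 1.x API and should be treated as an assumption:

```java
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class MapDbStyle {
    // In MapDB this map would come from something like:
    //   DB db = DBMaker.newFileDB(new File("orders.db")).make();
    //   ConcurrentNavigableMap<Long, String> orders = db.getTreeMap("orders");
    // An in-heap ConcurrentSkipListMap stands in here, since both expose
    // the same standard Collections interface -- which is the whole point.
    static ConcurrentNavigableMap<Long, String> openOrders() {
        return new ConcurrentSkipListMap<>();
    }

    public static void main(String[] args) {
        ConcurrentNavigableMap<Long, String> orders = openOrders();
        orders.put(1001L, "widget x3");
        orders.put(1002L, "gadget x1");
        // Range queries come free with the NavigableMap interface; in MapDB
        // they would run against the on-disk B-tree the same way.
        System.out.println(orders.headMap(1002L));
    }
}
```

Because the store is just a `Map`, there is no schema migration step: adding a new keyed structure to an application means opening another named collection, which is what the article means by an “agile approach to data.”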
Moore’s Law Gives Way to Bezos’s Law
Cloud providers Google, AWS and Microsoft are doing some spring-cleaning – out with the old, in with the new – when it comes to pricing services.
With the latest cuts, here’s a news flash:
There’s a new business model driving cloud that is every bit as exponential in growth — with order-of-magnitude improvements to pricing — as Moore’s Law has been to computing. Let’s call it “Bezos’s Law,” and go straight to the math.
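The math is a simple exponential-decay model. The halving period usually quoted for “Bezos’s Law” is roughly three years per 50% price cut; here that rate is a parameter, not a measurement, and the starting price is illustrative:

```java
public class BezosLaw {
    // Projected unit price after t years, assuming price halves every
    // halvingYears years: p(t) = p0 * 0.5^(t / halvingYears).
    // The ~3-year halving period is an assumption for illustration.
    static double projectedPrice(double p0, double halvingYears, double years) {
        return p0 * Math.pow(0.5, years / halvingYears);
    }

    public static void main(String[] args) {
        double now = 0.10; // $/hour for some instance type (hypothetical)
        for (int y = 0; y <= 9; y += 3) {
            System.out.printf("year %d: $%.4f/hr%n", y, projectedPrice(now, 3.0, y));
        }
    }
}
```

Run over a decade, the same compounding that drives Moore’s Law on transistor density drives an order-of-magnitude price decline here: three halvings take a $0.10/hr instance to $0.0125/hr.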
SolarFlare Bares Thoughts on Cloud Overhead
Faster. I said “faster.” This could be the motto of Solarflare Communications and Bruce Tolley, VP of Technical, Solutions and Partner Marketing. As databases inexorably grow and get distributed throughout clouds, the need for speed does not go away, even as overhead can work against this need.
Solarflare was at the recent Red Hat Summit, and as part of my continuing series of brief interviews, I asked Bruce a few questions.
Roger: What sort of performance increases can your customers experience?
Bruce: By leveraging the industry standard for server I/O virtualization (SR-IOV), we can present not a handful but hundreds of virtual network interfaces, as well as multiple physical network interfaces, to the network OS.
By combining SR-IOV with PCI Passthrough or DirectPath I/O, we can get back close to bare-metal performance with a tier 1 application that has been virtualized. This is possible because we are able to bypass much of the overhead of the hypervisor and give the application direct access to the network.
Roger: How important is real-time versus accessing archived, historical information to your customers?
Bruce: This is really a question about the scale and size of the data that needs to be processed, and the time that elapses for all the steps to complete before you get your answer. For many of the web 2.0 companies that have exabyte-scale data sets, the big data processing method is very much like batch processing, where the time it takes to get the answer is measured in minutes.
Many enterprise customers have smaller data sets and need real-time answers for use cases such as risk analysis and compliance, where the answer is needed in seconds or less. The open source big data community is delivering a number of real-time tools and platforms to address this need.
Roger: What does the term Big Data therefore mean to you?
Bruce: Solarflare develops software and hardware for 10 and 40 Gigabit networking, including server adapters. Our customers use our 10GbE products to build utility compute grids that support Hadoop, Cloudera, and Greenplum analytics.
They can also use our complete portfolio of precision time and network monitoring/packet capture solutions to instrument the performance of those grids for internal purposes. We also help our customers build OpenStack clouds with Linux KVM or VMware ESX when they want to run Big Data analytics in the cloud.
Inktank: Living in a Multi-Petabyte World
Blink your eyes and five years have gone by. That’s my feeling as I posed questions to Ross Turk (pictured below), VP of Community at Inktank.
I first interviewed him almost five years ago, at an event in San Jose, when he was with SourceForge. Since that time, the entire cloud computing revolution went into full-launch mode, with Inktank’s focus on massive distributed storage along with it.
Inktank and Ross were at the recent Red Hat Summit, which seemed like a great opportunity to revisit his thoughts. Here’s what I asked him and what he had to say.
Roger: What sort of scale do your customers face?
Ross: Deployments in the 3-5PB range are common, and larger ones are slowly coming online. In 2014, we expect to see regular deployments in the 10PB range.
Through Inktank Ceph Enterprise, we aim to deliver massively scalable storage that runs on commodity hardware, radically improving storage economics and easing the costs of managing exponential enterprise data growth.
Because Ceph contains no single point of failure and is built using only scale-out components, there is no theoretical limit to the deployment size. And because our solution is based on open source, it frees enterprises from vendor lock-in, providing the flexibility and scalability necessary to keep up with evolving storage needs.
Roger: What problems are Inktank and Ceph designed to solve?
Ross: Traditional storage solutions are typically built on costly proprietary hardware, requiring enterprises with vast amounts of data to invest heavily. They are also restrictive, often tying enterprises to a single vendor and reducing their ability to adapt to changing storage demands.
Roger: What are the keys to your relationship with Red Hat? What do each of the companies contribute to one another?
Ross: Our relationship with Red Hat is critical to us; as we work to bring the power of Ceph into the enterprise, more and more of our customers are demanding tight interoperability with technologies in the Red Hat ecosystem, and their customers are increasingly requesting Ceph-based solutions.
Inktank Ceph Enterprise is certified to provide storage for RHEL-OSP, Red Hat’s OpenStack distribution, and RHEV, Red Hat’s enterprise virtualization product. Our relationship is necessary to deliver and support the fully-integrated solutions our customers are asking for!
Thank You GetVoIP!!
Thank you for the honor of being named a “Top 100 Cloud Professionals to Follow on G+”! Congratulations also to my 99 colleagues.