ScaleIO Releases New Version of Its Elastic Converged Storage

ScaleIO today announced the release of ScaleIO ECS v1.1 scale-out storage software, bringing operational flexibility and cost savings to high-performance databases, virtual servers, end-user computing, and high-performance computing.

ScaleIO ECS eliminates the dependence on complex, expensive external SAN storage and fabric by presenting business application servers’ local disks as a robust, high-performance, shared virtual SAN. ECS provides hyper-scalability and enterprise-grade resilience while also reducing storage costs by more than 80%, delivering a direct savings of over 28% on an organization’s total IT budget.

“As enterprises consolidate into mega-data centers and SMEs move to cloud and hosting infrastructures, data centers are rapidly expanding to many thousands of servers. As a result, data center operators face constantly increasing levels of complexity and costs,” explained Boaz Palgi, CEO of ScaleIO. “ECS helps organizations manage these challenges by providing a scale-out storage solution that was designed for hyper-scalability, high performance, unprecedented elasticity, and low total cost of ownership. ECS makes storage as inconspicuous as CPU and RAM. Running seamlessly alongside business applications, ECS enables data centers to be built wall to wall from commodity servers only.”

With ECS, any administrator can add, move, or remove servers and capacity on demand during I/O operations. The software responds automatically to any infrastructure change and rebalances data accordingly across the grid. ECS helps ensure the highest level of enterprise-grade resilience by deploying advanced clustering algorithms whose distributed rebuild capabilities achieve the quickest handling of failures while maintaining maximum storage performance.

Breaking traditional barriers of storage scalability, ECS scales out to hundreds and thousands of nodes. Performance scales linearly with the number of application servers and disks. Deploying ECS in both greenfield and existing data center environments is a simple process and takes only a few minutes.

ECS can be managed from both a command-line interface (CLI) and an intuitive graphical user interface (GUI). ECS v1.1 natively supports all the leading Linux distributions and hypervisors; works agnostically with any SSD or HDD, regardless of type, model, or speed; and runs on x86, ARM, and other chipsets—giving organizations complete freedom of choice. Additional functionality includes encryption at rest and quality of service (QoS) of performance.

“Software-defined storage enables IT organizations to break out of the traditional SAN model that requires a staff of minions to perform mundane storage tasks,” commented Matthew Brisse, storage research director at Gartner. “Software-defined storage enables the promise of storage elasticity to match storage needs for traditional, virtual, and service-oriented cloud strategies in response to the ever-changing business requirements found in most IT organizations.”

“ScaleIO offers an innovative solution enabling customers to utilize capacity on hundreds of compute nodes and to aggregate that capacity into a single shared LUN,” said Julian Fielden, managing director, OCF. “OCF has deployed and tested ScaleIO ECS on a cluster with several hundred nodes in a large customer’s high-performance computing environment. The software made previously unused capacity available to the business applications and to a distributed file system while demonstrating impressive performance and resilience.”

Healthcare as a Service – Implementing a Cloud Solution

Cloud security and cloud compliance are among the hottest topics in cloud computing. During the course of 2012 we’ve seen many companies, specifically software vendors providing healthcare solutions, migrating or implementing their software in the cloud. While cloud computing brings many advantages to such ISVs (pay-per-use, scalability, and automation, to name a few), specific regulations, such as HIPAA in the healthcare space, force such players to pay attention to specific cloud issues around regulatory compliance.
The HIPAA regulation specifically requires Protected Health Information (PHI) to be encrypted both in motion and at rest. Any decent security engineer will tell you that cloud encryption can easily be implemented using the same tools used on premise. Right? Wrong (or, to be more exact, partially wrong): creating an encryption scheme is indeed the easy part. Doing so without trusting a third party (your cloud provider or the encryption provider) is the tricky part. When implementing encryption as part of an overall deployment strategy, one should consider whether the key management server is installed on premise or in the cloud. On premise is the secure option, yet it limits many of the cloud’s benefits; a cloud deployment of key management is attractive from a total-system standpoint but, until recently, required you to trust a third party with your encryption keys.
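The on-premise key-management trade-off described above can be sketched as envelope encryption with the key-encryption key held on premise. Everything below is illustrative only: the `OnPremKeyServer` class is a hypothetical stand-in, not any vendor’s API, and the XOR "cipher" is a placeholder so the sketch stays self-contained; a real deployment would use an authenticated cipher such as AES-GCM.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Placeholder "cipher" for illustration only; a real deployment
    # would use an authenticated cipher such as AES-GCM.
    return bytes(x ^ y for x, y in zip(a, b))

class OnPremKeyServer:
    """Stand-in for a key-management server kept on premise: the cloud
    side only ever sees wrapped (encrypted) data keys, so the
    key-encryption key never has to be entrusted to a third party."""
    def __init__(self):
        self._kek = os.urandom(32)  # key-encryption key; never leaves the premises

    def wrap(self, data_key: bytes) -> bytes:
        return xor_bytes(data_key, self._kek)

    def unwrap(self, wrapped: bytes) -> bytes:
        return xor_bytes(wrapped, self._kek)

# Cloud-side flow: generate a per-record data key, encrypt the PHI,
# and store only the ciphertext plus the wrapped key.
kms = OnPremKeyServer()
phi = b"patient: J. Doe, dx: ..."
data_key = os.urandom(len(phi))
record = {
    "ciphertext": xor_bytes(phi, data_key),
    "wrapped_key": kms.wrap(data_key),
}

# Decrypting later requires a round-trip to the on-prem key server.
recovered = xor_bytes(record["ciphertext"], kms.unwrap(record["wrapped_key"]))
```

The point of the pattern is in the data flow, not the cipher: the cloud holds ciphertext and wrapped keys, while every decryption forces a round-trip to the key server the organization controls.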

read more

Infinite Convergence Launches Cloud-Based Enterprise Mobile Messaging Service

Infinite Convergence Solutions, a carrier-grade next-generation mobility technology provider, today launched its Enterprise Messaging Service (EMS). The cloud-based messaging service is specifically designed for enterprises to securely exchange information with customers, employees and business partners globally. With customization and systems integration services, Infinite Convergence’s experienced end-to-end service team can help enterprises expedite the introduction of value-added mobile messaging.

“Enterprises are increasingly using mobile messaging to communicate with their customers, employees and partners. In addition, enterprises are becoming more discerning in their messaging delivery service requirements, moving beyond commoditized bulk messaging and towards high-quality, highly reliable services, which also allow them to add value to their own offerings,” said Pamela Clark-Dickson, Senior Analyst, Mobile Content & Applications at Informa Telecoms & Media.

EMS builds upon Infinite Convergence’s innovative carrier-grade messaging platform, which currently enables 130 million subscribers to exchange 900 billion mobile messages annually. With features such as global reach, delivery assurance, and end-to-end secure delivery, as well as a user-friendly web portal, campaign manager, opt-in/opt-out capabilities, and messaging analytics, Infinite Convergence’s EMS leverages its leading-edge platform to provide superior enterprise messaging capabilities. Customizable for a variety of industries, including financial services, travel and hospitality, and healthcare, EMS can be used for enterprises’ most time-sensitive and confidential communications.

“In today’s mobile world, it’s necessary for enterprises to establish close connections with their clients. Text and multimedia messaging continues to be the most ubiquitous form of communication across the globe,” said Anurag Lal, CEO of Infinite Convergence Solutions. “EMS enables organizations to engage with their customers, employees and business partners in the most effective and compelling way.”

EMS’s scalable, secure and proven technology boasts 99.99% reliability with the ability to deliver messages to mobile subscribers in over 180 countries. Its API-based approach allows the service to be seamlessly integrated with existing business applications and IT infrastructures. Moreover, with Infinite Convergence’s ability to provide scalable and cost-effective customization, enterprises can focus their efforts on enhancing their own customer-specific offering instead of being boxed into standard, commodity-based off-the-shelf services.

The Limits of Cloud: Gratuitous ARP and Failover

Cloud is great at many things. At other things, not so much. Understanding the limitations of cloud will better enable a successful migration strategy.

One of the truisms of technology is that it takes a few years of adoption before folks really start figuring out what it excels at – and conversely what it doesn’t. That’s generally because early adoption is focused on lab-style experimentation that rarely extends beyond basic needs.

It’s when adoption reaches critical mass and folks start trying to use the technology to implement more advanced architectures that the “gotchas” start to be discovered.

Cloud is no exception.

A few of the things we’ve learned over the past years of adoption are that cloud is always on, it’s simple to manage, and it makes applications and infrastructure services easy to scale.

Some of the things we’re learning now are that cloud isn’t so great at supporting application mobility, at monitoring deployed services, or at providing advanced networking capabilities.

The reason that last part is so important is that a variety of enterprise-class capabilities we’ve come to rely upon are ultimately enabled by some of the advanced networking techniques cloud simply does not support.

Take gratuitous ARP, for example. Most cloud providers do not allow or support this feature which ultimately means an inability to take advantage of higher-level functions traditionally taken for granted in the enterprise – like failover.

GRATUITOUS ARP and ITS IMPLICATIONS

For those unfamiliar with gratuitous ARP, let’s get you familiar with it quickly. A gratuitous ARP is an unsolicited ARP request made by a network element (host, switch, device, etc.) to resolve its own IP address. The source and destination IP addresses are identical: both are the IP address assigned to the network element. The destination MAC is a broadcast address.

Gratuitous ARP is used for a variety of reasons. For example, if there is a reply to the request, an IP conflict exists. When a system first boots up, it will often send a gratuitous ARP to indicate it is “up” and available. And finally, it is used as the basis for load balancing failover. To ensure availability of load balancing services, two load balancers will share an IP address (often referred to as a floating IP). Upstream devices recognize the “primary” device by means of a simple ARP entry associating the floating IP with the active device. If the active device fails, the secondary immediately notices (due to heartbeat monitoring between the two) and sends out a gratuitous ARP indicating it is now associated with the IP address, and won’t the rest of the network please send subsequent traffic to it rather than to the failed primary. VRRP and HSRP may also use gratuitous ARP to implement router failover.
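The packet layout described above is easy to see in code. The sketch below just lays out the bytes of such a frame for inspection (it does not put anything on the wire); `gratuitous_arp_frame` is a hypothetical helper name, not part of any library.

```python
import socket
import struct

def gratuitous_arp_frame(src_mac: bytes, src_ip: bytes) -> bytes:
    """Lay out an Ethernet frame carrying a gratuitous ARP request."""
    broadcast = b"\xff" * 6
    # Ethernet header: broadcast destination, our MAC, EtherType 0x0806 (ARP)
    eth_header = broadcast + src_mac + struct.pack("!H", 0x0806)
    arp_payload = struct.pack(
        "!HHBBH6s4s6s4s",
        1,            # hardware type: Ethernet
        0x0800,       # protocol type: IPv4
        6, 4,         # hardware / protocol address lengths
        1,            # opcode: request
        src_mac,      # sender MAC
        src_ip,       # sender IP
        b"\x00" * 6,  # target MAC: ignored in a request
        src_ip,       # target IP == sender IP: the "gratuitous" part
    )
    return eth_header + arp_payload

frame = gratuitous_arp_frame(
    b"\xaa\xbb\xcc\xdd\xee\xff", socket.inet_aton("10.0.0.5")
)
```

Note that the sender IP and target IP fields (bytes 28–31 and 38–41 of the 42-byte frame) carry the same address, which is exactly what receivers key on when updating their ARP caches.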

Most cloud environments do not allow broadcast traffic of this nature. After all, it’s practically guaranteed that you are sharing a network segment with other tenants, and thus broadcasting traffic could certainly disrupt other tenants’ traffic. Additionally, as security-minded folks will be eager to remind us, it is fairly well-established that the default for accepting gratuitous ARPs on the network should be “don’t do it”.

The astute observer will realize the reason for this: there is no security, no ability to verify, no authentication, nothing. A network element configured to accept gratuitous ARPs does so at the risk of being tricked into explicitly trusting every gratuitous ARP, even those attempting to fool the network into believing the sender is a device it is not supposed to be.

That, in essence, is ARP poisoning, and it’s one of the security risks associated with the use of gratuitous ARP. Granted, someone needs to be physically on the network to pull this off, but in a cloud environment that’s not nearly as difficult as it might be on a locked-down corporate network. Gratuitous ARP can further be used to execute denial-of-service, man-in-the-middle, and MAC flooding attacks, none of which has particularly pleasant outcomes, especially in a cloud environment where such attacks would be against shared infrastructure, potentially impacting many tenants.

Thus cloud providers are understandably leery about allowing network elements to willy-nilly announce their own IP addresses.

That said, most enterprise-class network elements have implemented protections against these attacks precisely because of the reliance on gratuitous ARP for various infrastructure services. Most of these protections use a technique that will tentatively accept a gratuitous ARP, but not enter it in its ARP cache unless it has a valid IP-to-MAC mapping, as defined by the device configuration. Validation can take the form of matching against DHCP-assigned addresses or existence in a trusted database.
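A minimal sketch of that tentative-accept logic, assuming a trusted binding table such as one populated by DHCP snooping (the function name and data shapes are my own for illustration, not from any particular device):

```python
def accept_gratuitous_arp(sender_ip, sender_mac, trusted_bindings, arp_cache):
    """Install a gratuitous ARP in the cache only if the claimed
    IP-to-MAC mapping matches a trusted binding (e.g. a DHCP-snooping
    table or a statically configured database)."""
    if trusted_bindings.get(sender_ip) == sender_mac:
        arp_cache[sender_ip] = sender_mac
        return True
    return False  # unverifiable announcement: possible ARP poisoning

# A legitimate announcement is cached; a spoofed one is dropped.
trusted = {"10.0.0.5": "aa:bb:cc:dd:ee:ff"}
cache = {}
accept_gratuitous_arp("10.0.0.5", "aa:bb:cc:dd:ee:ff", trusted, cache)
accept_gratuitous_arp("10.0.0.5", "11:22:33:44:55:66", trusted, cache)
```

The burden the next paragraph describes follows directly from this check: the provider would have to maintain a trusted binding for every tenant-assigned address on a shared segment.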

Obviously these techniques would put an undue burden on a cloud provider’s network given that any IP address on a network segment might be assigned to a very large set of MAC addresses.

Simply put, gratuitous ARP is not cloud-friendly, and thus you will be hard-pressed to find a cloud provider that supports it.

What does that mean?

That means, ultimately, that failover mechanisms in the cloud cannot be based on traditional techniques unless a means to replicate gratuitous ARP functionality without its negative implications can be designed.

Which means, unfortunately, that traditional failover architectures – even using enterprise-class load balancers in cloud environments – cannot really be implemented today. What that means for IT preparing to migrate business critical applications and services to cloud environments is a careful review of their requirements and of the cloud environment’s capabilities to determine whether availability and uptime goals can – or cannot – be met using a combination of cloud and traditional load balancing services.


read more

Cloud Computing: 10gen & SoftLayer Tie Up on MongoDB

10gen, the company commercializing MongoDB, and SoftLayer, the largest privately held Infrastructure-as-a-Service provider in the world, have just launched MongoDB Cloud Subscriptions.
It’s a unique pay-as-you-go managed cloud subscription pushing certified pre-engineered and orchestrated MongoDB systems through a highly scalable, automated cloud platform.
The idea is to make the open source NoSQL database more available through the push-button provisioning of high-performance, production-grade, highly scalable clusters at SoftLayer’s portal or API.

read more

A Break in the Clouds

A recent study by researchers at North Carolina State University and the University of Oregon describes a threat scenario that allows attackers to exploit cloud-based resources for malicious purposes like cracking passwords or launching denial-of-service attacks. The study has gotten a lot of attention, including articles in reputable sources like Dark Reading, Ars Technica and Network World.
In order to optimize the performance of mobile apps or browsers, some computation-heavy functions have been offloaded to cloud-based resources, which in turn access backend resources and Web pages. This creates a middle ground in the cloud that is exploited in the attack, which the authors call “Browser MapReduce (BMR)”. In reading the paper, it’s clear that this is a legitimate threat. The authors actually carried it out using free resources, although they limited the scope in order not to be abusive.

read more

Big Data and The Open Source Model

It is amazing how many open source software companies out there are trying to get hit by the same $1B bolt of lightning that hit MySQL without realizing that the MySQL result is not repeatable.
Looking at the current batch of big data high flyers, from 10gen to Cloudera to Hortonworks, each seems to be vying for the same kind of ubiquitous usage that enabled MySQL to get a more than 20x multiple. What they don’t realize is that the failure of early open source acquisitions to deliver substantial value to owners has made buyers much more wary.
Companies like MySQL were valued based on a mystical belief that downloads could be monetized (not unlike the similarly wishful belief in monetizing eyeballs that motivated disastrous dot-com acquisitions in the 90s). Moving forward, open source companies will be valued the old-fashioned way: by the viability of their business model.

read more

Big Data and Privacy

Remember when being “sent to your room” was considered one of the harshest punishments a parent could dole out?

I certainly hated it, and I’m pretty sure my kids don’t like it much either. For whatever reason, this form of punishment – the ultimate act of isolation – seems to have stood the test of time. It’s also a great way to quickly introduce your children to the seven stages of grief.

read more

Big Data Trees with Hadoop HDFS

Last month’s release of Revolution R Enterprise 6.1 added the capability to fit decision and regression trees on large data sets (using a new parallel external memory algorithm included in the RevoScaleR package). It also introduced the possibility of applying this and the other big-data statistical methods of RevoScaleR to data files distributed in Hadoop’s HDFS file system, using the Hadoop nodes themselves as the compute engine (with Revolution R Enterprise installed). Revolution Analytics’ VP of Development Sue Ranney explained how this works in a recent webinar. I’ve embedded the slides below, and you can also watch the webinar…

David Smith

read more