Cloudscaling to Exhibit at Cloud Expo Silicon Valley

SYS-CON Events announced today that Cloudscaling, the leading elastic cloud infrastructure company, will exhibit at SYS-CON’s 11th International Cloud Expo, which will take place on November 5–8, 2012, at the Santa Clara Convention Center in Santa Clara, CA.
Cloudscaling is the leading elastic cloud infrastructure company. Its Open Cloud System is the most reliable, scalable and production-grade solution available for building elastic clouds powered by OpenStack technology. Open Cloud System delivers the agility, performance and economic benefits of leading public cloud providers, but deployable in the customer’s datacenter and under their IT team’s control. Cloudscaling is backed by Trinity Ventures and is headquartered in San Francisco.


Cloud Expo Silicon Valley: Production Clouds Powered by OpenStack

OpenStack is powering production clouds. In this General Session at the 11th International Cloud Expo, Nebula CEO Chris C. Kemp will outline the open source cloud platform’s path to maturity, leading to production use in some of today’s most popular companies. He will moderate a panel discussion among featured OpenStack users PayPal, Cisco WebEx and eBay, asking the questions that are top of mind as your organization considers implementing OpenStack. The group will discuss how they’re using OpenStack today, ways the cloud platform is delivering real business value, and best practices for effective planning and implementation.


Objectivity to Exhibit at Cloud Expo Silicon Valley

SYS-CON Events announced today that Objectivity, Inc., makers of InfiniteGraph, The Distributed Graph Database™, will exhibit at SYS-CON’s 11th International Cloud Expo, which will take place on November 5–8, 2012, at the Santa Clara Convention Center in Santa Clara, CA. View demos of InfiniteGraph in the Big Data Pavilion, learn about InfiniteGraph’s complementary solution at the technical session presented by Brian Clark, VP Field Services, on Thursday 11/8, and watch Objectivity’s CEO Jay Jarrell participate in the Thursday 11/8 Lunchtime Power Panel!
As the only NoSQL distributed graph database, InfiniteGraph is a complementary technology to any polyglot Big Data strategy. The power of InfiniteGraph is its performance and scalability within distributed environments; in recent testing, InfiniteGraph scaled to more than a billion nodes and edges in less than 30 minutes. Companies including Deloitte, the US Government, the world’s largest telecom carrier, Osaka Gas Information Systems (OGIS), Nippon Systemware Co. Ltd. (NSW), and Software Research Associates (SRA) are leveraging InfiniteGraph for real-time analysis in multiple industries, including healthcare, network security, and homeland defense.


Cloud Expo Silicon Valley: OpenStack Momentum: Adopters Speak Up

A founding member of the OpenStack Community, Dell has driven significant interest in and adoption of OpenStack-powered cloud solutions among cloud service providers.
In his session at the 11th International Cloud Expo, Mike Fountaine, National Director, Cloud Solutions, at Dell, will discuss why DreamHost, a pioneer in OpenStack use, made this choice, what products and services they’re creating with it, and the upcoming Grizzly release with the new Horizon and Keystone components.
Mike Fountaine is National Director, Cloud Solutions, at Dell. He leads the Sales organization responsible for Dell’s OpenStack and Hadoop solutions for the Americas. His 25-year career in high-tech has spanned engineering, marketing and sales roles focused on delivering innovative computing and storage solutions to the smallest startups, the largest global companies, and everyone in between. With Dell for the better part of 13 years, Fountaine has spent the last three years with the Data Center Solutions team focused on architecting and delivering hyper-scale computing infrastructure, big-data solutions based on Hadoop, and revolutionary cloud solutions based on OpenStack.


Tinypass Launches Cloud-Based, Content Metering Tool for its Digital Paywall Platform

Tinypass, a flexible and consumer-friendly paywall platform for digital publishers, announced today a new content metering capability that allows publishers to provide a specified amount of free – or “metered” – access to their site’s content before visitors are asked to pay. The technology offers publishers a broad range of control in implementing a metered access model, enabling them to set the amount of free content users are able to view based upon either the number of articles/resources retrieved or a time frame. For example, a publisher could offer 24 hours of free access to let visitors preview the site’s offerings.

“We’re committed to empowering digital publishers with the ability to monetize their content in effective, affordable and consumer-friendly ways,” said David Restrepo, Chief Operating Officer of Tinypass. “What’s great about the new, metered access capability is that it allows content owners of any size to quickly and easily implement a customized access model that their visitors will already be familiar with from experience accessing content on the sites of several of the world’s largest publishers. Like the rest of our cloud-based technology platform, the metering tools are completely scalable and, with zero upfront costs, can be deployed by operators of all sizes.”

The metering functionality is built into Tinypass’ plugins for WordPress and Drupal, as well as Tinypass’ PHP- and Java-based APIs. Media owners Worldcrunch, Summit Business Media and the Chicago Phoenix are among the first publishers to begin rolling out the new meter tool on their sites.

In addition to offering site owners a choice between a quantity or time frame approach, the Tinypass metering tool can be implemented in two separate technology modes – a client-side mode or a client- and server-side mode. The client-side mode operates in a manner similar to the paywall offerings of the New York Times and The Financial Times and works through cookies added to the user’s browser. After users have accessed a preset amount of free content, they are asked to create an account to continue reading.

A more secure approach includes both client- and server-side user confirmation. The addition of a small amount of code to the publisher’s content management system (CMS) prevents users from circumventing the access meter simply by clearing the cookies in their browser.
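For illustration, here is a minimal sketch of how a combined client- and server-side meter might work, assuming a simple Flask app; the free-article quota, cookie names, and in-memory tally are hypothetical and are not Tinypass’ actual API:

```python
# Minimal sketch of metered access combining a browser cookie counter with a
# server-side tally, so clearing cookies alone cannot reset the meter.
# All names (FREE_ARTICLES, views_by_user, "meter" cookie) are illustrative.
from flask import Flask, request, make_response

app = Flask(__name__)
FREE_ARTICLES = 5          # hypothetical free-article quota
views_by_user = {}         # server-side tally keyed by a user/session id

@app.route("/article/<article_id>")
def article(article_id):
    user_id = request.cookies.get("uid", request.remote_addr)
    cookie_count = int(request.cookies.get("meter", 0))
    server_count = views_by_user.get(user_id, 0)
    # Trust the larger of the two counts.
    count = max(cookie_count, server_count)

    if count >= FREE_ARTICLES:
        return "Please create an account to continue reading.", 402

    views_by_user[user_id] = count + 1
    resp = make_response(f"Full text of article {article_id}")
    resp.set_cookie("meter", str(count + 1))
    return resp
```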

Using the Tinypass meter, publishers are free to make their own decisions regarding inbound referring links to their site. Visits that originate from social networking and other news-sharing sites can be counted against the meter or treated as separate interactions that do not count against free views.


AppFog Adds Redis, RabbitMQ Support Across Cloud Providers

AppFog today announced support for Redis and RabbitMQ, two of the most in-demand and widely used solutions for developing enterprise-class, web scale applications.

Used by both enterprise developers and those building cutting-edge new start-up technologies, Redis is an open source, in-memory key-value store that provides significant value in a wide range of important use cases. The popular and powerful NoSQL database has become a coding staple for developers worldwide, and companies whose websites serve massive numbers of customers and users depend on it for scalability; it has also been the most-requested feature among developers. Used by companies ranging from GitHub to Blizzard and from StackOverflow to Flickr, Redis has become a best practice for anyone looking to create solutions with excellent performance.

“Redis has become a required go-to tool for developers looking to solve performance issues for their applications,” said Krishnan Subramanian, Founder and Principal Analyst at Rishidot Research. “Interestingly, it is often recommended to add Redis to your stack to take advantage of it in cases where your existing database falls short. As a critical component of many highly performant stacks, Redis is rapidly becoming the standard for memory-based key-value stores.”
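As a rough illustration of the key-value and counter patterns described above, here is a minimal sketch using the redis-py client, assuming a Redis server reachable on localhost; the key names and values are placeholders:

```python
# Minimal sketch of Redis as an in-memory key-value store and atomic counter.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Simple key-value caching with an expiry
r.set("session:42", "alice", ex=3600)   # expires after one hour
print(r.get("session:42"))              # b'alice'

# Atomic counters, a common pattern for page views and rate limiting
r.incr("pageviews:home")
print(r.get("pageviews:home"))          # b'1'
```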

RabbitMQ is an open source enterprise message broker solution, enabling robust and easy-to-use messaging for applications. The messaging queue software provides support for a wide range of languages, platforms and third-party services. Supported by VMware, RabbitMQ is used by a huge number of developers and companies to develop robust and reliable applications.
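A minimal sketch of the basic publish/consume pattern, assuming the pika client and a RabbitMQ broker running on localhost; the queue name and message body are placeholders:

```python
# Minimal sketch of publishing and consuming a message with RabbitMQ via pika.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="tasks", durable=True)

# Producer: enqueue a unit of work
channel.basic_publish(exchange="", routing_key="tasks", body=b"resize image 42")

# Consumer: process messages as they arrive, acknowledging each one
def handle(ch, method, properties, body):
    print("received:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="tasks", on_message_callback=handle)
channel.start_consuming()
```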


Stratsec research reveals potential alarm for cloud security

A new piece of research from security provider Stratsec suggests that some cloud providers are unable to block malicious attacks, which could allow cyber attackers to infiltrate systems in a botnet-style attack.

As a result, according to the research from the Stratsec Winter School, there is an alarming number of reasons why mounting attacks from cloud systems is attractive to attackers: such a setup is relatively easy to build, costs less, and takes significantly less time than a traditional botnet.

In a traditional botnet setup, an attacker needs to know various programming languages in order to hack into systems. To set up a botCloud – defined by Stratsec as a group of cloud instances controlled by malicious entities to initiate cyber-security attacks – the attacker only needs to know the cloud provider’s API and have the requisite sysadmin knowledge.
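To illustrate how low that bar is, here is a hedged sketch of launching a fleet of instances through a provider API, using boto3 against EC2 purely as an example; the AMI id, instance type, and count are placeholders, and nothing here goes beyond ordinary provisioning:

```python
# Illustrative only: standing up many cloud instances takes little more than
# knowledge of the provider's API. Image id, type, and count are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",   # placeholder machine image
    InstanceType="t2.micro",
    MinCount=50,              # a fleet, requested in a single API call
    MaxCount=50,
)
print(len(response["Instances"]), "instances launching")
```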

Worryingly, the researchers stated that based on their …

Hurricane Sandy and NYC Data Centers: How They Prepped, What Happened

Water and servers don’t mix. Storms can do more than cut the power to a data center; they can also breach walls, flood, or otherwise damage a facility. A natural disaster like Hurricane Sandy can also make it difficult for staff even to be on site to do their jobs, and can delay the arrival of replacement parts, fuel for generators, and so on.

Two posts at Data Center Knowledge do a good job of outlining how NYC data centers prepared, and what actually happened.


Cloud: Where’s My Spigot?

I enjoyed reading Nicholas Carr’s “The Big Switch” when it came out a few years ago. It compared cloud services to water and electricity, and set me on course to write about cloud computing and its potential on a regular basis.

Now, as I consult with three different software development teams to develop and deploy their ideas via cloud computing, I have to ask, “When can I simply turn the spigot on and off?”

With public-cloud options, we still face buying instances, aka chunks of compute power. There are gaps we have to leap as we approach the limit of a specific instance, with proportional price jumps.

One blinding revelation I got from putting on my Captain Obvious hat was that buying compute power is more like construction than manufacturing; there are precious few economies of scale. So, for one project, as we project growth from 100 users to 1,000 and then 100,000, our cost per customer barely drops.
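A back-of-the-envelope sketch of that step-function pricing, using entirely hypothetical per-instance costs and capacities, shows how cost per user plateaus once you outgrow a single instance:

```python
# Hypothetical numbers only: illustrates why cost per customer barely drops
# when capacity must be bought in instance-sized chunks.
import math

INSTANCE_MONTHLY_COST = 180.0   # hypothetical price per instance
USERS_PER_INSTANCE = 250        # hypothetical capacity per instance

for users in (100, 1_000, 100_000):
    instances = math.ceil(users / USERS_PER_INSTANCE)
    monthly = instances * INSTANCE_MONTHLY_COST
    print(f"{users:>7} users -> {instances:>4} instances, "
          f"${monthly / users:.2f} per user per month")
```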

With private-cloud options, we face the normal IT challenge of capital expenditure. We don’t have an existing datacenter, so we’d be buying iron and software licenses the same as in the past. Sure, we should get higher usage from our resources, but we’d also need full-time people to keep our virtualization, stack, and UI running in joyful harmony around the clock.

One of our projects involves a presentation-driven business that has facilities that are open only a few hours per month. Why even put in a broadband connection at almost $1,000 a year if we can provide almost the same experience to our customers with a laptop, monitor, and DVD? Then after that, provision cloud on an irregular basis?

When can we simply turn the spigot on and off, and get a good “laminar flow,” as the engineers say?
