Appcara Debuts Its AppStack Release 2 Platform, A Dynamic Application Layer Above the Cloud

Appcara, maker of cloud application lifecycle solutions, announced a major new version of its AppStack application and portability platform that helps enterprises and service providers accelerate complex applications into the cloud and deliver new application-based revenue sources. Incorporating a real-time Dynamic Application Environment layer that eliminates the need for server templates or scripting, AppStack Release 2 changes the rules of cloud computing. With AppStack 2, enterprises and service providers gain exceptionally fast time-to-market for key business applications – maximizing time, knowledge and profitability.

While cloud deployments are common for simple web apps, the $385 billion enterprise application market remains the province of on-premises data centers, in large part because IT staffs lack the tools required to truly govern complex applications across private and hybrid clouds, whether running on VMware, Citrix, Amazon Web Services, Citrix CloudStack or VMware vCloud. Cloud app management solutions do exist, but they require deep technical knowledge of application stacks and considerable manual, error-prone effort to use.

AppStack provides an advanced application layer above public and private clouds – to capture application components, configurations and dependencies in real-time in the industry’s only dynamic Configuration Repository – thereby automating all aspects of the application lifecycle. By decoupling apps from low-level dependencies such as operating systems and machine images into its uniquely visual, intuitive, real time environment, AppStack 2 enables total application portability, even across different public clouds – for instance, from Amazon to Rackspace – as well as from public to private clouds. AppStack allows cloud computing to expand beyond simple, predefined workloads, and into the realm of serious enterprise applications – with a single pane of glass management interface, and eliminating vendor lock-in.

“For a long time enterprises have been seeking to leverage cloud environments for more complex business applications and take advantage of the flexibility, faster time to market and markedly lower cost structures delivered by the cloud. AppStack’s ability to capture and assemble these applications graphically, in real time, is something that businesses can truly leverage for improved efficiency and faster time to market,” said William Fellows, analyst at 451 Research.

A major new capability in AppStack 2 is the App Marketplace functionality that connects application publishers and consumers and provides a platform to enable usage-based applications, so that:

  • Corporations gain immediate access to business applications, enabling them to integrate applications without the need to purchase, install and configure apps as with packaged software.
  • ISVs can readily publish business applications for usage-based consumption.
  • Distributors can resell applications in a usage-based model in the cloud and move away from traditional box-based software sales.

“Many of our clients come from the Life Sciences industry where they have very sophisticated, mission critical applications running on their servers. Making a move to the cloud wasn’t an option worth considering until the AppStack solution came along,” said Tim Caulfield, chief executive officer at American Internet Services (AIS). “The idea that AppStack can do the heavy lifting on the back-end and provide the user with a very clean, single-pane-of-glass interface is quite appealing and fits nicely with our BusinessCloud1 offering.”

AppStack 2 builds upon existing AppStack patent-pending components that have been in use by service providers and enterprise customers, including:

  • Dynamic Application Engine – captures user-defined app environments, settings and relationships in real time, constructs data models and inserts them into the Configuration Repository. It automates provisioning, lifecycle management and portability.
  • Configuration Repository – stores application settings, dependencies and change records for all application workloads. This enables speedy provisioning of application workloads as well as the ability to decouple lower-level components such as operating systems for portability across cloud vendors.
  • Cloud Target Optimizer – maps provisioning elements and instructions to vendor-specific APIs and available capabilities.

“Enterprises need to get critical apps to market with the least effort and cost possible, and the solutions on the market today help with only simple, static environments,” said John Yung, founder and CEO of Appcara. “AppStack keeps it simple, fast, and visual to deploy and manage even complex applications in the cloud, so customers and service providers can lower their IT costs – even with enterprise applications – and focus on scaling their business.”

AppStack Release 2 will ship in July 2012. Appcara is showcasing the latest version of AppStack at Cloud Computing Expo New York at booth #257.


PACSGEAR Adds SeeMyRadiology.com to Open Image Exchange Network

PACSGEAR, a provider of imaging connectivity for electronic health records, today announced an agreement with SeeMyRadiology.com, a collaborative, cloud-based medical imaging solution, to participate in the Open Image Exchange, a cloud-based network designed to securely share medical images and results. PACSGEAR has incorporated the application programming interface (API) from SeeMyRadiology.com to upload images and reports to the company’s secure network. The Open Image Exchange will be demonstrated live at the upcoming SIIM Annual Meeting in Orlando, Florida.

“Our industry has a history of coming together to solve challenges like image exchange,” said Eli Rapaich, PACSGEAR’s CEO. “By incorporating SeeMyRadiology.com’s API, we strengthen the Open Image Exchange network and provide customers with options to share medical images electronically. We welcome the inclusion of all open APIs to accelerate image sharing and health information exchange,” Rapaich said.

“The Open Image Exchange network will deliver value throughout the entire process of care delivery, from healthcare organizations, to physicians, to patients,” said Willie Tillery, CEO at SeeMyRadiology.com. “Through partnering with PACSGEAR and serving as the first available integrated API partner in the Open Image Exchange, we are committed to making medical image exchange more broadly available, ultimately improving the quality of patient care.”

Open Image Exchange will be featured at PACSGEAR’s Booth #513 at the Annual Meeting of the Society for Imaging Informatics in Medicine (SIIM 2012) in Orlando, Florida from June 7 – 10, 2012.


Does PaaS Really Mean No-Ops?

Guest post by Yaron Parasol, Director of Product Management, GigaSpaces


I’d like to start with a brief overview of the evolution of the cloud – and why I think a new approach to PaaS solutions is needed – and the best scenarios for this to come into play.

First there was IaaS. Cloud was created with the notion of IT agility and cost reduction. You need servers? No problem! Forget about red tape, forget about sys admins. You create an account, and in a few clicks you select the hardware profile and OS image you need, and voila, your server is out there, ready for you to use. No hassles, immediate gratification.

Well, this is true as long as the images you get from your cloud provider match your needs. If you have custom needs, you will have to create and maintain your own image – so you still need the sys admin’s knowledge. However, we’re also seeing a change in methodology here: sys admins no longer need to install the servers once they’re up. Instead, they provide their expertise through approved and maintained images on the cloud. Application developers can choose the right image for them and, from that point on, create virtual machines in the quantity and hardware size they need for their applications.

Now let’s switch to PaaS. The idea of no-ops is the guideline for many of the existing PaaS offerings. Their use cases and features are all about developers. As Michael Cote put it:

“The point of PaaS is to make a developer’s life even easier: you don’t need to manage your cloud deployments at the lower level of IaaS, or even wire together your Puppet/Chef scripts. The promise of PaaS is similar to that of Java application servers: just write your applications (your business logic) and do magic deployment into the platform, where everything else is taken care of.”

Developers need to deploy applications to the cloud. They don’t want to care about the OS, but they also don’t want to care about platforms, load balancers and so on. They want to focus on what they know – writing code.

This is definitely a very productive approach for some developers and some applications. But reality shows that a big portion of cloud users don’t find these solutions a good fit for their purposes. These users continue to deploy and manage their applications on infrastructure clouds as if they were running on premises, leveraging the good old Ops folks. Others have taken a more agile approach, using configuration management and automation tools such as Chef.

These users chose not to use PaaS because they need flexibility and control. PaaS doesn’t seem to answer a lot of the current IT challenges – see for example here and here.

Existing applications built on a variety of platforms, some using extremely complex topologies (like Hadoop or a sharded MongoDB setup), are some of the reasons why PaaS won’t cut it for many users.

They would like to install their chosen versions of their preferred platforms, use their OS image, with their configured security groups, tune them in a manner that fits their applications and deploy their applications with the topology they choose.

Chef, like other DevOps tools, goes a long way here and helps achieve that flexibility while re-establishing a new, agile relationship between Dev and Ops. Ops bring in their knowledge and skill set, but they document it and maintain it as code in a more structured and configurable way. This in turn gives the application guys the agility they need, putting a complex application to work in a single click and eliminating the platform black-box experience they dislike so much.

Application vs. Separate Platforms

However, DevOps tools still fall short when it comes to managing applications. They are not aware of the application’s inner dependencies. They don’t know how to monitor the application, scale it or even run a complex multi-tier recovery process. Most of these tools can’t even provision an entire application on the cloud.

So what if you could extend the DevOps experience to apply to the entire application lifecycle?

What if you could use Chef and the like for installation but not stop there – automating things like failover and recovery, and even monitoring and scaling? You would still have all the Ops wisdom tailored to each of your applications, and you would be able to automate any of your existing applications without re-architecting them.

This is exactly our take on PaaS: a DevOps-style process that can describe any application’s lifecycle on any runtime environment, providing full automation without taking away control. And this is exactly what we set out to do with our open source PaaS platform, Cloudify – borrowing the idea of recipes but extending it to be application-centric rather than infrastructure-centric.

The recipe describes the application’s dependencies and lifecycle events externally, without any code or architecture change:

lifecycle {
    init      "mongod_install.groovy"
    start     "mongod_start.groovy"
    postStart "mongod_poststart.groovy"
}


See how to create your own recipes here.

The recipe maps events like installation, start, post-start and stop to scripts or Chef cookbooks, exposes Groovy and REST interfaces for context sharing and dynamic configuration, and even provides a way to describe monitoring techniques, scaling rules and process “liveness” detection.
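To make that concrete, here is a minimal sketch of how a recipe might grow beyond the three lifecycle scripts above to cover liveness detection and scaling. The service wrapper and the keyword names used here (startDetection, scalingRules and so on) are illustrative assumptions rather than a verbatim excerpt of the Cloudify DSL – the exact syntax varies by version, so treat this as the shape of the idea, not a reference.

// Illustrative sketch only – keywords such as startDetection and scalingRules
// are assumptions and differ across Cloudify releases.
service {
    name "mongod"
    numInstances 2          // initial number of instances to provision

    lifecycle {
        init      "mongod_install.groovy"    // install binaries, or delegate to a Chef cookbook
        start     "mongod_start.groovy"      // launch the mongod process
        postStart "mongod_poststart.groovy"  // post-start wiring, e.g. join a replica set

        // process "liveness" detection: the instance only counts as started
        // once the database port actually accepts connections
        startDetection {
            ServiceUtils.isPortOccupied(27017)
        }
    }

    // a simple threshold-based scaling rule driven by a monitored metric
    scalingRules ([
        scalingRule {
            serviceStatistics {
                metric "open-connections"
                movingTimeRangeInSeconds 30
            }
            highThreshold { value 80; instancesIncrease 1 }   // scale out under load
            lowThreshold  { value 10; instancesDecrease 1 }   // scale back in when idle
        }
    ])
}

The point is that everything Ops knows about running the service – how to install it, how to tell it is alive, when to add an instance – sits next to the application as configuration, rather than being locked inside an opaque platform.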

So think about it this way: while most PaaS services come with a catalog of predefined application blueprints, allowing the user to control only the application code, this new kind of PaaS stack allows the user to define the blueprint itself – that is, any blueprint!

So, the recipes combine the Ops expertise with the power of automation for the developers. They completely remove the lock-in risk from the application-stack perspective.

You can read more about the recipes, download Cloudify and experience it yourself, or even join the community and influence the roadmap at cloudifysource.org. Or you can come see more when we present at HP Discover next week in Las Vegas.


Cloud Computing: Amplidata Defines Big Data Best Practices at Cloud Expo

Amplidata, an innovator in optimized object-based storage technology, will define best practices for storing Big Unstructured Data at Cloud Expo next week in New York City, June 11–14.
Big Data has come a long way to encompass a variety of subsets, including the fast-growing Big Unstructured Data domain, where Big Entertainment, Big Science and many other Big Demands are quickly overwhelming enterprise data stores.
At Cloud Expo, Amplidata’s Tom Leyden will discuss Big Unstructured Data – the increasingly prevalent case in which organizations generate petabytes of large, unstructured files. Leyden will explain a major constraint of traditional data storage approaches – the file system – and how it limits many organizations, particularly those with cloud computing deployments.

read more

Cloud Computing: If a Tree Falls in Your Network, Does Anybody Hear?

Using cloud-based security to separate log data from actionable events: the ability to employ situational awareness across all the silos of an enterprise creates the necessary context to break through the white noise of network traffic.
I recently came across an article regarding the difficulty of separating log data from actionable events. The issue at hand is that a network is pinged potentially millions of times a day. Most of it is innocuous – the legitimate logging on and off of employees, genuine data transactions, and so on. But what gets lost amid all this “white noise” are the red flags that indicate breaches or, worse, malicious activities.
It can be overwhelming. In fact, the article “Struggling to Make Sense of Log Data” points to a SANS Institute study finding that the biggest critical concern for security is the ability to discern usable and actionable data from log files.

read more


CDW Demonstrates Cloud Computing Options for HP CloudSystem Solutions

CDW LLC (CDW), a provider of technology solutions to business, government, education and healthcare, announced it has been designated an HP Cloud Center of Excellence, based upon the expertise of its solution architects and the integration of HP systems into CDW’s newly opened Technology Experience Center. The center offers customers live cloud computing demonstrations, which may be viewed in real time from any CDW or customer location, as well as in person at the center.
CDW’s HP Cloud Center of Excellence features HP CloudSystem, a complete and integrated solution to build and manage services across private, public and hybrid cloud environments. In addition to HP servers, the center also includes HP storage and networking solutions, including HP TippingPoint network security offerings.

read more

HP Names Veghte COO; Imports New Software Savior

HP Wednesday named Bill Veghte chief operating officer, a newly created position that relieves the former Windows executive of running the company’s poorly functioning software operation two years and three CEOs after he got the job.
Veghte is keeping the corporate strategy charter he was given a few months ago, as well as responsibility for Autonomy, HP’s great, problematic British acquisition, which he was handed last week when Autonomy founder Mike Lynch was fired.
HP doesn’t have much of a strategy, at least nothing that doesn’t resemble what everybody else is doing and that – as of last week – is to focus on Big Data, cloud and security – all of which depend on software.

read more

Moving to the Cloud and Need to Protect Sensitive Financial Data?

Among small and mid-size businesses, the refrain is increasingly familiar: “We’re responsible for sensitive financial data, and our customers trust us to protect it at all times. We want to move to the cloud but have concerns about the risks in handing this data off to another company. If we make the switch, how can we be certain our data will be protected?”
According to Adam Stern of Infinitely Virtual, a leading provider of virtual server cloud computing services for growing companies, the rationale for migrating to the cloud is sound, the questions are valid, and the answers are becoming ever more straightforward.

read more

CloudOne Provides Hourly Performance and Load Testing from the Cloud

CloudOne, a Software-as-a-Service (SaaS) provider, on Monday announced that it is expanding its hosted SaaS testing program to offer performance, load and stress tests in hourly increments. By expanding its monthly offerings to include hourly increments of consumption, CloudOne lets test teams execute tests on demand in shorter, more flexible and affordable bursts of time that are aligned with test-execution best practices.

CloudOne is working with IBM software to provide a unique hourly performance offering designed for testing professionals who require periodic heavy load testing but quickly discover that hardware and licensing costs for typical peak load testing become very expensive, even in monthly increments. This hourly approach allows users to execute essential large-scale load tests when needed, accelerating the discovery of weaknesses in the application under test without having to search for available resources or invest in additional hardware and bandwidth.

read more