WNS Partners with GT Nexus for Business Process, Supply Chain Services

WNS, a provider of global Business Process Outsourcing (BPO) services, today announced it has entered into a strategic partnership with GT Nexus, a market-leading cloud-based supply chain service provider, to deliver platform-based BPO services and solutions to the Shipping and Logistics industry. Under the agreement, WNS and GT Nexus will work jointly to provide shippers, forwarders, 3PLs and carriers with improved quality of service for their end-customers and reduced costs in areas such as documentation, freight management, contracts, pricing and analytics.

“The shipping and logistics industry has been facing multiple pressures created by unfavorable economic conditions,” said Keshav R. Murugesh, Group CEO, WNS. “There is a strong need within the industry for managed services to help shift inefficient and manual transactions onto a global digital platform. We believe that the integration of GT Nexus’s proven cloud-based technology platform with WNS’ deep domain knowledge and operational process excellence will help companies effectively manage this transition.”

“One of the big opportunities for BPO providers is to build entire practices and services around existing, mature cloud technology platforms,” said Aaron Sasson, CEO of GT Nexus. “Through this partnership, WNS is taking advantage of our cloud supply chain platform and offering a much needed new service for the international logistics industry. WNS has made it simple for companies in shipping and logistics to move to a complete digital transaction business process.”

Jaison Augustine, Senior Vice President & Segment Head – Shipping & Logistics at WNS added, “Our ‘BlackBox’ solution will offer a 40-60 percent reduction in costs incurred in the production of master and house bills of lading for NVOCCs and freight forwarders. It will also help increase the digitization of shipping instructions, which benefits shippers and carriers alike. By introducing cutting-edge tools like optical and intelligent character recognition, scanning & imaging, and workflow solutions, our managed services approach will reduce cycle times and enable error-free documentation at a fraction of the current cost.”

WNS currently offers a broad spectrum of services to the global shipping & logistics industry, including export and import documentation, freight audits, driver logs, trip records, Finance & Accounting and analytics. WNS’s clients in this vertical include ocean carriers, 3PLs, express companies, truckers and shippers.


GraphOn Adds iPhone to iPad Windows Application Access App


GraphOn Corporation today announced its new GO-Global iOS Client. Available immediately as a free downloadable app from Apple’s App Store, the new GO-Global iOS Client is used in conjunction with GraphOn’s GO-Global Windows Host solution to seamlessly deliver Windows applications to Apple iPad, iPhone, and iPod Touch users.

The new GO-Global iOS Client replaces GraphOn’s popular GO-Global iPad Client. New and improved features include support for the iPhone and iPod Touch platforms in addition to existing iPad support, enhanced navigation, voice-to-text input, auto-resizing on device rotation, increased performance, plus iPhone and iPad 3 Retina Display screen optimization.

“iPad, iPhone, and iPod Touch users can now interact with their favorite Windows programs using just their fingers,” said Christoph Berlin, GraphOn’s vice president of product management and marketing. “Our GO-Global iOS Client uses intuitive, multi-touch gestures to achieve popular mouse and keyboard operations. Windows applications appear on the iOS device just as though they were running locally, retaining all features, functions, and branding.”

GO-Global Windows Host securely delivers server-resident Windows applications to virtually any location, platform, and operating system. Current GO-Global Windows Host customers can gain immediate access to their remoted Windows programs from their iOS device by simply downloading the free GO-Global iOS Client from the App Store at http://itunes.apple.com/app/go-global/id441263366?mt=8. iPad, iPhone, and iPod Touch users who are not currently using GO-Global Windows Host, but who wish to evaluate its remote access capabilities, can download the free GO-Global iOS Client and then immediately connect to GraphOn’s online demonstration server running several popular Windows programs. No sign-up or registration is required.

GraphOn’s GO-Global iOS Client requires Apple iOS 5.0 or later.


Automation and Orchestration: Why What You Think You’re Doing is Less Than Half of What You’re Really Doing

One of the main requirements of the cloud is that most—if not all—of the commodity IT activities in your data center need to be automated (i.e. translated into a workflow) and then those singular workflows strung together (i.e. orchestrated) into a value chain of events that delivers a business benefit. An example of the orchestration of a series of commodity IT activities is the commissioning of a new composite application (an affinitive collection of assets—virtual machines—that represent web, application and database servers as well as the OSes and software stacks and other infrastructure components required) within the environment. The outcome of this commissioning is a business benefit, in that a developer can now use those assets to create an application that produces revenue, decreases costs or manages existing infrastructure better (the holy trinity of business benefits).
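The idea of stringing singular workflows into an orchestration can be sketched in code. This is a minimal toy illustration only; the step names and the commissioning spec are hypothetical placeholders, not any particular orchestration tool’s API:

```python
# Minimal sketch: each function is one "singular workflow"; the
# orchestration strings them into a value chain that ends with
# usable assets. All names here are invented for illustration.

def provision_vms(spec):
    """Workflow: allocate the virtual machines for each tier."""
    return {name: f"vm-{name}" for name in spec["tiers"]}

def install_stack(vms):
    """Workflow: lay down the OS config and software stack on each VM."""
    return {name: f"{vm}:configured" for name, vm in vms.items()}

def register_assets(configured):
    """Workflow: record the finished assets so developers can use them."""
    return sorted(configured.values())

def commission_composite_app(spec):
    """Orchestration: the workflows chained into one value chain."""
    vms = provision_vms(spec)
    configured = install_stack(vms)
    return register_assets(configured)

result = commission_composite_app({"tiers": ["web", "app", "db"]})
print(result)  # one configured asset record per tier
```

The point of the chaining is that each workflow’s output is the next one’s input, so the whole commissioning runs end to end without manual handoffs.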

When you start to look at what it means to automate and orchestrate a process such as the one mentioned above, you will start to see what I mean by “what you think you’re doing is less than half of what you’re really doing.” Hmm, that may be more confusing than explanatory, so let me reset by first explaining the generalized process for turning a series of commodity IT activities into a workflow and, in turn, an orchestration; then I think you’ll better see what I mean. We’ll use the example from above as the basis for the illustration.

The first and foremost thing you need to do before you create any workflow (and orchestration) is to pick a reasonably encapsulated process to model and transform (this is where you will find the complexity that you don’t know about…more on that in a bit). What I mean by “reasonably encapsulated” is that there are literally thousands of processes, dependent and independent, going on in your environment right now, and based on how you describe them, a single process could be either A) a very large collection of very short process steps, or Z) a very small collection of very large process steps (and all letters in between). A reasonably encapsulated process is somewhere on the A side of the spectrum, but not so far over that there is little to no recognizable business benefit resulting from it.

So, once you’ve picked the process that you want to model (in the world of automation, modeling is what you do before you get to do anything useful ;) ), you then need to analyze all of the process steps required to get you from “not done” to “done”…and this is where you will find the complexity you didn’t know existed. From our example above I could dive into the physical process steps (hundreds, by the way) that you’re well aware of, but since you already know those there’s no sense repeating them. Instead, I’ll highlight some areas of the process that you might not have thought about.

Aside from the SOPs, the run books and the build plans you have for the various IT assets in your environment, there is probably twice that much “required” information residing in places not easily reached by a systematic search of your various repositories. Those information sources and locations are called “people,” and they likely hold over half of the information required for building out the assets you use, in our example, the composite application. Automating the process steps that exist only in those locations is problematic (to say the least), not just because we haven’t quite solved the direct computer-to-brain interface, but because it is difficult to get an answer to a question we don’t yet know how to ask.

Well, I should amend that to say “we don’t yet know how to ask efficiently,” because we do ask similar questions all the time, but in most cases without context, so the people being asked can seldom answer, at least not completely. If you ask someone how they do their job, or even a small portion of their job, you will likely get a blank stare for a while before they start in on how they arrive at 8:45 AM and get a cup of coffee before they start looking at email…well, you get the picture. Without context, people rarely can give an answer because they have far too many variables to sort through (what they think you’re asking, what they want you to be asking, why you are asking, who you are, what that blonde in accounting is doing Friday…) before they can even start answering. Now, if you give someone a listing or scenario to which they can relate (when do you commission this type of composite application, based on this list of system activities and tools?), they can absolutely tell you what they do and don’t do from the list.

So context is key to efficiently gaining the right amount of information related to the chain of activities you are endeavoring to model, but what happens when (and this actually applies to most cases) there is no ready context in which to frame the question? Well, you are then left with observation, either self-observation or external, where all process steps are documented and compiled. Obviously this is labor-intensive and time-inefficient, but unfortunately it is the reality, because probably fewer than 50% of systems are documented or have recorded procedures for how they are defined, created, managed and operated, relying instead on institutional knowledge and processes passed from person to person.

The process steps in your people’s heads, the ones that you don’t know about—the ones that you can’t get from a system search of your repositories—are the ones that will take most of the time to document, which is my point (“what you think you’re doing is less than half of what you’re really doing”) and where a lot of your automation and orchestration efforts will be focused, at least initially.

That’s not to say that you shouldn’t automate and orchestrate your environment—you absolutely should—just that you need to be aware that this is the reality and you need to plan for it and not get discouraged on your journey to the cloud.

Blackbaud Acquires Convio

Blackbaud, Inc. today announced that it has completed its acquisition of Convio, Inc., a leading provider of on-demand constituent engagement solutions. Under the terms of the merger agreement, Blackbaud paid an aggregate purchase price of approximately $325 million. Blackbaud financed the deal through a combination of cash and borrowings from its credit facility.

“This is an exciting day for the Blackbaud and Convio teams. Together, we can build better solutions for nonprofits, and that’s what drives us,” said Marc Chardon, Blackbaud’s chief executive officer. “Convio’s strengths in online and social marketing, and subscription and cloud-based offerings complement ours, and will accelerate our ability to deliver more to the nonprofit sector.”

“Having worked with both Convio and Blackbaud over the past few years, we are pleased to see this new development,” said Major George Hood, National Community Relations Secretary for The Salvation Army. “Being able to work with one vendor across multiple channels of engagement will be a benefit to The Salvation Army, and we are confident Blackbaud will continue to help us more effectively engage with our donors and supporters.”

Originally announced on January 17, 2012, the acquisition followed the completion of the tender offer Blackbaud made through its wholly owned subsidiary, Caribou Acquisition Corporation, for all the outstanding shares of Convio common stock for $16 per share, net to the seller in cash, without interest and less any applicable withholding taxes. Immediately prior to the merger, Caribou Acquisition Corporation held approximately 90.4% of Convio’s outstanding common stock. As a result, Blackbaud was able to complete a “short-form” merger under Delaware law where all outstanding shares of Convio common stock that were not previously tendered (other than shares held by Caribou Acquisition Corporation or Convio stockholders that properly exercise appraisal rights under Delaware law) were converted into the right to receive the same consideration paid to stockholders in the tender offer. Blackbaud assumed Convio equity awards that were unvested as of closing. Convio’s common stock has ceased trading on the Nasdaq Global Select Market.

Blackbaud plans to support Convio’s current offerings, and the companies’ combined research and development (R&D) teams will work with customers to improve and extend current products and build new offerings. Blackbaud plans to keep Convio’s current office structure, adding key offices in the Bay Area and Austin. Gene Austin, Convio’s former president and CEO, will lead the enterprise customer business unit at Blackbaud, reporting to Marc Chardon. “We are excited to work together to bring nonprofits the technology they need at a faster pace than either of us could have separately,” said Austin.

Easter Seals, an organization focused on providing exceptional services, education, outreach, and advocacy to people living with autism and other disabilities, has worked closely with both Blackbaud and Convio for nearly a decade. “In that time, each company has developed core, and in many cases distinct strengths, in the fundraising and marketing arenas,” said Steve Bergman, Easter Seals’ CIO. “Blackbaud’s acquisition of Convio will hasten the creation of new products that could enhance the effectiveness of nonprofits like Easter Seals.”


The Definition of a Converged Infrastructure

There’s been a cacophony of hyperbole and at times marketing fluff from vendors and analysts with regard to Reference Architectures and Converged Infrastructures. As IBM launched PureSystems, NetApp & Cisco decided it was also a good time to reiterate their strong partnership with FlexPod. In the midst of this, EMC decided to release their new and rather salaciously titled VSPEX. From the remnants and ashes of all these new product names and fancy launch conferences, the resultant war blogs and Twitterati battles ensued. As I watched on poignantly from the trenches in an almost Siegfried Sassoon moment, it was quickly becoming evident that there was now an even more ambiguous understanding of what distinguishes a Converged Infrastructure from a Reference Architecture, what its relation is with the Private Cloud and, more importantly, whether you, the end user, should even care.


Using an In-Memory Data Grid to Scale Cloud-Based Apps at Cloud Expo NY

The elastic resources offered by cloud computing have created an exciting opportunity for applications to handle very large workloads. However, writing applications that span an elastic pool of virtual servers creates huge challenges for developers. How can these virtual servers easily and efficiently share application data while avoiding scalability bottlenecks? The answer lies in using in-memory data grids (IMDGs) to provide a powerful, easy-to-use, and highly scalable storage layer.
IMDGs unlock the potential for application scaling while eliminating the cost and performance impact of using blob stores and database servers. IMDGs also provide a powerful platform for fast data analysis and enable transparent data migration between on-premise sites and the cloud.
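The core scaling idea behind an IMDG is partitioning application data across the pool of servers so no single store becomes a bottleneck. This is a toy sketch of that partitioning idea only; real IMDG products add replication, eviction, and network transport, and the class and method names here are invented for illustration:

```python
# Toy illustration of hash-based partitioning in an in-memory data grid.
# Each "node" stands in for the in-memory store on one virtual server.

class ToyDataGrid:
    def __init__(self, num_nodes):
        self.nodes = [dict() for _ in range(num_nodes)]

    def _node_for(self, key):
        # A deterministic hash routes every key to exactly one partition,
        # so reads and writes for a key always hit the same node.
        return self.nodes[hash(key) % len(self.nodes)]

    def put(self, key, value):
        self._node_for(key)[key] = value

    def get(self, key, default=None):
        return self._node_for(key).get(key, default)

grid = ToyDataGrid(num_nodes=4)
grid.put("session:42", {"user": "alice"})
print(grid.get("session:42"))
```

Because keys are spread across nodes, adding servers to the grid adds both memory capacity and throughput, which is what lets an elastic pool of virtual servers share data without a central bottleneck.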


Xerox’s Cloud Computing Capabilities to Aid Airline Safety

“Airlines can now leapfrog to the cloud to expedite their communications and do so at costs much lower than maintaining existing mainframe systems,” said Ken Stephens, senior vice president of cloud services, Xerox, as Xerox and AvFinity announced an agreement to provide seamless transmission of flight-critical communications.

“There is no room for error in ensuring safety in the skies,” Stephens added.


Multi Community Cloud Services – Moving Beyond the Community Cloud

Community cloud services are services shared by multiple members of a community. They centralize common functions and can satisfy specific mission requirements, such as particular policies and compliance obligations. Community clouds leverage the benefits of a public cloud while meeting those specific requirements, which makes them a perfect fit for communities with focus areas such as performance, auditing or policies that need to be applied across the community.
A major benefit of community clouds is that many organizations can come together and share the cost of the cloud, resulting in significant savings compared to each organization setting up and supporting its own services. Community cloud is a term that is discussed a lot, but I have a new term for a cloud that combines multiple communities: the “multi-community cloud.” Multi-community cloud services span communities with similar functions and interests. For example, payment cloud services used by multiple communities (such as banking, finance and insurance) to make payments to individuals and companies would fall under multi-community cloud services. These services have to be broad enough to encompass the functions of the multiple communities. With the rapid growth of the community cloud, multi-community clouds will evolve to a higher level as communities identify and realize the benefits of leveraging many similar capabilities, even though some policies may differ.
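The payment example above can be sketched as a single shared service that applies per-community policy. The communities, limits, and field names below are entirely hypothetical, invented only to illustrate one service spanning communities whose policies differ:

```python
# Hypothetical sketch: one shared "payment" service used by several
# communities, each with its own (invented) policy parameters.

COMMUNITY_POLICIES = {
    "banking":   {"max_payment": 1_000_000, "audit": True},
    "insurance": {"max_payment": 250_000, "audit": True},
    "retail":    {"max_payment": 50_000, "audit": False},
}

def make_payment(community, amount):
    """Run one payment through the shared service under the
    calling community's own policy."""
    policy = COMMUNITY_POLICIES.get(community)
    if policy is None:
        raise ValueError(f"unknown community: {community}")
    if amount > policy["max_payment"]:
        return {"status": "rejected", "reason": "over community limit"}
    return {"status": "approved", "audited": policy["audit"]}

print(make_payment("banking", 500_000))
print(make_payment("retail", 75_000))
```

The shared core (the payment logic) is broad enough for every community, while the policy table carries the differences, which is the essence of the multi-community cloud idea.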


Canada Cloud Network – A Triple Helix Design

The idea behind the Canada Cloud Network has been to identify, review and then utilize a number of Innovation Best Practices to design how (and why) this forum works.

In particular, one key goal is to implement a ‘Next Generation Cluster’, a thought-leadership piece from Cisco on how the original cluster model from Michael Porter can be upgraded through new Cloud 2.0 technologies.


Cloud Expo New York Speaker Profile: Mårten Mickos – Eucalyptus Systems

With Cloud Expo 2012 New York (10th Cloud Expo) now five weeks away, what better time to introduce you in greater detail to the distinguished individuals in our incredible Speaker Faculty for the technical and strategy sessions at the conference…

We have technical and strategy sessions for you every day from June 11 through June 14 dealing with every nook and cranny of Cloud Computing and Big Data, but what of those who are presenting? Who are they, where do they work, what else have they written and/or said about the Cloud that is transforming the world of Enterprise IT, side by side with the exploding use of enterprise Big Data – processed in the Cloud – to drive value for businesses…?
