In 2013, demand for cloud computing kept climbing, with enterprise after enterprise committing to a rapid and vigorous redeployment of resources toward cloud-based solutions. Gigaom Research surveyed both mainstream and leading-edge users in the second quarter of 2014, and the results suggest that another wave of cloud technology investment is anticipated over the next one to two years.
Gigaom’s analysis of the survey identified key drivers of and inhibitors to cloud adoption, as well as workload migration patterns.
VDI: Why is it still virtually untapped?
By Giorgio Bonuccelli, Marketing Director
Cloud computing and virtualization technologies have undoubtedly brought about a revolution in the IT world in the past decade. With the increase in BYOD environments, a work-from-home culture and the drive for resource optimization, desktop virtualization and remote application delivery have been welcome solutions for modern businesses of all sizes – often being their first foray into cloud computing.
IT analysts had predicted that 2012 would be the ‘year of VDI’ (virtual desktop infrastructure). The same forecast was made for 2013, and again for 2014. However, while desktop virtualization is perceived as a mature market, it is far from being so – VDI penetration is still less than 2% of all desktops worldwide. VDI has been touted as the short cut to cloud computing, so what has put a sea of traffic cones in the way?
Desktop virtualization trends
Despite all the discussions and predictions, it is surprising to discover that the penetration of desktop virtualization is not as extensive as expected. Most companies are not using VDI, and those that are use it for only a small part of their business.
According to DataCore’s State of Virtualization survey from March 2013, 55% of organizations have not implemented any desktop virtualization, while only 11% have virtualized up to a quarter of their desktops. According to Gartner, VDI penetration was around 1.5% before 2012 and is expected to grow to 8-15% by 2015. Clearly, these statistics highlight both a major shortfall and significant potential in the market.
To reap the rewards of desktop virtualization, organizations need to understand how this technology works and the benefits it offers to businesses.
Benefits of desktop virtualization
Coupled with application virtualization and Windows user virtualization, desktop virtualization offers centralized desktop management. Each desktop is virtualized and offered in an isolated state, resulting in highly secure networks.
Employees moving between work locations can access the same work environment, data and applications. If a user loses a device, they can simply connect to the server from another device, as all components are readily available at login. All data are saved in the data center, so lost or stolen devices have little effect on the organization’s data integrity (assuming they are secured correctly), and recovery from any disaster is easily achievable.
Potential barriers for desktop virtualization
With optimized resources, lower TCO (total cost of ownership) and highly scalable desktop solutions, desktop virtualization is a very attractive approach to IT. As we have seen, few organizations have implemented this technology widely, so what are the issues that are acting as potential barriers?
Cost / Return on investment (ROI)
It is commonly thought that desktop virtualization reduces infrastructure costs when compared to other network solutions. However, there are certain caveats attached. The costs saved on desktop hardware and infrastructure are offset by the more expensive server infrastructure required, including storage and network solutions. The network must be always on, and graphically rich applications demand more bandwidth and lower latency to provide a rich user experience – all of which adds to the infrastructure cost. The solution is to plan the VDI environment effectively, so that more desktops are delivered from the available infrastructure and resources are optimized.
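To make that planning concrete, a back-of-the-envelope density estimate is often the starting point. The sketch below is only illustrative: the per-desktop RAM, desktops-per-core and overhead figures are assumptions for the example, not vendor guidance, and a real sizing exercise must also account for storage IOPS and network headroom.

```python
def desktops_per_host(host_ram_gb, host_cores,
                      ram_per_desktop_gb=4, desktops_per_core=6,
                      hypervisor_overhead_gb=16):
    """Rough VDI density estimate: the tighter of the RAM and CPU limits wins."""
    by_ram = (host_ram_gb - hypervisor_overhead_gb) // ram_per_desktop_gb
    by_cpu = host_cores * desktops_per_core
    return int(min(by_ram, by_cpu))

# Example: a 256 GB, 32-core host under these assumed ratios is RAM-bound at 60 desktops.
print(desktops_per_host(256, 32))  # -> 60
```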
Uninterrupted network connection
The VDI environment also requires an always-on network connection. As desktops, applications and data are delivered from a centralized server, any network issue denies access to company resources. For this reason, organizations have to provide highly reliable network solutions with contingencies for possible outages.
Complexity
Compared to an RDP (Remote Desktop Protocol) network, a VDI deployment is a complex undertaking. It has to be well planned and effectively implemented. There are several aspects to consider, such as which components are to be virtualized, which types of users require virtualization, and the total ROI. Moreover, adequate bandwidth and low latency must be ensured, based on the organization’s network requirements.
How can these barriers be overcome effectively to improve ROI?
Looking at present desktop virtualization trends, it is clear that many organizations are not yet reaping the full potential of desktop virtualization. However, it is important to understand that desktop virtualization is designed with a specific purpose: to deliver a rich user experience with an easy and scalable desktop management environment, and to deliver medium- to long-term ROI.
Desktop virtualization adds value to businesses in a number of ways:
ROI – the investment in server hardware for desktop virtualization results in a customized user experience and better (and more reliable) management of desktops from a centralized location.
Efficiency – administrators can deliver virtual applications to virtual machines (VMs) on the go.
Offline virtualization – running applications inside a VM means that you can securely access corporate information; additionally, you can work offline when the network connection is not available and synchronize later.
VDI with 2X Remote Application Server
2X Remote Application Server (2X RAS) allows companies to test and experiment with the advantages of VDI. 2X RAS lets you deliver remote desktop and virtual desktop services to your network through the same console. One of the reasons discouraging companies from fully embracing VDI is the initial cost, with CEOs and CIOs reluctant to discard previous investments in order to migrate to a new paradigm. With 2X RAS you can easily implement VDI in specific areas of your business, as it is possible to rely on different hypervisors at the same time. For example, it is possible to implement VDI side by side with remote desktops and virtual applications. IT administrators can migrate part of the infrastructure and perform stress testing on the network, with immediate gains in flexibility through the hybrid cloud infrastructure.
2X RAS easily delivers Windows applications hosted on hypervisors and Windows remote desktop servers to anyone anywhere, using any type of operating system, computer or mobile device. By hosting applications in the cloud, businesses benefit from reduced administrative overheads and less helpdesk support, with easy control over access to applications, and assurance that all users are using the latest and most secure versions of applications.
Conclusion
Not all desktop virtualization systems are equal, and storage solutions differ too. What works well for one organization cannot be assumed to be the ideal fit for another. A comprehensive desktop virtualization plan should account for factors such as IT requirements, infrastructure, user experience and application workload.
With the evolution of software delivery models, a company’s CIO now has multiple options to choose from. Desktop virtualization is sure to yield good ROI in the long run, notwithstanding its cost and complexity. With proper planning, complexity can be replaced with ease of management and highly scalable, agile, and cost-effective virtualization solutions for businesses of all sizes – effectively clearing the road up ahead for your 21st century corporate network.
HP buys Eucalyptus to offer cloud compatibility products
Eucalyptus is known in the cloud arena as an open source offering that gives private cloud engineers the tools they need to work seamlessly with Amazon Web Services APIs. Eucalyptus helps organizations pool resources such as compute, network and storage, which in turn gives end users the ability to tap into on-demand resources within hybrid and private clouds.
It is being reported that HP has purchased Eucalyptus for an undisclosed amount. HP has largely shied away from mergers and acquisitions, considering that its last notable acquisition was quite the flop: HP once ponied up $11 billion for Autonomy, which proved to be a catastrophe according to some analysts. HP’s acquisition of Eucalyptus seems much more methodical than the Autonomy deal, and the PC giant looks to stretch further into the cloud by picking up one of the marquee names in hybrid and private cloud.
“We want to be able to go to those customers and say, ‘When you go with HP Helion, we give you that level of choice.’ We’re not going to try to have you bet just on our public cloud,” says Bill Hilf, SVP at HP Cloud.
Many analysts describe this deal as having multiple benefits for HP. Not only can HP raise the profile of its private cloud offerings, the HP team also gains valuable experience by onboarding the talent of the Eucalyptus team. Marten Mickos, the CEO of Eucalyptus, will transition into a Vice President and General Manager role reporting directly to HP CEO Meg Whitman. Mickos is notable for his role in the development of MySQL, which was purchased by Sun Microsystems for nearly $1 billion.
Eucalyptus, which was founded in 2007, will retain its Goleta, CA office while operating under the HP cloud brand. Before the acquisition, Eucalyptus had raised over $55 million in venture capital funding. Although dollar figures have not been announced, many speculate that the purchase of Eucalyptus was at least a nine-figure deal.
‘Internet of Things’ Sponsorship Opportunities at @ThingsExpo
Launched this June at the Javits Center in New York City with more than 6,000 delegates in attendance, making it the largest IoT event in the world, the 2nd international Internet of @ThingsExpo will take place November 4-6, 2014, at the Santa Clara Convention Center in Santa Clara, California, with an estimated attendance of more than 7,000 over three days. @ThingsExpo is co-located with the 15th international Cloud Expo and will feature technical sessions from a rock star conference faculty and the leading IoT industry players in the world. In 2014, more than 200 companies will be present on the @ThingsExpo show floor, including global players and the hottest new technology pioneers.
Forbes Cloudwashing Article: A Few Key Points
By John Dixon, Consulting Architect
I came across an article from a couple of months ago by Jason Bloomberg in Forbes entitled, “Why Implement Cloud When Cloudwashing Will Suffice?” The article briefly describes adoption of cloud computing and the term “cloudwashing” – what vendors and customers are tending to do in order to get started with cloud computing. The article makes a lot of good points. Below, I highlighted a few of the points that stood out the most to me and added in some of my own thoughts.
“Cloudwashing typically refers to vendors’ and service providers’ exaggerated marketing, where they label a product as “Cloud” even when such designation is either completely false or at best, jumping the gun on a future capability … But it’s not just vendors and service providers who Cloudwash – executives often exaggerate the truth as well … some CIOs are only too happy to put OpenStack or CloudFoundry on their Cloud roadmaps, secure in the knowledge that they will now be able to present themselves as forward-looking and Cloud savvy…”
I’ve seen this firsthand in various conversations. And I don’t think it’s malicious or wrong – I think it represents a limited understanding of cloud computing. I like to point back to recent history and the days of datacenter consolidation. The whole thing was pretty straightforward…we installed some software on servers (vSphere, Hyper-V, or similar) and went to work virtualizing servers. From that, I derived benefits that were easy to understand – fewer servers to administer, less power consumed, vastly improved time to provision new servers, etc. We didn’t have to do much measurement of those things either. Who needs measurement when you consolidate, say, at least ten servers onto one? The whole thing was very comfortable. I think “cloudwashing” is a good term, and it feels like an attempt to replay datacenter consolidation in terms of cloud computing. And… it’ll be BETTER! After all, it is cloud, so the benefits must be greater!
Not so fast though. Mr. Bloomberg makes a key point later in the article, and I agree 100%.
“The underlying story [of cloud computing] is one of business transformation. Cloud Computing does not simply represent new ways of procuring IT assets or software. It represents new ways of doing business. Cloud, along with other Digital Transformation trends in IT including the rise of mobile technologies, the Internet of Things, and the Agile Architecture that facilitates the entire mess, are in the process of revolutionizing how businesses – and people – leverage technology. And as with any revolution, the change is both difficult to implement and impossible to understand while it’s happening.”
I think the author is absolutely correct here. Cloud Computing is a new capability for the business. One of the most exciting prospects of the whole cloud computing scene is this: it allows a business to take on more risk, fail fast, learn, and begin again. Call that the Deming Cycle or P-D-C-A, or whatever you like. Cloud computing has made real the fantastic growth of companies like Uber (valuation greater than Hertz and Avis combined) and Airbnb (in 2014, estimated to book more “hotel stays” than Hilton). Two crazy ideas that were no doubt implemented in “the cloud.”
Cloud is difficult to implement – if you think of it like any other technology project. Where do you start? How do you know when you’re done? Maybe it is not an implementation project at all.
“The choices facing today’s executives are far more complex than is it Cloud or isn’t it, or should we do Cloud or not. Instead, the question is how to keep your eye on your business goals as technology change transforms the entire business landscape.”
I couldn’t agree more with this. I think that IT organizations should specialize in their company’s core business, rather than administering systems that do not provide competitive advantages. At GreenPages, we’re currently bringing to market a set of Professional and Cloud services that will help organizations take advantage of cloud computing, which I’m pretty excited about. To learn more about the evolution of the corporate IT department and what it means for IT decision makers, check out my ebook.
What do you think of Jason’s take on cloudwashing?
Alex Gorbansky, CEO, Docurated: Why storage is free, yet information pays out
We’ve all been there. It might be an email address you neglected to put in your contacts book. It’s on the tip of your tongue but you just can’t remember it. It might be a photo, or even an important email from the boss you swore you’d put in the right folder.
Docurated, a New York and Boston based firm, feels your pain. The firm’s product lifts, shifts and sorts through a company’s content repository, using machine learning to automatically tag the most relevant, effective and important content.
“That’s really a story that’s relevant to a lot of people,” Alex Gorbansky, CEO of Docurated, tells CloudTech. “Whether you work in sales or marketing in a large organisation, you are looking and flipping through so much content that it is impossible for you to find the information you need – and that’s where we help customers.”
Gorbansky says his view of the company’s position in life is as “a connectivity layer between knowledge workers and their contents.” It’s knowledge and information that’s the key. Here’s the irony: as cloud storage becomes ever cheaper, to the point of no cost at all, it’s the information that you’re actually being charged for.
At least, that’s how Docurated sees it. With clients including Netflix and The Weather Channel, and in a world where data threatens to overwhelm everyone in the team, others are seeing it that way too.
“We’re seeing a world where there’s much more heterogeneity on where content is stored,” Gorbansky explains. “People have information on Box, they have information on Google Drive, SharePoint, Dropbox, all over the place.
“The cost of information retrieval from a human productivity perspective is going through the roof,” he adds. “And that creates an enormous challenge for organisations.”
There are other options available to companies with different fingers in different pies. Some storage vendors offer the capability to merge all of a company’s storage accounts into one big cloud, for instance. Yet the biggest partner for Docurated is Box, with an integration announced back in June. And while Box’s search functionality is limited to 10,000 characters, Docurated can go through the entire content set.
Docurated uses a variety of what it calls ‘signals’ to assess what content is relevant or not. As Gorbansky puts it: “When people are looking for content they’re specifically not looking for a document. They’re looking for something within the document, maybe even within the page, and we bring all of that to the forefront.”
These signals are inspired in part by how Google sorts its SEO rankings. “What was really brilliant about Google Pagerank is the way that they looked at the web, they realised that the more links a site had pointing to it, the more relevant that page was,” Gorbansky explains. “We’re getting the same thing with documents, but of course documents don’t have links, so you have other signals.”
The signals include how often a slide has been used – say, to decide which bio page is best – how recently a slide has been modified, as well as analytics. “You can call it data science,” Gorbansky concedes. “We apply data science to all this content to figure out which is the most interesting and the most relevant.”
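As a minimal sketch of how such signals might be blended, the example below combines reuse frequency, recency and view analytics into a single relevance score. The signal names, weights and decay constant are illustrative assumptions for clarity, not Docurated's actual model.

```python
import math
import time

def relevance_score(item, now=None, weights=(0.5, 0.3, 0.2)):
    """Blend reuse frequency, recency, and view analytics into one score."""
    now = now or time.time()
    w_reuse, w_recency, w_views = weights
    reuse = math.log1p(item["times_reused"])              # how often the slide was reused
    age_days = (now - item["last_modified"]) / 86400.0
    recency = math.exp(-age_days / 30.0)                  # decays over roughly a month
    views = math.log1p(item["view_count"])                # engagement analytics
    return w_reuse * reuse + w_recency * recency + w_views * views

# Picking the "best" bio slide from two candidates:
slides = [
    {"id": "bio-2012", "times_reused": 3,  "last_modified": time.time() - 90 * 86400, "view_count": 40},
    {"id": "bio-2014", "times_reused": 12, "last_modified": time.time() - 5 * 86400,  "view_count": 250},
]
best = max(slides, key=relevance_score)  # -> the "bio-2014" slide
```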
Documents can be broken up incrementally, from individual slides, to individual text blocks and charts within that slide. Users can adopt a clipboard – a workspace where they can fit all their useful data. And to complete the cycle, the user can then save as a PowerPoint file, or PDF – effectively creating a new piece of content.
Yes, it’s adding to the noise, but it’s making that noise clearer and more tuneful. “Imagine trying to do this with today’s workflow, and today’s environment,” Gorbansky adds. “It’s virtually impossible.”
Docurated was in attendance at BoxWorks last week, with Gorbansky speaking at a panel session. Given the two companies work closely together, Gorbansky is in a perfect position to comment on Aaron Levie’s strategy, from the on-again off-again IPO drama to nixing cloud storage costs – and he defended the Box CEO.
“I think a lot of companies go through ups and downs,” Gorbansky explains, adding that the same happened with a previous startup of his but the company ‘persevered, and truly understood its true north.’ “The same will happen with Box,” he adds.
“One thing I will tell you is you can listen to all the stuff in the media, which has become negative recently, but if you actually talk to customers and leading CIOs, because we have many of those in common, they’re all enthusiastically investing in Box, despite all these things.
“I think that’s really what matters,” he adds.
@ThingsExpo | @Numerex CTO Discusses #M2M Needs
Numerex has been reported as an “IoT company to watch,” and that seems to be a reasonable statement. The company does its work in the cloud, of course, delivering its IoT DNA to devices, networks, and application tools for M2M customers. We had a few questions for Numerex’s Chief Innovation & Technology Officer Jeffrey O. Smith, and here’s what he told us:
IoT Journal: Numerex has been around for about 20 years, with a background in telemetry, as I understand it. How has that migrated to what’s known as M2M today?
Jeffrey O. Smith: I have referred to a Maslow’s hierarchy of telemetry needs as a tongue-in-cheek way of helping people understand the evolution the M2M market is going through. “Just get me the data, stupid” is the first layer, and the hierarchy’s top layer is optimization.
Supply chain is a good example of that evolution. Initial supply chain deployments were ‘dots on a map’; most of our deployments lately are about optimizing things like dwell time in the supply chain, using location data with higher-level data analytics and visualization.
IoT Journal: How much data will M2M be producing within a few years, and how should IT managers plan for it?
Jeffrey: Most M2M applications do not create that much data, except possibly fleet management, which reports location continuously. But who really needs to know the address of a delivery van or other fleet asset at 15-second resolution from six months ago?
By contrast, I have said that control and BSS/OSS data and metadata will far outweigh the data from a device. Part of the reason for this is that devices which generate large amounts of data will have local intelligence.
For example, cavitation detection in large pumps generates a tremendous amount of real-time vibration, pressure and other data. But that data is usually analyzed with high-level frequency analysis locally, and only one bit of information needs to be transmitted: are we in cavitation or not? You will be able to drill down to the device to get at a particular moment’s raw data, but it should not be transmitted and analyzed in the cloud if it can be done at the edge.
A second example is video. In this case, it can be filtered locally (motion detection and capture) or transmitted out of the wireless band, i.e. over a land line.
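As a rough illustration of the pump example above, the sketch below runs a simple frequency analysis at the edge and reduces an entire vibration window to a single boolean for transmission. The frequency band and threshold are made-up placeholders for the example, not Numerex's actual detection logic.

```python
import numpy as np

def cavitation_flag(vibration_samples, sample_rate_hz,
                    band_hz=(4000.0, 8000.0), threshold=5.0):
    """Analyze a vibration window locally and return one bit: cavitating or not."""
    spectrum = np.abs(np.fft.rfft(vibration_samples))
    freqs = np.fft.rfftfreq(len(vibration_samples), d=1.0 / sample_rate_hz)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    band_energy = spectrum[in_band].mean()      # energy in the (assumed) cavitation band
    baseline = spectrum[~in_band].mean()        # energy everywhere else
    return bool(band_energy > threshold * baseline)  # the only value sent to the cloud

# Example: one second of 16 kHz samples (random noise here, just to show the call).
window = np.random.randn(16000)
print(cavitation_flag(window, sample_rate_hz=16000))
```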
IoT Journal: I wanted to say next that massive dataflows are of no value without monitoring, analysis, collaborative tools, and a strategy, and you’ve pointed out that the non-local dataflows may not even be that massive. In any case, how does Numerex address these issues with its customers?
Jeffrey: Numerex has had an approach to “big data for real-time streaming” for several years. We have PhDs who specialize in this area.
Most of our initial success has been in internal projects to identify areas for cost savings and to detect anomalous behavior of network elements and devices. We use some pretty leading-edge technologies to do machine learning and real-time classification on streaming data (in terabytes), and we use the cloud to process this data in real time.
We have only recently begun commercializing these capabilities. The most interesting opportunity is what we have done lately in deep packet content analysis of live streams in coordination with network and billing data.
IoT Journal: What similarities and what differences do you find in the M2M challenges within the large vertical markets that you serve?
Jeffrey: Differences are usually in how the data is presented or the action that is taken. The similarities are in device challenges, which our new NX platform can address through licensing and off-the-shelf solutions, for example by eliminating or reducing certification.
Other similarities that we leverage are services like LBS. Indoor location is needed across verticals, and integrating capabilities such as advanced Kalman filtering of disparate real-time data to infer good indoor location is helpful in most domains.
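For readers unfamiliar with the technique, here is a minimal one-dimensional Kalman filter sketch that smooths noisy position readings. Real indoor-location fusion is multidimensional and combines several sensor types, so treat this only as an illustration of the general idea, not of Numerex's implementation; the variance values are arbitrary assumptions.

```python
def kalman_1d(measurements, process_var=1e-3, meas_var=0.5):
    """Smooth a stream of noisy 1-D position readings (random-walk motion model)."""
    x, p = measurements[0], 1.0        # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var               # predict: uncertainty grows between readings
        k = p / (p + meas_var)         # Kalman gain: how much to trust the new reading
        x += k * (z - x)               # update the estimate toward the measurement
        p *= (1.0 - k)                 # updated (reduced) uncertainty
        estimates.append(x)
    return estimates

# Example: jittery readings around a true position of roughly 3.0 meters.
print(kalman_1d([2.8, 3.4, 2.9, 3.1, 3.3, 2.95]))
```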
@CloudExpo | @Racemi Delivers #Cloud Migration Software
Racemi announced an update to its DynaCenter cloud migration software that synchronizes changed data between source and target to minimize data latency prior to migration cutover. The new delta-data synchronization feature is especially useful when migrating large enterprise applications such as Customer Relationship Management (CRM), Enterprise Resource Planning (ERP) and Business Process Management (BPM) systems, migrations that are often performed by system integrators.
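Conceptually, delta-data synchronization compares source and target in fixed-size blocks and recopies only the blocks that have changed since the initial replication, which is what keeps the final cutover window short. The sketch below illustrates that general idea with block checksums; it is not Racemi's implementation, and the chunk size is an arbitrary assumption.

```python
import hashlib

CHUNK_BYTES = 4 * 1024 * 1024  # assumed 4 MiB blocks

def block_hashes(path):
    """Hash a file in fixed-size blocks so only changed blocks need copying."""
    hashes = []
    with open(path, "rb") as f:
        while block := f.read(CHUNK_BYTES):
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def blocks_to_resync(source_path, target_path):
    """Return indices of blocks that differ, or that exist only on the source."""
    src, dst = block_hashes(source_path), block_hashes(target_path)
    changed = [i for i, (a, b) in enumerate(zip(src, dst)) if a != b]
    changed += list(range(len(dst), len(src)))  # blocks appended since the first copy
    return changed
```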
@BigDataExpo | @Databricks to Present at #BigData Expo Silicon Valley
Amazon, Google and Facebook are household names in part because of their mastery of Big Data. But what about organizations without billions of dollars to spend on Big Data tools – how can they extract value from their data? In his session at the 6th Big Data Expo®, Ion Stoica, CEO and co-founder of Databricks, will discuss how the zero management cost and scalability of the cloud are addressing the challenges and pain points that data engineers face when working with Big Data. He will share how the growing demand for cloud-based Big Data workloads and frameworks is shaping the future of Big Data analysis.
@ThingsExpo | @Dell Opens ‘Internet of Things’ Lab (#IoT)
Dell announced the opening of the Dell Internet of Things (IoT) Lab, where it will partner with customers to help them explore, test and deploy IoT solutions that drive business outcomes and accelerate time to market. The Dell IoT Lab is located in the Dell Silicon Valley Solution Center in Santa Clara, Calif., and demonstrates Dell’s expanding IoT solutions and services capabilities.