Category archive: Virtualisation

Report: EMC mulls selling itself to its subsidiary VMware

EMC might be selling itself to VMware in a move that could see the child become the parent

Storage giant EMC is reportedly considering a buyout by its virtualization-focused subsidiary, VMware, at the behest of activist investor Elliott Management, according to multiple reports.

The deal, according to Re/Code, which first reported the news, would work like this: VMware would issue between $50bn and $55bn in new shares, with about $30bn going towards cancelling EMC’s stake in VMware and the remaining shares issued to current EMC stakeholders.

While no deal has been agreed or confirmed by spokespeople at EMC, VMware and Elliott Management, it is clear the EMC Federation is under increasing pressure to split up and drastically reorganize its operations, something that has been on the cards for a couple of years now amidst flat or declining revenues and a bloated portfolio of products and services.

Elliott has made no secret of its desire to see EMC balkanize the Federation – EMC, VMware and Pivotal – into autonomous entities with more streamlined product portfolios, much like its support of Citrix’s reorganization and divestiture(s).

The market’s reaction to the potential acquisition was mixed, with VMware’s share price dropping from $93.43 to $85.72 per share in the space of just over an hour after the news broke, levelling off at $86.65 per share by the close of trading yesterday. EMC shares, however, rose from about $25.93 per share to $27.05 during the same period, closing at $26.85.

A deal that would see EMC and VMware combine into one entity wouldn’t be too far-fetched given EMC’s recent acquisition streak – namely, software companies that bolster its software-defined storage and enterprise software capabilities. VMware, an embedded component of today’s datacenters, complements that strategy nicely, but with the news sending VMware’s share price downward it doesn’t seem the market favours the child becoming the parent.

In a call with analysts in July, EMC chairman and chief executive Joe Tucci rejected the possibility of a split, but emphasized a transformation that puts cloud technology (like VMware’s) at its core.

“Undoubtedly everybody on this call believes deeply that one of the biggest transitions every company has to do is move to the cloud. We talked about digital transformation which I think is an even bigger market where the Internet of Things and all of that falls in. But just take where we live in datacentres. And datacentres are moving to cloud technologies, both private and managed.”

“Obviously, if you were doing that, would you rather do that as just VMware, just EMC, just Pivotal with their past or are you a lot stronger in front of a customer’s doing it together? So, do I think we’re much stronger? The answer is absolutely. So I think splitting this federation or spinning off VMware is not a good idea. I firmly believe that we are better together, a lot better together.”

Storage tech provider Tintri bags $125m to take on EMC, NetApp

Tintri secured $125m in series F funding this week

Storage specialist Tintri has secured $125m in a funding round the company said would go towards accelerating development of its virtualised storage solution.

The latest funding round, led by Silver Lake Kraftwerk with participation from Insight Venture Partners, Lightspeed Ventures, Menlo Ventures and NEA, brings the total investment secured by Tintri since its founding in 2008 to $260m.

Tintri specialises in storage hardware optimised to serve up data for individual virtual machines. The company’s storage servers blend both HDD and SSD tech in order to optimise hot and cold storage and access, making storage more performant by making it smarter.
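Tintri has not published the details of that tiering logic, but the general idea of VM-aware hot/cold placement can be illustrated with a small, entirely hypothetical sketch: count block accesses per virtual machine and keep the hottest blocks on flash while demoting the rest to disk.

```python
# Hypothetical illustration of VM-aware hot/cold tiering -- not Tintri's actual algorithm.
from collections import defaultdict

class TieringEngine:
    def __init__(self, ssd_capacity_blocks):
        self.ssd_capacity_blocks = ssd_capacity_blocks
        self.access_counts = defaultdict(int)  # (vm_id, block_id) -> access count

    def record_io(self, vm_id, block_id):
        """Track accesses per VM block so placement decisions stay VM-aware."""
        self.access_counts[(vm_id, block_id)] += 1

    def placement(self):
        """Split blocks into a hot set (SSD) and a cold set (HDD) by access frequency."""
        ranked = sorted(self.access_counts, key=self.access_counts.get, reverse=True)
        hot = ranked[:self.ssd_capacity_blocks]
        cold = ranked[self.ssd_capacity_blocks:]
        return hot, cold

# Toy usage: the two most frequently read blocks end up on flash.
engine = TieringEngine(ssd_capacity_blocks=2)
for block in ["a", "a", "a", "b", "b", "c"]:
    engine.record_io("vm-01", block)
print(engine.placement())
```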

“The storage industry is going through a dramatic transformation. Virtualization and cloud are forces for change—and conventional DAS, NAS and SAN storage is struggling to keep pace. That’s why our message of VM-aware storage (VAS) is winning in the marketplace,” said Ken Klein, chairman and chief executive for Tintri.

“This funding fuels our mission—we’ll be growing our global footprint and raising visibility of the business benefits of storage built specifically for virtualized enterprises.”

The company’s virtualisation-aware storage wares have enjoyed solid traction among some of the world’s largest companies and service providers, including Chevron, GE, the EIB, NTT, SK Telecom and Rogers Communications.

Will Microsoft’s ‘walled-garden’ approach to virtualisation pay off?

Microsoft’s approach to virtualisation: Strategic intent or tunnel vision?

While the data centre of old played host to an array of physical technologies, the data centre of today and of the future is based on virtualisation, public or private clouds, containers, converged servers, and other forms of software-defined solutions. Eighty percent of workloads are now virtualised with most companies using heterogeneous environments.

As the virtual revolution continues on, new industry players are emerging ready to take on the market’s dominating forces. Now is the time for the innovators to strike and to stake a claim in this lucrative and growing movement.

Since its inception, VMware has been the 800 lb gorilla of virtualisation. Yet even VMware’s market dominance is under pressure from open source offerings like KVM, RHEV-M, OpenStack, Linux Containers and Docker. There can be no doubting the challenge to VMware presented by purveyors of such open virtualisation options; among other things, they feature REST APIs that allow easy integration with other management tools and applications, regardless of platform.

I see it as a form of natural selection; new trends materialise every few years and throw down the gauntlet to prevailing organisations – adapt, innovate or die. Each time this happens, some new players will rise and other established players will sink.

VMware is determined to remain afloat and has responded to the challenge by creating an open REST API for vSphere and other components of the VMware stack. While I don’t personally believe that this attempt has resulted in the most elegant API, there can be no arguing that it is at least accessible and well-documented, allowing for integration with almost anything in a heterogeneous data centre. For that, I must applaud them.
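To give a flavour of that accessibility, here is a minimal sketch of listing virtual machines over plain HTTP from any platform. It assumes the vSphere Automation REST endpoints (/rest/com/vmware/cis/session and /rest/vcenter/vm), a hypothetical vCenter address and lab credentials; the exact paths vary by vSphere release.

```python
# Minimal sketch: list VMs via the vSphere REST API from any OS that can speak HTTP.
# The vCenter address and credentials are placeholders; endpoint paths vary by release.
import requests

VCENTER = "https://vcenter.example.com"  # hypothetical vCenter

# Exchange basic-auth credentials for an API session token.
session = requests.post(
    f"{VCENTER}/rest/com/vmware/cis/session",
    auth=("administrator@vsphere.local", "password"),
    verify=False,  # lab-only; use proper certificates in production
)
token = session.json()["value"]

# Use the token to fetch the VM inventory.
vms = requests.get(
    f"{VCENTER}/rest/vcenter/vm",
    headers={"vmware-api-session-id": token},
    verify=False,
)
for vm in vms.json()["value"]:
    print(vm["name"], vm["power_state"])
```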

So what of the other giants of yore? Will Microsoft, for example, retain its regal status in the years to come? Not if the Windows-specific API it has lumbered itself with is anything to go by! While I understand why Microsoft has aspired to take on VMware in the enterprise data centre, its API, utilising WMI (Windows Management Instrumentation), only runs on Windows! As far as I’m concerned this makes it as useless as a chocolate teapot. What on earth is the organisation’s end-goal here?

There are two possible answers that spring to my mind: either this is a strategic move, or Microsoft’s eyesight is failing.

Could the Windows-only approach to integrating with Microsoft’s Hyper-V virtualisation platform be an intentional strategic move on its part? Is the long-game for Windows Server to take over the enterprise data centre?

In support of this, I have been taking note of Microsoft sales reps encouraging customers to switch from VMware products to Microsoft Hyper-V. In this exchange on Microsoft’s Technet forum, a forum user asked how to integrate Hyper-V with a product running on Linux.  A Microsoft representative then responded saying (albeit in a veiled way) that you can only interface with Hyper-V using WMI, which only runs on Windows…
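To make the limitation concrete, below is a minimal sketch of querying Hyper-V through WMI from Python, using the third-party wmi package (a wrapper around pywin32) and the documented root\virtualization\v2 namespace. The point is that this script can only ever run on a Windows host; there is no equivalent path from Linux.

```python
# Runs ONLY on Windows: WMI is the Windows Management Instrumentation layer.
# Requires the third-party 'wmi' package (pip install wmi), which wraps pywin32.
import wmi

# Connect to the Hyper-V virtualisation namespace on the local host.
conn = wmi.WMI(namespace=r"root\virtualization\v2")

# Msvm_ComputerSystem covers the host and its VMs; guests carry the
# caption "Virtual Machine".
for system in conn.Msvm_ComputerSystem():
    if system.Caption == "Virtual Machine":
        print(system.ElementName, system.EnabledState)
```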

But what if this isn’t one part of a much larger scheme? The only alternative I can fathom then is that this is a case of extreme tunnel vision, the outcome of a technology company that still doesn’t really get the tectonic IT disruptions and changes happening in the outside world. If it turns out that Microsoft really does want Windows Server to take over the enterprise data centre…well, all I can say is, good luck with that!

Don’t get me wrong. I am a great believer in competition; it is vital for the progression of both technology and markets. And it certainly is no bad thing when an alpha gorilla faces a troop challenger. It’s what stops them getting stale, invigorating them and forcing them to prove why they deserve their silver back.

In reality, Microsoft probably is one of the few players that can seriously threaten VMware’s near-monopolistic market dominance of server virtualisation. But it won’t do it like this. So unless new CEO Satya Nadella’s company moves to provide platform-neutral APIs, I am sad to say that its offering will be relegated to the museum of IT applications.

To end with a bit of advice to all those building big data and web-scale applications, with auto-scaling orchestration between applications and virtualisation hypervisors: skip Hyper-V and don’t go near Microsoft until it “gets it” when it comes to open APIs.

Written by David Dennis, vice president, marketing & products, GroundWork

Orange creates NFV, cloud testing lab for 5G advances

Orange and Inria are partnering on an NFV, cloud testing lab for 5G

Orange has unveiled its new lab dedicated to network virtualization and cloud computing, called I/O Lab. It’s targeting an open and accessible environment for collaboration with the wider industry.

In a particularly buzzwordy announcement, the telco has claimed the new testing environment for NFV and cloud tech will enable advances in the development of 5G, IoT and Big Data; while also referencing “fog” computing – a form of distributed cloud computing where near-user network edge devices are utilised for storage – and Mobile Edge Computing.

“The networks… will undergo a radical transformation in the next decade as a result of the progress of virtualization techniques,” Orange said in a statement. “General purpose servers will be able to use software to incorporate more and more network functions, all while meeting the networks’ growing needs for capacity and reliability. At the same time, cloud computing techniques will contribute to the development of flexible storage and processing capacities in data centres and even within networks and their peripheries, including connected devices and objects. This trend could be strengthened by the increased momentum of the Internet of Things and Big Data processing.”

“The I/O Lab’s vision is to develop a coherent, flexible and reliable management structure for the networks of the future, seen as distributed communication, storage and processing infrastructures. This will be achieved by virtue of the dual distributed network and software culture of its partners and a large contribution of the worldwide Open Source communities.”

The lab has been developed in partnership with Inria, the French Institute for Research in Computer Science and Automation, and the two organisations say the test-bed will be dedicated to contributing heavily to relevant Open Source communities.

Orange also claims the lab will promote and develop a broad scale network OS, called “Global OS”, which will be designed to support a variety of app development for the infrastructure, including security, performance, availability, cost and energy efficiency management. It has also targeted 2020 for tangible outputs from the lab in terms of network infrastructure ready for 5G-compatible deployment.

OpenDaylight launches third open source SDN platform, announces advisory group

OpenDaylight has released the latest version of its open source SDN platform and cobbled together an advisory group to improve the feedback loop between deployment and feature evolution

The OpenDaylight project has released the third version of its open source software-defined networking (SDN) platform, Lithium, as the organisation launches an advisory group tasked with feeding technical insights learned through deployment back into the developer community.

The OpenDaylight Project is an open source collaboration between many of the industry’s major networking incumbents on the core architectures enabling software defined networking (SDN) and network function virtualisation (NFV).

The community is developing an open source SDN architecture and software stack, the latest release of which has been dubbed Lithium; it supports a wide range of protocols including OpenFlow, the southbound protocol around which most vendors have consolidated.

“End users have already deployed OpenDaylight for a wide variety of use cases from NFV, network on demand, flow programming using OpenFlow and even Internet of Things,” said Neela Jacques, executive director, OpenDaylight.

“Lithium was built to meet the requirements of the wide range of end users embedding OpenDaylight into the heart of their products, services and infrastructures. I expect new and improved capabilities such as service chaining and network virtualization to be quickly picked up by our user base,” Jacques said.

The organisation said Lithium boasts a number of improvements over the previous release of its platform, Helium, including increased scalability, native support for OpenStack Neutron, new security, monitoring and automation features, and support for more APIs and protocols including Source Group Tag eXchange (SXP), Link Aggregation Control Protocol (LACP), IoT Data Management (IoTDM), SNMP Plugin, Open Policy Framework (OpFlex) and Control and Provisioning of Wireless Access Points (CAPWAP).
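Because OpenDaylight exposes its northbound interface as RESTCONF over HTTP, basic inspection of a running controller needs nothing vendor-specific. The sketch below reads the operational topology; the controller address, port 8181 and admin/admin credentials are the defaults of a stock install and are assumed here for illustration.

```python
# Sketch: read the operational network topology from an OpenDaylight controller via RESTCONF.
# Controller address, port 8181 and admin/admin credentials are stock-install assumptions.
import requests

CONTROLLER = "http://odl.example.com:8181"  # hypothetical controller

resp = requests.get(
    f"{CONTROLLER}/restconf/operational/network-topology:network-topology",
    auth=("admin", "admin"),
)
resp.raise_for_status()

# Each topology lists the nodes (e.g. OpenFlow switches) the controller currently sees.
for topology in resp.json()["network-topology"]["topology"]:
    print(topology["topology-id"], "-", len(topology.get("node", [])), "nodes")
```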

“We see OpenDaylight as a powerful platform for carrier-grade SDN solutions, which is getting more feature-rich with every release,” said Sarwar Raza, vice president, NFV Product Management, HP and OpenDaylight Project board member. “ConteXtream, now an HP Company, has been active in the OpenDaylight community since its inception and has made significant contributions to Service Function Chaining, an important capability for NFV. We look forward to our continued involvement in the OpenDaylight project to help enable widespread adoption of SDN and create a solid foundation for NFV.”

The move comes the same week the project announced the formation of the OpenDaylight Advisory Group (AG), a group composed mostly of telcos tasked with providing technical input to the OpenDaylight developer community based on deployment experience.

The twelve founding members of the advisory group include researchers and specialists from China Telecom, Deutsche Telekom, T-Mobile, China Mobile, Telefónica I+D, AT&T, Orange, and Comcast.

The organisation said the advisory group was set up to help provide technical and strategic guidance to the steering committee and developer community – in other words, to keep the open source platform from straying from the requirements of those deploying it.

Interestingly, apart from NASDAQ, enterprises seem relatively under-represented on the committee, which could see future iterations of OpenDaylight focus more heavily on telco and service provider use cases, possibly over others more common in the enterprise.

Nvidia: ‘Cloud to generate $1bn for the firm in a few years’

Nvidia’s chief exec believes cloud will generate over $1bn annually for the firm in just a few years

Chip maker and GPU specialist Nvidia Corp said it expects cloud computing to generate over $1bn in revenues for the firm in the next few years, according to a report from Reuters.

Speaking to reporters at Computex, Nvidia’s chief executive officer Jen-Hsun Huang also said the company expects cloud revenues to grow between 60 and 70 per cent each year.

A number of cloud service providers have borrowed from the high performance computing world to add GPU acceleration to their services in a bid to cope with diminishing returns on CPU performance.

HPC and cloud revenue at Nvidia was $79m for the recently reported Q1 2016, up 57 per cent year-on-year, and the company has over the past year or so announced some large deals with companies like Baidu, Facebook, Flickr, Microsoft and Twitter, largely around its Tesla and GRID offerings.

Last year it also struck a deal with AWS to add GPU-accelerated instances to its growing roster of services.
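For context, those GPU-accelerated instances are provisioned like any other EC2 capacity. The sketch below requests one with the AWS SDK for Python (boto3); the g2.2xlarge type (backed by an NVIDIA GRID GPU) reflects AWS’s GPU offering of the time, while the AMI ID and region are placeholders.

```python
# Sketch: launch a GPU-backed EC2 instance with boto3.
# The AMI ID and region are placeholders; g2.2xlarge is AWS's NVIDIA GRID-backed type.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder: choose a GPU-enabled AMI
    InstanceType="g2.2xlarge",   # NVIDIA GRID GPU instance type
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```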

Nvidia has said much of its growth in recent quarters has come from datacentre, cloud, gaming and automotive, and that its deals with virtualisation incumbents VMware and Citrix are helping to give it a strong boost in the enterprise. Speaking to journalists and analysts in February this year Huang said its deal with VMware alone means about 80 per cent of the world’s enterprises now support its GRID GPU virtualisation technology.

HP buys ConteXtream in SDN, NFV play

HP is acquiring SDN specialist ConteXtream

HP has acquired software-defined networking (SDN) specialist ConteXtream to strengthen its service provider business and network function virtualisation (NFV) offerings.

Founded in 2007, ConteXtream provides an OpenDaylight-based, carrier-grade SDN fabric controller that works on most hypervisors and commodity server infrastructure. It’s based on the IETF network virtualisation overlay (NVO3) architecture, which includes virtualised network edge nodes that aggregate flows and map them to specific functions, a mapping subsystem based on the Locator/ID Separation Protocol (LISP), a set of application-specific flow handlers for service chaining, and a high-performance software flow switch.
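ConteXtream’s implementation isn’t public, but the mapping idea at the heart of that architecture (resolve an endpoint identifier to the locator of the node hosting the next function in its service chain, in the spirit of LISP’s EID-to-RLOC lookups) can be shown with a purely hypothetical sketch; every name and chain below is invented.

```python
# Purely illustrative sketch of LISP-style mapping used for service chaining.
# Not ConteXtream's code: identifiers, locators and chains are invented.

# Mapping subsystem: endpoint identifier (EID) -> locator (RLOC) of the hosting node.
EID_TO_RLOC = {
    "vFirewall-a": "server-12",
    "vCache-b": "server-07",
}

# Ordered virtual functions a given subscriber's flows must traverse.
SERVICE_CHAINS = {
    "subscriber-10.1.0.5": ["vFirewall-a", "vCache-b"],
}

def resolve_chain(subscriber):
    """Return the (function, locator) hops for a subscriber's traffic."""
    return [(fn, EID_TO_RLOC[fn]) for fn in SERVICE_CHAINS[subscriber]]

print(resolve_chain("subscriber-10.1.0.5"))
# -> [('vFirewall-a', 'server-12'), ('vCache-b', 'server-07')]
```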

The company also offers analytics that help monitor traffic and detect anomalies.

“We’re moving away from being tied to dedicated machines to having a resource pool with automated, self-service mechanisms. In the networking world, there are countless functions – firewall, caching, optimization, filtering etc. – and a bunch of inflexible hardware to do those things. NFV is about saying, ‘Why can’t we put these various functions in the cloud? Why does each function need to be on specialized and dedicated hardware?’,” explained HP’s telco business lead Saar Gillai.

“ConteXtream’s scalable and open and standards-based technology delivers innovative capabilities like advanced service function chaining, and is deployed at a number of major carrier networks across the globe. ConteXtream’s technology connects subscribers to services, enabling carriers to leverage their existing standard server hardware to virtualize functions and services.”

Gillai said the acquisition will accelerate its leadership in NFV, and that HP also plans to increase its involvement with OpenDaylight, an open source collaboration between many of the industry’s major networking incumbents on the core architectures enabling SDN and NFV.

The past year has seen HP slowly scale up its involvement with SDN and NFV initiatives.

In September last year the company announced the launch of an app store for HP customers to download SDN-enabled and virtual networking applications and tools – network monitoring tools, virtual firewalls, virtual load balancers and the like – developed by HP as well as third parties and open source communities. It also partnered with Wind River to integrate its NFV technologies with HP Helion OpenStack.

AWS doubles down on DaaS with virtual desktop app marketplace

AWS is bolstering its ecosystem around desktops

Amazon has launched an application marketplace for AWS WorkSpaces, the company’s public cloud-based desktop-as-a-service, which it said would help users deploy virtualised desktop apps more quickly while keeping costs and permissioning under control.

Last year AWS launched WorkSpaces to appeal to mobile enterprises and the thin-client crowd, and the company said the app marketplace will allow users to quickly provision and deploy software directly onto virtual desktops – with software subscriptions charged monthly, and Amazon handling all of the billing.

To complement the marketplace the company unveiled the WorkSpaces Application Manager, which will enable IT managers to track and manage application usage, cost, and permissions.

“With just a few clicks in the AWS Management Console, Amazon WorkSpaces customers are able to provision a high-quality, cloud-based desktop experience for their end users at half the cost of other virtual desktop infrastructure solutions,” said Gene Farrell, general manager of AWS Enterprise Applications.

“By introducing the AWS Marketplace for Desktop Apps and Amazon WAM, AWS is adding even more value to the Amazon WorkSpaces experience by helping organizations reduce the complexity of selecting, provisioning, and deploying applications. With pay-as-you-go monthly pricing and end-user self-provisioning of applications, customers will lower the costs associated with provisioning and maintaining applications for their workforce,” Farrell said.
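For administrators who want to script that kind of oversight themselves, WorkSpaces is reachable through the standard AWS SDKs. The sketch below simply lists a fleet’s WorkSpaces with boto3 as a starting point for usage and cost tracking; the region is an assumption, and the Application Manager layer described above is not modelled.

```python
# Sketch: enumerate Amazon WorkSpaces with boto3 as a starting point for usage tracking.
# The region is a placeholder; the WorkSpaces Application Manager layer is not modelled.
import boto3

ws = boto3.client("workspaces", region_name="us-east-1")

# describe_workspaces is paginated, so walk every page of the fleet.
paginator = ws.get_paginator("describe_workspaces")
for page in paginator.paginate():
    for workspace in page["Workspaces"]:
        print(workspace["WorkspaceId"], workspace["UserName"],
              workspace["State"], workspace["BundleId"])
```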

AWS has spent the better part of the last 9 years building up a fairly vibrant ecosystem of third-party services around its core set of infrastructure offerings, and it will be interesting to see whether the company can replicate that success on the desktop. Amazon says many companies, particularly the larger ones, deploy a mix of upwards of 200 software titles to their desktops, which would suggest a huge opportunity for the cloud giant and its partners.

Cisco to buy Embrane in NFV automation play

Cisco is consolidating its NFV portfolio with an increasing focus on automation

Networking giant Cisco announced its intent to acquire Embrane, a specialist in network function virtualisation (NFV) and Cisco technologies, for an undisclosed sum this week, a move intended to bolster the company’s networking automation capabilities.

“With agility and automation as persistent drivers for IT teams, the need to simplify application deployment and build the cloud is crucial for the datacentre,” explained Cisco’s corporate development lead Hilton Romanski.

“As we continue to drive virtualization and automation, the unique skillset and talent of the Embrane team will allow us to move more quickly to meet customer demands. Together with Cisco’s engineering expertise, the Embrane team will help to expand our strategy of offering freedom of choice to our customers through the Nexus product portfolio and enhance the capabilities of Application Centric Infrastructure (ACI),” he said, adding that the purchase also builds on previous commitments to open standards, open APIs, and playing nicely in multi-vendor environments.

Beyond complementing Cisco’s ACI efforts, Dante Malagrinò, one of the founders of Embrane and its chief product officer, said the move will help further the company’s goal of driving software-hardware integration in the networking space, and offer Embrane an attractive level of scale few vendors playing in this space have.

“Joining Cisco gives us the opportunity to continue our journey and participate in one of the most significant shifts in the history of networking:  leading the industry to better serve application needs through integrated software-hardware models,” he explained.

“The networking DNA of Cisco and Embrane together drives our common vision for an Application Centric Infrastructure.  We both believe that innovation must be evolutionary and enable IT organizations to transition to their future state on their own terms – and with their own timelines.  It’s about coexistence of hardware with software and of new with legacy in a way that streamlines and simplifies operations.”

Cisco is quickly working to consolidate its NFV offerings, and more recently its OpenStack services, as the vendor continues to target cloud service providers and telcos looking to revamp their datacentres. In March it was revealed Cisco struck a big deal with T-Systems, Deutsche Telekom’s enterprise-focused subsidiary, that will see the German incumbent roll out Cisco’s OpenStack-based infrastructure in a datacentre in Biere, near Magdeburg, as well as a virtual hotspot service for SMEs.

IBM opens SDN, NFV labs in Dallas, Paris

IBM is moving to bolster its service provider business

IBM has announced the launch of two Network Innovation Centres, where the company’s clients can experiment with software-defined networking and network function virtualisation technologies. The move seems aimed at bolstering its service provider business.

The centres, one in Paris, France and the other in Dallas, Texas, will focus primarily on experimenting with solutions for large enterprise networking systems and telecoms operators, and feature technologies from a range of IBM partner companies including Brocade, Cisco, Citrix, Juniper Networks, Riverbed, and VMware.

IBM said facilitating automation and orchestration innovation will be the main thrust of the centres.

“Effectively applying cloud technologies to the network could allow a company to reduce its overall network capacity while increasing utilization by dynamically providing resources during the day in Beijing while it’s nighttime in New York, and vice versa,” said Pete Lorenzen, general manager, Networking Services, IBM Global Technology Services.

“A telecom company could better manage periodic, localized spikes in smartphone usage caused by major sporting events or daily urban commutes, dynamically provisioning capacity when and where it’s needed,” Lorenzen added.

IBM has pushed farther into the networking space in recent years, having scored a number of patents in the area of networking automation and dynamic network resource allocation. A significant driver of this is its service provider business, where some of the company’s competitors – like HP – are attempting to make inroads.