Category Archives: Hyper-V

Download Trial-in-a-Box Hyper-V Virtual Machine with Parallels Mac Management for Microsoft SCCM

We’re excited to announce that a Hyper-V virtual machine with Parallels® Mac Management for Microsoft® SCCM is now available—a pre-built Microsoft SCCM trial environment with a trial version of our plugin for managing Mac computers. This pre-built trial allows you to try our solution without any impact on your Microsoft SCCM infrastructure. It also enables […]

Will Microsoft’s ‘walled-garden’ approach to virtualisation pay off?

Microsoft’s approach to virtualisation: Strategic intent or tunnel vision?

While the data centre of old played host to an array of physical technologies, the data centre of today and of the future is based on virtualisation, public or private clouds, containers, converged servers, and other forms of software-defined solutions. Eighty percent of workloads are now virtualised, with most companies using heterogeneous environments.

As the virtual revolution continues, new industry players are emerging, ready to take on the market’s dominant forces. Now is the time for the innovators to strike and stake a claim in this lucrative and growing movement.

Since its inception, VMware has been the 800 lb gorilla of virtualisation. Yet even VMware’s market dominance is under pressure from open-source offerings like KVM, RHEV-M, OpenStack, Linux Containers and Docker. There can be no doubting the challenge these open virtualisation options pose to VMware; among other things, they feature REST APIs that allow easy integration with other management tools and applications, regardless of platform.

I see it as a form of natural selection; new trends materialise every few years and throw down the gauntlet to prevailing organisations – adapt, innovate or die. Each time this happens, some new players will rise and other established players will sink.

VMware is determined to remain afloat and has responded to the challenge by creating an open REST API for vSphere and other components of the VMware stack. While I don’t personally believe that this attempt has resulted in the most elegant API, there can be no arguing that it is at least accessible and well documented, allowing for integration with almost anything in a heterogeneous data centre. For that, I must applaud them.
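To make that last point concrete, below is a minimal sketch of what platform-neutral integration looks like in practice: any tool that can speak HTTP can authenticate and list virtual machines, whatever operating system it runs on. The endpoint paths follow the pattern of VMware’s later vSphere Automation REST API, and the host name and credentials are placeholders of my own, so treat this as an illustration rather than a definitive recipe.

```python
import requests

VCENTER = "https://vcenter.example.com"  # placeholder vCenter address

# Authenticate: the session endpoint hands back a token that any client,
# on any OS, can reuse for subsequent calls.
login = requests.post(
    f"{VCENTER}/rest/com/vmware/cis/session",
    auth=("administrator@vsphere.local", "secret"),  # placeholder credentials
    verify=False,  # lab shortcut only; use proper certificates in production
)
token = login.json()["value"]

# List virtual machines using the session token.
vms = requests.get(
    f"{VCENTER}/rest/vcenter/vm",
    headers={"vmware-api-session-id": token},
    verify=False,
)
for vm in vms.json()["value"]:
    print(vm["name"], vm["power_state"])
```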

So what of the other giants of yore? Will Microsoft, for example, retain its regal status in the years to come? Not if the Windows-specific API it has lumbered itself with is anything to go by! While I understand why Microsoft has aspired to take on VMware in the enterprise data centre, its API, utilising WMI (Windows Management Instrumentation), only runs on Windows! As far as I’m concerned this makes it as useless as a chocolate teapot. What on earth is the organisation’s end-goal here?

Two possible answers spring to my mind: either this is a deliberate strategic move, or Microsoft’s eyesight is failing.

Could the Windows-only approach to integrating with Microsoft’s Hyper-V virtualisation platform be an intentional strategic move on its part? Is the long-game for Windows Server to take over the enterprise data centre?

In support of this, I have been taking note of Microsoft sales reps encouraging customers to switch from VMware products to Microsoft Hyper-V. In this exchange on Microsoft’s TechNet forum, a forum user asked how to integrate Hyper-V with a product running on Linux. A Microsoft representative responded, saying (albeit in a veiled way) that you can only interface with Hyper-V using WMI, which only runs on Windows…
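Contrast that with scripting against Hyper-V’s WMI provider, which presupposes a Windows host. The sketch below uses the third-party Python wmi package (built on pywin32) against the root\virtualization\v2 namespace; the choice of package is my own assumption, while the namespace and class names come from the Hyper-V WMI provider, and none of it will run anywhere but Windows.

```python
import wmi  # third-party "wmi" package; requires pywin32 and a Windows host

# Connect to the Hyper-V WMI provider on the local (necessarily Windows) machine.
conn = wmi.WMI(namespace=r"root\virtualization\v2")

# Msvm_ComputerSystem covers the host itself as well as its virtual machines;
# the host record reports Caption == "Hosting Computer System", so skip it.
for system in conn.Msvm_ComputerSystem():
    if system.Caption != "Hosting Computer System":
        print(system.ElementName, system.EnabledState)
```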

But what if this isn’t one part of a much larger scheme? The only alternative I can fathom then is that this is a case of extreme tunnel vision, the outcome of a technology company that still doesn’t really get the tectonic IT disruptions and changes happening in the outside world. If it turns out that Microsoft really does want Windows Server to take over the enterprise data centre…well, all I can say is, good luck with that!

Don’t get me wrong. I am a great believer in competition; it is vital for the progression of both technology and markets. And it certainly is no bad thing when an alpha gorilla faces a troop challenger. It’s what stops them getting stale, invigorating them and forcing them to prove why they deserve their silver back.

In reality, Microsoft probably is one of the few players that can seriously threaten VMware’s near-monopolistic market dominance of server virtualisation. But it won’t do it like this. So unless new CEO Satya Nadella’s company moves to provide platform-neutral APIs, I am sad to say that its offering will be relegated to the museum of IT applications.

To end with a bit of advice for all those building big data and web-scale applications with auto-scaling orchestration between applications and virtualisation hypervisors: skip Hyper-V and don’t go near Microsoft until it “gets it” when it comes to open APIs.

Written by David Dennis, vice president, marketing & products, GroundWork

Microsoft unveils Hyper-V containers, nano servers

Microsoft has unveiled Hyper-V containers and nano servers

Microsoft has unveiled a number of updates to Windows Server, including Hyper-V containers, which are essentially Docker containers embedded in Hyper-V VMs, and nano servers, a slimmed-down Windows Server image.

Microsoft said Hyper-V containers are ideal for users who want virtualisation-grade isolation but still want to run their workloads in Docker containers within a Windows ecosystem.

“Through this new first-of-its-kind offering, Hyper-V Containers will ensure code running in one container remains isolated and cannot impact the host operating system or other containers running on the same host,” explained Mike Neil, general manager for Windows Server at Microsoft, in a recent blog post.

“In addition, applications developed for Windows Server Containers can be deployed as a Hyper-V Container without modification, providing greater flexibility for operators who need to choose degrees of density, agility, and isolation in a multi-platform, multi-application environment.”
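In practice, that “without modification” point comes down to a deployment-time switch rather than a code change. Here is a rough illustration using the Docker SDK for Python, assuming a Windows Docker host from a release that supports the hyperv isolation mode; the image name is a placeholder of my own.

```python
import docker

client = docker.from_env()

# The same Windows container image can be started either way; only the
# isolation mode differs (shared-kernel process isolation vs. a Hyper-V VM).
output = client.containers.run(
    "mycompany/inventory-api:latest",  # placeholder image name
    isolation="hyperv",                # "process" would use a Windows Server Container
    remove=True,
)
print(output)
```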

Windows Server Containers will be enabled in the next release of Windows Server, which is due to be demoed in the coming weeks; the move makes good on Microsoft’s commitment to make the Windows Server ecosystem (including Azure) Docker-friendly.

The company also unveiled what it’s calling nano servers, a “purpose-built OS” that is essentially a stripped-down Windows Server image optimised for cloud and container workloads. They can be deployed onto bare metal, and because Microsoft has removed so much code they boot and run more quickly.

“To achieve these benefits, we removed the GUI stack, 32 bit support (WOW64), MSI and a number of default Server Core components. There is no local logon or Remote Desktop support. All management is performed remotely via WMI and PowerShell. We are also adding Windows Server Roles and Features using Features on Demand and DISM. We are improving remote manageability via PowerShell with Desired State Configuration as well as remote file transfer, remote script authoring and remote debugging.  We are working on a set of new Web-based management tools to replace local inbox management tools,” the company explained.

“Because Nano Server is a refactored version of Windows Server it will be API-compatible with other versions of Windows Server within the subset of components it includes. Visual Studio is fully supported with Nano Server, including remote debugging functionality and notifications when APIs reference unsupported Nano Server components.”
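Because every management path into Nano Server is remote, day-to-day administration ends up looking roughly like the sketch below, here driven from Python with the pywinrm library. The host name and credentials are placeholders, WinRM remoting is assumed to be enabled on the target, and this is just one possible tooling choice rather than anything Microsoft prescribes.

```python
import winrm  # third-party "pywinrm" package: runs PowerShell remotely over WinRM

# Placeholder endpoint and credentials for a headless Nano Server host.
session = winrm.Session("nano01.example.com", auth=("Administrator", "secret"))

# There is no local logon or GUI, so even a simple service check goes over the wire.
result = session.run_ps("Get-Service | Where-Object Status -eq 'Running'")
print(result.std_out.decode())
```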

The move is a sign that Microsoft is keen to keep its on-premises and cloud platforms ahead of the technology curve, and it is likely to appeal to .NET developers who are attracted to some of the benefits of containers but want to stay firmly within a Windows world in terms of the tools and code they use. Still, the company said it is working with Chef to ensure nano servers work well with Chef’s DevOps tooling.

F5 Extends Dynamic Networking to Windows Server-Based Virtual Network Environments

F5 Networks, Inc. today announced the F5 Network Virtualization Solution for Microsoft Windows Server 2012 Hyper-V. The solution gives F5 customers the flexibility to use the BIG-IP platform to deploy network services in cloud-driven data centers that are built on Windows Server 2012 Hyper-V. This announcement underscores F5’s commitment to deliver a dynamic, efficient data center that will ensure scalability, security, and manageability across an organization’s IT environments and systems.

With this solution, the same network-based services that the BIG-IP platform provides—such as local and global load balancing, advanced traffic steering, access control, and application security and acceleration—can now also be used to deliver applications in the Microsoft cloud and virtualized network environments. The solution is enabled by F5 BIG-IP Local Traffic Manager (LTM®) Virtual Edition (VE) running on Windows Server 2012 Hyper-V.

Organizations that use Hyper-V network virtualization to realize cost savings and operational efficiencies stand to gain many additional benefits from the F5 solution, including:

  • Improved Flexibility – Working in conjunction with Hyper-V network virtualization, the F5 solution supports seamless, low-cost migration to the cloud by allowing organizations to use the same policies and IP addresses in the cloud that they currently use in the physical network.
  • Cost Savings – The F5 solution accelerates data center consolidation by connecting hybrid cloud environments, enabling organizations to cut costs while extending their applications and services.
  • Efficient Network Management – The F5 solution can intelligently manage network traffic at layers 4-7, mitigating the need for organizations to build and manage large layer 2 networks.
  • Streamlined ADN Services – The F5 solution runs on Windows Server 2012 Hyper-V, and all services are applied in BIG-IP LTM VE, so no software upgrades or special code is required on the physical network.