The point of virtual desktop infrastructure (VDI) is to offer employees anytime, anywhere access to your organization’s applications and data. VDI is a “desktop away from the desktop.” The problem is that outdated, complex technology has forced most of the emphasis onto the infrastructure part of VDI. But newer VDI technology is about to restore the desktop to its rightful place.
A promising beginning
In a tradition dating back more than a quarter of a century, when it’s time to buy desktops, the IT department focuses on how much CPU, memory and storage come with each desktop. Why should a virtual desktop project be any different? IT staff should spend their time thinking about the same desktop attributes. Instead, they spend most of their time talking about "infrastructure": servers, storage, layers, management tools and much more. Managing all this infrastructure is exhausting and expensive, and when the focus is on the "I" rather than the "D," users end up unhappy and IT staff end up frustrated. How did this happen?
IT typically spends about US$1500 (roughly €1200) per desktop or laptop, amortized over three to four years, which works out to roughly $375-500 per machine per year. But the overhead of dealing with physical desktops is often unsustainable, and once the world went mobile, being tethered to a desktop was a sure way to hand your competitors an advantage. In response, some IT teams decided to deploy virtual desktops and apps.
The promise of VDI was compelling: greater IT efficiency, stronger information security and a productive mobile workforce. But to implement VDI on-premises, IT had to translate those desktop attributes into expensive and complex data center technologies. IT staff started asking questions like, “If I have 1000 users, how many servers do I need? How much shared SAN/NAS storage do I need? In which data centers do I put this infrastructure?”
A complicated middle act
With traditional VDI there are many moving parts. Organizations that choose VDI first need to size their server fleet: “Which applications are used? What is the CPU and memory usage rate? How many users can I fit onto a certain class of server? Do I need 20, 30 or 50 servers for 1000 users?” It all depends on usage, as the rough sketch below illustrates.
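To make the sizing question concrete, here is a minimal back-of-the-envelope sketch. Every figure in it (per-user vCPU and RAM demand, the overcommit ratio, the host specs) is an illustrative assumption, not vendor guidance; real sizing depends on measured usage.

```python
# Rough VDI host-sizing sketch. All inputs are illustrative assumptions.
import math

users = 1000
vcpus_per_user = 2       # assumed average vCPU demand per desktop
ram_gb_per_user = 8      # assumed average RAM per desktop
overcommit = 2           # assumed vCPU-to-physical-core overcommit ratio

host_cores = 32          # physical cores per server
host_ram_gb = 384        # RAM per server

# A host is limited by whichever resource runs out first.
by_cpu = (host_cores * overcommit) // vcpus_per_user
by_ram = host_ram_gb // ram_gb_per_user
users_per_host = min(by_cpu, by_ram)

hosts = math.ceil(users / users_per_host)
print(f"~{users_per_host} users per host -> {hosts} hosts for {users} users")
```

With these assumptions the answer lands at 32 hosts; change any input and the fleet size moves, which is exactly why the sizing conversation drags on.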
Even trickier is the struggle to determine storage needs. Local storage on PCs is the cheapest storage available, at about $100/TB. SAN/NAS can be 25-100 times that cost. If each user had 1 TB of storage on their desktop, you would need 1000 TB of SAN/NAS. That is massively expensive.
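Using the figures above, the gap is easy to quantify; the dollar amounts below are back-of-the-envelope arithmetic, not quotes.

```python
# Storage cost comparison built from the figures above ($100/TB local,
# SAN/NAS at 25-100x that). Purely illustrative arithmetic.
local_usd_per_tb = 100
san_multiplier_low, san_multiplier_high = 25, 100
users, tb_per_user = 1000, 1

total_tb = users * tb_per_user
local_total = total_tb * local_usd_per_tb
san_low = local_total * san_multiplier_low
san_high = local_total * san_multiplier_high
print(f"Local disks: ${local_total:,}; SAN/NAS: ${san_low:,} to ${san_high:,}")
```

Roughly $100,000 worth of local disks becomes $2.5-10 million of shared storage for the same capacity.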
To keep VDI from dying on the vine, providers created various ways to optimize storage. The conversation went something like this: “Oh, you can optimize with a single image so you don't need 1000 copies of the Windows OS. Now, let's put in layers so you don't need 1000 copies of each application. Wait, what about profile management tools to store end-user personalization? You need those, too. Oh, and you can no longer manage it with your existing PC management tools like SCCM and Altiris. So, your VDI infrastructure becomes a stand-alone management framework.”
Now, these workarounds may seem viable, but they fail to account for the fact that Windows wasn't architected to operate this way. So customers struggle with app compatibility, corrupted profiles and application updates that blow away desktops. At the same time, storage vendors started implementing de-duplication so that the 1000 copies of Windows and applications in each user's desktop were automatically de-duped at the storage layer. Hyper-converged infrastructure (HCI) vendors ultimately adopted de-duplication as well, and even though HCI meaningfully reduced the cost of VDI implementations, it hasn't gone far enough.
At this point in the process, you have to plan for where all this infrastructure is going to live. Which data center should it be in? How far away will your end users be from that data center? What does that mean for latency? What will their user experience be like? How much bandwidth will they require?
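The bandwidth question alone illustrates the planning burden. Per-session rates vary widely by display protocol and workload, so the numbers below are assumptions for illustration only.

```python
# Hypothetical aggregate-bandwidth estimate for remote desktop sessions.
users = 1000
avg_mbps_per_session = 1.5   # assumed average for typical office workloads
peak_factor = 2.0            # assumed headroom for bursts (video, printing)

average_gbps = users * avg_mbps_per_session / 1000
provisioned_gbps = average_gbps * peak_factor
print(f"Average: {average_gbps:.1f} Gbps; provision ~{provisioned_gbps:.1f} Gbps")
```

Multiply that exercise across latency targets and candidate data center sites, and it becomes clear why the infrastructure planning crowds out the desktop conversation.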
A solid finish
Until now, then, the infrastructure has had to be addressed before the desktop could get its moment in the spotlight. IT departments have had to jump through complex infrastructure hoops to deliver a mission-critical workload to their users. But there are more important things for IT teams to do, and more value for them to add, than dealing with all this complexity.
The advent of cloud computing created an opportunity to completely re-imagine what the phrase “virtual desktops” means. Now, the data center is whichever public cloud region you select. Essentially, the infrastructure in that region becomes invisible, at least in terms of you having to worry about it. Desktops can be placed close to the users so they have a great experience. All IT needs to do is determine the configuration of the desktop, just as they determine the configuration of a physical PC.
Buying a desktop cloud solution is similar to, and in fact simpler than, buying a PC. The IT team simply chooses a desktop configuration running in Azure, orders the number of units needed for their end users, and then uses their corporate image to create copies of desktops in the various regions where the users are. But rather than shipping a PC to each user, IT simply emails each user a link to their desktop.
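To picture how small the “order” really is, here is the whole thing expressed as data. This is a hypothetical sketch: the field names, image name, sizes and region counts are illustrative, not a real provider’s API.

```python
# Hypothetical cloud-desktop order: one configuration, one corporate
# image, and a unit count per region. Names and numbers are illustrative.
order = {
    "image": "corp-windows-gold",                  # the corporate image
    "size": {"vcpus": 4, "ram_gb": 16, "disk_gb": 128},
    "regions": {"westeurope": 400, "eastus": 350, "southeastasia": 250},
}

for region, units in order["regions"].items():
    print(f"Provision {units} desktops in {region} from '{order['image']}'")
```

That is the entire planning exercise: a configuration, a count and a set of regions, instead of servers, SANs and stand-alone management frameworks.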
In this way, cloud computing has restored the emphasis on the desktop and eliminated infrastructure complexity. Instead of fretting over infrastructure, organizations can now focus on what class of desktop their users need. It’s no longer a question of how many servers they will need but of how much CPU. And when needs change, the desktop configuration can be modified in minutes. Users get a better experience, and IT teams can concentrate on higher-value projects. Now VDI is delivered simply and efficiently as a turnkey service.