Category Archives: Virtualization

Trick or Treat: Top 5 Fears of a CTO

By Chris Ward, CTO

Journey to the Cloud’s Ben Stephenson recently sat down with Chris Ward, CTO of GreenPages-LogicsOne, to get his take on what the top 5 fears of a CTO are.

Ben: Chief Technology Officer is obviously an extremely strategic, important, and difficult role within an organization. Since it’s almost Halloween, and since you’re an active (and successful) CTO yourself, I thought we would talk about your Top 5 Fears of a CTO. You also have the unique perspective of seeing how GreenPages uses technology internally, as well as how GreenPages advises clients to utilize different technologies.

Chris: Sounds good. I think a major fear is “Falling Behind the Trends.” In this case, it’s not necessarily that you couldn’t see what was coming down the path. You can see it there and know it’s coming, but can you get there with velocity? Can you get there before the competition does?

Ben: Do you have any examples of when you have avoided falling behind the trends?

Chris: At GreenPages, we were fortunate to catch virtualization early on when a lot of others didn’t. We had a lot of customers who were not sold on virtualization for 2-4 years. Those customers are now very far behind the competition and are trying to play catch up. In some cases, I’m sure it’s meant the CTO is out of a job. We also utilized virtualization internally early on and reaped the benefits. Another example is our CMaaS Brokerage and Governance offering. We recognize the significance of cloud brokerage and the paradigm shift towards a hybrid cloud computing model. In this case we are out ahead of the market.

Ben: How about a time when GreenPages did fall behind a trend?

Chris: I would say we fell behind a trend when we began our managed services business. It was traditional, old school managed services. It definitely took us some time to figure out where we wanted to go and where we wanted to be. While we may have fallen behind initially, we recognized change was needed and our Cloud Management as a Service offering has transformed us. Instead of sitting back and missing the boat, we are now in a great spot. This will be a huge help to our customers – but will (and does already) help us significantly internally as well.

Ben: How about fear number 2?

Chris: Fear number two is not seeing around the bend. From my perspective as the CTO at a solutions provider, things move so fast in this industry, and GreenPages offers such a wide variety and breadth of products and services to customers, that it can be very difficult to keep up. If we focused on only one area it would be a lot easier, but since we focus on cloud, virtualization, end user computing, security, storage, datacenter transformation, networking and more, it can be quite challenging. As a corporate CTO you are allowed to be a market follower, which can be somewhat of an advantage. While you don’t want to fall behind, you do have partners, like GreenPages and others out there, that you can count on.

Ben: That makes sense. What about a 3rd fear?

Chris: Another large fear for CTOs is making a wrong turn. CTOs can get the crystal ball out and there may be a couple of things coming down the road…but what happens if you turn left and everyone else turns right? What happens if you make the wrong decision, or make the decision too early?

Ben: Can you give us an example?

Chris: A good example of taking a turn too early in the Cloud era is Nirvanix. Cloud storage is extremely important, but what happens when a business model has not been properly vetted? This is one of the “gotchas” of being an early adopter. To be successful you need a good mix. You can’t be too conservative, but you can’t jump all in any time a new company pops up – the key is balance.

Ben: Do you have any advice for CTOs about this?

Chris: Sure – just because you can doesn’t mean you should!

Ben: I’ve heard you say that one before…

Chris: For example, software defined networking stacks, with products like Cisco Insieme and VMware NSX, are very cool new technologies. I personally, and we at GreenPages, think this is going to be the next big thing. But we’re at a crossroads…who should use these? Who will gain the benefits? For example, maybe it makes sense for the enterprise but not for small businesses? This is something major that I have to determine – who is this a good fit for?

Ben: How about fear number 4?

Chris: Fear number 4 revolves around retaining my talent. I want my team to feel like they are always learning something new. I want them to know they are always on the bleeding edge of IT. I want to give them a world that changes very quickly. In my experience, most people that are stellar employees in a technical capacity want to be challenged constantly and to try new things and look at different ways of doing things.

Ben: What should CTOs do to try and retain talent?

Chris: Really take the time and focus on building a culture and environment that harnesses what I mentioned above. If not, you’re at serious risk of losing top talent.

Ben: Before I get too scared let’s get to number 5 and finish this up.

Chris: I’d say the fifth fear of mine is determining if I am working with the right technologies and the right vendors. IT can often feel like walking a tightrope between vendors, from both technical and business perspectives. From my perspective, I need to make sure we are providing our customers with the right technology from the right vendor to meet their needs. I need to determine if the technology works as advertised. Is it something that is reasonable to implement? Is there money in this for GreenPages?

Ben: What about from a customer’s perspective?

Chris: The customer also needs to make sure they align themselves with the right partners.  CTOs want to find partners that are looking towards the future, who will advise them correctly, and who will allow the business to stay out ahead of the competition. If a CTO looks at a partner or technology and doesn’t think it’s really advancing the business, then it’s time to reevaluate.

Ben: Thanks for the time Chris – and good luck!

What are your top fears as an IT decision maker? Leave them in the comment section!

Download this free ebook on the evolution of the corporate IT department. Where has the IT department been, where is it now, and where should it be headed?


Rapid Fire Summary of Carl Eschenbach’s General Session at VMworld 2013

By Chris Ward, CTO, LogicsOne

I wrote a blog on Monday summarizing the opening keynote at VMworld 2013. Checking in again quickly to summarize Tuesday’s General Session. VMware’s COO Carl Eschenbach took the stage and informed the audience that there are 22,500 people in attendance, which is a new record for VMware. This makes it the single largest IT infrastructure event of the year. 33 of these attendees have been to all 10 VMworlds, and Carl is one of them.

Carl started the session by providing a recap of Monday’s announcements around vSphere/vCloud Suite 5.5, NSX, vSAN, vCHS, and Cloud Foundry. The overall mantra of the session revolved around IT as a Service. The following points were key:

  • Virtualization extends to ALL of IT
  • IT management gives way to automation
  • Compatible hybrid cloud will be ubiquitous
  • Foundation is SDDC

After this came a plethora of product demos. If you would like to check out the demos, you can watch the full presentation here: http://www.vmworld.com/community/conference/us/learn/generalsessions

vCAC Demo

  • Started with the service catalogue, showing options to deploy an app to a private or public cloud, along with the cost of each option
    • I’m assuming this is showing integration between vCAC & ITBM, although that was not directly mentioned
  • Next they displayed the database options as part of the app – assuming this is vFabric Data Director (DB as a Service)
  • Showed the auto-scale option
  • Showed the health of the application after deployment…this appears to be integration with vCOPS (again, not mentioned)
  • The demo showed how the product provided self-service, transparent pricing, governance, and automation (a rough sketch of what such a request bundles together follows this list)
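To make those moving parts concrete, here is a rough, hypothetical sketch of what such a self-service request bundles together. The field names and the submit() helper are invented for illustration; this is not the actual vCAC API.

    # Hypothetical illustration only: field names and submit() are invented,
    # not the real vCAC/vCloud Automation Center API.
    catalog_request = {
        "application": "web-store",
        "target_cloud": "private",            # or "public", as offered in the demo
        "estimated_cost_per_month": 1450,     # transparent pricing shown at request time
        "database_tier": "db-medium",         # DB options presented as part of the app
        "autoscale": {"min_nodes": 2, "max_nodes": 6, "cpu_threshold_pct": 75},
        "approval_policy": "business-unit-owner",   # governance before provisioning
    }

    def submit(request):
        """Pretend submission: a real broker would validate against policy,
        return a quote, then kick off automated provisioning."""
        assert request["target_cloud"] in ("private", "public")
        print(f"Provisioning {request['application']} to the {request['target_cloud']} cloud "
              f"at ~${request['estimated_cost_per_month']}/month")

    submit(catalog_request)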

NSX Demo

  • Started with a conversation about why networking has become the ball and chain of the VM. After that, Carl discussed the features and functions that NSX can provide. Some key ones were:
    • Route, switch, load balance, VPN, firewall, etc.
  • Displayed the vSphere web client & looked at the automated actions that happened via vCAC and NSX  during the app provisioning
  • What was needed to deploy this demo you may ask? L2 switch, L3 router, firewall, & load balancer. All of this was automated and deployed with no human intervention
  • Carl then went through the difference between physical provisioning and logical provisioning with NSX, abstracting the network off the physical devices (see the sketch after this list)
  • WestJet has deployed NSX, and we got to hear a little about their experiences
  • There was also a demo to show you how you can take an existing VMware infrastructure and convert/migrate to an NSX virtual network. In addition, it showed how vMotion can make the network switch with zero downtime
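To picture the physical-versus-logical difference, here is a minimal, hypothetical sketch (not the NSX API) of how the network the demo needed – L2 switches, an L3 router, a firewall, and a load balancer – becomes a declarative description handed to software rather than box-by-box changes on physical gear:

    # Hypothetical sketch -- not the NSX API. The point is that the logical network
    # becomes data applied by an automation layer, with no human touching the
    # physical switches, routers, or firewalls.
    logical_network = {
        "l2_segments": ["web-tier", "app-tier", "db-tier"],
        "l3_router": {"uplink": "edge-transport", "routes": ["web-tier", "app-tier", "db-tier"]},
        "load_balancer": {"vip": "10.0.0.10", "pool": "web-tier", "port": 443},
        "firewall_rules": [
            {"src": "web-tier", "dst": "app-tier", "port": 8443, "action": "allow"},
            {"src": "app-tier", "dst": "db-tier", "port": 5432, "action": "allow"},
            {"src": "any", "dst": "db-tier", "port": "any", "action": "deny"},
        ],
    }

    def provision(topology):
        """Stand-in for the automation layer (driven by vCAC in the demo)."""
        for segment in topology["l2_segments"]:
            print(f"create logical switch: {segment}")
        print(f"attach logical router with routes to {topology['l3_router']['routes']}")
        lb = topology["load_balancer"]
        print(f"publish VIP {lb['vip']}:{lb['port']} -> pool {lb['pool']}")
        for rule in topology["firewall_rules"]:
            print(f"firewall: {rule['action']} {rule['src']} -> {rule['dst']}:{rule['port']}")

    provision(logical_network)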

The conversation then turned to storage. They covered the following:

  • Requirements of SLAs, policies, management, etc. for mission critical apps in the storage realm
  • vSAN discussion and demo
  • Storage policy can be attached at the VM layer so it is mobile with the VM
  • Showcased adding another host to the cluster and the local storage is auto-added to the vSAN instance
  • Resiliency – can choose how many copies of the data are required

IT Operations:

  • Traditional management silos have to change
  • Workloads are going to scale to massive numbers and be spread across numerous environments (public and private)
  • Conventional approach is scripting and rules which tend to be rigid and complex –> Answer is policy based automation via vCAC
  • Showed example in vCOPS of a performance issue and drilled into the problem…then showed performance improve automatically due to automated proactive response to detected issues.  (autoscaling in this case)
  • Discussing hybrid and seamless movement of workloads to/from private/public cloud
  • Displayed vCHS plugin to the vSphere web client
  • Showed template synchronization from a private on-prem vSphere environment up to vCHS
  • Provisioned an app from vCAC to public cloud (vCHS)  (it shows up inside of vSphere Web client)


Let me know if there are questions on any of these demos.

Rapid Fire Summary of Opening Keynote at VMworld 2013

By Chris Ward, CTO, LogicsOne

For those of you who aren’t out in San Francisco at the 10th annual VMworld event, here is a quick overview of what was covered in the opening keynote delivered by CEO Pat Gelsinger:

  • Social, Mobile, Cloud & Big Data are the 4 largest forces shaping IT today
  • Transitioned from Mainframe –> Client Server –> Mobile Cloud
  • Pat set the stage that the theme of this year’s event is networking – a lead-in to a ton of Nicira/NSX information. I think VMware sees the core of the software defined datacenter as networking-based, and they are in a very fast race to beat out the competition in that space
  • Pat also mentioned that his passion is to get every x86 application/workload 100% virtualized. He drew parallels to Bill Gates saying his dream was a PC on every desk in every home that runs Microsoft software.
  • Next came announcements around vSphere 5.5 & vCloud Suite 5.5…here are some of the highlights:
    • 2x CPU and Memory limits and 32x storage capacity per volume to support mission critical and big applications
    • Application Aware high availability
    • Big Data Extensions – multi-tenant Hadoop capability via Serengeti
    • vSAN officially announced as public beta and will be GA by 1st half of 2014
    • vVOL is now in tech preview
    • vSphere Flash Read Cache included in vSphere 5.5

Next, we heard from Martin Casado. Martin is the CTO of Networking at VMware, came over with the Nicira acquisition, and spoke about VMware NSX. NSX is a combination of vCloud Network and Security (vCNS) and Nicira. Essentially, NSX is a network hypervisor that abstracts the underlying networking hardware just like ESX abstracts underlying server hardware.

Other topics to note:

  • IDC names VMware #1 in Cloud Management
  • VMware hypervisor fully supported as part of OpenStack
  • Growing focus on hybrid cloud. VMware will have 4 datacenters soon (Las Vegas, Santa Clara, Sterling, & Dallas). Also announcing partnerships with Savvis in NYC & Chicago to provide vCHS services out of Savvis datacenters.
  • End User Computing
    • Desktop as a Service on vCHS is being announced (I have an EUC Summit Dinner later tonight so I will be able to go into more detail after that).

So, all-in-all a good start to the event. Network virtualization/NSX is clearly the focus of this conference and vCHS is a not too distant 2nd. Something that was omitted from the keynote was the rewritten SSO engine for vCenter 5.5. The piece was weak for 5.1 and has been vastly improved with 5.5…this could be addressed tomorrow as most of the tech staff is in Tuesday’s general session.

If you’re at the event…I’ll actually be speaking on a panel tomorrow at 2:30 about balancing agility with service standardization. I’ll be joining Khalid Hakim and Kurt Milne of VMware, along with Dave Bartoletti of Forrester Research and Ian Clayton of Service Management 101. I will also be co-presenting on Wednesday with my colleague John Dixon at 2:30-3:30 in the Moscone West Room 2011 about deploying a private cloud service catalogue. Hopefully you can swing by.

More to come soon!


Software Defined Networking Series — Part 2: What Are the Business Drivers?

By Nick Phelps, Consulting Architect, LogicsOne


http://www.youtube.com/watch?v=7U9fCg1Zpio


In Part one of this series on Software Defined Networking, I gave a high level overview of what all the buzz is about. Here’s part two…in this video I expand on the capabilities of SDN by delving into the business drivers behind the concept. Leave any questions or thoughts in the comments section below.


Free webinar on 8/22: Horizon Suite – How to Securely Enable BYOD with VMware’s Next Gen EUC Platform.

With a growing number of consumer devices proliferating in the workplace, lines of business turning to cloud-based services, and people demanding more mobility in order to be productive, IT administrators are faced with a new generation of challenges for securely managing corporate data across a broad array of computing platforms.


Breaking: IBM Acquiring CSL International

IBM today announced a definitive agreement to acquire CSL International, a  provider of virtualization management technology for IBM’s zEnterprise system. CSL International is a privately held company headquartered in Herzliya Pituach, Israel.

The zEnterprise System enables clients to host the workloads of thousands of commodity servers on a single system for simplification, improved security and cost reduction. The combination of IBM and CSL International technologies will allow clients to manage all aspects of z/VM and Linux on System z virtualization, including CPU, memory, storage, and network resources.

How RIM Can Improve Efficiency and Add Value To Your IT Ops

This is a guest post from Chris Joseph, VP, Product Management & Marketing, NetEnrich


Cloud, virtualization and hybrid IT technologies are being used in small and large IT enterprises everywhere both to modernize and to achieve business goals and objectives. As such, a top concern for today’s IT leaders is whether the investments being made in these technologies are delivering on the promise of IT modernization. Another concern is finding ways to free up IT funds currently spent on routine maintenance of IT infrastructure, so that they can invest in these new and strategic IT modernization projects.

Don’t Waste Time, Money and Talent on Blinking Lights

Everyone knows that IT organizations simply can’t afford to have a team of people dedicated to watching for blinking lights and waiting for something to fix.  It’s a waste of talent and will quickly burn through even the most generous of IT budgets. Yet, according to a Gartner study, 80% of an enterprise IT budget is generally spent on routine IT, while only 20% is spent on new and strategic projects.

If this scenario sounds familiar, then you may want to consider taking a long and hard look at third-party Remote Infrastructure Management (RIM) services for your IT infrastructure management. In fact, RIM services have been shown to reduce spending on routine IT operations by 30-40%, but how is this possible?

(1)     First of all, RIM services rationalize, consolidate and integrate the tools used to monitor and manage IT infrastructure within an enterprise.  According to Enterprise Management Associates, a leading IT and data management research and consulting firm, a typical enterprise has nearly 11 such tools running in its environment, and these typically include IT Operations Management (ITOM) tools and IT Service Management (ITSM) tools. As any IT professional can attest, while there is significant overlap, some of these tools tend to be deficient in their capabilities, and they can be a significant source of noise and distraction, especially when it comes to false alerts and tickets. Yet, through RIM, IT organizations can eliminate many of these tools and consolidate their IT operations into a single pane of glass view, which can result in significant cost savings.

(2)     Secondly, by leveraging RIM, IT teams can be restructured and organized into shared services delivery groups, which can result in better utilization of skilled resources, while supporting the transformation of IT into a new model that acts as a service provider to business units.  Combine these elements of RIM with remote service delivery, and not only will you improve economies of scale and scope, but you will also promote cost savings.

(3)     Thirdly, RIM services consistently look to automation, analytics, and best practices to promote cost savings in the enterprise. Manual processes and runbooks are not only costly, but also time consuming and error prone. Yet, to automate processes effectively, IT organizations must rely on methodologies, scripts, and tools. This is where RIM comes into play. In fact, within any enterprise, 60-80% of manual processes and runbooks can easily be automated with RIM.
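To make point (3) more concrete, here is a minimal sketch of the kind of routine runbook step that gets automated: a disk-space check that performs an agreed first-line cleanup and opens a ticket only when human attention is actually needed. The threshold, the cleanup path, and the open_ticket() stub are illustrative assumptions, not any particular RIM toolset.

    # A minimal sketch of an automated runbook step; paths, thresholds, and the
    # ticketing call are illustrative assumptions.
    import shutil
    from pathlib import Path

    WARN_THRESHOLD = 0.80                  # escalate above 80% disk used
    TEMP_DIR = Path("/var/tmp/app-cache")  # assumed safe-to-clean location

    def disk_usage_fraction(path="/"):
        usage = shutil.disk_usage(path)
        return usage.used / usage.total

    def remediate():
        """First-line automated fix: purge the agreed-upon cache directory."""
        for item in TEMP_DIR.glob("*"):
            if item.is_file():
                item.unlink()

    def open_ticket(summary):
        # Placeholder for the provider's ticketing integration.
        print(f"TICKET: {summary}")

    def run_check():
        if disk_usage_fraction() < WARN_THRESHOLD:
            return  # nothing to do: no alert noise, no ticket
        remediate()
        if disk_usage_fraction() >= WARN_THRESHOLD:
            open_ticket(f"Disk still {disk_usage_fraction():.0%} full after cleanup")

    if __name__ == "__main__":
        run_check()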

Download this free whitepaper to learn how to avoid focusing on “keeping the lights on” so your team can focus on strategic initiatives.

Beyond Cost Savings and Greater Efficiency: Building a Case for RIM

In addition to reducing routine spending and improving the efficiency of your IT operations, there are several other benefits to leveraging third-party RIM services:

  • 24×7 IT operations support.  Third-party RIM services often provide 24×7 IT ops support.  IT organizations benefit from around-the-clock monitoring and management of their IT infrastructures without additional headcount or straining internal resources, which saves operating costs.
  • Be the first to know. 24×7 IT operations support means that you are always the first to know when customer-facing IT systems such as the company’s website, online shopping portal, mobile apps and cloud-based solutions go down. In many cases, the issue is resolved by RIM services teams before end users even notice.
  • Skills and expertise. Third-party RIM services can provide your IT organization with certified engineers in various IT infrastructure domains. These engineers are responsible for monitoring, alerting, triaging, ticketing, incident management, and the escalation of critical outages or errors to you and your IT staff, if they cannot be immediately resolved. In addition, they may also be available on an on-demand basis if you are looking for skills and expertise in a specific domain.

The bottom line: by leveraging RIM services, IT organizations like yours can not only enhance their service capabilities and bolster service levels, but they can also say goodbye to the fire drills and late night calls that plague your IT staff.  Proactive management of your IT infrastructure through RIM ensures that it is always running at peak performance.

To hear more from Chris, visit the NetEnrich blog

To learn more about how GreenPages can help you monitor and manage your IT Operations fill out this form

Want to Go Cloud? What’s the Use Case?

By Lawrence Kohan, Senior Consultant, LogicsOne

This is the first of a two-part blog series intended to provide practical, real world examples of when it makes sense to use the cloud and when it does not.

We’re well into an exciting new era in the technology world.  The buzz-words are flying around at light speeds, and talk of “Cloud” and “software-defined-everything” is all the rage.

Advances in virtualization, which decouples software processes from the underlying hardware, are opening up amazing possibilities for moving workloads around as needed, either between racks in a datacenter or even between datacenters!  In addition, the concept of “Cloud” is very exciting in the possibilities it offers businesses to leverage these advances by moving workloads offsite for greater availability, redundancy, and disaster recovery.

Indeed, the promise of the Cloud as a whole is to provide IT as a service.  It’s a way of offering companies resources on a metered usage basis, so that they can consume, grow, or reduce their resources as needed and pay only for what they use.  The hope is to free a business and its IT staff from the mundane daily details and repetitive administrative burdens of keeping the business functioning, allowing them to be more strategic with their time and efforts.  In the Cloud Era, servers and desktops can be provisioned, configured, and deployed in minutes instead of weeks!  The time saved allows the business to focus on the areas that make it more profitable, such as marketing and advertising strategies, application/website development, and the betterment of its products and services.

Cloudy Conditions Ahead

All of this sounds like a wonderful dream.  However, before jumping in, it is important to understand what the business goals are.  What is it you intend to get out of the Cloud?  How do you intend to leverage it to your best advantage?  These questions and answers must come first before any decision is made regarding software vendors or Cloud service providers to be used.  The promise of the Cloud is tremendous gains in efficiency, but only when it is adopted and utilized correctly.


Register for our upcoming free webinar on June 12th on what’s missing in hybrid cloud management today. Speakers include Chris Ward from GreenPages, Praveen Asthana from Gravitant, and David Bartoletti, a top analyst from Forrester Research.


To Host or Not to Host?

For starters, let’s look at a simple use case: Whether or not to host a company’s e-mail in the Cloud.

Pros:

  • Hosting email will be billed on a per-usage basis, either by number of user mailboxes, number of emails sent/received, or storage used.
  • Day-to-day administration, availability, fault tolerance, backups are all handled by the service provider.  Little administration is needed aside from creating user accounts and mailboxes.
  • Offsite-hosted email still has the same look-and-feel as on-premise email, and can be accessed remotely, in the same ways, from anywhere.  Most users don’t even know the difference!

Cons:

  • Company is subject to outages and downtime windows of the service provider.  (In such a case, as long as it is not an unplanned outage or disaster, steps should be taken to ensure continued e-mail delivery, but systems may be unavailable for periods of time, usually on weekends or overnight)
  • Initial migration and large data transfers can be an administrative burden.

There are factors that can be either positives or negatives depending on the business size and need.  For example, a small startup company with only a few people needs to be extremely budget conscious.  In their case, it would certainly make more sense financially to outsource their e-mail for a monthly fee instead of looking to install and maintain their own internal email servers, which after hardware, software, and licensing costs would run five figures, not to mention needing at least one dedicated person to maintain it.  This is certainly not a cost-effective option for a small, young company trying to get off the ground.
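As a back-of-the-envelope illustration of that trade-off for the startup scenario, here is a quick sketch. Every number below is an assumption made for the sake of the example, not a quote from any provider.

    # Illustrative comparison for a small startup; all figures are assumptions.
    users = 8                        # a small startup with only a few people
    hosted_per_mailbox_month = 8     # assumed hosted e-mail fee per mailbox
    months = 36                      # compare over three years

    hosted_total = users * hosted_per_mailbox_month * months

    on_prem_hardware_software = 15_000   # "five figures" for servers and licensing
    admin_cost_per_year = 20_000         # part of one dedicated person's time
    on_prem_total = on_prem_hardware_software + admin_cost_per_year * 3

    print(f"Hosted e-mail over 3 years:  ${hosted_total:,}")   # ~$2,300
    print(f"On-premise over 3 years:     ${on_prem_total:,}")  # ~$75,000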


Download this free whitepaper to learn more about how organizations can revolutionize the way they manage hybrid cloud environments.


At the same time, a very large enterprise with thousands of mailboxes may find the process of migration to be an expensive, time consuming administrative burden.  While offsite email would offer good availability and safeguards against system failure, perhaps even above and beyond what the enterprise currently utilizes, it is also a substantial risk: if the Cloud Provider has an outage, it could affect the enterprise’s email access.  The same risk would apply to a small business as well; however, the smaller and more localized the business, the more likely they are to adapt to an e-mail outage and resume intra-office communications via secondary means—a contingency plan that is more difficult to act upon for a larger global enterprise.  And, yes, the enterprise that hosts e-mail internally has the same risk of an outage; however, that enterprise can respond to an internal e-mail outage immediately and ascertain how long the outage will be, instead of being at the mercy of the Cloud Provider’s timetable and troubleshooting efforts.

Therefore, in our sample “hosted e-mail” use case, moving e-mail services to the Cloud is well worth considering for a smaller business, but may not provide much value, if any, for the enterprise.

In the second part of this two-part blog series, I will cover when it makes sense for medium to large businesses to utilize the cloud. In the meantime, I would love to hear your thoughts!

Webinar June 12th 11am-12pm EST “What’s Missing in Today’s Hybrid Cloud Management – Leveraging Cloud Brokerage” Speakers from Forrester, GreenPages, and Gravitant. Register here!

Survey Infographic: Customers Relying on Virtualization Vendors For Security

BeyondTrust has released a survey, Virtual Insecurity, that reveals organizations are relying heavily on virtualization vendors for security, if they rely on any security at all. Key survey takeaways from the 346 respondents that participated include:

  • 42 percent do not use security tools regularly as part of their virtual systems administration
  • 34 percent lean heavily on antivirus protection as a primary security tool
  • 57 percent often use existing image templates for new virtual images
  • Nearly 3 out of every 4 respondents say that up to a quarter of virtual guests are offline at any given time
  • 64 percent have no security controls in place that require a security sign off prior to releasing a new virtual image or template

Here’s an infographic based on these results:

[Infographic: Virtual Insecurity survey results]

Measurement, Control and Efficiency in the Data Center

Guest Post by Roger Keenan, Managing Director of City Lifeline

To control something, you must first be able to measure it.  This is one of the most basic principles of engineering.  Once there is measurement, there can be feedback.  Feedback creates a virtuous loop in which the output changes to better track the changing input demand.  Improving data centre efficiency is no different.  If efficiency means better adherence to the demand from the organisation for lower energy consumption, better utilisation of assets, faster response to change requests, then the very first step is to measure those things, and use the measurements to provide feedback and thereby control.

So what do we want to control?  We can divide it into three: the data centre facility, the use of compute capacity and the communications between the data centre and the outside world.  The balance of importance of those will differ between all organisations.

There are all sorts of types of data centres, ranging from professional colocation data centres to the server-cupboard-under-the-stairs found in some smaller enterprises.  Professional data centre operators focus hard on the energy efficiency of the total facility.  The most common measure of energy efficiency is PUE, defined originally by the Green Grid organisation.  This is simple: the energy going into the facility divided by the energy used to power electronic equipment.  Although it is often abused – a nice example is the data centre that powered its facility lighting over PoE (power over Ethernet), thus making the lighting part of the ‘electronic equipment’ – it is widely understood and used world-wide.  It provides visibility and focus for the process of continuous improvement.  It is easy to measure at facility level, as it only needs monitors on the mains feeds into the building and monitors on the UPS outputs.
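A quick worked example of that definition, with illustrative numbers:

    # Worked example of the PUE definition above (illustrative numbers only).
    facility_energy_kwh = 1_300_000   # total energy entering the facility (mains feeds)
    it_energy_kwh = 1_000_000         # energy delivered to electronic equipment (UPS outputs)

    pue = facility_energy_kwh / it_energy_kwh
    print(f"PUE = {pue:.2f}")   # 1.30: every 1 kWh of IT load carries 0.30 kWh of overhead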

Power efficiency can be managed at multiple levels:  at the facility level, at the cabinet level and at the level of ‘useful work’.  This last is difficult to define, let alone measure, and there are various working groups around the world trying to decide what ‘useful work’ means.  It may be compute cycles per kW, revenue generated within the organisation per kW or application run time per kW, and it may be different for different organisations.  Whatever it is, it has to be properly defined and measured before it can be controlled.

DCIM (data centre infrastructure management) systems provide a way to measure the population and activity of servers and particularly of virtualised machines.  In large organisations, with potentially many thousands of servers, DCIM provides a means of physical inventory tracking and control.  More important than the question “how many servers do I have?” is “how much useful work do they do?”  Typically a large data centre will have around 10% ghost servers – servers which are powered and running but which do not do anything useful.  DCIM can justify its costs and the effort needed to set it up on those alone.

Virtualisation brings its own challenges.  Virtualisation has taken us away from the days when a typical server operated at 10-15% utilisation, but we are still a long way from most data centres operating efficiently with virtualisation.  Often users will over-specify server capacity for an application, using more CPUs, memory and storage than really needed, just to be on the safe side and because they can.  Users see the data centre as a sunk cost – it’s already there and paid for, so we might as well use it.  This creates ‘VM Sprawl’.  The way out of this is to measure, quote and charge.  If a user is charged for the machine time used, that user will think more carefully about wasting it and about piling contingency allowance upon contingency allowance ‘just in case’, leading to inefficient stranded capacity.  And if the user is given a real-time quote for the costs before committing to them, they will think harder about how much capacity is really needed.
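A minimal sketch of the ‘measure, quote and charge’ idea, assuming simple illustrative rates: a real-time quote makes the cost of ‘just in case’ sizing visible before the user commits.

    # Showback/chargeback sketch; the rates are illustrative assumptions.
    RATE_PER_VCPU_MONTH = 25.0
    RATE_PER_GB_RAM_MONTH = 8.0
    RATE_PER_GB_DISK_MONTH = 0.10

    def monthly_quote(vcpus, ram_gb, disk_gb):
        return (vcpus * RATE_PER_VCPU_MONTH
                + ram_gb * RATE_PER_GB_RAM_MONTH
                + disk_gb * RATE_PER_GB_DISK_MONTH)

    # "Just in case" sizing vs. what the measured workload actually needs:
    print(f"Over-specified (8 vCPU, 64 GB RAM, 2 TB disk): ${monthly_quote(8, 64, 2048):,.2f}/month")
    print(f"Right-sized    (2 vCPU, 16 GB RAM, 200 GB disk): ${monthly_quote(2, 16, 200):,.2f}/month")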

Data centres do not exist in isolation.  Every data centre is connected to other data centres and often to multiple external premises, such as retail shops or oil rigs.  Often those have little redundancy and may well not operate efficiently.  Again, to optimise efficiency and reliability of those networks, the first requirement is to be able to measure what they are doing.  That means having a separate mechanism at each remote point, connected via a different communications network back to a central point.  The mobile phone network often performs that role.

Measurement is the core of all control and efficiency improvement in the modern data centre.  If the organisation demands improved efficiency (and if it can define what that means) then the first step to achieving it is measurement of the present state of whatever it is we are trying to improve.  From measurement comes feedback.  From feedback comes improvement and from improvement comes control.  From control comes efficiency, which is what we are all trying to achieve.

Roger Keenan, Managing Director of City Lifeline

Roger Keenan joined City Lifeline, a leading carrier neutral colocation data centre in Central London, as managing director in 2005.  His main responsibilities are to oversee the management of all business and marketing strategies and profitability. Prior to City Lifeline, Roger was general manager at Trafficmaster plc, where he fully established Trafficmaster’s German operations and successfully managed the $30 million acquisition of Teletrac Inc in California, becoming its first post-acquisition Chief Executive.

StratoGen Launches Hosted VMware View Solution

StratoGen, a provider of VMware hosting services, today announced the launch of ViewDesk, a hosted VMware View solution which promises to simplify the deployment of virtual desktops in the cloud.  Available in StratoGen datacenter locations across the US, UK and Asia, ViewDesk offers a cost effective and verified VDI environment for 25 to 500 virtual desktops.

The move comes in response to exceptional demand for desktop-as-a-service solutions built on the latest VMware View platform.

StratoGen ViewDesk  helps organizations to quickly benefit from the operational savings of desktop virtualization by streamlining the deployment of a dedicated VMware View platform and enabling migration from on-site PCs to hosted virtual desktops.

Highlights of the solution include:

  •     Cost effective, verified VMware View infrastructure built to best practice standards
  •     Delivered on enterprise class components from NetApp, HP and Cisco
  •     Scale-as-you-grow platform enables VDI resources to grow with your business
  •     ISO27001 certified security at platform and datacenter levels
  •     Full administrative access provided
  •     Available now at StratoGen cloud locations across the US, UK and Asia.

Karl Robinson, VP of Sales, said: “Most businesses are aware that a hosted desktop solution delivers significant increases in IT manageability, security and performance. With the launch of our ViewDesk platform we have taken the complexity out of implementing desktop virtualization by delivering a verified solution built to the highest standards. Organizations deploying ViewDesk will immediately enjoy much higher levels of availability and agility in desktop services when compared to on-site desktop estates.”

For further information please visit http://www.stratogen.com/products/hosted-desktop.html