Category Archive: Featured

Behind the Scenes of IT Resource Scheduling

By Ryann Edwards, PMP, Resource Specialist, LogicsOne

 

A few years ago, we made a change at GreenPages-LogicsOne to streamline our resource scheduling process. It’s a good thing, too, because so far in 2013 close to 400 engagements have used this new process. While scheduling a large group of resources may sound easy, at times it feels like it takes a team of highly skilled scientific specialists and analysts to get it right. Ok, that might be a bit of an exaggeration, but there actually is a bit of a science to it.

I should begin with the disclaimer that the process hasn’t always been so efficient. In fact, most of my new hire trainings begin with the “back in the day” spiel because it brings to light the lessons we’ve learned, which in turn leads to where we are today. So where did we begin? We were slightly blind. Our services team of Account Executives, Solutions Architects, and Project Managers worked in silos when it came to selecting resources for our services engagements. Everyone involved had the best of intentions: find the right resource for the project, meet our customers’ deadlines and requests, schedule the project, and implement a highly successful engagement. The problem came when multiple Project Managers had “just the right project” for “just the right Consultant” who, yes, happened to be the same person. Needless to say, as our professional services organization grew and matured over the years, the need for a streamlined scheduling system became strong. That brings us to present day, where we now have a Resource Specialist team to handle scheduling requests.

As I mentioned above, there is quite a bit of thought and strategy (“science” may have been pushing it) that plays out behind the scenes when it comes to the scheduling of service projects. It is imperative that at the forefront of it all is our customers’ best interests, including special requests and internal deadlines. While some might joke that we should just throw darts at a board of names to figure out who to schedule, I assure you that we really don’t. In fact, we look at each Statement of Work and scheduling request that comes into our queue in great detail. From researching the background and history that Consultants may already have with a client to weighing geographical location, travel, availability, customer dependencies, and deadlines, there are a lot of considerations.
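To make the “science” a little more concrete, here is a hypothetical sketch of the kind of weighted matching described above. The criteria (client history, location and travel, availability) come straight from this post, but the weights, field names, and scoring logic are invented for illustration; this is not our actual tooling.

```python
# Illustrative only: a toy weighted-scoring heuristic for matching a
# consultant to an engagement. Weights and data fields are assumptions.

WEIGHTS = {"client_history": 3, "local": 2, "available": 5}

def match_score(consultant, engagement):
    """Return a simple weighted score for one consultant/engagement pair."""
    score = 0
    if engagement["client"] in consultant["past_clients"]:
        score += WEIGHTS["client_history"]  # prior history with this client
    if consultant["region"] == engagement["region"]:
        score += WEIGHTS["local"]  # less travel required
    if consultant["available_from"] <= engagement["start"]:
        score += WEIGHTS["available"]  # free by the requested start date
    return score

def best_match(consultants, engagement):
    """Pick the highest-scoring consultant for an engagement."""
    return max(consultants, key=lambda c: match_score(c, engagement))
```

In practice availability usually outweighs everything else, which is why it carries the largest weight in this sketch; a real scheduler would also account for dependencies, travel cost, and utilization targets.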

At the end of the day, our objective is always the same: to make sure we are looking at the big picture and are doing everything we can to keep our internal and external customers satisfied.  Our top priority is making sure the resource(s) assigned to a project are a good match for all parties involved, so the outcome is a successful professional service engagement for our customers.  Believe it or not, our customers can help in this process. Here are some things that help us ensure a successful engagement:

  1. Sign Off. The signed Statement of Work is crucial. It is the only way we can fairly and accurately prioritize requests for services.
  2. Information. The more details you can provide regarding the project or services, the better. Does a key resource on the project have an upcoming vacation? Are there outside dependencies that will affect when your project can start? Do you have an important internal deadline that you need to meet? All of those things are pieces to the scheduling puzzle.
  3. Be open-minded. While you may have worked with one Consultant in the past and would like to use them again, we have a full staff of highly qualified resources that welcome the opportunity to work with you!

Streamlining the scheduling process has allowed members of the services organization to focus on other important aspects of the project lifecycle: planning, managing, and executing. Having a team dedicated solely to resourcing has improved scheduling efficiency, increased visibility into the utilization of the solutions team, and is a key piece of the puzzle for successful project delivery.

 

Project Management Form

Looking for more information around Project Management? Please fill out this form and we will get in touch with you shortly.


Top 10 Ways to Kill Your VDI Project

By Francis Czekalski, Consulting Architect, LogicsOne

Earlier this month I presented at GreenPages’ annual Summit Event. My breakout presentation this year was an End User Computing Super Session. In this video, I summarize the ‘top 10 ways to kill your VDI project.’

If you’re interested in learning more, download this free on-demand webinar where I share some real world VDI battlefield stories.

http://www.youtube.com/watch?v=y9w1o0O8IaI


Rapid Fire Summary of Carl Eschenbach’s General Session at VMworld 2013

By Chris Ward, CTO, LogicsOne

I wrote a blog on Monday summarizing the opening keynote at VMworld 2013. Checking in again quickly to summarize Tuesday’s General Session. VMware’s COO Carl Eschenbach took the stage and informed the audience that there are 22,500 people in attendance, which is a new record for VMware. This makes it the single largest IT infrastructure event of the year. 33 of these attendees have been to all 10 VMworlds, and Carl is one of them.

Carl started the session by providing a recap of Monday’s announcements around vSphere/vCloud Suite 5.5, NSX, vSAN, vCHS, and Cloud Foundry. The overall mantra of the session revolved around IT as a Service. The following points were key:

  • Virtualization extends to ALL of IT
  • IT management gives way to automation
  • Compatible hybrid cloud will be ubiquitous
  • Foundation is SDDC

After this came a plethora of product demos. If you’d like to check out the demos, you can watch the full presentation here: http://www.vmworld.com/community/conference/us/learn/generalsessions

vCAC Demo

  • Started with the service catalogue, showing options to deploy an app to a private or public cloud, along with the costs of each option
    • I’m assuming this is showing integration between vCAC & ITBM, although that was not directly mentioned
  • Next they displayed the database options as part of the app – assuming this is vFabric Data Director (DB as a Service)
  • Showed the auto-scale option
  • Showed the health of the application after deployment…this appears to be integration with vCOPS (again, not mentioned)
  • The demo showed how the product provides self-service, transparent pricing, governance, and automation

NSX Demo

  • Started with a networking conversation around why there are challenges with networking being the ball and chain of the VM. After that, Carl discussed the features and functions that NSX can provide. Some key ones were:
    • Route, switch, load balance, VPN, firewall, etc.
  • Displayed the vSphere web client & looked at the automated actions that happened via vCAC and NSX  during the app provisioning
  • What was needed to deploy this demo you may ask? L2 switch, L3 router, firewall, & load balancer. All of this was automated and deployed with no human intervention
  • Carl then went through the difference in physical provisioning vs. logical provisioning with NSX & abstracting the network off the physical devices.
  • WestJet has deployed NSX, and we got to hear a little about their experiences
  • There was also a demo to show you how you can take an existing VMware infrastructure and convert/migrate to an NSX virtual network. In addition, it showed how vMotion can make the network switch with zero downtime

The conversation then turned to storage. They covered the following:

  • Requirements of SLAs, policies, management, etc. for mission critical apps in the storage realm
  • vSAN discussion and demo
  • Storage policy can be attached at the VM layer so it is mobile with the VM
  • Showcased adding another host to the cluster and the local storage is auto-added to the vSAN instance
  • Resiliency – can choose how many copies of the data are required

IT Operations:

  • Traditional management silos have to change
  • Workloads are going to scale to massive numbers and be spread across numerous environments (public and private)
  • Conventional approach is scripting and rules which tend to be rigid and complex –> Answer is policy based automation via vCAC
  • Showed example in vCOPS of a performance issue and drilled into the problem…then showed performance improve automatically due to automated proactive response to detected issues.  (autoscaling in this case)
  • Discussing hybrid and seamless movement of workloads to/from private/public cloud
  • Displayed vCHS plugin to the vSphere web client
  • Showed template synchronization between private on prem vSphere environment up to vCHS
  • Provisioned an app from vCAC to public cloud (vCHS)  (it shows up inside of vSphere Web client)

 

Let me know if there are questions on any of these demos.

Rapid Fire Summary of Opening Keynote at VMworld 2013

By Chris Ward, CTO, LogicsOne

For those of you who aren’t out in San Francisco at the 10th annual VMworld event, here is a quick overview of what was covered in the opening keynote, delivered by CEO Pat Gelsinger:

  • Social, Mobile, Cloud & Big Data are the 4 largest forces shaping IT today
  • Transitioned from Mainframe –> Client Server –> Mobile Cloud
  • Pat sets the stage that the theme of this year’s event is networking – basically setting the stage for a ton of Nicira/NSX information. I think VMware sees the core of the software defined datacenter as networking-based, and they are in a very fast race to beat out the competition in that space
  • Pat also mentioned that his passion is to get every x86 application/workload 100% virtualized. He drew parallels to Bill Gates saying his dream was a PC on every desk in every home that runs Microsoft software.
  • Next came announcements around vSphere 5.5 & vCloud Suite 5.5…here are some of the highlights:
    • 2x CPU and Memory limits and 32x storage capacity per volume to support mission critical and big applications
    • Application Aware high availability
    • Big Data Extensions – multi-tenant Hadoop capability via Serengeti
    • vSAN officially announced as public beta and will be GA by 1st half of 2014
    • vVOL is now in tech preview
    • vSphere Flash Read Cache included in vSphere 5.5

Next, we heard from Martin Casado. Martin is the CTO of Networking at VMware; he came over with the Nicira acquisition and spoke about VMware NSX. NSX is a combination of vCloud Networking and Security (vCNS) and Nicira. Essentially, NSX is a network hypervisor that abstracts the underlying networking hardware just like ESX abstracts underlying server hardware.

Other topics to note:

  • IDC names VMware #1 in Cloud Management
  • VMware hypervisor fully supported as part of OpenStack
  • Growing focus on hybrid cloud. VMware will have 4 datacenters soon (Las Vegas, Santa Clara, Sterling, & Dallas). Also announcing partnerships with Savvis in NYC & Chicago to provide vCHS services out of Savvis datacenters.
  • End User Computing
    • Desktop as a Service on vCHS is being announced (I have an EUC Summit Dinner later tonight, so I will be able to go into more detail after that).

So, all-in-all a good start to the event. Network virtualization/NSX is clearly the focus of this conference, and vCHS is a not-too-distant 2nd. Something that was omitted from the keynote was the rewritten SSO engine for vCenter 5.5. That piece was weak in 5.1 and has been vastly improved in 5.5…this could be addressed tomorrow, as most of the tech staff is in Tuesday’s general session.

If you’re at the event…I’ll actually be speaking on a panel tomorrow at 2:30 about balancing agility with service standardization. I’ll be joining Khalid Hakim and Kurt Milne of VMware, along with Dave Bartoletti of Forrester Research and Ian Clayton of Service Management 101. I will also be co-presenting on Wednesday with my colleague John Dixon at 2:30-3:30 in the Moscone West Room 2011 about deploying a private cloud service catalogue. Hopefully you can swing by.

More to come soon!

 

Software Defined Networking Series — Part 2: What Are the Business Drivers?

By Nick Phelps, Consulting Architect, LogicsOne

 

http://www.youtube.com/watch?v=7U9fCg1Zpio

 

In Part one of this series on Software Defined Networking, I gave a high level overview of what all the buzz is about. Here’s part two…in this video I expand on the capabilities of SDN by delving into the business drivers behind the concept. Leave any questions or thoughts in the comments section below.

 

Free webinar on 8/22: Horizon Suite – How to Securely Enable BYOD with VMware’s Next Gen EUC Platform.

With a growing number of consumer devices proliferating in the workplace, lines of business turning to cloud-based services, and people demanding more mobility in order to be productive, IT administrators are faced with a new generation of challenges for securely managing corporate data across a broad array of computing platforms.

 

A Guide to Successful Cloud Adoption

Last week, I met with a number of our top clients near the GreenPages HQ in Portsmouth, NH at our annual Summit event to talk about successful adoption of cloud technologies. In this post, I’ll give a summary of my cloud adoption advice, and cover some of the feedback that I heard from customers during my discussions. Here we go…

The Market for IT Services

I see compute infrastructure looking more and more like a commodity, and there is intense competition in the market for IT services, particularly Infrastructure-as-a-Service (IaaS).

  1. “Every day, Amazon installs as much computing capacity in AWS as it used to run all of Amazon in 2002, when it was a $3.9 billion company.” – CIO Journal, May 2013
  2. “[Amazon] has dropped the price of renting dedicated virtual server instances on its EC2 compute cloud by up to 80 percent […] from $10 to $2 per hour” – ZDNet, July 2013
  3. “…Amazon cut charges for some of its services Friday, the 25th reduction since its launch in 2006.” – CRN, February 2013

I think that the first data point here is absolutely stunning, even considering that it covers a time span of 11 years. Of course, a simple Google search will return a number of other similar quotes. How can Amazon and others continue to drop their prices for IaaS, while improving quality at the same time? From a market behavior point of view, I think that the answer is clear – Amazon Web Services and others specialize in providing IaaS. That’s all they do. That’s their core business. Like any other for-profit business, IaaS providers prefer to make investments in projects that will improve their bottom line. And, like any other for-profit business, those investments enable companies like AWS to effectively compete with other providers (like Verizon/Terremark, for example) in the market.

Register for our upcoming webinar on 8/22 to learn how to deal with the challenges of securely managing corporate data across a broad array of computing platforms. 

With network and other technologies as they are, businesses now have a choice of where to host infrastructure that supports their applications. In other words, the captive corporate IT department may be the preferred provider of infrastructure (for now), but they are now effectively competing with outside IaaS providers. Why, then, would the business not choose the lowest cost provider? Well, the answer to that question is quite the debate in cloud computing (we’ll put that aside for now). Suffice to say that we think that internal corporate IT departments are now competing with outside providers to provide IaaS and other services to the business and that this will become more apparent as technology advances (e.g., as workloads become more portable, network speeds increase, storage becomes increasingly less costly, etc.).

Now here’s the punch line and the basis for our guidance on cloud computing: how should internal corporate IT position itself to stay competitive? At our annual Summit event last week, I discussed the progression of the corporate IT department from a provider of technology to a provider of services (see my whitepaper on cloud management for detail). The common thread is that corporate IT evolves by becoming closer and closer to the requirements of the business – and may even be able to anticipate requirements of the business or suggest emerging technology to benefit the business. To take advantage of cloud computing, one thing corporate IT can do is source commodity services to outside providers where it makes sense. Fundamentally, this has been commonplace in other industries for some time – manufacturing being one example. OEM automotive manufacturers like GM and Ford do not produce the windshields and brake calipers that are necessary for a complete automobile – it just isn’t worth it for GM or Ford to produce those things. They source windshields, brake calipers, and other components from companies who specialize. GM, Ford, and others are then left with more resources to invest in designing, assembling, and marketing a product that appeals to end users like you and me.

So, it comes down to this: how do internal corporate IT departments make intelligent sourcing decisions? We suggest that the answer is in thinking about packaging and delivering IT services to the business.

GreenPages Assessment and Design Method

So, how does GreenPages recommend that customers take advantage of cloud computing? Even if you are not considering external cloud at this time, I think it makes sense to prepare your shop for it; cloud may eventually make sense for your shop even if there is no fit for it today. The guidance here is to take a methodical look at how your department is staffed and operated. ITIL v2 and v3 provide a good guide to what should be examined:

  • Configuration Management
  • Financial Management
  • Incident and Problem Management
  • Change Management
  • Service Level and Availability, and Service Catalog Management
  • Lifecycle Management
  • Capacity Management
  • Business Level Management

 

Assigning a score to each of these areas in terms of repeatability, documentation, measurement, and continuous improvement will paint the picture of how well your department can make informed sourcing decisions. Conducting an assessment and making some housekeeping improvements where needed will serve two purposes:
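For illustration only, the scoring exercise described above can be sketched in a few lines. The process areas come from the list above and the rating dimensions (repeatability, documentation, measurement, continuous improvement) from this paragraph; the 1–5 scale, the remediation threshold, and the data layout are assumptions, not a prescribed method.

```python
# Illustrative sketch: rate each ITIL process area 1-5 on four dimensions,
# average the ratings, and flag low-scoring areas for remediation.

AREAS = [
    "Configuration Management",
    "Financial Management",
    "Incident and Problem Management",
    "Change Management",
    "Service Level, Availability, and Service Catalog Management",
    "Lifecycle Management",
    "Capacity Management",
    "Business Level Management",
]
DIMENSIONS = ["repeatability", "documentation", "measurement", "improvement"]

def area_score(ratings):
    """Average the 1-5 ratings for one process area."""
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

def weakest_areas(scorecard, threshold=3.0):
    """Flag areas scoring below the threshold as remediation candidates."""
    return [area for area, ratings in scorecard.items()
            if area_score(ratings) < threshold]
```

The flagged areas become the remediation plan mentioned below; the exact scale and threshold matter less than scoring every area the same way so the results are comparable.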

  1. Plans for remediation could form one cornerstone of your cloud strategy
  2. Doing things according to good practice will add discipline to your IT department – which is valuable regardless of your position on cloud computing at this time

When and if cloud computing services look like a good option for your company, your department will be able to make an informed decision on which services to use at which times. And, if you’re building an internal private cloud, the processes listed above will form the cornerstone of the way you will operate as a service provider.

Case Study: Service Catalog and Private Cloud

By implementing a Service Catalog, corporate IT departments can take a solid first step toward becoming a service provider and staying close to the requirements of the business. This year at VMworld in San Francisco, I’ll be leading a session to present a case study of a recent client that did exactly this with our help. If you’re going to be out at VMworld, swing by and listen in to my session!

 

 


 

Cloud Corner Series – Unified Communications in the New IT Paradigm

http://www.youtube.com/watch?v=XHp6Q5RMMR8

 

In this segment of Cloud Corner, Lou Rossi, former CEO of Qoncert and new GreenPages-LogicsOne employee, answers questions about how unified communications fits into the new IT paradigm moving forward.

We’ll be hosting a free webinar on 8/22: How to Securely Enable BYOD with VMware’s Next Gen EUC Platform. Register Now!

Day 5 at Cisco Live – Video Recap

By Nick Phelps, Consulting Architect, LogicsOne

http://www.youtube.com/watch?v=we5PRDAH_p0

Here’s the recap of the final day of Cisco Live. All in all, a great event with a ton of useful information. I got to sit in on some great sessions and get hands-on experience with a lot of cutting edge technologies. You can watch the recaps of days 1-4 here if you missed them:

Day 1

Day 2

Day 3 & 4


Five Things to Consider Before Choosing your Professional Services Automation Tool

By Alyson Gallant, PMP, Project Administrator, LogicsOne

Choosing a Professional Services Automation (PSA) tool can be an arduous task. There are a number of options out there, and everyone’s business and workflows are unique. Also, the potential cost and time of evaluating a number of tools and running a proof of concept can be overwhelming.

Why do you need a PSA tool? A PSA tool is crucial to understanding where your resources are spending their time and the profitability of projects. A suitable PSA tool should allow your organization to continuously evaluate performance in order to improve and scale.

Like any Professional Services organization (or any organization looking to track time and budget on large internal projects), we’ve evaluated and used a number of different PSA tools. During the implementation of our current PSA tool, the Project Management Office (PMO) was lucky enough to have significant input into our PSA tool selection as well as the rollout to our Professional Services organization.

There is no one size fits all when it comes to a PSA tool, and there are many methods of evaluation, which we won’t get into here. However, here’s some food for thought based on our experiences for anyone preparing to search for a new PSA tool:

1) Can you consolidate your applications?
Do you have a lot of applications? Will your new PSA tool be yet another application for your users? Are your users experiencing application overload? If so, have you investigated what your current CRM application already offers? A number of PSA applications are part of a larger suite, and the more modules you use, the better your data flows between them.

2) Are you trying too hard to reinvent the wheel? Customization vs. integration.
One of the potential cons of going to a tool that you cannot customize is that you may have to change your workflow to fit the tool rather than having a tool that works with your existing workflow. But is that necessarily a bad thing? Using an application that hundreds or thousands of other customers are using can help provide a baseline for what your organization should be doing if your current workflow has a number of inefficiencies. Also, if you customize your tool, how does that affect future upgrades? Be open to re-evaluating your workflow, and look for a tool that supports integrations with your other applications as an alternative to customization.

3) Is there an active community of users?
A great asset for any PSA tool is the energy and enthusiasm of its community of users. We did not have this with our first PSA tool, and it wasn’t until our second that we recognized its value. Through an online community, users actively discuss new releases and provide feedback on forums open to all users, and they are encouraged to enter “New Feature Requests” and vote on them. We all know our time is highly prized during the workday, but when you’re running into an issue or a workflow conundrum, it can be a good gut check to look at what others are seeing or experiencing and confirm you’re on the right track.

4) Are you able to roll out your tool in a staged approach?
If you have the luxury of rolling out a PSA in stages, it may be easier to encourage adoption among your users and to ensure they are entering accurate data. As we all know, change can be difficult, and when users are overwhelmed and unsure of a new process, it may not be the best setting for entering the most accurate information. If you can roll out a single module of your new PSA at a time, your users can focus on getting each process down correctly before moving on to the next one. A staged approach may not always work for your rollout, but it is worth considering to ensure you have “good data”.

5) Are you willing to perform constant evaluation on the new tool and provide recurring training?
As rollouts can take time, there can be quite a gap between inputting your data into your new PSA tool and evaluating the data that you extract. What happens when you extract data that isn’t useful? What if the information is incorrect? You’ll need to constantly gauge how well your workflow is providing your management with information, and changing that workflow can require new training. Make sure to factor this in with the rollout of your PSA tool – the work is never done.

Are you in the market for a new tool to track your projects? What do you use currently, and what are your pain points?

 

If you do have any questions, this is something we can help with as part of our On Demand Project Management Offering, so feel free to reach out!