Archivo de la categoría: Featured

Tech News Recap for the Week of 3/17/2014

 

In case you missed it: Here’s a quick recap of tech news and articles from the week of 3/17/2014!

Check out this whitepaper: The Business Savvy CIO

Are You Ready for a Project Management Office? Part 1 – Where to Start

By Nancy Mather, Director of Professional Services Operations, PMP

As modern IT continues to transform, so must traditional project management approaches and methodologies. Conversations have shifted away from a sole focus on technology toward an emphasis on business vision and outcomes, creating an additional layer of complexity as new stakeholders become involved in the process.

A Project Management Office (PMO) is a centralized group set up for the purpose of implementing project management expertise across an organization. At its best, a PMO benefits an organization by providing accountability, visibility, a sense of discipline, and ensuring that projects are completed successfully, within budget, and on time. At its worst, a PMO is viewed as a police force, roadblock, and layer of red tape that slows down progress while not providing any value.

How do you know if you need a PMO? When GreenPages made the decision to implement a PMO, it was a natural progression based on the size our project management team had reached and the number of projects coming in per year. We had reached a point where stratification of the team was necessary. The one-size-fits-all role of “Project Manager” was no longer effectively representing the varying levels of experience across the team. In addition, we had collected a significant amount of collateral, from templates to best practices, so for us the formation of a PMO was a natural next step in the evolution of the department.

One of the questions we often hear from our customers is: how do you create a PMO, and where do you start? I’m a big believer that it is important to start with the basics. Define your mission, vision, and goals. Formally defining the role of the PMO can be a challenge; however, defining where you want to go will help ensure you are on the right path to get there. Consider a value proposition for the PMO. It could be something as simple as projects delivered on time, on budget, and with higher quality.

Define your timelines with phases. After changes are made, take time to breathe and understand the effects of those changes. This will allow you to make refinements as needed. Define what effectiveness looks like and how it will be measured in the future. This is where the vision and goals come into play. Defining what you want to achieve will help you steer the course.

It’s also important to perform a gap analysis of where you are today against where you are trying to go. Look at the staff you already have and begin to think about the roles you envision for them under the PMO. It’s also important to think about who will manage the PMO, and whether there will be layers of management within it. The formation of a PMO can be an opportunity to create a management career path for those on the team who want it and are ready for it.

Develop a training program for the PMO. Consider a program that is tiered and on-going. At the onset, a focused training on tools and process is necessary.

Determine the new project funnel flow. Where will the projects the PMO is responsible for come from? Determine how many projects you believe each person can reasonably and effectively manage. Will you be able to control the flow of projects to that level? It’s critical to identify key metrics and watch closely for trends that affect your staffing needs. Weekly one-on-ones with staff are valuable for understanding what the team has on their plates and their current bandwidth.

Stay tuned for part 2: Are You Ready for a Project Management Office? Players and Pitfalls

 

Free Whitepaper: The Business Savvy CIO

vCOPS? vCAC? Where and When It Makes Sense to Use VMware Management Solutions

By Chris Ward, CTO

 

I’ve been having a lot of conversations recently, both internally and with customers, around management strategies and tools related to virtualized and cloud infrastructures.  There are many solutions out there and, as always, there is no one-size-fits-all silver bullet to solve every problem.  VMware in particular has several solutions in their Cloud Infrastructure Management (CIM) portfolio, but it can get confusing trying to figure out the use cases for each product and when it may be the right fit for your specific challenge. I just finished giving some training to our internal teams on this topic and thought it would be good to share it with the broader community.  I hope you find it helpful and know that we at GreenPages are happy to engage in more detailed conversations to help you make the best choices for your management challenges.

The core solutions that VMware has brought to market in the past few years include vCenter Operations Manager (vCOPS), vCloud Automation Center (vCAC), IT Business Management (ITBM), and Log Insight.  I’ll briefly cover each of these including what they do and where/when it makes sense to use them.

vCOPS

What is it?   vCOPS is actually a solution suite which is available in four editions: Foundation, Standard, Advanced, and Enterprise. 

The core component of all four editions is vCenter Operations Manager, which came from the acquisition of Integrien back in 2010 and is essentially a monitoring solution on steroids.  In addition to typical performance and health monitoring/alerting, the secret sauce of this tool is its ability to learn what ‘normal’ is for your specific environment and provide predictive analytics.  The tool will collect data from various virtual or physical systems (networking, storage, compute, etc.) and dynamically determine proper thresholds rather than following the typical ‘best practice’ model, thus reducing overall noise and false-positive alarms.  It can also proactively alert when a problem may arise in the future vs. simply alerting after a problem has occurred.  Finally, it does a great job analyzing VM sizing and assisting in capacity planning.  All of this is coupled with a very slick, highly customizable interface.
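The “learned normal” idea can be sketched in a few lines of Python. To be clear, this is not VMware’s actual analytics engine – just a toy illustration, with made-up utilization numbers, of dynamic thresholds derived from an environment’s own history versus a fixed ‘best practice’ limit:

```python
from statistics import mean, stdev

def dynamic_threshold(samples, k=3):
    """Learn 'normal' from recent history: flag values more than k standard
    deviations from the observed mean, instead of using a fixed static limit."""
    mu, sigma = mean(samples), stdev(samples)
    return mu - k * sigma, mu + k * sigma

# Hypothetical CPU utilization samples for one host (percent)
history = [22, 25, 19, 24, 21, 23, 20, 26, 24, 22]
low, high = dynamic_threshold(history)

def is_anomalous(value, low=low, high=high):
    return value < low or value > high

print(is_anomalous(24))   # within this host's learned normal range
print(is_anomalous(95))   # well outside it -> worth an alert
```

A static 80% threshold would have stayed silent on both values; the learned range for this particular host makes 95% stand out while 24% generates no noise.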

The Advanced and Enterprise editions of the suite also include vCenter Configuration Manager (vCM), vCenter Hyperic, vCenter Infrastructure Navigator (VIN), and vCenter Chargeback Manager (vCBM). 

vCM automates configuration and compliance management across virtual, physical, and cloud environments.  Essentially this means those pesky Windows registry key changes, Linux iptables settings, etc. can be automated and reported upon to ensure that your environment remains configured to the standards you have developed. 

Hyperic does at the application layer what vCOPS does for the underlying infrastructure.  It can monitor operating system, middleware, and application layers and provide automated workflows to resolve potential issues. 

VIN is a discovery tool used to create application dependency maps which are key when planning and designing security boundaries and disaster recovery solutions.

vCBM is utilized for showback or chargeback so that various lines of business can be accountable for IT resource utilization.

Where is it best utilized?

The vCOPS suites are best suited for environments that require robust monitoring and/or configuration management and that have fairly mature IT organizations capable of realizing the toolset’s full potential. 

vCAC

What is it?  Stemming from the acquisition of DynamicOps, this is primarily an automation/orchestration toolset designed to deploy and provision workloads and applications across multiple platforms, be they physical, virtual, or cloud based.  Additionally, vCAC provides a front-end service catalog enabling end-user IT self-service.  Like most VMware product sets, vCAC comes in multiple editions as well: Standard, Advanced, and Enterprise.  Standard edition provides the base automation toolsets, Advanced adds in the self-service catalog (the original DynamicOps feature set), and Enterprise adds in dynamic application provisioning (formerly vFabric AppDirector).

Where is it best utilized?

If you have a very dynamic environment, such as development or devops, then vCAC may well be the tool for you.  By utilizing automation and self-service, it can take the time required to provision workloads/applications/platforms from potentially days or weeks down to minutes.  If you have the issue of ‘shadow IT’ where end users are directly utilizing external services, such as Amazon, to bypass internal IT due to red tape, vCAC can help solve that problem by providing the speed and flexibility of AWS while also maintaining command and control internally.

ITBM

What is it?  Think of ITBM as more a CFO tool vs. a raw IT technology tool.  Its purpose is to provide financial management of large (millions of dollars) IT budgets by providing visibility into true costs and quality so that IT may be better aligned to the business.  It too comes in multiple editions including standard, advanced, and enterprise.  The standard edition provides visibility into VMware virtualized environments and can determine relative true cost per VM/workload/application.  Advanced adds the physical and non-VMware world into the equation and enterprise adds the quality component.

Where is it best utilized?

The standard edition of ITBM makes sense for most mid-market and larger customers who want or need to get a sense of the true cost of IT.  This is very important when considering any move to a public cloud environment, as you need to be able to truly compare costs.  I hear all the time that ‘cloud is cheaper’ but I have to ask ‘cheaper than what?’  If I ask you how much it costs to run workload X on your internal infrastructure per hour, week, or month, can you honestly give me an accurate answer?  In most cases the answer is no, and that’s exactly where ITBM comes into play.  On a side note, the standard edition of ITBM does require vCAC, so if you’re already looking at vCAC then it makes a lot of sense to also consider ITBM.
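To see why the ‘cheaper than what?’ question is hard to answer, consider the back-of-the-envelope math a tool like ITBM automates. All of the figures below are invented for illustration – the point is that a fully loaded internal cost per VM-hour only emerges once you amortize hardware, licensing, staff, and utilization:

```python
def internal_cost_per_vm_hour(capex_annualized, opex_annual, vm_count,
                              utilization=0.7, hours_per_year=8760):
    """Rough fully-loaded cost per effective VM-hour for an internal
    environment. Inputs are illustrative assumptions, not benchmarks."""
    total_annual = capex_annualized + opex_annual
    effective_vm_hours = vm_count * hours_per_year * utilization
    return total_annual / effective_vm_hours

# Hypothetical numbers: $300k/yr amortized hardware + licensing,
# $200k/yr staff/power/facilities, 250 VMs at 70% average utilization
cost = internal_cost_per_vm_hour(300_000, 200_000, 250)
print(f"${cost:.3f} per VM-hour")
```

Only with a number like this in hand can you meaningfully compare an internal workload against a public cloud provider’s hourly rate.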

Log Insight

What is it?  Simply stated, it’s a dumping ground for just about any type of log you can imagine, but with a Google-style flair.  It has a very nice indexing/search capability that can help make sense of insanely large amounts of log data from numerous sources, helping greatly with event correlation, troubleshooting, and auditing.

Where is it best utilized?

Any environment where log management is required and/or where enhanced troubleshooting and root cause analysis are needed.  The licensing here is interesting because, unlike similar products, it is licensed per device rather than per terabyte of data, which can potentially provide huge cost savings.
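The core mechanism behind that indexing/search capability is an inverted index: a map from each term to the log lines containing it. This is a deliberately tiny sketch (Log Insight’s real engine is far richer), but it shows how intersecting term sets makes cross-source correlation fast:

```python
from collections import defaultdict

class LogIndex:
    """Toy inverted index: maps each lowercase term to the set of
    ingested line ids that contain it."""
    def __init__(self):
        self.index = defaultdict(set)
        self.lines = []

    def ingest(self, line):
        line_id = len(self.lines)
        self.lines.append(line)
        for term in line.lower().split():
            self.index[term].add(line_id)

    def search(self, *terms):
        # Intersect the id sets for every term, then return matching lines
        ids = set.intersection(*(self.index.get(t.lower(), set()) for t in terms))
        return [self.lines[i] for i in sorted(ids)]

idx = LogIndex()
idx.ingest("ERROR storage path down on host esx01")
idx.ingest("INFO vmotion completed for vm web01")
idx.ingest("ERROR network uplink flap on host esx01")
print(idx.search("error", "esx01"))  # correlate all errors for one host
```

Searching `error esx01` touches only two small sets rather than scanning every line, which is what keeps this approach workable at the “insanely large” volumes mentioned above.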

vSOM and vCloud Suites

vSOM (vSphere with Operations Management) is simply a bundle of traditional vSphere with vCOPS.  The editions here are a little confusing, as the Standard edition of vCOPS comes with every edition of vSOM.  The only difference between the vSOM editions is the underlying vSphere edition.

The vCloud Suite includes most of what I have described above, but again comes in our favorite three editions: Standard, Advanced, and Enterprise.   Basically, if you’re already looking at two or three a la carte solutions that are part of a vCloud Suite edition, then you’re better off looking at the suite.  You’ll get more value because the suites include multiple solutions, and the suites, along with vSOM, remain licensed per physical processor socket vs. by the number of VMs.

 

Leave a comment if you have any other questions or would like a more detailed answer. Again, GreenPages helps our customers make the right choices for their individual needs so reach out if you would like to set up some time to talk. Hope this was helpful!

 

Download this webinar recording to learn more about VMware’s Horizon Suite

Tech News Recap for the Week of 3/10/2014

 

In case you missed it: Here’s a quick recap of tech news and articles from the week of 3/10/2014!

 

 

To keep up with tech news and updates throughout the week, follow @GreenPagesIT on Twitter! If you’re looking for additional resources, here are some whitepapers:

Are We All Cloud Service Brokers Now? Part II

By John Dixon, Consulting Architect

In my last post, I discussed Cloud Service Brokers and some of their benefits after reading a couple of articles from Robin Meehan (Article 1 here and Article 2 here). In this post, I will break down some of Robin’s points and explain why I agree or disagree with each.

At the end of my last post, I was breaking down cloud arbitrage into three areas (run-time, deployment-time, plan-time). Credit to Robin for run-time and deployment-time arbitrage. I really like those terms, and I think they illuminate the conversation. So, run-time cloud arbitrage is really science fiction right now – this is where the CSB moves running workloads around on the fly to find the best benefit for the customer. I haven’t seen any technology (yet) that does this. However, VMware does deployment-time and run-time arbitrage with vMotion and Distributed Resource Scheduling – albeit in a single virtual datacenter, with individual VMs, and with a single policy objective to balance a cluster’s load across vSphere nodes. See Duncan Epping’s excellent write-up on DRS here. Even 10 years ago, this was not possible. 15 years ago, this was certainly science fiction. Now, it’s pretty common to have DRS enabled for all of your vSphere clusters.

A few of Robin’s points…

Point 1:
“The ability to migrate IT workloads dynamically (i.e. at run-time, not at deployment time) is something I sometimes see as a capability under the ‘cloud broker’ banner, but in my view it really just doesn’t make sense – at least not at the moment.”

I agree. Run-time cloud arbitrage and workload migration à la vMotion are not possible today in the cloud. Will they be possible within the next few years? Absolutely. I think it will first manifest itself in a VMware High Availability-like scenario. Again, see Duncan Epping’s fantastic deep-dive into HA. If cloud provider X drops off of the internet suddenly, then restart the resources and application at cloud provider Y (where cloud provider Y might even be your own datacenter). This is sometimes known as DR as a service, or DRaaS. And even now, there are some DRaaS solutions coming onto the market.

Point 2:
“The rate of innovation in the IaaS/PaaS/DaaS market is such that most of the other vendors are playing catch-up with AWS, as AWS continue to differentiate themselves from the following pack. This shows no sign of slowing down over the next couple of years – so the only way a migrated workload is going to work across multiple cloud vendors is if it only relies on the lowest common denominator functionality across the vendors, which is typically basic storage, virtualised compute and connectivity.”

I also agree: the rate of innovation in the market for cloud computing is rapid as specialization sets in at an industrial level. This also means that downward price pressures are enormous for vendors in the cloud space, even today as vendors vie for market share. As switching costs decrease (e.g., portability of applications increases), prices for IaaS will decrease even more. Now, wouldn’t you, as a customer, like to take advantage of this market behavior? Take into consideration that CSBs aggregate providers, but they also aggregate customer demand. If you believe this interpretation of the market for IaaS, then you’ll want to position yourself to take advantage of it by planning portability for your applications. A CSB can help you do this.

Point 3:
“The bottom line is that if you are going to architect your applications so they can run on any cloud service provider, then you can’t easily use any of the good bits and hence your value in migrating to a cloud solution is diminished. Not ruined, just reduced.”

Disagree. To take advantage of market behavior, customers should look to avoid using proprietary features of IaaS platforms because they compromise portability. Like we noted earlier, increased portability of applications means more flexibility to take advantage of market behavior that leads to decreasing prices.

This is where perspective on cloud becomes really important. For example, GreenPages has a customer with a great use case for commodity IaaS. They may deploy ~800 machines in a cluster at AWS for only a matter of hours to run a simulation or solve a problem. After the result is read, these machines are completely destroyed – even the data. So, it makes no difference to this customer where they do this work. AWS happens to be the convenient choice right now. Next quarter, it may be Azure; who knows? I’m absolutely certain that this customer sees more benefit in avoiding the use of proprietary features (a.k.a. the “good bits” of cloud) of a cloud provider than in using them.

What is your perspective on cloud?
• A means to improve time to market and agility
• A way to transform capex into opex
• Simply a management paradigm – you can have cloud anywhere, even internally as long as you have self-service and infinite resources
• An enabler for a new methodology like DevOps
• Simply a destination for applications

I think that a good perspective may include all of these things. Leave a comment and let me know your thoughts.

Interested in learning more? Download this free whitepaper ‘Cloud Management, Now!’

Are We All Cloud Service Brokers Now?

By John Dixon, Consulting Architect

 

Robin Meehan of Smart421 recently wrote a couple of great posts on cloud service brokers (CSBs) and the role that they play for consumers of cloud services. (http://smart421.wordpress.com/2014/02/24/were-mostly-all-cloud-services-brokers-now/ and http://smart421.wordpress.com/2014/02/25/cloud-brokerage-and-dynamic-it-workload-migration/). I’m going to write two blogs about the topic. The first will be a background on my views and interpretations around cloud service brokers. In the second post, I will break down some of Robin’s points and explain why I agree or disagree.

Essentially, a cloud broker offers consumers three key things that a single cloud provider does not (these are from the NIST definition of a Cloud Service Broker):

  • Intermediation
  • Aggregation
  • Arbitrage (run-time, deployment-time, plan-time)

My interpretation of these is as follows. We’ll use Amazon Web Services as the example IaaS cloud provider and GreenPages as the example of the cloud broker:

Intermediation. As a cloud broker, GreenPages sits between you, the consumer, and AWS. GreenPages and other CSBs do this so they can add value to the core AWS offering. Why? Billing and chargeback is a great example. A bill from AWS includes line-item charges for EC2, S3, and whichever other services you used during the past month – so you would be able to see that EC2 charges for January were $12,502.90 in total. GreenPages takes this bill and processes it so that you can get more granular information about your January spend. We would be able to show you:

  • Spend per application
  • Spend per environment (development, test, production)
  • Spend per tier (web, application, database)
  • Spend per resource (CPU, memory, storage, managed services)
  • Compare January 2014 to December, or even January 2013
  • Estimate the spend for February 2014

So, going directly to AWS, you’d be able to answer a question like, “how much did I spend in total for compute in January?”

And, going through GreenPages as a cloud broker, you’d be able to answer a question like, “how much did the development environment for Application X cost in January, and how does that compare with the spend in December?”

I think you’d agree that it is easier to wrap governance around the spend information from a cloud service broker rather than directly from AWS. This is just one of the advantages of using a CSB in front of a cloud provider – even if you’re like many customers out there and choose to use only one provider.
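Mechanically, this kind of chargeback processing boils down to tagging each raw charge and aggregating along whichever dimension the business asks about. The line items and labels below are entirely made up (a raw AWS bill does not carry app/environment tags natively – attributing them is part of the broker’s value-add):

```python
from collections import defaultdict

# Hypothetical line items after the broker attributes each charge
# to an application and environment
charges = [
    {"app": "AppX", "env": "development", "service": "EC2", "usd": 1200.50},
    {"app": "AppX", "env": "production",  "service": "EC2", "usd": 5400.00},
    {"app": "AppY", "env": "production",  "service": "S3",  "usd": 310.25},
    {"app": "AppX", "env": "development", "service": "S3",  "usd": 89.75},
]

def spend_by(charges, *keys):
    """Aggregate spend along any combination of dimensions."""
    totals = defaultdict(float)
    for c in charges:
        totals[tuple(c[k] for k in keys)] += c["usd"]
    return dict(totals)

print(spend_by(charges, "app"))          # spend per application
print(spend_by(charges, "app", "env"))   # e.g. AppX development vs production
```

The same four line items answer both the coarse question (“total EC2 spend?”) and the governance question (“what did AppX development cost?”) – the difference is purely which keys you aggregate by.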

Aggregation. As a CSB, GreenPages aggregates the offerings from many providers and provides a simple interface to provision resources to any of them. Whether you choose AWS, Terremark, Savvis, or even your internal vSphere environment, you’d use the same procedure to provision resources. On the provider side, CSBs also aggregate demand from consumers and are able to negotiate rates. Why is this important? A CSB can add value in three ways here:

1) By allowing you to compare the offerings of different providers – in terms of pricing, SLA guarantees, service credits, supported configurations, etc.

2) By placing a consistent approval framework in front of requests to any provider.

3) By using aggregated demand to negotiate special pricing and terms with providers – terms that may not be available to an individual consumer of cloud services

The approval framework is of course optional – if you wish, you could choose to allow any user to provision infrastructure to any provider. Either way, a CSB can establish a request management framework in front of “the cloud” and can, in turn, provide things like an audit trail of requests and approvals. Perhaps you want to raise an ITIL-style change whenever a cloud request is fulfilled? A CSB can integrate with existing systems like Remedy or ServiceNow for that.
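The provisioning-plus-approval pattern described above can be sketched as a thin abstraction layer. Provider names, rates, and the request shape here are invented for illustration – the point is the single interface in front of many backends, with an audit trail recorded for every request:

```python
class Provider:
    """Minimal common interface a broker might put in front of each backend."""
    def __init__(self, name, usd_per_vm_hour):
        self.name = name
        self.usd_per_vm_hour = usd_per_vm_hour  # for comparing offerings

    def provision(self, vm_spec):
        return f"{self.name}: provisioned {vm_spec}"

class Broker:
    def __init__(self, providers, require_approval=True):
        self.providers = {p.name: p for p in providers}
        self.require_approval = require_approval
        self.audit_trail = []  # every request is recorded, fulfilled or not

    def request(self, provider_name, vm_spec, approved=False):
        if self.require_approval and not approved:
            self.audit_trail.append(("DENIED", provider_name, vm_spec))
            return None
        result = self.providers[provider_name].provision(vm_spec)
        self.audit_trail.append(("PROVISIONED", provider_name, vm_spec))
        return result

broker = Broker([Provider("aws", 0.12), Provider("internal-vsphere", 0.09)])
broker.request("aws", "2vcpu-4gb")                       # blocked: no approval
print(broker.request("aws", "2vcpu-4gb", approved=True))
print(broker.audit_trail)
```

The same `request` call works against any registered backend, and the audit trail is exactly the hook where an ITIL-style change record or a ServiceNow ticket could be raised.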

Arbitrage. Robin Meehan has a follow-on post that alludes to cloud arbitrage and workload migration. Cloud arbitrage is somewhat science fiction at this time, but let’s look forward to the not-too-distant future.

First, what are arbitrage and cloud arbitrage? NIST describes cloud arbitrage as an arrangement in which the CSB has the flexibility to choose, on the customer’s behalf, where to best run the customer’s workload. In theory, the CSB would always be on the lookout for a beneficial arrangement, automatically migrate the workload, and likely capture the financial benefit of doing so. This is a little bit like currency arbitrage, where a financial institution looks for discrepancies in the market for various currencies and makes transactions to come up with a beneficial situation. If you’ve ever seen the late-night infomercials for forex.com, don’t believe the easy-money hype. You need vast sums of money and perfect market information (e.g., you’re pretty much a bank) to play in that game.

So, cloud arbitrage and “just plain currency arbitrage” are really only similar when it comes to identifying a good opportunity. This is where we break cloud arbitrage down into three areas:

  • Run-time arbitrage
  • Deployment-time arbitrage
  • Plan-time arbitrage
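Of the three, plan-time arbitrage is the least science-fictional, since it is simply an informed choice made before deployment. A minimal sketch, with an entirely invented provider catalog: pick the cheapest provider that still meets the workload’s SLA requirement.

```python
# Hypothetical provider catalog; rates and SLAs are invented for illustration
catalog = [
    {"provider": "cloud-a", "usd_per_hour": 0.14, "sla_uptime": 99.95},
    {"provider": "cloud-b", "usd_per_hour": 0.11, "sla_uptime": 99.90},
    {"provider": "cloud-c", "usd_per_hour": 0.09, "sla_uptime": 99.00},
]

def plan_time_arbitrage(catalog, min_sla):
    """Plan-time arbitrage: before deployment, choose the cheapest
    provider that still satisfies the workload's SLA requirement."""
    eligible = [c for c in catalog if c["sla_uptime"] >= min_sla]
    return min(eligible, key=lambda c: c["usd_per_hour"]) if eligible else None

choice = plan_time_arbitrage(catalog, min_sla=99.9)
print(choice["provider"])  # cheapest provider meeting a 99.9% uptime SLA
```

Deployment-time arbitrage applies the same selection at the moment of provisioning using live pricing, and run-time arbitrage would re-run it continuously against running workloads – which is exactly the part that remains science fiction today.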

In my next post, I will break down cloud arbitrage as well as go over some specific points Robin makes in his posts and offer my opinions on them.

 

To learn more about transforming your IT department into a broker of IT services, download this ebook

The Big Shift: From Cloud Skeptics & Magic Pills to ITaaS Nirvana

By Ron Dupler, CEO GreenPages Technology Solutions

Over the last 4-6 quarters, we have seen a significant market evolution, with our customers and the overall market moving from theorizing about cloud computing to defining strategies and plans to reap the benefits of cloud computing solutions and implement hybrid cloud models. In a short period of time we’ve seen IT thought leaders move from debating the reality and importance of cloud computing, to trying to understand how to most effectively grasp the benefits of cloud computing to improve organizational efficiency, velocity, and line of business empowerment. Today, we see the leading edge of the market aggressively rationalizing their application architectures and driving to hybrid cloud computing models.

Internally, we call this phenomenon The Big Shift. Let’s discuss what we know about The Big Shift. First, for all of the cloud skeptics reading this: it is an undeniable fact that corporate application workloads are moving from customer-owned architectures to public cloud computing platforms. RW Baird released an interesting report in Q4 of 2013 that included the following observations:

  • Corporate workloads are moving to the public cloud.
  • Much of the IT industry has been asleep at the wheel as Big Shift momentum has accelerated, in part because public cloud spending still represents a small portion of overall IT spend.
  • Traditional IT spending is growing in the low single digits. 2-3% per year is a good approximation.
  • Cloud spending is growing at 40% plus per year.
  • What we call The Big Shift is accelerating and is going to have a tremendous impact on the traditional IT industry in the coming years. For every $1.00 increase in public cloud spending, there is a corresponding $3.00-$4.00 decrease in customer-owned IT spend.

There are some other things we know about The Big Shift:

The Big Shift is disrupting old industry paradigms and governance models. We see market evidence of this in traditional IT industry powerhouses like HP and Dell struggling to adapt, reinvent themselves, and maintain relevance and dominance in the new ITaaS era. We even saw perennial powerhouse Cisco lower its five-year growth forecast during last calendar Q4 due to the forces at play in the market. In short, the Big Shift is driving disruption throughout the entire IT supply chain. Companies tied to the traditional, customer-owned IT world are finding themselves under financial pressure and are struggling to adapt. Born-in-the-cloud companies like Amazon are seeing tremendous and accelerating growth as the market embraces ITaaS.

In corporate America, the Big Shift is causing inertia as corporate IT leaders and their staffs reassess their IT strategies and strive to determine how best to execute their IT initiatives in the context of the tremendous market change going on around them. We see many clients who understand the need to drive to an ITaaS model and embrace hybrid cloud architectures but do not know how best to attack that challenge and prepare to manage in a hybrid cloud world. This lack of clarity is causing delays in decision making and stalling important IT initiatives.

Let’s discuss cloud for a bit. Cloud computing is a big topic that elicits emotional reactions. Cloud-speak is pervasive in our industry. By this point, the vast majority of your IT partners and vendors are couching their solutions as cloud, or as-a-service, solutions. Some folks in the industry are bold enough to tell you that they have the magic cloud pill that will lead you to ITaaS nirvana. Due to this, many IT professionals that I speak with are sick of talking about cloud and shy away from the topic. My belief is that this avoidance is counterproductive and driven by cloud pervasiveness, lack of precision and clarity when discussing cloud, and the change pressure the cloud revolution is imposing on all professional technologists. The age old mandate to embrace change or die has never been more relevant. Therefore, we feel it is imperative to tackle the cloud discussion head on.

Download our free whitepaper “Cloud Management, Now!”

Let me take a stab at clarifying the cloud discussion. Figure 1 below represents the Big Shift. As noted above, it is undeniable that workloads are shifting from private, customer owned IT architectures, to public, customer rented platforms, i.e. the public cloud. We see three vectors of change in the industry that are defining the cloud revolution.

Cloud Change Vectors

The first vector is the modernization of legacy, customer-owned architectures. The dominant theme here over the past 5-7 years has been the virtualization of the compute layer, and the dominant player during this wave of transformation has been VMware. The first wave of virtualization has slowed in the past 4-6 quarters as the compute virtualization market has matured and the vast majority of x86 workloads have been virtualized. A second wave is just forming that will be every bit as powerful and important as the first. This wave is represented by new, advanced forms of virtualization and the continued abstraction of more complex components of traditional IT infrastructure: networking, storage, and ultimately entire datacenters, as we move to a world of software-defined datacenters (SDDC) in the coming years.

The second vector of change in the cloud era involves deploying automation, orchestration, and service catalogues to enable private cloud computing environments for internal users and lines of business. Private cloud environments are the industry and corporate IT’s reaction to the public cloud providers’ ability to provide faster, cheaper, better service levels to corporate end users and lines of business. In short, the private cloud change vector is driven by the fact that internal IT now has competition. Their end users and lines of business, development teams in particular, have new service level expectations based on their consumer experiences and their ability to get fast, cheap, commodity compute from the likes of Amazon. To compete, corporate IT staffs must enable self-service functionality for their lines of business and development teams by deploying advanced management tools that provide automation, orchestration, and service catalogue functionality.

The third vector of change in the cloud era involves tying the inevitable blend of private, customer-owned architectures together with the public cloud platforms in use today at most companies. The result is a true hybrid cloud architectural model that can be managed in a way that preserves the still-valid command-and-control mandates of traditional corporate IT while balancing them with the end-user empowerment and velocity expected in today’s cloud world.

In the context of these three change vectors, we see several approaches within our customer base. We see some customers taking a “boil the ocean” approach, striving to rationalize their entire application portfolios to determine best execution venues and define a path to a true hybrid cloud architecture. We see other customers taking a much more cautious approach, leveraging cloud-based point solutions like desktop and disaster recovery as-a-service to solve old business problems in new ways. Both approaches are valid and depend on use cases, budgets, and philosophical approach (aggressive, leading-edge versus conservative, follow-the-market thinking).

GreenPages’ business strategy in the context of the ITaaS and cloud revolution is simple. We have built an organization that has the people, process, and technologies to provide expert strategic guidance and proven cloud-era solutions for our clients through a historic inflection point in the way information technology is delivered to corporate end users and lines of business. Our cloud management as a service offering (CMaaS) provides a technology platform that helps customers integrate the disparate management tools deployed in their environments and federate alerts through an enterprise command center approach that gives a singular view into physical, virtual, and public cloud workloads. CMaaS also provides cloud service brokerage and governance capabilities, allowing our customers to view price-performance analytics across private and public cloud environments, design service models and view the related bills of materials, and view and consolidate billing across multiple public cloud providers. What are your thoughts on the Big Shift? How is your organization addressing the changes in the IT landscape?

Don’t Be a Michael Scott – Embrace Change in IT

By Ben Stephenson, Journey to the Cloud

 

One of the biggest impediments to the adoption of new technologies is resistance to change. Many IT departments are entrenched and content in the way they currently run IT. But as the technology industry continues to embrace IT-as-a-Service, IT departments must be receptive to change if they want to stay competitive.

I’m a big fan of the TV show The Office. In my opinion, it’s the second funniest series behind Seinfeld (and it’s a very close second). Dunder Mifflin Scranton Regional Manager Michael Scott is a quintessential example of a decision maker who’s against the adoption of new technologies because of fear, a lack of understanding, and downright stubbornness.  

In the “Dunder Mifflin Infinity” episode in Season Four, the young, newly promoted hot-shot exec (and former intern) Ryan Howard returns to the Scranton branch to reveal his plan on how he’s going to use technology to revitalize the company. Part of his plan is the rollout of a new website that will allow Dunder Mifflin to be more agile and allow customers to make purchases online. Michael and his loyal sidekick (and part-time beet farmer) Dwight Schrute are staunchly opposed to this idea.

At this point in the episode Michael is against Ryan’s idea of leveraging technology to improve the business process out of pure stubbornness. Michael hasn’t heard Ryan’s strategy or thought out the pros and cons of leveraging technology to improve business processes. His mindset is simply “How can this new technology possibly be better than the way we have always done things?”

Maybe your company has always bought infrastructure and run it in house—so why change now? Well, running a hybrid cloud environment can provide better service to your end users and also contribute to cost savings. Regardless if you act or not, it’s something you need to keep an open mind about and look into closely. Dismissing the concept immediately isn’t going to do you any good.

Creed Bratton is the oldest employee in the Scranton office. After hearing Ryan’s announcement about implementing new technologies, Creed gets extremely worried that he’s going to get squeezed out of his job. He goes to Michael and shares his concerns that both their jobs may be in jeopardy. At this point, Michael is now against the adoption of technology due to a lack of understanding. Ryan’s plan is to retrain his employees so that they have the knowledge and skillset to leverage new technologies to improve the business—not to use it as a means to downsize the workforce.

This is similar to the fear that cloud computing will cause widespread layoffs of IT workers. This is not necessarily the case. It’s not about reducing jobs; it’s about retraining current employees to take on new roles within the department.

Ryan claims that the new website is going to significantly increase sales. Michael and Dwight set out on a road trip to win back several key customers whose accounts they have recently lost to competitors to prove to Ryan that they don’t need a website. Their strategy? Personally deliver fruit baskets. Each customer ends up turning them down because the vendors they are currently using have websites and offer lower prices.

In this case, Dunder Mifflin’s lack of IT innovation is directly affecting its bottom line. The company is making it an easy decision for customers to leave because it simply isn’t keeping pace with the competition. As a modern-day IT department, you need to be leveraging technologies that let people do their jobs more easily and in turn reduce costs for the organization. For example, with a SaaS-based marketing automation tool (e.g., HubSpot), your marketing team can automate workflows and spend more time generating leads for the sales team to drive revenue. By using Amazon or another IaaS platform, you can buy only the capacity you actually need, saving on infrastructure hardware capital and maintenance costs. For workloads that make more sense running on-prem, creating a private cloud environment with a service catalog can streamline performance and give users the ability to choose and instantly receive the IT services they need.

At the end of the episode, an enraged Michael and Dwight head back to the office. On the way, Michael’s GPS instructs him to take a right-hand turn. Dwight looks at the screen and tells Michael that it’s saying to bear right around the bend, but Michael, trusting the machine, takes the sharp right…directly into a lake. Dwight shouts that he’s trained for this moment and jumps into the two feet of water to valiantly save Michael. When they get back to the office, Michael announces, “I drove my car into a [bleep] lake. Why, you may ask, did I do this? Well, because of a machine. A machine told me to drive into a lake. And I did it! I did it because I trusted Ryan’s precious technology, and look where it got me.” At this point, Michael is resisting technology because of fear.

In today’s changing IT landscape, embarking on new IT initiatives can be scary. There are risks involved, and there are going to be bumps along the way. (Full disclosure: Ryan ends up getting arrested later in the season for fraud after placing orders multiple times in the system—but you get the idea.) But at the end of the day, the change now taking place in IT is inevitable. To be successful, you need to carefully and strategically plan out projects and make sure you have the skillsets to get the job done properly (or use a partner like GreenPages to help). The risk of adopting new technologies is nothing compared to the risk of doing nothing and being left behind. Leave a comment and share how your organization is dealing with the changing IT landscape…or let me know what your favorite Office episode is…

If you’d like to talk more about how GreenPages can help with your IT transformation strategy, fill out this form!

 

 

What IT Can Learn From Sochi

 

By Ben Stephenson, Journey to the Cloud

It’s no secret that the Winter Olympics in Sochi has had its fair share of problems. From infrastructure issues, to handling incidents, to security, to amenities for athletes, it seems like anything that could go wrong has gone wrong. So, what can IT learn from what has unfolded at Sochi?

Have your infrastructure in place beforehand

There are plenty of examples from Sochi of the proper infrastructure not being in place before the games started. There was unfinished construction around the city: exposed wires, uncovered manholes, and buildings that weren’t complete. Many of the hotels were also unfinished. Some didn’t have working elevators, completed lobbies, or even running water (not to mention toilets that don’t flush). There’s a great picture circulating the web of an employee spray-painting the grass green outside an Olympic venue. Even the rings at the opening ceremony malfunctioned. There were also safety concerns about the infrastructure of some of the ski and snowboard courses. The women’s downhill training runs were halted after only three racers on the opening day because the course was deemed too dangerous: one of the jumps was too big and athletes were “getting too much air.” In addition, Shaun White pulled out of the slopestyle event over safety concerns.

Sochi Elevator

Sochi Bucket Lift

Sochi grass

 

The first takeaway for IT from Sochi is to have your infrastructure in place and running properly before starting new projects. If your organization is going to roll out a virtual desktop initiative, you had better take the proper steps beforehand to ensure a smooth rollout, or you’re going to have a lot of angry people to deal with. For example, you need the correct WAN bandwidth between offices as well as the right storage configuration for acceptable performance. You also need to ensure that your network infrastructure can handle the additional traffic. Finally, you need server infrastructure with the redundancy and horsepower necessary to deliver virtual desktops.
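To make the sizing point concrete, here is a back-of-the-envelope sketch of the kind of WAN check worth doing before a desktop rollout. All of the numbers (per-session bandwidth, concurrency, headroom) are illustrative assumptions, not vendor guidance:

```python
# Back-of-the-envelope WAN sizing for a virtual desktop rollout.
# All figures below are illustrative assumptions.

def required_wan_mbps(users,
                      kbps_per_session=250,   # assumed average per desktop session
                      concurrency=0.8,        # assumed share of users active at peak
                      headroom=1.3):          # safety margin over the raw estimate
    """Estimate peak WAN bandwidth (Mbps) needed for a branch office."""
    peak_kbps = users * concurrency * kbps_per_session
    return round(peak_kbps * headroom / 1000, 1)

if __name__ == "__main__":
    # Hypothetical branch offices and user counts.
    for office, users in {"Branch A": 120, "Branch B": 45}.items():
        print(f"{office}: ~{required_wan_mbps(users)} Mbps at peak")
```

If the circuit you have today is smaller than the estimate, that conversation is far cheaper to have before the rollout than after.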

Make sure you have a way of handling incidents as they arise

There are always going to be unexpected circumstances during the course of an event or project that have the potential to throw you off. For example, there was a pillow shortage for Olympic athletes in Sochi. The following message went out to surrounding communities:

“ATTENTION, DEAR COLLEAGUES! Due to an extreme shortage of pillows for athletes who unexpectedly arrived at Olympic Village in the mountains, there will be a transfer of pillows from all apartments to the storehouse on 2 February 2014. Please be understanding. We have to help the athletes out of this bind.”

I’m not going to pretend like I know what the plan was ahead of time to deal with supply shortages, but I’m going to go out on a limb and guess it wasn’t to borrow used pillows from strangers.

Sochi Pillow

IT needs to have detailed plans in place BEFORE starting a project so there is a protocol for dealing with unexpected issues as they arise. For example, a few months back GreenPages moved its datacenter. Our team put together an extremely detailed plan that broke every phase of the move down into 15-minute increments. They assigned teams to specific phases, established a communication plan for each team, and devised a backup emergency plan in case they hit any issues the night of the move. This detailed planning for various scenarios was a big reason the move ended up being a success.
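In the spirit of that 15-minute-increment runbook, here is a minimal sketch of how such a timeline can be generated rather than maintained by hand. The phase names and durations are hypothetical, not the actual GreenPages move plan:

```python
# Sketch: expand a cutover plan into 15-minute runbook slots.
# Phase names and durations below are hypothetical examples.
from datetime import datetime, timedelta

def build_runbook(start, phases):
    """Expand (phase_name, total_minutes) pairs into 15-minute time slots."""
    slots, t = [], start
    for name, minutes in phases:
        for _ in range(0, minutes, 15):
            slots.append((t.strftime("%H:%M"), name))
            t += timedelta(minutes=15)
    return slots

plan = [("Shut down VMs", 30),
        ("Physical move", 60),
        ("Power-on & validate", 45)]

for slot, task in build_runbook(datetime(2014, 3, 1, 22, 0), plan):
    print(slot, task)
```

The value isn’t the script itself; it’s that every 15-minute slot has a named task and, by extension, a team responsible for it.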

Have proper security measures in place

Another picture circulating the web was taken by a journalist who returned to her hotel room to find keys in her door and the door wide open…even though she had left the room shut and locked. There were also reports that visitors in Sochi faced widespread hacking on their mobile devices. IT departments need to make sure the proper security measures are in place for their end users to protect corporate data. This includes implementing authentication and encryption, deploying intrusion detection technologies, and scanning for viruses at the edge.

Sochi door lock

 

When dealing with top talent, make sure they have the tools to get their jobs done & stay happy

Olympic athletes certainly qualify as top talent; they represent the best of the best at their crafts in the entire world. When dealing with top talent, you need to make sure they have the tools to get their jobs done and to stay happy. The yellow-colored tap water in Sochi is probably not all that appealing to world-class athletes looking to quench their thirst after a long day on the mountain. I can’t imagine that the small bathroom with multiple toilets but no stalls or dividers goes over very well either.

Sochi Drinking Water

sochi toilets

 

In the business world, it’s important to retain top talent. IT can help keep employees happy and enable them to do their jobs in a variety of ways. One example is to make sure you’re offering the applications that people actually use and want. Another example is empowering employees to use the devices of their choice by implementing a BYOD policy.

Conclusion

Take these lessons from this year’s Winter Olympics in Sochi and apply them to your IT strategy and maybe one day you too can win your very own shiny gold medal.

 

If you would like to learn more about how GreenPages can help you with your IT operations, fill out this form!

 

Photo credit: http://bleacherreport.com/articles/1952496-the-20-biggest-sochiproblems