
Cloud Management, Business Continuity & Other 2013 Accomplishments

By Matt Mock, IT Director

It was a very busy year at GreenPages for our internal IT department. With 2013 coming to a close, I wanted to highlight some of the major projects we worked on over the course of the year. The four biggest projects we tackled were using a cloud management solution, improving our business continuity plan, moving our datacenter, and creating and implementing a BYOD policy.

Cloud Management as a Service

GreenPages now offers a Cloud Management as a Service (CMaaS) solution to our clients. We implemented the solution internally late last year, but really started utilizing it as a customer would this year by increasing what was being monitored and managed. We decided to put Exchange under the “Fully Managed” package of CMaaS. Exchange requires a lot of attention and effort, and instead of hiring a full-time Exchange admin, we were able to offload that piece with CMaaS: our Managed Services team does all the health checks to make sure any new configuration changes are correct. This resulted in considerable cost savings.

Having 24/7 access to the team is a colossal luxury. Before CMaaS, if an issue popped up at 3 in the morning we would find out about it the next morning and have to try to fix it during business hours. I don’t think I need to explain to anyone the hassle of fixing an issue with frustrated coworkers who are unable to do their jobs. Now, if an issue arises in the middle of the night, it has already been fixed before anyone shows up to start working. The Managed Services team also researches and remediates bugs that come up. This happened to us when we ran into some issues with Apple iOS calendaring: the team did the research to determine the cause and went in and fixed the problem. If my team had tried to do this, it would have wasted 2-3 days. Instead, we could focus on some of our other strategic projects. In fact, we are holding a webinar on December 19th that will cover the strategies and benefits of being the ‘first-to-know,’ and we will also provide a demo of the CMaaS Enterprise Command Center.

We also went live with fully automated patching, which requires zero intervention from my team, and we leveraged CMaaS to spin up a fully managed Linux environment. It’s safe to say that if we hadn’t implemented CMaaS, we would not have been able to accomplish all of our strategic goals for this year.

{Download this free whitepaper to learn more about how organizations can revolutionize the way they manage hybrid cloud environments}

Business Continuity Plan

We also determined that we needed to upgrade our disaster recovery plan to a true, robust business continuity plan. A main driver was our increasingly distributed office model. Not only were more people working remotely as our workforce expanded, but we now have office locations up and down the East Coast in Kittery, Boston, Attleboro, New York City, Atlanta, and Tampa. We needed to ensure that we could continue to provide top-quality service to our customers if an event were to occur. My team took a careful look at our then-current infrastructure setup. After examining our policies and plans, we generated new ones around the optimal outcome we wanted and then adjusted the infrastructure to match. A large part of this included changing providers for our data and voice, which included moving our datacenter.

Datacenter Move

In 2013 we wanted more robust datacenter facilities. Ultimately, we were able to get into an extremely redundant and secure datacenter at the Markley Group in Boston that also provided us with cost savings. Furthermore, Markley is a large carrier hotel, which gives us additional savings on circuit costs. With this move, we’re better able to deliver to our customers 24/7. Another benefit our new datacenter offered was excess office space: if there ever were an event at one of our GreenPages locations, we would have a place to send people to work. I recently wrote a post that describes the datacenter move in more detail.

BYOD Policy

As 2013 ends, we are finishing our first full year with our BYOD policy. We are taking this time to look back, see whether there were any issues with the policies or procedures, and adjust for the next year. Our plan is to ensure that year two is even more streamlined. I answered questions in a recent Q&A explaining our BYOD initiative in more detail.

I’m pretty happy looking back at the work we accomplished in 2013. As with any year, there were bumps along the way and things we didn’t get to that we wanted to. All in all though, we accomplished some very strategic projects that have set us up for success in the future. I think that we will start out 2014 with increased employee satisfaction, increased productivity of our IT department, and of course noticeable cost savings. Here’s to a successful 2014!

Is your IT team the first-to-know when an IT outage happens? Or, do you find out about it from your end users? Is your expert IT staff stretched thin doing first-level incident support? Could they be working on strategic IT projects that generate revenue? Register for our upcoming webinar to learn more!

 

Why Automate? What to Automate? How to Automate?

By John Dixon, Consulting Architect

Automation is extremely beneficial to organizations. However, the questions often come up around why to automate, what to automate, and how to automate.

Why automate?

There are several key benefits surrounding automation. They include:

  • Saving time
  • Freeing employees to focus on other (hopefully more strategic) tasks
  • Reducing errors by removing human intervention
  • Improving troubleshooting and support, since everything is deployed the same way

What to automate?

Organizations should always start with the voice of the customer (VoC). IT departments need to factor in what the end user wants and expects in order to improve their experience. If you can’t trace something you’re automating back to an improved customer experience, that’s usually a good warning sign that you should not be automating it. In addition, you need to be able to trace how automation has benefited the organization; the benefit should always be measurable, and always financial.

What are companies automating?

Request management is the hot one, because it’s a major component of cloud computing. This includes service catalogs and self-service portals. Providing a self-service portal, sending the request for approval based on the dollar amount requested, and fulfilling the order through one or more systems is something that is commonly automated today. My advice here is to automate tasks through a general-purpose orchestrator tool (such as CA Process Automation or similar tools) so that automated jobs can be managed from a single console, instead of stitching together disparate systems that call each other in a “rat’s nest” of automation. A general-purpose orchestrator also allows for easier troubleshooting when an automated task does not complete successfully.
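
The “single console” idea can be sketched in a few lines of Python. This is purely illustrative: the class and step names are invented, not part of CA Process Automation or any real product. The point is that every automated job runs its reusable steps through one orchestrator, which records success or failure in one log.

```python
# Toy orchestrator: jobs are lists of reusable steps, and every run is
# logged in one place, so a failed step shows up in a single console
# instead of somewhere in a "rat's nest" of systems calling each other.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    log: list = field(default_factory=list)

    def run(self, job_name: str,
            steps: list[tuple[str, Callable[[dict], None]]],
            context: dict) -> bool:
        for name, step in steps:
            try:
                step(context)
                self.log.append((job_name, name, "ok"))
            except Exception as exc:
                # Stop the job; the log shows exactly where it failed.
                self.log.append((job_name, name, f"failed: {exc}"))
                return False
        return True

# Reusable steps (stubs standing in for calls to real systems)
def check_approval(ctx):
    if ctx["amount"] > ctx["approval_limit"]:
        raise RuntimeError("needs manager approval")

def provision(ctx):
    ctx["provisioned"] = True

orch = Orchestrator()
ok = orch.run("self-service request",
              [("approval", check_approval), ("provision", provision)],
              {"amount": 500, "approval_limit": 1000})
```

Because the orchestrator owns the log, troubleshooting a failed job means reading one record rather than tracing calls across disparate systems.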

How to automate?

There are some things to consider when sitting down to automate a task, or even determining the best things to automate. Here are a few key points:

  1. Start with the VoC, or Voice of the Customer, and work backwards to identify the systems that are needed to automate a particular task. For example, maybe the customer is the Human Resources department, and they want to automate the onboarding of a new employee. The workflow may have to set up user accounts, order a new cell phone, order a new laptop, and put the new employee on their manager’s calendar for their first day of work. Map out the systems that are required to accomplish this, and integrate those – and no more. You may find that some parts of the procedure are already automated; perhaps your phone provider already has an interface to programmatically request new equipment. Take full advantage of these components.
  2. Don’t automate things that you can’t trace back to a benefit for the organization. Just because you can automate something doesn’t mean that you should. Again, use the voice of the customer and user stories here. A common user story is structured as follows:
    1. “As a [role],
    2. I want to [get something done]
    3. So that I can [benefit in the following way]”
  3. Start small and work upwards to automate more and more complex tasks. Remember the HR onboarding procedure in point #1? I wouldn’t suggest beginning your automation journey there. Pick out one thing to automate from the larger story, and get it working properly. Maybe you begin by automating the scheduling of an appointment in Outlook or your calendaring system, or creating a user in Active Directory. Those pieces become components in the HR onboarding story, and perhaps in other stories as well.
  4. Use a general purpose orchestrator instead of stitching together different systems. As in point #3, using an orchestrator will allow you to build reusable components that are useful to automate different tasks. A general purpose orchestrator also allows for easier troubleshooting when things go wrong, tracking of automation jobs in the environment, and more advanced conditional logic. Troubleshooting automation any other way can be very difficult.
  5. You’ll need someone with software development experience. Some automation packages claim that even non-developers can build robust automation with “no coding required.” In some cases, that may be true. However, the experience that a developer brings to the table is an absolute must have when automating complex tasks like the HR onboarding example in point #1.
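
As a rough sketch of the HR onboarding example above: build the small components first, then compose them into the larger story. Every function name here is a hypothetical stand-in for a real integration (Active Directory, a phone provider’s API, your calendaring system), not an actual API.

```python
# Small, individually testable components (stubs for real system calls)
def create_ad_user(name: str) -> dict:
    # Stand-in for creating an Active Directory account
    return {"account": name.lower().replace(" ", ".")}

def order_equipment(item: str) -> dict:
    # Stand-in for a provider's programmatic ordering interface
    return {"item": item, "status": "ordered"}

def schedule_meeting(owner: str, attendee: str, subject: str) -> dict:
    # Stand-in for scheduling on the manager's calendar
    return {"owner": owner, "attendee": attendee, "subject": subject}

def onboard_employee(name: str, manager: str) -> dict:
    # The larger HR story is just the already-working pieces, in order
    return {
        "user": create_ad_user(name),
        "laptop": order_equipment("laptop"),
        "phone": order_equipment("cell phone"),
        "day_one": schedule_meeting(manager, name, "First day welcome"),
    }

result = onboard_employee("Jane Doe", "Sam Smith")
```

The same components can then be reused by other stories, such as a contractor setup or an equipment refresh, which is the payoff of starting small.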

 

What has your organization automated? How have the results been?

 

Cloud Spending Will Increase 1 Billion% by 2014

By Ben Stephenson, Journey to the Cloud

It seems like every week a new study comes out analyzing cloud computing growth. Whether it’s that Public Cloud Services Spending will reach $47.4B in 2013, Global SaaS spending projected to grow from $13.5B in 2011 to $32.8B in 2016, the public cloud services market is forecast to grow 18.5 percent in 2013, or cloud spending at Dunder Mifflin will increase 200% by 2020, the indication is that cloud adoption and spending are on the rise. But how is that relevant to you?

Does it matter to the everyday CIO that cloud spending at midsized companies west of the Mississippi is going to increase by 15% over the next 3 years? The relevant question isn’t how much will cloud adoption and spending increase, but why will it do so? It’s the “why” that matters to the business. If you understand the why, it becomes easier to put context around the statistics coming out of these studies. It comes down to a shift in the industry – a shift in the economics of how a modern day business operates. This shift revolves around the way IT services are being delivered.

To figure out where the industry is going, and why spending and adoption are increasing, you need to look at where the industry has come from. The shift from on-premise IT to public cloud began with SaaS-based technologies. Companies like Salesforce.com realized that organizations were wasting a lot of time and money buying and deploying hardware for their CRM solutions. Why not use the internet to allow organizations to pay a subscription fee instead of owning their entire infrastructure? This, however, was not true cloud computing. Next came IaaS with Amazon’s EC2. Essentially, Amazon realized it had excess compute capacity and decided to rent it out to people who needed it. IaaS put an enormous amount of pressure on corporate IT because app dev teams no longer had to wait weeks or months to test and deploy environments. Instead, they could start right away and become much more efficient. Finally, PaaS came about with offerings such as Microsoft Azure.

{Free ebook: The Evolution of Your Corporate IT Department}

The old IT paradigm consists of organizations buying hardware and software and keeping it in their datacenters behind their own firewalls. A true private cloud environment goes further: while it doesn’t need to be fully virtualized, it does need to be automated, and very few organizations are actually operating in a true private cloud environment. Ideally, a true private cloud lets internal IT compete with public cloud providers by offering a similar amount of speed and agility. While the industry is starting to shift towards public cloud, the private cloud is not going away. Public cloud will not be the only way to operate IT, or even the majority of the way, for a long time. This brings us to the hybrid cloud computing model, the direct result of this shift. Hybrid cloud is the combination of private and public cloud architectures. It’s about the ability to seamlessly transition workloads between private and public, in other words, moving on-premise workloads to rented platforms, where you own nothing, in order to leverage services.

So why are companies shifting towards a hybrid cloud model? It all comes down to velocity, agility, efficiency, and elasticity. IT delivery methodology is no longer a technology discussion, but, rather, it’s become a business discussion. CIOs and CFOs are starting to scratch their heads wondering why so much money is being put towards purchasing hardware and software when all they are reading about is cloud this and cloud that.

{Free Whitepaper: Revolutionizing the Way Organizations Manage Hybrid Cloud Environments}

The spending and adoption rates of cloud computing are increasing because the shift in the industry is no longer just talk – it’s real and it’s here now. The bottom line? We’re past hypothetical discussions. There is a major shift in the industry that business decision makers need to be taking seriously. If you’re not modernizing your IT operations by moving towards a hybrid cloud model, you’re going to be missing out on the agility and cost savings that can give your organization a substantial competitive advantage.  This is why cloud adoption and spending are on the rise. This is why you’re seeing a new study every month on the topic.

Moving Our Datacenter: An IT Director’s Take

An Interview with Matt Mock, IT Director, GreenPages Technology Solutions

Journey to the Cloud’s Ben Stephenson sat down with GreenPages’ IT Director Matt Mock to discuss GreenPages’ recent datacenter move.

Ben: Why did GreenPages decide to move its datacenter?

Matt: Our contract was up, so we started evaluating new facilities, looking for a robust, redundant facility to house our equipment. We needed a facility that could meet specific objectives around our business continuity plan, and we were also looking for cost savings.

Ben: Where did you move the datacenter to and from?

Matt: Geographically, we stayed in a close area. We moved it from Charlestown, MA a couple of miles down the road into downtown Boston. Staying within a close area certainly made the physical move quicker and easier.

Ben: What were the benefits of moving the datacenter?

Matt: Ultimately, we were able to get into an extremely redundant and secure datacenter that provided us with cost savings. Furthermore, the datacenter is also a large carrier hotel, which gives us additional savings on circuit costs. With this move, we’re better able to deliver to our customers 24/7.

{Register for our upcoming webinar on 11/7 on key announcements from VMworld 2013}

Ben: Tell us about the process of the move. What had to happen ahead of time to ensure a smooth transition?

Matt: The most important parts were planning, testing, and communication. We put together an extremely detailed plan that broke every phase of the move down into 15-minute increments. We created teams for the specific phases, with a communication plan for each team. We also devised a backup emergency plan in case we hit any issues the night of the move.

Ben: What happened the night of the move?

Matt: The night of the move we leveraged the excellent facilities at Markley to be able to run a command center that was run by one of our project managers. In the room, we had multiple conference bridges to run the different work streams to ensure smooth and constant communication. We also utilized Huddle, our internal collaboration tool, to communicate as our internal systems were down during the move.

Ben: Anything else you had to factor in?

Matt: Absolutely. The same night of the move we were also changing both voice and data providers at three different locations, which added another layer of complexity. We had to work closely with our new providers to ensure a smooth transition. Because we have a 24/7 Managed Services division at GreenPages, we needed to continue to offer customers the same support during the move that we do on a day-to-day basis.

Ben: Did you experience unexpected events during the move? If so, what were they and how did you handle them?

Matt: With any complex IT project you’re going to experience unexpected events. A couple that we experienced were some hardware failures and unforeseen configuration issues. Fortunately, our detailed plan accounted for these issues, and we were able to address them with the teams on hand and remain on schedule.

Ben: You used an all GreenPages team to accomplish this, right?

Matt: Correct. We did not use any outside vendors for this move – all services were rendered by the GreenPages team. Last time we used outside providers and this time we had a much better experience. I’m in the unique position where I have access to an entire team of project managers and technical resources that made doing this possible. In fact, this is something we offer our customers (from consulting to project management to the actual move) so our team is very, very good at it.

Ben: What advice do you have for other IT Directors who are considering moving their datacenters?

Matt: Detailed planning and constant communication are critical: have a plan in place for every possible scenario, and have an emergency plan ready so that you’re not scrambling in the middle of the night over how to address unforeseen issues.

Ben: Congratulations on the successful move. See you Monday after the Patriots crush your Steelers.

Would you like to learn more about how GreenPages can help you with your datacenter needs?

Moving Email to the Cloud Part 2

By Chris Chesley, Solutions Architect

My last blog post was part 1 of moving your email to the cloud with Office 365. Here’s the next installment in the series, in which I will cover the three methods of authenticating your users for Office 365. This is a very important consideration and will have a large impact on your end users and their day-to-day activities.

The first method of authenticating your users into Office 365 is to do so directly. This has no ties to your Active Directory. The benefit here is that your users get mail, messages, and SharePoint access regardless of your site’s online status. The downside is that your users may have a different password than the one they use to log into their desktops/laptops, and this can get very messy if you have a large number of users.

The second way of authenticating your users is full Active Directory integration, which I will refer to as the “Single Sign-On” method. In this method, your Active Directory is the authoritative source of authentication for your users. Users log into their desktop/laptop and can access all of the Office 365 applications without typing their password again, which is convenient. You DO need a few servers running locally to make this happen: an Active Directory Federation Services (ADFS) server and an Azure Active Directory Sync server. Both of these services are needed to sync your AD and user information to Office 365. The con of this method is that you need a redundant AD setup, because if it’s down, your users will not be able to access mail or anything else in the cloud. You can achieve this redundancy by hosting a Domain Controller, along with the other two systems I mentioned, in a cloud or at another of your locations, if you have one.

The third option is what I will refer to as “Single Password.” In this setup, you install an Azure Active Directory Sync server in your environment but do not need an ADFS server. The Sync tool hashes your users’ passwords and sends the hashes to Office 365. When a user tries to access any of the Office 365 services, they are asked to type in their password. The password is then hashed and compared to the stored hash, and they are let in if the hashes match. This does require users to type their password again, but it allows them to use their existing Active Directory password, and anytime that password changes, it is synced to the cloud.
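
A toy Python sketch of that hash-and-compare flow may help. To be clear, the real Azure AD sync tool uses its own salted hashing scheme, and nothing below reflects its actual implementation; this only illustrates the general idea that a hash, never the plaintext password, is stored in the cloud.

```python
import hashlib
import hmac
import os

def sync_password(password: str):
    # What the sync server would send to the cloud service:
    # a salted hash of the password, never the plaintext itself.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def cloud_login(attempt: str, salt: bytes, stored: bytes) -> bool:
    # The cloud side hashes the typed password the same way and
    # compares it to the stored hash in constant time.
    digest = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored)

salt, stored = sync_password("correct horse battery staple")
assert cloud_login("correct horse battery staple", salt, stored)
assert not cloud_login("wrong password", salt, stored)
```

When the user changes their Active Directory password, the sync server simply recomputes and re-sends the hash, which is why the cloud password stays in step with the on-premise one.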

The choice of which method you use has a big impact on your users as well as how you manage them.  Knowing these choices and choosing one that meets your business goals will set you on the path of successfully moving your services to the cloud.

 

Download this free ebook on the evolution of the corporate IT department

 

My VMworld Breakout Session: Key Lessons Learned from Deploying a Private Cloud Service Catalog

By John Dixon, Consulting Architect, LogicsOne

 

Last month, I had the special privilege of co-presenting a breakout session at VMworld with our CTO Chris Ward. The session’s title was “Key Lessons Learned from Deploying a Private Cloud Service Catalog,” and we had a full house for it. Overall, the session went great and we had a lot of good questions. In fact, due to demand, we ended up giving the presentation twice.

In the session, Chris and I discussed a recent project we did for a financial services firm where we built a private cloud, front-ended by a service catalog. A service catalog really enables self-service; it is one component of corporate IT’s opportunity to partner with the business. In a service catalog, the IT department can publish the menu of services that it is willing to provide and (sometimes) the prices that it charges for those services. For example, we published a “deploy VM” service in the catalog, and the base offering was priced at $8.00 per day. Additional storage or memory beyond the base spec was available at an additional charge. When the customer requests “deploy VM,” the following happens:

  1. The system checks to see if there is capacity available on the system to accommodate the request
  2. The request is forwarded to the individual’s manager for approval
  3. The manager approves or denies the request
  4. The requestor is notified of the approval status
  5. The system fulfills the request – a new VM is deployed
  6. A change record and a new configuration item are created to document the new VM
  7. The system emails the requestor with the hostname, IP address, and login credentials for the new VM
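
The seven steps above can be sketched as one linear flow. The function and event names here are illustrative stand-ins; in the real project, each step was an integration with vCenter, Active Directory, a ticketing system, and so on.

```python
def handle_deploy_vm(request: dict, capacity_vms: int, approve) -> list:
    """Sketch of the request-approval-fulfillment flow for "deploy VM"."""
    events = []
    # 1. Capacity check
    if capacity_vms < 1:
        return events + ["rejected: no capacity"]
    events.append("capacity ok")
    # 2-4. Forward to the requestor's manager for approval
    if not approve(request):
        return events + ["denied by manager"]
    events.append("approved")
    # 5. Fulfillment: a new VM is deployed
    events.append("vm deployed")
    # 6. Documentation: change record and configuration item
    events.append("change record + CI created")
    # 7. Notify the requestor with connection details
    events.append(f"email sent to {request['requestor']}")
    return events

# Hypothetical approval rule: base offerings at or under $8.00/day
# are approved automatically; anything larger would be escalated.
auto_approve = lambda req: req["cost_per_day"] <= 8.00

events = handle_deploy_vm(
    {"requestor": "jdoe", "cost_per_day": 8.00},
    capacity_vms=10,
    approve=auto_approve)
```

Writing the flow down this way is essentially what documenting the workflow up front accomplished: it makes visible exactly which external systems each step must touch.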

This sounds fairly straightforward, and it is. Implementation is another matter, however. It turns out that we had to integrate with vCenter, Active Directory, the client’s ticketing system, the client’s CMDB, an approval system, and the provisioned OS in order to automate the fulfillment of this simple request. As you might guess, documenting this workflow up front was incredibly important to the project’s success. We documented the workflow and assessed it against the theoretical request-approval-fulfillment paradigm to identify the systems we needed to integrate. One of the main points that Chris and I made at VMworld was to build this automation incrementally instead of tackling it all at once. That is, just get the automation suite to talk to vCenter before tying in AD, the ticketing system, and all the rest.

Download this on-demand webinar to learn more about how you can securely enable BYOD with VMware’s Horizon Suite

Self-service, automation, and orchestration all drove real value during this deployment. We were able to eliminate or reduce at least three manual handoffs via this single workflow. Previously, these handoffs were made either by phone or through the client’s ticketing system.

During the presentation we also addressed which systems we integrated, which procedures we selected to automate, and what we plan to have the client automate next. You can check out the actual VMworld presentation here. (If you’re looking for more information around VMworld in general, Chris wrote a recap blog of Pat Gelsinger’s opening keynote as well as one on Carl Eschenbach’s General Session.)

Below are some of the questions we got from the audience:

Q: Did the organization have ITSM knowledge beforehand?

A: The group had very limited knowledge of ITSM, but left our project with a real-world perspective on ITIL and ITSM.

Q: What did we do if we needed a certain system in place to automate something?

A: We did encounter this, and we either labeled it as a risk or used “biomation” (self-service is available, fulfillment is manual, and the customer doesn’t know the difference) until the necessary systems were made available.

Q: Were there any knowledge gaps at the client? If so, what were they?

A: Yes. Both a developer mentality and a service management mentality are needed to complete a service catalog project effectively. Traditional IT engineering and operations staff do not typically have a developer mentality or experience with languages like JavaScript.

Q: Who was the primary group at the client driving the project forward?

A: IT engineering and operations were involved with IT engineering driving most of the requirements.

Q: At which level was the project sponsored?

A: VP of IT Engineering with support from the CIO

All in all, it was a very cool experience to get the chance to present a breakout session at VMworld. If you have any other questions about key takeaways we got from this project, leave them in the comment section. As always, if you’d like more information you can contact us. I also just finished an ebook on “The Evolution of the Corporate IT Department” so be sure to check that out as well!

The Evolution of Your Corporate IT Department

By John Dixon, Consulting Architect, LogicsOne

 

Corporate IT departments have progressed from keepers of technology to providers of complex solutions that businesses truly rely on. Even a business with an especially strong core competency simply cannot compete without information systems to provide key technologies such as communication and collaboration systems (e.g., email). Many corporate IT departments have become adept providers of technology solutions. We at GreenPages think that corporate IT departments should be recognized as providers of services. We also think that emerging technology and management techniques are creating an especially competitive market of IT service providers. Professional business managers will no doubt recognize that their internal IT department is just one competitor in this market for IT services. Could the business choose to source its systems from a provider other than internal corporate IT?

IT departments large and small already have services deployed to the cloud. We think that organizations should prepare to deploy services to the cloud provider that meets their requirements most efficiently, and eventually, move services between providers to continually optimize the environment. As we’ll show, one of the first steps to enabling this Cloud Management is to use a tool that can manage resources in different environments as if they are running on the same platform. Corporate IT departments can prepare for cloud computing without taking the risk of moving infrastructure or changing any applications.

In this piece, I will describe the market for IT service providers, the progression of corporate IT departments from technology providers to brokers of IT services, and how organizations can take advantage of behavior emerging in the market for IT services. This is not a cookbook of how to build a private cloud for your company—this instead offers a perspective on how tools and management techniques, namely Cloud Management as a Service (CMaaS), can be adopted to take advantage of cloud computing, whatever it turns out to become. In the following pages, we’ll answer these questions:

  1. Why choose a single cloud provider? Why not position your IT department to take advantage of any of them?
  2. Why not manage your internal IT department as if it is already a cloud environment?
  3. Can your corporate IT department compete with a firm whose core competency is providing infrastructure?
  4. When should your company seriously evaluate an application for deployment to an external cloud service provider? Which applications are suitable to deploy to the cloud?

 

To finish reading, download John’s free ebook


How IT Operations is Like Auto Racing

By John Dixon, Consulting Architect, LogicsOne

 

If you’ve ever tried your hand at auto racing, as I did recently at Road Atlanta, you’ll know that putting up a great lap time is all about technique. If you’ve ever been to a racing school, you’ll also remember that being proactive and planning your corners is absolutely critical to driving safely. Let’s compare IT operations to auto racing. Everyone knows how to drive a car, essentially, just as every company essentially knows how to run IT. What separates a good driver from a great one? Technique, preparation, and knowing the capabilities of your driver and equipment.

 

The driver = your capabilities

The car = your technology

The track = your operations as the business changes

 

Preparation

Let’s spend a little time on “preparation.” As we all know, preparation time is often a luxury. From what I have seen consulting over the past few years, preparation is just not instilled in the culture of IT. But we’d all agree that more preparation leads to better outcomes (for almost everything, really). So, how do we get more preparation time? This is where the outsourcing trend gained momentum: outsource the small stuff to get time back for strategic projects. Well, this didn’t always work out very well, as typical outsourcing arrangements moved large chunks of IT to an outside provider. Why didn’t we move smaller chunks first? That’s what we do in auto racing – the reconnaissance lap! Now we have the technology and the arrangements to do a reconnaissance lap of sorts. For example, our Cloud Management as a Service (CMaaS) offering has this philosophy built in: we can manage certain parts of the infrastructure that you select and leave the others alone. Maybe you’d like to have your Exchange environment fully managed but not your SAP environment. We’ve built CMaaS with the flexible technology and arrangements to do just that.

Technique

 

Auto racing: Safety first! Check your equipment before heading out, and let the car warm up before increasing speed.
IT operations: Make sure your IT shop can perform as a partner with the business.

Auto racing: Know where to go slow! You can’t take every turn at full throttle. Even if you can, it’s worth it to “throw away” some corners in preparation for straight sections.
IT operations: Know where to allocate investment in IT – it’s all about producing results for the business.

Auto racing: First lap: reconnaissance (stay on the track).
IT operations: Avoid trying to tackle very complex problems with brand new technology (e.g., did you virtualize Exchange on your very first P2V?).

Auto racing: Last lap: cool down (stay on the track).
IT operations: An easy one: manage the lifecycle of your applications and middleware to avoid being caught by a surprise required upgrade.

Auto racing: Know where to go fast! You can be at full throttle without any brake or steering inputs (as in straight sections), so dig in!
IT operations: Recognize established techniques and technologies and use them to maximum advantage.

Auto racing: Smooth = fast. Never stab the throttle or the brakes! Sliding all over the track with abrupt steering and throttle inputs is not the fastest way (but it IS fun and looks cool).
IT operations: Build capabilities gradually and incrementally instead of looking to install a single technology to solve all problems today.

Auto racing: Know the capabilities of your car – brakes, tires, clutch, handling. Exceed the capabilities of your equipment and see what happens.
IT operations: Take the time to know your people, processes, and technology – which things work well and which could be improved? This depends greatly on your business, but there are some best practices for running a modern IT shop.

Auto racing: Improve your time with each lap.
IT operations: This is all about continuous improvement – many maneuvers in IT should be repeatable (like handling a trouble ticket), so do each one better every time.

Auto racing: Take a deep breath; check your gauges, check your harnesses, check your helmet.
IT operations: Monitoring is important, but it is not an endgame for most of us. Be aware of things that could go wrong, how you could mitigate risk, which workarounds you could implement, etc.

Auto racing: Carry momentum around the track. A high-horsepower car with a novice driver will always lose to a great driver in a sedan.
IT operations: Technology doesn’t solve everything. You need proper technique and preparation.

Auto racing: Learn from your mistakes – they aren’t the end of the world.
IT operations: With well-instrumented monitoring, performance blips or mistakes are opportunities to improve.

 

Capabilities

A word on capabilities. Capabilities are not something you simply install with software or infrastructure, just as an aspiring racecar driver can't obtain the capability to win a professional F1 race from a weekend class. You need assets (e.g., infrastructure, applications, data) and resources (e.g., dollars) to build capabilities. What exactly is a capability? In racing, it's the ability to get around a track, any track, quickly and safely. In IT, a basic example would be the ability to handle a helpdesk call and resolve the issue to completion. An advanced IT capability in a retail setting might be to produce a report on how frequently shoppers from a particular zip code purchase a certain product. Or, perhaps, it's an IT governance capability to understand the costs of providing a particular IT service. One thing I've seen in consulting with various shops is that organizations could do a better job of understanding their capabilities.
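The retail reporting capability mentioned above can be sketched in a few lines of Python. The purchase records here are fabricated for illustration; a real capability would sit on top of actual transaction data, processes, and people who know how to ask the question.

```python
# Illustrative sketch of the retail reporting capability described above:
# count how often shoppers from a given zip code bought a given product.
# The records below are made up for the example.
purchases = [
    {"zip": "03801", "product": "widget"},
    {"zip": "03801", "product": "widget"},
    {"zip": "02110", "product": "widget"},
    {"zip": "03801", "product": "gadget"},
]

def purchase_frequency(records, zip_code, product):
    """How many recorded purchases match both the zip code and the product."""
    return sum(
        1 for r in records
        if r["zip"] == zip_code and r["product"] == product
    )
```

The code is trivial; the capability is everything around it, such as clean data, a repeatable process, and someone who knows the business question is worth asking.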

Now picture yourself in the driver's seat (of your IT shop). Know your capabilities, but really think about your technique and about continuously improving your "lap times."

  1. Where are your straight sections – where you can just “floor it” and hang on? These might be well-established processes, projects, or tasks that pay obvious benefits. Can you take some time to create more straight sections?
  2. How much time do you have for preparation? How much time do you spend “studying the track” and “knowing your equipment?” Do you know your capabilities? Can you create time that you can use for preparation?
  3. Where are your slow sections, the processes that require careful attention to detail? This is probably budget planning time for many of us. Hiring is probably another slow section.
  4. Do you understand your capabilities? Defining the IT services that you provide your customer is a great place to start. If you haven’t done this yet, you should — especially if you’re looking at cloud computing. GreenPages and our partners have some well-established techniques to help you do this successfully.

 

As always, feel free to reach out if you’d like to have a conversation just to toss around some ideas on this topic.

 

Now for the fun part, a video that a classmate of mine recorded of a hot lap around Road Atlanta. The video begins in turn 11 (under the bridge in this video).

  1. Turn 11 is important because it is a setup to the front straight section. BUT, it is pretty dangerous too as it leads downhill to turn 12 (the entrance to the straight). Position the car under the RED box on the bridge and give a small amount of right steering input. Build speed down the hill.
  2. Clip the apex of turn 11 and pull the car into turn 12. Be gentle with turn 12 – upset the car over the gators and you could easily lose control.
  3. Under the second bridge and onto the front straight section. Grab 5th gear if you can. Up to ~110mph. Position the car out to the extreme left side of the track for turn 1.
  4. Show no mercy to the brakes for turn 1! Engage ABS, downshift, then trail brake into the right-hander; pull the car into the apex of the turn in 4th gear, carrying 70-80mph.
  5. Uphill for turn 2. Aim the nose of the car at the telephone pole in the distance, as turn 2 is blind. Easy on the throttle!
  6. Collect the apex at turn 2 and downhill for turn 3. Use a dab of brakes to adjust speed as you turn slight right for turn 3.
  7. Turn slight left for turn 4 and hug the inside.
  8. Track out and downhill for “the esses” – roll on the throttle easily, you’ve got to keep momentum for the uphill section at turn 5.
  9. The esses are a fast part of the track, but be careful not to upset the car.
  10. Brake slightly uphill for turn 5. It is the entrance to a short straight section where you can gain some speed.
  11. Stay in 4th gear for turn 6 and bring the car to the inside of the turn.
  12. Track way out to the left for the crucial turn 7 – a slow part of the track. Brake hard and downshift to third gear. Get this one right as it is the entrance to the back straight section.
  13. Build speed on the straight – now is the time to floor it!
  14. Grab 5th gear midway down the straight for 110+ mph. Take a deep breath! Check your gauges and harnesses.
  15. No mercy for the brakes at turn 10a! Downshift to 4th gear, then to 3rd gear, and trail brake as you turn left.
  16. Slight right turn for turn 10b and head back uphill to the bridge – position the car under the RED box and take another lap!

 

Moving Email to the Cloud, Part 1

By Chris Chesley, Solutions Architect

Many of our clients are choosing not to manage Exchange day to day or to upgrade it every 3-5 years. They do this by having Microsoft host their mail in Office 365. Is this right for your business? How do you tie this into your existing infrastructure and still have access to email regardless of the status of your onsite services?

The different plans for Microsoft Office 365 can be confusing. Regardless of which plan you get, the Exchange Online choices boil down to two options. Exchange Plan 1 offers 50GB mailboxes per user, ActiveSync, Outlook Web Access, calendar, and all of the other features you currently get with an on-premises Exchange implementation. You also get antivirus and antispam protection. All of this for $4 per user per month.

Exchange Plan 2 offers the exact same features as Plan 1, with the additions of unlimited archiving, legal hold capabilities, compliance support tools, and advanced voice support. This plan is $8 per user per month.

All of the other Office 365 plans that include Exchange use either Plan 1 or Plan 2. For example, the E3 plan (Enterprise Plan 3) includes Exchange Plan 2, SharePoint Plan 2, Lync Plan 2, and Office Professional Plus for 5 devices per user. You can take any plan and break it down into its component parts to fully understand what you're getting.
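The "break any bundle down into its components" idea can be sketched as a small lookup. Compositions and prices below mirror the figures quoted in this 2013-era post (Exchange Plan 1 at $4, Plan 2 at $8, E3 bundling the Plan 2 tiers); current Microsoft bundles and pricing differ, so treat this strictly as an illustration.

```python
# Sketch of decomposing bundled Office 365 plans into component service plans,
# using the 2013-era figures quoted in this post. Not current pricing.
EXCHANGE_PRICES = {"Exchange Plan 1": 4, "Exchange Plan 2": 8}  # $/user/month

BUNDLES = {
    "Exchange Plan 1": ["Exchange Plan 1"],
    "Exchange Plan 2": ["Exchange Plan 2"],
    "E3": ["Exchange Plan 2", "SharePoint Plan 2", "Lync Plan 2",
           "Office Professional Plus"],
}

def exchange_component(bundle):
    """Return the Exchange plan a bundle resolves to, or None if it has none."""
    return next((c for c in BUNDLES[bundle] if c.startswith("Exchange")), None)

def exchange_monthly_cost(bundle, users):
    """Per-month Exchange cost for a user count, at the post's quoted prices."""
    return EXCHANGE_PRICES.get(exchange_component(bundle), 0) * users
```

Decomposing a quote this way makes it easy to compare a standalone Exchange plan against a bundle: you can see exactly which Exchange tier each option resolves to before comparing the extras.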

If you are looking to move email to the cloud and are currently using Exchange, who better to host your Exchange than Microsoft?  Office 365 is an even better choice if you are using, or plan on using, SharePoint or Lync.  All of these technologies are available in the current plans or individually through Office 365.

I’ve helped many clients make this transition so if you have any questions or if there’s any confusion around the Office 365 plans feel free to reach out.

My next blog will be on the 3 different authentication methods in Office 365.

Journey to the Cloud: An Insider’s Perspective

By Ben Stephenson, Journey to the Cloud

Our Journey to the Cloud blog has been live for a little over two years now, and I’ve had the privilege of running and managing it from the start. I wanted to touch base about the site, share my unique perspective from managing it, and hear from our readers about what we can do to make it even better.

Our goal from the very beginning was to establish ourselves as thought leaders in the industry by providing high quality content that was relevant and beneficial to IT decision makers. We wanted to make sure we let our authors keep their opinions and voice, while at the same time taking an unbiased, agnostic approach. The last thing we wanted to do was start blathering on about what a great company GreenPages is or bragging about the most recent award we won (it was being named to the Talkin' Cloud 100, if you were wondering…). Over the course of the two years, we've posted over 200 blogs and seen the number of page views and shares across various social media sites increase drastically. We've brought in some big-time guest bloggers such as ConnectEDU CTO Rick Blaisdell, CA's Andi Mann, Gravitant's Director of Advanced Analytics and Senior Research Scientist, and more. We've incorporated a lot of video as well; in fact, for whatever strange reason, someone thought it was a good idea to let me host our Cloud Corner Series. We've covered topics ranging from cloud, virtualization, end user computing, BYOD, network infrastructure, storage, disaster recovery, shadow IT, project management, and much more.

Have there been challenges along the way? Absolutely. Have I had to go after people and chase them down, scratching and clawing until I get a blog to post? Yes. Have tears been shed? Has blood been shed? We’ll keep that to ourselves as it’s generally frowned upon by HR. And, yes, I have had to give William Wallace-like speeches to attempt to rally the troops. While there have been some challenges, all in all there’s been a great amount of enthusiasm and support from our writers to produce a high quality publication. For me, being in the industry for two years now with no previous technological background, the amount I’ve learned is ridiculous. Before starting at GreenPages, I would have rather listened to a Ben Stein Lecture or Bill Lumbergh explaining TPS Reports than read an article on software defined networking and the impact it will have on businesses in the next 5-10 years. I can see why our customers get excited to work with our consultants because they truly love and believe in the technology they talk about. I completely buy into their enthusiasm and passion and it makes me genuinely interested in the topics we cover. I’m in my mid-twenties and have, sadly, found myself out drinking at a bar with my friends having a great time before somehow winding up in a heated debate over the pros and cons of moving to a hybrid cloud architecture.

 

So, in case, for whatever deranged reason, you haven’t read all 200 of our posts, I’m going to list out my top ten from the past two years (in no particular order). Take a look and let me know what you think:

 

 

To close this out…I want to hear from you. What can we do to make Journey to the Cloud better? Are there any specific topics you'd like to hear more about? Any specific authors you'd like to hear more from? How about any features or functionality of the site you'd like added, changed, or improved? What have you seen on other sites that you like that we don't have? Leave a comment here or tweet us at @GreenPagesIT or @benstephenson1.