Category Archives: Cloud computing

The PaaS Market as We Know It Will Not Die Off

I’ve been hearing a lot about Platform as a Service (PaaS) lately as part of the broader discussion of cloud computing, both from customers and in articles across the web. In this post, I’ll describe PaaS, discuss a recent article that came out on the subject, and take a shot at sorting out IaaS, PaaS, and SaaS.

What is PaaS?

First, a quick trip down memory lane. As an intern in college, one of my tours of duty was through the manufacturing systems department at an automaker. I came to work the first day to find a modest desktop computer loaded with all of the applications I needed to look busy, and a nicely printed sheet with logins to various development systems. My supervisor called the play: “I tell you what I want, you code it up, I’ll take a look at it, and move it to test if it smells ok.” Ten other ambitious interns and I were more than happy to spend the summer with what the HR guy called “javaweb.” The next three months went something like this:

Part I: Set up the environment…

  1. SSH to abcweb01dev.company.com, head over to /opt/httpd/conf/httpd.conf, and configure AJP to point to abcapp01dev and abcapp02dev.company.com
  2. SSH to abcapp01dev.company.com, reinstall the Java SDK at the right version, install the proper database JARs, and configure the JDBC connection pool in /opt/tomcat/conf/context.xml
  3. SSH to abcdb01dev.company.com, create a user, and grant rights so the app servers can talk to the database
  4. Write something simple to test everything out
  5. Debug the environment to make sure everything works (a sketch of the configs from steps 1 and 2 follows below)
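
If you have never had the pleasure, steps 1 and 2 boiled down to edits like the following. This is a hedged reconstruction from memory rather than the actual files; the ports, pool settings, and driver are illustrative:

    # /opt/httpd/conf/httpd.conf (excerpt): point AJP at the two app servers
    <Proxy balancer://appcluster>
        BalancerMember ajp://abcapp01dev.company.com:8009
        BalancerMember ajp://abcapp02dev.company.com:8009
    </Proxy>
    ProxyPass /jpdwebapp balancer://appcluster/jpdwebapp

    <!-- /opt/tomcat/conf/context.xml (excerpt): the JDBC connection pool -->
    <Context>
      <Resource name="jdbc/appdb" auth="Container" type="javax.sql.DataSource"
                driverClassName="oracle.jdbc.OracleDriver"
                url="jdbc:oracle:thin:@abcdb01dev.company.com:1521:DEV"
                username="appuser" password="********"
                maxActive="20" maxIdle="5"/>
    </Context>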

Part II: THEN start coding…

  1. SSH to abcweb01dev.company.com, head over to /var/www/html, and work on my HTML login page for starters, other things down the road
  2. SSH to abcapp01dev.company.com, head over to /opt/tomcat/webapps/jpdwebapp/servlet, and code up my Java servlet to process my logins (a sketch follows below)
  3. Open another window, log in to abcweb01dev, and tail -f /var/www/access_log to see new connections being made to the web server
  4. Open another window, log in to abcapp01dev, and tail -f /opt/tomcat/logs/catalina.out to see debug output from my servlet
  5. Open another window, log in to abcapp01dev, and just keep /opt/tomcat/conf/context.xml open
  6. Open another window, log in to abcapp01dev, and run /opt/tomcat/bin/shutdown.sh; sleep 5; /opt/tomcat/bin/startup.sh (every time I make a change to the servlet)

(Host names and directory names have been changed to protect the innocent)
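
The servlet itself was nothing fancy. A minimal sketch of that era’s pattern, with hypothetical class and page names rather than the original code:

    // LoginServlet.java: a bare-bones login handler of the servlet era
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class LoginServlet extends HttpServlet {
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String user = req.getParameter("username");
            String pass = req.getParameter("password");
            if (checkCredentials(user, pass)) {
                req.getSession(true).setAttribute("user", user);
                resp.sendRedirect("home.jsp");
            } else {
                resp.setContentType("text/html");
                PrintWriter out = resp.getWriter();
                out.println("<html><body>Login failed.</body></html>");
            }
        }

        private boolean checkCredentials(String user, String pass) {
            // Would look the user up through the JDBC pool (jdbc/appdb)
            // defined in context.xml on the app server
            return false; // placeholder
        }
    }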

Setting up the environment was a little frustrating. And I knew that there was more to the story: some basic work, call it Part 0, to get some equipment in the datacenter, the OS installed, and IP addresses assigned. Part I, setting up the environment, is the work you would do to set up a PaaS platform. As a developer, the work in Part I existed to enable me and my department to do the job in Part II. And we had a job to do: get information to the people in the plants who were actually manufacturing product!


So, here’s a rundown:

Part 0: servers, operating systems, patches, IPs… IaaS

Part I: middleware, configuration, basic testing… PaaS

Part II: application development

So, to me, PaaS is all about using the bits and pieces provided by IaaS, configuring them into a usable platform, and delivering that platform to developers so that they can deliver software to the business. And hopefully the business is better off because of our software. In this case, our software helped the assembly plant identify and reduce “in-system damage” to vehicles: damage that happens as a result of the manufacturing process.

Is the PaaS market as we know it dead?

I’ve read articles predicting the demise of PaaS altogether and others just asking the question about its future. A recent Network World article entitled “Is the PaaS market as we know it dying?” discussed the subject. The article makes three main points, referring to 451 Research, Gartner, and other sources.

  1. PaaS features are being swallowed up by IaaS providers
  2. The PaaS market has settled down while the IaaS and SaaS markets have exploded
  3. Pure-play PaaS providers may be squeezed from the market by IaaS and SaaS


I agree with point #1. The evidence is in Amazon Web Services features like Auto Scaling, RDS, SQS, etc. These are fantastic features, but interfacing with them locks developers into using AWS as their single IaaS provider. The IaaS market is still very active, and I think there is a lot to come even though AWS is ahead of other providers at this point. IaaS is a commodity, and embedding specialized (read: PaaS) features in an otherwise IaaS system is a tool to get customers to stick around.

I disagree with point #2. The PaaS market has not settled down; it hasn’t even started yet! The spotlight has been on IaaS and SaaS because these are relatively simple to understand, given the recent boom in server virtualization. SaaS also used to be known as something provided by ASPs (Application Service Providers), so many people are already familiar with it. I think PaaS and its concepts are still finding their place.

I also disagree with point #3: the time and opportunity for pure-play PaaS providers is now. IaaS is becoming sorted out, and it is clearly a commodity item. As we highlighted earlier, solutions from PaaS providers can ride on top of IaaS. I think that PaaS will be the key to application portability amongst different IaaS providers, kind of like Java: write once, run on any JVM (kind of). As you might know, portability and interoperability figure prominently in NIST’s cloud computing guidance.

Portability is key. I think PaaS will remain its own concept apart from IaaS and SaaS, and that we’ll see some emergence of PaaS in 2014. Why? PaaS is the key to portable applications: an application written to a PaaS platform can be deployed on different IaaS platforms. It’s also important to note that AWS is almost always associated with IaaS, but they have started to look a lot like a PaaS provider (I touched on this in a blog earlier this month). An application written to use AWS features like Auto Scaling is great, but not very portable. Lastly, the PaaS market is ripe for innovation. Barriers to entry are low, as is the required startup capital (there is no need to build a datacenter to build a useful PaaS platform).

This is just my opinion on PaaS; I think the next few years will see a growing interest in PaaS, possibly even over IaaS. I’m interested in hearing what you think about PaaS. Feel free to leave me a comment here, find me on Twitter at @dixonjp90, or reach out to us at socialmedia@greenpages.com.

To hear more from John, download his whitepaper on hybrid cloud computing or his ebook on the evolution of the corporate IT department!


5 Cloud Predictions for 2014

By John Dixon, LogicsOne


Here are my 5 Cloud Predictions for 2014. As always, leave a comment below and let me know what you think!

1. IaaS prices will drop by at least 20%

Amazon has continued to reduce its pricing since it first launched its cloud services back in 2006. In February of last year, Amazon dropped its prices for the 25th time. By April, prices had dropped for the 30th time, and by the summer it was up to 37 times. Furthermore, there was a 37% drop in hourly costs for dedicated on-demand instances. Microsoft announced that they will follow AWS’s lead with regard to price cuts. I expect this trend to continue in 2014 and likely 2015. I highlight some of these price changes, and the impact they will have on the market as more organizations embrace the public cloud, in more detail in my eBook.

2. We’ll see signs of the shift to PaaS

Amazon is already starting to look more like a PaaS provider than an IaaS provider. Just consider pre-packaged, pre-engineered features like Auto Scaling, CloudWatch, SQS, and RDS, among other services. An application hosted with AWS that uses all of these features looks more like an AWS application and less like a cloud application. Using proprietary features is very convenient, but don’t forget how application portability is impacted. I expect continued innovation in the PaaS market with new providers and technology, while downward price pressure in the IaaS market remains high. Could AWS (focusing on PaaS innovation) one day source its underlying infrastructure to a pure IaaS provider? This is my prediction for the long term: large telecoms like AT&T, Verizon, BT, et al. will eventually own the IaaS market, while Amazon, Google, and Microsoft focus on PaaS innovation and use infrastructure provided by those telecoms. This of course leaves room for startup, niche PaaS providers to build something innovative and leverage quality infrastructure delivered by the telecoms. This is already happening with smaller PaaS providers. Look for signs of this continuing in 2014.

3. “The cloud” will not be regulated

Recently, there have been rumblings about regulating “the cloud,” especially in Europe, along with claims that European clouds are safer than American clouds. If we stick with the concept that cloud computing is just another way of running IT (I call it the supply chain for IT service delivery), then the same old data classification and security rules apply. Only now, if you use cloud computing concepts, the need to classify and secure your data appropriately becomes more important. An attempt to regulate cloud computing would certainly have far-reaching economic impacts. This is one to watch, but I don’t expect any legislative action here in 2014.

4. More organizations will look to cloud as enabling DevOps

It’s relatively easy for developers to head out to the cloud, procure needed infrastructure, and get to work quickly. When developers behave like this, they not only write code and test new products but also become the administrators of the platforms they own (all the way from the underlying code to patching the OS); development and operations come together. This becomes a bit stickier as things move to production, but the same concept can work (see prediction #5).

5. More organizations will be increasingly interested in governance as they build a DevOps culture

As developers can quickly bypass traditional procurement processes and controls, new governance concepts will be needed. Notice how I wrote “concepts” and not “controls.” Part of the new role of the IT department is to stay a step ahead of these movements, and offer developers new ways to govern their own platforms. For example, a real time chart showing used vs. budgeted resources will influence a department’s behavior much more effectively than a cold process that ends with “You’re over budget, you need to get approval from an SVP (expected wait time: 2-8 weeks).”

[Screenshots: DevOps CIO Dashboard and Service Owner Dashboard]

The numbers pictured are fictitious. With the concept of Service Owners, the owner of collaboration services can get a view of the applications and systems that provide the service. The owner can then see that VoIP spending is a little above the others, and drill down to see where resources are being spent (on people, processes, or technology). Different ITBM applications display these charts differently, but the premise is the same: real-time visibility into spend. With cloud usage in general gaining steam, it is now possible to adjust the resources allocated to these services. With this type of information available to developers, it is possible to take proactive steps to avoid compromising the budget allocated to a particular application or service. By the same token, this information will also expose opportunities to make informed investments in certain areas.
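
As a toy illustration of the premise (all numbers fictitious, and not tied to any particular ITBM product), the heart of such a chart is a used-versus-budgeted comparison that surfaces overspend before the formal approval process kicks in:

    // Used vs. budgeted spend per service, with an early-warning flag
    import java.util.Map;

    public class BudgetDashboardDemo {
        public static void main(String[] args) {
            // service -> { budgeted, used } in dollars (fictitious)
            Map<String, double[]> services = Map.of(
                    "VoIP",         new double[]{10000, 9400},
                    "Email",        new double[]{8000, 5100},
                    "Conferencing", new double[]{6000, 2800});

            services.forEach((name, v) -> {
                double pct = 100.0 * v[1] / v[0];
                String flag = pct > 90 ? "  <-- approaching budget" : "";
                System.out.printf("%-13s $%,6.0f of $%,6.0f (%3.0f%%)%s%n",
                        name, v[1], v[0], pct, flag);
            });
        }
    }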

So there you have it, my 2014 cloud predictions. What other predictions do you have?

To hear more from John, download his eBook “The Evolution of Your Corporate IT Department” or his whitepaper “Cloud Management, Now.”


Cloud Management, Business Continuity & Other 2013 Accomplishments

By Matt Mock, IT Director

It was a very busy year at GreenPages for our internal IT department. With 2013 coming to a close, I wanted to highlight some of the major projects we worked on over the course of the year. The four biggest projects we tackled were using a cloud management solution, improving our business continuity plan, moving our datacenter, and creating and implementing a BYOD policy.

Cloud Management as a Service

GreenPages now offers a Cloud Management as a Service (CMaaS) solution to our clients. We implemented the solution internally late last year, but really started utilizing it as a customer would this year by increasing what was being monitored and managed. We decided to put Exchange under the “Fully Managed” package of CMaaS. Exchange requires a lot of attention and effort. Instead of hiring a full-time Exchange admin, we were able to offload that piece with CMaaS, as our Managed Services team does all the health checks to make sure any new configuration changes are correct. This resulted in considerable cost savings.

Having access to the team 24/7 is a colossal luxury. Before using CMaaS, if an issue popped up at 3 in the morning, we would find out about it the next morning. This would require us to try and fix the problem during business hours. I don’t think I need to explain to anyone the hassle of trying to fix an issue with frustrated coworkers who are unable to do their jobs. If an issue arises now in the middle of the night, the problem has already been fixed before anyone shows up to start working. The Managed Services team also researches and remediates bugs that come up. This happened to us when we ran into some issues with Apple iOS calendaring. The Managed Services team did the research to determine the cause and went in and fixed the problem. If my team had tried to do this, it would have taken us two to three days of wasted time. Instead, we could be focusing on some of our other strategic projects. In fact, we are holding a webinar on December 19th that will cover strategies for and benefits of being the ‘first-to-know,’ and we will also provide a demo of the CMaaS Enterprise Command Center.

We also went live with fully automated patching, which requires zero intervention from my team. Furthermore, we leveraged CMaaS to allow us to spin up a fully managed Linux environment. It’s safe to say that if we hadn’t implemented CMaaS, we would not have been able to accomplish all of our strategic goals for this year.

{Download this free whitepaper to learn more about how organizations can revolutionize the way they manage hybrid cloud environments}

Business Continuity Plan

We also determined that we needed to update our disaster recovery plan to a true, robust business continuity plan. A main driver was our more diverse office model. Not only were more people working remotely as our workforce expanded, but we now have office locations up and down the east coast in Kittery, Boston, Attleboro, New York City, Atlanta, and Tampa. We needed to ensure that we could continue to provide top quality service to our customers if an event were to occur. My team took a careful look at our then-current infrastructure setup. After examining our policies and plans, we generated new ones around the optimal outcome we wanted and then adjusted the infrastructure to match. A large part of this included changing providers for our data and voice, which included moving our datacenter.

Datacenter Move

In 2013 we wanted to have more robust datacenter facilities. Ultimately, we were able to get into an extremely redundant and secure datacenter at the Markley Group in Boston that provided us with cost savings. Furthermore, Markley is also a large carrier hotel, which gives us additional savings on circuit costs. With this move, we’re better able to deliver to our customers 24/7. Another benefit our new datacenter offered was excess office space. That way, if there ever was an event at one of our GreenPages locations, we would have a place to send people to work. I recently wrote a post which describes the datacenter move in more detail.

BYOD Policy

As 2013 ends, we are finishing our first full year with our BYOD policy. We are taking this time to look back, identify any issues with the policies or procedures, and adjust for the next year. Our plan is to ensure that year two is even more streamlined. I answered questions in a recent Q&A explaining our BYOD initiative in more detail.

I’m pretty happy looking back at the work we accomplished in 2013. As with any year, there were bumps along the way and things we didn’t get to that we wanted to. All in all though, we accomplished some very strategic projects that have set us up for success in the future. I think that we will start out 2014 with increased employee satisfaction, increased productivity of our IT department, and of course noticeable cost savings. Here’s to a successful 2014!

Is your IT team the first-to-know when an IT outage happens? Or, do you find out about it from your end users? Is your expert IT staff stretched thin doing first-level incident support? Could they be working on strategic IT projects that generate revenue? Register for our upcoming webinar to learn more!


Why Automate? What to Automate? How to Automate?

By John Dixon, Consulting Architect

Automation is extremely beneficial to organizations. However, questions often come up around why to automate, what to automate, and how to automate.

Why automate?

There are several key benefits surrounding automation. They include:

  • Saving time
  • Freeing employees to focus on other (hopefully more strategic) tasks
  • Reducing errors by removing human intervention
  • Improving troubleshooting and support, since everything is deployed the same way

What to automate?

Organizations should always start with the voice of the customer (VoC). IT departments need to factor in what the end user wants and expects in order to improve their experience. If you can’t trace something you’re automating back to an improved customer experience, that’s usually a good warning sign that you should not be automating it. In addition, you need to be able to trace back how automation has benefited the organization. The benefit should always be measurable, and it should always be financial.

What are companies automating?

Request management is the hot one because it’s a major component of cloud computing. This includes service catalogs and self-service portals. Providing a self-service portal, sending the request for approval based on the dollar amount requested, and fulfilling the order through one or more systems is something that is commonly automated today. My advice here is to automate tasks through a general purpose orchestrator tool (such as CA Process Automation or similar tools) so that automated jobs can be managed from a single console, instead of stitching together disparate systems that call each other in a “rat’s nest” of automation. The general purpose orchestrator also allows for easier troubleshooting when an automated task does not complete successfully.

How to automate?

There are some things to consider when sitting down to automate a task, or even determining the best things to automate. Here are a few key points:

  1. Start with the VoC, or Voice of the Customer, and work backwards to identify the systems that are needed to automate a particular task. For example, maybe the customer is the Human Resources department, and they want to automate the onboarding of a new employee. That may mean setting up user accounts, ordering a new cell phone, ordering a new laptop, and scheduling the new employee on their manager’s calendar for their first day of work. Map out the systems that are required to accomplish this, and integrate those – and no more. You may find that some parts of the procedure are already automated; perhaps your phone provider already has an interface to programmatically request new equipment. Take full advantage of these components.
  2. Don’t automate things that you can’t trace back to a benefit for the organization. Just because you can automate something doesn’t mean that you should. Again, use the voice of the customer and user stories here. A common user story is structured as follows:
    1. “As a [role],
    2. I want to [get something done]
    3. So that I can [benefit in the following way]”
  3. Start small and work upwards to automate more and more complex tasks. Remember the HR onboarding procedure in point #1? I wouldn’t suggest beginning your automation journey there. Pick out one thing to automate from a larger story, and get it working properly. Maybe you begin by automating the scheduling of an appointment in Outlook or your calendaring system, or creating a user in Active Directory. Those pieces become components in the HR onboarding story, and perhaps in other stories as well.
  4. Use a general purpose orchestrator instead of stitching together different systems. As in point #3, using an orchestrator will allow you to build reusable components that are useful for automating different tasks (see the sketch after this list). A general purpose orchestrator also allows for easier troubleshooting when things go wrong, tracking of automation jobs in the environment, and more advanced conditional logic. Troubleshooting automation any other way can be very difficult.
  5. You’ll need someone with software development experience. Some automation packages claim that even non-developers can build robust automation with “no coding required.” In some cases, that may be true. However, the experience that a developer brings to the table is a must-have when automating complex tasks like the HR onboarding example in point #1.
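
To make points #3 and #4 concrete, here is a minimal sketch in plain Java of reusable steps composed by a simple orchestrator. This is deliberately not CA Process Automation or any other product’s API; every class and method name is hypothetical:

    // Illustrative orchestration pattern: small reusable steps, one runner
    import java.util.ArrayList;
    import java.util.List;

    interface AutomationStep {
        void run(String employeeName) throws Exception;
    }

    class CreateAdUserStep implements AutomationStep {
        public void run(String employeeName) {
            // Would call your directory's provisioning interface here
            System.out.println("Created AD account for " + employeeName);
        }
    }

    class ScheduleOrientationStep implements AutomationStep {
        public void run(String employeeName) {
            // Would call your calendaring system here
            System.out.println("Scheduled orientation for " + employeeName);
        }
    }

    class OnboardingWorkflow {
        private final List<AutomationStep> steps = new ArrayList<>();

        OnboardingWorkflow add(AutomationStep step) {
            steps.add(step);
            return this;
        }

        void execute(String employeeName) {
            for (AutomationStep step : steps) {
                try {
                    step.run(employeeName);
                } catch (Exception e) {
                    // Central error handling is what a single console buys you
                    System.err.println("Step failed: " + e.getMessage());
                    return;
                }
            }
        }
    }

    public class OnboardingDemo {
        public static void main(String[] args) {
            new OnboardingWorkflow()
                    .add(new CreateAdUserStep())
                    .add(new ScheduleOrientationStep())
                    .execute("Jane Smith");
        }
    }

The point of the pattern is that something like CreateAdUserStep is useful on its own today, becomes one component of the larger HR onboarding story later, and the workflow gives you a single place to log, track, and troubleshoot.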


What has your organization automated? How have the results been?


Cloud Spending Will Increase 1 Billion% by 2014

By Ben Stephenson, Journey to the Cloud

It seems like every week a new study comes out analyzing cloud computing growth. Whether it’s that public cloud services spending will reach $47.4B in 2013, that global SaaS spending is projected to grow from $13.5B in 2011 to $32.8B in 2016, that the public cloud services market is forecast to grow 18.5 percent in 2013, or that cloud spending at Dunder Mifflin will increase 200% by 2020, the indication is that cloud adoption and spending are on the rise. But how is that relevant to you?

Does it matter to the everyday CIO that cloud spending at midsized companies west of the Mississippi is going to increase by 15% over the next 3 years? The relevant question isn’t how much cloud adoption and spending will increase, but why. It’s the “why” that matters to the business. If you understand the why, it becomes easier to put context around the statistics coming out of these studies. It comes down to a shift in the industry, a shift in the economics of how a modern-day business operates. This shift revolves around the way IT services are being delivered.

To figure out where the industry is going, and why spending and adoption are increasing, you need to look at where the industry has come from. The shift from on-premise IT to public cloud began with SaaS-based technologies. Companies like Salesforce.com realized that organizations were wasting a lot of time and money buying and deploying hardware for their CRM solutions. Why not use the internet to let organizations pay a subscription fee instead of owning the entire infrastructure? This, however, was not true cloud computing. Next came IaaS with Amazon’s EC2 initiative. Essentially, Amazon realized it had excess compute capacity and decided to rent it out to people who needed the extra space. IaaS put an enormous amount of pressure on corporate IT because app dev teams no longer had to wait weeks or months to test and deploy environments. Instead, they could start right away and become much more efficient. Finally, PaaS came about with initiatives such as Microsoft Azure.

{Free ebook: The Evolution of Your Corporate IT Department}

The old IT paradigm, or a private cloud environment, consists of organizations buying hardware and software and keeping it in their datacenter behind their own firewalls. While a private cloud environment doesn’t need to be fully virtualized, it does need to be automated, and very few organizations are actually operating in a true private cloud environment. Ideally, a true private cloud environment lets internal IT compete with public cloud providers by providing a similar amount of speed and agility. While the industry is starting to shift towards public cloud, the private cloud is not going away. Public cloud will not be the only way to operate IT, or even the majority of the way, for a long time. This brings us to the hybrid cloud computing model, the direct result of this shift. Hybrid cloud is the combination of private and public cloud architectures. It’s about the ability to seamlessly transition workloads between private and public; in other words, moving on-premise workloads to rented platforms, where you don’t own anything, in order to leverage services.

So why are companies shifting towards a hybrid cloud model? It all comes down to velocity, agility, efficiency, and elasticity. IT delivery methodology is no longer a technology discussion, but, rather, it’s become a business discussion. CIOs and CFOs are starting to scratch their heads wondering why so much money is being put towards purchasing hardware and software when all they are reading about is cloud this and cloud that.

{Free Whitepaper: Revolutionizing the Way Organizations Manage Hybrid Cloud Environments}

The spending and adoption rates of cloud computing are increasing because the shift in the industry is no longer just talk – it’s real and it’s here now. The bottom line? We’re past hypothetical discussions. There is a major shift in the industry that business decision makers need to be taking seriously. If you’re not modernizing your IT operations by moving towards a hybrid cloud model, you’re going to be missing out on the agility and cost savings that can give your organization a substantial competitive advantage.  This is why cloud adoption and spending are on the rise. This is why you’re seeing a new study every month on the topic.

Moving Our Datacenter: An IT Director’s Take

An Interview with Matt Mock, IT Director, GreenPages Technology Solutions

Journey to the Cloud’s Ben Stephenson sat down with GreenPages’ IT Director Matt Mock to discuss GreenPages’ recent datacenter move.

Ben: Why did GreenPages decide to move its datacenter?

Matt: Our contract was up, so we started evaluating new facilities, looking for a robust, redundant facility to house our equipment. We needed a facility that could meet specific objectives around our business continuity plan. In addition, we were looking for cost savings.

Ben: Where did you move the datacenter to and from?

Matt: Geographically, we stayed in a close area. We moved it from Charlestown, MA, a couple of miles down the road into downtown Boston. Staying within a close area certainly made the physical move quicker and easier.

Ben: What were the benefits of moving the datacenter?

Matt: Ultimately, we were able to get into an extremely redundant and secure datacenter that provided us with cost savings. Furthermore, the datacenter is also a large carrier hotel, which gives us additional savings on circuit costs. With this move, we’re better able to deliver to our customers 24/7.

{Register for our upcoming webinar on 11/7 on key announcements from VMworld 2013}

Ben: Tell us about the process of the move. What had to happen ahead of time to ensure a smooth transition?

Matt: The most important parts were planning, testing, and communication. We put together an extremely detailed plan that broke out every phase of the move down to 15-minute increments. We devised teams for the specific phases, with a communication plan for each team. We also devised a backup emergency plan in the event that we hit any issues the night of the move.

Ben: What happened the night of the move?

Matt: The night of the move, we leveraged the excellent facilities at Markley to run a command center headed by one of our project managers. In the room, we had multiple conference bridges to run the different work streams and ensure smooth, constant communication. We also utilized Huddle, our internal collaboration tool, to communicate, as our internal systems were down during the move.

Ben: Anything else you had to factor in?

Matt: Absolutely. The same night of the move we were also changing both voice and data providers at three different locations, which added another layer of complexity. We had to work closely with our new providers to ensure a smooth transition. Because we have a 24/7 Managed Services division at GreenPages, we needed to continue to offer customers the same support during the move that we do on a day-to-day basis.

Ben: Did you experience unexpected events during the move? If so, what were they and how did you handle them?

Matt: With any complex IT project you’re going to experience unexpected events. A couple that we experienced were some hardware failures and unforeseen configuration issues. Fortunately, our detailed plan accounted for these issues, and we were able to address them with the teams on hand and remain on schedule.

Ben: You used an all GreenPages team to accomplish this, right?

Matt: Correct. We did not use any outside vendors for this move – all services were rendered by the GreenPages team. Last time we used outside providers and this time we had a much better experience. I’m in the unique position where I have access to an entire team of project managers and technical resources that made doing this possible. In fact, this is something we offer our customers (from consulting to project management to the actual move) so our team is very, very good at it.

Ben: What advice do you have for other IT Directors who are considering moving their datacenters?

Matt: Detailed planning and constant communication are critical. Have a plan in place for every possible scenario, and have an emergency plan ready so that in the middle of the night you’re not scrambling to figure out how to address unforeseen issues.

Ben: Congratulations on the successful move. See you Monday after the Patriots crush your Steelers.

Would you like to learn more about how GreenPages can help you with your datacenter needs?

Moving Email to the Cloud Part 2

By Chris Chesley, Solutions Architect

My last blog post was part 1 of moving your Email to the Cloud with Office 365.  Here’s the next installment in the series in which I will be covering the 3 methods of authenticating your users for Office 365.  This is a very important consideration and will have a large impact on your end users and their day to day activities.

The first method of authenticating your users into Office 365 is to do so directly.  This has no ties to your Active Directory.  The benefits here are that your users get mail, messages, and SharePoint access regardless of your site’s online status.  The downside is that your users may have a different password than the one they use to get into their desktops/laptops, and this can get very messy if you have a large number of users.

The second way of authenticating your users is full Active Directory integration.  I will refer to this as the “Single Sign On” method.  In this method, your Active Directory is the authoritative source of authentication for your users.  Users log into their desktop/laptop and can access all of the Office 365 applications without typing their password again, which is convenient.  You DO need a few servers running locally to make this happen.  You need an Active Directory Federation Services (ADFS) server and an Azure Active Directory Sync server.  Both of these services are needed to sync your AD and user information to Office 365.  The con of this method is that you need a redundant AD setup, because if it’s down, your users are not going to be able to access mail or anything else in the cloud.  You can do this by hosting a Domain Controller, and the other two systems I mentioned, in a cloud or at one of your other locations, if you have one.

The third option is what I will refer to as “Single Password.”  In this setup, you install an Azure Active Directory Sync server in your environment but do not need an ADFS server.  The Sync tool will hash your users’ passwords and send the hashes to Office 365.  When a user tries to access any of the Office 365 services, they are asked to type in their password.  The password is then hashed and compared to the stored hash, and they are let in if the two match.  This does require users to type their password again, but it allows them to use their existing Active Directory password, and anytime that password changes, it is synced to the cloud.
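
Conceptually, that hash-and-compare check works like the toy sketch below.  To be clear, this is a generic illustration of the idea, not Microsoft’s actual implementation or hashing scheme:

    // Generic hash-compare authentication (illustration only)
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.HashMap;
    import java.util.Map;

    public class HashCompareDemo {
        // Stands in for the hashes the Sync tool sends to the cloud
        static Map<String, String> syncedHashes = new HashMap<>();

        static String hash(String password) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b & 0xff));
            return sb.toString();
        }

        static boolean authenticate(String user, String typed) throws Exception {
            // The cloud side stores only the hash; it hashes what the user
            // typed and compares the two values
            return hash(typed).equals(syncedHashes.get(user));
        }

        public static void main(String[] args) throws Exception {
            syncedHashes.put("jsmith", hash("MyADpassword1"));
            System.out.println(authenticate("jsmith", "MyADpassword1")); // true
            System.out.println(authenticate("jsmith", "guess"));         // false
        }
    }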

The choice of which method you use has a big impact on your users as well as how you manage them.  Knowing these choices and choosing one that meets your business goals will set you on the path of successfully moving your services to the cloud.


Download this free ebook on the evolution of the corporate IT department


My VMworld Breakout Session: Key Lessons Learned from Deploying a Private Cloud Service Catalog

By John Dixon, Consulting Architect, LogicsOne


Last month, I had the special privilege of co-presenting a breakout session at VMworld with our CTO Chris Ward. The session’s title was “Key Lessons Learned from Deploying a Private Cloud Service Catalog,” and we had a full house for it. Overall, the session went great and we had a lot of good questions. In fact, due to demand, we ended up giving the presentation twice.

In the session, Chris and I discussed a recent project we did for a financial services firm where we built a private cloud, front-ended by a service catalog. A service catalog really enables self-service; it is one component of corporate IT’s opportunity to partner with the business. In a service catalog, the IT department can publish the menu of services that it is willing to provide and (sometimes) the price that it charges for those services. For example, we published a “deploy VM” service in the catalog, and the base offering was priced at $8.00 per day. Additional storage or memory beyond the basic spec was available at an additional charge. When the customer requests “deploy VM,” the following happens:

  1. The system checks to see if there is capacity available on the system to accommodate the request
  2. The request is forwarded to the individual’s manager for approval
  3. The manager approves or denies the request
  4. The requestor is notified of the approval status
  5. The system fulfills the request – a new VM is deployed
  6. A change record and a new configuration item are created to document the new VM
  7. The system emails the requestor with the hostname, IP address, and login credentials for the new VM

This sounds fairly straightforward, and it is. Implementation is another matter, however. It turns out that we had to integrate with vCenter, Active Directory, the client’s ticketing system, the client’s CMDB, an approval system, and the provisioned OS in order to automate the fulfillment of this simple request. As you might guess, documenting this workflow upfront was incredibly important to the project’s success. We documented the workflow and assessed it against the request-approval-fulfillment theoretical paradigm to identify the systems we needed to integrate. One of the main points that Chris and I made at VMworld was to build this automation incrementally instead of tackling it all at once. That is, just get the automation suite to talk to vCenter before tying in AD, the ticketing system, and all the rest.
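
To give the request-approval-fulfillment paradigm some shape, here is a rough sketch in plain Java. This is not the actual implementation or any product’s API; every name is hypothetical, and each stub stands in for one of the integrations listed above:

    // The seven-step "deploy VM" flow, with one stub per system integration
    public class DeployVmWorkflow {

        public static void main(String[] args) {
            fulfill("jdoe", "2 vCPU / 4 GB RAM / 40 GB disk");
        }

        static void fulfill(String requestor, String vmSpec) {
            if (!capacityAvailable(vmSpec)) {                            // step 1
                notifyUser(requestor, "Denied: no capacity available");
                return;
            }
            boolean approved = askManager(managerOf(requestor), vmSpec); // steps 2-3
            notifyUser(requestor, approved ? "Approved" : "Denied");     // step 4
            if (!approved) return;

            String hostname = provisionVm(vmSpec);       // step 5 (vCenter)
            recordChangeAndConfigItem(hostname);         // step 6 (ticketing, CMDB)
            notifyUser(requestor, "VM ready: " + hostname); // step 7 (email details)
        }

        // Wire these up one at a time: provisionVm() against vCenter first,
        // then AD, the ticketing system, the CMDB, and approvals
        static boolean capacityAvailable(String spec) { return true; }
        static String managerOf(String user) { return "manager-of-" + user; }
        static boolean askManager(String approver, String spec) { return true; }
        static String provisionVm(String spec) { return "vm0042.example.com"; }
        static void recordChangeAndConfigItem(String hostname) { }
        static void notifyUser(String user, String msg) {
            System.out.println(user + ": " + msg);
        }
    }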

Download this on-demand webinar to learn more about how you can securely enable BYOD with VMware’s Horizon Suite

Self-service, automation, and orchestration all drove real value during this deployment. We were able to eliminate or reduce at least three manual handoffs via this single workflow. Previously, these handoffs were made either by phone or through the client’s ticketing system.

During the presentation we also addressed which systems we integrated, which procedures we selected to automate, and what we plan to have the client automate next. You can check out the actual VMworld presentation here. (If you’re looking for more information around VMworld in general, Chris wrote a recap blog of Pat Gelsinger’s opening keynote as well as one on Carl Eschenbach’s General Session.)

Below are some of the questions we got from the audience:

Q: Did the organization have ITSM knowledge beforehand?

A: The group had very limited knowledge of ITSM but left our project with a real-world perspective on ITIL and ITSM.

Q: What did we do if we needed a certain system in place to automate something?

A: We did encounter this and either labeled it as a risk or used “biomation” (self-service is available, fulfillment is manual, and the customer doesn’t know the difference) until the necessary systems were made available.

Q: Were there any knowledge gaps at the client? If so, what were they?

A: Yes. Both the developer mentality and the service management mentality are needed to complete a service catalog project effectively. Traditional IT engineering and operations staff do not typically have a developer mentality or experience with languages like JavaScript.

Q: Who was the primary group at the client driving the project forward?

A: IT engineering and operations were involved, with IT engineering driving most of the requirements.

Q: At which level was the project sponsored?

A: VP of IT Engineering with support from the CIO

All in all, it was a very cool experience to get the chance to present a breakout session at VMworld. If you have any other questions about key takeaways we got from this project, leave them in the comment section. As always, if you’d like more information you can contact us. I also just finished an ebook on “The Evolution of the Corporate IT Department” so be sure to check that out as well!

The Evolution of Your Corporate IT Department

By John Dixon, Consulting Architect, LogicsOne


Corporate IT departments have progressed from keepers of technology to providers of complex solutions that businesses truly rely on. Even a business with an especially strong core competency simply cannot compete without information systems to provide key pieces of technology such as communication and collaboration systems (e.g., email). Many corporate IT departments have become adept providers of technology solutions. We at GreenPages think that corporate IT departments should be recognized as providers of services. We also think that emerging technology and management techniques are creating an especially competitive market of IT service providers. Professional business managers will no doubt recognize that their internal IT department is perhaps just another competitor in this market for IT services. Could the business choose to source its systems to a provider of services other than internal corporate IT?

IT departments large and small already have services deployed to the cloud. We think that organizations should prepare to deploy services to the cloud provider that meets their requirements most efficiently, and eventually, move services between providers to continually optimize the environment. As we’ll show, one of the first steps to enabling this Cloud Management is to use a tool that can manage resources in different environments as if they are running on the same platform. Corporate IT departments can prepare for cloud computing without taking the risk of moving infrastructure or changing any applications.

In this piece, I will describe the market for IT service providers, the progression of corporate IT departments from technology providers to brokers of IT services, and how organizations can take advantage of behavior emerging in the market for IT services. This is not a cookbook for building a private cloud for your company; instead, it offers a perspective on how tools and management techniques, namely Cloud Management as a Service (CMaaS), can be adopted to take advantage of cloud computing, whatever it turns out to become. In the following pages, we’ll answer these questions:

  1. Why choose a single cloud provider? Why not position your IT department to take advantage of any of them?
  2. Why not manage your internal IT department as if it is already a cloud environment?
  3. Can your corporate IT department compete with a firm whose core competency is providing infrastructure?
  4. When should your company seriously evaluate an application for deployment to an external cloud service provider? Which applications are suitable to deploy to the cloud?


To finish reading, download John’s free ebook


How IT Operations is Like Auto Racing

By John Dixon, Consulting Architect, LogicsOne


If you’ve ever tried your hand at auto racing like I did recently at Road Atlanta, you’ll know that putting up a great lap time is all about technique. If you’ve ever been to a racing school, you’ll also remember that being proactive and planning your corners is absolutely critical to driving safely. Now let’s compare IT operations to auto racing. Everyone knows how to drive a car, essentially, just as every company, essentially, knows how to run IT. What separates a good driver from a great one? Technique, preparation, and knowing the capabilities of your driver and equipment.


The driver = your capabilities

The car = your technology

The track = your operations as the business changes


Preparation

Let’s spend a little bit of time on “preparation.” As we all know, preparation time is often a luxury. From what I have seen consulting over the past few years, preparation is just not instilled in the culture of IT. But we’d all agree that more preparation leads to better outcomes (for almost everything, really). So, how do we get more preparation time? This is where the outsourcing trend gained momentum: outsource the small stuff to get more time back to work on strategic projects. Well, this didn’t always work out very well, as typical outsourcing arrangements moved large chunks of IT to an outside provider. Why didn’t we move smaller chunks first? That’s what we do in auto racing: the reconnaissance lap! Now we have the technology and arrangements to do a reconnaissance lap of sorts. For example, our Cloud Management as a Service (CMaaS) offering has this philosophy built in: we can manage certain parts of the infrastructure that you select, and leave others alone. Maybe you’d like to have your Exchange environment fully managed but not your SAP environment. We’ve built CMaaS with the flexible technology and arrangements to do just that.

Technique


Auto racing: Safety first! Check your equipment before heading out, and let the car warm up before increasing speed.
IT operations: Make sure your IT shop can perform as a partner with the business.

Auto racing: Know where to go slow! You can’t take every turn at full throttle. Even if you can, it’s worth it to “throw away” some corners in preparation for straight sections.
IT operations: Know where to allocate investment in IT. It’s all about producing results for the business.

Auto racing: First lap: reconnaissance (stay on the track).
IT operations: Avoid trying to tackle very complex problems with brand new technology (e.g., did you virtualize Exchange on your very first P2V?).

Auto racing: Last lap: cool down (stay on the track).
IT operations: An easy one: manage the lifecycle of your applications and middleware to avoid being caught by a surprise required upgrade.

Auto racing: Know where to go fast! You can be at full throttle without any brake or steering inputs (as in straight sections), so dig in!
IT operations: Recognize established techniques and technologies and use them to maximum advantage.

Auto racing: Smooth = fast. Never stab the throttle or the brakes! Sliding all over the track with abrupt steering and throttle inputs is not the fastest way (but it IS fun and looks cool).
IT operations: Build capabilities gradually and incrementally instead of looking to install a single technology to solve all problems today.

Auto racing: Know the capabilities of your car: brakes, tires, clutch, handling. Exceed the capabilities of your equipment and see what happens.
IT operations: Take the time to know your people, processes, and technology. Which things work well, and which could be improved? This depends greatly on your business, but there are some best practices to run a modern IT shop.

Auto racing: Improve your time with each lap.
IT operations: This is all about continuous improvement. Many maneuvers in IT should be repeatable (like handling a trouble ticket), so do it better every time.

Auto racing: Take a deep breath, check your gauges, check your harnesses, check your helmet.
IT operations: Monitoring is important, but it is not an endgame for most of us. Be aware of things that could go wrong, how you could mitigate risk, which workarounds you could implement, etc.

Auto racing: Carry momentum around the track. A high-horsepower car with a novice driver will always lose to a great driver in a sedan.
IT operations: Technology doesn’t solve everything. You need proper technique and preparation.

Auto racing: Learn from your mistakes; they aren’t the end of the world.
IT operations: With well-instrumented monitoring, performance blips or mistakes are opportunities to improve.


Capabilities

A word on capabilities. Capabilities are not something you simply install with software or infrastructure, just as an aspiring racecar driver can’t simply obtain the capability required to win a professional F1 race from a weekend class. You need assets (e.g., infrastructure, applications, data) and resources (e.g., dollars) to build capabilities. What exactly is a capability? In racing, it’s the ability to get around a track, any track, quickly and safely. In IT, a basic example would be the ability to handle a helpdesk call and resolve the issue to completion. An advanced IT capability in a retail setting might be to produce a report on how frequently shoppers from a particular zip code purchase a certain product. Or perhaps it’s an IT governance capability to understand the costs of providing a particular IT service. One thing I’ve seen in consulting with various shops is that organizations could do a better job of understanding their capabilities.

Now picture yourself in the driver’s seat (of your IT shop). Know your capabilities, but really think about your technique and about continuously improving your “lap times.”

  1. Where are your straight sections – where you can just “floor it” and hang on? These might be well-established processes, projects, or tasks that pay obvious benefits. Can you take some time to create more straight sections?
  2. How much time do you have for preparation? How much time do you spend “studying the track” and “knowing your equipment?” Do you know your capabilities? Can you create time that you can use for preparation?
  3. Where are your slow sections? The processes that require careful attention to detail. This is probably budget planning time for many of us. Hiring time is probably another slow section.
  4. Do you understand your capabilities? Defining the IT services that you provide your customer is a great place to start. If you haven’t done this yet, you should, especially if you’re looking at cloud computing. GreenPages and our partners have some well-established techniques to help you do this successfully.


As always, feel free to reach out if you’d like to have a conversation just to toss around some ideas on this topic.


Now for the fun part: a video that a classmate of mine recorded of a hot lap around Road Atlanta. The video begins in turn 11 (under the bridge).

  1. Turn 11 is important because it is a setup to the front straight section. BUT, it is pretty dangerous too as it leads downhill to turn 12 (the entrance to the straight). Position the car under the RED box on the bridge and give a small amount of right steering input. Build speed down the hill.
  2. Clip the apex of turn 11 and pull the car into turn 12. Be gentle with turn 12 – upset the car over the gators and you could easily lose control.
  3. Under the second bridge and onto the front straight section. Grab 5th gear if you can. Up to ~110mph. Position the car out to the extreme left side of the track for turn 1.
  4. Show no mercy to the brakes for turn 1! Engage ABS, downshift, then trail brake into the right hander, pull the car in to the apex of the turn in 4th gear, carrying 70-80mph.
  5. Uphill for turn 2. Aim the nose of the car at the telephone pole in the distance, as turn 2 is blind. Easy on the throttle!
  6. Collect the apex at turn 2 and head downhill for turn 3. Use a dab of brakes to adjust speed as you turn slightly right for turn 3.
  7. Turn slightly left for turn 4 and hug the inside
  8. Track out and downhill for “the esses” – roll on the throttle easily, you’ve got to keep momentum for the uphill section at turn 5.
  9. The esses are a fast part of the track but be careful not to upset the car
  10. Brake slightly uphill for turn 5. It is the entrance to a short straight section where you can gain some speed
  11. Stay in 4th gear for turn 6 and bring the car to the inside of the turn
  12. Track way out to the left for the crucial turn 7 – a slow part of the track. Brake hard and downshift to third gear. Get this one right as it is the entrance to the back straight section.
  13. Build speed on the straight – now is the time to floor it!
  14. Grab 5th gear midway down the straight for 110+ mph. Take a deep breath! Check your gauges and harnesses.
  15. No mercy for the brakes at turn 10a! Downshift to 4th gear, downshift to 3rd gear and trail brake as you turn left
  16. Slight right turn for turn 10b and head back uphill to the bridge – position the car under the RED box and take another lap!