Category archive: Cloud computing

Are We All Cloud Service Brokers Now?

By John Dixon, Consulting Architect

 

Robin Meehan of Smart421 recently wrote a couple of great posts on cloud service brokers (CSBs) and the role they play for consumers of cloud services (http://smart421.wordpress.com/2014/02/24/were-mostly-all-cloud-services-brokers-now/ and http://smart421.wordpress.com/2014/02/25/cloud-brokerage-and-dynamic-it-workload-migration/). I’m going to write two posts about the topic. The first gives some background on my views and interpretation of cloud service brokers. In the second post, I will break down some of Robin’s points and explain why I agree or disagree.

Essentially, a cloud broker offers consumers three key things that a single cloud provider does not (these are from the NIST definition of a Cloud Service Broker):

  • Intermediation
  • Aggregation
  • Arbitrage (run-time, deployment-time, plan-time)

My interpretation of these is as follows. We’ll use Amazon Web Services as the example IaaS cloud provider and GreenPages as the example of the cloud broker:

Intermediation. As a cloud broker, GreenPages sits between you, the consumer, and AWS. GreenPages and other CSBs do this so they can add value to the core AWS offering. Why? Billing and chargeback is a great example. A bill from AWS includes line-item charges for EC2, S3, and whichever other services you used during the past month – so you would be able to see that EC2 charges for January were $12,502.90 in total. GreenPages takes this bill and processes it so that you can get more granular information about your spend in January. We would be able to show you:

  • Spend per application
  • Spend per environment (development, test, production)
  • Spend per tier (web, application, database)
  • Spend per resource (CPU, memory, storage, managed services)
  • A comparison of January 2014 spend to December 2013, or even January 2013
  • An estimate of the spend for February 2014

So, going directly to AWS, you’d be able to answer a question like, “how much did I spend in total for compute in January?”

And, going through GreenPages as a cloud broker, you’d be able to answer a question like, “how much did the development environment for Application X cost in January, and how does that compare with the spend in December?”

I think you’d agree that it is easier to wrap governance around the spend information from a cloud service broker rather than directly from AWS. This is just one of the advantages of using a CSB in front of a cloud provider – even if you’re like many customers out there and choose to use only one provider.
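For illustration, here is a minimal sketch of the kind of post-processing a broker layers on top of the provider’s bill. It assumes, purely hypothetically, that the detailed billing export is a CSV whose line items carry cost-allocation tags named application, environment, and tier; the field names, file layout, and helper functions are invented for this example and are not AWS’s or GreenPages’ actual formats.

    import csv
    from collections import defaultdict

    def spend_by(bill_path, tag):
        """Sum line-item costs in a (hypothetical) detailed billing CSV, grouped by one tag."""
        totals = defaultdict(float)
        with open(bill_path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row.get(tag) or "untagged"] += float(row["cost"])
        return dict(totals)

    def spend_for(bill_path, application, environment):
        """Answer: how much did this application's environment cost this billing period?"""
        total = 0.0
        with open(bill_path, newline="") as f:
            for row in csv.DictReader(f):
                if row.get("application") == application and row.get("environment") == environment:
                    total += float(row["cost"])
        return total

    # e.g. spend_by("january.csv", "environment") -> {"development": ..., "test": ..., "production": ...}
    # e.g. spend_for("january.csv", "Application X", "development") vs. spend_for("december.csv", ...)

The code itself is trivial; what it exposes is the prerequisite: granular chargeback only works if every resource is tagged (or otherwise mapped) to an application, environment, and tier when it is provisioned, which is exactly the discipline a broker enforces on top of the raw bill.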

Aggregation. As a CSB, GreenPages aggregates the offerings from many providers and provides a simple interface to provision resources to any of them. Whether you choose AWS, Terremark, Savvis, or even your internal vSphere environment, you’d use the same procedure to provision resources. On the provider side, CSBs also aggregate demand from consumers and are able to negotiate rates. Why is this important? A CSB can add value in three ways here:

1) By allowing you to compare the offerings of different providers – in terms of pricing, SLA guarantees, service credits, supported configurations, etc.

2) By placing a consistent approval framework in front of requests to any provider.

3) By using aggregated demand to negotiate special pricing and terms with providers – terms that may not be available to an individual consumer of cloud services.

The approval framework is of course optional – if you wish, you could choose to allow any user to provision infrastructure to any provider. Either way, a CSB can establish a request management framework in front of “the cloud” and can, in turn, provide things like an audit trail of requests and approvals. Perhaps you want to raise an ITIL-style change whenever a cloud request is fulfilled? A CSB can integrate with existing systems like Remedy or ServiceNow for that.
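As a rough sketch of what “one procedure, many providers” with an approval gate in front of it might look like, consider the toy example below. The provider names, the $500 threshold, and the in-memory audit log are all invented for illustration; a real CSB would call each provider’s actual API and record the request in a system like Remedy or ServiceNow rather than a Python list.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Request:
        requester: str
        provider: str        # "aws", "terremark", "savvis", "internal-vsphere", ...
        size: str            # provider-neutral size, e.g. "small"
        monthly_cost: float

    APPROVAL_THRESHOLD = 500.00   # hypothetical policy: anything cheaper is auto-approved
    AUDIT_LOG = []                # stand-in for an ITIL-style change record

    def provision(req: Request, approved_by: Optional[str] = None) -> str:
        """Single entry point for every provider; the broker handles approval and dispatch."""
        if req.monthly_cost >= APPROVAL_THRESHOLD and approved_by is None:
            AUDIT_LOG.append(("pending-approval", req))
            return "request routed for approval"
        AUDIT_LOG.append(("provisioned", req, approved_by))
        return f"provisioning a {req.size} instance on {req.provider}"

    print(provision(Request("jdixon", "aws", "small", 120.00)))          # auto-approved
    print(provision(Request("jdixon", "terremark", "xlarge", 950.00)))   # needs an approver

Whichever provider is chosen, the request flows through the same gate and lands in the same audit trail, which is the point of the aggregation story.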

Arbitrage. Robin Meehan has a follow-on post that alludes to cloud arbitrage and workload migration. Cloud arbitrage is somewhat science fiction at this time, but let’s look forward to the not-too-distant future.

First, what are arbitrage and cloud arbitrage? NIST describes it as an environment in which the CSB has the flexibility to choose, on the customer’s behalf, where best to run the customer’s workload. In theory, the CSB would always be on the lookout for a beneficial arrangement, automatically migrate the workload, and likely capture the financial benefit of doing so. This is a little bit like currency arbitrage, where a financial institution looks for discrepancies in the market for various currencies and makes various transactions to come up with a beneficial situation. If you’ve ever seen the late-night infomercials for forex.com, don’t believe the easy-money hype. You need vast sums of money and perfect market information (e.g., you’re pretty much a bank) to play in that game.
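To make the analogy a little more concrete, the core decision a broker would have to automate reduces to a calculation like the toy one below: move a workload only when the expected savings over some horizon outweigh the one-time cost (and disruption) of moving it. The figures and the simple straight-line model are invented for illustration.

    def should_migrate(current_monthly, candidate_monthly, migration_cost, horizon_months=12):
        """Toy run-time arbitrage check: is moving the workload worth it over the horizon?"""
        savings = (current_monthly - candidate_monthly) * horizon_months
        return savings > migration_cost

    # e.g. $2,000/month today, $1,600/month at another provider, $3,000 to migrate:
    print(should_migrate(2000, 1600, 3000))   # True over 12 months ($4,800 saved vs. $3,000 spent)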

So, cloud arbitrage and “just plain currency arbitrage” are really only similar when it comes to identifying a good idea. This is where we break cloud arbitrage down into three areas:

  • Run-time arbitrage
  • Deployment-time arbitrage
  • Plan-time arbitrage

In my next post, I will break down cloud arbitrage as well as go over some specific points Robin makes in his posts and offer my opinions on them.

 

To learn more about transforming your IT department into a broker of IT services, download this ebook.

 

 

The Big Shift: From Cloud Skeptics & Magic Pills to ITaaS Nirvana

By Ron Dupler, CEO GreenPages Technology Solutions

Over the last 4-6 quarters, we have seen a significant market evolution, with our customers and the overall market moving from theorizing about cloud computing to defining strategies and plans to reap the benefits of cloud computing solutions and implement hybrid cloud models. In a short period of time we’ve seen IT thought leaders move from debating the reality and importance of cloud computing, to trying to understand how to most effectively grasp the benefits of cloud computing to improve organizational efficiency, velocity, and line of business empowerment. Today, we see the leading edge of the market aggressively rationalizing their application architectures and driving to hybrid cloud computing models.

Internally, we call this phenomenon The Big Shift. Let’s discuss what we know about The Big Shift. First, for all of the cloud skeptics reading this: it is an undeniable fact that corporate application workloads are moving from customer-owned architectures to public cloud computing platforms. RW Baird released an interesting report in Q4 of 2013 that included the following observations:

  • Corporate workloads are moving to the public cloud.
  • Much of the IT industry has been asleep at the wheel as Big Shift momentum has accelerated, because public cloud spending still represents only a small portion of overall IT spend.
  • Traditional IT spending is growing in the low single digits; 2-3% per year is a good approximation.
  • Cloud spending is growing at 40% plus per year.
  • What we call The Big Shift is accelerating and is going to have a tremendous impact on the traditional IT industry in the coming years. For every $1.00 increase in public cloud spending, there is a corresponding $3.00-$4.00 decrease in customer-owned IT spend.

There are some other things we know about The Big Shift:

The Big Shift is disrupting old industry paradigms and governance models. We see market evidence of this in traditional IT industry powerhouses like HP and Dell struggling to adapt, reinvent themselves, and maintain relevance and dominance in the new ITaaS era. We even saw perennial powerhouse Cisco lower its five-year growth forecast during last calendar Q4 due to the forces at play in the market. In short, the Big Shift is driving disruption throughout the entire IT supply chain. Companies tied to the traditional, customer-owned IT world are finding themselves under financial pressure and struggling to adapt. Born-in-the-cloud companies like Amazon are seeing tremendous and accelerating growth as the market embraces ITaaS.

In corporate America, the Big Shift is causing inertia as corporate IT leaders and their staffs reassess their IT strategies and strive to determine how best to execute their IT initiatives in the context of the tremendous market change going on around them. We see many clients who understand the need to drive to an ITaaS model and embrace hybrid cloud architectures but do not know how best to attack that challenge and prepare to manage in a hybrid cloud world. This lack of clarity is causing delays in decision making and stalling important IT initiatives.

Let’s discuss cloud for a bit. Cloud computing is a big topic that elicits emotional reactions. Cloud-speak is pervasive in our industry. By this point, the vast majority of your IT partners and vendors are couching their solutions as cloud, or as-a-service, offerings. Some folks in the industry are bold enough to tell you that they have the magic cloud pill that will lead you to ITaaS nirvana. Because of this, many IT professionals I speak with are sick of talking about cloud and shy away from the topic. My belief is that this avoidance is counterproductive and driven by cloud pervasiveness, a lack of precision and clarity when discussing cloud, and the change pressure the cloud revolution is imposing on all professional technologists. The age-old mandate to embrace change or die has never been more relevant. Therefore, we feel it is imperative to tackle the cloud discussion head on.

Download our free whitepaper “Cloud Management, Now”

Let me take a stab at clarifying the cloud discussion. Figure 1 below represents the Big Shift. As noted above, it is undeniable that workloads are shifting from private, customer-owned IT architectures to public, customer-rented platforms, i.e. the public cloud. We see three vectors of change in the industry that are defining the cloud revolution.

Cloud Change Vectors

The first vector is the modernization of legacy, customer-owned architectures. The dominant theme here over the past 5-7 years has been the virtualization of the compute layer. The dominant player during this wave of transformation has been VMware. The first wave of virtualization has slowed in the past 4-6 quarters as the compute virtualization market has matured and the vast majority of x86 workloads have been virtualized. A second wave is just forming that will be every bit as powerful and important as the first. This wave is represented by new, advanced forms of virtualization and the continued abstraction of more complex components of traditional IT infrastructure: networking, storage, and ultimately entire datacenters, as we move to a world of software-defined datacenters (SDDC) in the coming years.

The second vector of change in the cloud era involves deploying automation, orchestration, and service catalogues to enable private cloud computing environments for internal users and lines of business. Private cloud environments are the industry and corporate IT’s reaction to the public cloud providers’ ability to provide faster, cheaper, better service levels to corporate end users and lines of business. In short, the private cloud change vector is driven by the fact that internal IT now has competition. Their end users and lines of business, development teams in particular, have new service level expectations based on their consumer experiences and their ability to get fast, cheap, commodity compute from the likes of Amazon. To compete, corporate IT staffs must enable self-service functionality for their lines of business and development teams by deploying advanced management tools that provide automation, orchestration, and service catalogue functionality.

The third vector of change in the cloud era involves tying the inevitable blend of private, customer-owned architectures together with the public cloud platforms in use today at most companies. The result is a true hybrid cloud architectural model that can be managed, preserving the still-valid command-and-control mandates of traditional corporate IT and balancing those mandates with the end-user empowerment and velocity expected in today’s cloud world.

In the context of these three change vectors, we see several approaches within our customer base. We see some customers taking a “boil the ocean” approach and striving to rationalize their entire application portfolios to determine best execution venues and define a path to a true hybrid cloud architecture. We see other customers taking a much more cautious approach and leveraging cloud-based point solutions like desktop and disaster recovery as-a-service to solve old business problems in new ways. Both approaches are valid and depend on use cases, budgets, and philosophical approach (aggressive, leading-edge versus conservative, follow-the-market thinking).

GreenPages’ business strategy in the context of the ITaaS and cloud revolution is simple. We have built an organization that has the people, process, and technologies to provide expert strategic guidance and proven cloud-era solutions for our clients through a historic inflection point in the way that information technology is delivered to corporate end users and lines of business. Our cloud management as a service offering (CMaaS) provides a technology platform that helps customers integrate the disparate management tools deployed in their environments and federate alerts through an enterprise command center approach that gives a singular view into physical, virtual, and public cloud workloads. CMaaS also provides cloud service brokerage and governance capabilities, allowing our customers to view price-performance analytics across private and public cloud environments, design service models and view the related bills of material, and view and consolidate billings across multiple public cloud providers. What are your thoughts on the Big Shift? How is your organization addressing the changes in the IT landscape?

The PaaS Market as We Know It Will Not Die Off

I’ve been hearing a lot about Platform as a Service (PaaS) lately as part of the broader discussion of cloud computing from both customers and in articles across the web. In this post, I’ll describe PaaS, discuss a recent article that came out on the subject, and take a shot at sorting out IaaS, PaaS, and SaaS.

What is PaaS?

First, a quick trip down memory lane. As an intern in college, one of my tours of duty was through the manufacturing systems department at an automaker. I came to work the first day to find a modest desktop computer loaded with all of the applications I needed to look busy, and a nicely printed sheet with logins to various development systems. My supervisor called the play: “I tell you what I want, you code it up, I’ll take a look at it, and move it to test if it smells ok.” Ten other ambitious interns and I were more than happy to spend the summer with what the HR guy called “javaweb.” The next three months went something like this:

Part I: Set up the environment…

  1. SSH to abcweb01dev.company.com, head over to /opt/httpd/conf/httpd.conf, and configure AJP to point to abcapp01dev and abcapp02dev.company.com
  2. SSH to abcapp01dev.company.com, reinstall the Java SDK to the right version, install the proper database JARs, and configure the JDBC connection pool in /opt/tomcat/conf/context.xml
  3. SSH to abcdb01dev.company.com, create a database user, and grant rights so the app server can talk to the database
  4. Write something simple to test everything out
  5. Debug the environment to make sure everything works

Part II: THEN start coding…

  1. SSH to abcweb01dev.company.com, head over to /var/www/html and work on my HTML login page for starters, other things down the road
  2. SSH to devapp01dev.company.com, head over to /opt/tomcat/webapps/jpdwebapp/servlet, and code up my Java servlet to process my logins
  3. Open another window, login to abcweb01dev and tail -f /var/www/access_log to see new connections being made to the web server
  4. Open another window, login to abcapp01dev and tail -f /opt/tomcat/logs/catalina.out to see debug output from my servlet
  5. Open another window, login to abcdevapp01 and just keep /opt/tomcat/conf/context.xml open
  6. Open another window, login to abcdevapp01 and /opt/tomcat/bin/shutdown.sh; sleep 5; /opt/tomcat/bin/startup.sh (every time I make a change to the servlet)

(Host names and directory names have been changed to protect the innocent)

Setting up the environment was a little frustrating. And I knew that there was more to the story; some basic work, call it Part 0, to get some equipment in the datacenter, the OS installed, and IP addresses assigned. Part I, setting up the environment, is the work you would do to set up a PaaS platform. As a developer, the work in Part I was to enable me and my department to do the job in Part II – and we had a job to do – to get information to the guys in the plants who were actually manufacturing product!

 

So, here’s a rundown:

Part 0: servers, operating systems, patches, IPs… IaaS

Part I: middleware, configuration, basic testing… PaaS

Part II: application development

So, to me, PaaS is all about using the bits and pieces provided by IaaS, configuring them in a usable platform, delivering that platform to a developer so that they can deliver software to the business. And, hopefully the business is better off because of our software. In this case, our software helped the assembly plant identify and reduce “in-system damage” to vehicles – damage to vehicles that happens as a result of the manufacturing process.

Is the PaaS market as we know it dead?

I’ve read articles predicting the demise of PaaS altogether and others just asking the question about its future. There was a recent Networkworld article entitled “Is the PaaS market as we know it dying?” that discussed the subject. The article makes three main points, referring to 451 Research, Gartner, and other sources.

  1. PaaS features are being swallowed up by IaaS providers
  2. The PaaS market has settled down while the IaaS and SaaS markets have exploded
  3. Pure-play PaaS providers may be squeezed from the market by IaaS and SaaS

 

I agree with point #1. The evidence is in Amazon Web Services features like Auto Scaling, RDS, SQS, etc. These are fantastic features, but interfacing with them locks developers into using AWS as their single IaaS provider. The IaaS market is still very active, and I think there is a lot to come even though AWS is ahead of other providers at this point. IaaS is a commodity, and embedding specialized (read: PaaS) features in an otherwise IaaS system is a tool to get customers to stick around.

I disagree with point #2. The PaaS market has not settled down – it hasn’t even started yet! The spotlight has been on IaaS and SaaS because these things are relatively simple to understand, considering the recent boom in server virtualization. SaaS also used to be known as something that was provided by ASPs (Application Service Providers), so many people are already familiar with this. I think PaaS and the concepts are still finding their place.

I also disagree with point #3; the time and opportunity for pure-play PaaS providers is now. IaaS is getting sorted out, and it is clearly a commodity item. As we highlighted earlier, solutions from PaaS providers can ride on top of IaaS. I think that PaaS will be the key to application portability amongst different IaaS providers – kind of like Java: write once, run on any JVM (kind of). As you might know, portability is one of NIST’s key characteristics of cloud computing.

Portability is key. I think PaaS will remain its own concept apart from IaaS and SaaS and that we’ll see some emergence of PaaS in 2014. Why? PaaS is the key to portable applications: once an application is written to a PaaS platform, it can be deployed on different IaaS platforms. It’s also important to note that AWS is almost always associated with IaaS, but they have started to look a lot like a PaaS provider (I touched on this in a blog earlier this month). An application written to use AWS features like Auto Scaling is great, but not very portable. Lastly, the PaaS market is ripe for innovation. Barriers to entry are low, as is the required startup capital (there is no need to build a datacenter to offer a useful PaaS platform).

This is just my opinion on PaaS — I think the next few years will see a growing interest in PaaS, possibly even over IaaS. I’m interested in hearing what you think about PaaS: feel free to leave me a comment here, find me on Twitter at @dixonjp90, or reach out to us at socialmedia@greenpages.com.

To hear more from John, download his whitepaper on hybrid cloud computing or his ebook on the evolution of the corporate IT department!

 

 

5 Cloud Predictions for 2014

By John Dixon, LogicsOne

 

Here are my 5 Cloud Predictions for 2014. As always, leave a comment below and let me know what you think!

1. IaaS prices will drop by at least 20%

Amazon has continued to reduce its pricing since it first launched its cloud services back in 2006. In February of last year, Amazon dropped its prices for the 25th time. By April, prices had dropped for the 30th time, and by the summer the count was up to 37. Furthermore, there was a 37% drop in hourly costs for dedicated on-demand instances. Microsoft has announced that it will follow AWS’s lead with regard to price cuts. I expect this trend to continue in 2014 and likely 2015. I highlight some of these price changes, and the impact they will have on the market as more organizations embrace the public cloud, in more detail in my eBook.

2. We’ll see signs of the shift to PaaS

Amazon is already starting to look more like a PaaS provider than an IaaS provider. Just consider pre-packaged, pre-engineered features like Auto Scaling, CloudWatch, SQS, and RDS, among other services. An application hosted with AWS that uses all of these features looks more like an AWS application and less like a cloud application. Using proprietary features is very convenient, but don’t forget how application portability is impacted. I expect continued innovation in the PaaS market with new providers and technology, while downward price pressure in the IaaS market remains high. Could AWS (focusing on PaaS innovation) one day source its underlying infrastructure from a pure IaaS provider? This is my prediction for the long term: large telecoms like AT&T, Verizon, BT, et al. will eventually own the IaaS market, while Amazon, Google, and Microsoft will focus on PaaS innovation and use infrastructure provided by those telecoms. This of course leaves room for startup, niche PaaS providers to build something innovative and leverage quality infrastructure delivered from the telecoms. This is already happening with smaller PaaS providers. Look for signs of this continuing in 2014.

3. “The cloud” will not be regulated

Recently, there have been rumblings about regulating “the cloud,” especially in Europe, and claims that European clouds are safer than American clouds. If we stick with the concept that cloud computing is just another way of running IT (I call it the supply chain for IT service delivery), then the same old data classification and security rules apply. Only now, if you use cloud computing concepts, the need to classify and secure your data appropriately becomes more important. An attempt to regulate cloud computing would certainly have far-reaching economic impacts. This is one to watch, but I don’t expect any legislative action here in 2014.

4. More organizations will look to cloud as enabling DevOps

It’s relatively easy for developers to head out to the cloud, procure needed infrastructure, and get to work quickly. When developers behave like this, they not only write code and test new products, but they become the administrators of the platforms they own (all the way from underlying code to patching the OS) — development and operations come together. This becomes a bit stickier as things move to production, but the same concept can work (see prediction #5).

5. More organizations will be increasingly interested in governance as they build a DevOps culture

As developers can quickly bypass traditional procurement processes and controls, new governance concepts will be needed. Notice how I wrote “concepts” and not “controls.” Part of the new role of the IT department is to stay a step ahead of these movements, and offer developers new ways to govern their own platforms. For example, a real time chart showing used vs. budgeted resources will influence a department’s behavior much more effectively than a cold process that ends with “You’re over budget, you need to get approval from an SVP (expected wait time: 2-8 weeks).”

DevOps CIO Dashboard

Service Owner Dashboard

The numbers pictured are fictitious. With the concept of Service Owners, the owner of collaboration services can get a view of the applications and systems that provide the service. The owner can then see how VoIP spending is running a little above the others, and drill down to see where resources are being spent (on people, processes, or technology). Different ITBM applications display these charts differently, but the premise is the same – real-time visibility into spend. With cloud usage in general gaining steam, it is now possible to adjust the resources allocated to these services. With this type of information available to developers, it is possible to take proactive steps to avoid compromising the budget allocated to a particular application or service. By the same token, opportunities to make informed investments in certain areas become exposed with this information.
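Here is a minimal sketch of the “used vs. budgeted” idea behind those charts: compute each service’s burn against its allocation and flag the ones running ahead of pace, so the service owner sees the problem before a cold process does. The service names and dollar figures are fictitious, just like the numbers in the dashboards above.

    budgets = {"VoIP": 40_000, "Email": 25_000, "WebEx": 15_000}        # monthly allocations (fictitious)
    month_to_date = {"VoIP": 31_000, "Email": 12_500, "WebEx": 6_000}   # spend so far (fictitious)

    def burn_report(budgets, spend, pct_of_month_elapsed):
        """Compare actual spend to a naive straight-line expectation for each service."""
        report = {}
        for service, budget in budgets.items():
            expected = budget * pct_of_month_elapsed
            actual = spend.get(service, 0)
            report[service] = {"actual": actual, "expected": round(expected), "over_pace": actual > expected}
        return report

    # Two-thirds of the way through the month, VoIP is running hot:
    for service, row in burn_report(budgets, month_to_date, 2 / 3).items():
        flag = "OVER PACE" if row["over_pace"] else "ok"
        print(f"{service:6} {row['actual']:>7} vs {row['expected']:>7} expected  [{flag}]")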

So there you have it, my 2014 cloud predictions. What other predictions do you have?

To hear more from John, download his eBook “The Evolution of Your Corporate IT Department” or his whitepaper “Cloud Management, Now”

 

 

Cloud Management, Business Continuity & Other 2013 Accomplishments

By Matt Mock, IT Director

It was a very busy year at GreenPages for our internal IT department. With 2013 coming to a close, I wanted to highlight some of the major projects we worked on over the course of the year. The four biggest projects we tackled were using a cloud management solution, improving our business continuity plan, moving our datacenter, and creating and implementing a BYOD policy.

Cloud Management as a Service

GreenPages now offers a Cloud Management as a Service (CMaaS) solution to our clients. We implemented the solution internally late last year, but really started utilizing it as a customer would this year by increasing what was being monitored and managed. We decided to put Exchange under the “Fully Managed” package of CMaaS. Exchange requires a lot of attention and effort. Instead of hiring a full-time Exchange admin, we were able to offload that piece with CMaaS, as our Managed Services team does all the health checks to make sure any new configuration changes are correct. This resulted in considerable cost savings.

Having access to the team 24/7 is a colossal luxury. Before using CMaaS, if an issue popped up at 3 in the morning we would find out about it the next morning, which would require us to try to fix the problem during business hours. I don’t think I need to explain to anyone the hassle of trying to fix an issue with frustrated coworkers who are unable to do their jobs. If an issue arises now in the middle of the night, the problem has already been fixed before anyone shows up to start working. The Managed Services team also researches and remediates bugs that come up. This happened to us when we ran into some issues with Apple iOS calendaring: the Managed Services team did the research to determine the cause and went in and fixed the problem. If my team had tried to do this, it would have taken us 2-3 days of wasted time. Instead, we could focus on some of our other strategic projects.

In fact, we are holding a webinar on December 19th that will cover strategies and benefits of being the ‘first-to-know,’ and we will also provide a demo of the CMaaS Enterprise Command Center. We also went live with fully automated patching, which requires zero intervention from my team. Furthermore, we leveraged CMaaS to allow us to spin up a fully managed Linux environment. It’s safe to say that if we hadn’t implemented CMaaS, we would not have been able to accomplish all of our strategic goals for this year.

{Download this free whitepaper to learn more about how organizations can revolutionize the way they manage hybrid cloud environments}

Business Continuity Plan

We also determined that we needed to update our disaster recovery plan to a true, robust business continuity plan. A main driver of this was our more diverse office model. Not only were more people working remotely as our workforce expanded, but we now have office locations up and down the east coast in Kittery, Boston, Attleboro, New York City, Atlanta, and Tampa. We needed to ensure that we could continue to provide top-quality service to our customers if an event were to occur. My team took a careful look at our then-current infrastructure setup. After examining our policies and plans, we generated new ones around the optimal outcome we wanted and then adjusted the infrastructure to match. A large part of this included changing providers for our data and voice, which included moving our datacenter.

Datacenter Move

In 2013 we wanted to have more robust datacenter facilities. Ultimately, we were able to get into an extremely redundant and secure datacenter at the Markley Group in Boston that provided us with cost savings. Furthermore, Markley is also a large carrier hotel, which gives us additional savings on circuit costs. With this move we’re able to further our capabilities of delivering to our customers 24/7. Another benefit our new datacenter offered was excess office space, so if there ever were an event at one of our GreenPages locations, we would have a place to send people to work. I recently wrote a post which describes the datacenter move in more detail.

BYOD Policy

As 2013 ends, we are finishing our first full year with our BYOD policy. We are taking this time to look back and see where there were any issues with the policies or procedures and adjusting for the next year. Our plan is to ensure that year two is even more streamlined. I answered questions in a recent Q & A explaining our BYOD initiative in more detail.

I’m pretty happy looking back at the work we accomplished in 2013. As with any year, there were bumps along the way and things we didn’t get to that we wanted to. All in all though, we accomplished some very strategic projects that have set us up for success in the future. I think that we will start out 2014 with increased employee satisfaction, increased productivity of our IT department, and of course noticeable cost savings. Here’s to a successful 2014!

Is your IT team the first-to-know when an IT outage happens? Or, do you find out about it from your end users? Is your expert IT staff stretched thin doing first-level incident support? Could they be working on strategic IT projects that generate revenue? Register for our upcoming webinar to learn more!

 

Why Automate? What to Automate? How to Automate?

By John Dixon, Consulting Architect

Automation is extremely beneficial to organizations. However, the questions often come up around why to automate, what to automate, and how to automate.

Why automate?

There are several key benefits surrounding automation. They include:

  • Saving time
  • Employees can be retrained to focus on other (hopefully more strategic) tasks
  • Removing human intervention reduces errors
  • Troubleshooting and support are improved when everything is deployed the same way

What to automate?

Organizations should always start with the voice of the customer (VoC). IT departments need to factor in what the end user wants and what the end user expects in order to improve their experience. If you can’t trace something you’re automating back to an improved customer experience, that’s usually a good warning sign that you should not be automating it. In addition, you need to be able to trace back how automation has provided a benefit to the organization. The benefit should always be measurable and always financial.

What are companies automating?

Request management is the hot one because it’s a major component of cloud computing. This includes service catalogues and self-service portals. Providing a self-service portal, sending the request for approval based on the dollar amount requested, and fulfilling the order through one or more systems is something that is commonly automated today. My advice here is to automate tasks through a general-purpose orchestrator tool (such as CA Process Automation or similar tools) so that automated jobs can be managed from a single console, instead of stitching together disparate systems that call each other in a “rat’s nest” of automation. The general-purpose orchestrator also allows for easier troubleshooting when an automated task does not complete successfully.
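The “single console” point can be illustrated with a very small sketch. The orchestrator below is a stand-in, not CA Process Automation or any real product: the idea is simply that every automated task runs through one chokepoint that records what ran, when, and whether it succeeded, so troubleshooting starts from one job log instead of a rat’s nest of point-to-point integrations.

    import datetime
    import traceback

    JOB_LOG = []   # stand-in for the orchestrator's central job history

    def run_job(name, task, **kwargs):
        """Run any automated task through one place and record the outcome."""
        entry = {"job": name, "started": datetime.datetime.now(), "status": "running"}
        JOB_LOG.append(entry)
        try:
            entry["result"] = task(**kwargs)
            entry["status"] = "succeeded"
        except Exception:
            entry["status"] = "failed"
            entry["error"] = traceback.format_exc()
        return entry

    # Any fulfillment step (provisioning a VM, opening a ticket) is just a callable:
    def fulfill_vm_request(size):
        return f"provisioned a {size} VM"

    run_job("self-service VM request", fulfill_vm_request, size="small")
    for e in JOB_LOG:
        print(e["job"], "->", e["status"])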

How to automate?

There are some things to consider when sitting down to automate a task, or even determining the best things to automate. Here are a few key points:

  1. Start with the VoC, or Voice of the Customer, and work backwards to identify the systems that are needed to automate a particular task. For example, maybe the customer is the Human Resources department, and they want to automate the onboarding of a new employee. The automation may have to set up user accounts, order a new cell phone, order a new laptop, and schedule the new employee on their manager’s calendar for their first day of work. Map out the systems that are required to accomplish this, and integrate those – and no more. You may find that some parts of the procedure are already automated; perhaps your phone provider already has an interface to programmatically request new equipment. Take full advantage of these components.
  2. Don’t automate things that you can’t trace back to a benefit for the organization. Just because you can automate something doesn’t mean that you should. Again, use the voice of the customer and user stories here. A common user story is structured as follows:
    1. “As a [role],
    2. I want to [get something done]
    3. So that I can [benefit in the following way]”
  3. Start small and work upwards to automate more and more complex tasks. Remember the HR onboarding procedure in point #1? I wouldn’t suggest beginning your automation journey there. Pick out one thing to automate from a larger story, and get it working properly. Maybe you begin by automating the scheduling of an appointment in Outlook or your calendaring system, or creating a user in Active Directory. Those pieces become components in the HR onboarding story, but perhaps other stories as well (a minimal sketch of this follows the list).
  4. Use a general purpose orchestrator instead of stitching together different systems. As in point #3, using an orchestrator will allow you to build reusable components that are useful to automate different tasks. A general purpose orchestrator also allows for easier troubleshooting when things go wrong, tracking of automation jobs in the environment, and more advanced conditional logic. Troubleshooting automation any other way can be very difficult.
  5. You’ll need someone with software development experience. Some automation packages claim that even non-developers can build robust automation with “no coding required.” In some cases, that may be true. However, the experience that a developer brings to the table is an absolute must have when automating complex tasks like the HR onboarding example in point #1.
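Here is the minimal sketch promised above, under stated assumptions: each step is a small, independently testable function (the directory, procurement, and calendaring calls are stubs, since the real integrations depend entirely on your environment), and the larger HR onboarding story from point #1 simply composes them, as point #3 suggests.

    def create_ad_user(name):
        # Stub: in a real environment this would call your directory service's API.
        return f"created account for {name}"

    def order_laptop(model):
        # Stub: perhaps your hardware supplier already exposes an ordering interface; reuse it.
        return f"ordered laptop: {model}"

    def schedule_first_day_meeting(employee, manager):
        # Stub: calendaring integration (Outlook, Exchange, etc.) goes here.
        return f"scheduled {employee} with {manager} on day one"

    def onboard_employee(name, manager, laptop_model="standard"):
        """The big story is just the small, already-tested pieces composed in order."""
        return [
            create_ad_user(name),
            order_laptop(laptop_model),
            schedule_first_day_meeting(name, manager),
        ]

    print(onboard_employee("New Hire", "Their Manager"))

Each stub is also reusable on its own, which is the payoff of starting small: creating a directory account or scheduling a meeting will show up again in plenty of other user stories.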

 

What has your organization automated? How have the results been?

 

Cloud Spending Will Increase 1 Billion% by 2014

By Ben Stephenson, Journey to the Cloud

It seems like every week a new study comes out analyzing cloud computing growth. Whether it’s that public cloud services spending will reach $47.4B in 2013, that global SaaS spending is projected to grow from $13.5B in 2011 to $32.8B in 2016, that the public cloud services market is forecast to grow 18.5 percent in 2013, or that cloud spending at Dunder Mifflin will increase 200% by 2020, the indication is that cloud adoption and spending are on the rise. But how is that relevant to you?

Does it matter to the everyday CIO that cloud spending at midsized companies west of the Mississippi is going to increase by 15% over the next 3 years? The relevant question isn’t how much will cloud adoption and spending increase, but why will it do so? It’s the “why” that matters to the business. If you understand the why, it becomes easier to put context around the statistics coming out of these studies. It comes down to a shift in the industry – a shift in the economics of how a modern day business operates. This shift revolves around the way IT services are being delivered.

To figure out where the industry is going, and why spending and adoption are increasing, you need to look at where the industry has come from. The shift from on-premise IT to public cloud began with SaaS-based technologies. Companies like Salesforce.com realized that organizations were wasting a lot of time and money buying and deploying hardware for their CRM solutions. Why not use the internet to allow organizations to pay a subscription fee instead of owning their entire infrastructure? This, however, was not true cloud computing. Next came IaaS with Amazon’s EC2 initiative. Essentially, Amazon realized it had excess compute capacity and decided to rent it out to people who needed the extra space. IaaS put an enormous amount of pressure on corporate IT because app dev teams no longer had to wait weeks or months to test and deploy environments. Instead, they could start up right away and become much more efficient. Finally, PaaS came about with initiatives such as Microsoft Azure.

{Free ebook: The Evolution of Your Corporate IT Department}

The old IT paradigm, or a private cloud environment, consists of organizations buying hardware and software and keeping it in their datacenter behind their own firewalls. While a private cloud environment doesn’t need to be fully virtualized, it does need to be automated, and very few organizations are actually operating in a true private cloud environment. Ideally, a true private cloud environment is supposed to let internal IT compete with public cloud providers by providing a similar amount of speed and agility to what a public cloud allows. While the industry is starting to shift towards public cloud, the private cloud is not going away. Public cloud will not be the only way to operate IT, or even the majority of the way, for a long time. This brings us to the hybrid cloud computing model, the direct result of this shift. Hybrid cloud is the combination of private and public cloud architectures. It’s about the ability to seamlessly transition workloads between private and public, or, in other words, moving on-premise workloads to rented platforms where you don’t own anything in order to leverage services.

So why are companies shifting towards a hybrid cloud model? It all comes down to velocity, agility, efficiency, and elasticity. IT delivery methodology is no longer a technology discussion, but, rather, it’s become a business discussion. CIOs and CFOs are starting to scratch their heads wondering why so much money is being put towards purchasing hardware and software when all they are reading about is cloud this and cloud that.

{Free Whitepaper: Revolutionizing the Way Organizations Manage Hybrid Cloud Environments}

The spending and adoption rates of cloud computing are increasing because the shift in the industry is no longer just talk – it’s real and it’s here now. The bottom line? We’re past hypothetical discussions. There is a major shift in the industry that business decision makers need to be taking seriously. If you’re not modernizing your IT operations by moving towards a hybrid cloud model, you’re going to be missing out on the agility and cost savings that can give your organization a substantial competitive advantage.  This is why cloud adoption and spending are on the rise. This is why you’re seeing a new study every month on the topic.

Moving Our Datacenter: An IT Director’s Take

An Interview with Matt Mock, IT Director, GreenPages Technology Solutions

Journey to the Cloud’s Ben Stephenson sat down with GreenPages’ IT Director Matt Mock to discuss GreenPages’ recent datacenter move.

Ben: Why did GreenPages decide to move its datacenter?

Matt: Our contract was up, so we started evaluating new facilities, looking for a robust, redundant facility to house our equipment. We needed a facility that met specific objectives around our business continuity plan. In addition, we were also looking for cost savings.

Ben: Where did you move the datacenter to and from?

Matt: Geographically, we stayed in a close area. We moved it from Charlestown, MA a couple of miles down the road into downtown Boston. Staying within a close area certainly made the physical move quicker and easier.

Ben: What were the benefits of moving the datacenter?

Matt: Ultimately, we were able to get into an extremely redundant and secure datacenter that provided us with cost savings. Furthermore, the datacenter is also a large carrier hotel which gives us additional savings on circuit costs. With this move we’re able to further our capabilities of delivering to our customers 24/7.

{Register for our upcoming webinar on 11/7 on key announcements from VMworld 2013}

Ben: Tell us about the process of the move. What had to happen ahead of time to ensure a smooth transition?

Matt: The most important parts were planning, testing, and communication. We put together an extremely detailed plan that broke out every phase of the move down to 15-minute increments. We devised teams for the specific phases, with a communication plan for each team. We also devised a backup emergency plan in the event that we hit any issues the night of the move.

Ben: What happened the night of the move?

Matt: The night of the move we leveraged the excellent facilities at Markley to be able to run a command center that was run by one of our project managers. In the room, we had multiple conference bridges to run the different work streams to ensure smooth and constant communication. We also utilized Huddle, our internal collaboration tool, to communicate as our internal systems were down during the move.

Ben: Anything else you had to factor in?

Matt: Absolutely. The same night of the move we were also changing both voice and data providers at three different locations, which added another layer of complexity. We had to work closely with our new providers to ensure a smooth transition. Because we have a 24/7 Managed Services division at GreenPages, we needed to continue to offer customers the same support during the move that we do on a day-to-day basis.

Ben: Did you experience unexpected events during the move? If so, what were they and how did you handle them?

Matt: With any complex IT project you’re going to experience unexpected events. A couple that we experienced were some hardware failures and unforeseen configuration issues. Fortunately, our detailed plan accounted for these issues, and we were able to address them with the teams on hand and remain on schedule.

Ben: You used an all GreenPages team to accomplish this, right?

Matt: Correct. We did not use any outside vendors for this move – all services were rendered by the GreenPages team. Last time we used outside providers and this time we had a much better experience. I’m in the unique position where I have access to an entire team of project managers and technical resources that made doing this possible. In fact, this is something we offer our customers (from consulting to project management to the actual move) so our team is very, very good at it.

Ben: What advice do you have for other IT Directors who are considering moving their datacenters?

Matt: Detailed planning and constant communication are critical: have a plan in place for every possible scenario, and have an emergency plan ready so that you’re not scrambling in the middle of the night to figure out how to address unforeseen issues.

Ben: Congratulations on the successful move. See you Monday after the Patriots crush your Steelers.

Would you like to learn more about how GreenPages can help you with your datacenter needs?

Moving Email to the Cloud Part 2

By Chris Chesley, Solutions Architect

My last blog post was part 1 of moving your email to the cloud with Office 365. Here’s the next installment in the series, in which I will be covering the three methods of authenticating your users for Office 365. This is a very important consideration and will have a large impact on your end users and their day-to-day activities.

The first method of authenticating your users into Office 365 is to do so directly.  This has no ties to your Active Directory.  The benefits here are that your users get mail, messages and SharePoint access regardless of your site’s online status.  The downside is that your users may have a different password than they use to get into their desktop/laptops and this can get very messy if you have a large number of users.

The second way of authenticating your users is full Active Directory integration. I will refer to this as the “Single Sign On” method. In this method, your Active Directory is the authoritative source of authentication for your users. Users log into their desktop/laptop and can access all of the Office 365 applications without typing their password again, which is convenient. You DO need a few servers running locally to make this happen. You need an Active Directory Federation Services (ADFS) server and an Azure Active Directory Sync server; both of these services are needed to sync your AD and user information to Office 365. The con of this method is that you need a redundant AD setup, because if it’s down your users are not going to be able to access mail or anything else in the cloud. You can do this by hosting a Domain Controller, and the other two systems I mentioned, in a cloud or at one of your other locations, if you have one.

The third option is what I will refer to as “Single Password.” In this setup, you install an Azure Active Directory Sync server in your environment but do not need an ADFS server. The Sync tool will hash your users’ passwords and send the hashes to Office 365. When a user tries to access any of the Office 365 services, they are asked to type in their password. The password is then hashed and compared to the stored hash, and they are let in if the two match. This does require users to type their password again, but it allows them to use their existing Active Directory password, and any time this password changes, it is synced to the cloud.
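As a toy illustration of the compare-the-hashes idea only (this is not how the actual Office 365 / Azure AD sync tooling implements password hash synchronization, and real systems use hardened, salted schemes rather than anything this simple), the flow is: derive a value from the password ahead of time and sync it, then at sign-in derive the same value from what the user typed and compare.

    import hashlib
    import hmac

    def derive(password: str, salt: bytes) -> bytes:
        # Toy derivation for illustration; real password hash sync uses purpose-built algorithms.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    # "Sync time": the on-premises side derives a value and sends it to the cloud service.
    salt = b"per-user-random-salt"              # illustrative placeholder, not a real practice
    synced_hash = derive("MyADPassword!", salt)

    # "Sign-in time": the service derives the same value from the typed password and compares.
    def sign_in(typed_password: str) -> bool:
        return hmac.compare_digest(derive(typed_password, salt), synced_hash)

    print(sign_in("MyADPassword!"))   # True: the same AD password works in the cloud
    print(sign_in("wrong-guess"))     # False

The practical upshot is the one described above: users still type a password at sign-in, but it is the same password they already use on-premises, and changes flow to the cloud the next time the sync runs.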

The choice of which method you use has a big impact on your users as well as how you manage them.  Knowing these choices and choosing one that meets your business goals will set you on the path of successfully moving your services to the cloud.

 

Download this free ebook on the evolution of the corporate IT department