- Google consolidated its cloud computing business into a single entity, with a pricing model intended to retain customers by enticing them to build ever cheaper and more complex software.
- Cisco announced it would spend $1 billion on a “cloud of clouds” project.
- Microsoft’s new CEO made his first big public appearance, offering Office for the Apple iPad, partly as a way to sell more of its cloud-based Office 365 product.
- Amazon Web Services announced the general release of its cloud-based desktop computing business, as well as a deal to offer cloud-based enterprise software tools to industries like healthcare and manufacturing.
Google & Amazon Cut Prices & Microsoft is Next. Why Not Take Advantage of Them All?
By Ben Stephenson, Journey to the Cloud
There’s been a lot of talk this week about price cuts from cloud providers. First, Google announced price reductions across most of its cloud services. In response, Amazon announced a round of price cuts as well, marking the 42nd time AWS has reduced prices since 2006. Microsoft Azure will most likely get in on the action too: last April, Microsoft pledged to match any price drops from AWS, and in early 2014 it did just that, lowering prices to match a reduction made by Amazon. TechCrunch has nice write-ups on the specifics of the Google and Amazon price reductions.
Obviously price cuts are beneficial to organizations using these platforms, but wouldn’t it make sense to take advantage of price cuts from multiple providers at the same time to maximize cost savings and performance? What if you moved different applications to different clouds – or even different parts of an application to different clouds?
Let’s say some of your database applications require high-end performance, and you’re willing to pay more for it. But if you use a more expensive provider exclusively, you may be overspending in areas that don’t require that level of performance. So, instead of running all your apps on the same provider, you could move, say, commodity web-based applications that don’t require as much performance to the cheapest provider. Keep in mind, too, that the best option for some applications may be to keep them on premise. This is only one example. John Dixon wrote a great ebook about the evolution of the corporate IT department that gives a more in-depth look at the “which app, which cloud” philosophy; I highly recommend downloading it.
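To make this concrete, here is a minimal sketch of a “which app, which cloud” placement decision. Every provider name, price, and performance tier below is a made-up assumption for illustration; a real decision would also weigh compliance, data gravity, and migration cost.

```python
# Hypothetical price/performance data -- illustrative numbers only.
PROVIDERS = {
    "provider_a": {"cost_per_hour": 0.45, "tier": "high"},
    "provider_b": {"cost_per_hour": 0.12, "tier": "standard"},
    "on_premise": {"cost_per_hour": 0.30, "tier": "high"},
}

APPS = [
    {"name": "oltp-database", "requires": "high"},
    {"name": "marketing-site", "requires": "standard"},
    {"name": "reporting-batch", "requires": "standard"},
]

def cheapest_venue(required_tier: str) -> str:
    """Pick the lowest-cost venue that meets the app's performance need.
    Assumes a high tier can also serve standard workloads."""
    candidates = [
        (p["cost_per_hour"], name)
        for name, p in PROVIDERS.items()
        if p["tier"] == required_tier or p["tier"] == "high"
    ]
    return min(candidates)[1]

for app in APPS:
    print(app["name"], "->", cheapest_venue(app["requires"]))
```

With these fictitious numbers, the database lands on the cheaper high-performance venue (on premise, in this case), while the commodity web apps go to the lowest-cost provider.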
So why don’t more companies split applications across multiple cloud providers? The answer is simple: it’s complex and painful to manage. Furthermore, price cuts can happen on the spur of the moment, so you need to be able to take advantage of them in real time to maximize savings.
This is where you need a management platform like GreenPages’ Cloud Management as a Service (CMaaS) Brokerage and Governance offering. CMaaS gives you the ability to match the right applications to the right cloud providers and compare the true cost of running your resources at a CSP before even placing an order. The platform eliminates cloud sourcing complexity with a central portal where business and IT users can quickly and easily aggregate, procure, and pay for cloud solutions. It answers the “which app, which cloud?” question across both internal private and public cloud environments.
Has your organization looked into spreading different applications across different clouds? What are your thoughts?
Download whitepaper: Cloud Management, Now
Are We All Cloud Service Brokers Now? Part II
By John Dixon, Consulting Architect
In my last post, I discussed Cloud Service Brokers and some of their benefits after reading a couple of articles from Robin Meehan (Article 1 here and Article 2 here). In this post, I will break down some of Robin’s points and explain why I agree or disagree with each.
At the end of my last post, I was breaking cloud arbitrage down into three areas (run-time, deployment-time, plan-time). Credit to Robin for the terms run-time and deployment-time arbitrage. I really like those terms, and I think they illuminate the conversation. Run-time cloud arbitrage is really science fiction right now – this is where the CSB moves running workloads around on the fly to find the best benefit for the customer. I haven’t seen any technology (yet) that does this across clouds. However, VMware does deployment-time and run-time arbitrage with vMotion and Distributed Resource Scheduling – albeit in a single virtual datacenter, with individual VMs, and with a single policy objective: to balance a cluster’s load across vSphere nodes. See Duncan Epping’s excellent write-up on DRS here. Even 10 years ago, this was not possible. 15 years ago, this was certainly science fiction. Now, it’s pretty common to have DRS enabled for all of your vSphere clusters.
A few of Robin’s points…
Point 1:
“The ability to migrate IT workloads dynamically (i.e. at run-time, not at deployment time) is something I sometimes see as a capability under the ‘cloud broker’ banner, but in my view it really just doesn’t make sense – at least not at the moment.”
I agree. Run-time cloud arbitrage and workload migration à la vMotion are not possible today in the cloud. Will they be possible within the next few years? Absolutely. I think this will first manifest itself in a VMware High Availability-like scenario. Again, see Duncan Epping’s fantastic deep-dive into HA. If cloud provider X drops off of the internet suddenly, then restart the resources and application at cloud provider Y (where cloud provider Y might even be your own datacenter). This is sometimes known as DR as a service, or DRaaS, and even now there are some DRaaS solutions coming onto the market.
Point 2:
“The rate of innovation in the IaaS/PaaS/DaaS market is such that most of the other vendors are playing catch-up with AWS, as AWS continue to differentiate themselves from the following pack. This shows no sign of slowing down over the next couple of years – so the only way a migrated workload is going to work across multiple cloud vendors is if it only relies on the lowest common denominator functionality across the vendors, which is typically basic storage, virtualised compute and connectivity.”
I also agree that the rate of innovation in the market for cloud computing is rapid as specialization sets in at an industrial level. This also means that downward price pressures are enormous for vendors in the cloud space, even today as vendors vie for market share. As switching costs decrease (e.g., portability of applications increases), prices for IaaS will decrease even more. Now, wouldn’t you, as a customer, like to take advantage of this market behavior? Take into consideration that CSBs aggregate providers, but they also aggregate customer demand. If you believe this interpretation of the market for IaaS, then you’ll want to position yourself to take advantage of it by planning portability for your applications. A CSB can help you do this.
Point 3:
“The bottom line is that if you are going to architect your applications so they can run on any cloud service provider, then you can’t easily use any of the good bits and hence your value in migrating to a cloud solution is diminished. Not ruined, just reduced.”
Disagree. To take advantage of market behavior, customers should look to avoid using proprietary features of IaaS platforms, because those features compromise portability. As we noted earlier, increased portability of applications means more flexibility to take advantage of the market behavior that leads to decreasing prices.
This is where perspective on cloud becomes really important. For example, GreenPages has a customer with a great use case for commodity IaaS. They may deploy ~800 machines in a cluster at AWS for only a matter of hours to run a simulation or solve a problem. After the result is read, these machines are completely destroyed—even the data. So, it makes no difference to this customer where they do this work. AWS happens to be the convenient choice right now. Next quarter, it may be Azure, who knows? I’m absolutely certain that this customer sees more benefit in avoiding the use of proprietary features (a.k.a. the “good bits” of cloud) of a cloud provider than in using them.
What is your perspective on cloud?
- A means to improve time to market and agility
- A way to transform capex into opex
- Simply a management paradigm – you can have cloud anywhere, even internally, as long as you have self-service and infinite resources
- An enabler for a new methodology like DevOps
- Simply a destination for applications
I think that a good perspective may include all of these things. Leave a comment and let me know your thoughts.
Interested in learning more? Download this free whitepaper ‘Cloud Management, Now!’
Are We All Cloud Service Brokers Now?
By John Dixon, Consulting Architect
Robin Meehan of Smart421 recently wrote a couple of great posts on cloud service brokers (CSBs) and the role that they play for consumers of cloud services. (http://smart421.wordpress.com/2014/02/24/were-mostly-all-cloud-services-brokers-now/ and http://smart421.wordpress.com/2014/02/25/cloud-brokerage-and-dynamic-it-workload-migration/). I’m going to write two blogs about the topic. The first will be a background on my views and interpretations around cloud service brokers. In the second post, I will break down some of Robin’s points and explain why I agree or disagree.
Essentially, a cloud broker offers consumers three key things that a single cloud provider does not (these are from the NIST definition of a Cloud Service Broker):
- Intermediation
- Aggregation
- Arbitrage (run-time, deployment-time, plan-time)
My interpretation of these is as follows. We’ll use Amazon Web Services as the example IaaS cloud provider and GreenPages as the example cloud broker:
Intermediation. As a cloud broker, GreenPages sits between you, the consumer, and AWS. GreenPages and other CSBs do this so they can add value to the core AWS offering. Why? Billing and chargeback is a great example. A bill from AWS includes line-item charges for EC2, S3, and whichever other services you used during the past month – so you would be able to see that EC2 charges for January were $12,502.90 in total. GreenPages takes this bill and processes it so that you get more granular information about your January spend. We would be able to show you:
- Spend per application
- Spend per environment (development, test, production)
- Spend per tier (web, application, database)
- Spend per resource (CPU, memory, storage, managed services)
- A comparison of January 2014 spend to December’s, or even January 2013’s
- An estimate of the spend for February 2014
So, going directly to AWS, you’d be able to answer a question like, “how much did I spend in total for compute in January?”
And, going through GreenPages as a cloud broker, you’d be able to answer a question like, “how much did the development environment for Application X cost in January, and how does that compare with the spend in December?”
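As a rough illustration of the kind of processing a broker does, here is a minimal sketch that slices a billing export by tags. The file name and the column names (app, env, tier, cost) are assumptions for illustration; real provider billing reports use different headers, and a production system would also normalize billing periods and handle untagged resources.

```python
import csv
from collections import defaultdict

# Assumed export format: one line item per row, tagged with app/env/tier.
# Column names here are hypothetical -- adapt them to your provider's report.
spend_by_app = defaultdict(float)
spend_by_env = defaultdict(float)
spend_by_tier = defaultdict(float)

with open("january_bill.csv", newline="") as f:
    for row in csv.DictReader(f):
        cost = float(row["cost"])
        spend_by_app[row["app"]] += cost
        spend_by_env[row["env"]] += cost    # development, test, production
        spend_by_tier[row["tier"]] += cost  # web, application, database

print("Spend per application:", dict(spend_by_app))
print("Spend per environment:", dict(spend_by_env))
print("Spend per tier:", dict(spend_by_tier))
```

Month-over-month comparison then amounts to running the same aggregation over December’s export and diffing the totals.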
I think you’d agree that it is easier to wrap governance around the spend information from a cloud service broker rather than directly from AWS. This is just one of the advantages of using a CSB in front of a cloud provider – even if you’re like many customers out there and choose to use only one provider.
Aggregation. As a CSB, GreenPages aggregates the offerings from many providers and provides a simple interface to provision resources to any of them. Whether you choose AWS, Terremark, Savvis, or even your internal vSphere environment, you’d use the same procedure to provision resources. On the provider side, CSBs also aggregate demand from consumers and are able to negotiate rates. Why is this important? A CSB can add value in three ways here:
1) By allowing you to compare the offerings of different providers – in terms of pricing, SLA guarantees, service credits, supported configurations, etc.
2) By placing a consistent approval framework in front of requests to any provider.
3) By using aggregated demand to negotiate special pricing and terms with providers – terms that may not be available to an individual consumer of cloud services
The approval framework is of course optional – if you wish, you could choose to allow any user to provision infrastructure to any provider. Either way, a CSB can establish a request management framework in front of “the cloud” and can, in turn, provide things like an audit trail of requests and approvals. Perhaps you want to raise an ITIL-style change whenever a cloud request is fulfilled? A CSB can integrate with existing systems like Remedy or ServiceNow for that.
Arbitrage. Robin Meehan has a follow-on post that alludes to cloud arbitrage and workload migration. Cloud arbitrage is somewhat science fiction at this time, but let’s look forward to the not-too-distant future.
First, what are arbitrage and cloud arbitrage? NIST describes cloud arbitrage as an arrangement where the CSB has the flexibility to choose, on the customer’s behalf, where to best run the customer’s workload. In theory, the CSB would always be on the lookout for a beneficial arrangement, automatically migrate the workload, and likely capture the financial benefit of doing so. This is a little bit like currency arbitrage, where a financial institution looks for discrepancies in the market for various currencies and makes transactions to come up with a beneficial situation. If you’ve ever seen the late-night infomercials for forex.com, don’t believe the easy-money hype. You need vast sums of money and perfect market information (e.g., you’re pretty much a bank) to play in that game.
So, cloud arbitrage and “just plain currency arbitrage” are really only similar when it comes to identifying a good opportunity. This is where we break cloud arbitrage down into three areas:
- Run-time arbitrage
- Deployment-time arbitrage
- Plan-time arbitrage
In my next post, I will break down cloud arbitrage as well as go over some specific points Robin makes in his posts and offer my opinions on them.
To learn more about transforming your IT Department to a broker of IT services download this ebook
The Big Shift: From Cloud Skeptics & Magic Pills to ITaaS Nirvana
By Ron Dupler, CEO GreenPages Technology Solutions
Over the last 4-6 quarters, we have seen a significant market evolution, with our customers and the overall market moving from theorizing about cloud computing to defining strategies and plans to reap the benefits of cloud computing solutions and implement hybrid cloud models. In a short period of time we’ve seen IT thought leaders move from debating the reality and importance of cloud computing, to trying to understand how to most effectively grasp the benefits of cloud computing to improve organizational efficiency, velocity, and line of business empowerment. Today, we see the leading edge of the market aggressively rationalizing their application architectures and driving to hybrid cloud computing models.
Internally, we call this phenomenon The Big Shift. Let’s discuss what we know about The Big Shift. First, for all of the cloud skeptics reading this: it is an undeniable fact that corporate application workloads are moving from customer-owned architectures to public cloud computing platforms. RW Baird released an interesting report in Q4 2013 that included the following observations:
- Corporate workloads are moving to the public cloud.
- Much of the IT industry has been asleep at the wheel as Big Shift momentum has accelerated, because public cloud spending still represents a small portion of overall IT spend.
- Traditional IT spending is growing in the low single digits; 2-3% per year is a good approximation.
- Cloud spending is growing at 40% plus per year.
- What we call The Big Shift is accelerating and is going to have a tremendous impact on the traditional IT industry in the coming years. For every $1.00 increase in public cloud spending, there is a corresponding $3.00-$4.00 decrease in customer-owned IT spend.
There are some other things we know about The Big Shift:
The Big Shift is disrupting old industry paradigms and governance models. We see market evidence of this in traditional IT industry powerhouses like HP and Dell struggling to adapt, reinvent themselves, and maintain relevance and dominance in the new ITaaS era. We even saw perennial powerhouse Cisco lower its 5-year growth forecast during last calendar Q4 due to the forces at play in the market. In short, the Big Shift is driving disruption throughout the entire IT supply chain. Companies tied to the traditional, customer-owned IT world are finding themselves under financial pressure and struggling to adapt. Born-in-the-cloud companies like Amazon are seeing tremendous and accelerating growth as the market embraces ITaaS.
In corporate America, the Big Shift is causing inertia as corporate IT leaders and their staffs reassess their IT strategies and strive to determine how best to execute their IT initiatives in the context of the tremendous market change going on around them. We see many clients who understand the need to drive to an ITaaS model and embrace hybrid cloud architectures but do not know how best to attack that challenge and prepare to manage in a hybrid cloud world. This lack of clarity is causing delays in decision making and stalling important IT initiatives.
Let’s discuss cloud for a bit. Cloud computing is a big topic that elicits emotional reactions. Cloud-speak is pervasive in our industry. By this point, the vast majority of your IT partners and vendors are couching their offerings as cloud, or as-a-service, solutions. Some folks in the industry are bold enough to tell you that they have the magic cloud pill that will lead you to ITaaS nirvana. Because of this, many IT professionals I speak with are sick of talking about cloud and shy away from the topic. My belief is that this avoidance is counterproductive, driven by cloud pervasiveness, a lack of precision and clarity when discussing cloud, and the change pressure the cloud revolution is imposing on all professional technologists. The age-old mandate to embrace change or die has never been more relevant. Therefore, we feel it is imperative to tackle the cloud discussion head on.
Download our free whitepaper “Cloud Management, Now“
Let me take a stab at clarifying the cloud discussion. Figure 1 below represents the Big Shift. As noted above, it is undeniable that workloads are shifting from private, customer-owned IT architectures to public, customer-rented platforms, i.e. the public cloud. We see three vectors of change in the industry that are defining the cloud revolution.
The first vector is the modernization of legacy, customer-owned architectures. The dominant theme here over the past 5-7 years has been the virtualization of the compute layer, and the dominant player during this wave of transformation has been VMware. The first wave of virtualization has slowed in the past 4-6 quarters as the compute virtualization market has matured and the vast majority of x86 workloads have been virtualized. A second wave is just forming that will be every bit as powerful and important as the first: new, advanced forms of virtualization and the continued abstraction of more complex components of traditional IT infrastructure – networking, storage, and ultimately entire datacenters – as we move to a world of the software-defined datacenter (SDDC) in the coming years.
The second vector of change in the cloud era involves deploying automation, orchestration, and service catalogues to enable private cloud computing environments for internal users and lines of business. Private cloud environments are the industry and corporate IT’s reaction to the public cloud providers’ ability to provide faster, cheaper, better service levels to corporate end users and lines of business. In short, the private cloud change vector is driven by the fact that internal IT now has competition. Their end users and lines of business, development teams in particular, have new service level expectations based on their consumer experiences and their ability to get fast, cheap, commodity compute from the likes of Amazon. To compete, corporate IT staffs must enable self-service functionality for their lines of business and development teams by deploying advanced management tools that provide automation, orchestration, and service catalogue functionality.
The third vector of change in the cloud era involves tying the inevitable blend of private, customer-owned architectures together with the public cloud platforms in use today at most companies. The result is a true hybrid cloud architectural model that can be managed, preserving the still valid command and control mandates of traditional corporate IT, and balancing those mandates with the end user empowerment and velocity expected in today’s cloud world.
In the context of these three change vectors, we see several approaches within our customer base. Some customers take a “boil the ocean” approach, striving to rationalize their entire application portfolios to determine best execution venues and define a path to a true hybrid cloud architecture. Others take a much more cautious approach, leveraging cloud-based point solutions like desktop and disaster recovery as-a-service to solve old business problems in new ways. Both approaches are valid; the right one depends on use cases, budgets, and philosophical approach (aggressive and leading-edge versus conservative, follow-the-market thinking).
GreenPages’ business strategy in the context of the ITaaS and cloud revolution is simple. We have built an organization with the people, processes, and technologies to provide expert strategic guidance and proven cloud-era solutions for our clients through a historical inflection point in the way information technology is delivered to corporate end users and lines of business. Our Cloud Management as a Service (CMaaS) offering provides a technology platform that helps customers integrate the disparate management tools deployed in their environments and federate alerts through an enterprise command center approach that gives a singular view into physical, virtual, and public cloud workloads. CMaaS also provides cloud service brokerage and governance capabilities, allowing our customers to view price-performance analytics across private and public cloud environments, design service models and view the related bills of material, and view and consolidate billings across multiple public cloud providers. What are your thoughts on the Big Shift? How is your organization addressing the changes in the IT landscape?
Infographic: Demystifying the Cloud
Presented By Telx Data Centers
Looking for more information on cloud computing? Download this free whitepaper on hybrid cloud management!
The PaaS Market as We Know it Will Not Die Off
I’ve been hearing a lot about Platform as a Service (PaaS) lately as part of the broader discussion of cloud computing, both from customers and in articles across the web. In this post, I’ll describe PaaS, discuss a recent article that came out on the subject, and take a shot at sorting out IaaS, PaaS, and SaaS.
What is PaaS?
First, a quick trip down memory lane. As an intern in college, one of my tours of duty was through the manufacturing systems department at an automaker. I came to work the first day to find a modest desktop computer loaded with all of the applications I needed to look busy, and a nicely printed sheet with logins to various development systems. My supervisor called the play: “I tell you what I want, you code it up, I’ll take a look at it, and move it to test if it smells ok.” Ten other ambitious interns and I were more than happy to spend the summer with what the HR guy called “javaweb.” The next three months went something like this:
Part I: Set up the environment…
- SSH to abcweb01dev.company.com, head over to /opt/httpd/conf/httpd.conf, configure AJP to point to abcapp01dev and abcapp02dev.company.com
- SSH to abcapp01dev.company.com, reinstall the Java SDK to the right version, install the proper database JARs, configure /opt/tomcat/conf/context.xml with the JDBC connection pool
- SSH to abcdb01dev.company.com, create a user and rights so the app server can talk to the database
- Write something simple to test everything out
- Debug the environment to make sure everything works
Part II: THEN start coding…
- SSH to abcweb01dev.company.com, head over to /var/www/html and work on my HTML login page for starters, other things down the road
- SSH to abcapp01dev.company.com, head over to /opt/tomcat/webapps/jpdwebapp/servlet, and code up my Java servlet to process my logins
- Open another window, login to abcweb01dev and tail -f /var/www/access_log to see new connections being made to the web server
- Open another window, login to abcapp01dev and tail -f /opt/tomcat/logs/catalina.out to see debug output from my servlet
- Open another window, login to abcapp01dev and just keep /opt/tomcat/conf/context.xml open
- Open another window, login to abcapp01dev and /opt/tomcat/bin/shutdown.sh; sleep 5; /opt/tomcat/bin/startup.sh (every time I make a change to the servlet)
(Host names and directory names have been changed to protect the innocent)
Setting up the environment was a little frustrating. And I knew that there was more to the story; some basic work, call it Part 0, to get some equipment in the datacenter, the OS installed, and IP addresses assigned. Part I, setting up the environment, is the work you would do to set up a PaaS platform. As a developer, the work in Part I was to enable me and my department to do the job in Part II – and we had a job to do – to get information to the guys in the plants who were actually manufacturing product!
So, here’s a rundown:
Part 0: servers, operating systems, patches, IPs… IaaS
Part I: middleware, configuration, basic testing… PaaS
Part II: application development
So, to me, PaaS is all about taking the bits and pieces provided by IaaS, configuring them into a usable platform, and delivering that platform to developers so they can deliver software to the business. And hopefully the business is better off because of our software. In this case, our software helped the assembly plant identify and reduce “in-system damage” to vehicles – damage that happens as a result of the manufacturing process.
Is the PaaS market as we know it dead?
I’ve read articles predicting the demise of PaaS altogether and others just asking the question about its future. There was a recent Networkworld article entitled “Is the PaaS market as we know it dying?” that discussed the subject. The article makes three main points, referring to 451 Research, Gartner, and other sources.
- PaaS features are being swallowed up by IaaS providers
- The PaaS market has settled down while the IaaS and SaaS markets have exploded
- Pure-play PaaS providers may be squeezed from the market by IaaS and SaaS
I agree with point #1. The evidence is in Amazon Web Services features like Auto Scaling, RDS, SQS, etc. These are fantastic features, but interfacing with them locks developers into using AWS as their single IaaS provider. The IaaS market is still very active, and I think there is a lot to come even though AWS is ahead of other providers at this point. IaaS is a commodity, and embedding specialized (read: PaaS) features in an otherwise IaaS system is a tool to get customers to stick around.
I disagree with point #2. The PaaS market has not settled down – it hasn’t even started yet! The spotlight has been on IaaS and SaaS because these things are relatively simple to understand, given the recent boom in server virtualization. SaaS also used to be known as something provided by ASPs (Application Service Providers), so many people are already familiar with it. I think PaaS and its concepts are still finding their place.
I also disagree with point #3; the time and opportunity for pure-play PaaS providers is now. IaaS is getting sorted out, and it is clearly a commodity item. As we highlighted earlier, solutions from PaaS providers can ride on top of IaaS. I think that PaaS will be the key to application portability amongst different IaaS providers – kind of like Java: write once, run on any JVM (kind of). As you might know, portability is a key theme in NIST’s cloud computing guidance.
Portability is key. I think PaaS will remain its own concept apart from IaaS and SaaS, and we’ll see some emergence of PaaS in 2014. Why? PaaS is the key to portable applications — once an application is written to a PaaS platform, it can be deployed on different IaaS platforms. It’s also important to note that AWS is almost always associated with IaaS, but they have started to look a lot like a PaaS provider (I touched on this in a blog earlier this month). An application written to use AWS features like Auto Scaling is great, but not very portable. Lastly, the PaaS market is ripe for innovation. Barriers to entry are low, as is required startup capital (there is no need to build a datacenter to build a useful PaaS platform).
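One common way to keep a provider’s “good bits” at arm’s length is to put a thin abstraction between the application and any provider-specific service. Below is a minimal sketch, assuming a message queue is the feature in question; the interface and the in-memory fallback are invented for illustration, and the SQS adapter uses the standard boto3 client call.

```python
class MessageQueue:
    """Portable interface the application codes against."""
    def send(self, body: str) -> None:
        raise NotImplementedError

class InMemoryQueue(MessageQueue):
    """Provider-neutral fallback, usable anywhere (even on premise)."""
    def __init__(self):
        self.messages = []
    def send(self, body: str) -> None:
        self.messages.append(body)

class SqsQueue(MessageQueue):
    """AWS-specific adapter -- the only class that knows about boto3."""
    def __init__(self, queue_url: str):
        import boto3  # deferred import so other adapters need no AWS SDK
        self._client = boto3.client("sqs")
        self._queue_url = queue_url
    def send(self, body: str) -> None:
        self._client.send_message(QueueUrl=self._queue_url, MessageBody=body)

def process_order(queue: MessageQueue, order_id: str) -> None:
    # Application logic sees only the portable interface.
    queue.send(f"order:{order_id}")

process_order(InMemoryQueue(), "12345")  # runs anywhere, no AWS required
```

Swapping providers then means writing one new adapter rather than rewriting the application, which is essentially the portability promise a PaaS layer makes at larger scale.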
This is just my opinion on PaaS — I think the next few years will see a growing interest in PaaS, possibly even over IaaS. I’m interested in hearing what you think about PaaS, feel free to leave me a comment here, find me on twitter at @dixonjp90, or reach out to us at socialmedia@greenpages.com
To hear more from John, download his whitepaper on hybrid cloud computing or his ebook on the evolution of the corporate IT department!
5 Cloud Predictions for 2014
By John Dixon, LogicsOne
Here are my 5 Cloud Predictions for 2014. As always, leave a comment below and let me know what you think!
1. IaaS prices will drop by at least 20%
Amazon has continued to reduce its pricing since it first launched its cloud services back in 2006. In February of last year, Amazon dropped its prices for the 25th time. By April, prices had dropped for the 30th time, and by the summer it was up to 37 times. Furthermore, there was a 37% drop in hourly costs for dedicated on-demand instances. Microsoft has announced that it will follow AWS’s lead with regard to price cuts. I expect this trend to continue in 2014 and likely 2015. I highlight some of these price changes, and the impact they will have on the market as more organizations embrace the public cloud, in more detail in my eBook.
2. We’ll see signs of the shift to PaaS
Amazon is already starting to look more like a PaaS provider than an IaaS provider. Just consider pre-packaged, pre-engineered features like Auto Scaling, CloudWatch, SQS, and RDS, among other services. An application hosted with AWS that uses all of these features looks more like an AWS application and less like a cloud application. Using proprietary features is very convenient, but don’t forget how application portability is impacted. I expect continued innovation in the PaaS market with new providers and technology, while downward price pressures in the IaaS market remain high. Could AWS (focusing on PaaS innovation) one day source its underlying infrastructure to a pure IaaS provider? This is my prediction for the long term: large telecoms like AT&T, Verizon, BT, et al. will eventually own the IaaS market, while Amazon, Google, and Microsoft will focus on PaaS innovation and use infrastructure provided by those telecoms. This of course leaves room for startup, niche PaaS providers to build something innovative and leverage quality infrastructure delivered from the telecoms. This is already happening with smaller PaaS providers. Look for signs of this continuing in 2014.
3. “The cloud” will not be regulated
Recently, there have been rumblings about regulating “the cloud,” especially in Europe, and claims that European clouds are safer than American clouds. If we stick with the concept that cloud computing is just another way of running IT (I call it the supply chain for IT service delivery), then the same old data classification and security rules apply. If anything, cloud computing concepts make classifying and securing your data appropriately even more important. An attempt to regulate cloud computing would certainly have far-reaching economic impacts. This is one to watch, but I don’t expect any legislative action here in 2014.
4. More organizations will look to cloud as enabling DevOps
It’s relatively easy for developers to head out to the cloud, procure needed infrastructure, and get to work quickly. When developers behave like this, they not only write code and test new products, but they become the administrators of the platforms they own (all the way from underlying code to patching the OS) — development and operations come together. This becomes a bit stickier as things move to production, but the same concept can work (see prediction #5).
5. More organizations will be increasingly interested in governance as they build a DevOps culture
As developers can quickly bypass traditional procurement processes and controls, new governance concepts will be needed. Notice how I wrote “concepts” and not “controls.” Part of the new role of the IT department is to stay a step ahead of these movements, and offer developers new ways to govern their own platforms. For example, a real time chart showing used vs. budgeted resources will influence a department’s behavior much more effectively than a cold process that ends with “You’re over budget, you need to get approval from an SVP (expected wait time: 2-8 weeks).”
Imagine a real-time chart of used vs. budgeted spend per service (the numbers in the example are fictitious). With the concept of Service Owners, the owner of collaboration services can get a view of the applications and systems that provide the service. The owner can then see that VoIP spending is a little above the others and drill down to see where resources are being spent (on people, processes, or technology). Different ITBM applications display these charts differently, but the premise is the same – real-time visibility into spend. With cloud usage in general gaining steam, it is now possible to adjust the resources allocated to these services. With this type of information available to developers, it is possible to take proactive steps to avoid compromising the budget allocated to a particular application or service. By the same token, this information exposes opportunities to make informed investments in certain areas.
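To make that chart concrete, here is a minimal sketch of the used-vs-budgeted view. The service names, budgets, and threshold are fictitious, in the same spirit as the chart described above.

```python
# Fictitious budgets and month-to-date spend per service, in dollars.
services = {
    "VoIP":         {"budget": 10_000, "spent": 9_200},
    "Email":        {"budget": 8_000,  "spent": 5_100},
    "File sharing": {"budget": 6_000,  "spent": 2_900},
}

# Flag any service trending past an (arbitrary) 85% of its budget.
for name, s in services.items():
    pct = 100 * s["spent"] / s["budget"]
    flag = "  <-- trending over budget" if pct > 85 else ""
    print(f"{name:<13} ${s['spent']:>6,} of ${s['budget']:>6,} ({pct:.0f}%){flag}")
```

A live chart of the same data, refreshed continuously, is far more likely to change a department’s behavior than a month-end report.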
So there you have it, my 2014 cloud predictions. What other predictions do you have?
To hear more from John, download his eBook “The Evolution of Your Corporate IT Department” or his Whitepaper “Cloud Management, Now“
Cloud Management, Business Continuity & Other 2013 Accomplishments
By Matt Mock, IT Director
It was a very busy year at GreenPages for our internal IT department. With 2013 coming to a close, I wanted to highlight some of the major projects we worked on over the course of the year. The four biggest projects we tackled were using a cloud management solution, improving our business continuity plan, moving our datacenter, and creating and implementing a BYOD policy.
Cloud Management as a Service
GreenPages now offers a Cloud Management as a Service (CMaaS) solution to our clients. We implemented the solution internally late last year, but really started utilizing it as a customer would this year by increasing what was being monitored and managed. We decided to put Exchange under the “Fully Managed” package of CMaaS. Exchange requires a lot of attention and effort; instead of hiring a full-time Exchange admin, we were able to offload that piece with CMaaS, as our Managed Services team does all the health checks to make sure any new configuration changes are correct. This resulted in considerable cost savings.

Having access to the team 24/7 is a colossal luxury. Before using CMaaS, if an issue popped up at 3 in the morning, we would find out about it the next morning and have to try to fix the problem during business hours. I don’t think I need to explain to anyone the hassle of trying to fix an issue with frustrated coworkers who are unable to do their jobs. If an issue arises now in the middle of the night, the problem has already been fixed before anyone shows up to start working. The Managed Services team also researches and remediates bugs that come up. This happened to us when we ran into some issues with Apple iOS calendaring: the Managed Services team did the research to determine the cause and went in and fixed the problem. If my team had tried to do this, it would have taken us 2-3 days of wasted time. Instead, we could focus on our other strategic projects. In fact, we are holding a webinar on December 19th that will cover strategies and benefits to being the ‘first-to-know,’ and we will also provide a demo of the CMaaS Enterprise Command Center.

We also went live with fully automated patching, which requires zero intervention from my team. Furthermore, we leveraged CMaaS to spin up a fully managed Linux environment. It’s safe to say that if we hadn’t implemented CMaaS, we would not have been able to accomplish all of our strategic goals for this year.
{Download this free whitepaper to learn more about how organizations can revolutionize the way they manage hybrid cloud environments}
Business Continuity Plan
We also determined that we needed to upgrade our disaster recovery plan to a true, robust business continuity plan. A main driver was our increasingly distributed office model. Not only were more people working remotely as our workforce expanded, but we now have office locations up and down the east coast in Kittery, Boston, Attleboro, New York City, Atlanta, and Tampa. We needed to ensure that we could continue to provide top-quality service to our customers if an event were to occur. My team took a careful look at our then-current infrastructure. After examining our policies and plans, we generated new ones around the optimal outcome we wanted and then adjusted the infrastructure to match. A large part of this included changing providers for our data and voice, which included moving our datacenter.
Datacenter Move
In 2013 we wanted more robust datacenter facilities. Ultimately, we were able to get into an extremely redundant and secure datacenter at the Markley Group in Boston that also provided us with cost savings. Markley is a large carrier hotel, which gives us additional savings on circuit costs. With this move we’re able to further our ability to deliver to our customers 24/7. Another benefit of our new datacenter is excess office space: if there were ever an event at one of our GreenPages locations, we would have a place to send people to work. I recently wrote a post that describes the datacenter move in more detail.
BYOD Policy
As 2013 ends, we are finishing our first full year with our BYOD policy. We are taking this time to look back, see where there were any issues with the policies or procedures, and adjust for next year. Our plan is to ensure that year two is even more streamlined. I answered questions in a recent Q&A explaining our BYOD initiative in more detail.
I’m pretty happy looking back at the work we accomplished in 2013. As with any year, there were bumps along the way and things we didn’t get to that we wanted to. All in all though, we accomplished some very strategic projects that have set us up for success in the future. I think that we will start out 2014 with increased employee satisfaction, increased productivity of our IT department, and of course noticeable cost savings. Here’s to a successful 2014!
Is your IT team the first-to-know when an IT outage happens? Or, do you find out about it from your end users? Is your expert IT staff stretched thin doing first-level incident support? Could they be working on strategic IT projects that generate revenue? Register for our upcoming webinar to learn more!
Why Automate? What to Automate? How to Automate?
By John Dixon, Consulting Architect
Automation is extremely beneficial to organizations. However, questions often come up around why to automate, what to automate, and how to automate.
Why automate?
There are several key benefits surrounding automation. They include:
- Time savings
- Employees can be retrained to focus on other (hopefully more strategic) tasks
- Fewer errors, because human intervention is removed
- Easier troubleshooting and support, because everything is deployed the same way
What to automate?
Organizations should always start with the voice of the customer (VoC). IT departments need to factor in what the end user wants and expects in order to improve their experience. If you can’t trace something you’re automating back to an improved customer experience, that’s usually a good warning sign that you should not be automating it. In addition, you need to be able to trace how automation has provided a benefit to the organization. The benefit should always be measurable, and always financial.
What are companies automating?
Request management is the hot one, because it’s a major component of cloud computing. This includes service catalogues and self-service portals. Providing a self-service portal, sending the request for approval based on the dollar amount requested, and fulfilling the order through one or more systems is something that is commonly automated today. My advice here is to automate tasks through a general-purpose orchestrator tool (such as CA Process Automation or similar) so that automated jobs can be managed from a single console, instead of stitching together disparate systems that call each other in a “rat’s nest” of automation. The general-purpose orchestrator also allows for easier troubleshooting when an automated task does not complete successfully.
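To illustrate the approval-by-dollar-amount pattern, here is a minimal sketch. The thresholds and roles are invented, and in practice this logic would live in the orchestrator or service catalogue rather than in standalone code.

```python
def required_approver(monthly_cost: float) -> str:
    """Route a cloud request to an approver based on its price tag."""
    if monthly_cost < 500:
        return "auto-approved"
    if monthly_cost < 5_000:
        return "team-manager"
    return "it-director"

def submit_request(requester: str, description: str, monthly_cost: float) -> None:
    approver = required_approver(monthly_cost)
    # A real system would open a ticket here and record an audit trail.
    print(f"{requester}: '{description}' (${monthly_cost:,.2f}/mo) -> {approver}")

submit_request("jsmith", "2x Linux VMs for dev", 280.00)
submit_request("adoe", "production cluster expansion", 7_400.00)
```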
How to automate?
There are some things to consider when sitting down to automate a task, or even determining the best things to automate. Here are a few key points:
- Start with the VoC, or Voice of the Customer, and work backwards to identify the systems needed to automate a particular task. For example, maybe the customer is the Human Resources department, and they want to automate the onboarding of a new employee. That may mean setting up user accounts, ordering a new cell phone, ordering a new laptop, and scheduling the new employee on their manager’s calendar for their first day of work. Map out the systems required to accomplish this, and integrate those – and no more. You may find that some parts of the procedure are already automated; perhaps your phone provider already has an interface to programmatically request new equipment. Take every advantage of these components. (I sketch this onboarding example in code after this list.)
- Don’t automate things that you can’t trace back to a benefit for the organization. Just because you can automate something doesn’t mean that you should. Again, use the voice of the customer and user stories here. A common user story is structured as follows:
- “As a [role],
- I want to [get something done]
- So that I can [benefit in the following way]”
- Start small and work up to automating more and more complex tasks. Remember the HR onboarding procedure in the first point? I wouldn’t suggest beginning your automation journey there. Pick out one thing to automate from a larger story, and get it working properly. Maybe you begin by automating the scheduling of an appointment in Outlook or your calendaring system, or creating a user in Active Directory. Those pieces become components in the HR onboarding story, and perhaps in other stories as well.
- Use a general-purpose orchestrator instead of stitching together different systems. As noted earlier, an orchestrator will allow you to build reusable components that are useful for automating different tasks. A general-purpose orchestrator also allows for easier troubleshooting when things go wrong, tracking of automation jobs in the environment, and more advanced conditional logic. Troubleshooting automation any other way can be very difficult.
- You’ll need someone with software development experience. Some automation packages claim that even non-developers can build robust automation with “no coding required.” In some cases, that may be true. However, the experience a developer brings to the table is an absolute must-have when automating complex tasks like the HR onboarding example above.
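As promised above, here is a minimal sketch of the orchestrator idea applied to the HR onboarding example. Every function name is hypothetical; the point is that each step is a small, reusable, individually testable component, and the runner gives you one place to see what succeeded and what failed.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Small, reusable building blocks -- each could serve other workflows too.
def create_ad_user(name):
    logging.info("created directory account for %s", name)

def order_laptop(name):
    logging.info("ordered laptop for %s", name)

def order_phone(name):
    logging.info("ordered cell phone for %s", name)

def schedule_day_one(name):
    logging.info("scheduled day-one meeting for %s", name)

ONBOARDING = [create_ad_user, order_laptop, order_phone, schedule_day_one]

def run_workflow(steps, name):
    """A toy orchestrator: run steps in order, log outcomes in one place."""
    for step in steps:
        try:
            step(name)
        except Exception:
            logging.exception("step %s failed for %s", step.__name__, name)
            break  # a real orchestrator might retry, branch, or roll back

run_workflow(ONBOARDING, "New Employee")
```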
What has your organization automated? How have the results been?