Category Archives: Cloud computing

Guest Post: Why Midmarket Business Needs Cloud Services in 2013

Guest Post: Grant Davis

This is a guest post and does not necessarily reflect the views or opinions of GreenPages Technology Solutions.

The global market grows more competitive by the second, which forces businesses to operate efficiently at every level of the organization. Midmarket businesses in particular face tall tasks in 2013. As an enterprise grows, its information grows with its operations, and a growing company requires higher-level data management. That, in turn, creates more intricate demands on IT organization and communication.

If a midmarket firm is firing on all cylinders, acquiring new clients and consumers by the day, there is only so much that an unorganized or undersized operations model can withstand. An IT department can cope with only so many networks and so much data. One commonality among growing American businesses is the adoption of cloud services: the cloud offers a way to outsource data and network management, freeing resources and time for more fundamental aspects of the business.

Below I list the main ways midmarket businesses can benefit from cloud services in 2013, and why each one matters.

1. Cost.

Cloud services can be financially viable in the right situation. Using an outsourced data storage center can decrease spending on real estate, software, and payroll. For one, a midmarket firm that works with a cloud vendor does not have to physically house as much data. This is a substantial benefit, partly for the physical space itself but also for the operational costs, particularly energy consumption, that a larger footprint brings.

Secondly, a cloud provider takes responsibility for software agreements and network operations. This lifts a huge burden from a midmarket firm, since serious growth tends to pull focus away from routine processing issues. Shifting this responsibility to the cloud provider also reduces cost in the sense that a business can reduce or reassign IT staff for better efficiency. Not having to juggle multiple SLAs with various software providers is another benefit; consolidating those agreements under the cloud vendor can save money.

Why this is crucial: A midmarket business reaches maximum efficiency only when all of the parts are in place, and part of that is allocating resources to get the most out of each area of the company. If a data modeler or system administrator can be used more effectively during this crucial phase of growth, it may be better to outsource their daily role to a cloud vendor and redirect their creativity and focus to areas of the business beyond process management. Innovation is key right now, and this is part of the process.

2. Flexibility.

Cloud services may be a good idea for IT decision makers within a midmarket because they make employees more flexible. For instance, a cloud vendor allows immediate access to business information from various portals, including mobile devices. In 2013 a typical cloud vendor seamlessly gives business leadership and operations teams access to information at every point in their daily routines, a huge benefit in a world where nearly everything happens immediately and in real time.

Also, because a midmarket business is often competing for reach in a crowded market, traveling off-site becomes much less detrimental to work efficiency. If the company CIO is at a tradeshow in Phoenix, they can still access any processes maintained by the cloud vendor.

Why this is crucial: Midmarket business in 2013 requires collaboration to succeed. With information held in a cloud storage center, it can be accessed from diverse locations, which increases both internal and external collaboration. The modern market is far too demanding to tolerate inefficiency, and for a growing business managing data, flexibility is directly tied to efficiency.

3. Scalability.

IT is the backbone of most business operations. Modern information is too complex to handle manually; we rely on computers and networks to transport and maintain data. An additional advantage of cloud services for a midmarket firm is the ability to scale IT services up or down based on need. If growth stalls over the holiday season, the firm can scale back its service agreement with the vendor to save money during that time. If business keeps growing, the cloud service can just as easily expand to accommodate the new volume of data it manages. Internal data management is far less adaptable, since new software and hardware must be purchased with each major change in IT requirements, which can lead to wasted money and lost resources.
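The scale-up/scale-down decision described above can be sketched as a simple capacity policy. This is an illustrative sketch only; the function name, the unit model, and the 20% headroom figure are assumptions for illustration, not any cloud vendor's actual API:

```python
import math

def desired_capacity(current_load, unit_capacity, headroom=0.2):
    """Return how many service units to provision for the given load,
    keeping a fraction of headroom for unexpected spikes."""
    needed = current_load * (1 + headroom) / unit_capacity
    return max(1, math.ceil(needed))

# Quiet holiday season: scale the agreement back and save money.
print(desired_capacity(current_load=300, unit_capacity=100))    # 4
# Sustained growth: the same policy scales the agreement up.
print(desired_capacity(current_load=2400, unit_capacity=100))   # 29
```

The same policy runs in both directions, which is the point of the section: capacity follows demand rather than being bought in large, fixed increments.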

Why this is crucial: Business is too unpredictable in the current economy to assume anything, even IT requirements. Cloud vendors allow leeway in data storage, and this matters for a midmarket firm because these businesses often lack the margin for error to absorb inefficiency. They need exactly the storage they need, when they need it.

Conclusions

Midmarket business is important to the United States economy and affects the lives of many people. Usually these operations are on the cusp of doing something significant, and proper organization within IT can help allocate resources in the right areas for maximum production and business model maintenance. It’s time for IT and business leadership to make note of this and move to action in early 2013.


Grant Davis is a Data Modeler by day and a writer by night. His passion for computers started when he discovered instant messaging in junior high school. When Grant isn’t trying to climb through the computer screen he writes for BMC, a leading mainframe management provider.

Are you a midmarket organization looking to bring your IT environment to the next level? Click to learn more about how GreenPages can help!


Wired Profiles a New Breed of Internet Hero, the Data Center Guru

The whole idea of cloud computing is that mere mortals can stop worrying about hardware and focus on delivering applications. But cloud services like Amazon’s AWS, and the amazingly complex hardware and software that underpin all that power and flexibility, do not happen by chance. This Wired article about James Hamilton paints a picture of a new breed of folks the Internet has come to rely on:

…with this enormous success comes a whole new set of computing problems, and James Hamilton is one of the key thinkers charged with solving such problems, striving to rethink the data center for the age of cloud computing. Much like two other cloud computing giants — Google and Microsoft — Amazon says very little about the particulars of its data center work, viewing this as the most important of trade secrets, but Hamilton is held in such high regard, he’s one of the few Amazon employees permitted to blog about his big ideas, and the fifty-something Canadian has developed a reputation across the industry as a guru of distributed systems — the kind of massive online operations that Amazon builds to support thousands of companies across the globe.

Read the article.


WSO2 Opens Beta for Stratos 2.0 PaaS Offering

Today WSO2 unveiled the beta release of WSO2 Stratos 2.0. Newly re-architected, WSO2 Stratos 2.0 is a foundation for implementing a platform-as-a-service (PaaS) that combines support for heterogeneous applications and service-oriented architecture (SOA) platform runtimes with native, secure multi-tenancy. WSO2 Stratos 2.0 also adds the capability to run on any cloud infrastructure, including VMware, Eucalyptus, Amazon and OpenStack.

WSO2 previewed the latest release of WSO2 Stratos at WSO2Con 2013, which runs February 13-14 in London. The company also announced that it has begun accepting customers for the WSO2 Stratos 2.0 beta program.

“Increasingly enterprises view the cloud as a platform for innovation. However, until now they’ve had to make trade-offs between using their favorite development tools and middleware or capitalizing on the multi-tenancy of a native cloud environment. With WSO2 Stratos 2.0, these organizations no longer have to compromise,” said Dr. Sanjiva Weerawarana, WSO2 founder and CEO.

The Stratos 2.0 approach to multi-tenancy goes beyond other PaaS environments to support multiple levels of virtualization — from standard virtual machines, via Linux Containers, to intra-JVM isolation. This choice of sharing resources while providing the correct isolation level for each tenant is a significant factor in enabling lower costs, greater flexibility, and easier on-ramping into a private or public cloud environment.

The new tenant-aware elastic load balancer in Stratos 2.0 is a first-of-a-kind capability that allows the environment to provide highly tunable performance to different tenants, ranging from “economy class” for low priority workloads up to “private jet” mode for workloads that require dedicated resources.
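The "economy class" to "private jet" range described above can be pictured as tier-based routing. A minimal sketch follows; the tier names, worker pools, and round-robin policy are all invented for illustration, since the announcement gives no implementation details:

```python
# Hypothetical worker pools: economy tenants share capacity, while
# "private jet" tenants get dedicated workers.
POOLS = {
    "shared": ["worker-1", "worker-2"],
    "dedicated": ["worker-3"],
}

def route(tenant):
    """Pick a worker for a tenant's request based on its service tier."""
    pool = POOLS["dedicated"] if tenant["tier"] == "private-jet" else POOLS["shared"]
    # Simple round-robin within the chosen pool, keyed on request count.
    return pool[tenant["requests"] % len(pool)]

print(route({"tier": "economy", "requests": 3}))      # worker-2
print(route({"tier": "private-jet", "requests": 3}))  # worker-3
```

The design choice being advertised is exactly this split: shared resources where cost matters most, dedicated resources where isolation and performance matter most.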

At the heart of Version 2.0 is a new cartridge architecture for plugging software into WSO2 Stratos to take advantage of cloud-native capabilities such as multi-tenancy, elastic scaling, self-service provisioning, metering, billing, and resource pooling, among others. As a result, WSO2 Stratos 2.0 can run not only the 13 WSO2 Carbon enterprise middleware products but also a choice of frameworks, databases, and other application services.

WSO2 Stratos 2.0 also significantly enhances PaaS deployment through an integration layer that uses the popular jclouds technology to allow it to run on any infrastructure-as-a-service (IaaS) including OpenStack, VMware, Eucalyptus and CloudStack. Additionally, use of the Puppet open source tool for infrastructure deployment in this release makes it easier than ever to install and configure Stratos in a private or public cloud environment. Like all WSO2 software, WSO2 Stratos 2.0 is 100% open source and will be made available under the Apache License 2.0.

Ideal candidates for the WSO2 Stratos 2.0 beta program are enterprise IT professionals who are planning or evaluating ways to deliver new applications and/or migrate existing ones to the cloud. Participants also must be committed to participating and giving feedback. For more information and to contact WSO2 about joining the beta program, please visit the product Web page: http://wso2.com/cloud/stratos.

Guest Post: Who Controls the Cloud Market – Providers or Consumers?

Guest Post: Ilyas Iyoob, Director, Advanced Analytics and Sr. Research Scientist, PhD at Gravitant

We first went from reserving cloud capacity to securing capacity on demand, and then we even started to bid for unused capacity in the spot market – all in an effort to decrease cost in the cloud. Can we take this one step further? Instead of bidding for capacity, wouldn’t it be interesting if we could get providers to bid for our demand?

Retail Supply Chain Market Analogy

In fact, this is a common phenomenon in the retail supply chain industry. For example, Walmart has a large amount of freight that needs to be shipped between different cities over the course of the year. So every year an auction is conducted in which Walmart lists all of its shipments, and carriers such as J.B. Hunt, Schneider, and Yellow bid for the opportunity to carry those shipments with their fleets of trucks. Carriers bid for retailer demand because, in general, capacity exceeds demand in the retail industry.

Cloud Computing Market

Keeping this in mind, let us now take a look at the Cloud Computing Market.  Does capacity exceed demand or is it the other way around?  A quick way to find out is by observing spot prices in the cloud market.  In today’s market, Amazon’s Spot Instances are 86% cheaper than their on-demand instances, and Enomaly’s SpotCloud also shows lower spot prices across the board.  This leads us to believe that capacity exceeds demand in the cloud market as well.  A related indicator is the predominance of data center consolidation initiatives in both the commercial and government marketplaces.
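As a quick sanity check on the cited gap, the "86% cheaper" figure translates into hourly and monthly numbers as follows. The $0.48/hr on-demand rate is an assumed, illustrative price, not a quote from any provider:

```python
# Back-of-the-envelope check of the spot-vs-on-demand gap cited above.
on_demand_hourly = 0.48          # assumed illustrative on-demand rate
discount = 0.86                  # "86% cheaper" per the post
spot_hourly = on_demand_hourly * (1 - discount)
monthly_savings = (on_demand_hourly - spot_hourly) * 24 * 30
print(f"spot: ${spot_hourly:.4f}/hr")              # spot: $0.0672/hr
print(f"saved per month: ${monthly_savings:.2f}")  # saved per month: $297.22
```

Even on a single instance the gap is large, which is why spot prices are a useful signal that capacity currently exceeds demand.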

Since capacity exceeds demand, consumers have the upper hand and are in control of the cloud market at the moment. Moreover, they should be able to replicate what is being done in the retail supply chain industry: cloud consumers should be able to auction off their demand to the best-fit, lowest-price cloud provider.
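The auction mechanism suggested here can be sketched as a simple reverse auction. Everything below (the bid fields, prices, and capacities) is invented for illustration; a real brokerage would also weigh SLAs, region, and technical fit:

```python
def run_reverse_auction(demand, bids):
    """Return the cheapest bid whose capacity covers the demand, or None."""
    feasible = [b for b in bids if b["capacity"] >= demand]
    return min(feasible, key=lambda b: b["price_per_unit"]) if feasible else None

bids = [
    {"provider": "A", "capacity": 500, "price_per_unit": 0.10},
    {"provider": "B", "capacity": 200, "price_per_unit": 0.07},
    {"provider": "C", "capacity": 800, "price_per_unit": 0.08},
]
winner = run_reverse_auction(demand=400, bids=bids)
print(winner["provider"])  # C -- B is cheapest but cannot cover the demand
```

This mirrors the Walmart analogy above: the consumer lists demand once, and providers compete on price to serve it.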

So, …

Consumers should seize the opportunity and control the market while the odds are in their favor (i.e., demand < capacity). At the same time, Service Integrators and Value Added Resellers can help Enterprise IT consumers in this process by conducting primary-market auctions using Cloud Service Brokerage technology.

This post was originally published on Gravitant’s blog.

Be Nimble, Be Quick: A CRN Interview with GreenPages’ CEO

CRN Senior Editor and industry veteran Steve Burke sat down with GreenPages’ CEO Ron Dupler to discuss shifts in ideology in the industry as well as GreenPages’ new Cloud Management as a Service (CMaaS) offering. The interview, which was originally posted on CRN.com, is below. What are your thoughts on Ron’s views of the changing dynamics of IT?


CRN: Talk about your new cloud offering.

Dupler: It is available today. We can support physical, virtual and cloud-based infrastructure through a single pane of glass today. We are actually using the technology internally as well.

There is another part of CMaaS that covers cloud governance, governance models for a cloud world, and cloud services brokerage. That is what we are integrating and bringing to market very soon.

CRN: How big a game-changer is CMaaS?

Dupler: I think we are going to be well out in front of the market with this. I personally believe we can go have discussions right now and bring technologies to bear to support those discussions that no one else in the industry can right now.

That said, we know that the pace of innovation is rapid and we expect other organizations are trying to work on these types of initiatives as well. But we believe we’ll be out front certainly for this year.

CRN: How does the solution provider business model change from 2013 to 2018?

Dupler: The way we are looking at our job and the job of the solution provider channel over the next several years through 2018 is to provide IT plan, build, run and governance services for the cloud world.

The big change is that the solution provider channel for many years has made their money off the fact that infrastructure fundamentally doesn’t work very well. And it has been all about architecting and integrating physical technologies and software platforms to support the apps and data that really add value for the business.

When we move to the cloud world, this is now about integrating service platforms as opposed to physical technologies. So it is about architecting and integrating on-premise and cloud service platforms really to create IT-as-a-Service to support the apps and data for the platform. That is the transition that is under way.

CRN: Does the GreenPages brand become bigger than the vendor brand and how does that affect vendor relations in the CMaaS era?

Dupler: We continue to closely evaluate all our key partner relationships. That is managed very closely. What we try to do is make sure we are partnered with the right companies that are really leading this transformation. And our number one partner because they are driving this transformation is VMware. With this whole software-defined data center concept and initiative, VMware has really laid out a great vision for where this market is going.


CRN: There is a prevailing view that solution providers need to go big or go home, with many solution providers selling their businesses. Do you see scale becoming more important — that you need to scale?

Dupler: No. People have been saying that for years. It is all about customer value and the talent of your team, if you are adding value for clients. You need to be able to service the client community. And they care about quality of service and the ability of your team. Not necessarily that you are huge. I have been down the M&A road and, as you know, we do M&A here on a smaller scale. And I will tell you there are pros and cons to it. You aggregate talent, but you also have got the inertia of pulling companies together and integrating companies and people and executive teams and getting through that.

I absolutely do not subscribe and never have subscribed to the fact that size in itself gives competitive advantage. There are some advantages, but there are also costs to doing that.

CRN: What is the ultimate measure for success in this new world?

Dupler: It is a combination of three things: technology, and I will firmly say it doesn’t have to be homegrown. It could be homegrown or it could be commercial off-the-shelf. It is the way the technology is leveraged and having the technologies with the ability to drive the services you are trying to provide. What we are trying to do with CMaaS is single pane of glass management for the physical, virtual and cloud infrastructure, which I have mentioned, as well as cloud service brokerage and cloud governance services. You can either develop those on your own or integrate partner technologies or both, but you need the supporting technology base and you need people and you need process.

CRN: How big a transition is this and what percentage of VARs do you think will make it to 2018?

Dupler: The companies that I think are going to have a huge challenge are the big product-centric organizations right now. The DMR [direct marketer] community. They have some big challenges ahead of them over time. All these guys are trying to come up with cloud strategies as well.

Right now there is a premium on being nimble. That is the word of the day for me in 2013. Nimble. You need nimble people and you need a nimble business organization because things are moving faster than they ever have. You just have to have a culture and people that can change quickly.

Going back to is it good just to be big? Sometimes it is hard to maintain [that agility] as you get really big. The magnitude of the change that is required to succeed over the next five years is extremely significant. And people that aren’t already under way with that change have a big challenge ahead of them.

CRN: What is the pace of change like managing in this business as a CEO vs. five years ago?

Dupler: It is exponential.

CRN: Is it tougher to manage in an environment like this?

Dupler: You say it is tougher, but there is more opportunity than ever because of the pace of change to really differentiate yourself. So it can be challenging but it is also very stimulating and exciting.

CRN: Give me five tips you need to thrive in 2018.

Dupler: First of all, you need hybrid cloud management capabilities.

Number two, you need cloud services brokerage capabilities. It is ultimately an ability to provide a platform for clients to acquire as-a-service technologies from GreenPages. To be able to sell the various forms of infrastructure, platform and software as a service.

Number three is cloud architecture and integration capabilities.

Fourth, product revenue and profit streams can no longer be central to supporting the business. The service model needs to become a profitable, thriving stand-alone entity without the product revenue streams.

The fifth thing, and it is the biggest challenge: one part is migrating your technology organization, and the next is creating a services-based sales culture.

CRN: Talk about how big a change that is.

Dupler: It is a huge change. Again, if people are not already under way with this change they have a huge challenge ahead of them. Everybody I speak with in the industry — whether it is at [UBM Tech Channel’s] BoB conference or at partner advisory councils — everybody is challenged with this right now. The sales force in the solution provider industry has been old paradigm physical-technology-based and needs to move into a world where it is leading with professional and managed services. And that game is very different. So I think there are two ways to address that: one is hiring new types of talent or helping the talent we all have transform. It is going to be a combination of both that gets us ultimately where we need to be.

CRN: What do you think is the biggest mistake being made right now by competitors or vendors?

Dupler: What I see is people that are afraid to embrace the change that is under way and are really hanging on to the past. The biggest mistake I see right now is people continuing to evangelize solutions to customers that aren’t necessarily right by the customer, but conform to what they know and drive the most profit for their organizations.

Short-term gain isn’t going to drive long-term customer value. And we need to lead the customers forward through this transformation as opposed to perpetuating the past. The market needs leadership right now. The biggest challenge for people is not moving fast enough to transform their businesses.

This interview was originally posted on CRN.com

To learn more about GreenPages’ CMaaS offering click here!

Guest Post: A Wrinkle in the IT Universe

By Kai Gray, VP of Operations at Carbonite

I feel like tectonic plates are shifting beneath the IT world. I’ve been struggling to put my finger on what it is that is making me feel this way, but slowly things have started to come into focus. These are my thoughts on how cloud computing has forever changed the economics of IT by shifting the balance of power.

The cloud has fundamentally changed business models; it has shifted time-to-market, entry points and who can do what. These byproducts of massive elasticity are wrapped up in an even greater evolutionary change that is occurring right now: The cloud is having a pronounced impact on the supply chain, which will amount to a tidal wave of changes in the near-term that will cause huge pain for some and spawn incredible innovation and wealth for others. As I see it, the cloud has started a chain of events that will change our industry forever:

1) Big IT used to rule the datacenter. Not long ago, large infrastructure companies were at the heart of IT. The EMCs, Dells, Ciscos, HPs and IBMs were responsible for designing, sourcing, supplying and configuring the hardware behind nearly all of the computing and storage power in the world. Every server closet was packed full of name-brand equipment, and the datacenter was no different. A quick tour of any datacenter would, and still will, showcase the wares of these behemoths of the IT world. These companies developed sophisticated supply and sales channels that produced high-margin businesses built on some very good products, spanning the OEMs and ODMs that produced the bent metal and the VARs and distributors who then sold the finished products. Think of De Beers, the diamond mine owner and distributor. What is the difference between a company like HP and De Beers? Not very much, but the cloud began to change all that.

2) Cloud Computing. Slowly we got introduced to the notion of cloud computing. We started using products that put the resource away from us, and (slowly) we became comfortable with not needing to touch the hardware. Our email “lived” somewhere else, our backups “lived” somewhere else and our computing cycles “lived” somewhere else. With each incremental step, our comfort levels rose until it stopped being a question and turned into an expectation. This process set off a dramatic shift in supply chain economics.

3) Supply Chain Economics. The confluence of massive demand with near-free products (driven by the need to expand customer acquisition) changed how people had to think about infrastructure. All of a sudden, cloud providers had to think about infrastructure in terms of true scalability, which meant acquiring and managing massive amounts of infrastructure at the lowest possible cost. This was, and is, fundamentally different from the way the HPs and Dells and Ciscos thought about the world, and all of a sudden those providers were unable to address the needs of this new market effectively. This isn’t to say that the big IT companies can’t, just that it’s hard for them. It’s hard to accept shrinking margins and “openness.” The people brave enough to promote such wild ideas are branded as heretics and accused of rocking the boat (even as the boat is sinking). Eventually the economic and scale requirements forced cloud providers to tackle the supply chain and go direct.

4) Going Direct. As cloud providers developed strong supply chain relationships and built up their competencies around hardware engineering and logistics, they became more ingrained with the ODMs (http://en.wikipedia.org/wiki/Original_design_manufacturer) and other primary suppliers. Huge initiatives came into existence from the likes of Amazon, Google and Facebook, all focused on driving down the cost of everything. For example, Google began working directly with Intel and AMD to develop custom chipsets that allow it to run at efficiency levels never before seen, and Facebook started the Open Compute Project, which seeks to open-source design schematics that were once locked in vaults.

In short, the supply chain envelope gets pushed by anyone focused on cost and large scale.

…and here it gets interesting.

Cloud providers now account for more supplier revenue than the Big IT companies. Or, maybe better stated — cloud providers account for more hope of revenue (HoR) than Big IT. So, what does that mean? That means that the Big IT companies no longer receive the biggest discounts available from the suppliers. The biggest discounts are going to the end users and the low-margin companies built solely on servicing the infrastructure needs of cloud providers. This means that Big IT is at even more of a competitive disadvantage than they already were. The cycle is now in full swing. If you think this isn’t what is happening, just look at HP and Dell right now. They don’t know how to interact with a huge set of end users without caving in their margins and cannibalizing their existing businesses. Some will choose to amputate while others will go down kicking, but margin declines and openness of information will take their toll with excruciating pain.

What comes of all this? I don’t know. But here are my observations:

1) Access to the commodity providers (ODMs and suppliers) is relatively closed. To be at all interesting to ODMs and suppliers, you have to be doing things at enough volume that it is worthwhile for them to engage with you. That will change. The commodity suppliers will learn how to work in different markets, and there will be huge opportunity for companies that help them get there. When access to ODMs and direct suppliers gets opened up to traditional Enterprise companies, so that they can truly and easily take advantage of commodity hardware through direct access to suppliers, then, as they say, goodnight.

2) Companies that provide some basic interfaces between the suppliers and the small(er) consumers will do extremely well. For me, this means configuration management of some sort, but it could be anything that helps accelerate the linkage between supplier and end user. The day will come when small IT shops have direct access to suppliers and are able to custom-build hardware the same way huge cloud providers do today. Some might argue that there is no need for small shops to do this — that they can use other cloud providers, that it’s too time consuming to do it on their own, and that their needs are not unique enough to support such a relationship. Yes, yes, and yes… for right now. Make it easy for companies to realize the cost and management efficiencies of direct supplier access and I don’t know of anyone that wouldn’t take you up on it. Maybe this is the evolution of the “private cloud” concept, but all I know is that, right now, the “private cloud” talk is dominated by the Big IT folks, so the conflict of interest is too great.

3) It’s all about the network. I don’t think the network is being addressed in the same way as other infrastructure components. I almost never hear about commodity “networks,” yet I constantly hear about commodity “hardware.” I’m not sure why. Maybe Cisco and Juniper and the other network providers are good at deflecting, or maybe it’s too hard a problem to solve, or maybe the cost isn’t a focal point (yet). Whatever the reason, I think this is a huge problem/opportunity. Without the network, everything else can just go away. Period. The entire conversation driving commodity-whatever is predicated on delivering lots of data to people at very low cost. The same rules that drive commoditization need to be applied to the network, and right now I know of only one or two huge companies that are even thinking in these terms.

There are always multiple themes in play at any given time that, looking back, we summarize as change. People say that the Internet changed everything. And, before that, the PC changed everything. What we’re actually describing is a series of changes that happened over a period of time and have the cumulative effect of making us say, “How did we ever do X without Y?” I believe that the commoditization of infrastructure is just one theme within the change that will be described as Cloud Computing. I contend, however, that the day is almost upon us when everybody, from giant companies to the SMB, will say, “Why did we ever buy anything but custom hardware directly from the manufacturer?”

This post originally appeared on kaigray.com.  It does not necessarily reflect the views or opinions of GreenPages Technology Solutions.

To Learn more about GreenPages Cloud Computing Practice click here.

Is Cloud Computing Ready for Prime Time?

By John Dixon, Senior Solutions Architect


A few weeks ago, I took part in another engaging tweetchat on Cloud Computing. The topic: is cloud computing ready for enterprise adoption? You can find the transcript here.


As usual with tweetchats hosted by CloudCommons, five questions are presented a few days in advance of the event. This time around, the questions were:

  1. Is Public Cloud mature enough for enterprise adoption?
  2. Should Public Cloud be a part of every business’s IT strategy?
  3. How big of a barrier are legacy applications and hardware to public cloud adoption?
  4. What’s the best way to deal with cloud security?
  5. What’s the best way to get started with public cloud?


On Question #1, the position of most people in the chat session this time was that Public Cloud is mature enough for certain applications in enterprises today. The technology certainly exists to run applications “in the cloud,” but regulations and policies may not be ready to handle a cloud deployment. Another interesting observation from the tweetchat was that most enterprises are already running applications “in the cloud” right now. GreenPages considers applications such as Concur and Salesforce.com to be running “in the cloud,” and of course many organizations large and small run these applications successfully. I’d also consider ADP a cloud application, and many organizations make use of ADP for payroll processing.

Are enterprises mature enough for cloud computing?

Much of the discussion during question #1 turned the question on its head: the technology is there, but enterprises are not ready to deploy applications there. GreenPages’ position is that even if cloud computing is not yet ready for prime time, it certainly will be soon. Organizations should prepare for this eventuality by gaining a deep understanding of the IT services they provide and how much each service costs. When one or more of your IT services can be replaced by one that runs reliably and inexpensively in the cloud, will your company be able to make the right decision and take advantage of that condition? Another interesting observation: some public cloud offerings may be enterprise-ready, but not all public cloud vendors are enterprise-grade. We agree.

Should every business have a public cloud strategy?

Most of the discussion here pointed to a “yes” answer, or at least that an organization’s strategy will eventually, by default, include consideration for public cloud. We think of cloud computing as a sourcing strategy in and of itself, especially when thinking of IaaS and PaaS. Even now, IaaS vendors are essentially providers of commodity IT services. Most commonly, an IaaS vendor can provide you with an operating system instance: Windows or Linux. For IaaS, the degree of abstraction is very high, as an operating system instance can be deployed on a wide range of systems: physical, virtual, paravirtual, etc. The consumer of these services doesn’t care where the OS instance is running, as long as it performs to the agreed SLA. Think of Amazon Web Services here: depending on the application I’m deploying, there is little difference whether I’m using infrastructure that physically runs in Northern Virginia or in Southern California. At GreenPages, we think this degree of abstraction will move into the enterprise as corporate IT departments evolve to behave more like service providers, and probably into brokers of IT services, supported by a public cloud strategy.

Security and legacy applications

Two questions revolved around legacy applications and security as barriers to adoption. Every organization has a particular application that will not be considered for cloud computing. The arguments are similar for the reasons why we never (or, are just beginning to) virtualize legacy applications. Sometimes, virtualizing specialized hardware is, well, really hard and just not worth the effort.

What’s the best way to get started with public cloud?

“Just go out and use Amazon,” was a common response to this question, both in this particular tweetchat and in other discussions. Indeed, trying Amazon for some development activities is not a bad way to evaluate the features of public cloud. In our view, the best way to get started with cloud is to begin managing your datacenter as if it were a cloud environment, with some tool that can manage traditional and cloud environments the same way. Even legacy applications. Even applications with specialized hardware. Virtual, physical, paravirtual, etc. Begin to monitor and measure your applications in a consistent manner. This way, when an application is deployed to a cloud provider, your organization can continue to monitor, measure, and manage that application using the same method. For those of us who are risk-averse, this is the easiest way to get started with cloud! How is this done? We think you’ll see that Cloud Management as a Service (CMaaS) is the best way.

Would you like to learn more about our new CMaaS offering? Click here to receive some more information.

Getting Out of the IT Business

Randy Weis, Director of Solutions Architecture

Strange title for a blog from an IT solutions architect? Not really.

Some of our clients—a lumber mill, a consulting firm, a hospital—are starting to ask us how to get out of “doing IT.” What do these organizations all have in common? They all have a history of challenges in effective technology implementations and application projects leading to the CIO/CTO/CFO asking, “Why are we in the IT business? What can we do to offload the work, eliminate the capital expenses, keep operating expenses down, and focus our IT efforts on making our business more responsive to shifting demands and reaching more customers with a higher satisfaction rate?”

True stories.

If you are in the business of reselling compute, network, or storage gear, this might not be the kind of question you want to hear.

If you are in the business of consulting on technology solutions to meet business requirements, this is exactly the kind of question you should be preparing to answer. If you don’t start working on those answers, your business will suffer for it.

Technology has evolved to the point where the failed marketing terms of grid and utility computing are coming back to life, and we are not talking about zombie technology. Cloud computing used to be about as real as grid or utility computing, but “cloud” is no longer just a marketing term. We now have new, proven, and emerging technologies that actually can support a utility model for information technology. Corporate IT executives are starting to accept that the new cloud computing infrastructure-as-a-service is reliable (recent AWS outages notwithstanding), predictable, and useful to a corporate strategy. Corporate applications still need to be evaluated for requirements that restrict deployment and implementation strategies: latency, performance, concerns over satisfying legal/privacy/regulatory issues, and so on. However, the need for elastic, scalable, on-demand IT services that are accessible anywhere is starting to force even the most conservative executives to look at the cloud for offloading non-mission-critical workloads and their associated costs (staff, equipment, licensing, training, and so on). Mission-critical applications can still benefit from cloud technology, perhaps only as internal or private cloud, but the same factors apply: reduce time to deploy or provision, automate workflow, scale up or down as dictated by business cycles, and push provisioning back out into the business (while holding those same units accountable for the resources they “deploy”).

Infrastructure as a service is really just the latest iteration of self-service IT. Software as a service has been with us for some time now, and in some cases is the default mode; CRM is the best example (e.g., Salesforce). Web-based businesses have been virtualizing workloads and automating deployment of capacity for some time as well, and development and testing have been the “low-hanging fruit” of both virtualization and cloud computing. However, when the technology of virtualization reached a certain critical mass, driven primarily by VMware and Microsoft (at least at the datacenter level), everyone started taking a second look at this new type of managed hosting. Make no mistake: IaaS is managed hosting, but New and Improved. Anyone who had to deal with provisioning and deployment at AT&T or other large colocation data centers (no offense meant) knows that there was no “self-service” involved at all. Deployments were major projects with timelines that rivaled the glacial pace of most internal IT projects, a pace that led to the historic frustration levels that drove business units to run around their own IT and start buying IT services with a credit card at Amazon and Rackspace.

If you or your executives are starting to ask yourselves if you can get out of the day-to-day business of running an internal datacenter, you are in good company. Virtualization of compute, network and storage has led to ever-greater efficiency, helping you get more out of every dollar spent on hardware and staff. But it has also led to ever-greater complexity and a need to retrain your internal staff more frequently. Information Technology services are essential to a successful business, but they can no longer just be a cost center. They need to be a profit center; a cost of doing business for sure, but also a way to drive revenues and shorten time-to-market.

Where do you go for answers? Which service providers have a good track record for uptime, customer satisfaction, support excellence, and innovation? What technologies will help you integrate your internal IT with your “external” IT? Where can you turn for management and monitoring tools? What managed services can give you visibility into all parts of your IT infrastructure, handle a hybrid and distributed datacenter model, and address everything from firewalls to backups? Who can you ask?

There is an emerging cadre of thought leaders and technologists who have been preparing for this day: laying the foundation, developing the expertise, building partner relationships with service providers, and watching to see who is successful and growing…and who is not. GreenPages is in the very front line of this new cadre. We have been out in front with virtualization of servers. We have been out in front with storage and networking support for virtual datacenters. We have been out in front with private cloud implementations. And we are absolutely out in front of everyone in developing Cloud Management as a Service.

We have been waiting for you. Welcome. Now let’s get to work.

For more information on our Cloud Management as a Service offering click here

IT Multi-Tasking: I Was Told There’d Be No Math

By Ben Sawyer, Solutions Engineer

 

The term “multi-tasking” basically means doing more than one thing at once.  I am writing this blog while playing Legos w/ my son & helping my daughter find New Hampshire on the map.  But I am by no means doing more than one thing at once; I’m just quickly switching back & forth between the three, which is referred to as “context switching.”  Context switching is, in most cases, very costly.  There is a toll to be paid in productivity when ramping up on a task before you can actually tackle it.  In an ideal world (where I also have a 2 handicap), one has the luxury of doing a task from start to finish before starting a new one.  My son just refuses to let me have 15 minutes to write this blog because apparently building a steam roller right now is extremely important.  After a short while on a task you build up a sense of inertia, because you begin to really concentrate on the task at hand.  Since we know it’s nearly impossible to put ourselves in a vacuum & work on one thing only, the best we can hope for is to do “similar” things (i.e., things in the same context) at the same time.

Let’s pretend I have to email my co-worker that I’m late writing a blog, shovel my driveway, buy more Legos at Amazon.com, & get the mail (okay, I’m not pretending).  Since emailing & buying stuff online both require me to be in front of my laptop, and shoveling & going to the mailbox require me to be outside my house (my physical location), it is far more efficient to do the tasks in the same “context” at the same time.  Think of the time it takes to get all bundled up, or the time it takes to power on your laptop to get online.  Doing a few things at once usually means you will not do each task as well (its quality suffers) as you would have had you done it uninterrupted.  And the closer together in time you can do the pieces of a task, the better you will do it, since it will be “fresher” in your mind.  So…

  • Entire Task A + Entire Task B = Great Task A & Great Task B.
  • 1/2 Task A + Entire Task B + 1/2 Task A = Okay Task A & Excellent Task B.
  • 1/2 Task A + 1/2 Task B + 1/2 Task A + 1/2 Task B = Good Task A & Good Task B

Why does this matter?  Well, because the same exact concept applies to computers & the software we write.  A single processor can do only one thing at a time (let’s forget threads), but it can context switch extremely fast, which gives the illusion of multi-tasking.  But, like a human, a computer pays a cost for each context switch.  So, when you write code, try to do many “similar” things at the same time.  If you have a bunch of SQL queries to execute, open a connection to the database first, execute them all, & then close the connection.  If you need to call some VMware APIs, connect to vCenter first, make the calls, & close the connection.  Opening & closing connections to any system is often slow, so group your actions by context which, in this case, means by system.  This also makes the code easier to read.  Speaking of reading, here’s a great example of the cost of context switching.  The author Tom Clancy loves to switch characters & plot lines every chapter.  This makes following the story very hard, & whenever you put the book down & start reading again it’s nearly impossible to remember where you left off b/c there’s never, ever a good stopping point.  Tom Clancy’s writing is one of the best examples of how costly context switching is.
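The database example above can be sketched in a few lines of Python. This is only an illustration of the grouping idea, using the built-in sqlite3 module; the table name & queries are invented for the example:

```python
import sqlite3

# A batch of "similar" work: several queries against the same system.
# (The tasks table & its rows are made up for illustration.)
queries = [
    ("INSERT INTO tasks (name) VALUES (?)", ("write blog",)),
    ("INSERT INTO tasks (name) VALUES (?)", ("build steam roller",)),
    ("INSERT INTO tasks (name) VALUES (?)", ("find New Hampshire",)),
]

def run_each_separately(db_path):
    # Costly: pay the connect/disconnect toll once per query --
    # a "context switch" for every piece of work.
    for sql, params in queries:
        conn = sqlite3.connect(db_path)
        conn.execute(sql, params)
        conn.commit()
        conn.close()

def run_grouped(db_path):
    # Cheaper: open once, do all the similar work, then close once.
    conn = sqlite3.connect(db_path)
    try:
        for sql, params in queries:
            conn.execute(sql, params)
        conn.commit()
    finally:
        conn.close()
```

Both versions produce the same rows; the grouped version simply pays the connection cost once instead of once per query, & reads more clearly for it.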

So, what does this have to do with cloud computing?  Well, it ties in directly with automation & orchestration.  Automation is doing the work & orchestration is determining the order in which work is done.  Things can get complicated quickly when numerous tasks need to be executed & it’s not immediately apparent which need to run first & which depend on other tasks.  And, once that is all figured out, what happens when a task fails?  While software executes linearly, an orchestration engine provides the ability to run multiple pieces of software concurrently.  And that’s where things get complicated real fast.  Sometimes it may make sense to execute things serially (one at a time) vs. in parallel (more than one at a time) simply b/c it becomes very hard to manage more than one task at the same time.
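The serial-vs-parallel choice can be sketched with Python’s standard concurrent.futures module. The task names here are invented, & a real orchestration engine would add the dependency tracking & failure handling discussed above; this only shows the basic tradeoff for tasks that are independent of one another:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def provision(name):
    # Stand-in for an automation task (deploy a VM, configure a network, ...).
    time.sleep(0.05)
    return name + " done"

# Three tasks with no dependencies between them.
independent_tasks = ["vm1", "vm2", "vm3"]

def run_serially():
    # Serial: easy to reason about; total time is the sum of the parts.
    return [provision(t) for t in independent_tasks]

def run_in_parallel():
    # Parallel: faster for independent tasks, but ordering, shared state,
    # & failure handling now need explicit management.
    with ThreadPoolExecutor(max_workers=3) as pool:
        return list(pool.map(provision, independent_tasks))
```

Both functions return the same results (pool.map preserves input order); the parallel version finishes in roughly the time of the slowest task rather than the sum of all three, which is exactly why the added management complexity is sometimes worth it.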

We live in a world in which there are 10 different devices from which we can check our email and, if we want, we can talk to our smartphone & ask it to read our email to us.  Technology has made it easy for us to get information virtually any time & in any format we want.  However, it is because of this information overload that our brains have trouble separating the useful information from the white noise.  So we try to be more productive and we multi-task, but that usually means we’re becoming busier rather than more productive.  In blogs to follow, I will provide some best practices for determining when it makes sense to run more than one task at a time.  Now, if you don’t mind, I need to help my daughter find Maine…

 

Research and Markets: Potential of Cloud Computing

Research and Markets has announced the addition of the “Potential of Cloud Computing” report to their offering.

First there was the advent of the Internet, which changed the manner in which we do business forever. Now, with the advent of cloud computing, the world is ready to undergo another major technological shift.

Cloud computing is an Internet-based model that makes it possible to share information, software, and even computing resources across devices. The concept brings forth a new delivery model for IT services conducted over the Internet, generally involving the provision of scalable and virtualized resources. Not only does the model provide ease of access, but its speed and overall reliability are changing the IT industry rapidly.

Taiyou Research presents an analysis of the Potential of Cloud Computing.

Key Topics Covered:

1. Executive Summary

2. Overview of Cloud Computing

3. Market Profile

4. Benefits of Deploying the Cloud

5. Cost Benefits to Organizations from Cloud Systems

6. Cloud Computing Delivery Modes

7. Cloud Computing Deployment Models

8. Understanding the Concept behind Cloud Computing

9. Application Programming Interfaces

10. Cloud Computing Taxonomy

11. Deployment Process of the Cloud System

12. Technical Features of Cloud Systems

13. Understanding Cloud Clients

14. Regulatory Landscape & Investment

15. Commercializing of Cloud Computing

16. Concepts Related to Cloud Computing

17. Cloud Computing versus Other Computing Paradigms

18. Cloud Exchanges and Markets Worldwide

19. Research Projects on Cloud Computing

20. Cloud Computing Case Studies

21. Future of Cloud Computing

22. Market Leaders

23. Appendix

24. Glossary