Category Archives: Featured

What’s the Buzz? Recap of VMware Partner Exchange

I am just returning from the VMware Partner Exchange (PEX) conference, held at Mandalay Bay in Las Vegas.  While this was a partner-focused rather than customer-focused event, there were a few tidbits I felt made sense to share with you, so here we go….

Awards:

GreenPages was given the Virtualization of Business Critical Applications (VBCA) award for the Americas this year, which was a great achievement.  For those of you not aware, the VBCA program inside of VMware is focused on virtualizing the ‘big boy’ mission-critical applications such as Oracle, SQL, SAP, Exchange, etc. that have historically been both technically and politically difficult to virtualize.  In addition, GreenPages was given the same award on a global scale, which was very exciting as it marks the first time that GreenPages has won a global award from any vendor partner!  This also marks the 4th consecutive year that GreenPages has won an award from VMware (2010 – Desktop Virtualization, 2011 – Rainmaker, 2012 – Partner of the Year, and 2013 – Global VBCA). We won the VBCA this year primarily on the strength of several projects, including the virtualization of a 5,500-seat Exchange environment and of production databases at a major professional sporting league.

End User Computing Updates:

Horizon Suite v1 will be GA by the end of Q1 and will include Workspace, Mirage, and VIEW.

The Workspace product is a combination of what was formerly Horizon App Manager and Horizon Data (code name Octopus).  For those that do not know, the App Manager side is meant to be an enterprise app store, which includes SAML integration to various SaaS applications in addition to being able to launch ThinApp apps and VIEW desktops.  Support for launching XenApp published apps will come shortly after the GA date; figure an April/May timeframe.  If you are familiar with Citrix StoreFront, this is a very similar concept.  The Data (Octopus) side is one of the ‘Dropbox for the enterprise’ style apps.

Mirage will be updated to version 4.0 at GA time.  The key feature in this release is individual application layering.  For this, think Unidesk, as the concept is very similar, but this solution is really meant for physical devices vs. virtual or VDI, at least for now.  There is no real VIEW integration with Mirage yet, but that is coming.  The hold-up is mainly due to performance issues with running Mirage-based workloads in a shared VDI environment.  One more interesting tidbit on Mirage is that the license now also includes Fusion Pro.  The purpose behind this is to lay down a Mirage-based image on a Mac for corporate use and to maintain control and management of that image in a BYOD environment.

VIEW will be upgraded to v5.2, and the key features here are 3D graphics enhancements, including compatibility with some of the new NVIDIA server-based cards to offload hefty graphics and improve performance.  Additionally, HTML5 rendering of the desktops will come with this release.  This is the AppBlast piece that was shown at VMworld back in 2011.  Keep in mind that HTML5 has some pretty major limitations, so in most cases you’ll still want to deploy the full VIEW client; but in a pinch, if you are on some kiosk-type machine where installing a client is not possible, you’ll still be able to get to your desktop and perform basic tasks.  Lastly, scalability is enhanced and will continue to expand with future releases.

VMware also announced official Radware and Riverbed Stingray support and reference architectures for load balancing within a Horizon/VIEW environment.  F5 support has been around for some time already.

vCloud Suite Updates:

This one disappointed me, as I had expected to hear about some pretty significant changes to the way the suite is packaged.  That didn’t happen and seems to have been tabled for some reason, so there is nothing to announce here :( .

Cloud Credits:

Some of you may have heard of these already, but they are basically ‘chips’ that a customer purchases through GreenPages, which can then be redeemed at any of VMware’s vCloud VSPP service provider partners for public cloud IaaS services.

Virsto Acquisition:

Not much was said about this at the event, other than that the initial focus and use case will be VDI acceleration.  Stay tuned for more detail as I find it.

BC/DR:

The big topic here was vSphere Data Protection Advanced Edition (VDP-A).  Like regular VDP, this is based on the EMC Avamar engine, but it is more scalable to support larger environments.  It is currently missing some key features, such as replication, but VMware is diligently working to add them throughout this calendar year.

Integration Engineering Meeting:

I had the pleasure of meeting with some folks from this internal VMware team, and I will simply say this single meeting made the entire trip worthwhile.  I learned quite a bit about the team and how it works, so I’ll give you an overview.  First off, in order to be a member of this team you must have a minimum of 10 years of employment at VMware.  Given that VMware has only been around since the late 90s, that requirement greatly shrinks the pool of potential team members, but trust me when I say the guys on this team know their s**t.  Their primary charter is to take the point of view of an external customer, go out and meet with actual customers, and provide very candid feedback to the various product management teams inside VMware.  I love this team because they are a no-bull$hit group of people.  If something sucks, they will say it; likewise, if something is stellar they will say that as well.  Unfortunately, I cannot share details of what we discussed, as the majority of it was future/NDA-type material, but I think it is awesome that this team exists inside of VMware because they really do help make the products better.  As an example, some of you may be aware of the tool recently made available to make the process of applying SSL certificates to the various VMware architecture components much easier; it was this team that pushed for the tool and helped get it green-lighted.

Ok, that’s it for now… Back to work!

To learn more about how GreenPages can help you with your VMware environment, fill out this form!

BYOD: Quick Tips and Facts

By Francis Czekalski, Enterprise Consultant

There’s no doubt that BYOD is a top buzzword and priority for IT decision makers in 2013.  This is certainly a complex issue that requires a lot of planning and commitment if your organization expects positive results. Below are a couple of quick points on BYOD that your organization should keep in mind when implementing and monitoring a policy.

  • BYOD programs tend to increase the lifespan of devices, because people take better care of equipment they chose themselves.
  • Security is a HUGE issue around BYOD. BYOD programs can increase security but, when not monitored correctly, can actually open a whole new pathway for data leakage.
  • Offline computing still tends to be an issue, so some hybrid model needs to be adopted.
  • It is often believed that with a BYOD program you no longer need to support the clients; this is simply not the case. User productivity will demand some hands-on support of the end user’s device.

A couple of interesting findings from a recent Dell study (http://tabtimes.com/news/ittech-stats-research/2013/01/22/study-it-managers-must-embrace-byod-or-risk-being-left-behind)…

  • 70% of IT decision makers believe BYOD helps boost employee productivity and customer response time.
  • 59% of IT decision makers believe they would be at a competitive disadvantage if they did not embrace personally owned devices.
  • 56% of IT decision makers believe that BYOD has completely changed their company’s culture.

If you’d like some more information on BYOD and mobile device management, download this free webinar.

What’s your opinion on BYOD? Has your organization implemented a policy? If not, do you plan on implementing one? Why or why not?

The Newest Data-Storage Device is DNA?

By Randy Weis

Molecular and DNA Storage Devices: “Ripped from the headlines!”

  • Researchers used synthetic DNA, encoded to create the zeros and ones of digital technology.
  • MIT scientists achieve a molecular data storage breakthrough.
  • DNA may soon be used for storage: all of the world’s information, about 1.8 zettabytes, could be stored in about four grams of DNA.
  • Harvard stores 70 billion books using DNA: a research team stores 5.5 petabits per cubic millimeter in a DNA storage medium.
  • IBM is using DNA and nanotech to build next-generation chips: DNA works with nanotubes to build more powerful, energy-efficient, easy-to-manufacture chips.

Don’t rush out to your reseller yet! This stuff is more in the realm of science fiction at the moment, although the reference links at the end of this post are to serious scientific journals. Out here at the bleeding edge of storage technology, it is tough to find commercial or even academic applications for the very latest research, but this kind of storage technology, along with quantum storage and holographic storage, will literally change the world. Wearable, embedded storage technology for consumers may be a decade or more down the road, but you know that there will be military and research applications long before Apple gets this embedded in the latest 100 TB iPod. Ok, deep breath—more realistically, where will this technology be put into action first? Let’s look at how it works.

DNA is a three-dimensional medium, dense enough in principle to hold zettabytes of data in a few grams of material. Some of this work is being done with artificial DNA injected into genetically modified bacteria (from a Japanese research project last year); a commercially available genetic sequencer was used for this.

More recently, researchers in Britain encoded the “I Have a Dream” speech and some Shakespeare sonnets in synthetic DNA strands. Since DNA can be recovered from 20,000-year-old woolly mammoth bones, this has far greater potential for long-term retrievable storage than, say, optical disks (notorious back in the 90s for delaminating after 5 years).

Reading the DNA is more complicated and expensive, and the “recording” process is very slow. It should be noted that no one is suggesting storing data in a living creature at this point.

Molecular storage is also showing promise: binding different molecules into a “supramolecule” can store up to 1 petabyte per square inch. But this is a storage medium in two dimensions, not three. It still requires temperatures of -9 degrees Celsius (considered “near room temperature” by physicists). This work was done in India and Germany. IBM is working with DNA and carbon nanotube “scaffolding” to build nano devices in its labs today.

Where would this be put to work first? Google and other search engines, for one. Any storage manufacturer would be interested—EMC DNA, anyone? Suggested use cases: globally and nationally important information of “historical value,” and medium-term archiving of information of high personal value that you want to preserve for a couple of generations, such as a wedding video for grandchildren to see.  The slow process of laying the data down and then decoding it makes the archival use case the most likely. The entire Library of Congress could be stored in something the size of a couple of sugar cubes, for instance.
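
As a rough sanity check on that last claim, here is a quick back-of-the-envelope calculation in Python. The density is the Harvard figure cited above; the sugar-cube volume and the ~15 PB size of the Library of Congress’s digital holdings are my own ballpark assumptions.

```python
# Back-of-the-envelope check: could the Library of Congress fit in a
# couple of sugar cubes of DNA? The density is the Harvard figure cited
# above; the cube volume and LoC size are ballpark assumptions.

DENSITY_PBIT_PER_MM3 = 5.5     # demonstrated: 5.5 petabits per cubic mm
BITS_PER_PETABIT = 1e15
BITS_PER_PETABYTE = 8e15

sugar_cube_mm3 = 1000          # assume a ~1 cm^3 cube = 1,000 mm^3
cube_pb = (DENSITY_PBIT_PER_MM3 * BITS_PER_PETABIT * sugar_cube_mm3
           / BITS_PER_PETABYTE)

loc_pb = 15                    # assumed digital Library of Congress, in PB
print(f"One sugar cube of DNA: ~{cube_pb:,.0f} PB")
print(f"Copies of a {loc_pb} PB Library of Congress: ~{cube_pb / loc_pb:.0f}")
```

At the demonstrated density, a single cube would hold the collection dozens of times over, so the “couple of sugar cubes” figure is, if anything, conservative.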

What was once unthinkable (or at least only in the realm of science fiction) has become reality in many cases: drones, handheld computers with more processing power than the systems that sent man to the moon, and terabyte storage in home computers. The future of data storage is very bright and impossible to predict. Stay tuned.

The Shakespeare sonnets work appears in Nature: “Towards practical, high-capacity, low-maintenance information storage in synthesized DNA,” http://www.nature.com/nature/journal/vaop/ncurrent/full/nature11875.html#/ref10

Click here to learn more about how GreenPages can help you with your organization’s storage strategy

Other References:

Researchers used synthetic DNA encoded to create the zeros and ones of digital technology:
http://www.usatoday.com/story/news/nation/2013/01/23/dna-information-storage/1858801/

MIT Scientists Achieve Molecular Data Storage Breakthrough:
http://idealab.talkingpointsmemo.com/2013/01/mit-scientists-achieve-molecular-data-storage-near-room-temperature.php

DNA may soon be used for storage:
http://www.computerworld.com/s/article/9236176/DNA_may_soon_be_used_for_storage?source=CTWNLE_nlt_storage_2013-01-28

Harvard stores 70 billion books using DNA:
http://www.computerworld.com/s/article/9230401/Harvard_stores_70_billion_books_using_DNA

IBM using DNA, nanotech to build next-generation chips:
http://www.computerworld.com/s/article/9136744/IBM_using_DNA_nanotech_to_build_next_generation_chips

Be Nimble, Be Quick: A CRN Interview with GreenPages’ CEO

CRN Senior Editor and industry veteran Steve Burke sat down with GreenPages’ CEO Ron Dupler to discuss shifts in ideology in the industry as well as GreenPages’ new Cloud Management as a Service (CMaaS) offering. The interview, which was originally posted on CRN.com, is below. What are your thoughts on Ron’s views of the changing dynamics of IT?

CRN: Talk about your new cloud offering.

Dupler: It is available today. We can support physical, virtual and cloud-based infrastructure through a single pane of glass today. We are actually using the technology internally as well.

There is another part of CMaaS that goes into cloud governance and governance models in a cloud world and cloud services brokerage. That is what we are integrating and bringing to market very soon.

CRN: How big a game-changer is CMaaS?

Dupler: I think we are going to be well out in front of the market with this. I personally believe we can go have discussions right now, and bring technologies to bear to support those discussions, that no one else in the industry can.

That said, we know that the pace of innovation is rapid and we expect other organizations are trying to work on these types of initiatives as well. But we believe we’ll be out front certainly for this year.

CRN: How does the solution provider business model change from 2013 to 2018?

Dupler: The way we are looking at our job and the job of the solution provider channel over the next several years through 2018 is to provide IT plan, build, run and governance services for the cloud world.

The big change is that the solution provider channel for many years has made their money off the fact that infrastructure fundamentally doesn’t work very well. And it has been all about architecting and integrating physical technologies and software platforms to support the apps and data that really add value for the business.

When we move to the cloud world, this is now about integrating service platforms as opposed to physical technologies. So it is about architecting and integrating on-premise and cloud service platforms really to create IT-as-a-Service to support the apps and data for the platform. That is the transition that is under way.

CRN: Does the GreenPages brand become bigger than the vendor brand, and how does that affect vendor relations in the CMaaS era?

Dupler: We continue to closely evaluate all our key partner relationships. That is managed very closely. What we try to do is make sure we are partnered with the right companies that are really leading this transformation. And our number one partner, because they are driving this transformation, is VMware. With this whole software-defined data center concept and initiative, VMware has really laid out a great vision for where this market is going.

CRN: There is a prevailing view that solution providers need to go big or go home, with many solution providers selling their businesses. Do you see scale becoming more important — that you need to scale?

Dupler: No. People have been saying that for years. It is all about customer value and the talent of your team: whether you are adding value for clients. You need to be able to service the client community. And they care about quality of service and the ability of your team, not necessarily that you are huge. I have been down the M&A road and, as you know, we do M&A here on a smaller scale. And I will tell you there are pros and cons to it. You aggregate talent, but you also have the inertia of pulling companies together and integrating companies and people and executive teams and getting through that.

I absolutely do not subscribe and never have subscribed to the fact that size in itself gives competitive advantage. There are some advantages, but there are also costs to doing that.

CRN: What is the ultimate measure for success in this new world?

Dupler: It is a combination of three things. The first is technology, and I will firmly say it doesn’t have to be homegrown; it could be homegrown or it could be commercial off-the-shelf. It is the way the technology is leveraged, and having the technologies with the ability to drive the services you are trying to provide. What we are trying to do with CMaaS is single-pane-of-glass management for physical, virtual and cloud infrastructure, as I have mentioned, as well as cloud service brokerage and cloud governance services. You can either develop those on your own, or integrate partner technologies, or both; but you need the supporting technology base, and you need people, and you need process.

CRN: How big a transition is this, and what percentage of VARs do you think will make it to 2018?

Dupler: The companies that I think are going to have a huge challenge are the big product-centric organizations right now. The DMR [direct marketer] community. They have some big challenges ahead of them over time. All these guys are trying to come up with cloud strategies as well.

Right now there is a premium on being nimble. That is the word of the day for me in 2013. Nimble. You need nimble people and you need a nimble business organization because things are moving faster than they ever have. You just have to have a culture and people that can change quickly.

Going back to is it good just to be big? Sometimes it is hard to maintain [that agility] as you get really big. The magnitude of the change that is required to succeed over the next five years is extremely significant. And people that aren’t already under way with that change have a big challenge ahead of them.

CRN: What is the pace of change like managing in this business as a CEO vs. five years ago?

Dupler: It is exponential.

CRN: Is it tougher to manage in an environment like this?

Dupler: You say it is tougher, but there is more opportunity than ever, because of the pace of change, to really differentiate yourself. So it can be challenging, but it is also very stimulating and exciting.

CRN: Give me five tips you need to thrive in 2018.

Dupler: First of all, you need hybrid cloud management capabilities.

Number two, you need cloud services brokerage capabilities. It is ultimately the ability to provide a platform for clients to acquire as-a-service technologies from GreenPages: to be able to sell the various forms of infrastructure, platform and software as a service.

Number three is cloud architecture and integration capabilities.

Fourth, product revenue and profit streams can no longer be central to supporting the business. The service model needs to become a profitable, thriving, stand-alone entity without the product revenue streams.

The fifth thing is the biggest challenge. Migrating your technology organization is one thing; the next thing you need to do is create a services-based sales culture.

CRN: Talk about how big a change that is.

Dupler: It is a huge change. Again, if people are not already under way with this change, they have a huge challenge ahead of them. Everybody I speak with in the industry — whether it is at [UBM Tech Channel’s] BoB conference or at partner advisory councils — is challenged with this right now. The sales force in the solution provider industry has been old-paradigm, physical-technology-based, and it needs to move into a world where it is leading with professional and managed services. And that game is very different. So I think there are two ways to address that: one is hiring new types of talent; the other is helping the talent we all have transform. It is going to be a combination of both that gets us ultimately where we need to be.

CRN: What do you think is the biggest mistake being made right now by competitors or vendors?

Dupler: What I see is people who are afraid to embrace the change that is under way and are really hanging on to the past. The biggest mistake I see right now is people continuing to evangelize solutions to customers that aren’t necessarily right for the customer, but conform to what they know and drive the most profit for their own organizations.

Short-term gain isn’t going to drive long-term customer value. And we need to lead the customers forward through this transformation as opposed to perpetuating the past. The market needs leadership right now. The biggest challenge for people is not moving fast enough to transform their businesses.

This interview was originally posted on CRN.com

To learn more about GreenPages’ CMaaS offering click here!

Disaster Recovery in the Cloud, or DRaaS: Revisited

By Randy Weis

The idea of offering Disaster Recovery services has been around as long as SunGard or IBM BCRS (Business Continuity & Resiliency Services). Disclaimer: I worked for the company that became IBM Information Protection Services in 2008, a part of BCRS.

It seems inevitable that cloud computing and cloud storage should have an impact on the kinds of DR solutions that small, medium and large companies would find attractive and that would fit their requirements. Cloud-based DR services are not taking the world by storm, however. Why is that?

Cloud infrastructure seems perfectly suited for economical DR solutions, yet I would bet that none of the people reading this blog has found a reasonable selection of cloud-based DR services in the market. That is not to say that there aren’t DR “As a Service” companies, but the offerings are limited. Again, why is that?

Much like cloud computing in general, the recent emergence of enabling technologies was preceded by a relatively long period of commercial product development. In other words, virtualization of computing resources promised “cloud” long before we actually could make it work commercially. I use the term “we” loosely… Seriously, GreenPages announced a cloud-centric solutions approach more than a year before vCloud Director was even released. Why? We saw the potential, but we had to watch, evaluate, and observe the real-world performance of the emerging commercial implementations of self-service computing in the virtualized datacenter marketplace. We are now doing the same thing in the evolving solutions marketplace around derivative applications such as DR and archiving.

I looked into helping put together a DR solution leveraging cloud computing and cloud storage offered by one of our technology partners that provides IaaS (Infrastructure as a Service). I had operational and engineering support from all parties in this project, and we ran into a few significant obstacles that do not seem to be resolved anywhere in the industry.

Bottom line:

  1. A DR solution in the cloud, involving recovering virtual servers in a cloud computing infrastructure, requires administrative access to the storage as well as the virtual computing environment (like being in vCenter).
  2. Equally important, if the solution involves recovering data from backups, is the requirement that there be a high-speed, low-latency (I call this “back-end”) connection between the cloud storage where the backups are kept and the cloud computing environment. This is only present in Amazon at last check (a couple of months ago), and you pay extra for that connection. I also call this “locality.”
  3. The Service Provider needs an operational workflow to support all this. Everything I worked out with our IaaS partners was a manual process that went way outside normal workflow and ticketing. The interfaces for the customer to access computing and storage were separate and radically different; you couldn’t even see the capacity you consumed in cloud storage without opening a ticket. From the SP side, notification of the DR tasks they would need to perform for the customer didn’t exist. When you get to billing, forget it. Everyone admitted that this was not planned for at all in the cloud computing and operational support design.

Let me break this down:

  • Cloud Computing typically has high-speed storage to host the guest servers.
  • Cloud Storage typically has “slow” storage, on separate systems and sometimes in separate locations from the cloud computing infrastructure. This is true with most IaaS providers, although some Amazon sites have S3 and EC2 in the same building and have a network built to connect them (LOCALITY).

Scenario 1: Recovering virtual machines and data from backup images

Scenario 2: Replication based on virtual server-based tools (e.g. Veeam Backup & Replication) or host-based replication

Scenario 3: SRM, array or host replication

Scenario 1: Backup Recovery. I worked hard on this with a partner. This is how it would go:

  1. Back up VMs at the customer site; send the backup, or a copy of it, to cloud storage.
  2. Set up a cloud computing account with an AD server and a backup server.
  3. Connect the backup server to the cloud storage backup repository (first problem).
    • Unless the cloud computing system has a back-end connection at LAN speed to the cloud storage, this is a showstopper. It would take days to do this without a high degree of locality (see the rough estimate after this list).
    • Provider solutions, when asked about this:
      • Open a trouble ticket to have the backups dumped to USB drives, shipped or carried to the cloud computing area, and connected into the customer workspace. Yikes.
      • “We will build a back-end connection where we have both cloud storage and cloud computing in the same building”—not possible in every location, so the “access anywhere” part of a cloud wouldn’t apply.

  4. Restore the data to the cloud computing environment (second problem).
    • What is the “restore target”? If the DR site were a typical hosted or colo site, the customer backup server would have the connection and authorization to recover the guest server images to the datastores, and the ability to create additional datastores. In vCenter, the Veeam server would have the vCenter credentials and access to the vCenter storage plugins to provision the datastores as needed and to start up the VMs after restoring/importing the files. In a cloud computing service, your backup server does NOT have that connection or authorization.
    • How can the customer backup server get the rights to import VMs directly into the virtual VMware cluster? The process to provision VMs in most cloud computing environments is to use your templates, their templates, or “upload” an OVF or other file format. This won’t work with a backup product such as Veeam or CommVault.
  5. Recover the restored images as running VMs in the cloud computing environment (third problem), tied to item #4.
    • Administrative access to provision datastores on the fly and to turn on and configure the machines is not there. The customer (or GreenPages) doesn’t own the multitenant architecture.
    • The use of vCloud Director ought to be an enabler, but the storage plugins, and the rights to import into storage, don’t really exist for vCloud. Networking changes need to be accounted for, and scripted if possible.
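
To put rough numbers on why step 3 above is a showstopper without locality, here is a quick Python estimate; the backup size, link speeds, and efficiency factor are illustrative assumptions, not measurements from any provider.

```python
# Rough restore-time estimate for moving a backup set from cloud storage
# to cloud computing. All inputs are illustrative assumptions.

def transfer_hours(data_tb: float, link_bps: float,
                   efficiency: float = 0.7) -> float:
    """Hours to move data_tb terabytes over a link of link_bps bits/sec,
    assuming a fraction of rated speed survives protocol overhead."""
    bits = data_tb * 1e12 * 8
    return bits / (link_bps * efficiency) / 3600

backup_tb = 5.0   # assumed size of the customer's backup set
for label, bps in [("100 Mbps WAN (no locality)", 100e6),
                   ("10 Gbps back-end LAN (locality)", 10e9)]:
    hours = transfer_hours(backup_tb, bps)
    print(f"{label}: {hours:,.0f} hours ({hours / 24:.1f} days)")
```

With no back-end connection, the restore window is measured in days; with locality, it drops to an hour or two. That is the whole argument in two lines of output.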

Scenario 2: Replication by VM. This has cost issues more than anything else.

    • If you want to replicate directly into a cloud, you will need to provision the VMs and pay for their resources as if they were “hot.” It would be nice if there were a lower “DR tier” for pricing: if the VMs are for DR, you don’t get charged full rates until you turn them on and use them for production. (A toy cost comparison follows this list.)
      • How do you negotiate that?
      • How does the SP know when they get turned on?
      • How does this fit into their billing cycle?
    • If it is treated as a hot site (or warm site), then the cost of the DR site equals that of production until you solve these issues.
    • Networking is an issue, too, since you don’t want to turn that on until you declare a disaster.
      • Does the SP allow you to turn up networking without a ticket?
      • How do you handle DNS updates if your external access depends on root server DNS records being updated—really short TTLs? Yikes, again.
    • Host-based replication (e.g. WANsync, VMware) requires a host you can replicate to. Your own host. The issues, again, are cost and scalability.
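
To illustrate the pricing point, here is a toy cost model in Python. Every rate in it is an invented placeholder, not any provider’s actual price list; the point is the gap between hot-site billing and a hypothetical DR tier.

```python
# Toy DR cost model: standby VMs billed at full "hot" rates vs. a
# hypothetical discounted "DR tier" that only bills full price for
# hours when the VMs are actually running. All rates are invented.

HOT_RATE = 0.12          # $/VM-hour, hypothetical production rate
DR_TIER_RATE = 0.03      # $/VM-hour, hypothetical standby rate
HOURS_PER_MONTH = 730

def monthly_cost(vms: int, standby_rate: float,
                 active_hours: int = 0) -> float:
    """Standby billing plus full production billing for active hours."""
    standby = vms * standby_rate * (HOURS_PER_MONTH - active_hours)
    active = vms * HOT_RATE * active_hours
    return standby + active

vms = 50
print(f"Hot-site pricing:   ${monthly_cost(vms, HOT_RATE):,.0f}/month")
print(f"DR-tier pricing:    ${monthly_cost(vms, DR_TIER_RATE):,.0f}/month")
print(f"DR tier + 48h test: ${monthly_cost(vms, DR_TIER_RATE, 48):,.0f}/month")
```

Until something like that DR tier exists, and the provider can tell when the VMs get turned on and bill accordingly, the warm-site cost simply equals the production cost.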

Scenario 3: SRM. This should be baked into any serious DR solution, from a carrier or service provider, but many of the same issues apply.

    • SRM based on array replication has complications. Technically, this can be solved by the provider by putting (for example) EMC VPLEX and RecoverPoint appliances at every customer production site, so that you can replicate from dissimilar storage to the SP datacenter. But they need to set up this many-to-one relationship on arrays that are part of the cloud computing solution, or at least of a DR cloud computing cluster. Most SPs don’t have this. There are other brands/technologies to do this, but the basic configuration challenge remains: many-to-one replication into a multi-tenant storage array.
    • SRM based on VMware host replication has administrative access issues as well. SRM at the DR site has to either accommodate multi-tenancy, or each customer gets their own SRM target. Also, you need a host target. Do you rent it all the time? You have to, since you can’t share one in a multi-tenant environment. Cost and scalability, again!
    • Either way, now the big red button gets pushed. Now what?
      • All the protection groups exist on storage and in cloud computing. You are now paying for a duplicate environment in the cloud, not an economically sustainable approach unless you have a “DR Tier” of pricing (see Scenario 2).
      • All the SRM scripts kick in—VMs come up in order in protection groups, IP addresses and DNS are updated, CPU loads and network traffic climb… what is the impact?
      • How does that button get pushed? Does the SP need to push it? Can the customer do it?

These are the main issues as I see them, and there is still more to it. Using vCloud Director is not the same as using vCenter. Everything I’ve described was designed to be used in a vCenter-managed system, not in a multi-tenant system with fenced-in rights and networks and shared storage infrastructure. The APIs are not there, and if they were, imagine the chaos and impact of ad hoc DR tests hitting production cloud computing systems, unmanaged and uncontrolled by the service provider. What if a real disaster hit in New England, and a hundred customers needed to spin up all their VMs in a few hours? Those customers aren’t all in one datacenter, but if one provider that set this up had dozens of them, that is a huge hit. The provider needs to have all that capacity in reserve, or syndicate it like IBM or SunGard do. That is the equivalent of thin-provisioning your datacenter.
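
To see why that amounts to thin-provisioning the datacenter, here is a toy capacity model in Python; every number in it is invented for illustration.

```python
# Toy oversubscription math for a DR provider: many customers reserve
# DR capacity, but the provider only stocks hardware for an assumed
# worst case. All numbers are invented for illustration.

customers = 100
vms_per_customer = 50
vms_per_host = 40                 # assumed consolidation ratio

total_reserved_vms = customers * vms_per_customer
hosts_fully_backed = total_reserved_vms / vms_per_host

concurrent_fraction = 0.20        # assume a disaster hits 20% of customers
hosts_thin = total_reserved_vms * concurrent_fraction / vms_per_host

print(f"Hosts if every reservation is backed: {hosts_fully_backed:.0f}")
print(f"Hosts if provisioned for a 20% event: {hosts_thin:.0f}")
# A regional event that exceeds the assumed fraction means customers
# cannot all spin up -- exactly the risk described above.
```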

This conversation, like many I’ve had in the last two years, ends somewhat unsatisfactorily with the conclusion that there is no clear solution—today. The journey to discovering or designing DRaaS is important, and it needs to be documented, as we have done here with this blog and in other presentations and meetings. The industry will overcome these obstacles, but the customer must remain informed and persistent. The goal of an economically sustainable DRaaS solution can only be achieved through market pressure and creative vendors. We will do our part by being your vigilant and dedicated cloud services broker and solution services provider.

Is Cloud Computing Ready for Prime Time?

By John Dixon, Senior Solutions Architect

A few weeks ago, I took part in another engaging tweetchat on Cloud Computing. The topic: is cloud computing ready for enterprise adoption? You can find the transcript here.

As usual with tweetchats hosted by CloudCommons, five questions are presented a few days in advance of the event. This time around, the questions were:

  1. Is Public Cloud mature enough for enterprise adoption?
  2. Should Public Cloud be a part of every business’s IT strategy?
  3. How big of a barrier are legacy applications and hardware to public cloud adoption?
  4. What’s the best way to deal with cloud security?
  5. What’s the best way to get started with public cloud?

As far as Question #1 goes, the position of most people in the chat session this time was that public cloud is mature enough for certain applications in enterprises today. The technology certainly exists to run applications “in the cloud,” but regulations and policies may not be ready to handle an application’s cloud deployment. Another interesting observation from the tweetchat was that most enterprises are indeed running applications “in the cloud” right now. GreenPages considers applications such as Concur and Salesforce.com to be running “in the cloud,” and of course, many organizations large and small run these applications successfully. I’d also consider ADP a cloud application, and many organizations use ADP for payroll processing.

Are enterprises mature enough for cloud computing?

Much of the discussion during question #1 turned the question on its head – the technology is there, but enterprises are not ready to deploy applications there. GreenPages’ position is that, even if we assume that cloud computing is not yet ready for prime time, it certainly will be soon. Organizations should prepare for this eventuality by gaining a deep understanding of the IT services they provide and how much each IT service costs. When one or more of your IT services can be substituted with one that runs (reliably and inexpensively) in the cloud, will your company be able to make the right decision and take advantage of that condition? Also, another interesting observation: some public cloud offerings may be enterprise-ready, but not all public cloud vendors are enterprise-grade. We agree.

Should every business have a public cloud strategy?

Most of the discussion here pointed to a “yes” answer – or at least that an organization’s strategy will eventually, by default, include consideration of public cloud. We think of cloud computing as a sourcing strategy in and of itself – especially when thinking of IaaS and PaaS. Even now, IaaS vendors are essentially providers of commodity IT services. Most commonly, an IaaS vendor can provide you with an operating system instance: Windows or Linux. For IaaS, the degree of abstraction is very high, as an operating system instance can be deployed on a wide range of systems – physical, virtual, paravirtual, etc. The consumer of these services doesn’t mind where the OS instance is running, as long as it is performing to the agreed SLA. Think of Amazon Web Services here: depending on the application that I’m deploying, there is little difference whether I’m using infrastructure that is running physically in Northern Virginia or in Southern California. At GreenPages, we think that this degree of abstraction will move into the enterprise as corporate IT departments evolve to behave more like service providers… and probably evolve into brokers of IT services – supported by a public cloud strategy.

Security and legacy applications

Two questions revolved around legacy applications and security as barriers to adoption. Every organization has a particular application that will not be considered for cloud computing. The arguments are similar to the reasons why we never (or are only just beginning to) virtualize legacy applications: sometimes, virtualizing specialized hardware is, well, really hard and just not worth the effort.

What’s the best way to get started with public cloud?

“Just go out and use Amazon” was a common response to this question, both in this particular tweetchat and in other discussions. Indeed, trying Amazon for some development activities is not a bad way to evaluate the features of public cloud. In our view, though, the best way to get started with cloud is to begin managing your datacenter as if it were a cloud environment, with a tool that can manage traditional and cloud environments the same way. That includes legacy applications, applications with specialized hardware, and everything virtual, physical, or paravirtual. Begin to monitor and measure your applications in a consistent manner. This way, when an application is deployed to a cloud provider, your organization can continue to monitor, measure, and manage that application using the same method. For those of us who are risk-averse, this is the easiest way to get started with cloud! How is this done? We think you’ll see that Cloud Management as a Service (CMaaS) is the best way.
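
To make “managing everything the same way” concrete, here is a generic sketch in Python: one health-check loop that measures an on-premise application and a cloud-hosted application identically. This is an illustration of the concept, not GreenPages’ CMaaS; the application names and endpoints are hypothetical.

```python
# Generic sketch: one monitoring loop, one measurement method, no matter
# where the application runs. Names and endpoints are hypothetical.
import time
import urllib.request

APPS = [
    {"name": "legacy-erp", "where": "on-premise",
     "url": "http://10.0.1.20/health"},
    {"name": "web-store", "where": "public-cloud",
     "url": "http://web-store.example.com/health"},
]

def check(app: dict, timeout: float = 5.0) -> dict:
    """Measure availability and latency the same way for every app."""
    start = time.time()
    try:
        with urllib.request.urlopen(app["url"], timeout=timeout) as resp:
            up = resp.status == 200
    except OSError:
        up = False
    return {"name": app["name"], "where": app["where"], "up": up,
            "latency_ms": round((time.time() - start) * 1000)}

for app in APPS:
    print(check(app))   # in practice, feed one dashboard/alerting pipeline
```

When an application later moves from on-premise to a cloud provider, only its `url` and `where` fields change; the measurement, and everything downstream of it, stays the same.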

Would you like to learn more about our new CMaaS offering? Click here to receive some more information.

Getting Out of the IT Business

Randy Weis, Director of Solutions Architecture

Strange title for a blog from an IT solutions architect? Not really.

Some of our clients—a lumber mill, a consulting firm, a hospital—are starting to ask us how to get out of “doing IT.” What do these organizations all have in common? They all have a history of challenges with effective technology implementation and application projects, leading the CIO/CTO/CFO to ask, “Why are we in the IT business? What can we do to offload the work, eliminate the capital expenses, keep operating expenses down, and focus our IT efforts on making our business more responsive to shifting demands and reaching more customers with a higher satisfaction rate?”

True stories.

If you are in the business of reselling compute, network, or storage gear, this might not be the kind of question you want to hear.

If you are in the business of consulting on technology solutions to meet business requirements, this is exactly the kind of question you should be preparing to answer. If you don’t start working on those answers, your business will suffer for it.

Technology has evolved to the point where the failed marketing terms of grid and utility computing are starting to come back to life—and we are not talking about zombie technology. Cloud computing used to be about as real as grid or utility computing, but “cloud” is no longer just a marketing term. We now have new, proven, and emerging technologies that actually can support a utility model for information technology. Corporate IT executives are starting to accept that the new cloud computing infrastructure-as-a-service is reliable (recent AWS outages notwithstanding), predictable, and useful to a corporate strategy. Corporate applications still need to be evaluated for requirements that restrict deployment and implementation strategies: latency, performance, concerns over satisfying legal/privacy/regulatory issues, and so on. However, the need to have elastic, scalable, on-demand IT services that are accessible anywhere is starting to force even the most conservative executives to look at the cloud for offloading non-mission-critical workloads and associated costs (staff, equipment, licensing, training, and so on). Mission-critical applications can still benefit from cloud technology, perhaps only as internal or private cloud, but the same factors still apply: reduce time to deploy or provision, automate workflow, scale up or down as dictated by business cycles, and push provisioning back out into the business (while holding those same units accountable for the resources they “deploy”).

Infrastructure as a service is really just the latest iteration of self-service IT. Software as a service has been with us for some time now, and in some cases is the default mode—CRM is the best example (e.g. Salesforce). Web-based businesses have been virtualizing workloads and automating the deployment of capacity for some time now as well, and development and testing have been the “low-hanging fruit” of both virtualization and cloud computing. However, when the technology of virtualization reached a certain critical mass, primarily driven by VMware and Microsoft (at least at the datacenter level), everyone started taking a second look at this new type of managed hosting. Make no mistake—IaaS is managed hosting, but New and Improved. Anyone who had to deal with provisioning and deployment at AT&T or other large colocation data centers (no offense meant) knew that there was no “self-service” involved at all. Deployments were major projects with timelines that rivaled the glacial pace of most internal IT projects—a pace that led to the historic frustration levels that drove business units to go around their own IT and start buying IT services with a credit card at Amazon and Rackspace.

If you or your executives are starting to ask yourselves if you can get out of the day-to-day business of running an internal datacenter, you are in good company. Virtualization of compute, network and storage has led to ever-greater efficiency, helping you get more out of every dollar spent on hardware and staff. But it has also led to ever-greater complexity and a need to retrain your internal staff more frequently. Information Technology services are essential to a successful business, but they can no longer just be a cost center. They need to be a profit center; a cost of doing business for sure, but also a way to drive revenues and shorten time-to-market.

Where do you go for answers? What service providers have a good track record for uptime, customer satisfaction, support excellence and innovation? What technologies will help you integrate your internal IT with your “external” IT? Where can you turn to for management and monitoring tools? What managed services can help you with gaining visibility into all parts of your IT infrastructure, that can deal with a hybrid and distributed datacenter model, that can address everything from firewalls to backups? Who can you ask?

There is an emerging cadre of thought leaders and technologists who have been preparing for this day: laying the foundation, developing the expertise, building partner relationships with service providers, and watching to see who is successful and growing… and who is not. GreenPages is on the very front line of this new cadre. We have been out in front with virtualization of servers. We have been out in front with storage and networking support for virtual datacenters. We have been out in front with private cloud implementations. We are absolutely out in front of everyone in developing Cloud Management as a Service.

We have been waiting for you. Welcome. Now let’s get to work.

For more information on our Cloud Management as a Service offering, click here

2013 Outlook: A CIO’s Perspective

Journey to the Cloud recently sat down with GreenPages Chief Information and Technology Officer Kevin Hall to talk about the outlook for 2013.

JTC: As CIO at GreenPages what are your major priorities heading into 2013?

KH: As CIO, my major priorities are to continue to rationalize and prioritize within the organization. By rationalize I mean looking at what it is we think the business needs vs. what it is we have, and by prioritize I mean looking at where there are differences between what we have and what we need and then building and operationalizing to get what we need into production.  We are working through that process right now. More specifically, we’re actively trying to do all of this in a way that will simultaneously help the business have more velocity and, as a percentage of revenue, cost less. We’re trying to do more with less, faster.

JTC: What do you think will be some of the biggest IT challenges CIOs will face in 2013?

KH: I think number one is staying relevant to their business. A huge challenge is being able to understand what it is the business actually needs.  Another big challenge is accepting the fact that the business has to actively participate with IT in building out IT. In other words, we have to accept that our business users are often going to know about technologies that we don’t, or are going to be asking questions that we don’t have the answers for. All parties will have to work together to figure it out.

JTC: Any predictions for how the IT landscape will look in 2013 and beyond?

KH: Overall, I think there is a very positive outlook for IT as we move into the future. Whether or not the economy turns around (and I believe it is going to), all businesses are seeking to leverage technology. Based on our conversations with our customers, no one has made any statements to say “hey, we’ve got it all figured out, there is nothing left to do.” Everyone understands that more can be done and that we aren’t at the end of driving business value with IT. More specifically, one thing I would have people keep an eye on is the software-defined datacenter. Important companies like VMware, EMC, and Cisco, amongst others, are rapidly moving to a place where the datacenter is reduced to an icon, so that just as easily as we can spin up virtual machines now, we will be able to spin up datacenters in the future. This will allow us to support high velocity and agility.

JTC: Anything that surprised you about the technology landscape in 2012?

KH: Given a great deal of confusion in our economy, I think I was surprised by how positive the end of the year turned out. The thought seems to be that it must be easy for anyone seeking to hire great people right now due to the high rate of unemployment, but in IT, people who get it technically and from a business perspective are working, and they are highly valued by their organizations. Another thing I was surprised about is the determination businesses have to go around, or not use, IT if IT is not being responsive. We’re now in an age where end users have more choices, and a reasonably astute business person can acquire an “as a Service” technology quickly, even though it may be less than fully optimized and there may be issues (security comes to mind). Inside a company, employees may prefer to work with IT, but if IT moves too slowly or appears to just say “no,” people will figure out how to get it done without them.

JTC: What are some of the biggest misconception organizations have about the cloud heading into 2013?

KH: I think a major misconception about cloud concerns the amount these technologies are actually being used in one’s organization.  It is rare to find a CIO (myself included, until recently) who has evaluated just how much cloud technology is truly being used in their business. Are they aware of every single app being used? How about every “as a Service” offering that is being procured in some way without IT involvement? Therefore, when they think of their platform, are they including all of the traditional IT assets as well as all the “aaS” and cloud assets at their company? It goes back to the fact that we as IT professionals can’t be meaningful when we are not even sure of exactly what is going on within the walls of our own company.

JTC: Any recommendations for IT Decision makers who are trying to decide where to allocate their 2013 budgets?

KH: I think IT decision makers need to be working with colleagues throughout the company to see what they need to get done, and then build out budgets accordingly so they truly support the goals of the business. They need to be prepared to be agile, so that unexpected yet important business decisions that pop up throughout the year can be supported. Furthermore, they need to be prepared from a velocity standpoint, so that when a decision is made, the IT department can go from thought to action very quickly.

Happy Techsgiving! Top 7 Tech Gadgets I’m Thankful for in 2012

By John Dixon

With Thanksgiving just around the corner, I decided to take a couple of minutes to think about which tech gadgets I have been most thankful for in 2012. I’ve seen both consumers and corporate clients really begin to embrace a few technologies over the past year, and as a consultant, a lot of these things make life easier for me, my coworkers, and my company.

1. DropBox

…and other online file-sharing/storage solutions. This is more of a selfish item. I use DropBox personally to sync documents and data across various devices. Being on the road much of the time, I’ve assembled a small arsenal of technology to try to find that elusive combination that truly helps me deliver services to my clients. More devices can sometimes make it more difficult to stay organized, so one platform, like DropBox, that helps me access my documents from any device is a technology that I’ve grown to rely on.

2. SSD Drives

For the first time, I can run several virtual machines and my desktop OS from my laptop without sacrificing performance. I never thought I would need to do this, but it certainly helps my day-to-day productivity. I run one desktop VM that is always connected to the VPN for my client project, one for my GreenPages desktop, and my native desktop OS to access documents on my local machine. Being able to instantly switch between them has been a huge help to me lately. Sometimes I’ll spin up a new VM to test a concept or a piece of code. The SSD makes all of this possible.

3. Cloud Infrastructure Services

Being able to spin up a virtual environment quickly and at low cost has really been helpful for both me and some of my clients. For example, I recently spun up a J2EE environment (Tomcat, MySQL, Apache) for a day to test something for one of my clients. I didn’t need to keep any of the data in the environment, so I used it for a day and shut it down, all for about $2.00 or so (a sketch of this pattern follows below). Once the monitoring and management are ironed out, I really think this type of IaaS will be a very attractive alternative for corporate clients, especially development groups and startups.
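
Here is a minimal sketch of that spin-up/use/shut-down pattern using AWS’s boto3 Python SDK. The AMI ID, instance type, and region are placeholders, and the actual cost depends on the provider’s hourly rates.

```python
# Minimal sketch of the ephemeral test environment pattern described
# above: launch, use for the day, terminate so the meter stops.
# AMI ID, instance type, and region are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instance = ec2.create_instances(
    ImageId="ami-00000000",    # placeholder: your Tomcat/MySQL/Apache image
    InstanceType="t3.small",   # placeholder size
    MinCount=1,
    MaxCount=1,
)[0]
instance.wait_until_running()
instance.reload()              # refresh attributes such as the public IP
print(f"Test box up at {instance.public_ip_address}")

# ... run the day's tests against the environment ...

instance.terminate()           # stop paying once the testing is done
instance.wait_until_terminated()
```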

4. Bring Your Own Device (BYOD)

GreenPages recently adopted this option for desktop computing, and I think it is fantastic. Having an arsenal of technology available to you is one thing, but having an arsenal of technology that you are comfortable using and are excited about is another. BYOD allows employees to get the technology that best fits their working style and expertise. I think this will become more and more important as the trend toward remote work advances. And the next generation of workers – those who have grown up with all-the-time connectivity – will almost certainly work more efficiently with BYOD policies than with TIWYG (This Is What You Get) policies.

5. Cloud Collaboration Platforms

This one is similar to #1, but more along the lines of collaboration. I work in a distributed team where face-to-face communication is not always possible. Being able to share documents and track issues from the same platform is almost necessary when it comes to complex projects.

6. App Stores

I admit, I like Apple stuff. I’ve heard the resentment of the Apple App Store – it’s a closed system, Apple takes a 30% cut from developers, etc. But how often do you see a small organization, or even a single individual, effectively competing in the same market with large corporations? The cool thing about the App Store is that it narrows the focus to the quality and experience of the product (the app, in this case). Sometimes the smaller organizations, including some of our clients, can deliver higher quality products and services than the “big guys.” App Stores are cool because they basically level the playing field.

7. Bluetooth In Rental Cars

I travel a decent amount so this is obviously extremely helpful when I am on the road. Another thing I can be thankful for!

What technologies are you thankful for this year?

GreenPages is holding an event in Atlanta next week on the 28th. Come listen to our experts discuss everything from clustered datacenters, to buzz from VMworld, to VDI battlefield stories.