All posts by Journey to the Cloud

Five Things to Consider Before Choosing your Professional Services Automation Tool

By Alyson Gallant, PMP, Project Administrator, LogicsOne

Choosing a Professional Services Automation (PSA) tool can be an arduous task. There are many options out there, every business's workflows are unique, and the cost and time involved in evaluating multiple tools and running proofs of concept can be overwhelming.

Why do you need a PSA tool? A PSA tool is crucial to understanding where your resources are spending their time and the profitability of projects. A suitable PSA tool should allow your organization to continuously evaluate performance in order to improve and scale.

Like any Professional Services organization (or any organization looking to track time and budget on large internal projects), we’ve evaluated and used a number of different PSA tools. During the implementation of our current PSA tool, the Project Management Office (PMO) was lucky enough to have significant input into our PSA tool selection as well as the rollout to our Professional Services organization.

There is no one size fits all when it comes to a PSA tool, and there are many methods of evaluation, which we won’t get into here. However, here’s some food for thought based on our experiences for anyone preparing to search for a new PSA tool:

1) Can you consolidate your applications?
Do you have a lot of applications? Will your new PSA tool add yet another application for your users? Are your users experiencing application overload? If so, have you investigated what your current CRM application already offers? A number of PSA applications are part of a larger suite, and the more modules you use, the more smoothly your data flows between them.

2) Are you trying too hard to reinvent the wheel? Customization vs. integration.
One of the potential downsides of adopting a tool that you cannot customize is that you may have to change your workflow to fit the tool rather than having a tool that works with your existing workflow. But is that necessarily a bad thing? Using an application that hundreds or thousands of other customers rely on can provide a baseline for what your organization should be doing, especially if your current workflow has a number of inefficiencies. Also, if you customize your tool, how does that affect future upgrades? Be open to reevaluating your workflow, and look for a tool that supports integrations with your other applications as an alternative to customization.

3) Is there an active community of users?
One of the most valuable aspects of a PSA tool is an energetic, enthusiastic community of users. We did not have this with our first PSA tool, and it wasn't until our second that we recognized its value. Through an online community, users actively discuss new releases and provide feedback on forums open to all users. Users are encouraged to enter “New Feature Requests” and vote on them. Time is highly prized during the workday, but when you run into an issue or workflow conundrum, checking what others are seeing or experiencing can be a good gut check that you're on the right track.

4) Are you able to roll out your tool in a staged approach?
If you have the luxury of rolling out a PSA in stages, it can be easier to encourage adoption by your users and to ensure they enter accurate data. As we all know, change can be difficult, and users who are overwhelmed and unsure of a new process are unlikely to enter the most accurate information. If you can roll out a single module of your new PSA at a time, your users can focus on getting each process down correctly before moving on to the next one. A staged approach may not always work for your rollout, but it is worth considering to ensure you have “good data”.

5) Are you willing to perform constant evaluation on the new tool and provide recurring training?
As rollouts can take time, there can be quite a gap between inputting your data into your new PSA tool and evaluating the data that you extract. What happens when you extract data that isn’t useful? What if the information is incorrect? You’ll need to constantly gauge how well your workflow is providing your management with information, and changing that workflow can require new training. Make sure to factor this in with the rollout of your PSA tool – the work is never done.
Are you in the market for a new tool to track your projects? What do you use currently, and what are your pain points?

 

If you do have any questions, this is something we can help with as part of our On Demand Project Management Offering, so feel free to reach out!

The Impact of Unified Communication & Collaboration

 

http://www.youtube.com/watch?v=MLYCeloSXMk


 

In this video, GreenPages Solutions Architect Ralph Kindred talks about the latest industry trends around unified communications and video collaboration and the positive impact they have on businesses today.

 

To learn more about how GreenPages can help your organization with unified communications & collaboration, fill out this form.

How RIM Can Improve Efficiency and Add Value To Your IT Ops

This is a guest post from Chris Joseph, VP, Product Management & Marketing, NetEnrich

 

Cloud, virtualization, and hybrid IT technologies are being used in small and large IT enterprises everywhere to modernize operations and achieve business goals and objectives. As such, a top concern for today's IT leaders is whether the investments being made in these technologies are delivering on the promise of IT modernization. Another concern is finding ways to free up IT funds currently spent on routine maintenance of IT infrastructure so they can be invested in new and strategic IT modernization projects.

Don’t Waste Time, Money and Talent on Blinking Lights

Everyone knows that IT organizations simply can’t afford to have a team of people dedicated to watching for blinking lights and waiting for something to fix.  It’s a waste of talent and will quickly burn through even the most generous of IT budgets. Yet, according to a Gartner study, 80% of an enterprise IT budget is generally spent on routine IT, while only 20% is spent on new and strategic projects.

If this scenario sounds familiar, then you may want to consider taking a long and hard look at third-party Remote Infrastructure Management (RIM) services for your IT infrastructure management. In fact, RIM services have been shown to reduce spending on routine IT operations by 30-40%, but how is this possible?

(1)     First of all, RIM services rationalize, consolidate, and integrate the tools used to monitor and manage IT infrastructure within an enterprise.  According to Enterprise Management Associates, a leading IT and data management research and consulting firm, a typical enterprise has nearly 11 such tools running in its environment, typically including IT Operations Management (ITOM) and IT Service Management (ITSM) tools. As any IT professional can attest, while there is significant overlap among these tools, some tend to be deficient in their capabilities, and they can be a significant source of noise and distraction, especially when it comes to false alerts and tickets. Through RIM, IT organizations can eliminate many of these tools and consolidate their IT operations into a single-pane-of-glass view, which can result in significant cost savings.

(2)     Secondly, by leveraging RIM, IT teams can be restructured and organized into shared services delivery groups, which can result in better utilization of skilled resources, while supporting the transformation of IT into a new model that acts as a service provider to business units.  Combine these elements of RIM with remote service delivery, and not only will you improve economies of scale and scope, but you will also promote cost savings.

(3)     Thirdly, RIM services consistently look to automation, analytics, and best practices to promote cost savings in the enterprise. Manual processes and runbooks are not only costly, but also time-consuming and error-prone. To automate processes effectively, IT organizations must rely on methodologies, scripts, and tools, and this is where RIM comes into play. In fact, within any enterprise, 60-80% of manual processes and runbooks can easily be automated with RIM.
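To make the automation point concrete, here is a minimal, illustrative Python sketch of the kind of manual runbook step this sort of automation replaces: check a service's health endpoint, attempt a restart if it is down, and only open a ticket when automation cannot resolve the issue. The URL, service name, and ticketing call are placeholders, not NetEnrich's or any vendor's actual tooling.

```python
# Hypothetical automated runbook step: health check -> restart -> escalate.
import subprocess
import urllib.request

HEALTH_URL = "http://app01.example.local:8080/health"   # hypothetical endpoint
SERVICE_NAME = "example-app"                            # hypothetical systemd unit

def is_healthy(url: str, timeout: int = 5) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def open_ticket(summary: str) -> None:
    # Placeholder: a real implementation would call the ITSM tool's API here.
    print(f"TICKET: {summary}")

def run_check() -> None:
    if is_healthy(HEALTH_URL):
        return                                          # healthy: no noise, no ticket
    subprocess.run(["systemctl", "restart", SERVICE_NAME], check=False)
    if not is_healthy(HEALTH_URL):
        open_ticket(f"{SERVICE_NAME} is down and automated restart failed")

if __name__ == "__main__":
    run_check()
```

In a real RIM environment, the same pattern would be driven by the provider's monitoring and ITSM integrations rather than a standalone script, but the escalation logic is essentially the same.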

Download this free whitepaper to learn how to move beyond “keeping the lights on” and allow your team to focus on strategic initiatives.

Beyond Cost Savings and Greater Efficiency: Building a Case for RIM

In addition to reducing routine spending and improving the efficiency of your IT operations, there are several other benefits to leveraging third-party RIM services:

  • 24×7 IT operations support.  Third-party RIM services often provide around-the-clock monitoring and management of your IT infrastructure without additional headcount or strain on internal resources, which saves operating costs.
  • Be the first to know. 24×7 IT operations support means that you are always the first to know when customer-facing IT systems such as the company’s website, online shopping portal, mobile apps, and cloud-based solutions go down. In many cases, the RIM services team resolves the issue before end users even notice.
  • Skills and expertise. Third-party RIM services can provide your IT organization with certified engineers in various IT infrastructure domains. These engineers are responsible for monitoring, alerting, triaging, ticketing, incident management, and the escalation of critical outages or errors to you and your IT staff if they cannot be immediately resolved. They may also be available on demand if you are looking for skills and expertise in a specific domain.

The bottom line: by leveraging RIM services, IT organizations like yours can not only enhance their service capabilities and bolster service levels, but also say goodbye to the fire drills and late-night calls that plague IT staff.  Proactive management of your IT infrastructure through RIM ensures that it is always running at peak performance.

To hear more from Chris, visit the NetEnrich blog.

To learn more about how GreenPages can help you monitor and manage your IT operations, fill out this form.

Part 2: Want to Go Cloud? What’s the Use Case?

By Lawrence Kohan, Senior Consultant, LogicsOne

 

Recap:

In Part 1 of this blog post, I started by reiterating the importance of having a strategy for leveraging the Cloud before attempting to migrate services to it, in order to achieve the best results.  Using an example use case, I showed the basic pros and cons of moving a company’s e-mail services to the Cloud.  Then, delving further into the additional factors to consider based on the size and breadth of the company, I showed that in that particular scenario an e-mail migration to the Cloud would provide more benefit to small businesses and startups than to medium and large enterprises, for whom such a migration may actually be more detrimental than helpful.

Use the Cloud to level the playing field!

Historically, small businesses have typically been at a disadvantage to their larger counterparts, as they generally have less capital to work with.  However, the Cloud Era may prove to be the great equalizer.  The nimbleness and portability of a small business can be quite an advantage when it comes to reducing operating costs and gaining a competitive edge.  A small business with a small systems footprint may be able to consider strategies for moving most, if not all, of its systems to the Cloud.  A successful migration would greatly reduce company overhead and administrative burden, and could even free up office space and real estate by repurposing decommissioned server rooms.  Thus, a small business is able to leverage the Cloud to gain a competitive advantage in a way that is (most likely) not an option for a medium or large enterprise.

So, what is a good Cloud use case for a medium to large business?

The Cloud can’t be all things to all people.  However, the Cloud can be many things to many people.  While the enterprise may not have the same options as the small business, they still have many options available to them to reduce their costs or expand their resources to accommodate their needs in a cost-effective way.

Enterprise Use Case 1: Using IaaS for public website hosting

A good, low-risk Cloud option that an enterprise can readily consider is moving non-critical, non-confidential informational data to the Cloud.  A good candidate for an initial Cloud migration is a corporate website with marketing materials or information about product or service offerings.  It is important that a company’s website, containing product photos, advertising information, hours of operation, and location and contact information, is available 24/7 to customers and potential customers.  In this case, the enterprise can leverage a Cloud Service Provider’s Infrastructure as a Service (IaaS) offering to host its website.  For a monthly service fee, the Cloud Service Provider will host the enterprise’s website on redundant, highly available infrastructure and proactively monitor the site to ensure maximum uptime.  (The enterprise should consider the Cloud Service Provider’s SLA when determining its uptime needs.)

With this strategy, the enterprise is able to ensure maximum uptime for its important revenue-generating web materials, while offloading the costs associated with hosting and maintaining the website.  At the same time, the data being presented online is not confidential in nature, so there is little risk in having it hosted externally.  This is an ideal use case for a Public Cloud.

In addition to the above, a Hybrid Cloud approach can also be adopted: the public-facing website could conduct e-commerce transactions by redirecting purchase requests to privately hosted e-commerce applications and customer databases that are secure and PCI compliant.  Thus, we have an effective, hybrid use of Cloud resources to leverage high availability, while still keeping confidential customer and credit card data secure and internally hosted. We’ll actually be hosting a webinar tomorrow with guest speakers from Forrester Research and Gravitant that will talk about hybrid cloud management. If you’re interested in learning more about how to properly manage your IT environment, I’d highly recommend sitting in.
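Returning to the hybrid pattern described above, the sketch below is purely illustrative (Flask, with a hypothetical internal hostname): the public, cloud-hosted tier serves the non-confidential marketing content itself and hands checkout traffic off to the privately hosted, PCI-compliant application.

```python
# Illustrative sketch of a hybrid split: public marketing tier in the Cloud,
# purchase requests redirected to an internally hosted, PCI-compliant system.
from flask import Flask, redirect

app = Flask(__name__)

# Hypothetical internally hosted, PCI-compliant e-commerce front door.
INTERNAL_CHECKOUT_URL = "https://secure.example-internal.com/checkout"

@app.route("/")
def home():
    # Non-confidential marketing content served from the public cloud tier.
    return "Product catalog, hours of operation, and contact information."

@app.route("/checkout")
def checkout():
    # Purchase requests are redirected to the privately hosted application,
    # so cardholder data never touches the public cloud tier.
    return redirect(INTERNAL_CHECKOUT_URL, code=302)

if __name__ == "__main__":
    app.run(port=8080)
```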

Enterprise Use Case 2: Using Cloud Bursting to accommodate increased resource demands as needed

Another good Public Cloud use case: let’s say a company, operating at maximum capacity, has periodic or seasonal needs to accommodate spikes in workload.  This could either be increased demands on applications and infrastructure, or needing extra staff to perform basic clerical or administrative functions on a limited basis.  It would be a substantial investment to procure additional office space and computer hardware for limited use—not to mention the additional expenses of maintaining the hardware and office space.  In such a case, an enterprise using a Cloud Service Provider’s IaaS would be able to rapidly provision virtual servers and desktops that can be accessed via space-saving thinclients, or even remotely.  Once the project is completed, those virtual machines can be deleted.  Upon future need, new virtual machines could easily be provisioned in the same way.  And most importantly, the company only pays for what it needs, when it needs it.  This is another great way for an enterprise to leverage the Cloud’s elasticity to accommodate its dynamic needs!
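As a rough illustration of the cloud-bursting idea, the sketch below uses the AWS SDK for Python (boto3) to provision temporary worker instances for a seasonal spike and tear them down when the work is finished. The AMI ID, instance type, and counts are placeholders, and any IaaS provider with an API could be substituted.

```python
# Illustrative cloud bursting: provision temporary capacity, then delete it.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

def burst_capacity(count):
    """Provision short-lived worker instances for a temporary spike in demand."""
    return ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical, pre-built worker image
        InstanceType="t3.medium",          # sized for the temporary workload
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "seasonal-burst"}],
        }],
    )

def release_capacity(instances):
    """Delete the temporary instances once the project is completed."""
    for instance in instances:
        instance.terminate()

if __name__ == "__main__":
    workers = burst_capacity(5)   # pay only for what you need, when you need it
    # ... run the seasonal workload ...
    release_capacity(workers)
```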

Enterprise Use Case 3: Fenced testing environments for application development

Application teams often need to simulate production conditions for testing without affecting actual production.  With traditional hardware infrastructure, setting up a dedicated development environment can be an expensive and time-consuming proposition.  In addition, the Apps team may require many identical setups for multiple teams’ testing, or may need to simulate many scenarios using the same parameters, such as IP and MAC addresses.  With traditional hardware setups, this is extremely difficult to achieve in a productive, isolated manner.  However, with Cloud services such as VMware’s vCloud Suite, isolated, fenced application environments can be provisioned and mass-produced quickly for an Apps team’s use without affecting production, and then rapidly decommissioned as well.  In this particular example use case of the vCloud Suite, VMware’s Chargeback Manager can also be used to get a handle on the costs associated with development environment setup and to provide showback and chargeback reports to a department, organization, or other business entity.  This is yet another good example of an efficient and cost-effective use of the Cloud to solve a complex business need.

 

Consider your strategy first!  Then, use the Cloud to your advantage!

So, as we have seen, the Cloud offers a variety of time-saving, flexible, and efficient solutions that can accommodate businesses of any size or nature.  However, a successful transition to the Cloud depends, more than anything else, on the initial planning and strategy that goes into its adoption.

Of course, there are many other options and variables to consider in a Cloud adoption strategy, such as choice of providers, consulting services, etc.  However, before even looking into the various Cloud vendors and options, start by asking the important internal questions first:

  • What are our business goals?
  • What are our intended use case(s) for the Cloud?
  • What are we looking to achieve from its use?
  • What is the problem that we are trying to solve?  (And is the Cloud the right choice for that particular problem?)
  • What type of Cloud service would address our need? (Public, Private, Hybrid?)
  • What is our timetable for transition to the Cloud?
  • What is our plan?  Is it feasible?
  • What is our contingency plan?  (How do we back up and/or back out?)

When a company has solid answers to questions such as these, it is ready to begin its own journey to the cloud.

 

Last chance to register for tomorrow’s webinar on leveraging cloud brokerage. Speakers from GreenPages, Forrester Research, and Gravitant.

Is There Such a Thing as Just-In-Time IT?

By Praveen Asthana, Chief Marketing Officer, Gravitant

 

The concept of “Just-in-Time” was pioneered in the manufacturing supply chain as a critical way to reduce costs by minimizing inventory.   Implementing a just-in-time system that can handle unexpected demand is not a trivial undertaking.  It requires the confluence of a number of disciplines such as analytics, statistics, sourcing, procurement, production management, brokerage and economics.

An interesting new idea is to take this concept pioneered in manufacturing and apply it to Information Technology resources.  Doing so can provide an effective way to meet dynamically changing needs while minimizing the inventory of unused IT resources across a set of cloud services platforms and providers.

Case Study:  Election Day 2012.

With the growing popularity of e-voting and use of the Internet as an information resource on candidates and issues, the Secretary of State’s office for one of the most populous U.S. states knew that demand for IT resources would go up significantly on election day.  But they didn’t know exactly how much, and they didn’t want to buy extra infrastructure for a temporary surge in demand.  Even if they could come up with a good guess for the demand, deploying the right amount of resources in a timely manner would be challenging.  Given the time it normally took (months) to deploy and provision new servers, the Secretary of State’s office knew they couldn’t use traditional means to procure compute and storage capacity to meet this demand.

As it turned out, demand went up over 1000% to over five million hits on the state voting web site by noon on Election Day.


Fortunately, the state had deployed a novel capability based on a cloud brokerage and management platform to seamlessly provision IT resources in real time from multiple public cloud sources to meet the variability in demand.  As a result, this demand was fully met without complicated planning or the purchase of unneeded infrastructure. I’ll actually be speaking on a webinar on June 12th with Chris Ward, CTO at GreenPages-LogicsOne, and Dave Bartoletti, a Senior Analyst at Forrester Research, about leveraging cloud brokerage and the impact it can have on managing your IT environment.

Minutes, not months: that’s what enterprise users want when it comes to having IT resources available to meet changing business needs or develop new applications.

However, users find this to be an extraordinary challenge. Most IT departments today struggle with rigid processes, a round-robin of tasks and approvals across multiple silos and departments, and manual provisioning steps.  All of this adds significant time to the deployment of IT resources, leaving users waiting for months before the resources they need become available.

How do users respond to such delays?  By going around their IT departments and directly accessing cloud services.  Often termed ‘rogue IT’ or ‘shadow IT,’ such out-of-process actions expose the company to financial, security, and operational risk.

The Solution: Just-in-time IT with Real-Time Governance

Just-in-time IT is not merely about using private or public cloud services.   It is about engineering the end-to-end IT supply chain so it can be agile and respond immediately to dynamic business needs.  To achieve this in practice, you need:

  1. Effective assessment and strategy
  2. Self-service catalog of available IT resources
  3. Collaborative solution design
  4. Rapid approval workflow
  5. Sourcing platform that allows you to select the right supply chain partners for your business need or workload profile
  6. Single-button provisioning of resources
  7. Transparency across the IT supply chain
  8. Sophisticated supply-demand analytics
  9. Elastic source for resources
  10. Governance: dynamic control of resources based on goal-based optimization of budget, resource usage, and SLAs

 

The first critical aspect of a real-time IT supply chain is identifying, sourcing, and procuring the best-fit cloud platforms and providers (internal or external) to meet your unique business needs.

The second critical aspect of making just-in-time IT effective is real-time governance, for this is the mechanism by which you truly manage the elasticity of cloud resources and ensure that IT resource inventory is minimized.  This also has the additional benefit of eliminating shadow or rogue IT.

As I mentioned above, if you’re interested in learning more on this topic I would highly recommend registering for the upcoming webinar “What’s Missing In Today’s Hybrid Cloud Management – Leveraging Cloud Brokerage” being held on June 12th. This should be a great session and there will be time for Q & A at the end.

About the Author:

Praveen Asthana is Chief Marketing Officer of Gravitant (www.gravitant.com), a cloud services brokerage and management company.  Prior to joining Gravitant, Praveen was Vice President of Marketing and Strategy for Dell’s $13B Enterprise Solutions Division.

Catching up with Chuck Hollis: A Storage Discussion

Things are moving fast in the IT world. Recently, we caught up with Chuck Hollis (EMC’s Global Marketing CTO and popular industry blogger) to discuss a variety of topics including datacenter federation, Solid State Drives, and misperceptions surrounding cloud storage.

JTC: Let’s start off with Datacenter federation…what is coming down the road for running active/active datacenters with both HA and DR?

Chuck: I suppose the first thing that’s worth pointing out is that we’re starting to see using multiple data centers as an opportunity, as opposed to some sort of problem to overcome. Five years ago, it seems that everyone wanted to collapse into one or two data centers. Now, it’s pretty clear that the pendulum is starting to move in the other direction – using a number of smaller locations that are geographically dispersed.

The motivations are pretty clear as well: separation gives you additional protection, for certain applications users get better experiences when they’re close to their data, and so on. And, of course, there are so many options these days for hosting, managed private cloud services and the like. No need to own all your data centers anymore!

As a result, we want to think of our “pool of resources” as not just the stuff sitting in a single data center, but the stuff in all of our locations. We want to load balance, we want to failover, we want to recover from a disaster and so on – and not require separate technology stacks.

We’re now at a point where the technologies are coming together nicely to do just that. In the EMC world, that would be products like VPLEX and RecoverPoint, tightly integrated with VMware from an operations perspective. I’m impressed that we have a non-trivial number of customers who are routinely doing live migrations at metro distances using VPLEX or testing their failover capabilities (non-disruptively and at a distance) using RecoverPoint.

The costs are coming down, and the simplicity and integration are improving – meaning that these environments are far easier to justify, deploy and manage than just a few years ago. Before long, I think we’ll see active-active data centers as an expected norm rather than an exception.

JTC: How is SSD being leveraged in total data solutions now, with the rollout of the various XtremIO products?

Chuck: Well, I think most people realize we’re in the midst of a rather substantial storage technology shift. Flash (in all its forms) is now preferred for performance, disks for capacity.

The first wave of flash adoption was combining flash and disk inside the array (using intelligent software), usually dubbed a “hybrid array”. These have proven to be very, very popular: with the right software, a little bit of flash in your array can result in an eye-popping performance boost and be far more cost effective than trying to use only physical disks to do so. In the EMC portfolio, this would be FAST on either a VNX or VMAX. The approach has proven so popular that most modern storage arrays have at least some sort of ability to mix flash and disk.

The second wave is upon us now: putting flash cards directly into the server to deliver even more cost-effective performance. With this approach, storage is accessed at bus speed, not network speed – so once again you get an incredible boost in performance, even as compared to the hybrid arrays. Keep in mind, though: today this server-based flash storage is primarily used as a cache, and not as persistent and resilient storage – there’s still a need for external arrays in most situations. In the EMC portfolio, that would be the XtremSF hardware and XtremSW software – again, very popular with the performance-focused crowd.

The third wave will get underway later this year: all-flash array designs that leave behind the need to support spinning disks. Without dragging you through the details, if you design an array to support flash and only flash, you can do some pretty impactful things in terms of performance, functionality, cost-effectiveness and the like. I think the most exciting example right now is the XtremIO array which we’ve started to deliver to customers. Performance-wise, it spans the gap between hybrid arrays and server flash, delivering predictable performance largely regardless of how you’re accessing the data. You can turn on all the bells and whistles (snaps, etc.) and run them at full-bore. And data deduplication is assumed to be on all the time, making the economics a lot more approachable.

The good news: it’s pretty clear that the industry is moving to flash. The challenging part? Working with customers hand-in-hand to figure out how to get there in a logical and justifiable fashion. And that’s where I think strong partners like GreenPages can really help.

JTC: How do those new products tie into FAST on the array side, with software on the hosts, SSD cards for the servers and SSD arrays?

Chuck: Well, at one level, it’s important that the arrays know about the server-side flash, and vice-versa.

Let’s start with something simple like management: you want to get a single picture of how everything is connected – something we’ve put in our management products like Unisphere. Going farther, the server flash should know when to write persistent data to the array and not keep it locally – that’s what XtremSW does among other things. The array, in turn, shouldn’t be trying to cache data that’s already being cached by the server-side flash – that would be wasteful.

Another way of looking at it is that the new “storage stack” extends beyond the array, across the network and into the server itself. The software algorithms have to know this. The configuration and management tools have to know this. As a result, the storage team and the server team have to work together in new ways. Again, working with a partner that understands these issues is very, very helpful.

JTC: What’s the biggest misperception about cloud storage right now?

Chuck: Anytime you use the word “cloud,” you’re opening yourself up for all sorts of misconceptions, and cloud storage is no exception. The only reasonable way to talk about the subject is by looking at different use cases vs. attempting to establish what I believe is a non-existent category.

Here’s an example: we’ve got many customers who’ve decided to use an external service for longer-term data archiving: you know, the stuff you can’t throw away, but nobody is expected to use. They get this data out of their environment by handing it off to a service provider, and then take the bill and pass it on directly to the users who are demanding the service. From my perspective, that’s a win-win for everyone involved.

Can you call that “cloud storage”? Perhaps.

Or, more recently, let’s take Syncplicity, EMC’s product for enterprise sync-and-share. There are two options for where the user data sits: either an external cloud storage service, or an internal one based on Atmos or Isilon. Both are very specific examples of “cloud storage,” but the decision as to whether you do it internally or externally is driven by security policy, costs and a bunch of other factors.

Other examples include global enterprises that need to move content around the globe, or perhaps someone who wants to stash a safety copy of their backups at a remote location. Are these “cloud storage?”

So, to answer your question more directly, I think the biggest misconception is that – without talking about very specific use cases – we sort of devolve into a hand-waving and philosophy exercise. Is cloud a technology and operational model, or is it simply a convenient consumption model?

The technologies and operational models are identical for everyone, whether you do it yourself or purchase it as a service from an external provider.

JTC: Talk about Big Data and how EMC solutions are addressing that market (Isilon, Greenplum, what else?).

Chuck: If you thought that “cloud” caused misperceptions, it’s even worse for “big data.” I try to break it down into the macro and the micro.

At the macro level, information is becoming the new wealth. Instead of it being just an adjunct to the business process, it *is* the business process. The more information that can be harnessed, the better your process can be. That leads us to a discussion around big data analytics, which is shaping up to be the “killer app” for the next decade. Business people are starting to realize that building better predictive models can fundamentally change how they do business, and now the race is on. Talk to anyone in healthcare, financial services, retail, etc. – the IT investment pattern has clearly started to shift as a result.

From an IT perspective, the existing challenges get much, much harder. Any big data app is the new 800-pound gorilla, and you’re going to have a zoo-full of them. It’s not unusual to see a 10x or 100x spike in the demand for storage resources when this happens. All of a sudden, you start looking for new scale-out storage technologies (like Isilon, for example) and better ways to manage things. Whatever you were doing for the last few years won’t work at all going forward.

There’s a new software stack in play: think Hadoop, HDFS, a slew of analytical tools, collaborative environments – and an entirely new class of production-grade predictive analytics applications that get created. That’s why EMC and VMware formed Pivotal from existing assets like Greenplum, GemFire et al. – there was nothing in the market that addressed this new need and did it in a cloud-agnostic manner.

Finally, we have to keep in mind that the business wants “big answers”, and not “big data.” There’s a serious organizational journey involved in building these environments, extracting new insights, and operationalizing the results. Most customers need outside help to get there faster, and we see our partner community starting to respond in kind.

If you’d like a historical perspective, think back to where the internet was in 1995. It was new, it was exotic, and we all wondered how things would change as a result. It’s now 2013, and we’re looking at big data as a potentially more impactful example. We all can see the amazing power; how do we put it to work in our respective organizations?

Exciting time indeed ….

Chuck is the Global Marketing CTO at EMC. You can read more from Chuck on his blog and follow him on Twitter at @chuckhollis.

Cloud Security: From Hacking the Mainframe to Protecting Identity

By Andi Mann, Vice President, Strategic Solutions at CA

Cloud computing, mobility, and the Internet of Things are leading us towards a more technology-driven world. In my last blog, I wrote about how the Internet of Things will change our everyday lives, but with these new technologies come new risks to the organization.

To understand how recent trends are shifting security, let’s revisit the golden age of hacking movies from the ‘80s and ‘90s. A recent post by Alexis Madrigal of The Atlantic sums up this era of Hollywood hackers by saying that “the mainframe was unhackable unless [the hackers] were in the room, in which case, it was simple.” That’s not far off from how IT security was structured in those years. Enterprises secured data by keeping everything inside a corporate firewall and only granting access to employees within the perimeter. Typically, the perimeter extended as far as the walls of the building.

When the cloud emerged on the scene, every IT professional said that it was too risky and introduced too many points of vulnerability. They weren’t wrong, but the advantages of the cloud, such as increased productivity, collaboration, and innovation, weren’t about to be ignored by the business. If the IT department just said no to cloud, the business could go elsewhere for their IT services – after all, the cloud doesn’t care who signs the checks. In fact, a recent survey revealed that in 60% of organizations, the business occasionally “circumvents IT and purchases technology on their own to support a project,” a practice commonly referred to as rogue IT, and another recent study found a direct correlation between rogue IT and data loss. This is obviously something that the IT department can’t ignore.

Identity is the New Perimeter

The proliferation of cloud-connected devices and users accessing data from outside the firewall demands a shift in the way we secure data. Security is no longer about locking down the perimeter – it’s about understanding who is accessing the information and the data they’re allowed to access. IT needs to implement an identity-centric approach to secure data, but according to a recent Ponemon study, only 29% of organizations are confident that they can authenticate users in the cloud. At first glance, that appears to be a shockingly low number, but if you think about it, how do you verify identity? Usernames and passwords, while still the norm, are not sufficient to prove identity. And sure, you can identify a device connected to the network, but can you verify the identity of the person using it?

In a recent @CloudCommons tweetchat on cloud security, the issue of proving the identity of cloud users kept cropping up.


Today’s hackers don’t need to break into your data center to steal your data. They just need an access point and your username and password. That’s why identity and access management is such a critical component of IT security. New technologies are emerging to meet the security challenge, such as strong authentication software that analyzes risk and looks for irregularities when a user tries to access data. If a user tries to access data from a new device, the strong authentication software recognizes the unfamiliar device and triggers extra authentication flows that require the user to further verify their identity.
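As a purely illustrative sketch of the risk-based, step-up authentication pattern described above (not any vendor's actual product logic), the example below raises a risk score when a login arrives from an unrecognized device and requires an extra factor before allowing access. The device fingerprinting and thresholds are deliberately simplistic and hypothetical.

```python
# Illustrative step-up authentication: unknown device -> higher risk -> extra factor.
import hashlib

KNOWN_DEVICES = {"alice": {"9f2c51a0"}}  # previously seen device fingerprints per user

def device_fingerprint(user_agent: str, ip: str) -> str:
    """Derive a crude device identifier; real products use far richer signals."""
    return hashlib.sha256(f"{user_agent}|{ip}".encode()).hexdigest()[:8]

def risk_score(user: str, fingerprint: str, unusual_hour: bool) -> int:
    score = 0
    if fingerprint not in KNOWN_DEVICES.get(user, set()):
        score += 50  # new or unknown device
    if unusual_hour:
        score += 20  # activity outside the user's normal pattern
    return score

def authenticate(user: str, password_ok: bool, user_agent: str, ip: str,
                 unusual_hour: bool = False) -> str:
    if not password_ok:
        return "deny"
    fp = device_fingerprint(user_agent, ip)
    if risk_score(user, fp, unusual_hour) >= 50:
        return "step-up"  # require a second factor (OTP, push notification, etc.)
    return "allow"

if __name__ == "__main__":
    print(authenticate("alice", True, "Mozilla/5.0 (new laptop)", "203.0.113.7"))
    # -> "step-up": unknown device, so the user must further verify their identity
```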

What IT should be doing now to secure identity

To take advantage of cloud computing, mobility, and the Internet of Things in a secure way, the IT department needs to implement these types of new and innovative technologies that focus on verifying identity. In addition to implementing new technologies, the IT department needs to enact a broader cloud and mobile device strategy that puts the right policies and procedures in place and focuses on educating employees to minimize risk. Those in charge of IT security must also establish a trust framework that enforces how you identify, secure and authenticate new employees and devices.

Cloud computing, mobile devices, and the Internet of Things can’t be ignored by IT, and the sooner a trust framework and a cloud security strategy are established, the sooner your organization can take advantage of new and innovative technologies, allowing the business to reap the benefits of cloud, mobile, and the Internet of Things while keeping its data safe and sound. And to me, that sounds like a blockbuster for IT.

 

Andi Mann is vice president of Strategic Solutions at CA Technologies. With over 25 years’ experience across four continents, Andi has deep expertise in enterprise software on cloud, mainframe, midrange, server, and desktop systems. Andi has worked within IT for global corporations, with software vendors, and as a leading industry analyst. He has been published in the New York Times, USA Today, Forbes, CIO, Wall Street Journal, and more, and has presented worldwide on virtualization, cloud, automation, and IT management. Andi is a co-author of the popular handbook, ‘Visible Ops – Private Cloud’, and the IT leader’s guide to business innovation, ‘The Innovative CIO’. He blogs at https://pleasediscuss.com/andimann and tweets as @AndiMann.

 

 

 

Cloud Corner Video: Keys to Hybrid Cloud Management

http://www.youtube.com/watch?v=QIEGDZ30H2Q

 

GreenPages CEO Ron Dupler and LogicsOne Executive Vice President and Managing Director Kevin Hall sit down to talk about the current state of the cloud market, the challenges IT decision makers are facing today with hybrid cloud environments, and a revolutionary new Cloud Management as a Service offering.

If you’re looking for more information on hybrid cloud management, download this free whitepaper.

 

Or, if you would like someone to contact you about GreenPages Cloud Management as a Service offering, fill out this form.

Cloudviews Recap: The Enterprise Cloud

By John Dixon, Consulting Architect, LogicsOne

A few weeks ago, I took part in another engaging tweetchat on Cloud Computing. The topic: the enterprise cloud. Transcript here: http://storify.com/CloudCommons/cloudviews-tweetchat-enterprise-cloud

I’ll be recapping the responses to each question posed in the Tweetchat and giving an expanded response from the GreenPages perspective. As usual with tweetchats hosted by CloudCommons, the questions are presented a few days in advance of the event. This time around, the questions were:

  1. How should an enterprise get started with cloud computing?
  2. Is security still the “just because” reason for not migrating to the cloud?
  3. Who is responsible for setting the cloud strategy in the enterprise?
  4. What’s the best way for enterprises to measure cloud ROI?
  5. What are the top 3 factors enterprises should consider before moving to a cloud model?
  6. How should an enterprise measure the success of its cloud implementation?

Before we jump in to each question, let me say that the Cloud Commons Tweetchats are getting better and better. I try to participate in each one, and I find the different perspectives very interesting. The dynamic on Twitter makes these conversations pretty intense, and we always cover a lot of ground in just 60 minutes. Thanks to all of the regulars that participate. And if you haven’t been able to participate yet, I encourage you to have a look.

How should an enterprise get started with cloud computing?

I’m sure you’d agree that there are lots of different perspectives on cloud computing, especially now that adoption is gaining momentum. Consumers are regularly using cloud services. Organizations large and small are using cloud computing in different ways. Out of the gate, these different perspectives came to the surface. Here’s a recap of the first responses (with my take in parentheses). I don’t disagree with any of them; I think they’re all valid:

  1. “Ban new development that doesn’t use cloud … as a means to help development teams begin to learn the new paradigm” (maybe a little harsh, but I can see some policy and governance coming through in that point – after all, many corporate IT shops have a virtualization policy that kind of works this way, don’t they?)
  2. “Inventory applications, do some analysis, and find cloud candidates” (this is definitely one way to go, and maybe the most risk-averse; this perspective holds “the cloud” as a destination)
  3. “Use SaaS” (certainly the easiest and quickest way to start using cloud, if that’s a mandate from management)
  4. “Enterprises are already using cloud, next question” (I definitely agree with this one, enterprises are already using cloud for some things, no doubt about it)
  5. “Look at rogue IT, then enhance and wrap some governance around the best ideas” (again, I definitely agree with this one as a valid method; in fact, I did a recent blog post on the same concept)
  6. “Know what you need from a cloud provider and be prepared” (the Boy Scout model, be prepared! I definitely agree with this one as well! In fact, look here.)
  7. “Partner with the business to determine how cloud fits in the COMPANY strategy, not the IT strategy” (this was from me; and maybe it is obvious by now that cloud has huge business benefits, not just benefits for corporate IT)

 

There was lots of talk about the idea of identifying the “rogue IT” groups and embracing the unique things they have done in the cloud. All in all, these are ALL great ways to get started with cloud. In hindsight, I would add in another method of my own:

  1. Manage your internal infrastructure as if it were already deployed to the cloud. Some tools emerging now have this capability – to manage infrastructure through one interface whether it is deployed in your datacenter, with Rackspace, or even Amazon. This way, if and when you do decide to move some IT services to an external provider, the same tools and processes can be applied.
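As one hedged example of what “one interface” for internal and external infrastructure can look like, the sketch below uses Apache Libcloud to list servers from Rackspace and Amazon EC2 through the same API; drivers also exist for on-premises platforms such as vSphere and OpenStack. All credentials, region names, and usernames shown are placeholders.

```python
# Illustrative multi-provider inventory through one API (Apache Libcloud).
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def connect_all():
    """Return one driver per location, all exposing the same Libcloud interface."""
    return {
        "rackspace": get_driver(Provider.RACKSPACE)(
            "rack_user", "rack_api_key", region="iad"),      # placeholder credentials
        "amazon-ec2": get_driver(Provider.EC2)(
            "AKIAEXAMPLE", "aws_secret_key", region="us-east-1"),
        # An internal datacenter driver (e.g. vSphere or OpenStack) could be added
        # here so on-premises nodes appear in the same inventory.
    }

def inventory():
    """Print every node we manage, regardless of where it happens to run."""
    for location, driver in connect_all().items():
        for node in driver.list_nodes():
            print(f"{location:12} {node.name:30} {node.state}")

if __name__ == "__main__":
    inventory()
```

The design point is simply that the tools and processes stay the same whether a workload sits in your own datacenter or with an external provider, which is exactly what makes a later migration less disruptive.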

 

Your organization may have some additional methods to get started with cloud (and I’d love to hear about them!). So, why not use all of these methods in one concerted effort to evaluate cloud computing technology?

 

Is security still the “just because” reason for not migrating to the cloud?

The short recap on this topic: yes, organizations do use security as a convenient way to avoid acting on something. The security justification is more prevalent in large organizations, for obvious reasons. I’d like to point out one of the first responses though:

“…or is security becoming a reason to move to the cloud? Are service providers able to hire better security experts?”

I think this is a fantastic, forward-looking response. History has shown that specialization in markets does occur. Call this industrial specialization: eventually…

  • “The price of infrastructure services will be reduced as the market becomes more competitive. Providers will compete in the market by narrowing their focus on providing infrastructure in a secure and reliable way – they specialize or go out of business.” To compete, service providers will find/attract the best people who can help them design, build, and test infrastructure effectively
  • Thus, the best people in IT security (a.k.a., the people most interested in security) will be attracted to the best jobs with service providers

Who is responsible for setting the cloud strategy in the enterprise?

The common answer was C-level, either the CIO or even the CEO. Cloud computing should enable the strategy of the overall business, not only IT. I think that IT should own the cloud strategy, but that more business-oriented thinkers should be in IT!

 

What’s the best way for enterprises to measure cloud ROI?

Lots of perspectives popped up on these topics. I don’t think the group stood behind a single answer. Here are some of the interesting responses for measuring ROI of cloud:

  • IT staff reduction
  • Business revenue divided by IT operations expense
  • Improving time to market for new applications

Measuring the value of IT services is, excuse the pun, tricky business. I think cloud adoption will undoubtedly accelerate once there is a set of meaningful metrics that is applicable across industries. Measuring ROI of a virtualization effort was fairly easy math – reduction in servers, networking, datacenter floor space, etc. Measuring ROI of cloud is much more difficult, but the prize is up for grabs!
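As a purely illustrative back-of-the-envelope example of that “easy math” (all figures below are invented, not taken from the chat), consolidating physical servers onto a handful of virtualization hosts yields savings you can count directly:

```python
# Hypothetical virtualization ROI calculation with invented figures.
physical_servers_before = 100
consolidation_ratio = 10                 # VMs per virtualization host
hosts_after = physical_servers_before // consolidation_ratio

annual_cost_per_server = 3_000           # power, cooling, maintenance (hypothetical)
virtualization_investment = 120_000      # hosts, licensing, migration (hypothetical)

annual_savings = (physical_servers_before - hosts_after) * annual_cost_per_server
payback_years = virtualization_investment / annual_savings

print(f"Servers removed: {physical_servers_before - hosts_after}")
print(f"Annual savings:  ${annual_savings:,}")
print(f"Payback period:  {payback_years:.1f} years")   # ~0.4 years with these numbers
```

No such simple, widely accepted formula exists yet for cloud ROI, which is exactly why that prize is still up for grabs.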

 

What are the top 3 factors enterprises should consider before moving to a cloud model?

This goes back to the Boy Scout model of proper preparation, which I wrote about a few months ago. I saw a few responses that raised especially interesting, and still unsolved, issues:

  • Repatriation, or portability of applications
  • Organizational change (shouldn’t cloud be transparent?)
  • Investments in legacy technology, goes to timing and WHEN to consider cloud
  • Security, location of data, etc.

 

At GreenPages, we think of cloud computing more as a management paradigm, and as a New Supply Chain for IT. Considering that perspective, the points above are less of an issue. GreenPages’ Cloud Management as a Service (CMaaS) offering was designed specifically for this: to view cloud computing as the New Supply Chain for IT. In a world of consumers (enterprises) and providers (the likes of Amazon, Rackspace, and Terremark), where competition drives prices down, cloud computing, like other supply chains, can be thought of as a way to take advantage of market forces to benefit the business.

Thanks to Cloud Commons for another great conversation…looking forward to the next one!