Archivo de la categoría: Featured

Day Two at Cisco Live — Video Recap

By Nick Phelps, Consulting Architect, LogicsOne

 

http://www.youtube.com/watch?v=2pnAWdPH36g

 

Here’s the recap of Day 2 that I filmed down in Orlando at Cisco Live. If you missed it, here is my video from Day 1. I got a ton of great information out of the breakout sessions on Day 2…let me know if there are any questions, and I’d be more than happy to provide additional details.

Day One at Cisco Live — Video Recap

By Nick Phelps, Consulting Architect, LogicsOne

 

http://www.youtube.com/watch?v=fYZdaT8LCCk

 

I attended Cisco Live down in Orlando at the end of June. At the end of each day of the event, I made a video recapping the key highlights. There was some really interesting news and updates to come out of this conference that’s worth being aware of. Enjoy and let me know if you have any questions!

 

The Impact of Unified Communication & Collaboration

 

http://www.youtube.com/watch?v=MLYCeloSXMk


 

In this video, GreenPages Solutions Architect Ralph Kindred talks about the latest industry trends around unified communications and video collaboration and the positive impact it has on businesses today.

 

To learn more about how GreenPages can help your organization with unified communications & collaboration, fill out this form

What’s Missing from Today’s Hybrid Cloud Management – Leveraging Brokerage and Governance

By John Dixon, Consulting Architect, LogicsOne

Recently GreenPages and our partner Gravitant hosted a webinar on Cloud Service Broker technology. Senior Analyst Dave Bartoletti gave a preface to the webinar with Forrester’s view on cloud computing and emerging technology. In this post we’ll give some perspective on highlights from the webinar. In case you missed it, you can also watch a replay of the webinar here: http://bit.ly/12yKJrI

Ben Tao, Director of Marketing for Gravitant, kicks off the discussion by describing the traditional data center sourcing model. Two key points here:

  1. Sourcing decisions, largely based on hardware selection, are separated by years
  2. In a cloud world, sourcing decisions can be separated by months or even weeks

 

The end result is that cloud computing can drive the benefit of a multi-sourcing model for IT, where sourcing decisions are made in close proximity to the use of services. This gives organizations the potential to adjust their sourcing decisions more often, to best suit the needs of their applications.

Next, Dave Bartoletti describes the state of cloud computing and the requirements for hybrid cloud management. The core of Dave’s message is that the use of cloud computing is on the rise, and that cloud is being leveraged for more and more complex applications – including those with sensitive data.

Dave’s presentation is based on the statement, “what IT must do to deliver on the hybrid cloud promise…”

Some key points here:

  • Cloud is about IT services first, infrastructure second
  • You won’t own the infrastructure, but you’ll own the service definitions; take control of your own service catalog
  • The cloud broker is at the center of the SaaS provider, cloud VAR, and cloud integrator
  • Cloud brokers can accelerate the cloud application lifecycle

 

Dave does an excellent job of explaining what IT must do in order to deliver on the hybrid cloud promise. Often, conversations about cloud computing are purely about technology, but I think there’s much more at stake. For example, Dave’s first two points above really resonate with me. You can also read “cloud computing” as ITIL-style sourcing: cloud computing puts service management back in focus. “Cloud is about IT services first, infrastructure second” and “You won’t own the infrastructure […]” also suggest that cloud computing may influence a shift in the makeup of corporate IT departments: fewer core technologists and more “T-shaped” individuals. So-called T-shaped individuals have knowledge of and experience with a broad set of technologies (the top of the “T”), but have depth in one or more areas such as programming, Linux, or storage area networking. My prediction is that there will still be a need for core technologists, but that some of them may move into roles that define customer-facing IT services. For this reason, our CMaaS product also includes optional services to deal with this type of workforce transformation. This is an example of a non-technical decision that must be made when considering cloud computing. Do you agree? Do you have other non-technical considerations for cloud computing?

Chris Ward, CTO of LogicsOne, then dives in to the functionality of the Cloud Management as a Service, or CMaaS offering. The GreenPages CMaaS product implements some key features that can be used to help customers advance to the lofty points that Dave suggests in his presentation. CMaaS includes a cloud brokerage component and a multi-cloud monitoring and management component. Chris details some main features from the brokerage tool, which are designed to address the key points that Dave brought up:

  • Collaborative Design
  • Customizable Service Catalog
  • Consistent Access for Monitoring and Management
  • Consolidated Billing Amongst Providers
  • Reporting and Decision Support

Chris then gives an example from the State of Texas and the benefits that they realized from using cloud through a broker. Essentially, with the growing popularity of e-voting and the use of the internet as an information resource on candidates and issues, the state knew the demand for IT resources would skyrocket on election day. Instead of throwing away money to buy extra infrastructure to satisfy a temporary surge in demand, Texas utilized cloud brokerage to seamlessly provision IT resources in real time from multiple public cloud sources to meet the variability in demand.

All in all, the 60-minute webinar is time well spent and gives clients some guidance to think about cloud computing in the context of a service broker.

To view this webinar in its entirety click here, or download this free whitepaper to learn more about hybrid cloud management.

 

How RIM Can Improve Efficiency and Add Value To Your IT Ops

This is a guest post from Chris Joseph, VP, Product Management & Marketing, NetEnrich

 

Cloud, virtualization, and hybrid IT technologies are being used in small and large IT enterprises everywhere to modernize and to achieve business goals and objectives. As such, a top concern for today’s IT leaders is whether the investments being made in these technologies are delivering on the promise of IT modernization. Another concern is finding ways to free up IT funds currently spent on routine maintenance of IT infrastructure, so they can be invested in new and strategic IT modernization projects.

Don’t Waste Time, Money and Talent on Blinking Lights

Everyone knows that IT organizations simply can’t afford to have a team of people dedicated to watching for blinking lights and waiting for something to fix.  It’s a waste of talent and will quickly burn through even the most generous of IT budgets. Yet, according to a Gartner study, 80% of an enterprise IT budget is generally spent on routine IT, while only 20% is spent on new and strategic projects.

If this scenario sounds familiar, then you may want to consider taking a long and hard look at third-party Remote Infrastructure Management (RIM) services for your IT infrastructure management. In fact, RIM services have been shown to reduce spending on routine IT operations by 30-40%, but how is this possible?

(1)     First of all, RIM services rationalize, consolidate, and integrate the tools used to monitor and manage IT infrastructure within an enterprise.  According to Enterprise Management Associates, a leading IT and data management research and consulting firm, a typical enterprise has nearly 11 such tools running in its environment, typically including IT Operations Management (ITOM) tools and IT Service Management (ITSM) tools. As any IT professional can attest, while there is significant overlap, some of these tools tend to be deficient in their capabilities, and they can be a significant source of noise and distraction, especially when it comes to false alerts and tickets. Yet, through RIM, IT organizations can eliminate many of these tools and consolidate their IT operations into a single-pane-of-glass view, which can result in significant cost savings.

(2)     Secondly, by leveraging RIM, IT teams can be restructured and organized into shared services delivery groups, which can result in better utilization of skilled resources, while supporting the transformation of IT into a new model that acts as a service provider to business units.  Combine these elements of RIM with remote service delivery, and not only will you improve economies of scale and scope, but you will also promote cost savings.

(3)     Thirdly, RIM services consistently look to automation, analytics, and best practices to promote cost savings in the enterprise. Manual processes and runbooks are not only costly, but also time consuming and error prone. Yet, to automate processes effectively, IT organizations must rely on methodologies, scripts, and tools. This is where RIM comes into play. In fact, within any enterprise, 60-80% of manual processes and runbooks can easily be automated with RIM.

Download this free whitepaper to learn how to avoid focusing on “keeping the lights on” and allow your team to focus on strategic initiatives.

Beyond Cost Savings and Greater Efficiency: Building a Case for RIM

In addition to reducing routine spending and improving the efficiency of your IT operations, there are several other benefits to leveraging third-party RIM services:

  • 24×7 IT operations support.  Third-party RIM services often provide 24×7 IT ops support.  IT organizations benefit from around the clock monitoring and management of their IT infrastructures without additional headcount, or straining internal resources, which saves operating costs.
  • Be the first to know. 24×7 IT operations support means that you are always the first to know when customer-facing IT systems such as the company’s website, online shopping portal, mobile apps and cloud-based solutions go down. And, the issue is resolved in many cases by RIM services teams before the end-user has time to notice.
  • Skills and expertise. Third-party RIM services can provide your IT organization with certified engineers in various IT infrastructure domains. These engineers are responsible for monitoring, alerting, triaging, ticketing, incident management, and the escalation of critical outages or errors to you and your IT staff, if they cannot be immediately resolved. In addition, they may also be available on an on-demand basis if you are looking for skills and expertise in a specific domain.

The bottom line: by leveraging RIM services, IT organizations like yours can not only enhance their service capabilities and bolster service levels, but also say goodbye to the fire drills and late-night calls that plague IT staff.  Proactive management of your IT infrastructure through RIM ensures that it is always running at peak performance.

To hear more from Chris, visit the NetEnrich blog

To learn more about how GreenPages can help you monitor and manage your IT Operations fill out this form

Part 2: Want to Go Cloud? What’s the Use Case?

By Lawrence Kohan, Senior Consultant, LogicsOne

 

Recap:

In Part 1 of this blog post, I started by reiterating the importance of having a strategy for leveraging the Cloud before attempting to migrate services to it, in order to achieve the best results.  Using an example use case, I showed the basic pros and cons of moving a company’s e-mail services to the Cloud.  Then, delving further into the additional factors to consider based on the size and breadth of the company, I showed that in that particular scenario an e-mail migration to the Cloud would provide more benefit to small businesses and startups than to medium or large enterprises, for which such a migration may actually be more detrimental than helpful.

Use the Cloud to level the playing field!

Historically, a small business is typically at a disadvantage to its larger counterparts, as it generally has less capital to work with.  However, the Cloud Era may prove to be the great equalizer.  The nimbleness and portability of a small business may prove to be quite an advantage when it comes to reducing operating costs to gain a competitive edge.  A small business with a small systems footprint may be able to consider strategies for moving most, if not all, of its systems to the Cloud.  A successful migration would greatly reduce company overhead and administrative burden, and could even free up office space and real estate by repurposing decommissioned server rooms.  Thus, a small business is able to leverage the Cloud to gain a competitive advantage in a way that is (most likely) not an option for a medium or large enterprise.

So, what is a good Cloud use case for a medium to large business?

The Cloud can’t be all things to all people.  However, the Cloud can be many things to many people.  While the enterprise may not have the same options as the small business, they still have many options available to them to reduce their costs or expand their resources to accommodate their needs in a cost-effective way.

Enterprise Use Case 1: Using IaaS for public website hosting

A good low-risk Cloud option that an enterprise can readily consider: moving non-critical, non-confidential informational data to the Cloud.  A good candidate for initial Cloud migration would be a corporate website with marketing materials or information about product or service offerings.  It is important that a company’s website containing product photos, advertising information, hours of operation and location and contact information is available 24/7 for customer and potential customer access.  In this case, the enterprise can leverage a Cloud Service Provider’s Infrastructure as a Service (IaaS) in order to host their website.  For a monthly service fee, the Cloud Service Provider will host the enterprise’s website on redundant, highly available infrastructure and proactively monitor the site to ensure maximum uptime.  (The enterprise should consider the Cloud Service Provider’s SLA when determining their uptime needs).

By this strategy, the enterprise is able to ensure maximum uptime for its important revenue-generating web materials, while offloading the costs associated with hosting and maintaining the website.  At the same time, the data being presented online is not confidential in nature, so there is little risk in having it hosted externally.  This is an ideal use case for a Public Cloud.

In addition to the above, a Hybrid Cloud approach can also be adopted: the public-facing website could conduct e-commerce transactions by redirecting purchase requests to privately hosted e-commerce applications and customer databases that are secure and PCI compliant.  Thus, we have an effective, hybrid use of Cloud resources to leverage high availability, while still keeping confidential customer and credit card data secure and internally hosted. We’ll actually be hosting a webinar tomorrow with guest speakers from Forrester Research and Gravitant that will talk about hybrid cloud management. If you’re interested in learning more about how to properly manage your IT environment, I’d highly recommend sitting in.

Enterprise Use Case 2: Using Cloud Bursting to accommodate increased resource demands as needed

Another good Public Cloud use case: let’s say a company, operating at maximum capacity, has periodic or seasonal needs to accommodate spikes in workload.  This could be either increased demands on applications and infrastructure, or extra staff needed to perform basic clerical or administrative functions on a limited basis.  It would be a substantial investment to procure additional office space and computer hardware for limited use, not to mention the additional expenses of maintaining that hardware and office space.  In such a case, an enterprise using a Cloud Service Provider’s IaaS would be able to rapidly provision virtual servers and desktops that can be accessed via space-saving thin clients, or even remotely.  Once the project is completed, those virtual machines can be deleted.  Upon future need, new virtual machines can easily be provisioned in the same way.  And most importantly, the company only pays for what it needs, when it needs it.  This is another great way for an enterprise to leverage the Cloud’s elasticity to accommodate its dynamic needs!
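The cloud-bursting argument above boils down to simple capacity arithmetic: provision roughly ceil(demand / per-instance capacity) servers, bounded by a floor for baseline service and a ceiling for budget, and release the extras when the spike passes. A hypothetical sketch (the function and parameters are illustrative, not any provider’s actual API):

```python
import math

def instances_needed(demand: float, per_instance_capacity: float,
                     min_instances: int = 1, max_instances: int = 50) -> int:
    """How many cloud instances to keep provisioned for the current demand.

    min_instances keeps a baseline running; max_instances caps spend.
    """
    wanted = math.ceil(demand / per_instance_capacity)
    return max(min_instances, min(max_instances, wanted))

# Quiet day: baseline only.  Seasonal spike: burst out, pay only while it lasts.
print(instances_needed(80, per_instance_capacity=100))     # -> 1
print(instances_needed(2350, per_instance_capacity=100))   # -> 24
```

Run periodically against a real load metric, this is the whole economic case: the delta between 24 instances for a week and 24 instances year-round.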

Enterprise Use Case 3: Fenced testing environments for application development

Application teams often need to simulate production conditions for testing without affecting actual production.  When dealing with traditional hardware infrastructure, setting up a dedicated development infrastructure can be an expensive and time-consuming proposition.  In addition, the Apps team may require many identical setups for multiple teams’ testing, or to simulate many scenarios using the same parameters, such as IP and MAC addresses.  With traditional hardware setups, this is an extremely difficult task to achieve in a productive, isolated manner.  However, with Cloud services such as VMware’s vCloud Suite, isolated, fenced applications can be provisioned and mass-produced quickly for an Apps team’s use without affecting production, and then rapidly decommissioned as well.  In this particular example use case of the vCloud Suite, VMware’s Chargeback Manager can also be used to get a handle on the costs associated with development environment setup, and can then provide showback and chargeback reports to a department, organization, or other business entity.  This is yet another good example of an efficient and cost-effective use of the Cloud to solve a complex business need.

 

Consider your strategy first!  Then, use the Cloud to your advantage!

So, as we have seen, the Cloud offers various time-saving, flexible, and efficient solutions that can accommodate businesses of any size or nature.  However, a successful transition to the Cloud depends, more than anything else, on the initial planning and strategy that goes into its adoption.

Of course, there are many other options and variables to consider in a Cloud adoption strategy, such as choice of providers, consulting services, etc.  However, before even looking into the various Cloud vendors and options, start out by asking the important internal questions, first:

  • What are our business goals?
  • What are our intended use case(s) for the Cloud?
  • What are we looking to achieve from its use?
  • What is the problem that we are trying to solve?  (And is the Cloud the right choice for that particular problem?)
  • What type of Cloud service would address our need? (Public, Private, Hybrid?)
  • What is our timetable for transition to the Cloud?
  • What is our plan?  Is it feasible?
  • What is our contingency plan?  (How do we backup and/or back-out?)

When a company has solid answers to questions such as the above, it is ready to begin its own journey to the cloud.

 

Last chance to register for tomorrow’s webinar on leveraging cloud brokerage. Speakers from GreenPages, Forrester Research, and Gravitant.

Is There Such a Thing as Just-In-Time IT?

By Praveen Asthana, Chief Marketing Officer, Gravitant

 

The concept of “Just-in-Time” was pioneered in the manufacturing supply chain as a critical way to reduce costs by minimizing inventory.   Implementing a just-in-time system that can handle unexpected demand is not a trivial undertaking.  It requires the confluence of a number of disciplines such as analytics, statistics, sourcing, procurement, production management, brokerage and economics.

An interesting new idea is to take this concept pioneered in manufacturing and apply it to Information Technology resources.  Doing so provides an effective way to meet dynamically changing needs while minimizing the inventory of unused IT resources across a set of cloud services platforms and providers.

Case Study:  Election Day 2012.

With the growing popularity of e-voting and use of the Internet as an information resource on candidates and issues, the Secretary of State’s office for one of the most populous U.S. states knew that demand for IT resources would go up significantly on election day.  But they didn’t know exactly how much, and they didn’t want to buy extra infrastructure for a temporary surge in demand.  Even if they could come up with a good guess for the demand, deploying the right amount of resources in a timely manner would be challenging.  Given the time it normally took (months) to deploy and provision new servers, the Secretary of State’s office knew they couldn’t use traditional means to procure compute and storage capacity to meet this demand.

As it turned out, demand went up over 1000% to over five million hits on the state voting web site by noon on Election Day.


Fortunately the state had deployed a novel capability based on a cloud brokerage and management platform to seamlessly provision IT resources in real time from multiple public cloud sources to meet the variability in demand.  As a result, this demand was fully met without needing to do complicated planning or buy unneeded infrastructure. I’ll actually be speaking on a webinar with Chris Ward, CTO at GreenPages-LogicsOne and Dave Bartoletti, a Senior Analyst at Forrester Research on June 12th to talk about leveraging cloud brokerage and the impact it can have on managing your IT environment.

Minutes, not months: that’s what enterprise users want when it comes to having IT resources available to meet changing business needs or develop new applications.

However, users find this to be an extraordinary challenge: most IT departments today struggle with rigid processes, a round-robin of tasks and approvals across multiple silos and departments, and manual provisioning steps.  All this adds significant time to the deployment of IT resources, leaving users waiting for months before the resources they need become available.

How do users respond to such delays?  By going around their IT departments and directly accessing cloud services.  Often termed ‘rogue IT’ or ‘shadow IT,’ such out of process actions expose the company to financial risk, security risks, and operational risk.

The Solution: Just-in-time IT with Real-Time Governance

Just-in-time IT is not merely about using private or public cloud services.   It is about engineering the end-to-end IT supply chain so it can be agile and respond immediately to dynamic business needs.  To achieve this in practice, you need:

  1. Effective assessment and strategy
  2. Self-service catalog of available IT resources
  3. Collaborative solution design
  4. Rapid approval workflow
  5. A sourcing platform that allows you to select the right supply chain partners for your business need or workload profile
  6. Single-button provisioning of resources
  7. Transparency across the IT supply chain
  8. Sophisticated supply-demand analytics
  9. Elastic sourcing of resources
  10. Governance: dynamic control of resources based on goal-based optimization of budget, resource usage, and SLAs

 

The first critical aspect of a real-time supply chain is identifying, sourcing, and procuring best-fit cloud platforms and providers (internal or external) to meet your unique business needs.

The second critical aspect of ensuring just-in-time IT is effective real-time governance, for this is the mechanism by which you truly manage the elasticity of cloud resources and ensure that IT resource inventory is minimized.  This also has the additional benefit of eliminating shadow or rogue IT.

As I mentioned above, if you’re interested in learning more on this topic I would highly recommend registering for the upcoming webinar “What’s Missing In Today’s Hybrid Cloud Management – Leveraging Cloud Brokerage” being held on June 12th. This should be a great session and there will be time for Q & A at the end.

About the Author:

Praveen Asthana is Chief Marketing Officer of Gravitant (www.gravitant.com), a cloud services brokerage and management company.  Prior to joining Gravitant, Praveen was Vice President of Marketing and Strategy for Dell’s $13B Enterprise Solutions Division.

Want to Go Cloud? What’s the Use Case?

By Lawrence Kohan, Senior Consultant, LogicsOne

This is the first of a two-part blog series intended to provide practical, real world examples of when it makes sense to use the cloud and when it does not.

We’re well into an exciting new era in the technology world.  The buzzwords are flying around at light speed, and talk of “Cloud” and “software-defined-everything” is all the rage.

Advances in virtualization, which allow software processes to be decoupled from underlying hardware, are opening up amazing possibilities for moving workloads around as needed, either between racks in a datacenter or even between datacenters!  In addition, the concept of “Cloud” is very exciting in the possibilities it offers businesses to leverage these advances by moving workloads offsite for greater availability, redundancy, and disaster recovery.

Indeed, the promise of the Cloud as a whole is to provide IT as a service.  It’s a way of offering companies resources on a metered-usage basis, so that they can consume, grow, or reduce resources as needed, and only pay for what they consume.  The hope is to free a business and its IT staff from the mundane daily details and repetitive administrative burdens of keeping the business functioning, allowing them to be more strategic with their time and efforts.  In the Cloud Era, servers and desktops can be provisioned, configured, and deployed in minutes instead of weeks!  The time saved allows the business to focus on the other areas that make it more profitable, such as marketing and advertising strategies, application/website development, and the betterment of its products and services.

Cloudy Conditions Ahead

All of this sounds like a wonderful dream.  However, before jumping in, it is important to understand what the business goals are.  What is it you intend to get out of the Cloud?  How do you intend to leverage it to your best advantage?  These questions and answers must come first before any decision is made regarding software vendors or Cloud service providers to be used.  The promise of the Cloud is tremendous gains in efficiency, but only when it is adopted and utilized correctly.

 

Register for our upcoming free webinar on June 12th on what’s missing in hybrid cloud management today. Speakers include Chris Ward from GreenPages, Praveen Asthana from Gravitant, and David Bartoletti, a top analyst from Forrester Research.

 

To Host or Not to Host?

For starters, let’s look at a simple use case: Whether or not to host a company’s e-mail in the Cloud.

Pros:

  • Hosting email will be billed on a per-usage basis, either by number of user mailboxes, number of emails sent/received, or storage used.
  • Day-to-day administration, availability, fault tolerance, and backups are all handled by the service provider.  Little administration is needed aside from creating user accounts and mailboxes.
  • Offsite-hosted email still has the same look-and-feel as on-premise email, and can be accessed remotely, in the same ways, from anywhere.  Most users don’t even know the difference!

Cons:

  • Company is subject to the outages and downtime windows of the service provider.  (As long as an outage is planned rather than an unplanned disaster, steps can be taken to ensure continued e-mail delivery, but systems may be unavailable for periods of time, usually on weekends or overnight.)
  • Initial migration and large data transfers can be an administrative burden.

There are factors that can be either positives or negatives depending on the business size and need.  For example, a small startup company with only a few people needs to be extremely budget conscious.  In their case, it would certainly make more sense financially to outsource their e-mail for a monthly fee instead of installing and maintaining their own internal email servers, which, after hardware, software, and licensing costs, would run five figures, not to mention needing at least one dedicated person to maintain them.  This is certainly not a cost-effective option for a small, young company trying to get off the ground.

 

Download this free whitepaper to learn more about how organizations can revolutionize the way they manage hybrid cloud environments.

 

At the same time, a very large enterprise with thousands of mailboxes may find the process of migration to be an expensive, time-consuming administrative burden.  While offsite email would offer good availability and safeguards against system failure, perhaps even above and beyond what the enterprise currently utilizes, it is also a substantial risk: an outage at the Cloud Provider could affect the enterprise’s email access.  The same risk would apply to a small business as well; however, the smaller and more localized the business, the more likely it is to adapt to an e-mail outage and resume intra-office communications via secondary means, a contingency plan that is more difficult to act upon for a larger global enterprise.  And, yes, an enterprise that hosts e-mail internally has the same risk of an outage; however, that enterprise can respond to an internal e-mail outage immediately and ascertain how long the outage will be, instead of being at the mercy of the Cloud Provider’s timetable and troubleshooting efforts.

Therefore, in our sample “hosted e-mail” use case, moving e-mail services to the Cloud is an option well worth considering for a smaller business, but may not provide much value, if any, for the enterprise.

In the second part of this two-part blog series, I will cover when it makes sense for medium to large businesses to utilize the cloud. In the meantime, I’d love to hear your thoughts!

Webinar June 12th, 11am-12pm EST: “What’s Missing in Today’s Hybrid Cloud Management – Leveraging Cloud Brokerage.” Speakers from Forrester, GreenPages, and Gravitant. Register here!

Questions Around Uptime Guarantees

Some manufacturers recently have made an impact with a “five nines” uptime guarantee, so I thought I’d provide some perspective. Most recently, I’ve come in contact with Hitachi’s guarantee. I quickly checked with a few other manufacturers (e.g. Dell EqualLogic) to see if they offer that guarantee for their storage arrays, and many do…but realistically, no one can guarantee uptime because “uptime” really needs to be measured from the host or application perspective. Read below for additional factors that impact storage uptime.

Five Nines is 5.26 minutes of downtime per year, or 25.9 seconds a month.

Four Nines is 52.6 minutes/year, which is one hour of maintenance, roughly.
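These downtime budgets are just (1 − availability) × period; the 25.9-second figure assumes a 30-day month. A quick sanity-check in Python (the function name is mine):

```python
def allowed_downtime_seconds(nines: int, period_seconds: float) -> float:
    """Downtime budget for an availability of N nines over a given period."""
    availability = 1 - 10 ** (-nines)     # e.g. 5 nines -> 0.99999
    return (1 - availability) * period_seconds

YEAR = 365 * 24 * 3600     # 365-day year
MONTH = 30 * 24 * 3600     # 30-day month

print(allowed_downtime_seconds(5, YEAR) / 60)   # ~5.26 minutes/year
print(allowed_downtime_seconds(5, MONTH))       # ~25.9 seconds/month
print(allowed_downtime_seconds(4, YEAR) / 60)   # ~52.6 minutes/year
```

Seen this way, a single controller upgrade that pauses I/O for a minute has already blown the five-nines budget for two months.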

Array controller failover in EQL and other dual controller, modular arrays (EMC, HDS, etc.) is automated to eliminate downtime. That is really just the beginning of the story. The discussion with my clients often comes down to a clarification of what uptime means – and besides uninterrupted connectivity to storage, data loss (due to corruption, user error, drive failure, etc.) is often closely linked in people’s minds, but is really a completely separate issue.

What are the teeth in the uptime guarantee? If the array does go down, does the manufacturer pay the customer money to make up for downtime and lost data?

{Register for our upcoming webinar on June 12th “What’s Missing in Hybrid Cloud Management – Leveraging Cloud Brokerage” featuring guest speakers from Forrester and Gravitant}

There are other array considerations that impact “uptime” besides upgrade or failover.

  • Multiple drive failures are a real possibility, since most drives are purchased in batches. How does the guarantee cover this?
  • Very large drives must be in a suitable RAID configuration to improve the chances that a RAID rebuild will be completed before another URE (unrecoverable read error) occurs. How does the guarantee cover this?
  • Dual-controller failures do happen to every array maker, although I don’t recall it happening with EQL; even a VMAX went down in Virginia within the last couple of years. How does the guarantee cover this?
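The rebuild-vs-URE point above is worth putting in numbers. Assuming the commonly published URE rate of one error per 10^14 bits read for consumer-class SATA drives (an assumption for illustration; enterprise drives are typically rated 10^15 or better), the odds of completing a large RAID 5 rebuild without hitting a URE are worse than many people expect. A rough sketch:

```python
import math

def p_ure_during_rebuild(drives_read, drive_tb, ure_rate_bits=1e14):
    """Probability of at least one unrecoverable read error (URE) while
    reading `drives_read` full drives of `drive_tb` TB each, given a URE
    rate of one error per `ure_rate_bits` bits read (Poisson approximation)."""
    bits_read = drives_read * drive_tb * 1e12 * 8  # TB -> bits
    expected_errors = bits_read / ure_rate_bits
    return 1 - math.exp(-expected_errors)

# RAID 5 rebuild of a 7-drive array of 4 TB disks: read all 6 surviving drives
print(f"{p_ure_during_rebuild(6, 4):.0%} chance of a URE during rebuild")
```

With those assumed numbers the rebuild is more likely to hit a URE than not, which is exactly why large drives belong in RAID 6 or similar double-parity configurations.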

 

The uptime “promise” doesn’t include all the connected components. Nearly every environment has a single path, some other single point of failure (SPOF), or a configuration issue that must be addressed to ensure uninterrupted storage connectivity.

  • Are applications, hosts, network, and storage all capable of automated failover at sub-10 ms speeds? For a heavily loaded Oracle database server to keep working through a dual-controller “failure” (which is what an upgrade resembles), it must be connected to the array via multiple paths and use all available paths.
  • Some operating systems (Windows, for example) don’t support an automatic retry of paths, nor do all applications resume processing automatically without I/O errors, outright failures, or reboots.
  • You often need to make temporary changes to OS and iSCSI initiator configurations to support an upgrade – e.g. increasing timeout values.
  • Also, the MPIO software makes a difference. Dell’s EQL MEM helps a great deal in a VMware cluster to ensure proper path failover, as do EMC PowerPath and Hitachi Dynamic Link Manager. Dell also offers an MS MPIO extension and DSM plugin to help Windows recover from a path loss more resiliently.
  • Network considerations are paramount, too.
    • Network switches often take 30 seconds to a few minutes to come back online after a power cycle or reboot.
    • Also in the network, if non-stacked switches are used, RSTP must be enabled; if it isn’t, or anything else is misconfigured, connectivity to storage will be lost.
    • Flow control must be enabled, among other considerations (disabling unicast storm control, for example), to ensure the network is resilient enough.
    • If you aren’t using stacked switches, link aggregation must be dynamic, or the iSCSI network might not support failover redundancy.
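All of those connected components form an availability chain: with no redundancy, end-to-end availability is the product of the individual component availabilities, so a five-nines array sitting behind even a few four-nines components cannot deliver five nines to the application. A quick illustration with hypothetical numbers:

```python
def chain_availability(*component_availabilities):
    """End-to-end availability of components in series (no redundancy):
    the product of the individual availabilities."""
    result = 1.0
    for a in component_availabilities:
        result *= a
    return result

# Hypothetical chain: five-nines array behind a four-nines switch, host, and app stack
a = chain_availability(0.99999, 0.9999, 0.9999, 0.9999)
minutes_down = (1 - a) * 365 * 24 * 60
print(f"end-to-end availability {a:.5f} -> ~{minutes_down:.0f} min/year of downtime")
```

Under those assumed figures the host sees roughly three nines and change, not five, which is the point about measuring uptime from the application’s perspective.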

 

Nearly every array manufacturer will say that upgrades are non-disruptive, but that is true only at the most simplistic level. Upgrades to a unified storage array, for example, almost always involve disruption to file-system presentation. Clustered or multi-engine frame arrays (HP 3PAR, EMC VMAX, NetApp, Hitachi VSP) offer the best hope of achieving five nines, or even better. We have customers with VMAX and Symmetrix arrays that have had 100% uptime for a few years, but those arrays are multi-million-dollar investments. Dual-controller modular arrays, like those from EMC and HDS, can’t really offer that level of redundancy, and that includes EQL.

If the environment is very carefully and correctly set up for automated failover, as noted above, then those 5 nines can be achieved, but not really guaranteed.

 

The Buzz Around Software Defined Networking

By Nick Phelps, Consulting Architect, LogicsOne

 

http://www.youtube.com/watch?v=p51KAxPOrt4

 

One of the emerging trends in our industry that is stirring up some buzz right now is software defined networking. In this short video I answer the following questions about SDN:

 

  1. What is Software Defined Networking or SDN?
  2. Who has this technology deployed and how are they using it?
  3. What does SDN mean to the small to mid-market?
  4. When will the mid-market realize the benefits of SDN-based offerings?
  5. When will we hear more? When should we expect the next update?

 

What are your thoughts on SDN? I’d love to hear your comments on the video and my take on the topic!