Category Archives: hyperconverged infrastructure

How SimpliVity Gave Me Back My Weekend

At GreenPages, we have a well-outfitted lab environment that is used for customer-facing demos and as a sandbox where our technical team can learn, experiment with, and test various solutions in the market. We’ve been refreshing the lab for a couple of months but have kept a skeleton environment up and running for simple administrative remote access. As part of the refresh, we had been cleaning up old VMs, systems, and storage to reduce our footprint, and during that cleanup we moved several management VMs from an aging HP blade environment over to a small 2+1 SimpliVity OmniStack environment. I didn’t think much about it at the time; I just needed a place to put these VMs that had no ties to the older systems being decommissioned. The OmniStack also made sense because it had plenty of capacity and performance self-contained, freeing us from any reliance on external storage and older compute environments.

I just recently came back from a West Coast trip. While I was there, I needed to re-configure something so that a colleague could do some other configuration work. I brought up my RDP client to log in to the jump box terminal server we use to administer the lab, and I got an error that said my profile wouldn’t load. So I VPNed in to check out the VM, logged in as the local administrator, and quickly discovered the box had been pwned with ransomware and a good majority of the data files (my profile included) were encrypted. After saying a few choice words to myself, I investigated and determined that an old lab account with a less-than-secure password had been used to access the system. I got the account disabled and started thinking about how long it was going to take me either to attempt to ‘clean’ the box and get the files decrypted (assuming I could even find a tool to do it) or to just trash and rebuild the box. I figured that was going to take up most of my weekend, but then the thought crossed my mind that we had moved all of the management VMs over to the SimpliVity boxes.

For those who may not be aware, SimpliVity’s core value proposition is built around data protection via integrated backup, replication, and DR capabilities. I knew we had not configured any specific protection policies for those management VMs; we had simply dumped them into a newly created resource pool. But I figured it was worth a look. I logged into the vSphere client, checked the SimpliVity plugin for that terminal server VM and, lo and behold, it had been backed up and replicated on a regular basis from the moment it was put into the environment. From there, I simply went back a couple of days in the snap-in: right-click, restore VM. Within about half a second the VM had been restored; I powered it up, and within another five minutes I was logging into it via my RDP session from the West Coast. Bottom line: SimpliVity took a four-to-six-hour process and turned it into something that takes less than six minutes. I suggest you check it out. Thank you, SimpliVity, for being kind enough to donate some gear to our lab and for giving me some family time back this weekend!
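For readers who would rather script that recovery than click through the vSphere plugin, here is a minimal sketch of what the same restore might look like against SimpliVity’s OmniStack REST API. The controller address, credentials, and VM name are hypothetical, and the endpoint paths and parameters are assumptions to verify against the OmniStack API documentation for your release.

```python
# Hypothetical sketch: restore the most recent SimpliVity backup of a VM
# via the OmniStack REST API. Endpoints and field names are assumptions --
# check them against your OmniStack API documentation before use.
import requests

OVC = "https://ovc.lab.example.com"  # OmniStack Virtual Controller (hypothetical)
VM_NAME = "jumpbox-ts01"             # the compromised terminal server (hypothetical)

# Authenticate; OmniStack's API uses an OAuth2 password grant.
token = requests.post(
    f"{OVC}/api/oauth/token",
    auth=("simplivity", ""),  # assumed client id for the API
    data={"grant_type": "password", "username": "admin", "password": "secret"},
    verify=False,  # lab controllers often use self-signed certificates
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# Find the newest backup recorded for the VM.
backups = requests.get(
    f"{OVC}/api/backups",
    headers=headers,
    params={"virtual_machine_name": VM_NAME,
            "sort": "created_at", "order": "descending"},
    verify=False,
).json()["backups"]
latest = backups[0]

# Restore as a new VM so the infected original stays available for forensics.
requests.post(
    f"{OVC}/api/backups/{latest['id']}/restore",
    headers=headers,
    params={"restore_original": "false"},
    verify=False,
)
```

Restoring alongside the original, rather than overwriting it, is a deliberate choice here: it preserves the encrypted VM for investigation while getting a clean copy back online.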

By Chris Ward, CTO, GreenPages Technology Solutions

If you would like to discuss how SimpliVity could fit into your IT strategy, reach out to us here.

Putting the “Converged” in Hyperconverged Support

Today’s hyperconverged technologies, it seems, are here to stay. I mean, who wouldn’t want to employ a novel technology approach that “consolidates all required functionality” into a single infrastructure appliance, providing an “efficient, elastic pool of x86” resources controlled by a “software-centric” architecture? Then again, outside of the x86 component, it’s not as if we haven’t seen this type of platform before (hello, mainframe anyone?).

But this post is not about the technology behind HCI, nor about whether this technology is the right choice for your IT demands – it’s more about what you need to consider on day two, after your new platform is happily spinning away in your datacenter.  Assuming you have determined that the hyperconverged path will deliver technology and business value for your organization, why wouldn’t you extend that belief system to how you plan on operating it?

Today’s hyperconverged vendors offer very comprehensive packages that include some advanced support offerings. They have spent much time and energy (and VC dollars) creating monitoring and analytics platforms that are a definite advancement over traditional technology support packages. While technology vendors such as HP, Dell/EMC, Cisco, and others have for years provided phone-home monitoring and utilization/performance reporting capabilities, hyperconverged vendors have pushed these capabilities further with real-time analytics and automation workflows (e.g., Nutanix Prism and SimpliVity’s OmniWatch and OmniView). Additionally, these vendors have aligned support plans to business outcomes such as “mission critical,” “production,” “basic,” etc.

Now you are asking: Mr. Know-It-All, didn’t you just debunk your own argument? Au contraire, I say; I have just reinforced it…

Each hyperconverged vendor technology requires its own SEPARATE platform for monitoring and analytics.  And these tools are RESTRICTED to just what is happening INTERNALLY within the converged platform.  Sure, that covers quite a bit of your operational needs, but is it the COMPLETE story?

Let’s say you deploy SimpliVity for your main datacenter. You adopt the “Mission Critical” support plan, which comes with OmniWatch and OmniView. You now have great insight into how your OmniCube architecture is operating, and you can delve into the analytics to understand how your SimpliVity resources are being utilized. In addition, you get software support with 1-, 2-, or 4-hour response (depending on the channel you use – phone, email, or web ticket). You also get software updates and RCA reports. It sounds like a comprehensive, “converged” set of required support services.

And it is, for your selected hyperconverged vendor. What these services do not provide is a holistic view of how the hyperconverged platforms are operating WITHIN the totality of your environment. How effective is the networking that connects it to the rest of the datacenter? What about non-hyperconverged workloads, either on traditional server platforms or in the cloud? And how do you measure end-user experience if your view is limited to hyperconverged data points? Not to mention, what happens if your selected hyperconverged vendor is gobbled up by one of the major technology companies or, worse, closes when funding runs dry?

Adopting hyperconverged as your next-generation technology play is certainly something to consider carefully, and it has the potential to positively impact your overall operational maturity. You can reduce the number of vendor technologies and management interfaces, get more proactive, and make decisions based on real data analytics. But your operations teams will still need to determine whether the source of an impact is within the scope of the hyperconverged stack and covered by the vendor support plan, or whether it’s symptomatic of an external influence.

Beyond the awareness of health and optimized operations, there will be service interruptions. If there weren’t, we would all be in the unemployment line. Will a one-hour response be sufficient in a major outage? Is your operations team able to respond 24×7 with hyperconverged skills? And how will you consolidate governance and compliance reporting between the hyperconverged platform and the rest of your infrastructure?

Hyperconverged platforms can certainly enhance and help mature your IT operations, but they provide only part of the story. Consider carefully whether their operational and support offerings are sufficient for overall IT operational effectiveness. Look for ways to consolidate the operational information and data provided by hyperconverged platforms with the rest of your management interfaces into a single control plane, where your operations team can work more efficiently. If you’re looking for help, GreenPages can provide this support via its Cloud Management as a Service (CMaaS) offering.
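To make the “single control plane” idea concrete, below is a minimal sketch of the normalization step such consolidation requires: alerts from different domains get mapped into one common schema so they can be triaged in a single view. The payload shapes, field names, and sources here are hypothetical placeholders, not real vendor APIs.

```python
# Minimal sketch of a "single control plane": normalize alerts from an
# HCI platform and the network into one schema for unified triage.
# All payload shapes and field names below are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Alert:
    source: str      # e.g. "hci", "network", "cloud"
    severity: str    # normalized: "critical" | "warning" | "info"
    resource: str    # affected object (VM, switch port, instance)
    message: str
    raised_at: datetime

def normalize_hci(raw: dict) -> Alert:
    # Map a hypothetical OmniWatch-style payload into the common schema.
    sev = {"RED": "critical", "YELLOW": "warning"}.get(raw.get("state"), "info")
    return Alert("hci", sev, raw["object_name"], raw["description"],
                 datetime.fromtimestamp(raw["epoch"], tz=timezone.utc))

def normalize_snmp_trap(raw: dict) -> Alert:
    # Map a hypothetical network trap into the same schema.
    sev = "critical" if raw.get("level", 0) >= 5 else "warning"
    return Alert("network", sev, raw["ifDescr"], raw["trapText"],
                 datetime.now(tz=timezone.utc))

# A unified queue lets operators ask cross-domain questions, e.g. "did the
# HCI replication alert coincide with a switch uplink flap?"
inbox = [
    normalize_hci({"state": "YELLOW", "object_name": "omnicube-01",
                   "description": "Backup replication lagging",
                   "epoch": 1_700_000_000}),
    normalize_snmp_trap({"level": 6, "ifDescr": "Eth1/49",
                         "trapText": "Link down"}),
]
for a in sorted(inbox, key=lambda a: a.raised_at):
    print(f"[{a.severity:8}] {a.source}/{a.resource}: {a.message}")
```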

Convergence at this level is even more critical to ensure maximum support of your business objectives.

If you are interested in learning how GreenPages’ CMaaS platform can help you manage hyper-converged offerings, reach out!


By Geoff Smith, Senior Manager, Managed Services Business Development

Nutanix and Parallels RAS. The perfect partnership.

Desktop and application virtualization has transformed IT networks and is having a significant impact on the cloud computing space as well. Now is the time for datacenter transformation. With new technologies such as Nutanix Hyperconverged Infrastructure (HCI) solutions, managing datacenters has become easy and cost-effective. The need for hyperconverged infrastructure: traditional IT networks consist of […]

The post Nutanix and Parallels RAS. The perfect partnership. appeared first on Parallels Blog.

Parallels simplicity combined w/ HPE Hyper Converged innovation—a winning combination

The advent of cloud computing has transformed the way organizations manage their IT networks. Gone are the days when offices were tied to a physical location; the cloud has enabled organizations to set up virtual offices in remote locations. Along with flexibility and innovation, cloud computing presents certain challenges as well. Managing dynamically evolving VM […]

The post Parallels simplicity combined w/ HPE Hyper Converged innovation—a winning combination appeared first on Parallels Blog.

Guest Post: How to Minimize the Damage Done by Ransomware

Below is a guest post from Geoff Fancher, Vice President, Americas Channels at SimpliVity Corporation.

Have you ever woken up in the middle of the night, sweating profusely, scared half to death, and terrified that your data center was infected with ransomware? If so, you’ve had an IT nightmare. Just be thankful it was all just a dream and your data isn’t lost.

Though IT nightmares come in all forms, one thing all IT pros fear nowadays is ransomware. That’s because ransomware is becoming increasingly commonplace and is evolving to become even more vicious and harder to stop once it has entered an IT environment. The cost to business productivity can be crippling, and the resulting data loss can set a company back for days.

According to the Ponemon Institute, the average cost of IT downtime is $7,900 per minute. Per minute! The reason recovering from ransomware attacks can be so costly is that restoring from backups can take a long time, typically measured in hours or days depending on where and how the backups were stored. Also, depending on when the most recent backup took place, a lot of data created since that backup could be lost.
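To put that figure in perspective, here is a quick back-of-envelope calculation using the Ponemon number cited above; the restore durations are illustrative assumptions, not measurements.

```python
# Back-of-envelope downtime cost at the Ponemon figure cited above.
COST_PER_MINUTE = 7_900  # USD per minute of IT downtime (Ponemon Institute)

for label, minutes in [("15-minute restore", 15),
                       ("3-hour restore", 3 * 60),
                       ("2-day restore", 2 * 24 * 60)]:
    print(f"{label}: ${minutes * COST_PER_MINUTE:,}")
# 15-minute restore: $118,500
# 3-hour restore: $1,422,000
# 2-day restore: $22,752,000
```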

Because quick restores often aren’t an option and the cost of downtime due to ransomware is so high, many organizations are choosing to pay the ransom to get their data back. The most notable example comes from Hollywood Presbyterian Medical Center. The hackers infected the hospital’s computer systems, shutting down all communication between the systems, and demanded $17,000 to unlock them. The hospital, being in a high-pressure situation without the resources to quickly shut down its systems and restore from a recent backup, was forced to pay the ransom to receive the decryption key and get back online.

The key, then, to avoiding the ransom demanded by cybercriminals is to have a disaster recovery plan in place. You should know what to do if a ransomware attack happens. As the old adage goes, “Hope for the best. Prepare for the worst.” That’s the attitude to take, and the way to do it is to have a plan.

One company that instituted a solid disaster recovery plan just in the nick of time was an enterprise manufacturing company based in the Netherlands. The company was infected with ransomware while its IT partner was in the process of migrating VMs to a new hyperconverged infrastructure environment with built-in data protection. Luckily, most of the infected folders were already on the new solution, so the team was able to restore from its backups within fifteen minutes; just a day earlier, on the previous infrastructure, restoring to the most recent backup would have taken about three hours. The partner was also performing hourly backups on the new solution, so less than an hour’s worth of data was lost during the restore. Before deploying hyperconverged infrastructure, the partner had been backing up to tape every 12 hours, so the new solution saved the company about 11 hours of data loss. What a difference a day makes.

For a disaster recovery plan to be successful, the IT team needs to define recovery time objectives (RTOs) – how quickly systems must be restored after an outage – and recovery point objectives (RPOs) – how far back the nearest usable backup may be, i.e., how much data the business can afford to lose. Basically, businesses have to ask themselves two questions: How long can the business be down while waiting for the restore to take place? And how many hours of business-critical data can the company afford to lose? There are data protection plans for every size of company and for every budget. The first step in a data protection plan is defining the organization’s requirements.
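Those two questions translate directly into RTO and RPO targets, and any candidate protection scheme can be checked against them mechanically. Here is a minimal sketch; the targets are illustrative, and the two plans echo the numbers from the manufacturer example above (12-hour tape versus hourly hyperconverged backups).

```python
# Check a protection plan against illustrative RTO/RPO targets.
def meets_objectives(backup_interval_h, restore_time_h, rto_h, rpo_h):
    worst_data_loss_h = backup_interval_h  # data written since the last backup
    return restore_time_h <= rto_h and worst_data_loss_h <= rpo_h

targets = {"rto_h": 1, "rpo_h": 2}  # illustrative business targets

plans = {
    "12-hour tape, ~3 h restore": (12, 3),
    "hourly HCI backup, 15 min restore": (1, 0.25),
}
for name, (interval_h, restore_h) in plans.items():
    verdict = "meets" if meets_objectives(interval_h, restore_h, **targets) else "misses"
    print(f"{name}: {verdict} the targets")
```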

Hyperconverged infrastructure, for example, can dramatically cut down the hours it takes for businesses to recover from IT downtime. By making data efficient from the start of its lifecycle, businesses are able to quickly recover from a previous backup.

With SimpliVity hyperconverged infrastructure, companies are able to back up quickly and efficiently with minimal data loss, because SimpliVity’s solution is designed to meet even the most stringent RTOs and RPOs and to ensure business functions aren’t interrupted for long in case of a disaster or ransomware attack. If you’re heading to GreenPages’ Summit Event next week, definitely swing by the SimpliVity booth to chat!


Hyper-converged Infrastructure vs Converged Infrastructure

Hyper-converged infrastructure is the latest buzz in IT circles. Thanks to virtualization and cloud computing technology, businesses are now able to integrate multiple IT components into a single entity to remove silos, optimize costs, and improve productivity. Converged and hyper-converged infrastructures provide this flexibility to businesses. This article looks at the differences between […]

The post Hyper-converged Infrastructure vs Converged Infrastructure appeared first on Parallels Blog.

EMC World 2015: Event Recap

After EMC World 2015, I’m languishing in airports today in post-conference burnout – an ideal time to deliver a report on the news, announcements and my prognostications on what this means to our business.

The big announcements were delivered in General Sessions on Monday (EMC Information Infrastructure & VCE) and on Tuesday (Federation: VMware & Pivotal). The Federation announcements are more developer and futures oriented, although important strategically, so I’ll pass on that for now.

EMC and VCE have updated their converged and Hyperconverged products pretty dramatically. Yes, VSPEX Blue is Hyperconverged, however unfortunate the name is in linking an EVO:RAIL solution to a reference architecture solution.

The products can be aligned as:

  1. Block
  2. Rack
  3. Appliances

EMC World 2015

The VCE Vblock product line adheres to its core value proposition closely.

  1. Time from order to completely deployed on the data center floor in 45 days. (GreenPages will provide the Deploy & Implementation services. We have three D&I engineers on staff now.)
  2. Cross-component unified upgrades through a Release Candidate Matrix – every single bit of hardware is tested in major and minor upgrades to ensure compatibility: storage, switch, blade, add-ons (RecoverPoint, Avamar, VPLEX).
  3. Unified support – one call to VCE, not to all the vendors in the build

However, VCE is adding options and variety to make the product less monolithic.

  1. VXblock – this is the XtremIO version, intended for large VDI or mission-critical transactional deployments (trading, insurance, national healthcare claims processing). The Beast is a Vblock of eight 40 TB X-Brick nodes: 320 TB before dedupe and compression, or nearly 2 PB with realistic data reduction (see the quick capacity check after this list). Yes, that is Two Petabytes of All Flash Array. Remote replication is now fully supported with RecoverPoint.
  2. VXRack – this is a Vblock without an array, but it isn’t VSAN either. It is… ScaleIO, a software storage solution that pools server storage into a shared pool. The minimum configuration is 100 compute nodes, which can be dense performance (four-node form factor in a 2U chassis) or capacity. The nodes can be bare metal or run a hypervisor of any sort. This can scale to 328 petabytes. Yes, petabytes. This is web-scale, but they call it “Rack Scale” computing (first generation). More on that later…
  3. Vscale – networking! This is leaf-and-spine networking in a rack to tie a VXRack or Vblock deployment together, at scale. “One Ring to Rule Them All.” This is big, literally. Imagine ordering a petabyte installation of VXblock, VXRack, and Vscale, and rolling it onto the floor in less than two months.
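As promised above, a quick sanity check of the Beast’s capacity math. The data-reduction ratios are assumptions; actual dedupe and compression rates are workload-dependent.

```python
# Effective capacity of the "Beast": eight 40 TB X-Brick nodes, at a few
# assumed data-reduction ratios (actual ratios vary by workload).
bricks, tb_per_brick = 8, 40
raw_tb = bricks * tb_per_brick  # 320 TB before reduction

for ratio in (4, 6, 8):
    print(f"{ratio}:1 reduction -> {raw_tb * ratio / 1000:.2f} PB effective")
# A ~6:1 ratio yields about 1.92 PB, in line with the "nearly 2 PB" claim.
```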

So, that is Block and Rack. What about Appliance?

Enter VSPEX Blue, the EMC implementation of EVO:RAIL. This has definite value in…

  • Pricing
  • Unified management & support
  • The “app store” with
    • integrated backup (VDPA)
    • replication (vRPA)
    • Cloud Array integration (TwinStrata lives!), a virtual iSCSI controller that will present cloud storage to the system as a backup target or a capacity tier.

This post from Mike Colson provides a good explanation.

Future apps will include virus scanning, links to Public IaaS and others.

I set one up in the lab in 15 minutes, as advertised, although I had to wait for the configuration wizard to churn away after I initialized it and input all the networking. Professional Services will be required, as EMC requires PS for implementation. Our team is prepared to deploy this, and we can discuss how it compares to other hyperconverged appliances. Contact us for more information.

There are other announcements, some in sheer scale and some in desirable new features.

Data Domain Beast: DD9500, 58.7 TB/hr and 1.7 PB of capacity. This is rated at 1.5x the performance and 4x the scalability of the nearest competitor.

VPLEX News: The VPLEX Witness can now be deployed in the public Cloud (naturally EMC recommends the EMC Hybrid Cloud or vCloud Air). The Witness has to be outside the fault domains of any protected site, so where better than the Cloud? It is a very lightweight VM.

CloudArray (TwinStrata’s Cloud Array controller) is integrated with VPLEX. You can have a distributed volume spanning on-premises and cloud storage. I’m still trying to grasp the significance of this. The local cache for the CloudArray controller can be very fast, so this isn’t limited to latency-tolerant applications. The things you could do…

VPLEX is now available in a Virtual Edition (VPLEX/VE). This will obviously come with some caveats and restrictions, but this also is a fantastic new option for smaller organizations looking for the high availability that VPLEX provides, as well as data mobility and federation of workloads across metro distances.

VVOL: Chuck Hollis (@chuckhollis) led an entertaining and informative ‘Birds of a Feather’ session for VVOLs. Takeaway – this is NOT commonly deployed yet. Only a handful of people have even set it up, and mostly for test. This was in a room with at least 150 people, so high interest, but low deployment. Everyone sees the potential and is looking forward to real world policy based deployments on industry standard storage. This is an emerging technology that will be watched closely.

VNX/VNXe: I didn’t see or hear many striking features or upgrades in this product line, but an all-flash VNXe was trumpeted. I’ll be looking at its performance and design specifications more closely to see how it might fit targeted use cases or general-purpose storage for SMB and commercial-level customers. There is talk of virtualizing the VNX array, as well as Isilon, so pretty soon nearly every controller or device in the EMC portfolio will be available as a virtual appliance. This leads me to…

ViPR Controller and ViPR SRM: Software Defined Storage

ViPR Controller is definitely a real product with real usefulness. This is the automation and provisioning tool for a wide variety of infrastructure elements, allowing for creation of virtual arrays with policy based provisioning, leveraging every data service imaginable: dedupe, replication, snapshots, file services, block services and so on.

ViPR SRM is the capacity reporting and monitoring tool that provides the management of capacity that is needed in an SDS environment. This is a much improved product with a very nice GUI and more intuitive approach to counters and metrics.

I’d recommend a Storage Transformation Workshop for people interested in exploring how SDS can change the way (and cost) of how you manage your information infrastructure.

More on EVO:RAIL/VSPEX Blue

I met with Mike McDonough, the mastermind behind EVO:RAIL. He is indeed a mastermind. The story of the rise of EVO:RAIL as a separate business unit is interesting enough (300 business cases submitted, 3 approved, and he won’t say what the other mystery products are), but the implementation, strategy, and vision are what matter to us. The big factor here was boiling down the support cases to come up with the 370 most common reasons for support calls, all around configuration, management, and hardware. The first version of EVO:RAIL addressed 240 of those issues. Think of this as having a safety rail around a vSphere appliance to prevent these common and easily avoidable issues, without restricting the flexibility too much. The next version will most likely incorporate NSX; security and inspection are the emphases for the next iteration.

Partners and distributors were chosen carefully. GreenPages is one of only 9 national partners chosen for this, based on our long history as a strategic partner and our thought leadership! The tightly controlled hardware compatibility list is a strength, as future regression tests for software and other upgrades will keep the permutations down to a minimum. (By the way, the EMC server platform is Intel for VxRack, VSPEX Blue and, I think, for all of their compute modules across their products.) The competitive implication is that rival appliances built on white-box hardware, with commodity contracts allowing flexibility in drives, memory, and CPU, will face an exponentially more difficult task in maintaining the growing permutations of hardware versions over time.

Final Blue Sky note:

Rack Scale is an Intel initiative that promises hypervisors an interesting future of increased awareness of the underlying hardware, but it is a very forward-looking project. Read Scott Lowe’s thoughts on this.


As always, contact us for more details and in-depth conversations about how we can help you build the data center of the future, today.


By Randy Weis, Practice Manager, Information Infrastructure