Category Archives: Storage and Information Management

How SimpliVity Gave Me Back My Weekend

At GreenPages, we have a well-outfitted lab environment that is used for customer-facing demos and as a sandbox for our technical team to learn, experiment, and test various solutions in the market. We’ve been in the process of refreshing the lab for a couple of months but have kept a skeleton environment up and running for simple administrative remote access. As part of the refresh, we had been cleaning up old VMs, systems, and storage to reduce our footprint, and during that cleanup we moved several management VMs from an aging HP blade environment over to a small 2+1 SimpliVity OmniStack environment. I really didn’t think much about it at the time; I just needed a place to put these VMs that had no ties to the older systems being decommissioned. The OmniStack also made sense because it had plenty of self-contained capacity and performance, removing any reliance on external storage or the older compute environment.

I recently came back from a West Coast trip. While I was there, I needed to re-configure something so that a colleague could do some other configuration work. I brought up my RDP client to log in to the jump box terminal server we use to administer the lab, and I got an error that said my profile wouldn’t load. So I VPNed in to check out the VM, logged in as the local administrator, and quickly discovered the box had been pwned with ransomware and a good majority of the data files (my profile included) were encrypted. After saying a few choice words to myself, I investigated and determined that an old lab account with a less-than-secure password had been used to access the system. I got the account disabled and started thinking about how long it was going to take me either to attempt to ‘clean’ the box and get the files decrypted (assuming I could even find a tool to do it) or to just trash and rebuild it. I figured that was going to take up most of my weekend, but then the thought crossed my mind that we had moved all of the management VMs over to the SimpliVity boxes.

For those who may not be aware, SimpliVity’s core value proposition is all about data protection via integrated backup, replication, and DR capabilities. I knew we had not configured any specific protection policies for those management VMs; we had simply dumped them into a newly created resource pool. But I figured it was worth a look. I logged into the vSphere client, took a look at the SimpliVity plugin for that terminal server VM and, lo and behold, it had been backed up and replicated on a regular basis from the moment it was put into the environment. From there, I simply went back a couple of days in the snap-in, right-clicked, and chose restore VM. Within about half a second the VM had been restored; I powered it up, and within another five minutes I was logging into it via RDP from the West Coast. Bottom line: SimpliVity turned a four-to-six-hour process into something that took less than six minutes. I suggest you check it out. Thank you, SimpliVity, for being kind enough to donate some gear to our lab and for giving me some family time back this weekend!

By Chris Ward, CTO, GreenPages Technology Solutions

If you would like to discuss how SimpliVity could fit into your IT strategy, reach out to us here.

Putting the “Converged” in Hyperconverged Support

Today’s hyperconverged technologies are here to stay, it seems. I mean, who wouldn’t want to employ a novel technology approach that “consolidates all required functionality” into a single infrastructure appliance that provides an “efficient, elastic pool of x86” resources controlled by a “software-centric” architecture? I mean, outside of the x86 component, it’s not like we haven’t seen this type of platform before (hello, mainframe anyone?).

But this post is not about the technology behind HCI, nor about whether this technology is the right choice for your IT demands – it’s more about what you need to consider on day two, after your new platform is happily spinning away in your datacenter.  Assuming you have determined that the hyperconverged path will deliver technology and business value for your organization, why wouldn’t you extend that belief system to how you plan on operating it?

Today’s hyperconverged vendors offer very comprehensive packages that include some advanced support offerings. They have spent much time and energy (and VC dollars) creating monitoring and analytics platforms that are a definite advancement over traditional technology support packages. While technology vendors such as HP, Dell/EMC, Cisco and others have for years provided phone-home monitoring and utilization/performance reporting capabilities, hyperconverged vendors have pushed these capabilities further with real-time analytics and automation workflows (e.g., Nutanix Prism, SimpliVity OmniWatch and OmniView). Additionally, these vendors have aligned support plans to business outcomes such as “mission critical”, “production”, “basic”, etc.

Now you are asking: Mr. Know-It-All, didn’t you just debunk your own argument? Au contraire, I say, I have just reinforced it…

Each hyperconverged vendor technology requires its own SEPARATE platform for monitoring and analytics.  And these tools are RESTRICTED to just what is happening INTERNALLY within the converged platform.  Sure, that covers quite a bit of your operational needs, but is it the COMPLETE story?

Let’s say you deploy SimpliVity for your main datacenter. You adopt the “Mission Critical” support plan, which comes with OmniWatch and OmniView. You now have great insight into how your OmniCube architecture is operating, and you can delve into the analytics to understand how your SimpliVity resources are being utilized. In addition, you get software support with 1-, 2-, or 4-hour response (depending on the channel you use: phone, email, or web ticket). You also get software updates and RCA reports. It sounds like a comprehensive, “converged” set of required support services.

And it is, for your selected hyperconverged vendor.  What these services do not provide is a holistic view of how the hyperconverged platforms are operating WITHIN the totality of your environment.  How effective is the networking that connects it to the rest of the datacenter?  What about non-hyperconverged based workloads, either on traditional server platforms or in the cloud?  And how do you measure end user experience if your view is limited to hyperconverged data-points?  Not to mention, what happens if your selected hyperconverged vendor is gobbled up by one of the major technology companies or, worse, closes when funding runs dry?

Adopting hyperconverged as your next-generation technology play is certainly something to consider carefully, and it has the potential to positively impact your overall operational maturity. You can reduce the number of vendor technologies and management interfaces, get more proactive, and make decisions based on real data analytics. But your operations teams will still need to determine if the source of impact is within the scope of the hyperconverged stack and covered by the vendor support plan, or if it’s symptomatic of an external influence.

Beyond the awareness of health and optimized operations, there will be service interruptions. If there weren’t, we would all be in the unemployment line. Will a one-hour response be sufficient in a major outage? Is your operational team able to respond 24×7 with hyperconverged skills? And how will you consolidate governance and compliance reporting between the hyperconverged platform and the rest of your infrastructure?

Hyperconverged platforms can certainly enhance and help mature your IT operations, but they provide only part of the story. Consider carefully whether their operational and support offerings are sufficient for overall IT operational effectiveness. Look for ways to consolidate the operational information and data provided by hyperconverged platforms with the rest of your management interfaces into a single control plane, where your operations team can work more efficiently. If you’re looking for help, GreenPages can provide this support via its Cloud Management as a Service (CMaaS) offering.

Convergence at this level is even more critical to ensure maximum support of your business objectives.

If you are interested in learning how GreenPages’ CMaaS platform can help you manage hyper-converged offerings, reach out!


By Geoff Smith, Senior Manager, Managed Services Business Development

Disruption in the Storage Market: Advances in Technology and Business Models

New technologies, business models, and vendors have led to major disruption in the storage market. Watch the video below to hear Randy Weis discuss the evolution of flash storage, how new business models have driven prices down, and the vendors that are making it possible.

Or watch on YouTube


Did you miss VMworld? Register for our upcoming webinar to get all of the most important updates from Las Vegas and Barcelona.

Azure Site Recovery: 4 Things You Need to Know

Disaster recovery has traditionally been a complex and expensive proposition for many organizations. Many have chosen to rely on backups of data as the method of disaster recovery. This approach is cost-effective; however, it can result in extended downtime during a disaster while new servers are provisioned (the Recovery Time Objective, or RTO) and potentially large losses of data created between the time of the last backup and the time of the failure (the Recovery Point Objective, or RPO). In the worst-case scenario, these backups are not viable at all and there is a total loss. For those who have looked into more advanced disaster recovery models, the complexity and costs of such a system quickly add up. Azure Site Recovery helps bring disaster recovery to all companies in four key ways.
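To make those two metrics concrete, here is a minimal sketch in Python of the worst-case exposure a backup-only strategy implies (all figures are hypothetical examples, not Azure numbers):

```python
# Hypothetical RPO/RTO math for a backup-only DR strategy.
backup_interval_hours = 24            # e.g., nightly backups
reprovision_and_restore_hours = 72    # time to stand up new servers and restore data

# Worst case, disaster strikes just before the next backup runs,
# so every change made since the last backup is lost.
worst_case_rpo_hours = backup_interval_hours

# Downtime is dominated by provisioning replacements and restoring from backup.
rto_hours = reprovision_and_restore_hours

print(f"Worst-case RPO: {worst_case_rpo_hours} hours of lost data")
print(f"RTO: {rto_hours} hours of downtime")
```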


Azure Site Recovery makes disaster recovery easy by delivering it as a cloud-hosted service

Azure Site Recovery lives within the Microsoft cloud and is controlled and configured through the Azure Management Portal. There is no requirement to patch or maintain servers; it’s disaster recovery orchestration as a service. Using Site Recovery does not require that you use Azure as the destination of replication; it can protect your workloads between two company-owned sites. For example, if you have a branch office and a home office that both run VMware or Hyper-V, you can use Azure Site Recovery to replicate, protect, and fail over workloads between your existing sites. It also has the optional ability to replicate data directly to Azure, which can be used to avoid the expense and complexity of building and maintaining a disaster recovery site.


Azure Site Recovery is capable of handling almost any source workload and platform

Azure Site Recovery offers an impressive list of platforms and applications it can protect. It can protect any workload running on VMware virtual machines on vSphere or ESXi, on Hyper-V VMs with or without System Center Virtual Machine Manager and, yes, even physical workloads can be replicated and failed over to Azure. Microsoft has worked internally with its application teams to make sure Azure Site Recovery works with many of the most popular Microsoft solutions, including Active Directory, DNS, web apps (IIS, SQL), SCOM, SharePoint, Exchange (non-DAG), Remote Desktop/VDI, Dynamics AX, Dynamics CRM, and Windows File Server. They have also independently tested protecting SAP, Linux (OS and apps), and Oracle workloads.


Azure Site Recovery has predictable and affordable pricing

Unlike traditional disaster recovery products that require building and maintaining a warm or hot DR site, Site Recovery allows you to replicate VMs to Azure. Azure Site Recovery offers a simple pricing model that makes it easy to estimate costs. For virtual machines protected between company-owned sites, it is a flat $16/month per protected virtual machine. If you are protecting your workloads to Azure, it is $54/month per protected server. In addition, the first 31 days of protection for any server are free, which lets you try out and test Azure Site Recovery before you have to pay for it. It also means you can use Azure Site Recovery to migrate your workloads to Azure for free.
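A back-of-the-envelope cost estimate is easy to sketch from those list prices (Python; prices as quoted in this post and subject to change):

```python
# Monthly ASR cost estimate using the list prices quoted above.
SITE_TO_SITE_PER_VM = 16   # $/month, protection between company-owned sites
TO_AZURE_PER_VM = 54       # $/month, replication to Azure
FREE_DAYS = 31             # first 31 days per protected server are free

def monthly_cost(site_to_site_vms: int, to_azure_vms: int) -> int:
    """Steady-state monthly cost after the free period expires."""
    return site_to_site_vms * SITE_TO_SITE_PER_VM + to_azure_vms * TO_AZURE_PER_VM

# Example: 10 VMs protected branch-to-HQ plus 5 VMs replicated to Azure.
print(f"${monthly_cost(10, 5)}/month after the {FREE_DAYS}-day free period")  # $430/month
```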


Azure Site Recovery is secure and reliable

Azure Site Recovery continuously monitors the replication and health of the protected workloads from Azure. If data cannot be replicated, you can configure alerts to email you a notification. Protecting the privacy of your data is a top priority in Site Recovery: all communication between your on-premises environment and Azure is sent over SSL-encrypted channels, and all of your data is encrypted both in transit and at rest in Azure. Azure Site Recovery also lets you perform a test failover without impacting your production workloads.


For these reasons, companies should be considering adding Azure Site Recovery to their business continuity and disaster recovery toolbox.


[If you’re looking for more Microsoft resources, download our recent webinar around strategies for migrating to Office 365]


By Justin Gallagher, Enterprise Consultant

Infinio Blog: Executive Viewpoint 2016 Prediction

This post originally appeared on Virtual-Strategy Magazine and is authored by Scott Davis, CTO at Infinio, a GreenPages partner.  It does not necessarily reflect the views or opinions of GreenPages Technology Solutions.


It’s that time of year for CTO predictions. The rate of innovation and disruption across IT is certainly accelerating, providing ample opportunities for comment. Although there is a significant amount of disruptive change going on across many disciplines, I wanted to primarily focus on storage observations for 2016.

Emergence of Storage-class Memory

Toward the end of 2016, we’ll see the initial emergence of a technology that I believe will become the successor to flash. This new storage technology (storage-class memory, or SCM) will fundamentally change today’s storage industry just as dramatically as flash changed the hard drive industry. Intel and Micron call one version 3D XPoint, and HP and SanDisk have joined forces on another variant.

SCM is persistent memory technology – 1,000 times faster than flash, 1,000 times more resilient, and unlike flash, it delivers symmetric read/write performance. SCM devices connect to memory slots in a server and they are mapped and accessed similarly to memory, although they are slightly slower. Unlike previous generations of storage technology, SCM devices can be addressed atomically at either the byte level or block-level granularity. Operating systems will likely expose them as either very fast block storage devices formatted by traditional file systems and databases (for compatibility) or as direct memory mapped “files” for next-generation applications. Hypervisors will likely expose them as new, specially named and isolated SCM regions for use by applications running inside the guest operating system (OS).
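The memory-mapped-file access model described above already exists for ordinary storage; here is a minimal Python sketch of the pattern, with a plain file standing in for an SCM region (no SCM hardware is assumed):

```python
import mmap

# An ordinary file stands in for an SCM region; on real SCM the same
# byte-granular loads and stores would hit persistent memory directly.
with open("scm_region.bin", "wb") as f:
    f.write(b"\x00" * 4096)            # carve out a 4 KB region

with open("scm_region.bin", "r+b") as f:
    region = mmap.mmap(f.fileno(), 0)  # map the whole file into memory
    region[0:5] = b"hello"             # byte-granular write, no full-block rewrite
    assert region[0:5] == b"hello"     # byte-granular read
    region.flush()                     # force the changes to persistence
    region.close()
```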

I expect that SCM will provide unprecedented storage performance, upend the database/file system structures we’ve grown accustomed to, and further drive the trend towards server-side storage processing, shaking up everything from storage economics to application design.

VSAN becomes an Alternative to HCI

Hyperconverged infrastructure (HCI) is a sales strategy wrapped around a software-defined storage architecture that has garnered much attention in the past few years. HCI offerings comprise integrated hardware and software “building blocks” bundled and sold together as a single entity. The hardware is typically a server with direct attached storage disks and PCI-e flash cards. All the software needed to run virtual workloads is packaged as well, including hypervisor, systems management, configuration tools and virtual networking. Perhaps most relevant to our part of the industry, there is always a software-defined storage (SDS) stack bundled with HCI offerings that virtualizes the disks and flash hardware into a virtual storage array while providing storage management capabilities. This SDS stack delivers all the storage services to the virtual machines.

In VMware’s EVO:RAIL offering, VMware Virtual SAN (VSAN) is this integrated storage stack. Now battle-tested and rich with enterprise features, VSAN will become more prevalent in the datacenter. Organizations attracted to cost-effective, high-performance, server-side software-defined storage no longer have to embrace the one-size-fits-all hyperconverged infrastructure sales strategy along with it. They will increasingly choose more customizable VSAN-based solutions over prepackaged HCI offerings, particularly for sophisticated enterprise data center use cases.

Flash Continues to Complement Traditional Spinning Drives, Not Replace Them

While the all-flash array market continues to grow in size, and flash decreases in price, the reality of flash production is that the industry does not have the manufacturing capacity necessary for flash to supplant hard disk drives. A recent Register article quoted Samsung and Gartner data suggesting that by 2020 the NAND flash industry could produce 253 exabytes (EB), three times the current manufacturing capacity, at a cost of approximately $23 billion.
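The arithmetic implied by those figures is worth a quick check (Python; numbers as quoted from the Register article):

```python
# Quick check of the Register/Gartner NAND capacity figures quoted above.
projected_2020_output_eb = 253   # exabytes per year projected by 2020
growth_factor = 3                # "three times the current manufacturing capacity"

current_capacity_eb = projected_2020_output_eb / growth_factor
print(f"Implied current NAND output: ~{current_capacity_eb:.0f} EB/year")  # ~84 EB/year
```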


Click to read the rest of this post!


Are you interested in learning how Infinio could potentially fit into your IT strategy? Reach out!


The Storage (R)Evolution or The Storage Superstorm?

The storage market is changing, and it isn’t changing slowly. While traditional storage vendors still lead in market share by revenue and units sold, IDC concludes that direct sales to hyperscale (cloud-scale, rack-scale) service providers are coming to dominate storage sales. Hyperscale is the ability of an architecture to scale appropriately as demand on the system increases; hyperscale datacenters are the type run by Facebook, Amazon, and Google.

Quote to remember:

“…cloud-based storage, integrated systems, software-defined storage, and flash-optimized storage systems [are selling] at the expense of traditional external arrays.”

In my opinion, this is like the leading edge of a thunderstorm supercell or a “Sandy”-style superstorm: the changes behind this trend will be tornadoes of upheaval in the datacenter technology business. As cloud services implementations accelerate and software-defined storage services proliferate, the impact will be felt not only in the storage market but also in the server and networking markets. These changes will be reflected in how solutions providers, consulting firms, and VARs/DVARs help the commercial market solve their technology and business challenges.

EMC is still number one by a very large margin, although down 4% year over year. HP is up nearly 9%; IBM and NetApp are way down. EMC overall (with NAS) holds 32.4% revenue share; NetApp is number two with 12.3%. Even with this apparent domination of the storage vendor market, it is obvious to EMC, their investors, and storage analysts everywhere (including yours truly) that the writing on the wall says they must adapt or become irrelevant. The list of great technology firms that didn’t adapt is long, even in New England alone. Digital Equipment Corporation is just one example.

Is EMC next? Not if the leadership team has anything to say about it. The recent announcements by VMware (EMC majority owned) at VMworld 2015 show not only the renewed emphasis on hybrid cloud services but also the intensive focus on software defined storage initiatives enabling the storage stack to be centrally managed within the vSphere Hypervisor. VMware vSphere APIs for IO Filtering are focused on enabling third party data services, such as replication, as part of vSphere Storage Policy-Based Management, the framework for software-defined storage services in vSphere.

EMC is clearly doubling down on the move to hybrid clouds with its Federation EMC Hybrid Cloud, as well as all the VMware vCloud Air initiatives. GreenPages is exploring and advising its customers on ways to develop a hybrid cloud strategy, and this includes engaging the EMC FEHC team as well as the VMware vCloud Air solution. EMC isn’t the only traditional disk array vendor to explore a cloud strategy, but it seems to be much further along than the others.

Software-defined storage is the technology to keep an eye on. DataCore and FalconStor dominated this space by default before it was even called SDS; there were no other SDS solutions out there. EMC came back in a big way with ViPR, arguably the most advanced “true” software-defined storage solution in the marketplace now. Other software-only vendors surging in this space, where software manages advanced data services such as provisioning, deduplication, tiering, replication, and snapshots across different arrays, include Nexenta and Hedvig. Vendor SDS is a valid share of the market and is enabled by storage virtualization solutions from IBM, NetApp, and others; once “virtualized,” the vendor software enables cross-platform data services. Other software-enabled platforms for advanced storage solutions include Coho Data and Pivot3. Hyperconverged solutions such as VSAN, SimpliVity, or Nutanix offer more options for new datacenter designs that don’t include a traditional storage array. “Tier 2” storage platforms such as Nexsan can benefit from this surge because, while the hardware platforms are solid and well built, those companies haven’t invested as much or as long in the add-on software services that NetApp (for example) has. With advanced SDS solutions in place, this tier of storage can step up as a more “commodity”-priced foundation for advanced storage services.

In addition to the hybrid cloud diversification strategy, EMC and other traditional storage manufacturers are keeping a wary eye on non-traditional vendors such as Nimble Storage, which is offering innovative and easy-to-use alternatives to the core EMC market. There are also myriad startups developing new storage services, such as Coho, Rubrik, Nexenta, and CleverSafe. The all-flash array market is exploding with advanced solutions made possible by the growing maturity of flash technology and the proliferation of new software designed to leverage the uniqueness of flash storage. Pure Storage grabbed early market share, followed by XtremIO (EMC), but SolidFire, Nexenta, Coho, and Kaminario have developed competitive solutions that range from service-provider-oriented products to software-defined storage services leveraging commodity flash storage.


What does this coming superstorm of change mean to you, your company, and your data center strategy? It means that when you are developing a strategic plan for your storage refreshes or datacenter refreshes, you have more options than ever to reduce total cost of ownership, add advanced data services such as disaster recovery or integrated backups, and replace parts (or the whole) of your datacenter storage, server and networking stacks. Contact us today to continue this discussion and see where it leads you. 


By Randy Weis, Principal Architect


EMC World 2015: Event Recap

After EMC World 2015, I’m languishing in airports today in post-conference burnout – an ideal time to deliver a report on the news, announcements and my prognostications on what this means to our business.

The big announcements were delivered in General Sessions on Monday (EMC Information Infrastructure & VCE) and on Tuesday (Federation: VMware & Pivotal). The Federation announcements are more developer and futures oriented, although important strategically, so I’ll pass on that for now.

EMC and VCE have updated their converged and hyperconverged products pretty dramatically. Yes, VSPEX Blue is hyperconverged, however unfortunate the name is in linking an EVO:RAIL appliance to a reference-architecture brand.

The products can be aligned as:

  1. Block
  2. Rack
  3. Appliances


The VCE Vblock product line adheres to its core value proposition closely.

  1. Time from order to completely deployed on the data center floor in 45 days. (GreenPages will provide the Deploy & Implementation services. We have three D&I engineers on staff now.)
  2. Cross-component unified upgrades through a Release Candidate Matrix – every single bit of hardware is tested in major and minor upgrades to ensure compatibility: storage, switch, blade, add-ons (RecoverPoint, Avamar, VPLEX).
  3. Unified support – one call to VCE, not to all the vendors in the build

However, VCE is adding options and variety to make the product less monolithic.

  1. VXblock – this is the XtremIO version, intended for large VDI or mission-critical transactional deployments (trading, insurance, national healthcare claims processing). The Beast is a Vblock of eight 40 TB X-Brick nodes: 320 TB before dedupe and compression, or nearly 2 PB with realistic data reduction (see the sketch after this list). Yes, that is two petabytes of all-flash array. Remote replication is now fully supported with RecoverPoint.
  2. VXRack – this is a Vblock without an array, but it isn’t VSAN either. It is… ScaleIO, a software storage solution that pools server-local storage into a shared pool. The minimum configuration is 100 compute nodes, which can be dense performance (a four-node form factor in a 2U chassis) or capacity oriented. The nodes can run bare metal or any hypervisor. This can scale to 328 petabytes. Yes, petabytes. This is web-scale, but they call it “Rack Scale” computing (first generation). More on that later…
  3. Vscale – networking! This is leaf-and-spine networking in a rack to tie a VXRack or Vblock deployment together, at scale. “One Ring to Rule Them All”. This is big, literally. Imagine ordering a petabyte installation of VXblock, VXRack, and Vscale, and rolling it onto the floor in less than two months.
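As promised in item 1, here is the raw-to-effective capacity arithmetic behind the Beast (Python; the 6:1 data-reduction ratio is my assumption, chosen to match the “nearly 2 PB” figure, and real ratios vary by workload):

```python
# Capacity arithmetic for the VXblock "Beast" configuration above.
xbrick_count = 8
tb_per_xbrick = 40
raw_tb = xbrick_count * tb_per_xbrick            # 320 TB before data reduction

assumed_reduction_ratio = 6                      # dedupe + compression; workload-dependent
effective_pb = raw_tb * assumed_reduction_ratio / 1000
print(f"Raw: {raw_tb} TB; effective: ~{effective_pb:.1f} PB at {assumed_reduction_ratio}:1")
# -> Raw: 320 TB; effective: ~1.9 PB at 6:1
```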

So, that is Block and Rack. What about Appliance?

Enter VSPEX Blue, the EMC implementation of EVO:RAIL. This has definite value in…

  • Pricing
  • Unified management & support
  • The “app store” with
    • integrated backup (VDPA)
    • replication (vRPA)
    • Cloud Array integration (TwinStrata lives!), a virtual iSCSI controller that will present cloud storage to the system as a backup target or a capacity tier.

This post from Mike Colson provides a good explanation.

Future apps will include virus scanning, links to Public IaaS and others.

I set one up in the lab in 15 minutes, as advertised, although I had to wait for the configuration wizard to churn away after I initialized it and input all the networking. Professional Services will be required, as EMC is requiring PS to implement. Our team is prepared to deploy this. We can discuss how this compares to other hyperconverged appliances. Contact us for more information.

There are other announcements, some in sheer scale and some in desirable new features.

Data Domain Beast: DD9500, 58.7 TB/hr and 1.7 PB of capacity. This is rated at 1.5x the performance and 4x the scalability of the nearest competitor.
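At that ingest rate, backup-window math becomes simple; a quick sketch (Python; the dataset size is a hypothetical example):

```python
# Backup-window arithmetic at the DD9500's quoted ingest rate.
ingest_tb_per_hour = 58.7
dataset_tb = 500                        # hypothetical dataset to protect

backup_window_hours = dataset_tb / ingest_tb_per_hour
print(f"{dataset_tb} TB at {ingest_tb_per_hour} TB/hr: ~{backup_window_hours:.1f} hours")
# -> 500 TB at 58.7 TB/hr: ~8.5 hours
```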

VPLEX News: The VPLEX Witness can now be deployed in the public Cloud (naturally EMC recommends the EMC Hybrid Cloud or vCloud Air). The Witness has to be outside the fault domains of any protected site, so where better than the Cloud? It is a very lightweight VM.

CloudArray (TwinStrata’s Cloud Array controller) is integrated with VPLEX. You can have a distributed volume spanning on-premises and cloud storage. I’m still trying to grasp the significance of this. The local cache for the CloudArray controller can be very fast, so even latency-sensitive applications aren’t necessarily ruled out. The things you could do…

VPLEX is now available in a Virtual Edition (VPLEX/VE). This will obviously come with some caveats and restrictions, but this also is a fantastic new option for smaller organizations looking for the high availability that VPLEX provides, as well as data mobility and federation of workloads across metro distances.

VVOL: Chuck Hollis (@chuckhollis) led an entertaining and informative ‘Birds of a Feather’ session for VVOLs. Takeaway – this is NOT commonly deployed yet. Only a handful of people have even set it up, and mostly for test. This was in a room with at least 150 people, so high interest, but low deployment. Everyone sees the potential and is looking forward to real world policy based deployments on industry standard storage. This is an emerging technology that will be watched closely.

VNX/VNXe: I didn’t see or hear many striking features or upgrades in this product line, but an all-flash VNXe was trumpeted. I’ll be looking at its performance and design specifications more closely to see how it might fit targeted use cases or general-purpose storage for SMB and commercial customers. There is talk about virtualizing the VNX array, as well as Isilon, so pretty soon nearly every controller or device in the EMC portfolio will be available as a virtual appliance. This leads me to…

ViPR Controller and ViPR SRM: Software Defined Storage

ViPR Controller is definitely a real product with real usefulness. This is the automation and provisioning tool for a wide variety of infrastructure elements, allowing for creation of virtual arrays with policy based provisioning, leveraging every data service imaginable: dedupe, replication, snapshots, file services, block services and so on.

ViPR SRM is the capacity reporting and monitoring tool that provides the management of capacity that is needed in an SDS environment. This is a much improved product with a very nice GUI and more intuitive approach to counters and metrics.

I’d recommend a Storage Transformation Workshop for people interested in exploring how SDS can change the way (and cost) of how you manage your information infrastructure.

More on EVO:RAIL/VSPEX Blue

I met with Mike McDonough, the mastermind behind EVO:RAIL. He is indeed a mastermind. The story of the rise of EVO:RAIL as a separate business unit is interesting enough (300 business cases submitted, 3 approved, and he won’t say what the other mystery products are), but the implementation, strategy, and vision are what matter to us. The big factor here was boiling down the support cases to come up with the 370 most common reasons for support calls, all around configuration, management, and hardware. The first version of EVO:RAIL addressed 240 of those issues. Think of this as a safety rail around a vSphere appliance that prevents these common and easily avoidable issues without restricting flexibility too much. The next version will most likely incorporate NSX; security and inspection are the emphases for the next iteration.

Partners and distributors were chosen carefully. GreenPages is one of only 9 national partners chosen for this, based on our long history as a strategic partner and our thought leadership! The tightly controlled hardware compatibility list is a strength, as future regression tests for software and other upgrades will keep the permutations down to a minimum. (By the way, the EMC server platform is Intel for VxRack, VSPEX Blue and, I think, for all of their compute modules across all their products.) The competitive implication is that appliance vendors buying white-box hardware on commodity contracts, with flexibility in drives, memory, and CPU, will face an exponentially more difficult task maintaining the growing permutations of hardware versions over time.

Final Blue Sky note:

Rack Scale is an Intel initiative that promises an interesting future of increased hardware awareness for hypervisors, but it is a very future-leaning project. Read Scott Lowe’s thoughts on this.


As always, contact us for more details and in-depth conversations about how we can help you build the data center of the future, today.


By Randy Weis, Practice Manager, Information Infrastructure

Disaster Recovery as a Service: Does it make sense for you?

Does disaster recovery as a service make sense for your organization? It is often more cost-effective and less of a headache than traditional disaster recovery options (a toy cost comparison follows the list below). As the importance of information infrastructure and applications grows, disaster recovery becomes more and more critical to a company’s success. In this video, I break down the benefits of Disaster Recovery as a Service and discuss how you go about finding a solution that fits your needs. Benefits include:

  • You can get up and running in almost no time, decreasing implementation time from six months to a year down to a month or even a few weeks.
  • Shift from CapEx to OpEx
  • More affordable
  • No hardware refreshes
  • No software support
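As for affordability, here is the toy comparison promised above (Python; every number is a made-up illustration, so substitute real quotes for your environment):

```python
# Toy 3-year cost comparison: DIY warm DR site vs. DRaaS subscription.
# All figures are hypothetical illustrations, not vendor pricing.
diy_capex = 250_000          # hardware + software for a warm DR site
diy_annual_opex = 40_000     # colo space, power, support contracts
draas_monthly_per_vm = 100   # hypothetical per-VM subscription
vm_count = 50
years = 3

diy_total = diy_capex + diy_annual_opex * years
draas_total = draas_monthly_per_vm * vm_count * 12 * years

print(f"DIY DR site over {years} years: ${diy_total:,}")    # $370,000
print(f"DRaaS over {years} years: ${draas_total:,}")        # $180,000
```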

If you’re interested in learning more about Disaster Recovery as a Service and how it could impact your organization, reach out!



http://www.youtube.com/watch?v=8kYOIGxhBRc


By Randy Weis, Practice Manager, Information Infrastructure

Flash Storage: Is it right for you?

In this video, I discuss flash storage. Remember, flash storage isn’t just an enterprise play. It’s important to understand how it can be used and when you should purchase it. Who are the major players? What’s the difference between all-flash and hybrid or adaptive flash? What about single-level cell versus multi-level cell? What’s the pricing like?

What you should be doing is designing a solution that takes advantage of the kind of flash that is right for your applications and fits your needs and purposes. A combination of flash drives and spinning drives, put together correctly with the right amount of intelligent software, can address nearly everybody’s most critical application requirements without breaking the bank.
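To illustrate at a toy level what that intelligent software does, here is a minimal Python sketch of hybrid tiering: an LRU policy keeps hot blocks on “flash” while cold blocks live on “disk” (the policy and capacities are simplified assumptions, not any vendor’s implementation):

```python
from collections import OrderedDict

class HybridTier:
    """Toy hybrid flash/spinning-disk tier: an LRU cache keeps hot
    blocks on 'flash'; cold blocks are served from 'disk'."""

    def __init__(self, flash_blocks=4):
        self.flash = OrderedDict()      # hot tier: block_id -> data
        self.flash_capacity = flash_blocks
        self.disk = {}                  # backing store for everything

    def write(self, block_id, data):
        self.disk[block_id] = data
        self._promote(block_id, data)   # recently written data is hot

    def read(self, block_id):
        if block_id in self.flash:      # flash hit: fast path
            self.flash.move_to_end(block_id)
            return self.flash[block_id], "flash"
        data = self.disk[block_id]      # miss: slow path, then promote
        self._promote(block_id, data)
        return data, "disk"

    def _promote(self, block_id, data):
        self.flash[block_id] = data
        self.flash.move_to_end(block_id)
        if len(self.flash) > self.flash_capacity:
            self.flash.popitem(last=False)  # evict the coldest block

tier = HybridTier(flash_blocks=2)
for i in range(4):
    tier.write(i, f"block-{i}")
print(tier.read(3))  # ('block-3', 'flash') -- still hot
print(tier.read(0))  # ('block-0', 'disk')  -- was evicted, now re-promoted
```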


http://www.youtube.com/watch?v=6Nn1O3C3Vqo


If you’re interested in talking more about flash storage, reach out!


By Randy Weis, Practice Manager, Information Infrastructure