Category Archive: Storage

EMC World 2015: Event Recap

After EMC World 2015, I’m languishing in airports today in post-conference burnout – an ideal time to deliver a report on the news and announcements, and my prognostications about what this means for our business.

The big announcements were delivered in General Sessions on Monday (EMC Information Infrastructure & VCE) and on Tuesday (Federation: VMware & Pivotal). The Federation announcements are more developer- and futures-oriented, although strategically important, so I’ll pass on those for now.

EMC and VCE have updated their converged and hyperconverged products pretty dramatically. Yes, VSPEX Blue is hyperconverged, however unfortunate it is that the name ties an EVO:RAIL appliance to a reference-architecture brand.

The products can be aligned as:

  1. Block
  2. Rack
  3. Appliances

The VCE Vblock product line adheres closely to its core value proposition.

  1. Time from order to fully deployed on the data center floor: 45 days. (GreenPages will provide the deployment and implementation services; we have three D&I engineers on staff now.)
  2. Cross-component unified upgrades through a Release Candidate Matrix – every single piece of hardware is tested in major and minor upgrades to ensure compatibility: storage, switch, blade and add-ons (RecoverPoint, Avamar, VPLEX).
  3. Unified support – one call to VCE, not to all the vendors in the build.

However, VCE is adding options and variety to make the product less monolithic.

  1. VxBlock – this is the XtremIO version, intended for large VDI or mission-critical transactional deployments (trading, insurance, national healthcare claims processing). The Beast is a Vblock of eight 40 TB X-Brick nodes: 320 TB before dedupe and compression, or nearly 2 PB with realistic data reduction. Yes, that is two petabytes of all-flash array. Remote replication is now fully supported with RecoverPoint.
  2. VxRack – this is a Vblock without an array, but it isn’t VSAN either. It is ScaleIO, a software storage solution that pools server storage into a shared pool. The minimum configuration is 100 compute nodes, which can be dense performance (four nodes in a 2U chassis) or capacity. The nodes can be bare metal or run any hypervisor. This can scale to 328 petabytes. Yes, petabytes. This is web-scale, but they call it “Rack Scale” computing (first generation). More on that later…
  3. Vscale – networking! This is leaf-and-spine networking in a rack that ties VxRack and Vblock deployments together, at scale. “One Ring to Rule Them All.” This is big, literally. Imagine ordering a petabyte installation of VxBlock, VxRack and Vscale, and rolling it onto the floor in less than two months.

So, that is Block and Rack. What about Appliance?

Enter VSPEX Blue, the EMC implementation of EVO:RAIL. This has definite value in…

  • Pricing
  • Unified management & support
  • The “app store” with
    • integrated backup (VDPA)
    • replication (vRPA)
    • Cloud Array integration (TwinStrata lives!), a virtual iSCSI controller that will present cloud storage to the system as a backup target or a capacity tier.

This post from Mike Colson provides a good explanation.

Future apps will include virus scanning, links to Public IaaS and others.

I set one up in the lab in 15 minutes, as advertised, although I had to wait for the configuration wizard to churn away after I initialized it and entered all the networking details. EMC requires Professional Services for implementation, and our team is prepared to deploy it. We can discuss how this compares to other hyperconverged appliances; contact us for more information.

There were other announcements as well, some notable for sheer scale and some for desirable new features.

Data Domain Beast: DD9500, 58.7 TB/hr. and 1.7 PB of capacity. This is rated at 1.5x the performance and 4x the scalability of the nearest competitor.

VPLEX News: The VPLEX Witness can now be deployed in the public Cloud (naturally EMC recommends the EMC Hybrid Cloud or vCloud Air). The Witness has to be outside the fault domains of any protected site, so where better than the Cloud? It is a very lightweight VM.
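
To see why the Witness belongs outside the fault domains, here’s a toy tiebreak sketch; it illustrates the general witness pattern under my own simplifying assumptions, not VPLEX’s actual algorithm. The point: if the Witness sat inside one site’s fault domain, losing that site would also silence the Witness and strand the surviving site.

```python
# Toy sketch of witness-based tiebreaking between two sites.
# Illustrative only -- not EMC's actual VPLEX Witness logic.

def survivor(site_a_up: bool, site_b_up: bool, witness_sees: set) -> str:
    """Decide which site keeps serving I/O after a failure or partition.
    The witness votes for whichever site it can still reach."""
    if site_a_up and site_b_up:
        return "both"                      # healthy cluster, no tiebreak needed
    if site_a_up and "a" in witness_sees:
        return "a"                         # witness breaks the tie for site A
    if site_b_up and "b" in witness_sees:
        return "b"                         # witness breaks the tie for site B
    return "none"                          # suspend I/O rather than split-brain

# Site A's entire data center fails. Because the Witness is hosted outside
# that fault domain (e.g. in a public cloud), it still sees site B, so B
# keeps serving the distributed volume:
print(survivor(site_a_up=False, site_b_up=True, witness_sees={"b"}))  # "b"
```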

CloudArray (TwinStrata’s Cloud Array controller) is integrated with VPLEX: you can have a distributed volume spanning on-premises and cloud storage. I’m still trying to grasp the significance of this. The local cache for the CloudArray controller can be very fast, so this isn’t limited to latency-tolerant applications. The things you could do…

VPLEX is now available in a Virtual Edition (VPLEX/VE). This will obviously come with some caveats and restrictions, but this also is a fantastic new option for smaller organizations looking for the high availability that VPLEX provides, as well as data mobility and federation of workloads across metro distances.

VVOLs: Chuck Hollis (@chuckhollis) led an entertaining and informative ‘Birds of a Feather’ session on VVOLs. The takeaway – this is NOT commonly deployed yet. Only a handful of people have even set it up, and mostly for testing. This was in a room with at least 150 people: high interest, but low deployment. Everyone sees the potential and is looking forward to real-world, policy-based deployments on industry-standard storage. This is an emerging technology that will be watched closely.

VNX/VNXe: I didn’t see or hear many striking features or upgrades in this product line, but an all-flash VNXe was trumpeted. I’ll be looking more closely at its performance and design specifications to see how it might fit targeted use cases or general-purpose storage for SMB and commercial customers. There is talk of virtualizing the VNX array, as well as Isilon, so pretty soon nearly every controller or device in the EMC portfolio will be available as a virtual appliance. This leads me to…

ViPR Controller and ViPR SRM: Software Defined Storage

ViPR Controller is definitely a real product with real usefulness. This is the automation and provisioning tool for a wide variety of infrastructure elements, allowing for creation of virtual arrays with policy based provisioning, leveraging every data service imaginable: dedupe, replication, snapshots, file services, block services and so on.

ViPR SRM is the capacity reporting and monitoring tool that provides the management of capacity that is needed in an SDS environment. This is a much improved product with a very nice GUI and more intuitive approach to counters and metrics.

I’d recommend a Storage Transformation Workshop for anyone interested in exploring how SDS can change how, and at what cost, you manage your information infrastructure.

More on EVO:RAIL/VSPEX Blue

I met with Mike McDonough, the mastermind behind EVO:RAIL, and he is indeed a mastermind. The story of the rise of EVO:RAIL as a separate business unit is interesting enough (300 business cases submitted, 3 approved, and he won’t say what the other mystery products are), but the implementation, strategy and vision are what matter to us. The big factor here was boiling down the support cases to the 370 most common reasons for support calls, all around configuration, management and hardware; the first version of EVO:RAIL addressed 240 of those issues. Think of this as a safety rail around a vSphere appliance that prevents these common and easily avoidable issues without restricting flexibility too much. The next version will most likely incorporate NSX, with security and inspection as the emphases for that iteration.

Partners and distributors were chosen carefully. GreenPages is one of only 9 national partners chosen for this, based on our long history as a strategic partner and our thought leadership! The tightly controlled hardware compatibility list is a strength, as future regression tests for software and other upgrades will keep the permutations to a minimum. (By the way, the EMC server platform is Intel for VxRack and VSPEX Blue, and I think for all of their compute modules across all their products.) The competitive implication is that appliance vendors buying white-box hardware on commodity contracts, with flexibility in drives, memory and CPU, will have an exponentially more difficult task maintaining the increasing permutations of hardware versions over time.

Final Blue Sky note:

Rack Scale is an Intel initiative that promises an interesting future of increased hypervisor awareness of the underlying hardware, but it is a very forward-looking project. Read Scott Lowe’s thoughts on this.

As always, contact us for more details and in-depth conversations about how we can help you build the data center of the future, today.

By Randy Weis, Practice Manager, Information Infrastructure

Flash Storage: Is it right for you?

In this video, I discuss flash storage. Remember, flash storage isn’t just an enterprise play. It’s important to understand how it can be used and when you should purchase it. Who are the major players? What’s the difference between all-flash and hybrid or adaptive flash? What about single-level cell versus multi-level cell? What’s the pricing like?

What you should be doing is designing a solution that can take advantage of the flash that is right for your applications and that fits your needs and purposes. A combination of flash drives and spinning drives, put together correctly with the right amount of intelligent software, can address nearly everybody’s most critical application requirements without breaking the bank.

http://www.youtube.com/watch?v=6Nn1O3C3Vqo

If you’re interested in talking more about flash storage, reach out!

By Randy Weis, Practice Manager, Information Infrastructure

Emerging Technologies Across the Storage Landscape

There has been an influx of emerging technologies across the storage landscape. Many vendors are using the exact same hardware but are figuring out ways to do a lot of smarter things with the software. In this post, I’ll cover a handful of vendors who are doing a great job innovating at the software layer to improve storage technology and performance.

Nimble

Nimble was founded by some of the same people who built Data Domain, the deduplication company whose success led to EMC buying it in June 2009. Data Domain is known for its massively popular backup targets; it was one of the first to compress and deduplicate data as it was being stored, greatly reducing the amount of data that needed to be kept. Nimble applies the same thinking to primary storage: it takes commodity solid state drives and slow 7,200 RPM spinning disks and turns them into an extremely fast, well-performing hybrid SAN, while delivering excellent compression ratios and the best support team in the business. Very simply, they’re doing smarter things with the same technology everyone else is using. It’s highly scalable and well designed. For example, you can change the controllers on the array during business hours with no interruption, as opposed to waiting until off hours as companies have traditionally been forced to do.
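
A rough sketch of what “doing smarter things with the same technology” looks like on the read path: keep hot blocks on flash and promote blocks on a miss, so most reads never touch the 7,200 RPM spindles. This is a generic hybrid-cache illustration, not Nimble’s actual CASL architecture, and the names here are mine.

```python
# Generic hybrid read path: a flash cache in front of slow spinning disk.
# Illustrative toy only -- not Nimble's CASL design.
from collections import OrderedDict

class HybridReadPath:
    def __init__(self, flash_blocks: int, disk: dict):
        self.flash = OrderedDict()          # block_id -> data, in LRU order
        self.capacity = flash_blocks
        self.disk = disk                    # stand-in for the 7,200 RPM tier

    def read(self, block_id):
        if block_id in self.flash:          # flash hit: ~0.1 ms, not ~10 ms
            self.flash.move_to_end(block_id)
            return self.flash[block_id]
        data = self.disk[block_id]          # miss: pay the disk penalty once
        self.flash[block_id] = data         # promote so the next read is fast
        if len(self.flash) > self.capacity:
            self.flash.popitem(last=False)  # evict the coldest block
        return data

disk = {i: f"block-{i}" for i in range(1000)}
san = HybridReadPath(flash_blocks=100, disk=disk)
san.read(42)
san.read(42)  # second read is served from flash
```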

DataGravity

What’s interesting about DataGravity is that they have taken an entirely different approach to traditional storage. They make arrays that perform on par with just about everyone else’s, yet their secret sauce is taking unstructured, uncategorized data and categorizing it at the time it’s written. Why is this important? A lot of companies have to keep track of Social Security numbers, credit card numbers, etc. Traditionally, you have to buy expensive software to do this; DataGravity does it at the time the data is written, with no additional software investment. That sounds too good to be true, right? Every modern SAN has two storage controllers, usually active/passive or both active. DataGravity dedicates one controller to traditional storage duties while the other categorizes the data and handles data management functions. This eliminates the need for expensive compliance and data protection management software.
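
To make “categorizing data at the time it’s written” concrete, here’s a minimal sketch of the idea: pattern-match sensitive fields inline on the write path and tag the object, so compliance reporting doesn’t need a separate after-the-fact scanning product. The regexes and tag names are my own illustrative assumptions, not DataGravity’s implementation.

```python
# Minimal sketch of inline classification on the write path.
# The patterns and tags are illustrative, not DataGravity's code.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def classify_on_write(path: str, payload: str) -> dict:
    """Tag a file with any sensitive-data categories found as it is written."""
    tags = [name for name, rx in PATTERNS.items() if rx.search(payload)]
    return {"path": path, "tags": tags}

print(classify_on_write("/hr/new_hire.txt", "SSN: 123-45-6789"))
# {'path': '/hr/new_hire.txt', 'tags': ['ssn']}
```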

Who should take advantage?

Any company that has to deal with regulatory compliance (Healthcare, Finance, etc.).

Simplivity

Simplivity offers hyperconverged infrastructure similar to Nutanix, EVO:RAIL, and Dell Vertex. The piece that makes them unique is their dedication to reducing IO. They compress and dedupe all data at ingestion, once and forever. This means that if I write a data block and the data is already on the storage system, there is zero IO; I don’t have to rewrite it. Furthermore, I can migrate virtual machines from one data center to another: it’s easy to migrate a 5 GB virtual machine and write less than 100 MB across the WAN. Also, when I clone a machine, there is no IO. Normally, that kind of IO load is something companies can’t take on during work hours because it consumes far too many resources; you can’t do it without impacting the business. When you have Simplivity, there is no need for a third-party backup vendor: data is spread across nodes and duplicate blocks are never rewritten, so it’s easy to have petabytes of backups living on terabytes of storage.
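
A minimal sketch of the “zero IO for duplicate data” idea: a content-addressed store indexes every block by its hash, so writing a block the system already holds costs only a reference-count bump, not a data write. This is a toy illustration of inline dedupe in general; SimpliVity performs this at ingestion, with hardware assistance, in ways this sketch doesn’t attempt to model.

```python
# Toy content-addressed dedup store: duplicate blocks cost no data IO.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}     # sha256 digest -> block data
        self.refcount = {}   # sha256 digest -> number of logical references

    def write(self, block: bytes) -> str:
        digest = hashlib.sha256(block).hexdigest()
        if digest in self.blocks:
            self.refcount[digest] += 1     # already stored: zero data IO
        else:
            self.blocks[digest] = block    # unique block: written once, ever
            self.refcount[digest] = 1
        return digest

store = DedupStore()
store.write(b"A" * 4096)
store.write(b"A" * 4096)   # a duplicate write is just a reference bump
print(len(store.blocks))   # 1 physical block despite 2 logical writes
```

The same index is why clones are nearly free and why a migrated VM moves so little data over the WAN: only blocks the destination has never seen actually travel.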

Who should take advantage?

We have a client in Massachusetts that is looking to move to a colocation facility in Florida. For this use case, Simplivity is a quick and easy way to migrate that data geographically without huge impacts on bandwidth, WAN costs, etc.

Pure Storage

If you’re looking for ridiculously fast storage, Pure Storage could be the solution for you. They use the same flash technology as everyone else, but they read and write to it differently, so their arrays are much more efficient and better matched to how flash actually behaves. Typically, vendors have written to flash drives the same way they treated spinning disk.

Who should take advantage?

If your organization has applications that require tremendously fast storage, this could be a good fit for you – for example, extremely demanding Oracle, SAP or SQL applications.

VMware

VMware brings a lot of great benefits to the table with EVO:RAIL, which is essentially VMware Virtual SAN plus prebuilt hardware that can be deployed very quickly and easily. It’s a scalable, software-defined data center building block that provides compute, networking, storage and management. Furthermore, it’s highly resilient.

Who should take advantage?

This is a good fit for organizations with branch offices that need smaller VMware environments at multiple locations. It’s a quick, inexpensive way to manage them all centrally from vCenter.

Be sure to keep your eyes out for HP, which is making its own innovations in flash storage. More on that soon.

Have you used any of these solutions? How have your experiences been? If you would like to talk more about this, send us an email at socialmedia@greenpages.com

Fun Facts about Microsoft Azure

Looking for some helpful facts about Microsoft Azure? For those out there who may be confused about the Microsoft Azure solutions offered to date, here is the first in a series of posts about the cool new features of Microsoft’s premium cloud offering, Azure.

Azure Backup, ok… wait, what? I need to do backup in the cloud? No one told me that!

Yes, Virginia, you need a backup solution in the cloud. To keep this at a high level, I’ve outlined below what the Azure backup offering really is. There are several protections built into the Azure platform that help customers protect their data, as well as options to recover from a failure.

In a normal, on-premises scenario, host-based hardware and networking failures are protected at the hypervisor level. In Azure you don’t see this, because control of the hypervisor has been removed. Azure, however, is designed to be highly available, meeting and exceeding the posted SLAs associated with the service.

Hardware failures of storage are also protected against within Azure. At the low end you have locally redundant storage, where Azure maintains 3 copies of your data within a region. The more common and industry-preferred method is geo-redundant storage, which keeps 3 copies in your region and 3 additional copies in another datacenter, somewhere geographically dispersed based on a complex algorithm. These protections help ensure the survivability of your workloads.

Important to note: the copies in the second datacenter are crash-consistent, so they should not be considered a backup of the data but rather a recovery mechanism for a disaster.
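
A toy model of the two redundancy tiers may help. The replica counts below come from the text above; the simplification being illustrated is that local copies land synchronously before the write is acknowledged, while geo copies trail behind asynchronously, which is exactly why the remote side is crash-consistent rather than a true backup.

```python
# Toy model of LRS vs GRS. Replica counts (3 local, +3 remote) match the
# text above; everything else is a simplifying illustration, not Azure code.

class StorageAccount:
    def __init__(self, geo_redundant: bool):
        self.local = []            # the 3 in-region copies
        self.remote = []           # the 3 copies in the paired region
        self.geo_redundant = geo_redundant
        self.pending = []          # async geo-replication queue

    def write(self, data: bytes) -> None:
        self.local = [data] * 3    # ack only after all local copies land
        if self.geo_redundant:
            self.pending.append(data)   # geo copies lag behind the ack

    def drain_geo_queue(self) -> None:
        # A disaster that strikes mid-drain leaves the paired region with
        # an older, crash-consistent view -- a recovery point, not a backup.
        while self.pending:
            self.remote = [self.pending.pop(0)] * 3
```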

Did I hear you just ask about Recovery Services in Azure? Why yes, we have two to talk about today.

  • Azure Backup
  • Azure Site Recovery

Azure Site Recovery – this offering both orchestrates site recovery and provides a destination for virtual machines. Microsoft currently supports Hyper-V to Azure, Hyper-V to Hyper-V, or VMware to VMware recovery scenarios with this method.

Azure Backup is a destination for your backups. Microsoft offers traditional agents for Windows Backup and for the preferred platform, Microsoft System Center 2012 Data Protection Manager. Azure keeps up to 120 copies of the data in the cloud, which can be restored as needed. At this time the Azure Windows backup agent only protects files; it will not do full-system or bare-metal backups of Azure VMs.

As of this blog post, the recommended way to get a traditional full-system backup is a two-step process: use Windows Backup to capture a System State backup, then enable Azure Backup to copy that backup into your Azure Backup Vault.
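
Here’s a sketch of that two-step flow, assuming the Windows Server Backup feature and the Azure Backup (MARS) agent are both installed, and that a backup policy is already registered against your vault. Treat the target drive, flags and cmdlets as starting points to verify in your environment, not gospel.

```python
# Sketch of the recommended two-step full-system backup flow.
# Assumes Windows Server Backup and the Azure Backup agent are installed.
import subprocess

def capture_system_state(target_drive: str = "E:") -> None:
    """Step 1: capture a System State backup with Windows Server Backup.
    The target drive is an assumption -- use a dedicated backup volume."""
    subprocess.run(
        ["wbadmin", "start", "systemstatebackup",
         f"-backupTarget:{target_drive}", "-quiet"],
        check=True,
    )

def push_to_azure_vault() -> None:
    """Step 2: run the registered Azure Backup policy so the captured
    backup files land in the Azure Backup Vault."""
    subprocess.run(
        ["powershell", "-Command", "Get-OBPolicy | Start-OBBackup"],
        check=True,
    )

if __name__ == "__main__":
    capture_system_state()
    push_to_azure_vault()
```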

Two other methods exist, but the jury is still out on the validity of these offerings: VM Capture and Blob Snapshot.

  • VM Capture – equivalent to a VM snapshot
  • Blob Snapshot – equivalent to a LUN snapshot

As I said, these are options, but many consider them too immature at this time, and they are not widely adopted. Hopefully this provides some clarity around Azure. As with all things Microsoft Cloud related, Microsoft ships new features almost daily now, so check back again for more updates on what Azure can do for your organization!

By David Barter, Practice Manager, Microsoft Technologies

Storage Has Evolved – It Now Provides the Context & Management of Data

Information infrastructure takes storage, a fundamental part of any data center, and puts context around it, adding value to what has typically been seen as a commodity item.

Bits in and of themselves have little value; add context and assign value to that information, and it becomes an information infrastructure. Organizations should seek to add value to their datacenter environments by leveraging advanced technologies that have become part of our landscape: software-defined storage, solid state storage, and cloud-based storage. Essentially, there is a new way to deliver a datacenter application data infrastructure.

Storage has evolved

http://www.youtube.com/watch?v=yzbwG0g-Y7c

Interested in learning more about the latest in storage technologies? Fill out this form and we’ll get back to you!

By Randy Weis, Practice Manager – Information Infrastructure

EMC Acquired TwinStrata in July. What’s This Mean For You Moving Forward?

Video with Randy Weis, Practice Manager, Data Center

http://www.youtube.com/watch?v=McUyYF9NIec

Back in July, storage giant EMC acquired TwinStrata. Information infrastructure and storage expert Randy Weis breaks down TwinStrata’s capabilities and explains what this means for your organization. Interested in speaking with Randy about the latest trends in storage? Email us at socialmedia@greenpages.com

Top 25 Findings from Gigaom’s 4th Annual “Future of Cloud Computing” Survey

By Ben Stephenson, Journey to the Cloud

Gigaom Research and North Bridge Venture Partners recently released their 4th annual “Future of Cloud Computing” study. There is some great data from the 1,358 respondents surveyed. In case you don’t have time to click through the entire 124-slide SlideShare deck, I’ve pulled out what I think are the 25 most interesting statistics from the study. Here’s the complete deck if you would like to review it in more detail.

  • 49% are using the cloud for revenue-generating or product development activities (Slide 9)
  • 80% of IT budget is used to maintain current systems (Slide 20) <–> GreenPages actually held a webinar recently explaining how organizations can avoid spending the majority of their IT budgets on “keeping the lights on”
  • For IT across all functions tested in the survey, 60-85% of respondents will move some or significant processing to the cloud in the next 12-24 months (Slide 21)
  • Shifting CapEx to OpEx is more important for companies with over 5,000 employees (Slide 27)
  • For respondents moving workloads to the cloud today, 27% said they are motivated to do so because they believe using a cloud platform service will help them lower their capital expenditures (Slide 28)
  • Top inhibitor: security remains the biggest concern; after declining slightly last year, it rose again as an issue in 2014 and was cited by 49% of respondents (Slide 55)
  • Privacy is of growing importance. As an inhibitor, Privacy grew from 25% in 2011 to 31% (Slide 57)
  • Over 1/3 see regulatory/compliance as an inhibitor to moving to the cloud (Slide 60)
  • Interoperability concerns dropped by 45%, relatively, over the past two years…but 29% are still concerned about lock in (Slide 62)
  • Nearly ¼ of respondents still think network bandwidth is an inhibitor (Slide 64)
  • Reliability concerns dropped by half since 2011 (Slide 66)
  • Amazon S3 holds trillions of objects and regularly peaks at 1.5 million requests per second (Slide 71)
  • 90% of world’s data was created in past two years…80% of it is unstructured (Slide 73) <–> Here’s a video blog where Journey to the Cloud blogger Randy Weis talks about big data in more detail
  • Approximately 66% of data is in the cloud today (Slide 74)
  • The number above is expected to grow 73% in two years (Slide 75)
  • 50% of enterprise customers will purchase as much storage in 2014 as they have accumulated in their ENTIRE history (slide 77)
  • IaaS use has jumped from 11% in 2011 to 56% in 2014 & SaaS has increased from 13% in 2011 to 72% in 2014 (Slide 81)
  • Applications Development growing 50% (Slide 84) <–> with the growth of app dev, we’re also seeing the growth of shadow IT. Check out this on-demand webinar “The Rise of Unauthorized AWS Use. How to Address Risks Created by Shadow IT.”
  • PaaS approaching the tipping point! PaaS has increased from 7% in 2011 to 41% in 2014. (Slide 85) <–> See what one of our bloggers, John Dixon, predicted in regards to the rise of PaaS at the beginning of the year.
  • Database as a Service expected to nearly double, from 23% to 44% among users (Slide 86)
  • By 2017, nearly 2/3rds of all workloads will be processed in cloud data centers. Growth of workloads in cloud data centers is expected to be five times the growth in traditional workloads between 2012 and 2017. (Slide 87)
  • SDN usage will grow among business users almost threefold…from 11% to 30%  (Slide 89) <–> Check out this video blog where Nick Phelps talks about the business drivers behind SDN.
  • 42% use hybrid cloud now (Slide 93)
  • That 42% will grow to 55% in 2 years (Slide 94) <–> This whitepaper gives a nice breakdown of the future of hybrid cloud management.
  • “This second cloud front will be an order of magnitude bigger than the first cloud front.” (Slide 117). <–> hmmm, where have I heard this one before? Oh, that’s right, GreenPages’ CEO Ron Dupler has been saying it for about two years now.

Definitely some pretty interesting takeaways from this study. What are your thoughts? Did certain findings surprise you?

Dropbox Forced to Kill Shared Links Due to Security Snafu

Oops! Dropbox announced it is killing existing shared links where the shared documents include ordinary hyperlinks to websites. The problem is that the plain old Referer header tells a linked website the URL the inbound click came from. That’s a standard way sites know where their non-direct traffic is coming from. In this scenario, however, the referrer is the URL of the shared Dropbox document, and anyone holding that URL can open the document.
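
To make the mechanics concrete, here’s a minimal sketch of what the linked website sees. Flask is used purely for illustration, and the “/s/” path is the shape Dropbox shared links took at the time; nothing here is Dropbox’s code.

```python
# Minimal illustration of the leak from the linked website's side: when a
# user clicks a link inside a shared document, the browser sends the
# document's (secret) share URL in the Referer header.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def landing():
    referer = request.headers.get("Referer", "")
    # If the click came from a shared-document preview, the share URL
    # shows up here, e.g. https://www.dropbox.com/s/<token>/report.pdf
    if "dropbox.com/s/" in referer:
        print(f"Leaked share link: {referer}")
    return "ok"

if __name__ == "__main__":
    app.run()
```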

The symptom Dropbox users will experience? Complaints from recipients that the link they were given doesn’t work (if in doubt, check the link yourself).

From the Dropbox post on the issue:

While we’re unaware of any abuse of this vulnerability, for your safety we’ve taken the following steps to make sure this vulnerability can’t be exploited:

  • For previously shared links to such documents, we’ve disabled access entirely until further notice. We’re working to restore links that aren’t susceptible to this vulnerability over the next few days.
  • In the meantime, as a workaround, you can re-create any shared links that have been turned off.
  • For all shared links created going forward, we’ve patched the vulnerability.

Here’s how to rebuild affected links.