Category archive: Storage

Backblaze launches cheap cloud storage service

Backup service provider Backblaze has made a cloud storage service, B2, available for beta testing. When launched, it could provide businesses with a cheap alternative to Amazon S3 and the storage services bundled with Microsoft Azure and Google Cloud.

According to sources, Backblaze B2 will offer a free tier of up to 10GB of storage, with 1GB per day of outbound traffic and unlimited inbound bandwidth. Developers will be able to access it through an API and a command-line interface, but the service will also offer a web interface for less technical users.
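
To give a sense of the developer workflow, here is a minimal sketch in Python against B2’s native HTTP API (authorize the account, request an upload URL, upload a file). It is illustrative only: the credentials, bucket ID and file name are placeholders, and the endpoint version shown reflects the current public API, which may differ from what the beta exposes.

  import hashlib
  import requests

  ACCOUNT_ID = "your-account-id"          # placeholder credentials
  APPLICATION_KEY = "your-application-key"
  BUCKET_ID = "your-bucket-id"            # placeholder bucket

  # Step 1: authorize the account to obtain an API URL and auth token.
  auth = requests.get(
      "https://api.backblazeb2.com/b2api/v2/b2_authorize_account",
      auth=(ACCOUNT_ID, APPLICATION_KEY),
  ).json()

  # Step 2: request an upload URL for the target bucket.
  upload = requests.post(
      auth["apiUrl"] + "/b2api/v2/b2_get_upload_url",
      headers={"Authorization": auth["authorizationToken"]},
      json={"bucketId": BUCKET_ID},
  ).json()

  # Step 3: upload the file, including the SHA1 checksum B2 requires.
  data = open("backup.tar.gz", "rb").read()
  response = requests.post(
      upload["uploadUrl"],
      headers={
          "Authorization": upload["authorizationToken"],
          "X-Bz-File-Name": "backup.tar.gz",
          "Content-Type": "b2/x-auto",
          "X-Bz-Content-Sha1": hashlib.sha1(data).hexdigest(),
      },
      data=data,
  )
  print(response.json())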

Launched in 2007, Backblaze stores 150 petabytes of backup data and over 10 billion files on its servers, having built its own storage pods and software as a matter of policy. Now it intends to use that infrastructure-building knowledge to offer a competitive cloud storage service, according to CEO Gleb Budman.

“We spent 90 per cent of our time and energy on building out the cloud storage and only 10 per cent on the front end,” Budman told TechCrunch. The stability of its backup technology persuaded many users to ask for the service to be extended into general data storage.

In response to customer demand, Backblaze’s engineers spent a year working on the software to make this possible. Now the company is preparing to launch a business-to-business service that, it says, can compete with the cloud storage market’s incumbents on price and availability.

Backblaze’s service, when launched, will be half the price of Amazon Glacier and ‘about a fourth’ of the price of Amazon’s S3 service, according to sources. “Storage is still expensive,” Budman said.

Though the primary use for Backblaze B2 will be to store images, videos and other documents, Budman said he expects some users to use it to store large research data sets.

Amazon Web Services to offer new hierarchical storage options after customer feedback

Amazon Web Services (AWS) is adding a new storage class for information that is accessed infrequently but still needs to be retrieved quickly when required.

The announcement was made by AWS chief evangelist Jeff Barr on the company blog. Customer feedback prompted AWS to analyse usage patterns, Barr said. Its analysts discovered that many customers store rarely read backup and log files, which compete for resources with shared documents or raw data that need immediate analysis. Most users work with their files heavily shortly after uploading them, after which activity drops off significantly with age. Information that is important but not immediately urgent needs to be addressed through a new storage model, said Barr.

In response, AWS has unveiled a new storage class, S3 Standard – Infrequent Access (Standard – IA), creating a hierarchy of pricing options based on frequency of access. Customers now have the choice of three S3 storage classes: Standard, Standard – IA and Glacier. All offer the same 99.999999999 per cent durability. Standard – IA has a service level agreement (SLA) of 99 per cent availability and is priced accordingly: prices start at $0.0125 per gigabyte per month, with a 30-day minimum storage duration for billing and a $0.01 per gigabyte charge for retrieval. The usual data transfer and request charges apply.

For billing purposes, objects that are smaller than 128 kilobytes are charged for 128 kilobytes of storage. AWS says this new pricing model will make its storage class more economical for long-term storage, backups and disaster recovery.
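
As a rough, back-of-the-envelope illustration using only the figures quoted above (real bills also include request and transfer charges), the minimum-size rule and retrieval fee can be expressed in a few lines of Python; the object sizes used are purely illustrative.

  # Illustrative Standard - IA cost estimate using the article's figures only.
  STORAGE_PRICE_PER_GB_MONTH = 0.0125   # $ per gigabyte per month
  RETRIEVAL_PRICE_PER_GB = 0.01         # $ per gigabyte retrieved
  MIN_BILLABLE_BYTES = 128 * 1024       # objects under 128 KB billed as 128 KB

  def monthly_ia_cost(object_size_bytes, gb_retrieved=0.0):
      billable_bytes = max(object_size_bytes, MIN_BILLABLE_BYTES)
      billable_gb = billable_bytes / (1024 ** 3)
      return billable_gb * STORAGE_PRICE_PER_GB_MONTH + gb_retrieved * RETRIEVAL_PRICE_PER_GB

  # A 16 KB log file is billed as 128 KB of storage; a 10 GB archive retrieved
  # once in the month costs roughly 10 * 0.0125 + 10 * 0.01 = $0.225.
  print(monthly_ia_cost(16 * 1024))
  print(monthly_ia_cost(10 * 1024 ** 3, gb_retrieved=10))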

AWS has also introduced a lifecycle policy option, in a system that emulates the hierarchical storage model of centralised computing. Users can now create policies that automate the movement of data between Amazon S3 storage classes over time. Typically, according to Barr, data uploaded to the Standard storage class will be moved by customers to the Standard – IA class when it is 30 days old, and on to the Glacier class after another 60 days, where storage will cost $0.01 per gigabyte per month.
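
For readers who want to see what such a policy looks like in practice, here is a minimal sketch using the boto3 SDK for Python; the bucket name is a placeholder, and the 30- and 90-day thresholds mirror Barr's example (30 days to Standard – IA, then on to Glacier after a further 60 days).

  import boto3

  s3 = boto3.client("s3")

  # Lifecycle rule: objects transition to Standard - IA at 30 days of age,
  # then to Glacier at 90 days (30 days plus a further 60).
  s3.put_bucket_lifecycle_configuration(
      Bucket="example-bucket",            # placeholder bucket name
      LifecycleConfiguration={
          "Rules": [
              {
                  "ID": "tier-down-ageing-data",
                  "Filter": {"Prefix": ""},   # apply to every object
                  "Status": "Enabled",
                  "Transitions": [
                      {"Days": 30, "StorageClass": "STANDARD_IA"},
                      {"Days": 90, "StorageClass": "GLACIER"},
                  ],
              }
          ]
      },
  )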

Ctera now integrated with HP’s hybrid cloud manager

Ctera Networks says it has integrated its storage and data management systems with HP Cloud Service Automation (HP CSA) as it seeks ways to simplify the management of enterprise file services across hybrid clouds.

The HP CSA ‘architecture’ now officially recognises and includes Ctera’s Enterprise File Services platform. The logic of the collaboration is that as the HP service helps companies build private and hybrid clouds, they will need tighter data management in order to deliver new services to enterprise users, according to the vendors.

Ctera, which specialises in remote site storage, data protection, file synchronisation, file sharing and mobile collaboration services, has moved to make it easier to get those services on HP’s systems. According to Ctera, the new services can now be run on any organization’s HP CSA managed private or virtual private cloud infrastructure.

Enterprises that embrace the cloud need to modernise their file services and IT delivery models, according to Jeff Denworth, Ctera’s marketing SVP. The new addition of Ctera to HP CSA means they can easily manage file services from a single control point and quickly roll out the apps using a self-service portal, Denworth said.

“HP CSA helps IT managers become organisational heroes by accelerating the deployment of private and hybrid clouds and IT services,” said Denworth. The partnership with HP will result in a ‘broad suite’ of file services, increased agility and cheaper hybrid cloud services, according to Denworth.

The partnership should make things simpler for cloud managers, who are being forced to take on several roles, according to Atul Garg, HP’s general manager of cloud automation. “Today’s IT teams are becoming cloud services brokers, managing various products and services across hybrid environments and fundamentally changing how they deliver value to the broader organisation,” said Garg. Now file services can be deployed easily to tens of thousands of users, said Garg.

The Storage (R)Evolution or The Storage Superstorm?

The storage market is changing, and it isn’t changing slowly. While traditional storage vendors still dominate market share in both revenue and units sold, IDC concludes that direct sales to hyperscale (cloud-scale, rack-scale) service providers are where the growth in storage sales now lies. Hyperscale is the ability of an architecture to scale appropriately as demand on the system increases; hyperscale datacenters are the kind run by Facebook, Amazon, and Google.

Quote to remember:

“…cloud-based storage, integrated systems, software-defined storage, and flash-optimized storage systems <are selling> at the expense of traditional external arrays.”

In my opinion, this is like the leading edge of a thunderstorm supercell or a “Sandy” Superstorm – the changes that are behind this trend will be tornadoes of upheaval in the datacenter technology business. As cloud services implementations accelerate and software defined storage services proliferate, the impact will be felt not only in the storage market, but also in the server and networking markets. These changes will be reflected in how solutions providers, consulting firms, and VAR/DVARs will help the commercial market solve their technology and business challenges.

EMC is still number one by a very large margin, although down 4% year over year. HP is up nearly 9%; IBM and NetApp are way down. EMC overall (with NAS) has 32.4% revenue share; NetApp is number two with 12.3%. Even with its apparent domination of the storage vendor market, it is obvious to EMC, its investors, and storage analysts everywhere (including yours truly) that the handwriting on the wall says it must adapt or become irrelevant. The list of great technology firms that didn’t adapt is long, even in New England alone. Digital Equipment Corporation is just one example.

Is EMC next? Not if the leadership team has anything to say about it. The recent announcements by VMware (EMC majority owned) at VMworld 2015 show not only the renewed emphasis on hybrid cloud services but also the intensive focus on software defined storage initiatives enabling the storage stack to be centrally managed within the vSphere Hypervisor. VMware vSphere APIs for IO Filtering are focused on enabling third party data services, such as replication, as part of vSphere Storage Policy-Based Management, the framework for software-defined storage services in vSphere.

EMC is clearly doubling down on the move to Hybrid Clouds with their Federation EMC Hybrid Cloud, as well as all the VMware vCloud Air initiatives. GreenPages is exploring and advising their customers on ways to develop a hybrid cloud strategy, and this includes engaging the EMC FEHC team as well as the VMware vCloud Air solution. EMC isn’t the only traditional disk array vendor to explore a cloud strategy, but it seems to be much further along than the others.

Software Defined Storage is the technology to keep an eye on. DataCore and FalconStor dominated this space by default before it was even called SDS – there were no other SDS solutions out there. EMC came back in a big way with ViPR, arguably the most advanced “true” software defined storage solution in the marketplace now. Other software-only vendors surging in this space, where software manages advanced data services across different arrays – provisioning, deduplication, tiering, replication and snapshots – include Nexenta, Hedvig and others. Vendor-supplied SDS also holds a valid share of the market, enabled by storage virtualization solutions from IBM, NetApp and others; once “virtualized,” the vendor software enables cross-platform data services. Other software-enabled platforms for advanced storage include Coho Data and Pivot3. Hyperconverged solutions such as VSAN, SimpliVity or Nutanix offer more options for new datacenter designs that don’t include a traditional storage array. “Tier 2” storage platforms such as Nexsan can benefit from this surge because, while the hardware platforms are solid and well built, those companies haven’t invested as much or as long in the add-on software services that NetApp (for example) has. With advanced SDS solutions in place, this tier of storage can step up as a more “commodity-priced” option for advanced storage services.

In addition to the Hybrid Cloud diversification strategy, EMC and other traditional storage manufacturers are keeping a wary eye on non-traditional vendors such as Nimble Storage, which is offering innovative and easy-to-use alternatives to the core EMC market. There is also a myriad of startups developing new storage services, such as Coho, Rubrik, Nexenta and CleverSafe. The All Flash Array market is exploding with advanced solutions made possible by the growing maturity of flash technology and the proliferation of new software designed to leverage the uniqueness of flash storage. Pure Storage grabbed early market share, followed by XtremIO (EMC), but SolidFire, Nexenta, Coho and Kaminario have developed competitive solutions that range from service-provider-oriented products to software defined storage services leveraging commodity flash storage.

 

What does this coming superstorm of change mean to you, your company, and your data center strategy? It means that when you are developing a strategic plan for your storage refreshes or datacenter refreshes, you have more options than ever to reduce total cost of ownership, add advanced data services such as disaster recovery or integrated backups, and replace parts (or the whole) of your datacenter storage, server and networking stacks. Contact us today to continue this discussion and see where it leads you. 


By Randy Weis, Principal Architect

The cloud is commoditising storage for enterprises – report

Little-known unbranded manufacturers are making inroads into the storage market as the cloud commoditises the industry, according to a new report from market researcher IDC. Meanwhile, the market for traditional external storage systems is shrinking, it warns.

The data centres of big cloud companies like Google and Facebook are much more likely to buy from smaller, lesser known storage vendors now, as they are no longer compelled to commit themselves to specialised storage platforms, said IDC in its latest Enterprise Storage report.

Revenue for original design manufacturers (ODMs) that sell directly to hyperscale data centre operators grew 25.8 per cent in the second quarter of 2015, in a period when overall industry revenue rose just 2.1 per cent. Those direct purchases accounted for US$1 billion in the quarter, while the overall industry, for now, remains far larger at $8.8 billion. The growth trends nevertheless indicate that a shift in buying power will take place, according to IDC analyst Eric Sheppard. Increasingly, the platform of choice for storage is a standard x86 server dedicated to storing data, said Sheppard.

ODMs such as Quanta Computer and Wistron are becoming increasingly influential, said Sheppard. Like many low-profile vendors based in Taiwan, they provide hardware that is sold under the badges of better-known brands; sales of server-based storage rose 10 per cent in the second quarter to reach $2.1 billion.

Traditional external systems like SANs (storage area networks) still account for the bulk of the enterprise storage business, with $5.7 billion in revenue for the quarter. But sales in this segment are declining, down 3.9 per cent in the period.

With the cloud transferring the burden of processing to data centres, the biggest purchasers of storage are now Internet giants and cloud service providers. Typically their hyper-scale data centres are software controlled and no longer need the more expensive proprietary systems that individual companies were persuaded to buy, according to the report. Generic, unbranded hardware is sufficient, provided that it is software defined, the report said.

“The software, not the hardware, defines the storage architecture,” said Sheppard. The cloud has made it possible to define the management of storage in more detail, so that the resources can be matched more evenly to each virtual machine. This has cut the long term operating costs. These changes will intensify in the next five years, the analyst predicted.

EMC remained the biggest vendor by revenue with just over 19 per cent of the market, followed by Hewlett-Packard with just over 16 per cent.

Software-defined storage vendor Scality nabs $45m to prep for IPO

Scality has secured $45m in its latest funding round and plans to go public in 2017

Software-defined storage expert Scality has secured $45m in a funding round led by Menlo Ventures, which the company said will be used to fuel its North American and international expansion.

Scality’s offering uses object storage to abstract underlying hardware to create a single pool of storage that can be manipulated with a wide range of protocols and technologies (SMB, Linux FS, OpenStack Swift, etc.).

The company, which offers storage software and has large reseller agreements in place with big box vendors like HP and Dell, has secured over $80m since its founding in 2009. It claims over 50 per cent of the server market is now reselling its SDS software.

“There’s no doubt in my mind that today, Scality is the biggest disruptor of the traditional storage industry, and I am extremely excited to witness their progression,” said Douglas C. Carlisle, managing director at Menlo Ventures.

“Their innovative storage model is meeting demand for scale like no other product on the market, and is poised to keep up with the steep incline in data volumes. With Jerome’s forward-thinking mindset, we expect to see Scality continue to be a trailblazer and to take its RING technology to the next level.”

The company has spent the better part of the past two years scaling up its operations in Asia and Europe, but it said the new funding will go towards bolstering its North American presence, with a view towards an IPO in 2017.

“Over the course of the last year-and-a-half, we’ve seen an unprecedented amount of funding given to software storage startups. At the same time, we’ve seen the traditional storage vendors lose market share, change leadership and shift their business model to mimic the software-defined strategy. This latest funding round comes at a time when Scality and the software-defined storage industry are poised to attract billions of dollars from customers that are rethinking their storage strategies,” said Jerome Lecat, chief executive at Scality.

“Our employees and partners believe in us, and the fact that this last funding round was done at 2x valuation speaks volumes about the overall confidence in the future of Scality. This new capital investment will allow us to massively boost our go-to-market, attract strategic new hires, continue to expand globally, and be primed for a successful IPO by 2017,” Lecat said.

Seagate buys Dot Hill to bolster cloud cred

Seagate hasn’t made too many cloud-focused acquisitions

Seagate announced plans to acquire storage software and hardware vendor Dot Hill Systems for $694m, which the company said would help bolster its cloud portfolio of products.

Dot Hill specialises in SAN technology and offers a range of storage array-based systems integrated with its storage and data management software, which are tailored primarily to the needs of cloud and virtualised workloads.

“Dot Hill’s innovative storage systems and IP portfolio are a strategic addition to our storage technology portfolio, enabling us to accelerate the growth of Seagate’s OEM-focused cloud storage system and solutions business,” said Phil Brace, president of Cloud Systems and Electronics Solutions at Seagate.

“We are focused on providing the highest quality storage systems for our OEM customers and Dot Hill’s storage solutions will enable us to advance our strategic efforts.  We look forward to welcoming Dot Hill’s strong team, which has proven experience in developing and delivering best-in-class storage solutions that are trusted by the world’s leading IT manufacturers and their channel partners,” Brace added.

The move will see Seagate pay $9.75 per Dot Hill share, totaling about $694m. Seagate said following the acquisition it will integrate Dot Hill’s portfolio into its cloud systems and electronics business.

“Seagate has a strong reputation in enterprise storage and is focused on building out its best-in-class storage system capabilities, making them the right home for the talented Dot Hill team,” said Dana Kammersgard, chief executive officer of Dot Hill.  “Dot Hill’s customers will benefit from leveraging Seagate’s leading technology and infrastructure to accelerate the delivery of advanced solutions.”

This is Seagate’s latest cloud-centric acquisition, following its purchase of Exabyte last year.

Storage tech provider Tintri bags $125m to take on EMC, NetApp

Tintri secured $125m in series F funding this week

Storage specialist Tintri has secured $125m in a funding round the company said would go towards accelerating development of its virtualised storage solution.

The latest funding round, led by Silver Lake Kraftwerk with participation from Insight Venture Partners, Lightspeed Ventures, Menlo Ventures and NEA brings the total investment secured by Tintri since its founding in 2008 to $260m.

Tintri specialises in storage hardware optimised to serve up data for individual virtual machines. The company’s storage servers blend both HDD and SSD tech in order to optimise hot and cold storage and access, making storage more performant by making it smarter.

“The storage industry is going through a dramatic transformation. Virtualization and cloud are forces for change—and conventional DAS, NAS and SAN storage is struggling to keep pace. That’s why our message of VM-aware storage (VAS) is winning in the marketplace,” said Ken Klein, chairman and chief executive for Tintri.

“This funding fuels our mission—we’ll be growing our global footprint and raising visibility of the business benefits of storage built specifically for virtualized enterprises.”

The company’s virtualisation-aware storage wares have enjoyed solid traction among some of the world’s largest companies and service providers, including Chevron, GE, the EIB, NTT, SK Telecom and Rogers Communications.

EMC World 2015: Event Recap

After EMC World 2015, I’m languishing in airports today in post-conference burnout – an ideal time to deliver a report on the news, announcements and my prognostications on what this means to our business.

The big announcements were delivered in General Sessions on Monday (EMC Information Infrastructure & VCE) and on Tuesday (Federation: VMware & Pivotal). The Federation announcements are more developer and futures oriented, although important strategically, so I’ll pass on that for now.

EMC and VCE have updated their converged and Hyperconverged products pretty dramatically. Yes, VSPEX Blue is Hyperconverged, however unfortunate the name is in linking an EVO:RAIL solution to a reference architecture solution.

The products can be aligned as:

  1. Block
  2. Rack
  3. Appliance

The VCE Vblock product line adheres to its core value proposition closely.

  1. Time from order to completely deployed on the data center floor in 45 days. (GreenPages will provide the Deploy & Implementation services. We have three D&I engineers on staff now.)
  2. Cross component Unified upgrade through a Release Candidate Matrix – every single bit of hardware is tested in major and minor upgrades to ensure compatibility: storage, switch, blade, add-ons (RecoverPoint, Avamar, VPLEX).
  3. Unified support – one call to VCE, not to all the vendors in the build

However, VCE is adding options and variety to make the product less monolithic.

  1. VXblock – this is the XtremIO version, intended for large VDI or mission critical transactional deployments (trading, insurance, national healthcare claims processing). The Beast is a Vblock of eight 40 TB Xbrick nodes, 320 TB before dedupe and compression, or nearly 2 PB with realistic data reduction. Yes, that is Two Petabytes of All Flash Array. Remote replication is now totally supported with RecoverPoint.
  2. VXRack – this is a Vblock without an array, but it isn’t VSAN either. It is… ScaleIO, a software storage solution that pools server storage into a shared pool. The minimum configuration is 100 compute nodes, which can be dense performance (4-node form factor in a 2U chassis) or capacity. The nodes can be bare metal or run any hypervisor. This can scale to 328 Petabytes. Yes, Petabytes. This is web-scale, but they call it “Rack Scale” computing (first generation). More on that later…
  3. Vscale – Networking! This is Leaf and Spine networking in a rack to tie a VXrack or Vblock deployment together, at scale. “One Ring to Rule Them All”. This is big, literally. Imagine ordering a petabyte installation of VXblock, VXrack and Vscale, and rolling it onto the floor in less than two months.

So, that is Block and Rack. What about Appliance?

Enter VSPEX Blue, the EMC implementation of EVO:RAIL. This has definite value in…

  • Pricing
  • Unified management & support
  • The “app store” with
    • integrated backup (VDPA)
    • replication (vRPA)
    • Cloud Array integration (TwinStrata lives!), a virtual iSCSI controller that will present cloud storage to the system as a backup target or a capacity tier.

This post from Mike Colson provides a good explanation.

Future apps will include virus scanning, links to Public IaaS and others.

I set one up in the lab in 15 minutes, as advertised, although I had to wait for the configuration wizard to churn away after I initialized it and input all the networking. Professional Services will be required, as EMC is requiring PS to implement. Our team is and will be prepared to deploy this. We can discuss how this compares to other Hyperconverged appliances. Contact us for more information.

There are other announcements, some in sheer scale and some in desirable new features.

Data Domain Beast: DD9500, 58.7 TB/hr. and 1.7 PB of capacity. This is rated at 1.5x the performance and 4x the scalability of the nearest competitor.

VPLEX News: The VPLEX Witness can now be deployed in the public Cloud (naturally EMC recommends the EMC Hybrid Cloud or vCloud Air). The Witness has to be outside the fault domains of any protected site, so where better than the Cloud? It is a very lightweight VM.

CloudArray (TwinStrata’s Cloud Array Controller) is integrated with VPLEX. You can have a distributed volume spanning on premise and cloud storage. I’m still trying to grasp the significance of this. The local cache for the CloudArray controller can be very fast, so this isn’t limited to low latency applications. The things you could do…

VPLEX is now available in a Virtual Edition (VPLEX/VE). This will obviously come with some caveats and restrictions, but this also is a fantastic new option for smaller organizations looking for the high availability that VPLEX provides, as well as data mobility and federation of workloads across metro distances.

VVOL: Chuck Hollis (@chuckhollis) led an entertaining and informative ‘Birds of a Feather’ session for VVOLs. Takeaway – this is NOT commonly deployed yet. Only a handful of people have even set it up, and mostly for test. This was in a room with at least 150 people, so high interest, but low deployment. Everyone sees the potential and is looking forward to real world policy based deployments on industry standard storage. This is an emerging technology that will be watched closely.

VNX/VNXe: I didn’t see or hear many striking features or upgrades in this product line, but an all flash VNXe was trumpeted. I’ll be looking at the performance and design specifications of this more closely to see how it might fit targeted use cases or general purpose storage for SMB and commercial level customers. There is talk around the virtualization of the VNX array, as well as Isilon, so pretty soon nearly every controller or device in the EMC portfolio will be available as a virtual appliance. This leads me to…

ViPR Controller and ViPR SRM: Software Defined Storage

ViPR Controller is definitely a real product with real usefulness. This is the automation and provisioning tool for a wide variety of infrastructure elements, allowing for creation of virtual arrays with policy based provisioning, leveraging every data service imaginable: dedupe, replication, snapshots, file services, block services and so on.

ViPR SRM is the capacity reporting and monitoring tool that provides the management of capacity that is needed in an SDS environment. This is a much improved product with a very nice GUI and more intuitive approach to counters and metrics.

I’d recommend a Storage Transformation Workshop for people interested in exploring how SDS can change the way (and cost) of how you manage your information infrastructure.

More on EVO:RAIL/VSPEX Blue

I met with Mike McDonough, the mastermind behind EVO:RAIL. He is indeed a mastermind. The story of the rise of EVO:RAIL as a separate business unit is interesting enough (300 business cases submitted, 3 approved, and he won’t say what the other mystery products are), but the implementation, strategy and vision are what matter to us. The big factor here was boiling down the support cases to come up with the 370 most common reasons for support calls, all around configuration, management and hardware. The first version of EVO:RAIL addressed 240 of those issues. Think of this as having a safety rail around a vSphere appliance to prevent these common and easily avoidable issues, without restricting the flexibility too much. The next version will most likely incorporate NSX; security and inspection are the emphases for the next iteration. Partners and distributors were chosen carefully. GreenPages is one of only 9 national partners chosen for this, based on our long history as a strategic partner and our thought leadership! The tightly controlled hardware compatibility list is a strength, as future regression tests for software and other upgrades will keep the permutations down to a minimum. (By the way, the EMC server platform is Intel for VxRack, VSPEX Blue and, I think, for all of their compute modules across all their products.) The implication, competitively, is that appliance vendors buying white-box hardware on commodity contracts that allow flexibility in drives, memory and CPU will face an exponentially more difficult task maintaining the growing permutations of hardware versions over time.

Final Blue Sky note:

Rack Scale is an Intel initiative that promises an interesting future for increased awareness of the hardware for hypervisors, but is a very future leaning project. Read Scott Lowe’s thoughts on this.

 

As always, contact us for more details and in-depth conversations about how we can help you build the data center of the future, today.

 

By Randy Weis, Practice Manager, Information Infrastructure