Category Archives: hybrid cloud

VMware opens up at VMworld San Francisco

Virtualisation pioneer VMware has unveiled a raft of new services tailored for hybrid cloud deployments and open systems at its annual VMworld conference in San Francisco.

VMware announced the launch of VMware Integrated OpenStack 2.0, the company’s second release of its distribution of the OpenStack open-source cloud software. The new release, based on OpenStack Kilo, will be available on September 30.

“Customers can now upgrade from version one to version two in a more operationally efficient manner and even roll back if anything goes wrong,” said VMware product line manager Arvind Soni.

The move could be seen as a U-turn by VMware, whose revenue streams come from sales of its vSphere virtualization software. The most recent annual VMware report warned that “open source technologies for virtualization, containerization, and cloud platforms such as Xen, KVM, Docker, Rocket, and OpenStack provide significant pricing competition and let competing vendors [use] OpenStack to compete directly with our SDDC initiative.”

However, with OpenStack distributions available from Canonical, HP, Huawei and Oracle – and investment in OpenStack companies from Intel, IBM and other major players – VMware has announced continued support. In October 2014 parent company EMC bought three OpenStack start-ups – Cloudscaling, Maginatics and Spanning – to provide a variety of cloud services which adhere to the increasingly popular open standard.

Meanwhile, testing and running disaster recovery plans will be quicker, promises VMware, now that its vCloud Air service has a new cloud-based Site Recovery Manager. The service is now offered on a pay-per-use basis, replacing the more expensive annual subscriptions.

During a disaster recovery event or test, fees will be charged for each protected virtual machine and the storage it consumes, said VMware.

Storage could get cheaper too, as VMware has introduced vCloud Air Object Storage on the Google Cloud Platform. The debut product of VMware’s new reseller relationship with Google will be available from September 30, the same day an alternative offering launches: the vCloud Air Object Storage service powered by EMC.

The start of the fourth financial quarter should also see VMware release its new vCloud Air SQL database-as-a-service, as the virtualisation vendor looks to match the breadth of features offered by the cloud industry’s top service providers.

With a new Hybrid Cloud Manager, VMware aims to help clients to migrate workloads, extend the range of their data centres and fine tune the process of juggling resources between private and public clouds. The management takes place through the interface of VMware’s vSphere Web Client, and will support the migration of virtual machines.

The Six Myths of Hybrid IT

Bennett: It is time to debunk some hybrid cloud myths

Many companies face an ongoing dilemma: how to get the most out of legacy IT equipment and applications (many of them mission-critical, such as ERP and accounting/payroll systems) while taking advantage of the latest technological advances to keep the company competitive and nimble.

The combination of cloud and third-party datacentres has caused a shift in the way we approach building and maintaining our IT infrastructure. A best-of-breed approach previously meant a blending of heterogeneous technology solutions into an IT ecosystem. It now focuses on the services and technologies that remain on-premises and those that ultimately will be migrated off-premises.

A hybrid approach to IT infrastructure enables internal IT groups to support legacy systems while retaining the flexibility to optimise service delivery and performance through third-party providers. Reconciling resources leads to improved business agility, more rapid delivery of services, exposure to innovative technologies, and increased network availability and business uptime, without having to make the budget case for CAPEX investment. However, despite its many benefits, a blended on-premises and off-premises operating model is fraught with misconceptions and myths – perpetuating a “what-if?” mentality that often stalls innovation and business initiatives.

Here are the facts behind some of the most widespread hybrid IT myths:

Myth #1: “I can do it better myself.”

If you’re in IT and not aligned with business objectives, you may eventually find yourself out of a job. The hard truth is that you can’t be better at everything. Technology is driving change so rapidly that almost no one can keep up.

So while it’s not always easy to say “I can’t do everything as well as someone else can,” it’s perfectly acceptable to stick to what you’re good at and then evaluate other opportunities to evolve your business – in this case, outsourcing select IT functions where you can realise improved capabilities and value. Let expert IT outsourcing providers do what they do best, managing IT infrastructure for companies 24/7/365, while you concentrate on IT strategy to keep your business competitive and strong.

Myth #2: “I’ll lose control in a hybrid IT environment.”

A functional IT leader with responsibility over infrastructure that management wants to outsource may fear the loss of their team’s jobs. In reality, the day-to-day management of the company’s infrastructure might be better served off-premises, freeing the IT leader to focus on the strategy and direction of the IT functions that differentiate the business, in order to stay ahead of fast-moving market innovation and customer demands.

In the early days of IT, it was one size fits all. Today, an IT leader has more control than ever. For example, you can buy a service that comes with little management overhead and spin resources up using embedded APIs. The days when you bought a managed service and had no control over it, or visibility into it, are gone. With the availability of portals, plug-ins and platforms, internal teams retain control whether they want their environment managed by a third party or want to manage it outright on their own.
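To make that concrete, here is a minimal sketch of the kind of API-driven control described above, using the Python openstacksdk. The cloud name, image, flavor and network below are illustrative assumptions, not details from the article:

```python
# Hypothetical sketch: provisioning a server through a provider's API
# instead of a ticket queue. Requires the 'openstacksdk' package and a
# clouds.yaml entry named 'managed-provider' (both assumptions).
import openstack

conn = openstack.connect(cloud="managed-provider")

# Look up building blocks by name; these names are placeholders.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("tenant-net")

# Create the server and wait until the provider reports it active.
server = conn.compute.create_server(
    name="app-server-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"{server.name} is {server.status}")
```

The same calls can be driven from a provider’s portal or from your own tooling, which is the sense in which control stays with the internal team even when the environment is managed by a third party.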

Myth #3: “Hybrid IT is too hard to manage.”

Do you want to differentiate your IT capabilities as a means to better support the business? If you want to manage it on your own, you need to have the people and processes in place to do so. An alternative is to partner with a service provider offering multiple off-premises options and a more agile operating model than doing all of it yourself. Many providers bundle management interfaces, orchestration, automation and portals with their offerings, giving IT complete transparency into, and granular control over, the outsourced solution. These portals are also API-enabled, so they can be integrated with any internal tools you have already invested in, providing end-to-end visibility into the entire hybrid environment.

Myth #4: “Hybrid IT is less secure than my dedicated environment.”

In reality, today’s IT service providers likely achieve a level of compliance your business never could on its own. Staying constantly diligent and compliant may require a company to employ a team of internal IT security professionals to manage day-to-day security concerns. Instead, it makes sense to let a team of external experts worry about data security and bring a “lessons-learned” approach to your company’s security practice.

There are cases where insourcing makes sense, especially when it comes to the business’ mission-critical applications. Some data should absolutely be kept as secure and as close to your users as possible. However, outsourced infrastructure is increasingly becoming more secure because providers focus exclusively on the technology and how it enables their users. For example, most cloud providers will encrypt your data and hand the key to you only. As a result, secure integration of disparate solutions is quickly becoming the rule, rather than the exception.
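The encryption point can be illustrated with a short sketch. This is not any particular provider’s mechanism, just the general client-side pattern in Python (using the third-party cryptography package) in which data is encrypted before upload so that only you ever hold the key:

```python
# Hypothetical sketch: encrypt locally, store only ciphertext off-premises.
# Requires the 'cryptography' package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # keep this in your own key store; never upload it
cipher = Fernet(key)

plaintext = b"payroll-batch-2015.csv contents"
ciphertext = cipher.encrypt(plaintext)

# 'ciphertext' is what goes to the outsourced storage; the provider sees
# only opaque bytes. Decryption happens back on your side of the boundary.
assert cipher.decrypt(ciphertext) == plaintext
```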

Myth #5: “Hybrid IT is inherently less reliable than the way we do it now.”

Placing computing closer to users and, in parallel, spreading it across multiple locations, will result in a more resilient application than if you had it in a fixed, single location. In fact, the more mission-critical the application becomes, the more you should spread it across multiple providers and locations. For example, if you build an application for the cloud you’re not relying on any one application component being up in order to fulfil its availability. This “shared nothing” approach to infrastructure and application design not only makes your critical applications more available, it also adds a level of scalability that is not available in traditional in-house only approaches.
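A minimal sketch of the client-facing half of that “shared nothing” idea: the application assumes no single location is always up and simply works through a list of candidate endpoints. The URLs are hypothetical placeholders:

```python
# Hypothetical sketch: tolerate any one location being down by trying
# endpoints in turn. Requires the 'requests' package; URLs are placeholders.
import requests

ENDPOINTS = [
    "https://eu.app.example.com/health",
    "https://us.app.example.com/health",
    "https://apac.app.example.com/health",
]

def first_available(endpoints, timeout=2.0):
    """Return the first endpoint that responds; skip unreachable locations."""
    for url in endpoints:
        try:
            if requests.get(url, timeout=timeout).ok:
                return url
        except requests.RequestException:
            continue  # this location is down or unreachable; try the next
    raise RuntimeError("no location currently reachable")

print("serving from", first_available(ENDPOINTS))
```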

Myth #6: “This is too hard to budget for.”

Today’s managed service providers can perform budgeting as well as reporting on your behalf. Again, internal IT can own this, empowering it to recommend whether to insource or outsource a particular aspect of infrastructure based on the needs of the business. However, in terms of registration, costs, and other considerations, partnering with a third-party service can become a huge value-add for the business.

Adopting a hybrid IT model lowers the risk to your IT resources and the business they support. You don’t have to make huge investments all at once. You can start incrementally, picking the options that help you in the short term and, as you gain experience, jumping in with both feet later. Hybrid IT lets you evolve your infrastructure as your business needs change.

If IT and technology has taught us anything, it’s that you can’t afford to let fear prevent your company from doing what it must to remain competitive.

Written by Mike Bennett, vice president global datacentre acquisition and expansion, CenturyLink EMEA

Hybrid cloud issues are cultural first, technical second – Ovum

CIOs are still struggling with their hybrid cloud strategies

This week has seen a number of hybrid cloud deals which would suggest the industry is making significant progress delivering the platforms, services and tools necessary to make hybrid cloud practical. But if anything they also serve as a reminder that IT will forever be multimodal, which creates challenges that begin with people, not technology, explains Roy Illsley, principal analyst of infrastructure solutions at Ovum.

There has been no shortage of hybrid cloud deals this week.

Rackspace and Microsoft announced a deal that would see the hosting and cloud provider expand its Fanatical Support to Microsoft Azure-based hybrid cloud platforms.

Google announced both that it would support Windows technologies on its cloud platform and that it would formally sponsor the OpenStack Foundation – a move aimed at supporting container portability across multiple cloud platforms.

HP announced it would expand its cloud partner programme to include CenturyLink, which runs much of its cloud platform on HP technology, in a move aimed at bolstering HP’s hybrid cloud business and CenturyLink’s customer reach.

But one of the more interesting hybrid cloud stories this week came from the enterprise side of the industry. Copper and gold producer Freeport-McMoRan announced it is embarking on a massive overhaul of its IT systems. In a bid to become more agile, the firm said it would deploy its entire application estate on a combination of private and public cloud platforms – though, somewhat ironically, the company said the entire project would take five years to complete (which, being pragmatic about IT overhauls, could mean far longer).

“The biggest challenge with hybrid cloud isn’t the technology per se – okay, so you need to be able to have one version of the truth, one place where you can manage most of the platforms and applications, one place where, to the best of your abilities, you can orchestrate resources, and so forth,” Illsley explains.

Of course you need all of those things, he says. There will be some systems that won’t fit into that technology model and will likely be left out (mainframes, for example). But there are tools out there to fit current hybrid use cases.

“When most organisations ‘do’ hybrid cloud, they tend to choose where their workloads will sit depending on their performance needs, scaling needs, cost and application architecture – and then the workloads sit there, with very little live migration of VMs or containers. Managing them while they sit there isn’t the major pain point. It’s about the business processes; it’s the organisational and cultural shifts in the IT department that are required in order to manage IT in a multimodal world.”

“What’s happening in hybrid cloud isn’t terribly different from what’s happening with DevOps. You have developers and you have operations, and sandwiching them together in one unit doesn’t change the fact that they look at the world – and the day-to-day issues they need to manage or solve – in their own developer or operations-centric ways. In effect they’re still siloed.”

The way IT is financed can also create headaches for CIOs intent on delivering a hybrid cloud strategy. Typically IT is funded in an ‘everyone pitches into the pot’ sort of way, but one of the things that led to the rise of cloud in the first place was lines of business allocating their own budgets and going out to procure their own services.

“This can cause both a systems challenge – shadow IT and the security, visibility and management issues that come with that – and a cultural challenge, one where LOB heads see little need to fund a central organisation that is deemed too slow or inflexible to respond to customer needs. So as a result, the central pot doesn’t grow.”

While vendors continue to ease hybrid cloud headaches on the technology front with resource and financial (i.e. chargeback) management tools, app stores or catalogues, and standardised platforms that bridge the on-prem and public cloud divide, it’s less likely the cultural challenges associated with hybrid cloud will find any straightforward solutions in the short term.

“It will be like this for the next ten or fifteen years at least. And the way CIOs work with the rest of the business as well as the IT department will define how successful that hybrid strategy will be, and if you don’t do this well then whatever technologies you put in place will be totally redundant,” Illsley says.

Rackspace pairs up with Microsoft to support Azure

Rackspace Hosting has paired up with Microsoft to manage and offer technical support for Microsoft’s public cloud computing platform, Azure. Azure support and managed services are available now, and expansion to overseas customers will begin in 2016.

Rackspace has struggled to compete with larger companies and their cloud platforms, such as Amazon Web Services, and this agreement with Microsoft marks its first major deal to support public cloud services other than its own.

Rackspace Chief Technology Officer, John Engates, has said, “Stay tuned. As part of our managed cloud strategy, a tenet of that is we want to support industry-leading technologies. Our strategy really gives us the opportunity to apply fanatical support to any leading industry platform in the future. So stay tuned in terms of announcements.”


Rackspace hopes to improve profit margins and reduce capital spending by offering managed services and technical support for public clouds, and it is starting with Microsoft’s Azure. Rackspace’s main strength has been providing fanatical service, training and technical support to smaller businesses.

Rackspace technical support will be available directly to clients through Microsoft. Rackspace may also resell Microsoft’s IaaS services to its customers. In the fourth quarter of 2014, IaaS services accounted for 31 per cent of Rackspace’s total revenue.

Engates also added that Rackspace will help customers build apps that run in hybrid, private-public cloud environments. Many companies are becoming interested in this model, with important business apps run on private servers and public IaaS providers accessed on an as-needed basis.


Living in a hybrid world: From public to private cloud and back again

Orlando Bayter, chief exec and founder of Ormuco

The view often propagated by IT vendors is that public cloud is already capable of delivering a seamless extension between on-premise private cloud platforms and public, shared infrastructure. But Orlando Bayter, chief executive and founder of Ormuco, says the industry is only at the outset of delivering a deeply interwoven fabric of private and public cloud services.

Demand for that kind of seamlessness hasn’t been around for very long, admittedly. It’s no great secret that in the early days of cloud, demand for public cloud services was spurred largely by the slow pace at which traditional IT organisations tend to move. As a result, every time a developer wanted to build an application they would simply swipe the credit card and go, billing back to IT at some later point. So the first big use case for hybrid cloud emerged when developers then needed to bring their apps back in-house, where they would live and probably die.

But as the security practices of cloud service providers continue to improve, along with enterprise confidence in cloud more broadly, cloud bursting – the ability to use a mix of public and private cloud resources to fit the utilisation needs of an app – has become more widely talked about. It’s usually cost-prohibitive and far too time-consuming to scale private cloud resources quickly enough to meet the changing demands of today’s increasingly web-based apps, so cloud bursting has become the natural next step in the hybrid cloud world.
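The decision logic behind bursting is simple to sketch. The following Python fragment is illustrative only – not any vendor’s algorithm – and the threshold and placement names are assumptions:

```python
# Hypothetical sketch: keep workloads on private capacity until projected
# utilisation crosses a threshold, then overflow ("burst") to public cloud.
BURST_THRESHOLD = 0.80  # burst once private capacity would exceed 80% used

def place_workload(used_cores: int, total_cores: int, requested: int) -> str:
    """Return 'private' or 'public' for a new request of `requested` cores."""
    projected = (used_cores + requested) / total_cores
    return "private" if projected <= BURST_THRESHOLD else "public"

# A 100-core private cloud at 75% utilisation can't absorb 10 more cores
# without crossing the threshold, so the request bursts to public cloud.
print(place_workload(75, 100, 10))  # -> public
print(place_workload(40, 100, 10))  # -> private
```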

Orlando will be speaking at the Cloud World Forum in London, June 24-25.

There are, however, still preciously few platforms that offer this kind of capability in a fast and dynamic way. Open source projects like OpenStack or more proprietary variants like VMware’s vCloud or Microsoft’s Azure Stack (and all the tooling around these platforms or architectures) are at the end of the day all being developed with a view towards supporting the deployment and management of workloads that can exist in as many places as possible, whether on-premise or in a cloud vendor’s datacentre.

“Let’s say as a developer you want to take an application you’ve developed in a private cloud in Germany and move it onto a public cloud platform in the US. Even for the more monolithic migration jobs you’re still going to have to do all sorts of re-coding, re-mapping and security upgrades to make the move,” Bayter says.

“Then when you actually go live, and have apps running in both the private and public cloud, the harsh reality is most enterprises have multiple management and orchestration tools – usually one for the public cloud and one for the private; it’s redundant, and inefficient.”

Ormuco is one company trying to solve these challenges. It has built a platform based on HP Helion OpenStack and offers both private and public instances, which can both be managed in a single pane of glass (it has built its own layer in between to abstract the resources underneath).

It has multiple datacentres in the US and Europe from which it offers both private and public instances, as well as the ability to burst into its cloud platform using on-premise OpenStack-based clouds. The company is also a member of the HP Helion Network, which Bayter says gives it a growing channel and the ability to offer more granular data protection tools to customers.
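As a rough illustration of what the abstraction layer described above amounts to – this is not Ormuco’s actual code, just a hedged sketch using the Python openstacksdk with made-up cloud names – a thin facade can fan one operation out across private and public endpoints:

```python
# Hypothetical sketch of a "single pane of glass": one facade over separate
# private and public OpenStack connections. Cloud names are placeholders
# that would be defined in clouds.yaml.
import openstack

class SinglePane:
    """Run the same operation against every cloud and merge the results."""

    def __init__(self):
        self.clouds = {
            "private": openstack.connect(cloud="onprem-cloud"),
            "public": openstack.connect(cloud="public-cloud"),
        }

    def list_servers(self):
        # One call, every cloud: the caller never addresses clouds directly.
        return {name: [s.name for s in conn.compute.servers()]
                for name, conn in self.clouds.items()}

pane = SinglePane()
for cloud, servers in pane.list_servers().items():
    print(cloud, servers)
```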

“The OpenStack community has been trying to bake some of these capabilities into the core open source code, but the reality is it only achieved a sliver of these capabilities by May this year,” he said, alluding to the recent OpenStack Summit in Vancouver where new capabilities around federated cloud identity were announced and demoed.

“The other issue is simplicity. A year and a half ago, everyone was talking about OpenStack but nobody was buying it. Now service providers are buying but enterprises are not. Specifically with enterprises, the belief is that OpenStack will be easier and easier as time goes on, but I don’t think that’s necessarily going to be the case,” he explains.

“The core features may become a bit easier, but the whole solution may not; there are so many things going into it that it’s likely to get clunkier, more complex and more difficult to manage. It could become prohibitively complex.”

That’s not to say federated identity or cloud federation is a lost cause – on the contrary, Bayter says it’s the next horizon for cloud. The company is currently working on a set of technologies that would enable any organisation whose infrastructure sits significantly underutilised for long periods to rent it out in a federated model.

Ormuco would verify and certify the infrastructure, and allocate a performance rating that would change dynamically along with the demands being placed on that infrastructure – like an AirBnB for OpenStack cloud users. Customers renting cloud resources in this market could also choose where their data is hosted.

“Imagine a university or a science lab that scales and uses its infrastructure at very particular times; the rest of the time that infrastructure is fairly underused. What if they could make money from that?”

There are still many unanswered questions – like whether the returns for renting organisations would justify the extra costs (e.g. energy) associated with running that infrastructure, or where the burden of support lies (enterprises need solid SLAs for production workloads) and how that influences what kinds of workloads end up on rented kit – but the idea is interesting and definitely consistent with the line of thinking being promoted by the OpenStack community, among others in open source cloud.

“Imagine the power, the size of that cloud,” says Bayter. “That’s the cloud that will win out.”

This interview was produced in partnership with Ormuco

Is force of habit defining your hybrid cloud destiny?

Experience breeds habit, which isn’t necessarily the best thing strategically

I’ve been playing somewhat of a game over recent months. It’s a fun game for all the family and might be called “Guess my job”. It’s simple to play. All you need to do is ask someone the question “What is a hybrid cloud?” and, based upon their answer, make your choice. Having played this for a while, I’m now pretty good at predicting someone’s viewpoint from their job role, or vice versa.

And the point of all this? Simply that people’s viewpoints are constrained by their experiences and what keeps them busy day-to-day, so they often miss an opportunity to do something different. For people working day-to-day in a traditional IT department, keeping systems up and running, hybrid cloud is all about integrating an existing on-site system with an off-site cloud. This is a nice, easy one to grasp in principle, but the reality is somewhat harder to realise.

The idea of connecting an on-site System of Record to a cloud-based System of Engagement, pulling data from both to generate new insights, is conceptually well understood: for example, combining historical customer transaction information with real-time geospatial, social and mobile data, then applying analytics to uncover new sales potential. That said, the organisations making production use of such arrangements are few and far between. For many, the challenge of granting access to the existing enterprise systems is simply too great. Security concerns, the ability to embrace the speed of change required, and the challenge of extracting the right data in a form immediately usable by the analytical tools may simply be a hurdle too high. Indeed, many clients I’ve worked with have stated that they’re simply not going to do this. They understand the benefits, but the pain they see themselves having to go through to get them makes this unattractive to pursue.

So, if this story aligns with your view of hybrid cloud and you’ve already put it in the “too hard” box then what is your way forward?

For most organizations, no single cloud provider is going to provide all of the services they might want to consume. Implicitly then, if they need to bring data from these disparate cloud services together, there is a hybrid cloud use case: linking cloud to cloud. Even in the on-site to off-site case there are real differences between a static relationship and dynamically bursting in and out of off-site capacity. Many organizations are looking to cloud as a more effective and agile platform for backup and archiving, or for disaster recovery. All of these are hybrid cloud use cases too, but if you’ve already written off ‘hybrid’ then you’re likely missing very real opportunities to do what is right for the business.

Regardless of the hybrid cloud use case, you need to keep in mind three key principles:

  1. Portability – the ability to run and consume services and data from wherever it is most appropriate to do so, be that cloud or non-cloud, on-site or off-site.
  2. Security, visibility and control – to be assured that end-to-end, regardless of where the ‘end’ is, you are running services in such a way that they are appropriately secure, well managed and their characteristics are well understood.
  3. Developer productivity – developers should be focused on solving business problems and not be constrained by needing to worry about how or when supporting infrastructure platforms are being deployed.  They should be able to consume and integrate services from many different sources to solve problems rather than having to create everything they need from scratch.

Business applications need to be portable, such that they can both run on and consume other services from wherever is most appropriate. To do that, your developers need to be unconstrained by the underlying platform(s), so they can develop for any cloud or on-site IT platform. All this needs to be done in a way that allows enterprise controls, visibility and security to be extended to the cloud platforms being used.

If you come from that traditional IT department background, you’ll be familiar with the processes that are in place to ensure that systems are well managed, change is controlled and service levels are maintained. These processes may not be compatible with the ways that clouds open up new opportunities. This leads to the need to look at creating a “two-speed” IT organisation: providing the rigour needed for the Systems of Record whilst enabling rapid change and delivery in the Systems of Engagement space.

Cloud generates innovation and hence diversity. Economics, regulation and open communities drive standardisation, and it is this – open standards in particular – that facilitates integration in all of these hybrid cases.

So, ask yourself: with more than 65 per cent of enterprise IT organizations making commitments on hybrid cloud technologies before 2016, are you ensuring that your definitions – and hence your technology choices – reflect future opportunities rather than past prejudices?


Written by John Easton, IBM distinguished engineer and leading cloud advisor for Europe

Cisco Systems Inc. Announces Plans for Intercloud

A year ago, Cisco Systems Inc. announced its plans to invest one billion dollars in a cloud computing venture to compete with the six-billion-dollar Amazon Web Services. This plan was christened with the name Intercloud.

At the Cisco Live annual conference, the company revealed its plans to take Intercloud a step further. Cisco will not offer this cloud itself from its own data centers; instead, Intercloud will unify smaller cloud service providers onto a large platform of mutually compatible products. Intercloud will prevent the smaller providers from losing customers to Amazon while allowing Cisco to continue selling these providers hardware as they grow and develop. The aim of Intercloud, then, is to enable these smaller providers and Cisco to unite and compete with Amazon.


In addition, Cisco is opening the Intercloud marketplace, an app store that gives customers the tools, software and technology they need to quickly and efficiently use their cloud. Cisco is partnering with tech companies such as Hortonworks and Docker for this marketplace, which will offer 35 apps.

Cisco also announced the development of the Intercloud Fabric, which will allow customers to manage and control their data centers and Intercloud at the same time. The Intercloud Fabric makes it easier for customers to manage what can be a very tough technology. Cloud service providers like Datalink, Peak 10 and Sungard Availability Services have already backed Cisco’s plan to develop the Intercloud Fabric.

Cisco insists that while Amazon Web Services may have a head start in the cloud computing market, cloud computing still has much room to grow, making it anyone’s game.


Ormuco taps HP Helion for mid-market hybrid cloud offering

Ormuco is partnering with HP on hybrid cloud

Ormuco is partnering with HP to launch a Helion OpenStack-based hybrid cloud solution that the company says is designed specifically with workload portability in mind.

Hybrid cloud is still high on the agenda for many CIOs, but challenges abound: security and compliance management, service automation and orchestration, and of course workload portability. The company is relying on HP’s implementation of OpenStack to solve some of those challenges, and said its ConnectedCloud offering will help enterprises move their workloads across OpenStack-based private and public clouds.

“Ormuco is entering the cloud services market since there is a vital need for a hybrid cloud solution with streamlined functionality for enterprise customers,” said Ormuco chief executive Orlando Bayter. “HP Helion OpenStack and Ormuco’s new data centres enable us to create environments that focus on service delivery regardless of the underlying infrastructure.”

Ormuco has datacentres in Dallas, Texas; Sunnyvale, California and Montreal, Quebec, and said it has others planned for New York and Seattle as well as an expansion into Europe with datacentres in Frankfurt and London.

The company is a member of HP’s Helion Partner Network, a global federation of HP-certified OpenStack clouds that private cloud users can burst into, which is, for the time being, primarily how the company delivers scale.

“Ormuco requires extensive geographic reach and the ability to meet customers’ in-country or cross-border cloud requirements,” said Steve Dietch, vice president, HP Helion, HP. “With HP Helion OpenStack and the HP Helion Network, Ormuco’s Connected Cloud customers will have access to hybrid cloud services from a global, open ecosystem of service providers.”

EMC World 2015: Event Recap

After EMC World 2015, I’m languishing in airports today in post-conference burnout – an ideal time to deliver a report on the news, announcements and my prognostications on what this means to our business.

The big announcements were delivered in General Sessions on Monday (EMC Information Infrastructure & VCE) and on Tuesday (Federation: VMware & Pivotal). The Federation announcements are more developer and futures oriented, although important strategically, so I’ll pass on that for now.

EMC and VCE have updated their converged and hyperconverged products pretty dramatically. Yes, VSPEX Blue is hyperconverged, however unfortunate the name is in linking an EVO:RAIL solution to a reference architecture solution.

The products can be aligned as:

  1. Block
  2. Rack
  3.  Appliances


The VCE Vblock product line adheres to its core value proposition closely.

  1. Time from order to complete deployment on the data center floor: 45 days. (GreenPages will provide the Deploy & Implementation services. We have three D&I engineers on staff now.)
  2. Cross-component unified upgrades through a Release Candidate Matrix – every single bit of hardware is tested in major and minor upgrades to ensure compatibility: storage, switch, blade, add-ons (RecoverPoint, Avamar, VPLEX).
  3. Unified support – one call to VCE, not to all the vendors in the build

However, VCE is adding options and variety to make the product less monolithic.

  1. VXblock – this is the XtremIO version, intended for large VDI or mission critical transactional deployments (trading, insurance, national healthcare claims processing). The Beast is a Vblock of eight 40 TB Xbrick nodes, 320 TB before dedupe and compression, or nearly 2 PB with realistic data reduction. Yes, that is Two Petabytes of All Flash Array. Remote replication is now totally supported with RecoverPoint.
  2. VXRack – this is a Vblock without an array, but it isn’t VSAN either. It is… ScaleIO, a software storage solution that pools server storage into a shared pool. The minimum configuration is 100 compute nodes, which can be dense performance (4-node form factor in a 2U chassis) or capacity. The nodes can be bare metal or hypervisor of any sort. This can scale to 328 Petabytes. Yes, Petabytes. This is web-scale, but they call it “Rack Scale” computing (first generation). More on that later…
  3. Vscale – Networking! This is Leaf and Spine networking in a rack to tie a VXrack or Vblock deployment together, at scale. “One Ring to Rule Them All”. This is big, literally. Imagine ordering a petabyte installation of VXblock, VXrack and Vscale, and rolling it onto the floor in less than two months.

So, that is Block and Rack. What about Appliance?

Enter VSPEX Blue, the EMC implementation of EVO:RAIL. This has definite value in…

  • Pricing
  • Unified management & support
  • The “app store” with
    • integrated backup (VDPA)
    • replication (vRPA)
    • Cloud Array integration (TwinStrata lives!), a virtual iSCSI controller that will present cloud storage to the system as a backup target or a capacity tier.

This post from Mike Colson provides a good explanation.

Future apps will include virus scanning, links to Public IaaS and others.

I set one up in the lab in 15 minutes, as advertised, although I had to wait for the configuration wizard to churn away after I initialized it and input all the networking. Professional Services will be required, as EMC mandates PS for implementation. Our team is and will be prepared to deploy this. We can discuss how this compares to other hyperconverged appliances. Contact us for more information.

There are other announcements, some in sheer scale and some in desirable new features.

Data Domain Beast: DD9500, 58.7 TB/hr. and 1.7 PB of capacity. This is rated at 1.5x the performance and 4x the scalability of the nearest competitor.

VPLEX News: The VPLEX Witness can now be deployed in the public Cloud (naturally EMC recommends the EMC Hybrid Cloud or vCloud Air). The Witness has to be outside the fault domains of any protected site, so where better than the Cloud? It is a very lightweight VM.

CloudArray (TwinStrata’s Cloud Array Controller) is integrated with VPLEX. You can have a distributed volume spanning on premise and cloud storage. I’m still trying to grasp the significance of this. The local cache for the CloudArray controller can be very fast, so this isn’t limited to low latency applications. The things you could do…

VPLEX is now available in a Virtual Edition (VPLEX/VE). This will obviously come with some caveats and restrictions, but this also is a fantastic new option for smaller organizations looking for the high availability that VPLEX provides, as well as data mobility and federation of workloads across metro distances.

VVOL: Chuck Hollis (@chuckhollis) led an entertaining and informative ‘Birds of a Feather’ session for VVOLs. Takeaway – this is NOT commonly deployed yet. Only a handful of people have even set it up, and mostly for test. This was in a room with at least 150 people, so high interest, but low deployment. Everyone sees the potential and is looking forward to real world policy based deployments on industry standard storage. This is an emerging technology that will be watched closely.

VNX/VNXe: I didn’t see or hear many striking features or upgrades in this product line, but an all flash VNXe was trumpeted. I’ll be looking at the performance and design specifications of this more closely to see how it might fit targeted use cases or general purpose storage for SMB and commercial level customers. There is talk around the virtualization of the VNX array, as well as Isilon, so pretty soon nearly every controller or device in the EMC portfolio will be available as a virtual appliance. This leads me to…

ViPR Controller and ViPR SRM: Software Defined Storage

ViPR Controller is definitely a real product with real usefulness. This is the automation and provisioning tool for a wide variety of infrastructure elements, allowing for creation of virtual arrays with policy based provisioning, leveraging every data service imaginable: dedupe, replication, snapshots, file services, block services and so on.

ViPR SRM is the capacity reporting and monitoring tool that provides the management of capacity that is needed in an SDS environment. This is a much improved product with a very nice GUI and more intuitive approach to counters and metrics.

I’d recommend a Storage Transformation Workshop for people interested in exploring how SDS can change the way (and cost) of how you manage your information infrastructure.

More on EVO:RAIL/VSPEX Blue

I met with Mike McDonough, the mastermind behind EVO:RAIL. He is indeed a mastermind. The story of the rise of EVO:RAIL as a separate business unit is interesting enough (300 business cases submitted, 3 approved, and he won’t say what the other mystery products are), but the implementation, strategy and vision are what matter to us. The big factor here was boiling down the support cases to come up with the 370 most common reasons for support calls, all around configuration, management and hardware. The first version of EVO:RAIL addressed 240 of those issues. Think of this as having a safety rail around a vSphere appliance to prevent these common and easily avoidable issues, without restricting the flexibility too much. The next version will most likely incorporate NSX; security and inspection are the emphases for the next iteration.

Partners and distributors were chosen carefully. GreenPages is one of only 9 national partners chosen for this, based on our long history as a strategic partner and our thought leadership! The tightly controlled hardware compatibility list is a strength, as future regression tests for software and other upgrades will keep the permutations down to a minimum. (By the way, the EMC server platform is Intel – for VxRack, VSPEX Blue and, I think, for all of their compute modules across all their products.)

The competitive implication is that appliance vendors buying white-box hardware on commodity contracts – which allow flexibility in drives, memory and CPU – will have an exponentially more difficult task maintaining the increasing permutations of hardware versions over time.

Final Blue Sky note:

Rack Scale is an Intel initiative that promises an interesting future of increased hypervisor awareness of the underlying hardware, but it is a very future-leaning project. Read Scott Lowe’s thoughts on this.

 

As always, contact us for more details and in-depth conversations about how we can help you build the data center of the future, today.

 

By Randy Weis, Practice Manager, Information Infrastructure

vCloud Air: Helping a customer move to a hybrid cloud environment

As you most likely know, vCloud Air is VMware’s offering in the hybrid/public cloud space. In my opinion, it’s a great offering. It allows you to take existing virtual machines and migrate those up to the cloud so that you can manage everything with your existing virtual center. It’s also a very good option to do disaster recovery.

I worked on a project recently where the client wanted to know what they needed to do with their infrastructure. They were looking for solid options to build a foundation for their business, whether it was on-prem, a cloud-based offering, or a hybrid approach.

In this project, we ended up taking their VMs and physical servers and putting a brand new host on site running VMware, hosting a domain controller and a file server. We put the rest of the production servers and the test/dev environment in vCloud Air. This also addressed their disaster recovery needs: without a lot of upfront money, they now have a place where they can recover their VMs in the event of a disaster.

 

http://www.youtube.com/watch?v=OP3qO-SI6SY

 

Are you interested in learning more about vCloud Air? Reach out!

 

By Chris Chesley, Solutions Architect