All posts by SolidFire

Traditional banks vs. disruptive competitors: Why agile infrastructure matters

By Martin Cooper, Technical Director of SolidFire

There are few industries that can match the financial services sector for the sheer volume of rules and regulations governing personal information and data. This complex set of requirements creates a unique tension within well-established financial services companies: they have to operate and update existing infrastructure, but they also need to develop and expose new services to a customer base that expects to do everything online.

Fast-growing newcomers are using a ‘digital-first’ approach to fundamentally change the way consumers manage their money. In doing so, they are significantly disrupting the status quo within the industry, and established institutions know they must evolve quickly to beat the newcomers at their own game or risk becoming obsolete.

Accenture’s 2014 UK Financial Services Customer Survey found that usage of internet banking has stabilised at around 80% amongst UK consumers over the last three years, but the real growth area at present is in mobile banking. Fuelled by the ubiquity of mobile devices, low-cost data and the widening availability of mobile apps, 27% of UK consumers now use mobile banking at least once a month (compared to just 10% in 2011), with the trend showing no signs of slowing down.

The same report also found that consumers are becoming less fussy about who they bank with. One in five consumers would hypothetically consider banking with brands such as PayPal and The Post Office, and one in eight with Tesco or John Lewis, if they delivered a more seamless digital banking experience than the traditional high street banks.

The key strength of new disruptive challengers such as Apple Pay and, indeed, PayPal is their willingness to embrace agile technologies such as cloud infrastructure in order to offer consumers the services they want, when they want them. Until recently, lingering doubts over the safety and security of cloud solutions meant many traditional financial services organisations didn’t consider them viable. However, in the face of dangerous new competition, a growing body of evidence supporting the security and performance of cloud-based platforms, and the expanding range of options available, the opportunity cloud presents is simply too big to ignore any longer.

Not only can cloud solutions offer greater flexibility and collaboration opportunities, but their scalable nature means they can be used as DevOps environments for the rapid prototyping of potential new digital products and services, with banks able to easily discard the ones that don’t work and quickly move forward with the ones that do. So as security fears are allayed and cloud adoption begins to gather pace, much of the debate has shifted to the technological aspects of adoption.

For many, the risk of integrating new applications and technology with legacy systems is now the biggest concern. They are right to be worried. Across the cloud industry a perfect technology storm is forming. Virtualisation, automation and software-defined networking are all reaching maturity but the storage layer upon which everything else is built is still largely dominated by ageing technology and outdated thinking. It is the technological equivalent of building a new house on sand, an approach doomed to fail from the start. However, recent advances in technology – made possible by falling prices of solid-state (or flash) storage – are bringing some big changes.

Foremost among these is the ability to narrowly define the performance characteristics of each and every application. It’s now possible for organisations to dial up or dial down the performance of each app, so the ‘supply’ of performance can be closely matched to end-user ‘demand’.
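To make this more concrete, here is a minimal sketch of dialling a single application’s storage performance up and down through a management API. It assumes a JSON-RPC style interface of the kind SolidFire’s Element platform exposes; the endpoint, credentials, method name and IOPS figures are illustrative placeholders rather than a definitive vendor API.

```python
# Minimal sketch: dialling an application's storage performance up or down via a
# JSON-RPC style management API. The endpoint, credentials, method name and IOPS
# figures are illustrative assumptions, not a definitive vendor API.
import requests

STORAGE_API = "https://storage-cluster.example.com/json-rpc/9.0"  # hypothetical endpoint
AUTH = ("admin", "admin-password")                                # placeholder credentials

def set_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    """Match the 'supply' of performance to end-user 'demand' for one app's volume."""
    payload = {
        "method": "ModifyVolume",            # assumed method name
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,         # guaranteed performance floor
                "maxIOPS": max_iops,         # sustained ceiling
                "burstIOPS": burst_iops,     # short-term burst allowance
            },
        },
    }
    response = requests.post(STORAGE_API, json=payload, auth=AUTH, verify=False)
    response.raise_for_status()
    return response.json()

# Dial up the mobile banking app ahead of a predictable peak...
set_volume_qos(volume_id=42, min_iops=5000, max_iops=15000, burst_iops=20000)
# ...and dial it back down once demand subsides.
set_volume_qos(volume_id=42, min_iops=1000, max_iops=4000, burst_iops=6000)
```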

These new storage foundations can also speed new apps’ progress from the sandbox to the production environment. As the volume of data each new app handles grows, technical personnel can easily add new storage devices – increasing capacity and boosting performance at the same time.
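As a rough back-of-the-envelope illustration (the per-node figures below are purely assumed), the appeal of scale-out is that every node added contributes both capacity and performance, so the two grow together rather than one becoming a bottleneck:

```python
# Illustrative arithmetic only: in a scale-out design each node added contributes
# both capacity and performance, so the cluster grows roughly linearly in both.
# The per-node figures below are assumptions, not vendor specifications.
NODE_CAPACITY_TB = 35    # usable capacity contributed by each node
NODE_IOPS = 50_000       # performance contributed by each node

def cluster_totals(node_count):
    """Return (capacity in TB, IOPS) for a cluster of node_count nodes."""
    return node_count * NODE_CAPACITY_TB, node_count * NODE_IOPS

for nodes in (4, 5, 8):
    capacity_tb, iops = cluster_totals(nodes)
    print(f"{nodes} nodes -> {capacity_tb} TB usable, {iops:,} IOPS")
```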

Capabilities like these are critical for a 24/7 industry which simply can’t afford disruptions, and scale-out flash storage helps here too: because it has no single point of failure, it is far less vulnerable to unexpected downtime than other storage options.

Traditional high street banks know they must evolve or die in the face of disruptive new competition. After a slow start, many are now starting to capitalise on the latest digital technology to take on the competition at their own game, but this cannot be done half-heartedly. Banks cannot afford to shut up shop completely while they rebuild their IT infrastructure from scratch – instead, they must innovate over the top of existing systems, which may be decades old.

A scale-out flash storage platform that can guarantee both performance and uptime is undoubtedly one of the essential building blocks, helping financial institutions launch innovative new services – and giving their new competitors a dose of their own medicine in the process.

Keeping up with generation cloud: Choose your technology wisely

By Martin Cooper, Technical Director, SolidFire

As enterprises turn to public cloud services and managed service providers to meet their IT needs, colocation providers have seen their customer base gradually diminish. In fact, managed service providers now bring in five times more revenue than their colocation counterparts.

Colo providers are aware their traditional business models are no longer relevant and that they need to give enterprises what they want: the cloud. But incorporating cloud into their provision in such a saturated market won’t be simple. To have any chance of success, colo providers need to carve out what differentiates them from their competitors and implement the right technology to underpin their offering. But where do they start?

Service providers who got into the cloud game early seemed, at the time, to be more visionary. But for a colo provider looking to get into the cloud business now, there are advantages to being a little later to the table. Early cloud adopters are now burdened by a number of things, including poorly performing legacy equipment that still has one to three years of remaining depreciation, and older storage technologies that cost them a lot to maintain. They are also required to carry out deep quality assurance and testing in response to any change to a legacy system that typically has very poor APIs, or none at all.

Even when service providers adopt newer technologies like all-flash storage and iSCSI networking, migrating customers from one system to another can be disruptive and cause severe customer satisfaction issues. With the right strategy and technology, colo providers can become the proverbial small, agile ship in the harbour while the larger cloud providers are the lethargic vessels struggling to keep up with the fast pace of innovation.

Understand the need to differentiate

Determining the kind of cloud offering that makes sense for a colo’s business operationally, based on who their customers are, is crucial for identifying who they’ll likely be competing against. Everyone uses Amazon Web Services or Rackspace as a point of reference, but many providers compete on differing value propositions. It is key for colos to build their value proposition on why their current customers chose to do business with them in the first place; looking at why those customers chose them for colocation will most likely reveal a recurring theme.

Whether it’s proximity to a major point of presence, the availability of multiple bandwidth providers, or a trusted local brand, these are a colo’s established differentiators. And they need to build their cloud value proposition on the same core pillars.

They’ll find that by understanding why their customers chose them and what cloud services they are competing against, extending their business value proposition will be much faster, easier and less costly than trying to build a new cloud services brand from scratch.

Choose technology wisely

Now consider the technology used behind the scenes. Using strategic technologies to bridge the colocation-to-cloud gap can be a solid first step. The choice of technology is key to architecting a colo’s cloud offering (as is the decision to either build the offering from scratch or buy a “cloud in a box,” but more on that later).

By automating their solution as much as possible, colos can deliver consistently high-quality services to their customers and clearly differentiate their offerings from most of the competition. Enterprises are simply looking for a guaranteed, predictable platform from which to host their applications. The hypervisor has become ubiquitous, and most enterprises don’t care what is under the covers of the infrastructure until it comes to its underlying storage systems.

Colocation to the cloud: The hybrid approach

It’s unlikely that a colo will go from colocation to an Amazon-like cloud straight away, due to the complexity and cost involved. Many colo providers begin by delivering private clouds to their colocation customers in what is called a hybrid model. Delivering hybrid cloud hosting services on top of their colocation business makes the jump into the cloud business a financially attainable and realistic proposition.

Taking small steps by delivering hybrid-hosting offerings to their current colocation customers is an easy way to hone their cloud delivery and implementation skills. Their current customers will be more forgiving and more tolerant of small issues as they grow their hybrid cloud offerings.

The journey from colo to the cloud can be a little scary, but with some advance planning, the latest all-flash storage, a good understanding of customers and good timing, colocation providers can make the journey a successful one for their business.

Four ways OpenStack improves enterprise IT

By Jeramiah Dooley, Cloud Architect at SolidFire

Industry discussion around the role OpenStack can play in the enterprise remains rife – given its open source nature, it is still viewed with suspicion by a number of businesses. When taking a closer look at how the industry is using OpenStack, however, it becomes clear that there are plenty of examples to prove OpenStack can be used in an enterprise IT context. This means that now is the time to move the discussion forward, and look at how OpenStack makes enterprise IT better.  

Looking at the current enterprise IT landscape, there is still a significant number of “legacy” workloads that require support, but new applications (especially on the customer-facing side) have increasingly been moving to a lightweight, web-scale deployment model. These new applications present different challenges from SAP, Oracle and Exchange workloads, and are delivered by dedicated teams employing varying methodologies on different infrastructure – often via public clouds.

It’s here that OpenStack can really support enterprise IT, and as a result the challenge has moved from “how do we deploy and manage these new kinds of applications?” to “how do we integrate that process into our existing operational model, so that enterprise IT as a whole improves?”.

OpenStack can be the enabler of this, and here are four top ways that it can support enterprise IT:

1. OpenStack Can Extend Your Investments in AWS

You probably have developers in your company using (and paying for) AWS right now, whether or not you know it (or admit it). The fundamental challenge is that their consumption model (programmatic, API-driven, use and dispose on demand) doesn’t match up with a traditional IT procurement and provisioning model, causing issues at many levels.

OpenStack can extend the AWS consumption model and all of the skills that your developers have learned by making internally hosted and managed infrastructure available in the same manner. Cloudscaling CEO Randy Bias lists this as requirement #4 of enterprise-grade OpenStack in his excellent series of enterprise cloud blog posts.
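To illustrate the point, the sketch below uses the openstacksdk library to provision and dispose of a server against an internal OpenStack cloud in the same programmatic, on-demand style developers already use with AWS; the cloud name, image, flavor and network are placeholders for whatever your environment actually exposes.

```python
# Minimal sketch: the same programmatic, use-and-dispose consumption model that
# developers know from AWS, pointed at an internally hosted OpenStack cloud via
# openstacksdk. The cloud name, image, flavor and network are placeholders.
import openstack

conn = openstack.connect(cloud="internal-cloud")  # entry in clouds.yaml (assumed)

# Provision on demand...
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("dev-net")

server = conn.compute.create_server(
    name="prototype-api-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"{server.name} is {server.status}")

# ...and dispose of it when the experiment is finished.
conn.compute.delete_server(server)
```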

2. OpenStack Can Embrace Your Existing Enterprise Hardware

Every enterprise I work with has a varying tolerance for change, and in some part that tolerance is driven by how much existing investment needs to be protected. Hardware is expensive, and while change can happen quickly, amortization happens on a fixed schedule.

Rather than looking at new application models as an either/or proposition, or as a completely new operational model to manage, OpenStack can take advantage of your existing hardware, your existing staff that administers that hardware and your existing processes used to manage those assets.  The list of hardware companies that are actively involved in OpenStack (even if it’s just contributing drivers) is long and growing.

3. OpenStack Can Help Drive IT Transformation

Don’t underestimate the amount of money that large enterprises are willing to spend in order to drive more efficiency into their operational model.  The only way to do more, faster, is to become more efficient, and to use a “cloud first” model that embraces both on-premises and public cloud deployment models.  

By standardizing on a cloud management and deployment model, enterprises can start winding down the silos that have been created around hardware and its associated vendors. It’s amazing how misaligned those legacy silos are with the business process they are supposed to support, and the efficiency gains made here can be significant and lasting.

4. OpenStack Can Embrace Yesterday While Preparing IT For Tomorrow

Enterprise IT faces many challenges from many directions. If the financial, business-alignment and operational-efficiency struggles we’ve discussed earlier in this list aren’t enough, challenges with staff retention, development methodologies, architecture adoption, learning curves and even basic troubleshooting are all increasing as the enterprise pivots into a new cloud era. Yesterday’s hardware vendors have become today’s liabilities. Yesterday’s applications have become today’s boat anchors.

Meanwhile, today’s development methodologies and workload patterns have become the pattern for how IT will operate in the future, and that’s where the best and brightest in this industry want to be. For every one server or disk hugger out there who is more interested in his Java-based GUI and annual trip to Vegas, there are dozens of people more interested in tackling the interesting questions around what cloud becomes when it grows up, and how to make every IT process align with the horizontal business policy it’s designed to support.

For every virtualization administrator who is content to sit and let the industry come to him, there are many who are instead trying to help decide where the industry is going.  OpenStack (even with its in-fighting, politics, and open source soul) is a huge part of that process, and embracing it allows enterprises a front row seat and a license to participate.

Ultimately, there’s still a long way to go until OpenStack is truly embraced by this community, and there will be bumps ahead. Still, it’s poised to not just be relevant in the enterprise, but become extremely beneficial for it. We are already seeing the front of the adoption curve jump in and find that the water isn’t as cold as they feared.

How far can it go? We’re about to find out.

Storage: The heart of the next generation data centre

By Jeramiah Dooley, Cloud Architect at SolidFire

Public cloud services have put huge pressures on enterprise IT to compete in a more agile way. When it can take days or even weeks for IT departments to procure and manually set-up necessary networking and storage hardware to support new applications, why wouldn’t employees turn to providers who can meet their needs within minutes?

To meet these demands, the hardware infrastructure needs to be more than fast; it needs to be flexible and scalable, with rapid automation to meet the needs of its users. Storage has a key role to play here: with a sophisticated management layer that separates performance from capacity – independently controlling access speeds and available drive space on an app-by-app basis – it has now become possible to virtualise performance independently of capacity.

Indeed, data centre storage is going through an interesting time, with $1.2bn of publicly disclosed funding pouring into storage start-ups in the last year alone. However, it’s not just flash storage that’s responsible for the flurry of activity. It’s the functionality that storage vendors are wrapping around flash technology – enabling true next generation data centres – that is getting people excited.

Across the industry we can see a perfect storm of other data centre technology forming, with virtualisation and software-defined networking both reaching maturity. But storage is the one area still dominated by legacy technology and thinking.

Beyond performance

With IDC predicting that the global market for flash storage will continue its growth to $4.5bn in 2015, flash becoming the de facto standard is almost inevitable. But merely retrofitting complex legacy storage systems to incorporate flash is insufficient in the face of current market dynamics, which demand rapid application deployment, dramatic system scalability and easy end-user self-service.

Of course, the performance of flash has become table stakes in the storage race. Next generation enterprise storage arrays will instead be measured on simplicity of operation, deep API automation, thoughtful integrations with cloud management platforms like VMware and OpenStack, rich data services, broad protocol support and fine-grained Quality of Service (QoS) controls. Any array using flash is expected to be fast; it’s the rest of the services that will differentiate them.
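As one hedged example of what that API-driven, QoS-aware integration can look like in OpenStack, the sketch below uses python-cinderclient to define a performance tier as QoS specs and attach it to a volume type. The Keystone endpoint and credentials are placeholders, and the minIOPS/maxIOPS/burstIOPS keys follow the pattern used by SolidFire’s Cinder driver, so the exact keys will vary by back end.

```python
# Hedged sketch: exposing per-workload QoS through OpenStack Cinder by pairing
# QoS specs with a volume type. The Keystone endpoint and credentials are
# placeholders; the minIOPS/maxIOPS/burstIOPS keys follow the pattern used by
# SolidFire's Cinder driver and will differ on other back ends.
from keystoneauth1 import session as ks_session
from keystoneauth1.identity import v3
from cinderclient import client as cinder_client

auth = v3.Password(
    auth_url="https://keystone.example.com/v3",   # placeholder endpoint
    username="admin", password="secret",          # placeholder credentials
    project_name="admin",
    user_domain_id="default", project_domain_id="default",
)
cinder = cinder_client.Client("3", session=ks_session.Session(auth=auth))

# Define a performance tier as QoS specs enforced by the storage back end.
gold_qos = cinder.qos_specs.create(
    "gold-iops",
    {"consumer": "back-end", "minIOPS": "2000", "maxIOPS": "10000", "burstIOPS": "15000"},
)

# Create a matching volume type and associate the QoS specs with it, so
# self-service users simply pick "gold" and get predictable performance.
gold_type = cinder.volume_types.create("gold")
cinder.qos_specs.associate(gold_qos, gold_type.id)
```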

So, the impending storage battle that will lay waste to the legacy data centre won’t be won by raw performance alone. Instead, it will be won by the rich features and functions over and above the flash layer, tailored for specific use cases or workloads, that drive significant capital and operational cost savings.

Storage for the next generation data centre

A world that relies increasingly on cloud services, or cloud-like internal IT services, is one that thrives on guaranteed performance. Cloud contracts are underpinned by all manner of “availability” SLAs. As cloud computing continues to gain in enterprise popularity, cloud providers are coming under increasing pressure to supply cast-iron guarantees that focus on application performance and predictability.

This level of “performance reliability” underpins the next generation data centre and is enabled by robust Quality of Service tools. It is the combination of Quality of Service, agile management and performance virtualisation capabilities that will define storage architectures within next generation data centres – to demand less will be to accept siloed computing and legacy IT thinking as the status quo.

As the IT industry shifts away from the classic monolithic and static operational models into the new era of dynamic, agile workflows of cloud computing, it’s time to up the ante and look beyond traditional storage hardware architectures and consider products that are built from the ground up for the next generation of applications and operations.