
AWS, VMware and enterprise cloud adoption maturity: VMware Cloud on AWS

According to IDC, only 25% of organizations have repeatable strategies for cloud adoption, and 32% have no cloud strategy at all, underlining the need for a best-practice-based, repeatable framework for planning cloud adoption that drives business success.

This Forbes post from Joe McKendrick also references this research, noting that "only about one in seven organizations with multiple cloud workloads (14%) actually have managed, or optimized cloud strategies. The largest segment, 47%, say their cloud strategies tend to be on the fly — opportunistic, or ad hoc", and that "only a somewhat larger group, 11%, were at the next-best level, 'managed,' in which their enterprises are 'implementing a consistent, enterprisewide best-practices approach to cloud' and 'orchestrating service delivery across an integrated set of resources.'"

Vendors like AWS and VMware offer ready-to-use best practices that can help plug this gap.

AWS: Enterprise cloud adoption maturity

These challenges correlate with a simple adoption planning model offered by Stephen Orban, head of enterprise strategy at AWS and previously CIO of Dow Jones.

From his experience, enterprise organisations progress through four main stages of enterprise cloud adoption maturity (Project, Foundation, Migration and Reinvention), consistent with the IDC research.

Organising for the cloud: Building a cloud centre of excellence

In its whitepaper ‘Organizing for the Cloud’ (30-page PDF), VMware says the key to this transformation of IT is the implementation of a ‘Cloud Operating Model’. Central to this blueprint is that the IT team should become a cloud service broker, an incremental step up in a maturity model they describe as a cloud capability model.

They also describe creating a ‘Cloud Centre of Excellence’ as the best way to achieve the required changes to the IT organisation itself. This CoE should create an online knowledge base of best practices, and define job roles and responsibilities such as cloud leader, architect, analyst, administrator, developer and service catalog manager, among others.

Having implemented this matrix of new capabilities, the IT team can then identify and achieve the organisational improvements that will be of value to the business, such as:

  • Faster response to business needs
  • Faster incident resolution
  • Improved infrastructure deployment coordination
  • Improved ability to meet SLAs

Fundamentally, VMware’s recommendation, and the headline message of enterprise cloud, is that it enables an increased focus on higher-value initiatives.

IT value transformation

The headline resource from VMware here is a study commissioned from the IT Process Institute, the white paper ‘IT Value Transformation Roadmap’ (24-page PDF).

In this document they offer a blueprint for a cloud maturity model: a ladder of maturing capability against which you can compare your organisation, and a framework for planning your own business transformation, where:

“This cloud computing strategy brief presents a virtualisation- and private-cloud-centric model for IT value transformation. It combines key findings from several primary research studies into a three-stage transformation road map.”

In short, this is an ideal strategy blueprint for existing VMware customers. It proposes a maturity model that begins with virtualisation and grows into full utilisation of cloud computing across three stages:

  • IT production – Focus on delivering the basics and proving value for money.
  • Business production – Utilise technology to better optimise business processes.
  • ITaaS – Fully embrace utility IT as a Service, and leverage technology for enabling new service innovation.

This corresponds with increasing maturity in the use of virtualisation, SaaS and other cloud architecture principles and external services, beginning where most customers are now: roughly halfway through stage one.

Becoming a transformational leader: Start your journey

It also maps to a journey for the CIO: from operational manager of a cost centre with poor value-for-money perceptions, to a boardroom-level change agent directly driving new profit-making initiatives.

Specifically, the paper makes the point that this evolution results in the CIO being recognised for delivering strategic IT value:

What is strategic IT value? Strategic IT value is demonstrated when IT plays a key role in a company’s achievement of overall business strategy. In other words, when IT is keenly focused on business outcomes and plays a significant role in optimising and improving core value chain processes. Or, when the IT organisation drives innovation that enables new technology-enabled product and service revenue streams. When IT is effective, results can be measured by improved customer satisfaction and market share gains.

In contrast, many CIOs can find themselves in something of an operational corner: responsible for keeping the lights on but perceived as a poor value-for-money cost base for doing so. The IT Process Institute describes how CIOs can break this constraint cycle and shift from a cost focus to delivering strategic value for the business, through this three-step progression.

VMware Cloud on AWS

In Taming the Digital Dragon, Gartner describes the hybrid cloud model as the blueprint for digital transformation, and AWS and VMware have released a major innovation to accelerate its adoption.

Announced on 28 August 2017, VMware Cloud on AWS is now available. With this service, VMware’s Software-Defined Data Center (SDDC) can run on AWS infrastructure, enabling users to run VMware applications across consistent public, private or hybrid vSphere-based cloud environments, with optimised access to AWS services. The service was designed to support popular use cases including data centre extension, as well as application development, testing and migration.

Why Netflix is the ideal blueprint for cloud-native computing

The poster child for migrating legacy applications and IT systems via the ‘cloud native’ approach is Netflix. Not only do they share their best practices via blogs, they also open source the software they’ve created to make it possible.

Migrating to web-scale IT

In a VentureBeat article, the author envisions ‘the future of enterprise tech’, describing how pioneering organisations like Netflix are entirely embracing a cloud paradigm for their business, moving away from the traditional approach of owning and operating their own data centre populated with EMC, Oracle and VMware products.

Instead they are moving to ‘web scale IT’ via on demand rental of containers, commodity hardware and NoSQL databases, but critically, it’s not just about swapping out the infrastructure components.

Cloud migration best practices

In this blog Netflix focuses on the migration of their core billing systems from their own data centre to AWS, and from Oracle to a Cassandra/MySQL combination, emphasising in particular the scale and complexity of the database migration part of the cloud migration journey.

This initial quote from the Netflix blog sets the scene accordingly:

“On January 4, 2016, right before Netflix expanded itself into 130 new countries, Netflix Billing infrastructure became 100% AWS cloud-native.”

They also reference a previous blog describing this overall AWS journey, which again quickly makes the most incisive point, this time identifying the primary inflection point in CIO decision making that this shift represents: a move to ‘Web Scale IT’:

“That is when we realised that we had to move away from vertically scaled single points of failure, like relational databases in our data centre, towards highly reliable, horizontally scalable, distributed systems in the cloud.”

Cloud migration: Migrating mission-critical systems

They then go on to explain their experiences of a complex migration of highly sensitive, operational customer systems from their own data centre to AWS.

As you might imagine, the core customer billing systems are the backbone of a digital delivery business like Netflix, handling everything from billing transactions through to reporting feeds for SOX compliance. The migration faced a ‘change the tyre while the car is still moving’ challenge: keeping customer-facing systems available and consistent to ensure unbroken service for a globally expanding audience, while running a background process migrating terabytes of data from on-site enterprise databases into AWS.

  • We had billions of rows of data, constantly changing and composed of all the historical data since Netflix’s inception in 1997. It was growing every single minute in our large shared database on Oracle. To move all this data over to AWS, we needed to first transport and synchronise the data in real time, into a double digit Terabyte RDBMS in cloud.
  • Being a SOX system added another layer of complexity, since all the migration and tooling needed to adhere to our SOX processes.
  • Netflix was launching in many new countries and marching towards being global soon.
  • Billing migration needed to happen without adversely impacting other teams that were busy with their own migration and global launch milestones.

The scope of the data migration and the real-time requirements highlight the challenging nature of cloud migrations, and how they go far beyond a simple lift and shift of an application from one operating environment to another.

Database modernisation

The backbone of the challenge was how much code and data interacted with Oracle, and so their goal was to ‘disintegrate’ that dependency into a services-based architecture.

“Moving a database needs its own strategic planning: database movement needs to be planned out while keeping the end goal in sight, or else it can go very wrong. There are many decisions to be made, from storage prediction to absorbing at least a year’s worth of growth in data that translates into the number of instances needed, licensing costs for both production and test environments, using RDS services vs. managing larger EC2 instances, ensuring that database architecture can address scalability, availability and reliability of data, creating a disaster recovery plan, planning the minimal migration downtime possible, and the list goes on. As part of this migration, we decided to migrate from licensed Oracle to an open source MySQL database running on Netflix-managed EC2 instances.”

Overall this transformation scope and exercise included:

  • APIs and integrations: The legacy billing systems ran via batch job updates, integrating messaging updates from services such as gift cards, and billing APIs are also fundamental to customer workflows such as signups, cancellations or address changes.
  • Globalisation: Some of the APIs needed to be multi-region and highly available, so data was split into multiple Cassandra data stores. A data migration tool was written that transformed member billing attributes spread across many tables in Oracle into a much smaller Cassandra structure (sketched below).
  • ACID: Payment processing needed ACID transactions, and so was migrated to MySQL. Netflix worked with the AWS team to develop a multi-region, scalable architecture for their MySQL master, with a DRBD copy and multiple read replicas available in different regions, plus tooling and alerts for the MySQL instances to ensure monitoring and recovery as needed.
  • Data/code purging: To optimise how much data needed to be migrated, the team conducted a review with business teams to identify what data was still actually live, and from that review purged many unnecessary and obsolete data sets. As part of this housekeeping, obsolete code was also identified and removed.
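
To make the ‘many Oracle tables into a much smaller Cassandra structure’ step concrete, below is a minimal Python sketch of that kind of denormalising migration tool, using the cx_Oracle and DataStax cassandra-driver libraries. The table names, columns and connection details are illustrative assumptions only; Netflix’s actual schema and tooling are not public in this form.

    # Sketch of a denormalising migration: member billing attributes
    # spread across several Oracle tables are collapsed into one wide
    # Cassandra row per member. All names here are hypothetical.
    import cx_Oracle                        # Oracle client library
    from cassandra.cluster import Cluster   # DataStax Cassandra driver

    ora = cx_Oracle.connect("billing/secret@legacy-db")  # placeholder DSN
    cass = Cluster(["10.0.0.1"]).connect("billing")      # placeholder node

    insert_stmt = cass.prepare(
        "INSERT INTO member_billing "
        "(member_id, plan_id, next_billing_date, balance) "
        "VALUES (?, ?, ?, ?)")

    cur = ora.cursor()
    cur.execute("SELECT member_id FROM members")
    for (member_id,) in cur.fetchall():
        # Gather attributes scattered across multiple Oracle tables.
        detail = ora.cursor()
        detail.execute(
            "SELECT plan_id, next_billing_date FROM subscriptions "
            "WHERE member_id = :1", [member_id])
        plan_id, next_billing = detail.fetchone()
        detail.execute(
            "SELECT balance FROM account_balances WHERE member_id = :1",
            [member_id])
        (balance,) = detail.fetchone()

        # Write a single denormalised row keyed by member_id.
        cass.execute(insert_stmt,
                     (member_id, plan_id, next_billing, balance))

The real tool also had to cope with change capture, retries and auditing; this sketch shows only the shape of the transformation.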

A headline challenge was the real-time aspect, ‘changing the tyre of the moving car’: migrating data to MySQL while it was constantly changing. This was achieved through Oracle GoldenGate, which could replicate their tables across heterogeneous databases, along with ongoing incremental changes. A heavy two-month testing period was needed to complete the migration via this approach.

Downtime switchover

Downtime was needed for this scale of data migration, and to mitigate the impact on users Netflix employed an approach of ‘decoupling user facing flows to shield customer experience from downtimes or other migration impacts’.

All of their tooling was built around the ability to migrate one country at a time and funnel traffic as needed. They worked with the ecommerce and membership services teams to change the integration in user workflows to an asynchronous model, building retry capabilities to rerun failed processing as needed.
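
As a minimal sketch of that asynchronous model (the in-memory queue and the process_billing_event function are hypothetical stand-ins, not Netflix code), a failed billing event is re-queued and retried with exponential backoff rather than blocking the user-facing flow:

    # The user-facing flow enqueues billing work and returns immediately;
    # a background worker retries failures with exponential backoff.
    import queue
    import time

    work_queue = queue.Queue()
    MAX_ATTEMPTS = 5

    def process_billing_event(event):
        """Placeholder for the real billing integration call."""
        raise NotImplementedError

    def enqueue_billing(event):
        # Called from signup/cancellation flows; never blocks on billing.
        work_queue.put({"event": event, "attempt": 0})

    def worker():
        while True:
            item = work_queue.get()
            try:
                process_billing_event(item["event"])
            except Exception:
                item["attempt"] += 1
                if item["attempt"] < MAX_ATTEMPTS:
                    time.sleep(2 ** item["attempt"])  # backoff: 2s, 4s, 8s...
                    work_queue.put(item)              # rerun failed processing
                # else: park the event for manual reconciliation
            work_queue.task_done()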

An absolute requirement was SOX compliance, and for this Netflix made use of components from their Netflix OSS open source suite:

“Our cloud deployment tool Spinnaker was enhanced to capture details of deployment and pipe events to Chronos and our big data platform for auditability. We needed to enhance the Cassandra client for authentication and auditable actions. We wrote new alerts using Atlas that would help us in monitoring our applications and data in the cloud.”

Building high availability, globally distributed cloud applications with AWS

Netflix provides a detailed, repeatable best practice case study for implementing AWS cloud services at extremely large scale, and so is an ideal baseline for any enterprise organisation facing the same types of scale challenges, especially where high availability (HA) is the emphasis.

Two Netflix presentations, Globally Distributed Cloud Applications and From Clouds to Roots, provide a broad and deep review of their overall global architecture approach, in terms of exploiting AWS with the largest and most demanding capacity and growth requirements, such as hosting tens of thousands of virtual server instances to operate the Netflix service, auto-scaling by around 3,000 instances per day.

These go into a granular level of detail on how they monitor performance, and additionally focus specifically on high availability architecture, providing a broad and deep blueprint for this scenario’s requirements.

Netflix Spinnaker – global continuous delivery

In short, these address the two core, common requirements of enterprise organisations: their global footprint, with its associated application hosting and content delivery requirements; and their own software development practices, i.e. how best to optimise the IT and innovation processes that deploy the software systems needing this infrastructure.

Build code like Netflix – continuous deployment

The ideal for our ‘repo guide’ to the Netflix OSS suite is for it to function as a ‘recipe’ others can follow, i.e. you too can build code like Netflix.

It is therefore apt that one of the best starting points is the Netflix blog of the same title: How We Build Code At Netflix.

Most notably, it introduces the role of continuous deployment best practices, and how one of their modules, Spinnaker, is central to them.

Cloud native toolchain

In the blog Global Continuous Delivery With Spinnaker, they explain how it addresses this scope of the code development lifecycle across global teams, and forms the backbone of their DevOps ‘toolchain’, integrating with other tools such as Git, Nebula, Jenkins and the Bakery.

As they describe:

“Spinnaker is an open source multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. Spinnaker is designed with pluggability in mind; the platform aims to make it easy to extend and enhance cloud deployment models.”

Their own quoted inspirations include Jez Humble’s blog and book on Continuous Delivery, as well as experts such as Martin Fowler and working ideals such as ‘Blue Green Deployments’.

Moving from Asgard

Their history leading up to the conception and deployment of Spinnaker is helpful reading too. They previously used a tool called ‘Asgard’, and in Moving from Asgard they describe the limitations they reached with that type of tool, and how they instead sought a new tool that could:

  • “enable repeatable automated deployments captured as flexible pipelines and configurable pipeline stages
  • provide a global view across all the environments that an application passes through in its deployment pipeline
  • offer programmatic configuration and execution via a consistent and reliable API
  • be easy to configure, maintain, and extend”

These requirements formed into Spinnaker and the deployment practices they describe, which you can repeat via the GitHub download.
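
The ‘programmatic configuration and execution’ goal is served by Spinnaker’s API gateway, Gate. As a hedged sketch only (the endpoint path, payload shape and port should be verified against your Spinnaker version’s Gate API documentation; the application and pipeline names are hypothetical), triggering a pipeline programmatically can look like this:

    # Hedged sketch: triggering a Spinnaker pipeline via the Gate API
    # (Gate is Spinnaker's API gateway, conventionally on port 8084).
    import requests

    GATE_URL = "http://localhost:8084"   # placeholder Gate endpoint
    APPLICATION = "myapp"                # hypothetical application name
    PIPELINE = "deploy-to-prod"          # hypothetical pipeline name

    resp = requests.post(
        f"{GATE_URL}/pipelines/{APPLICATION}/{PIPELINE}",
        json={
            "type": "manual",
            # Parameters defined in the pipeline's own configuration.
            "parameters": {"version": "1.2.3"},
        },
        timeout=30,
    )
    resp.raise_for_status()
    print("Pipeline execution accepted:", resp.json())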

Enterprise cloud strategies and taming the digital dragon

In its 2014 CIO Agenda report, ‘Taming the Digital Dragon’ (12-page PDF), Gartner describes the hybrid cloud platform model as the enabling technology blueprint and business model for digital transformation strategies.

As the report’s ‘We are here’ arrow illustrates, Gartner proposes we are at the transition point from the second to a third generation of IT: the shift from IT industrialisation to digitalisation, where the key skills of a CIO are business models and digital leadership rather than just IT service management; in effect, a shift from the CIO to the CDO, the chief digital officer.

This corresponds with the view of Mike Rosen of IDC, who similarly describes platforms as a third generation of IT.

As Gartner concludes: “CIOs now face the challenge of straddling the second era of enterprise IT and a new, third “digitalisation” era — moving from running IT like a business within a business, into a period characterised by deep innovation beyond process optimisation, exploitation of a broader universe of digital technology and information, more-integrated business and IT innovation, and a need for much faster and more agile capability.”

Digital dragons

Gartner principally characterizes this heightened capability in terms of competitive threat and advantage: “All industries in all geographies are being radically reshaped by digital disruption — a “digital dragon” that is potentially very powerful if tamed but a destructive force if not. It’s a CIO’s dream come true, and also a career-changing leadership challenge.”

They describe it as a dragon because it so effectively destroys the competition in its field through massive technology leverage, as Netflix, Airbnb and Uber demonstrate; brands like Kodak and the Blockbuster video rental chain are examples of those destroyed by failing to adapt to this digital disruption.

These ‘digital unicorn’ startups have generated billions in shareholder value in only a few short years specifically through this principle, matching the inventory levels of the world’s largest players simply through smart use of IT.

To replicate this level of IT-driven success, experts recommend CIOs embrace the threat as a career opportunity, with Harvard urging CIOs to take a leadership role and become ‘digital mentors’.

Research and insights from Deloitte and Gartner show that the demand for implementation of new digital capabilities will ultimately mean a large and sustained market for digital transformation skills, with considerable recognition and reward for those CIOs synonymous with advanced, successful digital programs.

Hybrid cloud business model

This article provides the introduction for our new eBook: Enterprise Cloud – Data Centre and Application Modernisation Blueprint.

Utilising design models like hybrid cloud is central to the scope of this paper: the overall goal is to provide enterprise decision makers with the full spectrum of topics they need to address for that scale of cloud adoption, and these high-level strategy frameworks are ideal to set the scene.

Gartner describes how hybrid cloud enables increased adoption of public and private IaaS, PaaS, SaaS and BPaaS, making a ‘Bi-modal IT’ framework possible, made up of three core foundations:

  • Moving to a more loosely coupled “postmodern-ERP” paradigm – More federated ERP, multi-enterprise solutions, cloud components, mobile support, embedded analytics.
  • Creating the information architecture and capabilities to exploit big data – Handled through in-memory databases, advanced analytics, unstructured and multimedia data.
  • SME innovation ecosystem – Augmenting conventional sourcing with more innovation, including sourcing from, and partnering with, smaller and less mature enterprises, in key categories of partners: mobile, design, etc.

Their central thesis is that the hybrid cloud model is key to the CIO strategy to ‘renovate the IT core’, modernizing the legacy estate to enable new digital strategies and thus provide the tool set for meeting the challenge of digital native competitors.

Bimodal IT and legacy modernisation

Fundamentally, the emergence of a ‘bimodal IT’ capability represents the evolution to the third era of IT that Gartner introduces.

Establishing DevOps teams and Agile software practices atop a Hybrid Cloud platform builds a second, much faster and adaptive layer of IT innovation that extends legacy business IT into new digital business models.

It’s such an important concept because the largest issue most enterprises face is their legacy IT estate. Sectors like government and banking in particular operate very large, complex estates of very large, complex applications, many still running on mainframes and the like.

A bimodal IT framework enables an organisation to establish the required new skills and tools, empowered by a hybrid cloud information model that ‘overlays’ the existing estate and is opened up to an innovation ecosystem of developers and other key partners.

XaaP and the platform business model: How Netflix, Uber, Airbnb are forging ahead

The primary business context for cloud native case studies like Netflix is the ‘Platform Business Model’: the conceptual design for the commercial models these companies implement, one that is directly enabled by the technology.

Considering the business wealth a ‘simple’ mobile phone app has generated for Uber, this relationship is not hard to identify in action. It is often highlighted on social media that neither Uber nor Airbnb owns any taxis or hotels, yet in a few short years they have come to command the largest fleets and room availability, vastly larger than traditional competitors who took decades to build theirs.

The platform revolution

The repeatable secret sauce is the Platform Business Model, described in detail in academic literature and popular business books; for example, the MIT book ‘Platform Revolution’ sets the scene for the market trend that Cloud Foundry addresses and is ideal for.

The Platform Business Model has emerged as the moniker for defining the hyper-scale disruptors like Netflix, Uber, Airbnb, Facebook, Twitter et al, as the book describes:

“Facebook, PayPal, Alibaba, Uber – these seemingly disparate companies have upended entire industries by harnessing a single phenomenon: the platform business model.”

The book builds on prior MIT research, such as a detailed 2007 research report on Platform Networks, the highly recommended presentation Platform Strategy and Open Business Models, and, in a simpler format, this presentation, which defines:

“A “network platform” is defined by the subset of components used in common across a suite of products (Boudreau, 2006) that also exhibit network effects. Value is exchanged among a triangular set of relationships including users, component suppliers (co-developers), and platform firms.”

Throughout these materials they provide an anatomy of these business models, exploring dynamics such as “multi-sided pricing“.

Platforms are marketplace models, ranging from the people-centric services like Uber taxis and Airbnb accommodation, through to electronic distribution channels like Apple and Xbox. MIT examines the different permutations and shares those best practice insights.

The sharing economy meets elastic cloud business models

The MIT research explores the different dynamics of platform scenarios and markets, like Microsoft vs Apple across computers, portable music players and ultimately smartphones.

The trend has also been described as ‘The Sharing Economy‘, a wave that the disruptors ride through technology innovation.

The dynamism of cloud computing is a perfect match for this equally ‘organic’ approach to resource management, both scaling in real-time to actual market demand experienced.

As the new breed of hyper-scale startups like Uber, Netflix and Airbnb demonstrate, when the model is combined with massive investment financing and highly scalable mobile and cloud application services, it very quickly becomes an all-dominating behemoth, unstoppable unless you compete on the same level.

Hence the business model itself has become so important and popular; Dan Woods, for example, says your CDO efforts will fail unless you adopt platforms.

XaaP: Industry scenarios for platform adoption

The enabling relationship between cloud computing and the increasing maturity of the model, demonstrated by real-world adoption successes, is captured in how such initiatives are described as ‘XaaP’ (everything as a platform). For example:

The US Government has showcased its recently launched Cloud.gov, a Cloud Foundry-based PaaS for the public sector to utilise to grow their rates of software innovation.

Enterprise SDN: Harnessing ‘containers as a service’

Hyper-scale SDDC and enterprise SDN use cases

A major market for telcos will be enterprise adoption of SDN. Enterprises will be able to harness the SDN/NFV innovations from the telco industry, applying the technologies within their own data centres as well as using the new telco services they enable.

As SearchDataCenter describes, NFV offers the potential to unify the data centre, and will be driven under the overall umbrella of the SDDC: the software-defined data centre.

VMware explains why enterprises are ready for network virtualisation, and positions its NSX technology as a platform for the SDDC. Example use cases are being pioneered, such as the virtual customer edge: virtualising the customer edge, for example through creation of a virtualised platform on customer premises.

The trend will see different industry partnership models emerge, exploring ways for industry supply chain combinations to exploit this specific opportunity; for example, Telehouse partnered with Aryaka to offer SDN-augmented WAN services.

Hardware vendors are also marketing specific programs targeting the sweet spot of telcos looking to compete with Amazon et al via hyper-scale data centres, with SK Telecom announced as an early implementation partner. This type of ‘hyper-scale’ data centre technology will also be harnessed by the large enterprise market as part of adopting SDN.

Containers as a service

The scenario and market opportunity is particularly exciting when you consider the sheer size of the enterprise sector, and the potential for disruptive innovations to radically transform how providers achieve competitive advantage through unique product positioning; those that best exploit the cloud native platform stand to gain most.

For example, in this VentureBeat article Peter Yared, CTO of Sapho, explores a scenario he describes as ‘containers as a service’: in short, a ‘hybrid SaaS’ capability that delivers subscription-based software via a SaaS-like model but deploys it on-premise at the client via containers. With containers hosting SDN services as well as these apps, the network market could be attacked via the same disruptive principle and enabling technical architecture.
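
To sketch the delivery mechanics Yared describes (the image name, registry and ports are hypothetical; the docker package is the official Docker SDK for Python), an on-premise agent could pull each subscription release from the vendor’s registry and run it locally:

    # Sketch of the 'hybrid SaaS' delivery model: an on-premise agent
    # pulls the vendor's latest subscription release from a registry and
    # runs it locally as a container. All names are hypothetical.
    import docker

    client = docker.from_env()  # talks to the local Docker daemon

    # Pull the vendor's current release (tag managed by the subscription).
    image = client.images.pull("registry.vendor.example/app", tag="stable")

    # Replace any running copy with the new version.
    for old in client.containers.list(filters={"name": "vendor-app"}):
        old.stop()
        old.remove()

    client.containers.run(
        image,
        name="vendor-app",
        detach=True,
        ports={"8080/tcp": 8080},  # expose the app inside the customer LAN
        restart_policy={"Name": "always"},
    )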

Vendors like Nuage offer SDN platform solutions, and in this article they discuss the scenario of using containers in this way.

Enterprise PaaS: Agile architecture for continuous innovation

Although MIT makes the specific point that the Platform Business Model is exactly that – a business model, not a technology – there is naturally a clear and powerful link with the cloud model PaaS (platform as a service).

PaaS offers literally that, a platform as a service, and so it can play a central part in enabling the Platform Business Model.

Enterprise vs cloud PaaS

Enterprise PaaS refers to the internal application of the platform as a service model, with the goal of boosting software productivity through standardised developer tools and common components.

PaaS can be utilised via public or private cloud deployment models. Public cloud services include Microsoft Azure and Google, and vendor software for building your own in-house PaaS includes Cloud Foundry and Red Hat OpenShift.

In its paper ‘PaaS: Open for Business’, Pivotal describes the essential ingredient:

“Platform as a service is a key enabler of software-driven innovation – facilitating rapid iteration and developer agility. It comprises a set of tools, libraries and services for deploying, managing and scaling applications in the cloud. Adopting an enterprise-grade, multi-cloud PaaS solution frees developers to create game-changing web and mobile applications. It also allows these applications to scale across cloud environments, based on the business need.”

In its report ‘Essential Elements of Enterprise PaaS’, Pivotal lays out a recipe for what constitutes enterprise PaaS.

From concept to cash

Agile software practices are introduced in the Agile Manifesto, and the Scrum Alliance explains their relationship to DevOps, the integration of software development and operations management. As the vendor DBmaestro describes in this blog, DevOps builds on software development best practices like version control and application lifecycle management, with additional functions to further automate deployment-to-cloud procedures.

Application performance management provider Stackify makes a great observation: agile and DevOps combine to holistically address the full lifecycle of translating business ideas into working code running in the cloud hosting delivery environment.

Microservices continuous deployment: Infrastructure as code

This integration is conveyed through the idea of ‘Infrastructure as Code’, explained by Gareth Rushgrove of the UK G-Cloud digital team in the presentation Continuous Integration for Infrastructure.
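
As a minimal illustration of the idea, not Rushgrove’s own material: infrastructure described in code can be version controlled, reviewed and run from a CI pipeline like any other software. This sketch uses boto3, the AWS SDK for Python; the AMI ID, region and tag values are placeholders.

    # Minimal 'infrastructure as code' sketch using boto3: the desired
    # server is described in code that can live in version control and
    # run from a CI pipeline. AMI ID, region and tags are placeholders.
    import boto3

    ec2 = boto3.resource("ec2", region_name="eu-west-1")

    def ensure_web_server():
        # Idempotent-ish: reuse a running instance with our tag if present.
        existing = list(ec2.instances.filter(
            Filters=[{"Name": "tag:Role", "Values": ["web"]},
                     {"Name": "instance-state-name", "Values": ["running"]}]))
        if existing:
            return existing[0]

        (instance,) = ec2.create_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder AMI
            InstanceType="t3.micro",
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "Role", "Value": "web"}],
            }],
        )
        return instance

    print("Server:", ensure_web_server().id)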

Organisations such as Netflix and Nike aren’t just pioneering new business models; they are also pioneering new technologies that accelerate them, new cloud hosting and software design methods like ‘microservices’ and continuous deployment.

The cloud is not only changing how software is hosted and executed; it’s also changing how software is written and maintained, and how it is architected and developed, through DevOps practices and microservices design patterns.

Analysing the enterprise ‘multi-cloud’ market: IBM and Virtustream lead the way

By Neil McEvoy, CEO, CloudBestPractices.net

Although hybrid cloud has become the popularised idea of how the enterprise market will inch its way into adoption of cloud services, twinning internal private clouds with public resources, it will prove a short-term label, soon replaced by ‘multi-cloud’.

I agree with the general direction of travel; however, the hybrid cloud definition will soon look inadequate, as we’re really dealing with a generalised evolution to a ‘multi-cloud’ environment, as SearchCloudComputing similarly describes in an article on the same topic.

Enterprise data centre transformation: Harnessing an enterprise cloud marketplace

The shift will go hand in hand with an associated evolution of enterprise data centre practices, including one particular aspect of multi-cloud implementation: the enterprise cloud marketplace.

In this HP article the company says the enterprise IT organisation should evolve to become a service broker, as a foundation for organisational change, while Gartner lays out a data centre transformation framework that encompasses aspects like hybrid cloud outsourcing.

The common theme is establishing more of a brokerage operation, as part of increasingly mature procurement of cloud services.

As organisations go beyond the one-off, internet-centric apps that Amazon is ideal for, into their broader IT portfolio, they will increasingly look for tools that aid in portfolio analysis, planning and migration.

An ‘enterprise cloud marketplace’ is an ideal platform for this type of functionality. Vendors like Gravitant offer suites of tools that manage the lifecycle of matching application design blueprints to possible cloud hosting options, conducting price comparisons and so forth. To illustrate this type of function, Gravitant performed a test to determine the best enterprise cloud provider, with Virtustream and IBM leading the pack. A very interesting development, then, is that IBM has now acquired Gravitant.

Open standards SDDC

A second aspect of multi-cloud capability is best understood in union with another key ongoing trend: data centre virtualisation spreading into the telco industry, headlined by the drive towards SDN- and NFV-powered telco networks.

Pioneering telco providers like AT&T are transforming their core network systems to a cloud-centric approach via their Domain 2.0 program, describing how it will enable IoT innovations.

The enterprise market will be able to harness and build upon this wave of innovation, applying the technologies within their data centres as well as using the new telco services they enable. As SearchDataCenter describes, NFV offers the potential to unify the data centre.

Industry forum the TMF is pioneering the best practices that will enable other service providers to undertake this transformation, working with vendors via ‘catalyst projects’, such as this case study with Microsoft to define a combined multi-cloud SDN; Microsoft also offers a multi-cloud reference architecture.

Open standards like TOSCA from OASIS are key to this scenario, offering cloud standards for matching blueprints to cloud providers and orchestrating the subsequent provisioning of services.

Sadhav of IBM discusses his involvement with the standard; how it might be applied with OpenStack, the open source cloud platform, is discussed in detail in this whitepaper and in this video:

“OpenStack Heat is gaining momentum as a DevOps tool to orchestrate the creation of OpenStack cloud environments. Heat is based on a DSL describing simple orchestration of cloud objects, but lacks better representation of the middleware and the application components as well as more complex deployment and post-deployment orchestration workflows.

“The Heat community has started discussing a higher level DSL that will support not just infrastructure components. This session will present a further extended suggestion for a DSL based on the TOSCA specification, which covers broader aspects of an application behavior and deployment such as the installation, configuration management, continuous deployment, auto-healing and scaling.” 

We’re additionally seeing how these same innovations can apply to the telco SDN scenario. For example, in this blog Cloudify describe how they implement TOSCA on their platform, and how they can use this to configure NFV services too.
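
As a small, hedged illustration of what ‘matching blueprints’ looks like in code, the OpenStack tosca-parser library can load a TOSCA service template and expose its node templates for an orchestrator to map onto concrete provider resources; the template path and its contents are placeholders here:

    # Hedged sketch: loading a TOSCA service template with the OpenStack
    # tosca-parser library and walking its node templates, the abstract
    # components an orchestrator matches to cloud provider resources.
    from toscaparser.tosca_template import ToscaTemplate

    tosca = ToscaTemplate("service_template.yaml")  # placeholder path

    for node in tosca.nodetemplates:
        # e.g. "web_server -> tosca.nodes.Compute"
        print(node.name, "->", node.type)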

DRaaS maturity

From an ROI and enterprise strategy point of view, the most interesting dynamic of multi-cloud capabilities is that they yield multiple benefits in different areas. One of the first is improved business continuity capability.

This Redmond article describes how Microsoft is building multi-cloud orchestration into Windows Server to offer DRaaS via Azure, and this presentation explains how DRaaS can be achieved using OpenStack.

The case for disaster recovery services beyond business continuity

Disaster recovery isn’t a new concept for IT. We’ve been backing up data to offsite locations for years, and using in-house data duplication to mitigate the risk of losing data stores. But now that cloud adoption has increased, there have been some shifts in how traditional disaster recovery is handled.

First, we’re seeing increased adoption of cloud-based backup and disaster recovery. Gartner stated that between 2012 and 2016, one third of organisations would look at new solutions to replace current ones, particularly because of cost, complexity or capability. These new solutions address not just data but the applications themselves, and are paving the way for disaster recovery as a service (DRaaS).

Unfortunately, there is still some confusion as to when cloud services may suffice for disaster recovery, and when fully-fledged DRaaS makes more sense. Let’s explore four key considerations when it comes to DRaaS and cloud backup services.

DRaaS isn’t just for emergency situations

A lot of organisations still view disaster recovery as a reactive solution, and forget that just by having cloud-based services in the first place, especially with a provider who applies business continuity best practices, there may be inherent DR/failover protection already in place.

This means less downtime risk overall, and a more proactive approach to ensuring that your organisation is up and running at all times, able to react to its customers 24/7/365.

Cloud services might help boost a small IT department’s overall security profile

While you should absolutely do your homework before signing up for cloud services, the fact is that these services are often more secure than many organisations’ own environments, and come with enterprise-grade security solutions specifically configured to address the unique characteristics of the individual services.

This means that if you are a smaller organisation without the security resources to do all the legwork of an in-house build, a cloud solution might give you more bang for your buck, reducing your onsite data protection costs, personnel costs and the day-to-day management of ensuring security controls are in place.

Consider the skillsets required for disaster recovery

There are many solutions you can leverage for in-house builds that deliver not just lower costs but also better control and the ability to work with multiple platforms and projects. But the reality is that disaster recovery needs to be at the forefront of these projects (in addition to security and functionality), and if you don’t have the right skillsets to ensure it is not just built in but constantly reviewed and updated, it might be best to look at a service provider who does. The last thing your organisation can afford, should something happen, is to lack the right resources to ensure business continuity during the outage, scrambling to figure out how to fix it.

Cloud storage isn’t a way to get around disaster recovery

While it’s important to be able to access your files no matter what happens, if you can’t run the front ends to get to the data, it’s going to be a nightmare. By opting for DRaaS rather than a cloud storage solution, with multiple failover sites for applications as well, you will still be able to run the systems themselves should there be an outage. This is why we will continue to see large enterprises look at IT services failover across multiple data centres as a disaster recovery strategy, making cloud more of a ‘data centre on demand’ type of service.

Conclusion

No matter what service you ultimately decide to go with, the key thing is to make sure you do your research. You need to take a thorough inventory of the systems involved, from application and data servers (physical and virtual) to endpoints, along with the usual SQL, Exchange and CRM systems. You should also understand what the disaster recovery process would look like, so that if the vendor needs to be involved, you know ahead of time.

Most importantly, be realistic about the skill sets available on your IT team; if there is a gap, that could be a good indicator that hosted or managed solutions make sense. The last thing you want to do in the middle of an outage is go back through SLAs to figure out whom to contact for help, or who is ultimately responsible for which functions. The more control you have over the DR environment, the easier it will be to get back up and running.

Why hybrid cloud is so important – and why the market prediction is so large

Now we’ve had a few years of cloud adoption under our belts, it’s a good time to take a look at how some of the models are performing. Public cloud has its own great case study in Amazon AWS, and private clouds have strong supporters among forward-thinking IT teams. But there is another model winning over IT teams, the hybrid cloud, and with good reason.

With the rise of cloud models, we’ve heard a lot about the benefits of public and private clouds. Public clouds gave us the ability to leverage low-cost services, helping organisations transition to cloud models through the availability of services such as Amazon AWS. Private clouds were built in-house to take advantage of the same type of technologies that make public clouds so attractive, but sadly the economies of scale often don’t work for small organisations, because the upfront costs of purchasing hardware and licences can exceed simply leveraging cloud services from a third-party provider.

Hybrid clouds came out of the evolution of data centres into cloud environments. IT folks weren’t 100% sold on the idea of moving everything into a cloud environment, whether public or private, due to perceived risks around security, availability and, most importantly, control. But here we are a few years later, and IDC predicts the global hybrid cloud market will grow from over $25 billion in 2014 to a staggering $84 billion by 2019. Very impressive statistics for a cloud model that wasn’t expected to see adoption as large as public or private cloud.

So why is hybrid cloud so important, and why is the market prediction so large? Well, first let’s start with the benefits. Simply put, hybrid clouds provide all the benefits of a regular cloud environment, such as integration, networking, management and security, but applied to a partially internal environment.

This means an organisation can start with in-house computing resources, add external cloud resources to scale up, and then either replace those cloud resources with more on-premise infrastructure or continue to leverage cloud solutions, balancing manageability and security against the low-cost benefits of outsourcing to cloud providers where it makes sense.

By combining in-house private and public clouds, organisations benefit from not just the standardisation of shared services, but also scalability, pay per use models, and the ability to launch new services more efficiently. By tacking on external services and connecting them through UI technologies, APIs and publishing services, these hybrid models make it easier to use the cloud services as a true extension of in-house data centres.

Imagine using external storage services as if they were sitting in your data centre, but without the care and feeding requirements such as patching, maintenance and backups. Cloud computing can also be leveraged to help with data processing or development, and help reduce not just the capital investments associated with building the environment, but also the costs of resources sitting idle between projects.

The best part of hybrid cloud is that it’s a solution that can be used in so many different contexts, from cloud security, networking, integration and management to consulting. Plus, it applies to just about every vertical, including media and entertainment, complex computing, healthcare, government, education, and analytics-driven organisations. It’s a great way to augment your IT team and resources where you may not have the luxury of building up teams and skill sets or purchasing new infrastructure.

The market is at a point now where the complexities that originally came with designing, implementing and maintaining a hybrid environment are mostly solved. This means organisations have more solutions to choose from, more supported vendors and available providers, and increased simplicity when it comes to ensuring visibility, connectivity and stability between multiple environments.

SDN: How software has (re)defined networking

By Andrea Knoblauch

Over the last few years we’ve seen just about every part of the data centre move towards virtualisation and software. First we virtualised desktops, then storage, then even our security tools. So when the idea of software-defined networking (SDN) started being floated around, it wasn’t a big surprise. But what exactly is it?

SDN’s early roots can be likened to the idea of MPLS, where we saw the decoupling of the network control and forwarding planes. It’s also one of the key features in Wi-Fi, one of the most prevalent technologies today. But SDN isn’t just the decoupling of the network control plane from the network forwarding plane; it’s really about providing programmatic interfaces into network equipment, regardless of whether the planes are coupled or not.

By creating APIs into these devices, we can replace manual interfaces and use software to automate tasks such as configuration and policy management, and also enable the network to respond dynamically to application requirements. A pool of network devices can now be treated as a single entity, making it easier to control network flows with tools such as the OpenFlow protocol.
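
To make those programmatic interfaces concrete, here is a minimal sketch using Ryu, an open source Python OpenFlow controller framework: when a switch connects, the application installs a table-miss flow entry so unmatched packets are sent to the controller for a policy decision. This is the standard Ryu starting pattern, shown purely as an illustration of software-driven flow control.

    # Minimal Ryu application: install a table-miss rule on each
    # connecting OpenFlow 1.3 switch so the controller sees unmatched
    # traffic and can decide how to program the network.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class SimplePolicy(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            # Fired when a switch completes its handshake with the controller.
            datapath = ev.msg.datapath
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser

            # Match everything; send unmatched packets to the controller.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                              ofproto.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(
                ofproto.OFPIT_APPLY_ACTIONS, actions)]
            mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                    match=match, instructions=inst)
            datapath.send_msg(mod)

Run with ryu-manager; the same API-driven approach extends to installing specific forwarding, prioritisation or rerouting rules.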

So what does this mean for network folks? First and foremost, SDN brings the promise of centralising and simplifying how we control networks. It can make networks programmable, and thus more agile when it comes to automation or enforcing policies. It also means that, with software at the heart of networking, the network can keep up with virtualisation and cloud computing workflows.

Like a larger control console, SDN brings centralised intelligence that makes it easy to see the network end to end to make better overall decisions, and easily update the network as a whole rather than in segments.

Security folks will also benefit from advancements in SDN, hopefully gaining more insight into network issues and the ability to respond quickly to incidents.

The jury is still out on whether SDN is ready for mainstream adoption. There are still many startups driving this market, but the Open Networking Foundation (ONF), whose board includes members from Microsoft, Yahoo, Facebook, Google and several other telecoms companies and investors, is pushing for widespread adoption.

It remains to be seen what the true benefits of software-defined networks will be, but the ability to adapt the network to different loads, prioritise or reroute traffic, and of course see a better overall picture is reason enough for many organisations to start investigating this new methodology.

The ability to drop in SDN for parts of your network and expand it as you retire legacy gear will also win strong supporters looking for ways to reduce the costs of managing traffic and grow their networks with a better return on investment.
