Three major cloud trends for 2017: SMBs, vendors, and architecture

(c)iStock.com/Leslie Achtymichuk

With adoption growing, service provision changing and true cloud architecture emerging, this is going to be a big year for cloud computing. Here is the story behind each trend:

SMBs drive cloud adoption

Small and medium-sized businesses will be the driving force behind cloud adoption, not only because the cloud offers a cost-effective way to use services they could never afford to run in-house, but also because they are the easiest and most scalable market demographic for service providers to sell to.

Take, for example, the widespread adoption of Microsoft Office 365. It is a direct cloud replica of what everybody is used to in a classical office infrastructure: Microsoft Exchange email. Office 365 provides the same functionality, except there is no need to buy hardware or license an entire server; hosting a few mailboxes through a provider is the easiest way to adopt the service. On top of that comes a myriad of add-on services extending the functionality of Office 365 mailboxes: backup, analytics, additional file storage and many other tools that make the service more attractive to users across the board. This gives service providers an attractive opportunity to increase recurring revenue, and lets end users adopt the service without any large investment.

There are many other examples that will drive cloud adoption by SMBs over the next 12-18 months. This will be the largest service adoption shift in the whole of the IT market.

Traditional distributors give way to cloud service providers

The classical distribution model, built on moving physical boxes of hardware and software to customers, is rapidly changing shape. To stay in the game, traditional distributors are turning their focus from physical goods to service provision: not only providing services from their own datacenters, but also becoming aggregated cloud distribution companies that offer their reseller channels every Independent Software Vendor (ISV) service they can consolidate under one invoice.

This creates a new dynamic in which cloud services become the main focus of traditional distribution channels. When choosing a cloud vendor, service providers will give preference to services that come with easy management, integration, and flexible onboarding tools.

True cloud architecture emerges

We are going to see a big shift in the understanding of true cloud architecture. Cloud has become a buzzword over the last few years, but in essence it is not much different from what has been on the market for 15-20 years: a model of providing services from a centralised location, on a shared platform, to take care of the IT needs of SMBs and enterprises.

In the next 12-18 months we’ll see a strong trend toward creating a true cloud architecture, which will become the foundation of the cloud industry. It’s very different from just provisioning individual services to individual customers. It’s all about the management and service delivery layer that sits on top of the actual commodity cloud services.

Cloud vendors who cannot offer a white-labeled, multi-tiered, multi-tenant, securely separated solution with delegated management will miss out on new business in 2017. Service providers and SMB customers will be looking for a solution that allows them to sign up their own distributors and sub-distributors, resellers and sub-resellers, and that has the capacity to create multiple user roles for multiple departments. This functionality will be a critical service differentiator in the cloud space in the coming months.
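To make the multi-tier, multi-tenant idea concrete, here is a minimal Python sketch of a tenant hierarchy with strictly downward-delegated management. It is an illustration only: the Tenant class, tier names and can_manage check are hypothetical and do not describe any particular vendor's platform.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Tenant:
    """One node in a white-label distribution tree (hypothetical model)."""
    name: str
    tier: str                       # e.g. "vendor", "distributor", "reseller", "customer"
    brand: Optional[str] = None     # white-label branding shown to this tenant's users
    parent: Optional["Tenant"] = None
    children: List["Tenant"] = field(default_factory=list)

    def add_child(self, child: "Tenant") -> "Tenant":
        child.parent = self
        self.children.append(child)
        return child

    def can_manage(self, other: "Tenant") -> bool:
        """Delegation flows strictly downward: a tenant may manage anything in
        its own subtree, but never a sibling or an ancestor."""
        node = other
        while node is not None:
            if node is self:
                return True
            node = node.parent
        return False

# Example hierarchy: vendor -> distributor -> reseller -> SMB customer
vendor = Tenant("CloudVendor", "vendor")
distributor = vendor.add_child(Tenant("Acme Distribution", "distributor", brand="Acme Cloud"))
reseller = distributor.add_child(Tenant("Local MSP", "reseller"))
customer = reseller.add_child(Tenant("SMB Customer", "customer"))

assert distributor.can_manage(customer)      # management delegated down the tree
assert not reseller.can_manage(distributor)  # but never back up
```

In a real platform each tier would also carry its own billing, catalogue and role definitions, but the tree-plus-delegation shape above is the core of what "multi-tiered, multi-tenant" means in practice.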

This year’s developments in the cloud space will create unprecedented opportunities for service providers and SMB customers. Service providers will be able to grow their business by packaging and reselling ISV hosted services through existing channel ecosystems, without any capital investment. This will also open new opportunities for SMBs, who will continue replacing in-house IT infrastructure with cloud-based services, taking advantage of the Opex model of service consumption.

How often should you test your disaster recovery plan?

(c)iStock.com/Aslan Alphan

By Lily Teplow

As a savvy managed service provider (MSP), you know that having an effective backup and disaster recovery (BDR) solution and disaster recovery (DR) plan is a necessity in today’s business landscape – just in case your client opens an umbrella indoors and their whole IT network crashes. However, having these reliable solutions in place is of no value if the processes aren’t regularly tested. So, the question is this: what do your clients’ DR plans look like, and when was the last time they ran a DR drill?

Creating a disaster recovery plan is an essential part of the business continuity planning process, yet over half of DR plans are tested once a year or never at all. There’s more to it than simply developing the plan – it must also be regularly tested to ensure it will work in the event of an emergency. However, small- and medium-sized businesses (SMBs) don’t always have the technical resources to do so.

As an MSP, you can help them avoid major data loss or business failure due to failed DR by packaging regular DR tests into your overall BDR offering. Not only will you be able to increase a client’s disaster readiness, you’ll also increase the reliability of your BDR solution and accelerate sales. But how can you convince clients and prospects that they need to partner with you to routinely test their DR plan? We’ve gathered data from CloudVelox’s State of Disaster Recovery Survey in the chart below for you to use in your next BDR sales meeting:

Infrequent testing of backup environments is putting businesses at substantial risk in the event of an outage or disaster. As you can see from the chart, 58% of respondents say they test their DR plan just once a year or less, while 33% of respondents say they test infrequently or never at all. Without adequate testing, a minor outage could become a serious headache, and a major disaster could prove catastrophic.

So what is holding these organisations back from more frequent DR testing? For most, the major reasons can be chalked up to lack of internal resources and process complexity. But this is where an experienced MSP can come in to help. Your clients and prospects can rely on you to test and work out any kinks in their DR plan and ensure a higher level of data protection. If they have any questions pertaining to the need for DR testing and your services, use the talking points below to convince them.

Why is DR testing important?

Despite all the talk about disaster recovery testing, most organisations still don’t do it enough. Without testing and verification of DR plans, you’ll have no idea whether you’ll actually be able to recover from a disaster or extended outage. It’s during these tests that any security and backup issues can be identified and addressed – because sometimes, extended downtime can be a life-or-death situation for a business.

Rather than leaving it to chance, testing backups helps ensure that all the components of your BDR solution and process work together to provide the proactive data management, protection and business continuity you need to be successful.

How often should I have my DR plan tested?

Chances are the answer is “more often than you are testing now.” There truly isn’t one magic number, but the more DR testing you do, the better prepared you’ll ultimately be. As your MSP, we can undertake this testing for you to ensure that all backup and data protection processes are working correctly, and provide you with added peace of mind that you’ll be prepared for any disaster.

How can I improve my business’ DR preparedness?

The first step is to build a detailed disaster recovery plan. We can work together to determine specific procedures for your DR plan, such as setting Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs) for critical applications. The next step is to test the plan: consistent testing sets a cadence your organisation deems effective while allowing for improvements and critical adjustments. The best way to achieve this is to form a strategic partnership with an MSP. As an experienced provider, we can offer assistance with DR testing while providing ongoing monitoring to ensure data is regularly backed up and always secure. With our help, you’ll be in a good position to stay protected in any situation that comes your way.
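To illustrate what an RPO means in practice, here is a small, purely hypothetical Python sketch that checks whether the newest backup of each application still falls within its Recovery Point Objective. The application names, RPO values and timestamps are invented for the example; a real check would read them from your backup tooling.

```python
from datetime import datetime, timedelta

# Maximum tolerable data loss per application (hypothetical policy).
rpo_policy = {
    "crm":         timedelta(hours=1),
    "email":       timedelta(hours=4),
    "file_server": timedelta(hours=24),
}

# Timestamp of the latest verified backup (hypothetical data).
last_backup = {
    "crm":         datetime(2017, 3, 1, 9, 30),
    "email":       datetime(2017, 3, 1, 7, 0),
    "file_server": datetime(2017, 2, 27, 23, 0),
}

def rpo_violations(now: datetime) -> list:
    """Return the applications whose newest backup is older than their RPO."""
    return [app for app, rpo in rpo_policy.items()
            if now - last_backup[app] > rpo]

# Run the check for a given point in time; file_server breaches its 24-hour RPO.
print(rpo_violations(datetime(2017, 3, 1, 10, 0)))   # -> ['file_server']
```

Running a comparable check against real backup data during every DR drill turns the RPO from a number in a document into something that is verified on a schedule.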

Does your BDR solution enable you to run frequent DR tests? Scaling for Success: The MSP Guide to Operational Efficiency, Continuum’s new interactive eBook, examines the costs of “noisy” BDR technology and reactive service delivery, revealing a smarter, more reliable backup verification alternative. Download your copy here!

[session] Million Dollar #SaaS Service with #Kubernetes | @CloudExpo #DevOps

In his session at 20th Cloud Expo, Mike Johnston, an infrastructure engineer at Supergiant.io, will discuss how to use Kubernetes to set up a SaaS infrastructure for your business. Mike Johnston is an infrastructure engineer at Supergiant.io with over 12 years of experience designing, deploying, and maintaining server and workstation infrastructure at all scales. He has experience with brick-and-mortar data centers as well as cloud providers like Digital Ocean, Amazon Web Services, and Rackspace. His expertise is in automating deployment, management, and problem resolution in these environments, allowing his teams to run large transactional applications with high availability and the speed the consumer demands.
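As a flavour of the kind of building block such a setup involves (not taken from the session itself), here is a minimal sketch that creates a stateless Deployment with the official Kubernetes Python client (pip install kubernetes). The image name, namespace and replica count are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your local ~/.kube/config credentials

# A three-replica Deployment for a hypothetical SaaS API container.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "saas-api"},
    "spec": {
        "replicas": 3,  # multiple replicas for availability
        "selector": {"matchLabels": {"app": "saas-api"}},
        "template": {
            "metadata": {"labels": {"app": "saas-api"}},
            "spec": {
                "containers": [{
                    "name": "saas-api",
                    "image": "registry.example.com/saas-api:1.0",  # placeholder image
                    "ports": [{"containerPort": 8080}],
                }]
            },
        },
    },
}

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Kubernetes then keeps the declared replica count running and reschedules containers when nodes fail, which is the property that makes it attractive as a SaaS substrate.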

CIO Outlook: Rethinking #SaaS and #DevOps | @CloudExpo #AI #Microservices

The rapid growth of hyperscale IaaS platforms that provide serverless computing and software management automation services is changing how enterprises can get better cloud ROI. Heightened security concerns and enabling developer productivity are strategic issues for 2017.
The emergence of hyperscale Infrastructure-as-a-Service (IaaS) platforms such as Amazon Web Services (AWS) that offer serverless computing, DevOps automation and large-scale data management capabilities is changing the economics of software consumption. As SaaS vendors and SaaS customers start to operate on the same IaaS platform, why have enterprise data sit in someone else’s virtual cloud environment? Why pay for the data twice?

DevOps still lacking definition despite big usage figures – but the cultural element is key

(c)iStock.com/DrAfter123

The way in which DevOps is being tackled by different organisations can be defined in one of two ways. You’ve either got the ‘teenage sex’ approach – everyone’s talking about it but hardly anybody’s doing it – or you’ve got the Eric Morecambe method; ‘all the right notes but not necessarily in the right order.’

In other words, there is no one unified guide; companies and departments are taking their own strands and doing it their own way. This has been backed up by a study from B2B research firm Clutch released earlier this week, which finds that while 95% of organisations polled either already use or plan to use DevOps methodologies, nobody can agree on a proper definition.

The most popular definition at the time of the survey, according to the almost 250 respondents, came from Wikipedia, which states that DevOps “is a culture, movement or practice that emphasises the collaboration and communication of both software developers and other [IT] professionals while automating the process of software delivery and infrastructure changes.”

35% of those polled plumped for that, ahead of definitions taken from Rackspace (24%) – “uniting development and operations teams” – Hewlett Packard Enterprise (21%) – “grounded in a combination of agile software development plus Kaizen, Lean Manufacturing, and Six Sigma methodologies” – and Amazon Web Services (20%) – “combination of cultural philosophies, practices, and tools that increases an organisation’s ability to deliver applications and services at a high velocity” – respectively.

Riley Panko, the author of the report, said she wouldn’t have been surprised by a 25% split across all four from respondents. “We did hypothesise that there would be a lack of consensus among the definitions,” Panko told CloudTech. “We were a little surprised that Wikipedia’s definition secured even an 11% edge over the second-place definition.”

Organisations need to take time with DevOps to align the processes with their own goals – but not too much, for fear of being left behind

When asked whether employing DevOps has improved organisations’ development processes, the average score out of 10 was 8.7. 87% of respondents put their score between eight and 10, with only one respondent ranking it four or below. According to survey respondents, Docker was the most useful tool for employing DevOps, with Microsoft Azure the most effective of the ‘big three’ cloud providers.

Panko noted that, in conducting the research, one clear element stood out. “I believe that culture is a key element to DevOps,” she said. “Successful DevOps implementation seems to require more than just tools and new processes – it involves encouraging a culture of greater communication and collaboration. Using those tools and processes with an antiquated mindset is counterintuitive – however, it’s all a matter of personal opinion and how you define DevOps for your own organisation.”

This is a view backed up by Brian Dearman, solutions architect at IT infrastructure consulting provider Mindsight. “It being a cultural movement is true,” he said. “15 to 20 years ago, DevOps consisted of two separate things, with operations and development consistently complaining about each other. The culture is being changed, removing the animosity between each side.” David Hickman, VP of global delivery at IT services firm Menlo Technologies, said that it was ‘more of a methodology trend than a culture’, adding it “pertains more to social elements of organisations and how people relate to each other from a professional and business perspective.”

So for the vast majority who are partaking, what should be done from here? As the various definitions overlap and consensus is less than universal, Clutch recommends organisations should take time to align the processes with their own goals, but not too much, for fear of being left behind. “Leaving a team without those guidelines means that they might develop conflicting ideas about DevOps, since many differing ideas about the philosophy already exist,” said Panko.

A study released earlier this month by F5 came to a slightly different conclusion; while the poll, of almost 2,200 IT executives and industry professionals, said usage was increasing, only one in five said DevOps had a strategic impact on their organisation.

You can read the full Clutch report here.

Postscript: The reason the Wikipedia entry was signposted as ‘at the time of the survey’ was because the definition has since changed. At the time of print, the page now explains DevOps is “a term used to refer to a set of practices that emphasise the collaboration and communication of both software developers and IT professionals while automating the process of software delivery and infrastructure changes.” Semantics, eh?

ChatOps Interface | @DevOpsSummit @VictorOps #DevOps #IoT #ChatOps

The modern software development landscape consists of best practices and tools that allow teams to deliver software in a near-continuous manner. By adopting a culture of automation, measurement and sharing, the time to ship code has been greatly reduced, allowing for shorter release cycles and quicker feedback from customers and users. Still, with all of these tools and methods, how can teams stay on top of what is taking place across their infrastructure and codebase? Hopping between services and command line interfaces creates context-switching that slows productivity and efficiency, and may lead to early burnout.
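The ChatOps answer is to push events into the shared chat channel the team is already watching, instead of making engineers poll separate dashboards. Here is a minimal, generic sketch in Python; the webhook URL and payload shape are placeholders, since real chat platforms each define their own incoming-webhook format.

```python
import requests  # third-party HTTP client: pip install requests

# Placeholder incoming-webhook URL; substitute your chat platform's real hook.
CHAT_WEBHOOK = "https://chat.example.com/hooks/deployments"

def notify(event: str, service: str, detail: str) -> None:
    """Post a one-line event summary to the team's shared chat channel."""
    payload = {"text": f"[{event}] {service}: {detail}"}
    response = requests.post(CHAT_WEBHOOK, json=payload, timeout=5)
    response.raise_for_status()  # fail loudly if the chat platform rejects it

# Called from a deploy script, CI job, or alerting hook, for example:
notify("deploy", "billing-api", "v2.3.1 rolled out to production")
```

Because the notification lands where conversation already happens, the context that would otherwise be scattered across tools accumulates in one searchable place.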

Anexia IT Trends 2017 | @CloudExpo @_ANEXIA #Cloud #Mobile #AI #BigData

Being able to look into a crystal ball would be great: what are the IT trends for 2017? Flexible, fast and self-built: developments observed across various IT areas in recent years will continue in 2017 – but in which direction? We are not sure the answer sits in the fortune-teller’s crystal ball at the fair, so we asked our Anexia experts instead. They gladly opened their doors and shared their trend analysis for 2017 with us. What is happening in big data, cloud management, SEO and VoIP telephony?

Database Performance Analyzer | @CloudExpo @SolarWinds #VM #APM #Cloud

Quickly find the root cause of complex database problems slowing down your applications
Up to 88% of all application performance issues are related to the database. DPA’s unique response time analysis shows you exactly what needs fixing – in four clicks or less.
Optimize performance anywhere
Database Performance Analyzer monitors on-premises, on VMware®, and in the Cloud, including Amazon® AWS and Azure™ virtual machines.
