All posts by jgardner

Analysing the latest AWS services: Certificate Manager, Lambda, and DevSecOps


The Amazon Web Services (AWS) cloud is continuing its pace of rapid, iterative service improvements in 2016. It has already announced several hundred updates in the last few months alone, proving yet again why it is a top choice for enterprises: not only are core AWS services stable and mature, but AWS is constantly improving services and software — innovation that comes “built in” by using the AWS platform.

Here are the 2016 AWS service announcements that the senior DevOps engineering team at Logicworks are especially excited about:

AWS Certificate Manager

For any systems administrator that has experienced downtime from misconfigured or expired certificates – in other words, every systems administrator – AWS Certificate Manager is an ideal solution. Released in January 2016, Certificate Manager removes the everyday “annoying” parts of managing SSL certificates and allows you to provision, manage, and renew SSL certificates for AWS resources.

The certificates are free and self-renewing, but currently can only be deployed to AWS resources like an Elastic Load Balancer or a CloudFront distribution. There are many 3rd party services that perform the same function, but Certificate Manager is sure to appeal to enterprises that need to maintain encryption standards and centrally manage certificates across large, complex AWS environments.
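For teams that script their environments, the workflow is short. Below is a minimal sketch using boto3; the domain name and load balancer name are placeholders, and domain ownership still has to be validated before the certificate is issued.

```python
# A minimal sketch, not a production script: request a free ACM certificate
# and attach it to an existing classic Elastic Load Balancer listener.
# The domain name and load balancer name below are hypothetical.
import boto3

acm = boto3.client("acm", region_name="us-east-1")
elb = boto3.client("elb", region_name="us-east-1")

# Request the certificate; AWS validates ownership before issuing it.
cert = acm.request_certificate(
    DomainName="www.example.com",
    SubjectAlternativeNames=["example.com"],
)
cert_arn = cert["CertificateArn"]

# Once the certificate has been issued, bind it to the HTTPS listener
# of a classic ELB. Renewals are then handled by ACM automatically.
elb.set_load_balancer_listener_ssl_certificate(
    LoadBalancerName="web-elb",
    LoadBalancerPort=443,
    SSLCertificateId=cert_arn,
)
```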

AWS Lambda – VPC access

Released in mid-February 2016, AWS Lambda can now access services within a Virtual Private Cloud (VPC). This means your Lambda functions can now reach resources that sit “behind” a VPC, such as RDS databases and ElastiCache nodes, and you can use a VPC NAT gateway to give those functions access to the internet.
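Configuration is minimal: you attach subnet and security group IDs to the function. Here is a hedged sketch using boto3, with placeholder identifiers throughout.

```python
# A minimal sketch: attach an existing Lambda function to a VPC so it can
# reach private resources such as RDS or ElastiCache. The function name,
# subnet IDs, and security group ID are placeholders.
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

lam.update_function_configuration(
    FunctionName="nightly-report",
    VpcConfig={
        # Private subnets where the function's network interfaces will live
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
        # Security group that is allowed to reach the database/cache tier
        "SecurityGroupIds": ["sg-0ccc3333"],
    },
)
```

Note that the function’s execution role also needs permission to manage the network interfaces it creates, and a NAT gateway is still required for outbound internet access from inside the VPC.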

AWS Lambda is in many ways the future of infrastructure-as-code and cloud automation. Lambda allows you to run code without managing instances or networks, and can be used in conjunction with other automation tools like Puppet, Chef, AWS CloudFormation and CodeDeploy to create infrastructure that is truly built, managed and secured with code. (We talk more about why we love Lambda here.) Although adoption among AWS customers is still limited at this point, it has generated buzz since its spotlight at re:Invent 2015 and will likely reach many more milestones in 2016.

Scheduled reserved instances

Managing cloud costs remains a top concern for SMBs and enterprises, and around 30% rely on AWS Reserved Instances (RIs) to optimise cloud costs. They will likely be pleased by a new type of RI: Scheduled Reserved Instances, released in mid-January 2016.

Scheduled Reserved Instances allow you to reserve EC2 capacity in advance for recurring jobs. You can think of it as a highly reliable version of an AWS Spot Instance that cannot fail mid-job and is provisioned on a regular schedule. This will be very useful for companies that run batch jobs once a month, such as periodic business intelligence “data crunching” jobs or Elastic MapReduce (EMR) workloads. That said, for the vast majority of use cases, an enterprise can just purchase a group of standard RIs to receive the 30-70% discount.
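As a rough sketch of how the purchase works (boto3, with placeholder dates, recurrence, and instance type), you search for an offering that matches your schedule and then buy it:

```python
# A minimal sketch: find and purchase scheduled EC2 capacity for a weekly
# batch job. The dates, instance type, and recurrence are placeholder values.
from datetime import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look for c4.large capacity available every Sunday for at least four hours.
offers = ec2.describe_scheduled_instance_availability(
    FirstSlotStartTimeRange={
        "EarliestTime": datetime(2016, 4, 3),
        "LatestTime": datetime(2016, 4, 30),
    },
    Recurrence={"Frequency": "Weekly", "Interval": 1, "OccurrenceDays": [1]},
    Filters=[{"Name": "instance-type", "Values": ["c4.large"]}],
    MinSlotDurationInHours=4,
)

# Purchase the first matching offering.
token = offers["ScheduledInstanceAvailabilitySet"][0]["PurchaseToken"]
ec2.purchase_scheduled_instances(
    PurchaseRequests=[{"PurchaseToken": token, "InstanceCount": 1}]
)
```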

CodeDeploy push notifications

AWS CodeDeploy is a very simple, language agnostic platform that allows enterprises to create fully automated deployment pipelines. The best part is that they can easily reuse existing setup code or software release processes in CodeDeploy, making it easy to set up and use.

However, in the past it has been difficult to find out the live status of a deployment — the only option was actively monitoring updates. In mid-February 2016, AWS remedied this problem by adding push notification support for CodeDeploy, meaning that your developers or systems staff can receive notifications for CodeDeploy events (e.g. deployment failure) directly to email, text, pager, etc. This means engineers can respond more rapidly to troubleshoot and remedy deployment errors.
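The notifications are delivered through Amazon SNS triggers attached to a deployment group. A minimal, hedged sketch of wiring this up with boto3, with placeholder application, group, and address names:

```python
# A minimal sketch: push CodeDeploy failure events to an SNS topic that the
# on-call team subscribes to. The application, deployment group, and email
# address are placeholders.
import boto3

sns = boto3.client("sns", region_name="us-east-1")
codedeploy = boto3.client("codedeploy", region_name="us-east-1")

# An SNS topic fans the event out to email, SMS, or a paging integration.
topic_arn = sns.create_topic(Name="codedeploy-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="oncall@example.com")

# Attach a trigger to an existing deployment group so failures are pushed out.
codedeploy.update_deployment_group(
    applicationName="web-app",
    currentDeploymentGroupName="production",
    triggerConfigurations=[
        {
            "triggerName": "notify-on-failure",
            "triggerTargetArn": topic_arn,
            "triggerEvents": ["DeploymentFailure", "DeploymentStop"],
        }
    ],
)
```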

As AWS continues to emphasise the benefits of DevOps tooling and practices, expect more service updates around AWS Developer Tools (CodeDeploy, CodePipeline, CodeCommit, etc.) this year.

New DevSecOps documentation

Recently AWS has increased its output of documentation and whitepapers related to DevOps tooling, security and governance on the cloud, which they call DevSecOps. Although not technically a new service, this documentation can make a huge difference for enterprises looking to achieve compliance or architect for better security policies on AWS.

AWS has published a number of these resources recently, and every security professional should seek them out.


Incident response on the AWS cloud and the case for outsourcing


By Jason Deck, VP strategic development, Logicworks

It is 4pm on a Friday before a holiday, right before your team leaves for a long weekend. An engineer on your team suddenly cannot connect to certain instances in your AWS environment. The error is affecting the largest projects and biggest customers — across hundreds of instances — including DR.

What happens now?

The answer to this question depends on your support model. In most companies, an incident like this means that no one is going home for the holiday weekend; they will spend 15+ hours diagnosing the problem, then 200+ hours fixing it manually, instance by instance. If they do not get it fixed in time, they will lose data and have to tell their customers — a potentially damaging situation.  

This is a true story, but what actually happened was very different. The company instead called their managed service provider (MSP) – us – who diagnosed the problem and fixed it over the holiday weekend.

Every system has weak points. Every system can fail. It is how you deal with catastrophe — and who you trust to help you during failure — that makes the difference. Every enterprise team needs an insurance policy against mistakes, large and small.

It turns out that one of the company’s internal engineers had caused the problem, inadvertently changing permissions for their entire environment. Logicworks was able to diagnose the problem in less than an hour, determine the blast radius, get our smartest engineers in a room to develop a complex remediation strategy, and implement that fix before business resumed after the holiday. This involved writing custom scripts (in Python, Bash, and Puppet) to investigate the scope of the failure and another more complex script to partially automate the fix, so that each instance could be repaired in 3-5 minutes, rather than 2-3 hours. Ultimately it took 170+ hours of engineering effort, but the company readily admitted that it would have taken them two weeks to fix on their own.

Managed infrastructure service providers were born in an age when implementing a fix meant going to a data centre, swapping out hardware, and doing manual configurations. The value of an MSP to enterprises was not having to manage hardware and systems staff.

In the cloud, MSPs must do more. They must be programmers; instead of replacing hardware, they need to write custom scripts to repair virtual cloud instances. MSPs need to think and act like a software company: infrastructure problems are bugs, the solution is code, and speed is paramount.

Not all MSPs operate this way. Many MSPs would have looked at this company’s issue and applied the traditional incident response model: just reboot everything manually, one at a time. (Many also would have said “you caused the problem, you fix it.”) This is the traditional MSP line of thinking, and it would have meant that the company would have lost three to five days of data and customer trust.


Running on cloud infrastructure comes with unique risks. It is often easier for your engineers to make a career-limiting mistake when a single wrong click, or a single wrong line in an automated script, can change permissions across an entire system. These new challenges require new answers and a new line of defence.

Importantly, this means that MSPs no longer replace internal IT teams; they provide additional expertise that the enterprise currently lacks (or is in the process of building) in fields like cloud security and automation. They provide an additional layer of defence. In the example above, the internal and MSP teams collaborated to fix the problem, since there is shared control of the infrastructure.

In the cloud, the conversation no longer has to be insourcing vs. outsourcing. In fact, you will get the most out of an MSP if you are also setting up internal DevOps teams or implementing software development best practices. As an MSP, we find that companies with an existing or growing DevOps team are the most exciting to work beside. As an example, an MSP cannot automate your entire deployment pipeline alone; most only operate below the application level and can only automate instance spin-up and testing. But if the two teams are working together, they can balance application-specific needs with advanced scaling and network options and create a very mature pipeline very quickly.


In other words, an MSP can accelerate your DevOps team building strategies, not substitute for them. This is an incredibly powerful model that we have watched transform and mature entire cloud projects in a matter of months. Plus, they can take on all the crucial compliance work your DevOps team dreads, like setting up backups and logging — and even improve the quality of that compliance work by creating automated tests to ensure logs and backups are always kept.

It is true that internal IT teams sacrifice some control by using an MSP. The key is that you are sacrificing control to a group of people who are held responsible for making your environment secure, available, etc. You control how and when the MSP touches your environment.

Cloud projects are complex, and cloud problems can be equally so. Just make sure that when they happen, you have the right team on the bench.


Why company culture is key to cloud success


If you ask any successful company for the key to their success, or ask any employee why they are happy in a position, the answer is almost always “the people”. But happy, productive people are usually the product of a thriving company culture.

Company culture is not built with free lunches, ping pong tables, and bring-your-pet-to-work days. Only companies that develop more meaningful values soar ahead of the rest.

Technology companies, and particularly those in the cloud space, have a unique challenge in fostering a healthy culture. The market is evolving at a dizzying pace and the competition to source qualified employees is fierce. The threat of continual staff turnover can disrupt team harmony, delay DevOps transformation efforts, and reduce client satisfaction. How do companies keep DevOps projects on track by finding staff and vendors that build and align with their company culture?

Here are a few things to watch out for when building company culture or sourcing your next partner:

Share innovation

The connection between a company’s willingness to experiment with new ideas and employee satisfaction is well known. Company cultures that encourage experimentation, have early access to new feature or product releases, and carve out time for their teams to understand, test, and tinker with them allow engineers to be innovative, elevate the team around them, and serve clients more effectively.

However, how the company communicates these experiments and centralises learnings across teams is often undervalued. “Innovation sharing” is especially critical in cloud migration projects, where different lines of business are using the cloud in different ways; it is essential that one team’s core learning becomes another team’s playbook.

Companies that document innovation learn smarter. To implement this in your own team, start small by creating a central wiki that is controlled and regularly audited by the senior-most engineers. Schedule monthly show-and-tells for technical teams; at Logicworks, we schedule ours on Fridays and make them BYOB. For sophisticated cloud teams, create a central GitHub repository that houses assets like cloud templates and reusable scripts, which can be checked out and modified by individual teams. As senior engineers review suggested modifications, they are able to standardise best practices and build key learnings into templates that everyone can reuse.

Everyone wins when everyone wins

Good organisations recognise the accomplishments of individual contributors. Great organisations celebrate the shared responsibility behind the collective team’s achievements. Every company has superstars, but few are successful without those superstars rolling up their sleeves to collaborate and provide mentorship. This practice helps companies disprove the old adage “you are only as strong as your weakest link” by creating an environment where everyone is empowered to build on the strengths of others.

Migrating and maintaining cloud resources is new and challenging for every team. Even the most advanced engineers are learning new skills and making mistakes. In a cloud environment where mistakes are usually less costly and errors can be fixed rapidly, it is more important that the team grow stronger than that everything run perfectly. This “blameless” culture is what fosters experimentation and ultimately yields higher trust between development and systems teams, improving time-to-market and product quality.

Transparency yields better vision

Companies that have established transparency as a foundational element of their culture inspire a more honest workplace, allowing their employees the benefit of understanding career paths and company direction. Transparency stimulates a shared sense of ownership in the company’s success. But what does transparency actually look like, and how do executives maintain a level of discretion while involving all team members?

Simplicity works best here. Hold staff meetings that highlight the performance of teams and discuss project updates, goals, and metrics. Talk about who you are planning to hire and why. The CEO or CFO should discuss the financials of the company. This is a hard one for many companies, but an absolute game-changer in getting the entire staff aligned on revenue strategy.

If you are looking for a partner, this is equally true. There are many players that look good on paper, but when you dive deeper are financially unstable, misrepresent services or experience, or lack vision. Companies that are forthcoming in showcasing their financial health, proven experience, and vision demonstrate a stronger commitment to current services as well as future growth.

Customers are partners, not numbers on a sales report

Companies that respect their clients or customers cultivate more positive company cultures. This is not about building a better customer service department, but about the tone set by all managers towards customers and clients — especially if this is an IT project and customers are internal. How do engineers talk about their customers internally? Is it acceptable to ridicule “silly” customer or employee questions around the watercooler? A company that sees its clients as annoyances, mocks them, or sees them as a number fosters negativity and an overall unhappy workplace.

Cloud projects stretch IT departments and IT vendors thin. There is a lot to be done, and often not a lot of time to do it. But if IT thinks of itself as beleaguered and overwhelmed, it begins to turn away meaningful interactions and criticise customers. Once this tone sets in, it can be difficult to leave behind.

Again, this starts with managers. What metrics are managers measuring? Is it as important to hit retention numbers as to hit sales numbers? How are positive client interactions or positive cross-departmental stories highlighted? Are customer-facing employees empowered to take extra time on a client? How many clients does each account manager or IT person support? Are deadlines determined without the involvement of IT, which causes bad feelings all around?

While this is difficult to measure in prospective vendors, it is crucial to get references, certifications, and first-hand meetings with non-sales staff up front. To handle the complexity of your projects, you want an IT partner you can trust and that has developed a positive internal culture that values your business.

Keeping employees and clients successful, engaged, and innovative is about investing in cultivating a culture that empowers staff, values innovation sharing, and continually challenges itself to evolve as a company. Although the principles outlined above may seem like common sense, surprisingly few companies actually embed them into their culture. The companies that do will reach and exceed their cloud goals.


The unexpected rise of the managed data centre


For the last five years, industry analysts have predicted that cloud will kill the data centre. So why are colocation revenues continuing to climb at about 10% a year? Why did Equinix, the largest data centre provider in the world, see revenues go up 13% year on year in the fourth quarter of 2015?

The answer is that as enterprises begin to move “easy” workloads to AWS, they want to move workloads that are not yet cloud-ready to a managed environment outside their internal data centres. Somewhat paradoxically, colocation is rising in popularity precisely because enterprises want cloud; it fits well into a hybrid cloud plan, helps enterprises consolidate data centres, and helps transition people and processes to a shared responsibility model.

The decline of the corporate data centre

Enterprises want out of the data centre business – eventually. Just 25% of enterprises in North America will build a new data centre when they run out of space in existing properties, according to a report by 451 Research. A whopping 76% will choose colocation or cloud, followed by 62% who will consolidate existing data centres. By 2018, 50% of all server racks in North America will be located at cloud and colocation data centres, up from 40% today. They will continue to invest in improving existing data centres, but seem unwilling to guesstimate their computing needs twenty years into the future or spend hundreds of millions of dollars on non-differentiating technology.

At a high level, enterprises undergoing digital transformation (92% of enterprises in North America) are looking to eliminate institutionalised friction — processes and transactions that rely on sequential processes and “functional fiefdoms”.

When you run a data centre, it is hard to escape the realities of provisioning speed (slow), absolute control (technicians in the building) and sequential processes (hardware installation is more or less sequential by nature). Colocation may improve power efficiency and reduce bandwidth costs, but in the end, the colocation versus in-house data centre conversation is not a technology decision — it is a people and process decision. Data centres are simply inconvenient and inefficient people and process centres.

Colocation takes one step towards frictionless IT by removing the slowdowns associated with provisioning services like AC, security, etc. But they still need their own staff to replace cores, add new hardware, and coordinate with line of business teams. What enterprises really want is a cloud responsibility model in the data centre: the ability to provision new compute resources abstractly, with minimal process or delay. They actually want managed colocation so that someone with better processes and scale can do that work faster, invisibly.  

Connecting colocation and AWS

Although colocation is normally a people and process decision, there is an important technology benefit of colocation: super-fast private connections to the public cloud.

It is no accident that Equinix’s fastest growing revenue segment is private connections to AWS. Enterprises want low-latency connections to the public cloud for a number of reasons. First and most urgently, they want to build hybrid environments where data is transmitted quickly and cheaply between dedicated hardware and the cloud. AWS Direct Connect can cut data transfer fees by a factor of two to ten and provide a more reliable connection than other ways of linking a data centre to AWS. It is also more secure than connecting to AWS over the internet.

But again, the hardest part of transitioning to the cloud is not refactoring applications or preparing data pipelines, but preparing new service definitions and security / compliance models. Managed colocation encourages CSOs and security teams to figure out a shared security, compliance, and management model now, where services are divided between a 3rd party platform (the colocation provider), a managed service provider, and internal teams. This model is similar in most ways to a cloud security model, so when these workloads are transferred to the cloud in the future, transferring service definitions will be relatively simple.

The virtues of managed colocation + managed AWS

The value of managed colocation is magnified even further when an enterprise uses the same managed service team for colocation and cloud.

First, the enterprise gets a single contract, a single set of SLAs, and a single set of security and compliance models. Documentation, auditing, tracking, and ticketing can occur in a single interface or can be divided among lines of business with shared resources.

Enterprises can also use a single service provider for hybrid applications; the team that reboots your Oracle RAC database communicates directly with the team that must monitor your AWS account for unexpected ramifications of the reboot. Service levels are application-centric and team-centric, not infrastructure-centric. In the entire field of ISVs, consultants, and system integrators that enterprises could choose from, MSPs are uniquely positioned to remove some of the complexity of a cloud process migration.

Lastly, a managed service provider that has a long history of “traditional” system management plus AWS expertise is quite simply a great tool to have in your back pocket. They can be consulted when migrations get tough, as they understand the nitty-gritty of traditional systems, the cloud, and how traditional applications behave on the cloud.

The reasons enterprises are migrating to the cloud are well-understood. But we are witnessing an unexpected but logical corollary: the rise of the abstract management layer, and hence managed colocation. If cloud is the future, managed colocation is the transitional architecture that will help enterprises reach these long-term cloud goals.


Six things to look for in a cloud MSP


In the world of infrastructure as code, large and small businesses need a team that understands how infrastructure enables faster deployment and smarter product development. In the cloud, costs depend more on how day-to-day workflows are managed, not just on what hardware is running — so processes and reporting are more complex. Companies need a partner to not just fill in the gaps, but to tell them where the gaps are. They need experts, not phone support.

The cloud is challenging managed service providers (MSPs) to offer more value to customers, and enterprises should update their criteria to take advantage of these services. Here are six things enterprises should look for in a cloud MSP:

Cloud expertise

This may sound obvious, but true cloud expertise is harder to come by than you might imagine. Here are some things to ask — and some warning signs to watch out for:

Is the MSP an approved partner (e.g. an AWS Premier Partner)? What tier?

This matters. Top tier partners have more proven experience, are more stable, and are more accustomed to dealing with enterprise-level clients. AWS has tens of thousands of partners, and the Premier tier highlights those in the top 1-2%. If a partner you are evaluating is new to the AWS ecosystem, they will have more limited enterprise support access and no preferred access to beta programs.

Has the MSP been independently audited for this expertise?

It is one thing for an organisation to become a partner, but quite another to be audited by a third party for this expertise. AWS offers this as a program, and approved MSPs get APN Managed Service Partner status based on rigorous criteria.

How many cloud certifications does the MSP hold?

The IT industry is filled with thousands of new cloud engineers and new cloud companies. While this makes the cloud industry an exciting place to be, it is also a risky one for enterprises that cannot afford to hire green cloud engineers or inexperienced cloud partners for their mission-critical projects.

If your enterprise is moving to the cloud, make sure your partners are certified in your cloud platform of choice. Certification will not prevent all mistakes, but it will guarantee that the MSP’s staff has breadth of experience, troubleshooting skills, and a serious commitment to cloud best practices.

How does the partner define “cloud”? Are they still trying to sell old tech as “cloud”?

Many MSPs still sell private cloud and colo — and this is a benefit. However, the thing to watch out for is when an organisation sells their own “public cloud” or tries to make the pitch that their collection of data centres is “more secure” than the big players or “more flexible”.

Cloud has come to mean many things, and you do not want your organisation stuck in a small, old-tech data centre because you bought the promise of pseudo-“cloud” technology. If your MSP is trying to sell anything other than AWS (or Google, or Azure) to you as public cloud, take a closer look.

Automation

Agility leads the list of drivers for adopting the cloud, according to a report by Harvard Business Review. Innovation comes second. Most see this agility coming from reduced business complexity and IT operational complexity.

These statistics, like most cloud-based studies, unfortunately blur the distinction between cloud-based SaaS products and cloud hosting. The former provides built-in agility; the latter does not. In order to get the cost and agility benefits of migrating core infrastructure hosting to AWS or another cloud provider, you must not only orchestrate the platform’s services on your own, but also set up your own workflows, build your own reports, and perform hundreds of other tasks that AWS does not do for you. AWS is not a SaaS provider; it is a platform.

The answer is to automate your cloud infrastructure so that it becomes a PaaS-like platform for your development team; i.e., they can spin up and down new environments in minutes, replicate changes automatically across instances, and centralise documentation and change management.
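What that looks like in practice varies, but a common pattern is a library of pre-approved templates that anyone on the team can launch from. A minimal sketch, assuming a template your team already maintains (the stack name, template URL, and parameters below are placeholders):

```python
# A minimal sketch: spin up a complete environment from a pre-approved
# CloudFormation template so developers can self-serve. The template URL,
# stack name, and parameters are placeholders for whatever your team keeps
# in its template library.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="feature-x-test",
    TemplateURL="https://s3.amazonaws.com/example-templates/standard-web-tier.json",
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "test"},
        {"ParameterKey": "InstanceType", "ParameterValue": "t2.medium"},
    ],
    Tags=[{"Key": "Project", "Value": "feature-x"}],
)

# Wait until the environment is ready before handing it to the dev team.
cfn.get_waiter("stack_create_complete").wait(StackName="feature-x-test")
```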

These tools make a DevOps transition possible. Developers can get their code tested and in production in minutes. Automation empowers cost-effective experimentation. Tools like containerisation create a common language for both systems engineers and developers to communicate. Together, cloud automation and orchestration software have the potential to drastically reduce the effort in migrating to the cloud, reduce the risk of human error, guarantee that developers maintain compliance, and increase speed of development.

The problem? It would take multiple senior-level automation engineers working for months to develop a script that spins up perfectly configured instances for a variety of applications from scratch.

If cloud MSPs can run a script (created beforehand) to spin up a new environment in days, that is a huge value add for enterprises. Cloud MSPs can do the initial setup and maintenance of deployment pipelines. They can become software companies as well as curators of a dizzyingly complex software marketplace, helping enterprises take advantage of the true agility of the cloud.

BizOps

To maintain agility, you must maintain feedback between business and IT. IT needs to know where to spend and how, and business needs to know how infrastructure costs are changing over time. Typical MSPs will give you a cloud bill, and leave it to you to figure out the rest.

The great thing about migrating to the cloud is that more detailed cost reporting is available to you than ever before. But you need to configure and maintain it. A good cloud MSP handles that setup and provides detailed, automated cost reports. The best MSPs go further, holding regular reviews where this data is subjected to human judgment and appropriate cost savings strategies are implemented. For example, they may buy Reserved Instances, provide cost projections, give budget and allocation advice, and help you tag cloud resources by project and team for tracking.

A good MSP will also provide change management, incident management, ticket management and prioritisation, and basic project management. This is especially useful for organisations without an existing project management office. This can be as basic as a central ticketing and change tracking interface, but usually necessitates a dedicated technical account manager.

Traditional IT expertise

At a time when the vast majority of enterprises will implement a combination of on-premises, private, and public cloud environments, they need a partner that understands all three. They need an MSP who can deploy greenfield public cloud environments in a relatively short span of time, but also has experience in traditional enterprise hosting in order to understand legacy applications and communicate in the same language with their not-yet-cloud-ready internal teams.

A ‘born in the cloud’ MSP will stay as far away from your monolithic apps and legacy infrastructure as possible. As a result, they will often perform a cursory audit of the application, and may deploy it to a public cloud without understanding the application’s weaknesses, which tiers/features cannot be replaced by the cloud platform’s resources, or the roles of the engineers that maintain that application. They are also usually not accustomed to integrating with complex enterprise teams who require more than standard monitoring and reporting features.

In addition, not every application tier is immediately suitable for the public cloud. There are many legacy systems – like Oracle RAC, for example – that have no replacement in a cloud platform like AWS. There may also be business reasons why it is not advisable to move all tiers of an application, such as when significant capital has been invested in custom database or virtualisation systems. An MSP that understands both traditional IT and cloud infrastructure will not only be able to better audit and advise enterprises on this score, but may even be able to transport that physical hardware to their own data centre. This enables the enterprise to get all the benefits of outsourced management while maintaining colocation between their database tier and other tiers hosted on the public cloud. The cost savings and agility benefits of such a configuration can be significant.

Cloud R&D

Cloud technology changes every day. Old-guard MSPs are highly proficient at maintaining a system, but may not build cloud infrastructure that can evolve efficiently. Businesses should find an MSP that prioritises ongoing changes, not just ongoing monitoring.

A great MSP will understand that setting up your cloud “perfectly” on Day 1 is impossible. Instead, they will give your cloud the capacity for efficient change, which is usually a function of both the project management services described above (under BizOps) and cloud automation. A cloud that is fully templatised and automated can change more frequently, automatically document those changes, and enable rollbacks. This reduces both the risk of change and the overhead associated with change management.

Security credentials and certifications

Most enterprises that are looking for an MSP are also looking for an MSSP — a managed security services provider. Security expertise is table stakes in any MSP evaluation.

How do you evaluate cloud security experience? The field is relatively new, and little exists in the way of credentials or certifications. As a result, there are three key characteristics you should look for instead: traditional security credentials, compliance experience, and 3rd party audited security practices.

First, any well-qualified MSP will maintain the following certifications:

• SSAE-16
• SAS 70 Type II
• SOX Compliance
• PCI DSS Compliance

The ability to earn such qualifications indicates that the MSP possesses a high level of security and compliance expertise. They require extensive (and expensive) investigations by 3rd party auditors of physical infrastructure and team practices.

Secondly, the MSP should have compliance experience, measured by existing client logos, detailed responsibility matrices, and 3rd party audits. Most organisations will claim PCI and HIPAA compliance, but make sure they have had their offering audited by a reputable auditor against the HIPAA requirements as defined by HHS or against PCI DSS 3.0.

In 2016, more old-guard MSPs and new born-in-the-cloud providers will enter the cloud MSP space. It is crucial that businesses know what to look for — and distinguish the marketing speak from reality.


Why the cloud does not just “work”


Everyone knows that migrating workloads to the cloud is challenging. But many assume that after you get to the cloud, all you have to worry about is maintaining your applications.

After all, you have outsourced infrastructure management to AWS, right? No more racking and stacking servers, no more switches and hypervisors.

While AWS will maintain the physical infrastructure that supports your virtual environment, it is not initially set up to help you configure those virtual instances and get them ready to run your code. And it remains your responsibility to maintain that architecture so that it evolves when your applications change.

This is a potentially dangerous misconception – and one that we run into often.

If you only do the bare minimum in cloud management, or leave it to your (new) DevOps teams or your systems engineers (who are already busy helping with future projects and migrations) to figure out, it often results in chaotic cloud environments. Every instance has a “custom” configuration, a developer launched an environment and forgot to spin it down, changes are made but not documented, and wasted resources accumulate. Suddenly you are not sure which resources belong to which project, and when your site goes down, your team is left combing through dozens of resources in the AWS console.

AWS is a world-class engine equipped with a robust set of tools, but it is not a car you can just drive off the lot. Even the best cloud systems require ongoing maintenance. And while moving to the cloud means you have “outsourced” racking and stacking servers, your IT team still needs to do things like configure networks, maintain permissions, lock down critical data, set up backups, create and maintain machine images, and dozens of other tasks AWS does not perform.
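To make that concrete, here is a minimal sketch of what just two of those tasks look like when scripted with boto3; the instance and volume IDs are placeholders, and in practice this kind of job runs on a schedule rather than by hand.

```python
# A minimal sketch of two such tasks: baking a machine image from a configured
# instance and snapshotting a data volume as a backup. The instance and volume
# IDs are placeholders.
import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
stamp = datetime.date.today().isoformat()

# Create a reusable AMI from a known-good instance without rebooting it.
ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="web-base-{}".format(stamp),
    NoReboot=True,
)

# Take a point-in-time snapshot of the data volume as a backup.
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly backup {}".format(stamp),
)
```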

You may have heard of enterprises that now use a team of engineers to run their clouds. These enterprises realise that beyond the initial cloud migration, significant time and effort needs to be invested in automating configuration, auto scaling, and deployment. Automation allows these teams to spin up environments in seconds and repair failures without human effort. AWS provides the tools that make automation possible, but it is not configured on day one. That team of engineers may never touch the AWS console; they are maintaining automation scripts and modules, not the AWS resources themselves.

Over the last few years, the word “automation” has been used to describe many things. That is likely because every enterprise cloud environment will use different tools for different goals, and the end result will likely be partial automation – not every organisation needs to deploy 200 times a day or wants to become the next Netflix, after all. But the great thing about automation is that it is not an all-or-nothing proposition. Even a little effort yields immediate results.

The cloud does not work out of the box, nor does it maintain itself. But with the right team and the right skill set – in tools like Puppet, Jenkins, AWS CloudFormation – it can change your IT department forever.


Is your cloud wasting money? How to rein things in


Cloud costs are notoriously difficult to contain. With 50+ AWS services, complex usage pricing, and bills that list “EC2 costs” as a single line item, it is often hard to know who spent what — and then tie those numbers to project budgets and ROIs.

It is no wonder that cost monitoring tools have proliferated in the last two years. And while these tools help, they are only half the battle.

Recently, our team helped perform an audit of a large media organisation’s three-year-old AWS environment. While they had cloud engineers on staff, those engineers were busy supporting code releases and putting out fires. This left them little time to update their environment with new AWS features or best practice configurations.

This is a challenge for every IT department. How does a single engineering team balance fire fighting, supporting major code push events, and still have time to do real maintenance on their cloud infrastructure?

This is what happens when a “DevOps” engineering team is tasked with doing everything quickly — but not given the tools to reduce manual maintenance and deployment work. This is what happens when you try to create a DevOps team without automating the infrastructure to support continuous integration and continuous delivery.

It should come as no surprise that our audit on this team’s AWS environment revealed that about 20% of their compute resources were being wasted. Another 15% of the instances could not be linked back to an active project. They had over-engineered VPCs and were still manually launching and updating instances, which meant each instance had different configurations based on which engineer launched the instance.

Automation solves this team’s problems in a number of ways. First, and most importantly, it reduces the amount of time that the team spends configuring instances and deploying new code. When you create a custom template for your environment, bootstrap it, and then set up an integration between auto scaling and your automated deployment process, you have an environment that can spin up new instances in minutes and deploy them with the latest version of code with little or no human intervention.
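A hedged sketch of that pattern follows; every name and ID below is a placeholder rather than a recommendation, and the bootstrap step assumes a configuration management server (here, a hypothetical Puppet master) already exists.

```python
# A minimal sketch: a launch configuration whose user data bootstraps each new
# instance with configuration management, and an Auto Scaling group that uses
# it. The AMI ID, security group, subnets, and Puppet server hostname are all
# placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

bootstrap = """#!/bin/bash
# Pull configuration on first boot so every instance comes up identically.
yum install -y puppet
puppet agent --server puppet.internal.example.com --onetime --no-daemonize
"""

autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-app-v42",
    ImageId="ami-0123456789abcdef0",
    InstanceType="m4.large",
    SecurityGroups=["sg-0ccc3333"],
    UserData=bootstrap,
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-prod-asg",
    LaunchConfigurationName="web-app-v42",
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
)
```

From there, the Auto Scaling group can be registered with your deployment tooling (for example, as a CodeDeploy deployment target) so that new instances also receive the current application revision as they launch.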

Not coincidentally, the system described above is also far less prone to deployment errors or the failure of a server instance. According to a survey of 20,000 engineers by Puppet, deployment automation and integrated testing reduce deployment failures by 60%. Teams that use deployment automation deploy 200 times faster. And among Logicworks clients, downtime in production environments is zero — compared to an industry average of 10.6 downtime incidents per year among clients not on AWS, according to IDC.

Automation also has the potential to save your team money in a more subtle way: by reducing custom configurations and therefore the time it takes to fix something. When you need to make a small change to your environment and your engineers manually boot and configure instances, they have to make this change in the console or in the CLI, then create a new AMI. What happens if this new AMI causes a failure? Your engineers will probably comb through Bash logs. And then they will go through the console. And then they might just try to rebuild the AMI.

If your engineers want to make a small change to your environment and instead modify AWS CloudFormation templates or a configuration management module, it might actually take longer. But the value to your organisation is enormous. You will have a single, living source of documentation for every change you make, versioned and timestamped. This will not only save your team a ton of time in troubleshooting, but over time it will reduce the technical complexity of your environment and encourage modular template design, which further reduces the scope of potential errors.
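As a rough illustration of what that looks like in practice (a hedged sketch with placeholder stack, template, and parameter names), the same small change made through a template is just an update to the stack:

```python
# A minimal sketch: make the change by pushing a revised template rather than
# clicking in the console, so it is captured in version control and in the
# stack's event history. Stack name, template URL, and parameter are placeholders.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.update_stack(
    StackName="web-app-prod",
    TemplateURL="https://s3.amazonaws.com/example-templates/web-app-v43.json",
    Parameters=[
        {"ParameterKey": "InstanceType", "ParameterValue": "m4.xlarge"},
    ],
)

# Wait for the update to finish; the stack events become the change record.
cfn.get_waiter("stack_update_complete").wait(StackName="web-app-prod")
```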

This emphasis on documentation and templatisation over manual CLI work will also save you when your cloud engineers leave — as they inevitably will, someday. (The average engineer tenure these days is about three years.)

To truly cut maintenance costs — and not just monitor them — automation is the key. The initial investment an engineering team makes in building and maintaining templates and automation scripts means fewer errors, less fire fighting, and lower risk — and makes your engineers happier, too.


IT turnover: How to keep cloud and DevOps projects on track


It is no secret that it is difficult to recruit and retain IT talent. Millennial disloyalty, boredom, stressful workplaces, and the ubiquitous advice that job-switching leads to higher salaries are some of the many reasons employee tenure is reaching all-time lows across multiple industries. Low employee retention in the enterprise always means expense and disruption.

Turnover among cloud engineers and in DevOps teams is especially painful. According to a recent survey of IT executives, 43% of polled companies report being understaffed in IT. Furthermore, 41% of large firms pursued cloud expertise in 2015, greater than the number that pursued security, network, or data analytics expertise. Replacing lost cloud engineers can take months and delay projects, making it particularly challenging for companies beginning DevOps transformations or establishing small, efficient pockets of cloud-based teams.

Volumes have been written on how to retain tech talent. Improving working conditions and higher pay might help, but what enterprises really need is an insurance policy against turnover. They need to create conditions such that when a cloud engineer leaves, projects stay on track.


To create those conditions, you need to create a team of cloud engineers that is protected from turnover: an outsourced cloud team. To be clear, this would absolutely not be to the exclusion of creating an internal cloud team. But as more enterprises undergo cloud transformations, many are realising that a combination of internal DevOps teams plus an external team is a highly effective strategy.

Your internal DevOps team should be laser focused on product delivery. Their goal should be cloud-enabling your applications, creating automated deployment and testing pipelines, and making sense out of the complexity of your existing monoliths. They should be building new applications that deliver immediate business value.

To achieve these product transformations most efficiently, your internal team should have pre-configured cloud computing resources on hand. These cloud resources should “just work.” Your (expensive, valuable, but easily bored) DevOps engineers should not be responsible for spinning up cloud instances and manually configuring cloud networks, installing anti-virus, and managing backups.

Despite the marketing speak, no cloud platform automatically supports your applications out of the box. Cloud platforms like AWS and Google abstract away your physical interaction with machines, but someone still needs to maintain the services your applications run on top of, set up Auto Scaling, establish security groups, etc. and monitor your environment 24x7x365. And most importantly, someone needs to templatise your cloud resources so that your DevOps team has a library of cloud resources to support new projects. Outsource this. Make your cloud a stable, repeatable, secure foundation for your internal DevOps team to use. Focus your DevOps engineers on what matters to your business.


Why not build this technical cloud platform knowledge in-house? Because outsourced engineers do not quit. For several long-term clients, Logicworks is the highest-tenure engineering team. For some clients, we have seen multiple executive teams and nearly 80% turnover over the course of our engagement. We frequently train new staff and when issues arise in their applications, we offer historical context. When a new engineer makes an error, we let them know that 18 months ago their ex-colleague tried the same thing — and here is why it did not work.

This is the kind of continuity that is extremely valuable for a rapidly growing team. For a company in the middle of a DevOps transformation that is relying heavily on the experience of a very small group of highly skilled engineers, this is crucial.

As you create your 2016 budget, it is worth considering whether or not you have an insurance policy for your cloud transformations. The outsource/in-house debate has never truly been a strict dichotomy; in cloud projects, both are necessary to create efficient, fully-functional, and stable teams. You cannot control who quits, but you can control what remains behind.

The new AWS region is set to further cloud adoption in the UK


By Ryan Kroonenburg, managing director at Logicworks UK

With the steady rise of AWS adoption in the UK, it should come as no surprise that AWS will be adding a new UK region by the end of 2016.

Last week, Amazon CTO Werner Vogels announced the new AWS UK region as one of five new regions set to open in 2016: India, Ohio, South Korea, the UK, and a second region in China.

The new regions are a welcome addition for UK businesses that are interested in migrating to the cloud, but are struggling with data sovereignty issues, increased regulatory restrictions, and the growing prominence of security threats. Businesses must balance the regulatory risk of cloud migration against escalating costs associated with local providers and/or hosting in-house.

With the new UK region, yet another hurdle to cloud adoption is removed, allowing companies to take advantage of the significant cost savings and improved performance of AWS while reducing the regulatory risk of data stored outside the UK.

AWS’ new region is even more important in light of the in/out EU referendum looming in 2017. Should the UK choose to leave the EU, data would be mandated to remain within the UK. This uncertainty has caused many businesses to delay AWS adoption plans for fear that the referendum would force them to go through the time and expense of two migration projects: onto AWS and off the platform shortly after. The new region allows businesses to safely host on AWS regardless of the results of the referendum.

The final barriers to cloud adoption are usually a variety of concerns over security, and security will no doubt continue to drive nearly every cloud conversation. However, with the failure of local hosting providers to offer transparency into security practices, the recent security breaches of these providers, and increased cloud education, these conversations have shifted from “Is the public cloud secure?” to “How do we take advantage of the data protections the public cloud offers?”

As a result, a greater number of UK companies will be willing to trust critical workloads to the cloud. AWS provides native capabilities for protecting their infrastructure from common security attacks such as DDoS, SQL injection, cross-site scripting and so on, whilst still maintaining data sovereignty.

AWS is working feverishly to remove barriers to large scale cloud adoption, and with the new UK region, businesses are primed to take advantage of the cost efficiency, compute power, and scale of the AWS platform. It is an exciting time to be in the UK and watch the IT transformation unfold.


Why UK CIOs are dissatisfied with their cloud providers


By Ryan Kroonenburg, Managing Director at Logicworks UK

After speaking with hundreds of UK technology leaders about cloud adoption, it is clear that cloud technology is transforming business models and improving cost-efficiency in the enterprise.

Despite the positive results, IT leaders have also shared serious concerns. They usually sound something like this:

“The cloud is great. The support is mediocre at best.”
“I am not sure my cloud provider understands my business.”
“I spend thousands of pounds a month for cloud support, but I do not know what they do.”

Enterprises expect a certain level of support for IT products, and they are simply not getting it. In fact, nearly 75% of UK CIOs feel they have sacrificed support by moving to the cloud. Eighty-four percent feel that cloud providers could do more to reduce the burden on internal IT staff, and the vast majority of respondents felt ripped off by “basic” cloud support.

This is a huge threat to the success of the cloud in the UK. Poorly architected clouds that are supported by junior, outsourced technicians not only expose enterprises to downtime and security vulnerabilities, but also hurt overall business goals and reduce the likelihood of further cloud adoption.

I believe there are three major causes of cloud support failure — and in many ways, it is up to enterprises to become educated and demand a higher quality of service.

Cloud washing

While this phenomenon is well-known in North America, the UK market is still full of small service providers that claim to provide “cloud computing” but in fact do not. Usually these ‘cloud providers’ are small companies with a couple of leased data centres that provide none of the scale, built-in services and tools, and global diversity that true cloud providers offer.

These companies have a tiny fraction of the power and capacity for innovation of cloud giants like Amazon and Google. Cloud technology has far outstripped the basic low-cost compute model, and has now transformed data warehousing, analytics, cold storage, scalable compute, and more. Small cloud providers cannot possibly compete.

UK IT leaders assume that these small providers are superior in cloud support and cost. Unfortunately, the typical support model tends to be “fix what is broken”, which is insufficient for cloud systems and results in slow wait times and higher risk of manual error. Further, these niche providers lack the scale and breadth of services to keep costs down.

Misaligned support models

When your applications depend on the health of physical data centre components, you want a support team that is very good at fixing mechanical issues, fast.

When your systems are in the cloud, you want a support team that builds systems that can survive any underlying mechanical issue.

These two infrastructure models require very different support teams. In the cloud support model, the system as a whole is more complex. For instance, a problem might originate in a script in a central repository, so the support engineer must have a deep understanding of all layers of the system in order to discover the source of the issue. They must be able to code as well as to understand traditional networking and database concepts. Service providers can no longer staff support teams with low-level engineers whose only responsibilities are to record issues and read monitoring dashboards.

Unfortunately, cloud service providers — or traditional hosting providers that have rebranded — do not staff their support centres with experienced cloud architects. It is no surprise to me that nearly half of IT leaders report that call handlers lacked sufficient technical knowledge (41%) and were slow to respond (47%).

The cloud automation gap

There is only one way for service teams to deliver a fast, targeted fix to a service request: cloud automation. Not incidentally, automation is also the only way to deliver 100% available systems on the cloud.

Maintaining complex cloud environments is difficult. To take full advantage of the cloud’s flexibility and pay-as-you-go pricing, your cloud should scale dynamically without human intervention. This requires more than just a basic service install: you need to automate the provisioning of new instances, which means configuring those instances quickly with a configuration management script like Puppet. If your team’s goal is to deploy more frequently, you need to combine this rapid infrastructure provisioning capacity with deployment automation tools, which automate the testing and deployment of new code.
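To make that concrete, here is a minimal, hedged sketch (placeholder names and thresholds throughout) of the kind of rule that lets capacity grow without a service request: a simple scaling policy triggered by a CloudWatch alarm.

```python
# A minimal sketch: scale out automatically when CPU stays high, with no
# human intervention. The group name and thresholds are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")
cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

# Add one instance each time the policy fires, with a five-minute cooldown.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-prod-asg",
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Fire the policy when average CPU across the group exceeds 70% for ten minutes.
cloudwatch.put_metric_alarm(
    AlarmName="web-app-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-app-prod-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

Newly launched instances still need to be configured the moment they appear, which is where the configuration management piece described above comes in.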

When small cloud companies claim to offer scalability, be sure to dig deeper. What they may mean is that they can manually respond to a service request to increase your server capacity. If you want to actually automate scaling on a large cloud platform, these tasks require advanced, specialised skills, both to create these systems in the first place and to maintain them.

Few service providers offer true automation, and few enterprises realise they need it. Automation is difficult, automation experts are hard to find, and this education gap will have even larger consequences in the years to come. As the demands on IT resources continue to increase, the delay required to perform or outsource manual work will become impossible to sustain. Downtime is impossible to prevent when the system cannot be adequately automated for failover.

Any cloud service provider must be a cloud automation expert. Cloud automation should be the heart of enterprise support — it will decrease downtime, increase flexibility, and dramatically improve your provider’s ability to rapidly respond to service requests.

The right providers

Choosing the right cloud platform and service provider feels very risky for many UK business leaders. The industry is changing rapidly. The pressures to develop faster and more cost-efficiently are enormous.

I place my bets behind companies that are also evolving rapidly and developing the right new services to meet customer demand, which is why I have helped European enterprises move to AWS for over five years. That’s also why I joined Logicworks, who I believe are one of the only cloud service providers to understand the importance of automation in improving data privacy and agility. Feel free to reach out to me directly (rkroonenburg@logicworks.net) to learn more about AWS or Logicworks.