
The number one reason IT struggles with cloud migration


IaaS adoption usually begins in isolated pockets — one project here, one there. Then a few months (or years) later, IT decides to expand AWS usage and realises that they don’t know the status of current AWS projects; they don’t know if they ‘did it right’ when they built the first couple of projects; and they don’t know what it looks like to ‘do it right’ in AWS in general.

This last point is the crux of the issue. The #1 reason we see AWS migration projects stall is not a lack of talent or even a lack of planning; it is simply that there is no clear vision of what a “good” AWS environment looks like for their specific, complex workloads. There is no common yardstick for assessing current environments and no template for building new ones, and this uncertainty is the true root cause of downstream security and performance concerns.

What is a good AWS environment?

Several security organisations have developed a set of common standards for the cloud. AWS themselves have developed extensive documentation and even a service, AWS Trusted Advisor, to help you implement best practices.

But what IT teams really need is a single, simple set of guidelines that meets both internal and external standards, and they need those guidelines to cover everything from server configurations to monitoring and management.

This is where IT meets its first obstacle: they must develop a common baseline for security, availability, and auditability that everyone agrees to. They need to develop “minimum viable cloud configurations”. For instance: MFA on root, everything in a VPC, CloudTrail enabled everywhere.
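
As a minimal sketch, assuming boto3 and configured AWS credentials, a few of those “minimum viable cloud configuration” checks could be scripted like this (illustrative only, not the Logicworks assessment itself):

    import boto3

    def check_baseline():
        findings = []

        # 1. MFA must be enabled on the root account.
        iam = boto3.client("iam")
        summary = iam.get_account_summary()["SummaryMap"]
        if summary.get("AccountMFAEnabled", 0) != 1:
            findings.append("Root account does not have MFA enabled")

        # 2. CloudTrail must be enabled (at least one multi-region trail).
        cloudtrail = boto3.client("cloudtrail")
        trails = cloudtrail.describe_trails()["trailList"]
        if not any(t.get("IsMultiRegionTrail") for t in trails):
            findings.append("No multi-region CloudTrail trail found")

        # 3. Every EC2 instance should live inside a VPC (pagination omitted for brevity).
        ec2 = boto3.client("ec2")
        for reservation in ec2.describe_instances()["Reservations"]:
            for instance in reservation["Instances"]:
                if "VpcId" not in instance:
                    findings.append("Instance %s is not in a VPC" % instance["InstanceId"])

        return findings

    if __name__ == "__main__":
        for finding in check_baseline():
            print("FAIL:", finding)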

Example of Logicworks 89-Point Assessment, part of our Cloud Migration Service

The effort to create a single set of standards may seem like a direct push-back against the self-service IT approach, where product teams use whichever technologies they need to get the job done. However, as we will discuss below, developing these common standards is actually the bedrock of a safe self-service IT approach.

Moving from standards to live systems

It is one thing to develop standards, and another to implement those standards on new and existing AWS projects. And then another effort entirely to enforce those standards on an ongoing basis.

In the old world, enforcing configuration was largely manual. IT could afford to manually update and maintain systems that changed very rarely. In the cloud, when your resources change every day and many developers and engineers can touch the environment, a manual approach is not an option.

The key is that these standards need to be built into templates and enforced with configuration management. In other words, build a standard “template” for what your security configurations should look like, and then maintain that template rather than creating a hundred custom configurations for each new cloud project. The template gets changed, not individual virtual instances or networks.
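
As a minimal sketch of the “maintain the template, not the instances” idea, the baseline below lives in one place, is rendered to CloudFormation JSON, and is checked into source control; the resource names and rules are illustrative, not prescriptive.

    import json

    BASELINE_INGRESS = [
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443, "CidrIp": "0.0.0.0/0"},
    ]

    def security_group_template(description="Baseline security group"):
        return {
            "AWSTemplateFormatVersion": "2010-09-09",
            "Parameters": {"VpcId": {"Type": "AWS::EC2::VPC::Id"}},
            "Resources": {
                "BaselineSecurityGroup": {
                    "Type": "AWS::EC2::SecurityGroup",
                    "Properties": {
                        "GroupDescription": description,
                        "VpcId": {"Ref": "VpcId"},
                        "SecurityGroupIngress": BASELINE_INGRESS,
                    },
                }
            },
        }

    if __name__ == "__main__":
        # Changing BASELINE_INGRESS changes every stack built from this template.
        print(json.dumps(security_group_template(), indent=2))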

Logicworks build process

For central IT teams, this is revolutionary. Rather than spending months testing and reviewing each new cloud environment, the security team spends time upfront collaborating with systems teams to build a common standard, and then only needs to be involved when that standard changes and at other key points. You know exactly how every system is configured for security at any point in time, and you reduce the time and cost of deploying future systems; you do not have to rebuild security configurations or get them approved by security teams.

What to do now

If you are planning for an AWS migration, the #1 thing you should do right now is implement configuration management (CM) in your current systems. As outlined above, implementing CM means you have to a) come up with the right standards and b) “code” them into a centralised place. This is the hard part. Once you do this, migrating to AWS is much easier because you know what “good” looks like.

The main question is whether or not enterprises will have the time to set up these processes as they migrate to the public cloud. CM requires training, a team of advanced DevOps engineers and Puppet/Chef experts, and months of work. By far the easiest way to do this is to hire a consulting company that already has a common set of standards and a well-developed CM framework, who can assess your current AWS deployments and/or build a solid foundation for future deployments based on those standards.

IT leaders struggle with cloud migration when they do not have expertise in defining and maintaining ideal state. When you migrate to the cloud, any weakness in this area quickly becomes a major handicap. Configuration management can set you on a faster and more stable road to success.


Why vendor lock-in remains a big roadblock to cloud success


Despite the clear advantages of cloud, enterprises can be skittish about getting “trapped” by a public cloud vendor. Unfortunately, this wariness may prevent them from achieving success in the cloud, a new survey finds.

According to a Logicworks survey by Wakefield Research, 78% of IT decision makers believe that concerns about vendor lock-in prevent their organisation from maximising the benefits of cloud resources. This means that the majority of IT leaders consciously choose not to fully invest in cloud because they value long-term vendor flexibility over long-term cloud success.

The cost of avoiding lock-in

The study confirms reports by Gartner, Fortune, and others that cite IT executives who believe the overwhelming market dominance of public cloud players like Amazon is a negative; to combat this, these companies use core services like Amazon EC2 and S3 but stay away from “higher level” managed services like databases and orchestration tools. The perception is that if you use AWS’ basic services and build your own management tools, you can pick up and leave Amazon more easily.

The trouble is that there is a cost to this choice. The benefit of the public cloud is not just outsourced compute and storage. It is that public cloud providers like AWS have spent the last 10 years building advanced, scalable infrastructure services—and they release hundreds of updates to these services every year. You do not have to build your own version of these services, you never have to upgrade your cloud to get these new features, you do not have to patch them or version them. They just show up.

Also, if you build your own queuing, notification, and networking tools instead of using the equivalents AWS provides, you spend the same amount of time, if not more, upfront building or re-architecting them. Then you have to manage those tools, update them, and improve them over time.

The other option is to run “multicloud”: use two or more public cloud vendors and/or PaaS providers, so that you can leave one or the other at any time. While only a small number of enterprises currently take this approach, it seems to be rising in popularity. The problem with this approach is that cloud vendors have no incentive to make the seamless transition of data between clouds possible, and many thought leaders claim the “interoperable cloud” is a dream. In reality, multicloud can be complicated and messy, and it often results in applications built to the lowest common (cloud) denominator.

Alternative paths to cloud success

Is there a way to balance vendor lock-in fears with manageability? To avoid vendor lock-in while still reducing the complexity of IT management? The answer is not about keeping one foot out of the cloud, but about fully embracing a new way of managing IT. The real challenge in migrating to any IaaS platform is not the technology itself but the governance models, cost control measures, and the processes your systems and development teams use to work together. The hard part is evolving the role of central IT from purchaser and governance body into an engine of continuous development and change.

The cloud gives IT teams an opportunity to work with infrastructure as code, which means that infrastructure can be versioned, tested, repeatable, and centrally managed. Central IT can develop an infrastructure system, create a template from it, and establish guardrails for the evolution of that template by multiple business units. Usually this takes the form of a security playbook or service catalog, where users can launch resources from pre-formulated templates in which security best practices are embedded. This follows the service-oriented IT model where central IT becomes the service provider or PaaS platform through which IaaS is ingested.
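
A minimal sketch of that service catalog pattern, assuming boto3 and an illustrative pre-approved template stored in S3: a product team launches a new environment only from a template that central IT has already vetted and versioned.

    import boto3

    cfn = boto3.client("cloudformation")

    cfn.create_stack(
        StackName="analytics-dev",
        # Central IT owns and versions this template; teams consume it as-is.
        TemplateURL="https://s3.amazonaws.com/example-catalog/approved-vpc-baseline.json",
        Parameters=[
            {"ParameterKey": "Environment", "ParameterValue": "dev"},
        ],
    )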

When you implement templatization (AWS CloudFormation, Terraform) and configuration management, you build a set of management standards that will serve you in every IaaS platform. You just have a JSON file that controls virtual resources and could theoretically be configured to build any cloud resource with any vendor. Any work you do on this front will make it easier to manage or migrate in or out of your cloud.

Why is central IT the key player in preventing vendor lock-in? It is all about defining standards that transcend the boundaries of platforms or cloud vendors. Getting the platform wrong can cost you three months of work. Getting the standards right will serve you for years to come. If you ever need to migrate out of your primary cloud provider, you will have a central entity that knows where workloads live and can intelligently recreate practices on a second platform.

The future of vendor lock-in

Vendor lock-in will never be absent from IT. IT management is too complex; it will always be easier or more cost-effective on some level for an organisation to use a vendor’s tools or resources than to build its own. Vendors are vendors; they want you to stick around. The key is not to defend your organisation by avoiding the cloud, but by rethinking the way you manage IT altogether.

In the end, intelligent cloud management allows you to maximise the benefits of the cloud without getting stuck in one vendor. When you get the people and process right, changing the technology is not so daunting.


Why IT remains unprepared for cloud management


The vast majority of enterprises plan to migrate more workloads to the cloud in 2016. But IT teams may not be prepared to maintain cloud resources, a new survey by Logicworks finds.

According to the report, nearly half of IT decision makers believe their organisation’s IT workforce is not completely prepared to address the challenges of managing their cloud resources over the next five years. As cloud adoption grows, this can have serious impacts on the long-term success of cloud in the enterprise.

Underestimating cloud management

A cloud platform like Amazon Web Services simplifies infrastructure management by providing resources that can be spun up in seconds — plus built-in tools to facilitate common maintenance tasks.

However, many organizations mistake “simplified” infrastructure maintenance for “little to no” infrastructure maintenance; in other words, they think maintaining their cloud systems will be easy. And when leadership thinks of the cloud as easy, IT teams suffer.

In fact, the survey found that 80% of IT decision makers feel that leadership underestimates the cost and effort of managing cloud systems. And because leadership underestimates cloud management, they do not effectively plan for the staffing and resources IT requires to achieve highly available, scalable cloud systems.

The reality is that a resource like Amazon EC2 is just a virtual server — you still need to manage backups, upgrades, patches, etc. You still need to monitor it, and if it goes down at 3am, your team needs to bring it back up. Platforms like Microsoft Azure and AWS have introduced tools to make these tasks easier, but they have not eliminated these tasks entirely. This is also true to some extent with “plug and play” SaaS platforms; someone still needs to manage access, configure reports, integrate the tool with existing workflows, etc.
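
As a small illustration of the kind of routine task that still has to be owned by someone, here is a minimal sketch, assuming boto3 and an illustrative “Backup: true” tag convention, of nightly EBS snapshots; the scheduling itself (cron, Lambda, etc.) is left out.

    import datetime
    import boto3

    ec2 = boto3.client("ec2")

    # Find every volume the team has tagged for backup.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Volumes"]

    # Snapshot each one; snapshot retention and cleanup are a separate job.
    for volume in volumes:
        ec2.create_snapshot(
            VolumeId=volume["VolumeId"],
            Description="Nightly backup " + datetime.date.today().isoformat(),
        )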

Increased pressure on IT

When you combine lack of cloud management planning, lack of cloud expertise, and the increasing pressure to deliver infrastructure faster and more reliably, you can see why IT teams are struggling to keep up.

These pressures sometimes cause cloud projects to falter and stagnate after the first wave of migration. The company usually gets some cost benefits from migrating, but does not get the agility benefits they expected.

The easy answer to cloud agility and cost concerns is automation and continuous delivery; in other words, use experienced cloud engineers to automate common maintenance tasks. Unfortunately, the same survey found that the majority of respondents (54%) think it is extremely difficult to find good DevOps talent; and they cited lack of expertise as a top reason why they cannot automate their cloud deployments further.

Anecdotally, the team at Logicworks has encountered dozens of mid-sized companies with similar challenges. They have one or two projects in the cloud, but have realised that cloud migration is not the hard part — cloud management is. And they do not have operational maturity on cloud platforms to transition existing processes (runbooks, incident response plans, change management processes) to the cloud. For many, supplementing their internal team with external experts is the answer.


How Amazon is disrupting a $34bn database market


When Amazon launched Aurora in 2014, it was presented as a clear challenge to giants in the $34 billion database market. Today it is Amazon’s fastest-growing product and has already surpassed the growth of Amazon Redshift, which is saying something. Customers of the service are among Amazon’s largest and loudest advocates. Since the start of 2016, roughly 7,000 databases have been migrated to AWS Aurora — and the rate of adoption has tripled since March 2016.

Why is Aurora gaining popularity? As we have come to expect from Amazon, Aurora provides enterprises with the performance and reliability of a commercial product at a fraction of the cost of Oracle or IBM. Amazon has also made consistent efforts to reduce the effort of database migration with services like AWS Database Migration Service.

First and foremost, migration to Aurora is about cost: companies want to get out of expensive database licenses. But the popularity of Aurora is also a sign of rising interest in Amazon’s fully-managed tools — overcoming fears that using native AWS tools equates to vendor lock-in.

Traditionally, vendor lock-in worries would cause a company to use only “basic” services in order to make Amazon easy to leave. But it appears that these fears are being eclipsed by a desire to reduce IT management. In other words, the value of a managed or automated approach far outweighs the potential effort of migrating out of that service for a (hypothetical) future transition.

Zynga began with this “tentative” approach to adopting AWS, but now realizes that the value of AWS is not cheap compute; it is reduced infrastructure maintenance. Zynga is well known for migrating to AWS, then deciding to migrate back to its own private cloud, then returning to AWS in 2015. This time around, Zynga decided its goal was not just to reduce bottom-line costs, but to be smarter about putting engineering resources towards applications, not infrastructure.

“As we migrated from our own private cloud to AWS in 2015, one of the main objectives was to reduce the operational burden on our engineers by embracing the many managed services AWS offered,” said Chris Broglie of Zynga on the AWS blog. “Before Aurora we would have had to either get a DBA online to manually provision, replicate, and failover to a larger instance, or try to ship a code hotfix to reduce the load on the database. Manual changes are always slower and riskier, so Aurora’s automation is a great addition to our ops toolbox.”

The adoption rate of Aurora and Redshift seem to indicate that Zynga is not the only company willing to purchase higher-level service offerings from Amazon. Anecdotally, the team at Logicworks has also seen growing interest in Aurora and other services like Redshift and RDS.

Changing your database schema has traditionally been difficult and expensive. Early adopters of the cloud usually just want to get their databases running on Amazon EC2 — choosing speed and ease of migration over long-term licensing cost savings. As cloud adoption matures, expect more companies to make a (slow) migration over to cloud-native systems. Because in the end, it is not just about licensing costs. It is about removing management burden from IT — and choosing to focus engineering talent on what really matters.


Opinion: Why there is no such thing as managed IaaS


By Jason Deck, SVP of Strategy, Logicworks

The age of managed infrastructure is coming to an end. Cloud providers like Amazon Web Services (AWS) have spent the last decade developing platforms that eliminate the manual, time-consuming maintenance of hardware, network, and storage, and enable infrastructure to be controlled via software, i.e. infrastructure as code.

But contrary to what you might expect, enterprises are still managing IaaS. Buildout and maintenance tasks in AWS can be performed via API call and can therefore be automated, but many still choose to maintain AWS manually, instance by instance. Or perhaps worse, they purchase a “portal” to orchestrate IaaS. And these are very expensive and potentially risky choices.

What is the value of infrastructure?

As the vast majority of businesses become digital (software) companies, the value of infrastructure is only its ability to support business-critical applications that change frequently. Therefore, the best possible infrastructure is reliable and quick to spin up or destroy.

In the old world, the only way to make infrastructure more reliable was to throw more people and dollars at the problem. More people to monitor things like CPU utilization and replicate data, either in-house or outsourced to a managed service provider; more dollars to buy better hardware, live backups, and so on.

In IaaS, such concerns are irrelevant. AWS monitors hardware, CPU usage, network performance, etc. AWS can be programmed to take snapshots on a regular schedule. If you use a service like AWS Aurora, Amazon’s managed database service, you get replication, upgrades, and management built-in. If you want to improve reliability or disposability, AWS does not offer “premium hardware”; instead, you must architect your environment in new ways and rely on automation to improve SLAs. In other words, you must treat AWS resources like objects that can be manipulated with code.
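
For example, treating an instance as an object manipulated with code might look like the following minimal sketch, assuming boto3 and an illustrative instance ID: a CloudWatch alarm that automatically recovers the instance when its system status check fails, instead of paging someone to nurse the box back to health.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="web-1-auto-recover",
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed_System",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=2,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        # The documented EC2 recover action for this region.
        AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
    )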

In this new world, you do not care about CPU utilization. Your metrics of success are not measured in minutes of downtime. If you architect IaaS correctly, you should never have downtime. Instead, your KPIs are things like: How many times did we push changes to our infrastructure as code base? How many infrastructure changes produced errors? How long does it take to go from cloud architectural plan to delivering an environment?

Cloud automation is what drives better availability, better cost management, better governance, and better time-to-delivery. So whether an enterprise chooses to build an automation team in-house or outsource it to a next-gen service provider, automation should be at the top of enterprises’ cloud priority lists.

Cloud automation in action

IaaS gives you the tools to control infrastructure programmatically. In other words, you can manipulate infrastructure resources with code rather than through the AWS console or by manually typing in the CLI. This fits in with the larger vision set out in Agile philosophy, particularly the art of maximizing work not done; if you can automate, you should.

What does this look like in practice with AWS? Teams that want to spend less time on manual infrastructure maintenance usually do one or all of the following:

  • They use the fully managed cloud services that AWS already provides (like AWS Aurora or AWS Redshift) as much as possible
  • They automate the buildout of infrastructure resources using a templating tool (like AWS CloudFormation)
  • They automate the install/configuration of the OS with a configuration management tool
  • They integrate infrastructure automation with their existing deployment pipeline
  • They prepare for the future of IT where serverless compute resources like AWS Lambda abstract away infrastructure orchestration entirely

The impact of this model is enormous. When you take full advantage of AWS services, you minimize engineer effort, reduce risk by automating things like backups and failover, and get built-in upgrades. When you automate infrastructure buildout and configuration, you enable rapid change, upgrading, patching, and self-healing of AWS resources without human intervention. Engineers never modify individual instances directly; instead they modify templates and scripts so that every system change is documented and can be rolled back, reducing the risk and effort of change.
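
A minimal sketch of that workflow, assuming boto3, an existing stack named "web-tier", and a revised template checked into source control: a CloudFormation change set documents exactly what will change before anything is touched, which is what makes the change auditable and reviewable.

    import boto3

    cfn = boto3.client("cloudformation")

    with open("web-tier.json") as f:
        template_body = f.read()  # the revised, version-controlled template

    change_set = cfn.create_change_set(
        StackName="web-tier",
        ChangeSetName="patch-2016-06",
        TemplateBody=template_body,
    )

    cfn.get_waiter("change_set_create_complete").wait(
        ChangeSetName=change_set["Id"]
    )

    # Review the proposed changes (the audit trail for this infrastructure change)...
    for change in cfn.describe_change_set(ChangeSetName=change_set["Id"])["Changes"]:
        print(change["ResourceChange"]["Action"], change["ResourceChange"]["LogicalResourceId"])

    # ...then apply them.
    cfn.execute_change_set(ChangeSetName=change_set["Id"])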

When you move to this model, you are building infrastructure as code, not managing infrastructure. Your engineers are now essentially “developers” of infrastructure software. This requires a new set of skills and an entirely new outlook on how engineers should spend their time.

The cost of managed infrastructure

Unfortunately, many enterprises still throw people and money at cloud availability and agility issues. They create (or buy) “orchestration portals” that tell them about instance performance and storage utilization and resource usage. They use the same security processes, i.e., spend many weeks of each deployment cycle manually testing infrastructure and keep compliance checklists in spreadsheets. They use only the most basic AWS services, perform manual upgrades and updates, and in the case of an issue, they nurse individual instances back to health. In other words, they still manage infrastructure.

What is the real cost of this model? A recently released report by Puppet found that high performing IT teams — that prioritize automation, high deployment velocity, and the “work not done” principle — spend 22% less time on unplanned work and rework than low-performing IT teams. High-performers have a change failure rate of 7.5%, compared to medium-performers with a change failure rate of 38%. Mean time to recover is 1 hour for a high-performing organization and 24 hours for a medium-performer. If you multiply mean time to recover by the average cost of downtime at your organization, the real cost of not prioritizing automation becomes unjustifiable.

What could your engineers do with 22% more time? What could they do if they were not constantly firefighting broken virtual machines — and could instead blast away the instance and rebuild a new one in minutes? They would spend more time on new projects, the products that drive real business value and revenues.

It is true that automation itself takes time and money. It also takes expertise: the kind that is hard to find. Yet infrastructure automation is the inflection point that jumpstarts an organization’s DevOps efforts. These factors make it an ideal service to outsource; it is a skills gap that enterprises are struggling to fill, and a non-disruptive place where MSPs can provide value without replacing internal DevOps efforts. A next-generation MSP that has already developed proprietary infrastructure automation software is an ideal fit; just beware of companies that sell “Managed IaaS” that is just monitoring and upgrades, because they will not help you escape from infrastructure management.

The future of infrastructure as code

We are entering a world where infrastructure is not only disposable, it is invisible. The best way to manage infrastructure is not to manage infrastructure at all, but instead to develop an automation layer composed of AWS services, 3rd party tools, and your own custom-built software. No matter how your market changes or the state of IT in five years, investing now in automation will allow you to adapt quickly.

Major cloud providers are pushing the market in the direction of management-less IT, and it is only a matter of time before the market follows. Chances are that adherents of the “management” model will linger, both internal teams and external vendors who want to patch and update machines with the same reactive, break/fix approach they used in the 90s and 00s in managed datacenters. When companies gain AWS expertise and realize how little value infrastructure management adds, IT will evolve into an infrastructure as code provider. Value creation is moving higher up the stack, and IT must follow.


Why DevOps engineer is the number one hardest tech job to fill


DevOps engineers are notoriously difficult to find. If you needed further proof of this fact, a new study by Indeed.com has revealed that DevOps engineer is the #1 hardest IT job to fill in North America, leading a list that includes software and mobile engineers.

An organisation’s inability to hire – and retain – systems engineers, build automation engineers, and other titles usually grouped under “DevOps” is a major roadblock to digital transformation efforts; in fact, the majority of organisations say the biggest roadblock to cloud migration is finding the right IT talent, not security, cost, or legacy systems.

There is certainly no easy answer, but here are several ways that organisations are attempting to reduce the impact of this talent shortage.

The power of automation

Most companies hire DevOps engineers to automate deployment for frequent or continuous deployment. In reality, this means that much of a DevOps engineer’s time is spent deploying and configuring daily builds, troubleshooting failed builds, and communicating with developers and project managers — all while the long-term work of automating deployment and configuration tasks falls by the wayside.

It is possible that the term “DevOps engineer” itself contributes to this confusion and poor prioritisation; many say there is (or should be) no such thing as a DevOps engineer and they should more properly be called by their exact function in your team, like storage engineer, deployment automation engineer, and so on.

The value of deployment automation and the progress towards some variety of “push button” deployment to test environments is obvious; a survey by Puppet found that high performing IT teams deploy thirty times more frequently than low performing teams. Infrastructure automation is often lower on the priority list but of equal importance, and involves the ability of virtual machines to scale, self-heal, and configure themselves. Anecdotally, our experience is that most organisations do the bare minimum (auto scaling), while the vast majority of infrastructure maintenance tasks are still highly manual.
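
For reference, the “bare minimum” scaling automation mentioned above can be as small as a single API call. This is a minimal sketch, assuming boto3 and an existing Auto Scaling group with the illustrative name "web-asg", of a target tracking policy that keeps average CPU near 50% without human intervention.

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="keep-cpu-at-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            # Scale out and in automatically around a 50% average CPU target.
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )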

The fact that your DevOps engineers — or if you prefer different titles: build automation engineers, Linux administrators, Puppet engineers, etc. — do not have time to automate tasks (that could save them more time in the future) is clearly a problem. Your sluggish progress on deployment automation drains resources every day. But your lack of infrastructure automation can quickly become a punishing business problem when you find that auto scaling fails, or you forgot to update a package, or your SSL cert is not automatically renewed, or your environment is not automated to deal with your cloud provider’s infrequent outages. Slow deployment pipelines are bad, but broken infrastructure is worse.

Such events cause what we will call “reactive automation”, a sudden burst of interest in infrastructure automation that quickly fades when everything goes back to normal. Unfortunately, the templates and configuration management scripts that automate infrastructure buildout and maintenance themselves must be maintained, and if no one is paying attention, another infrastructure failure is bound to happen.

The result is a team of stressed, overworked engineers that wish they could focus on the “cool stuff”, but are instead stuck in firefighting mode: exactly the opposite of what you want to happen.

“Hire more DevOps engineers”

When faced with overworked engineers, the natural answer is: let’s hire more. But of course you are already doing that. Most companies have a standing open position for DevOps engineer. Is there another answer?

A second answer is to train some of your existing systems engineers in new tools and new cultural frameworks. This certainly needs to happen, but it will take time. A third answer is outsourcing. Outsourcing can mean any number of things, but there are two flavours that best complement DevOps teams. The first is outsourcing infrastructure automation. The second is outsourcing day-to-day, boring, repetitive infrastructure maintenance tasks and around-the-clock monitoring.

Infrastructure automation is in many ways the ideal set of tasks to outsource; the line in the sand between your team’s responsibilities (the application) and the outsourced team (the infrastructure) is relatively clear for most applications, and there is often little time, initiative, or advanced experience to automate infrastructure in-house. Your in-house engineers keep doing what they are doing — managing daily builds, interfacing with developers — and the outsourced team co-manages the templates and scripts that control scalability, security, and failover. This team should also integrate this automation with your existing deployment pipeline.

This works out even better if the same team manages day-to-day patching, monitoring, alerting, log management, change management, etc., much like a traditional professional services or managed services team. These are items that distract your valuable DevOps engineers from more important tasks, and also wake them up at 3am when something goes wrong. When you outsource, you are still fulfilling the “you build it, you own it” principle, but at least you have a team telling you when things break and helping you fix them faster.

Managed service providers (MSPs) are not what they used to be — in a good way. Among its many positive effects, the cloud has forced MSPs to evolve and provide more value. Now you can use it to your advantage.

The enterprise DevOps team

As DevOps makes its way to the enterprise, the nature and definition of “DevOps team” will change. Enterprises will continue to struggle to attract talent away from big tech. You will likely see more differentiation in what DevOps means, as traditional network engineers become cloud network experts and Puppet engineers become cloud configuration management masters, leading to a complex medley of “traditional” and “cloud” skills.

Adopting DevOps involves adopting a certain amount of risk, and enterprises want to control that risk. They will rely more heavily on outsourced talent to supplement growing internal teams. This will help them achieve higher deployment velocity and automation more quickly, and put guardrails in place to prevent new DevOps teams from making costly mistakes.

DevOps engineers will always be hard to find. Great tech talent and great talent generally is hard to find. The key is knowing how to protect your business against the drought.


The future of AWS’ cloud: Infrastructure as an application


By Thomas Rectenwald, senior engineer, DevOps, Logicworks

Infrastructure as code has defined the last five years of systems engineering, and will likely define the next five. But as we became better and better at manipulating infrastructure with declarative languages, we started to look beyond JSON to find more full-fledged, dynamic programming languages to create and configure cloud resources.

In many ways, this movement is mimicking the evolution of web development as web ‘sites’ became web ‘applications’; in a few years, systems engineers will be coding infrastructure applications, not hard-coding declarative templates.

What is CloudFormation?

Amazon Web Services’ infrastructure as code (IaC) offering, CloudFormation, enables engineers to manage infrastructure through declarative templates written in JSON, a data format language. The value of this approach is tremendous. Templates can be checked into a source code management system, and linted and validated using tools and IDEs. Entire environments can be duplicated, modified, and deployed quickly to keep pace with the rapid change ever present in today’s IT world.
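
As one example of that validation step, the CloudFormation service itself can check a template’s syntax before any stack is created. A minimal sketch, assuming boto3 and an illustrative template file, might be wired into a CI job on the template repository:

    import boto3

    cfn = boto3.client("cloudformation")

    with open("environment.json") as f:
        body = f.read()

    # Raises a ValidationError if the template is not syntactically valid.
    result = cfn.validate_template(TemplateBody=body)
    print("Parameters declared:",
          [p["ParameterKey"] for p in result.get("Parameters", [])])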

CloudFormation can also kick off bootstrapping through UserData, creating a seamless bridge into the operating systems where machine images and configuration management can take over. In addition, any infrastructure change can have an audit trail and proper process built around it, automated to reduce human error and disparate configurations.

Despite the significant gains present in using CloudFormation, on its own it has some downsides. At its core, CloudFormation uses JSON to declare resources used for infrastructure build out. JSON is a data format, meant to be read by both machines and humans. The JSON is interpreted, turned into API calls and run against the infrastructure to create, change and remove various components. Amazon provides excellent documentation on the service, but for any engineer that has to manage large-scale, dynamic and complicated environments using CloudFormation alone, issues soon arise.

Hand-coding JSON is not a pleasant experience. The format does not allow for comments. Stacks can quickly become unruly as resource and line limits are hit and environments grow. Some of this can be resolved by using sub-stacks and splitting templates into smaller, more manageable chunks, but issues then arise from the lack of a global configuration, so resource IDs need to be passed and hard-coded as parameters or mappings in each stack.

Stack names and resource names are also immutable. Make a typo? You will be looking at it for a long time to come. Whereas CloudFormation does include some intrinsic functions to map and locate resources, it is by nature not dynamic and that leads to significant hurdles when attempting to deploy and maintain dynamic environments with common code.

At its heart, CloudFormation is simply declarative JSON. And that is a good thing. What is needed is not to make CloudFormation more dynamic, but rather to build tools which can dynamically generate templates and manage stacks. This is happening now to some extent, but the expectation is that this practice will be the future.

CloudFormation and HTML parallels

In some ways, the state of CloudFormation, and perhaps IaC in general today, is history repeating itself. Parallels can be seen in the long and varied history of web development that gives us insight into what the future will bring.

20 years ago, developing for the web consisted of a lot of hand-coding HTML, another declarative templating language meant to be interpreted by a machine. As web ‘sites’ evolved into web ‘applications’, this soon gave way to technologies like CGI, mod_php, JSP and others that provided a means of developing in a full-fledged programming language that could generate the underlying HTML. This gave us easy access to databases, global configurations, sessions, commenting and proper code organisation – benefits that revolutionised and accelerated the pace of the Internet.

However, even this was not enough and on top of these technologies grew the fully-fledged web frameworks seen in play today such as Rails, Django, Spring, Laravel, and countless others. Yet if you right click and view source on even the most sophisticated web application, you will see HTML at its core.

For AWS today, CloudFormation’s JSON templates are that underlying core for IaC development. However, I do not expect to see many people hand-coding JSON directly over the next few years. Hand-coding is rapidly being replaced by a variety of tools that can dynamically generate the JSON using a proper programming language. Examples include Troposphere, cfndsl, and many others.
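
As a minimal sketch, assuming the Troposphere library and placeholder values, here is the kind of dynamic generation described above: a loop and ordinary variables replace the copy-pasted blocks a hand-written template would need, and the output is still plain CloudFormation JSON.

    from troposphere import Template, Tags
    from troposphere.ec2 import Instance

    t = Template()

    # Three identical web nodes from a single definition; raw JSON would need
    # the same block pasted three times.
    for i in range(3):
        t.add_resource(Instance(
            "WebNode%d" % i,
            ImageId="ami-12345678",   # placeholder AMI
            InstanceType="t2.micro",
            Tags=Tags(Role="web", Index=str(i)),
        ))

    print(t.to_json())  # hand this JSON to CloudFormation as usual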

A quick search on GitHub will bring up many, many projects that generate CloudFormation in a variety of languages, including Ruby, Python, JavaScript, Java, and even Scala. Any of these solutions give us the bare basics we need to take IaC within AWS to the next level in much the same way as CGI, Apache modules and other technologies did for web development back in the day. It does not stop here though. The next logical step is to develop frameworks that can be used to further abstract and control the stacks created. We’re seeing the beginnings of that with such utilities as SparkleFormation, StackMaster, and Cumulus. Other tools such as Terraform take a different approach by accessing the API directly and not using CloudFormation.

While that is a wonderful IaC solution for companies managing environments outside of or in addition to AWS, having tools that generate CloudFormation as a base allows you to move between AWS-centric solutions as needed and keep the underlying work intact.

The future of IaC

CloudFormation is here to stay and is in our opinion an excellent service offered by Amazon to implement effective IaC. However, in a few years we will look back and say “remember the time we had to hand-code a ton of JSON?”

Initial tools that allow for dynamic generation of CloudFormation from other programming languages are available in full force. Stack management frameworks are starting to appear in bulk too. As the leaders of those categories rise, infrastructure as code in AWS will turn into ‘infrastructure as an application’. It will achieve the maturity and benefits of a full-fledged development environment, much as web development has evolved over the years. And whether it is HTML or JSON, having a proper language at its base will enable multiple technologies to compete and provide choices while keeping a solid underlying base.


Why more enterprises are running Microsoft applications on the AWS cloud


Microsoft revenue from Windows Server rose a remarkable 46% in 2015, even while revenue from on-premises licenses fell 2%. The source of growth? Cloud service providers like Microsoft Azure and Amazon Web Services (AWS), whose customers would rather pay pennies per hour for a license than spend hundreds of thousands of dollars on on-premises licenses.

Amazon Web Services (AWS) is ramping up efforts to make the case that you should run Windows Server, or any Microsoft application, on AWS. There have been significant advances on this front in the last five months: AWS’ announcement of managed Microsoft Active Directory in December, the new AWS Database Migration Service released in January, and a new white paper published this month, all aimed at large enterprise customers.

It appears to be succeeding: according to IDC, 50% of AWS enterprise users host Windows productivity applications on AWS. It is also interesting that Windows Server licenses on Azure rose only 7% in 2015, so Azure was likely not the main source of Windows Server’s explosive 46% revenue jump.

Here are just a few of the features of AWS’ platform that enterprises find especially appealing:

  • Pre-configured machine images (AMIs) with fully compliant Microsoft Windows Server and Microsoft SQL Server licenses included.
  • AWS Database Migration Service, mentioned above, that allows enterprises to migrate their databases with virtually no downtime. Microsoft provides tools for migrating SQL Server easily to Azure, but other DB types are a more significant effort; AWS Database Migration Service supports Oracle, Microsoft SQL, MySQL, MariaDB, and PostgreSQL.  
  • SQL Server on AWS RDS, a managed SQL Server service that provides automated backups, monitoring, metrics, patching, replication, etc. According to Hearst Corporation, running RDS SQL Server allowed them to launch quickly and refocus 8 or 9 engineers on other projects rather than managing on-premises databases. (A minimal launch sketch follows this list.)
  • AWS Schema Conversion Tool, which converts database schemas and stored procedures from one platform to another.
  • Managed Microsoft Active Directory, which allows you to configure a trust relationship between your existing on-premises AD and AD in AWS, simplifying the deployment and management of directory services. Monitoring, recovery, snapshots, and updates are managed for you. (Many enterprises may also choose to manage their own AWS AD.)
  • BYOL (Bring Your Own License) to AWS with the Microsoft License Mobility through Software Assurance program. This can include SharePoint, Exchange, SQL Server, Remote Desktop, and over a dozen other eligible Microsoft products. Can be used on Amazon EC2 or RDS instances.
  • Reference Architectures and Demos for running SharePoint, Exchange, SQL Server, etc. on AWS cloud.
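
Below is a minimal sketch, assuming boto3 and illustrative identifiers and credentials, of launching the managed SQL Server offering mentioned in the list above; once created, RDS handles backups, patching, and failover rather than an in-house DBA team.

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="example-sqlserver",
        Engine="sqlserver-se",            # SQL Server Standard Edition
        LicenseModel="license-included",  # or "bring-your-own-license" under License Mobility
        DBInstanceClass="db.m4.large",
        AllocatedStorage=200,
        MasterUsername="admin",
        MasterUserPassword="CHANGE_ME_example_only",
        MultiAZ=True,                     # synchronous standby managed by RDS
        BackupRetentionPeriod=7,          # automated backups kept for a week
    )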

According to AWS’ own report, the Database Migration Service has been used to migrate over 1,000 databases to AWS in the first quarter of 2016. This is quite an impressive figure, especially considering that it is a new service. One third of those migrations were not just moving databases, but switching database engines, further evidence that AWS Aurora and its native database services are attracting customers away from big vendor licenses with Microsoft and Oracle.

As AWS continues to develop services that facilitate large-scale migrations and help enterprises modernise applications, it further differentiates itself from other cloud vendors in depth and sophistication of services. Microsoft will have to do more to convince enterprises that they should run Windows Server and other Microsoft services on Azure, especially when AWS keeps creating native services that promise faster, easier migrations and simplified ongoing management.


Docker security: How to monitor and patch containers in the cloud


Nearly a quarter of enterprises are already using Docker, and an additional 35% plan to use it. Even sceptical IT executives are calling it the future. One of the first questions enterprises ask about containers is: what is the security model? What is the impact of containerisation on your existing infrastructure security tools and processes?

The truth is that many of your current tools and processes will have to change. Often your existing tools and processes are not “aware” of containers, so you must apply creative alternatives to meet your internal security standards. The good news is that these challenges are by no means insurmountable for companies that are eager to containerise.

Monitoring & IDS

The most important impact of Docker containers on infrastructure security is that most of your existing security tools, such as monitoring and intrusion detection, are not natively aware of sub-virtual machine components, i.e. containers. Most monitoring tools on the market are just beginning to offer visibility into transient instances in public clouds, and are far from offering functionality to monitor sub-VM entities.

In most cases, you can satisfy this requirement by installing your monitoring and IDS tools on the virtual instances that host your containers. This will mean that logs are organised by instance, not by container, task, or cluster. If IDS is required for compliance, this is currently the best way to satisfy that requirement.

Key takeaway: Consider installing monitoring and security tools on the host, not the container.

Incident forensics and response

Every security team has developed a runbook or incident response plan that outlines what actions to take in the case of an incident or attack. Integrating Docker into this response process requires a significant adjustment to existing procedures and involves educating and coordinating GRC teams, security teams, and development teams.

Traditionally, if your IDS picks up a scan with a fingerprint of a known security attack, the first step is usually to look at how traffic is flowing through an environment. Docker containers by nature force you to care less about your host and you cannot track inter-container traffic or leave a machine up to see what is in memory (there is no running memory in Docker). This could potentially make it more difficult to see the source of the alert and the potential data accessed.

The use of containers is not yet well understood by the broader infosec and auditor community, which is a potential audit and financial risk. Chances are that you will have to explain Docker to your QSA, and you will have few external parties that can help you build a well-tested, auditable Docker-based system.

That said, the least risk-averse companies are already experimenting with Docker and this knowledge will trickle down into risk-averse and compliance-focused enterprises within the next year. Logicworks has already helped PCI-compliant retailers implement Docker and enterprises are very keen to try Docker in non-production or non-compliance-driven environments.

Key takeaway: Before you implement Docker on a broad scale, talk to your GRC team about the implications of containerisation for incident response and work to develop new runbooks. Or try Docker in a non-compliance-driven or non-production workload first.

Patching

In a traditional virtualised or AWS environment, security patches are installed independently of application code. The patching process can be partially automated with configuration management tools, so if you are running VMs in AWS or elsewhere, you can update the Puppet manifest or Chef recipe and “force” that configuration to all your instances from a central hub.

A Docker image has two components: the base image and the application image. To patch a containerised system, you must update the base image and then rebuild the application image. So in the case of a vulnerability like Heartbleed, if you want to ensure that the new version of SSL is on every container, you would update the base image and recreate the containers in line with your typical deployment procedures. A sophisticated deployment automation process (which is likely already in place if you are containerised) would make this fairly simple.
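
A minimal sketch of that patch flow, assuming the Docker SDK for Python and illustrative image names ("mycompany/base", "mycompany/app"): pull the refreshed base image, rebuild the application image on top of it, and push the result so the normal deployment pipeline can roll it out.

    import docker

    client = docker.from_env()

    # 1. Pull the patched base image (e.g. after an upstream OpenSSL fix lands).
    client.images.pull("mycompany/base", tag="latest")

    # 2. Rebuild the application image; its Dockerfile starts FROM mycompany/base.
    #    pull=True forces the build to use the freshly patched base layer.
    image, build_logs = client.images.build(
        path="./app", tag="mycompany/app:patched", pull=True
    )

    # 3. Push the rebuilt image and let the usual deployment process replace
    #    the running containers.
    client.images.push("mycompany/app", tag="patched")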

One of the most promising features of Docker is the degree to which application dependencies are coupled with the application itself, offering the potential to patch the system whenever the application is updated, i.e., frequently and potentially less painfully. But somewhat counterintuitively, Docker also draws a bright line between systems and development teams: systems teams support the infrastructure and the compute clusters and patch the virtual instances; development teams support the containers. If you are trying to get to a place where your development and systems teams work closely together and responsibilities are clear, this is an attractive feature. If you are using a Managed Service Provider (like Logicworks), there is a clear delineation between internal and external teams’ responsibilities.

Key takeaway: To implement a patch, update the base image and then rebuild the application image. This requires systems and development teams to work closely together, with clear responsibilities on each side.

Almost ready for prime time

If you are eager to implement Docker and are ready to take on a certain amount of risk, then the methods described here can help you monitor and patch containerised systems. At Logicworks, this is how we manage containerised systems for enterprise clients every day.

As AWS and Azure continue to evolve their container support and more independent software vendors enter the space, expect these “canonical” Docker security methods to change rapidly. Nine months from now, or even three months from now, a tool could emerge that automates much of what is currently manual or complex in Docker security.

When enterprises are this excited about a new technology, chances are that a whole new industry will follow.


The key to maximising the advantages of cloud security


By Paul Fletcher, Security Evangelist at Alert Logic

Despite the pervasive use of the cloud to handle complex, secure workloads, many organisations question whether the cloud is natively secure. They still think that the security of a system depends on their ability to touch and control a physical device. Visibility from layer one (physical) up to layer seven (application) of the OSI Model gives us security professionals a good gut feeling.

Veteran systems administrators are challenged to both embrace the cloud as being inherently secure, and share responsibility for the ultimate security of the environment. This can be a tall order for these professionals who are used to having complete control of IT systems and security controls. However, as with most challenges in IT, properly skilled staff and good processes are the foundation to a secure framework. Leveraging a shared security responsibility model can help organisations struggling to meet IT demand while implementing security best practices on the cloud.

Cloud security advantages

The advantages of using the cloud versus on-premises are well documented. From a security standpoint, one of the biggest advantages is the ability to easily scale and deploy new cloud systems with security features already enabled (as part of a pre-set image) and placed within a specific security zone. To take advantage of this, organisations should integrate the native cloud security features built in by their provider. These include security groups for access control; tags (or labels) to organise and group assets so that security processes and technology can be matched to their sensitivity; and the Virtual Private Cloud (VPC) as a network segmentation option, so that each VPC can be managed and monitored in accordance with the sensitivity of its data.
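
As a minimal sketch, assuming boto3 and illustrative resource IDs, two of the built-in features mentioned above can be driven entirely through code: tagging assets by data sensitivity so that security processes can key off those tags, and scoping access with a security group inside the sensitive VPC.

    import boto3

    ec2 = boto3.client("ec2")

    # Tag instances so monitoring and response processes can key off sensitivity.
    ec2.create_tags(
        Resources=["i-0123456789abcdef0"],
        Tags=[{"Key": "DataClassification", "Value": "restricted"}],
    )

    # A security group in the sensitive VPC that only admits traffic from inside it.
    sg = ec2.create_security_group(
        GroupName="restricted-db-access",
        Description="Database access limited to the restricted VPC",
        VpcId="vpc-0abc1234",
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 1433,
            "ToPort": 1433,
            "IpRanges": [{"CidrIp": "10.0.0.0/16"}],  # the VPC's own CIDR
        }],
    )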

With cloud innovations growing exponentially, there are many security technology options, including encryption, anti-virus, file integrity monitoring, identity and access management, vulnerability testing, email encryption, intrusion detection, DDoS protection, anomaly detection, virtual private networks (VPNs), and network and web application firewalls, along with log collection, analysis, and correlation. Organisations also need people and processes focused on the care and feeding of these technology solutions.

Cloud security pitfalls

The same threats that apply to any IT infrastructure apply to cloud security, but the technology options to defend against them can be limited in scope. It is important to source solutions that have been designed from the ground up to integrate with the cloud infrastructure providers they are servicing. When organisations understand where their responsibilities begin and end, the integration of people, process, and technology strengthens the security posture of the whole organisation.

Maximising the advantages of cloud security

The advantages of security in the cloud come from leveraging the built-in security functionality of the cloud provider. A commitment to training and educating staff can bridge the gap, helping organisations maximise the performance of the cloud while maintaining proper security procedures.

It is key to have dedicated professionals committed to continued education on cloud infrastructure and security best practices. Finding and retaining those individuals can be challenging, which is why many organisations turn to a cloud security services provider to be their trusted advisor and subject matter expert.

Cloud security providers enable organisations to refocus their IT talent on core business initiatives rather than cloud security and infrastructure maintenance. Beyond the gains in time, cost, and efficiency, these providers offer peace of mind that every measure is being taken to help ensure ongoing security requirements are met.
