All posts by jgardner

Managing hybrid clouds: What team do IT leaders need?


As most enterprise IT leaders know, transitioning IT staff to a cloud-based service delivery model is often more challenging than transitioning the infrastructure itself.

A collaborative, cross-functional IT organisational structure is crucial to the success of any cloud infrastructure, yet the advice enterprises receive (usually some variation of “break down silos”) is not especially useful in an organisation with hundreds or even thousands of IT employees. A highly functional IT structure is even harder to achieve when the enterprise runs a mix of public, private, and on-premises environments.

The vast majority of enterprises now manage hybrid clouds, and 74% of enterprises will deploy 10% to 60% of their business applications on a public IaaS platform within three years. For IT leaders to gain visibility across all environments and to monitor, optimise, and audit all tiers of hundreds of applications, hybrid cloud environments must be managed by high-performing cross-functional teams, monitoring dashboards, and clearly defined roles and responsibilities.

Unfortunately, 59% of enterprises believe they lack the operational workflows to manage security in a cloud environment, and 79% believe they need better visibility across on-premises and cloud-based environments, according to a recent study. The question is, how? What team can adequately manage a hybrid cloud, with which processes?

Each IT employee is now expected to have a greater breadth of experience and flexibility than ever before, yet progress in implementing the latest deployment and integration best practices is often slow. It is also difficult to find staff who understand both traditional IT and cloud computing and can implement and monitor security and deployment strategies across both. This is why enterprises usually hire a managed service provider (MSP).

Unfortunately, enterprises all too often make the mistake of hiring a provider that specialises only in public cloud deployments, also called a “born in the cloud” provider. Because the enterprise already has in-house expertise in on-premises or private cloud infrastructure, it is attracted to providers that promise exclusive expertise in the public cloud.

This type of MSP is usually best equipped to deal with new applications and greenfield deployments, and as a result will frequently stay far away from an enterprise’s legacy applications and traditional IT staff. They will often perform only a cursory audit of the application, and may deploy it in a public cloud without understanding the application’s weaknesses, which tiers or features cannot be replaced by the cloud platform’s resources, or the roles of the engineers who maintain that application.

This audit and discovery work is a crucial function for enterprises. Some enterprises supplement their managed cloud service provider with a consultant, but with hundreds or thousands of legacy applications, hiring consultants to perform this work on a near-constant basis is cost-prohibitive. Overall, trusting legacy applications to a cloud-only team may result in higher costs for hosting that is not even as secure or highly available as on-premises hosting, not through any fault of the cloud platform itself but due to mismanagement.

At a time when the vast majority of enterprises will implement a combination of on-premises, private, and public cloud environments, they need a partner that understands all three. They need an MSP who can deploy greenfield public cloud environments in a relatively short span of time, but also has experience in traditional enterprise hosting in order to understand legacy applications and communicate in the same language with their not-yet-cloud-ready internal teams. Enterprises no longer need to manage both traditional hosting partners and cloud partners, but can maintain a single MSP for both private and public deployments.

This allows enterprises to maintain a single relationship, a single contract, SOW and SLA, and a single “throat to choke”. This level of organisational simplicity makes it possible to move applications between public/private clouds with the same vendor. This dramatically reduces organisational friction by reusing a team that has already integrated with the internal team. As an added benefit, traditional IT staff and cloud engineering staff both connect with a single external team of DevOps experts, enabling the further spread of a single set of best practices throughout the organisation.

A cloud MSP with traditional IT knowledge can often communicate more effectively with your internal IT teams in “translating” cloud resources, which makes it possible for the MSP to integrate with, coach, and educate the internal team about cloud best practices. They will often recommend and document the enterprise’s deployment processes and use tools like Jenkins and Puppet to allow developers and cloud engineers to work together more seamlessly. In this way, they function as DevOps implementers and internal change-makers, not mere consultants who usually must be hired multiple times because internal divisions, heavyweight processes, and poor deployment strategies are gradually reinstated after they leave. Unlike consultants, managed service providers are incentivised to educate and change internal IT staff’s processes – because it makes their lives easier.

In addition, not every application tier is immediately suitable for the public cloud. There are many legacy systems – like Oracle RAC, for example – that have no replacement in a cloud platform like AWS. There may also be business reasons why it is not advisable to move all tiers of an application, such as when significant capital has been invested in custom database or virtualisation systems. An MSP that understands both traditional IT and cloud infrastructure will not only be able to better audit and advise enterprises on this score, but may even be able to transport that physical hardware to their own data centre so that the enterprise can get all the benefits of outsourced management while maintaining colocation between their database tier and other tiers hosted on the public cloud. The cost savings and agility benefits of such a configuration can be significant.

From a technical perspective, this scenario also enables enterprises to employ a single monitoring interface that spans both environments. Often third-party security and monitoring tools like EM7 or Alert Logic are employed, each of which provides tools for both private and public cloud deployments. This monitoring interface is a crucial component of any true DevOps team, and often acts as a single source of truth when something does not go as planned. Dashboards reduce finger-pointing by making the source of a problem clear, so both the internal IT team and the MSP can focus instead on implementing a solution. They also allow internal teams to see what is beneath the covers.

While it may be attractive in the short term for large enterprises to hire cloud consultants or cloud-only MSPs, these are often stop-gap solutions. They fix current issues and support new lines of business, but require additional time and expense to manage legacy applications, educate internal IT teams, and monitor across both environments. Hybrid deployments require teams that understand where an enterprise is right now – on its way to a more agile, customer-driven model, but needing some help to get there.


Why your cloud needs more than great customer service


When start-ups and enterprises first evaluate cloud providers, they often choose an out-of-the-box solution that fits their immediate needs. Their hosting provider promises great customer service, but they end up with essentially the same stack as thousands of other customers and still need a large SysOps staff to monitor their infrastructure.

This solution may be adequate for several months or even years. But most enterprises ultimately find that a one-size-fits-all cloud solution no longer actually fits – if it ever did in the first place. Whether due to performance hiccups, rapid growth, a desire for a more automated DevOps approach, or new compliance challenges, they begin looking for other options.

The combination of a powerful infrastructure like AWS and a hands-on managed service provider is frequently the best solution.

A public cloud like AWS is a set of expertly designed tools – powerful, revolutionary, capable of building an incredible machine. AWS does not, however, tell you how to architect an environment from the resources it provides. It is an unassembled F-22 without a pilot or mechanic. As complexity increases, enterprises need someone to assist them beyond just fixing things when they break. They need someone to customise, right-size, and automate their infrastructure for their specific requirements and integrate with their internal IT teams – not just answer the phone when they call. They need someone who will take responsibility for security concerns and have the capacity to provide detailed reports to meet audit requirements.

In fact, Amazon understands that infrastructure is only half the battle for most enterprises. That is why Amazon invites Consulting Partners to help enterprises understand the full possibilities of the cloud. A managed cloud services provider allows enterprise clients to take full advantage of AWS services beyond elastic computing and VM provisioning.

These are the functions and services enterprises are usually looking for in a managed services partner:

1. Cloud migration

Most enterprises need help learning which AWS resources can best replicate or improve their bare metal infrastructure. While AWS has provided extensive documentation for its services, each application is unique and internal IT teams often do not have the time to perform extensive testing on instance size and capacity, security, and so on. The best managed service partners will offer a thorough audit and discovery process on the current environment, not a cursory “lift and drop” solution that usually results in improperly sized instances and poor performance. It may take time to test the application, beginning with the smallest possible instance and conducting extensive performance testing to ensure the solution is cost-effective while maintaining the level of performance the enterprise expects from on-premises infrastructure.
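One way to approach right-sizing after such an audit (shown here as a hedged sketch, not a prescription) is to compare each instance’s type against its observed utilisation. The Python snippet below assumes boto3 credentials are already configured and uses an illustrative 10% CPU threshold; it flags running EC2 instances whose two-week average CPU suggests a smaller instance type may be worth testing. It is a starting point for the performance testing described above, not a replacement for it.

```python
import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Look at the last 14 days of CPU utilisation for every running instance.
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=3600,          # hourly datapoints
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(d["Average"] for d in datapoints) / len(datapoints)
        # Threshold is illustrative only; pair it with real load testing.
        if avg_cpu < 10.0:
            print(f"{instance_id} ({instance['InstanceType']}): "
                  f"avg CPU {avg_cpu:.1f}% - candidate for a smaller type")
```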

A key part of cloud migration is also expertise in both cloud engineering and traditional IT. If your managed service provider was “born in the cloud,” meaning that they opened their doors five or so years ago and employ only cloud engineers, will they understand an enterprise’s legacy applications? Will they understand why a database in on-premises infrastructure needs a special blend of resources in the public cloud, or how to get higher I/O out of AWS? Are they capable of understanding when an application needs to be hosted on a private cloud? Only a partner with extensive managed private hosting experience and AWS expertise can understand where an enterprise is now and help get it where it is going.

2. Cloud management, not customer service or consulting

Obviously, managing cloud infrastructure requires a very different set of skills from managing traditional infrastructure, and maintaining an external team with cloud expertise is often more cost-effective than maintaining a staff of cloud engineers in-house (or suffering downtime due to a lack of staff experience). There are, however, vastly different levels of support; some providers offer ticket support, others offer 24/7/365 phone support, and still others integrate with your internal IT team to support code pushes, seasonal events, and the like. Enterprises are generally looking for the latter, no matter how “fanatical” the phone support promises to be.

Some enterprises get stuck with a managed service provider that only really offers consulting services. While such providers may offer some technology guidance and implementation, they hold no responsibility for the ultimate product and must be hired again if the infrastructure ever changes. Before you hire a managed service provider, understand where the responsibility ends and who is responsible for what. As we explain in detail here, engaging a consulting partner rather than a managed services partner often does not produce long-term cost savings.

3. Cloud SLAs

AWS offers a strong uptime guarantee. But for mission-critical IT applications, this guarantee may not be enough. Managed services providers that offer 100% uptime SLAs can do so because they configure a unique blend of native AWS and third-party tools to create a self-healing, auto scaling environment that never goes down. Very few providers are able to offer this, because they must constantly monitor and test the environment to meet this requirement. Smaller cloud providers are certainly unable to guarantee zero downtime.
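The mechanics behind such a self-healing environment usually start with an Auto Scaling group that spans Availability Zones and replaces any instance the load balancer marks unhealthy. The boto3 sketch below is a minimal illustration only; the AMI, subnet, security group, and load balancer names are placeholders, and a real 100% uptime design would layer scaling policies, alarms, and constant testing on top of it.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Launch configuration built from a pre-baked AMI (IDs are placeholders).
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v1",
    ImageId="ami-0123456789abcdef0",
    InstanceType="m4.large",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# Auto Scaling group spread across subnets in two Availability Zones,
# replacing any instance the load balancer reports as unhealthy.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v1",
    MinSize=2,
    MaxSize=8,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # one subnet per AZ
    LoadBalancerNames=["web-elb"],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```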

4. Automation/DevOps

DevOps is a buzzword that has been stretched to cover nearly any development framework. A true DevOps shop will encourage clients to focus on automation and integration; they will make it possible for a client to bring up environments in new regions in a matter of hours and use a configuration management tool like Puppet to maintain a single source of consistent, documented system configuration, deploy environments prescriptively, and enforce a mature software development lifecycle. An automated AWS environment is neither easy nor automatic, and it does require a significant upfront investment to bake AMIs, write custom configuration management scripts, and so on in order to deploy new environments in a matter of hours.
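As a small, hedged illustration of what “baking AMIs” involves, the snippet below uses boto3 to create an image from an instance that has already been configured (the instance ID is a placeholder). In practice this step is usually wrapped in a build tool or CI job rather than run by hand.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "Bake" an AMI from an instance that has already been configured
# (for example, by a configuration-management run).
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder source instance
    Name="app-server-2015-03-01",
    Description="Baked image for the app tier",
    NoReboot=False,                      # reboot for a consistent filesystem
)
image_id = response["ImageId"]

# Wait until the image is available before referencing it in launch
# configurations or templates.
ec2.get_waiter("image_available").wait(ImageIds=[image_id])
print(f"AMI ready: {image_id}")
```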

5. Security services

After a string of high-profile security attacks, security is the #1 concern of CIOs and CTOs in 2015. While Amazon guarantees the security of the physical infrastructure with the kind of physical security measures that few data centres can boast, the user is responsible for security “in” the cloud. Enterprises look to cloud managed service providers to minimise risk and carry a significant portion of the responsibility for infrastructure security.

AWS has a number of native resources that make enterprise-grade security possible.
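Two of those native building blocks are security groups (instance-level firewalls) and CloudTrail (an audit log of API calls). The sketch below shows how a provider might set up both with boto3; the VPC ID, CIDR block, and bucket name are placeholders, and the S3 bucket must already exist with a policy that allows CloudTrail to write to it.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# A security group that only admits HTTPS from a corporate CIDR block
# (VPC ID and CIDR are placeholders).
sg = ec2.create_security_group(
    GroupName="app-restricted",
    Description="HTTPS from the corporate network only",
    VpcId="vpc-0123456789abcdef0",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
    }],
)

# CloudTrail records API calls to an S3 bucket for later audit.
cloudtrail.create_trail(Name="org-audit-trail", S3BucketName="my-audit-logs")
cloudtrail.start_logging(Name="org-audit-trail")
```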

6. Compliance services

Beyond traditional security best practices, compliance requirements necessitate a level of monitoring, reporting, and data storage that internal IT teams are either unfamiliar with or do not know how to translate to the cloud – exposing large enterprises to the risk of multi-million dollar fines. Enterprises will often need to sign a BAA with AWS and with their managed services provider. In fact, AWS has always been on the leading edge of compliant storage solutions in the cloud.

Managed service providers that specialise in compliance, have a long history managing compliant infrastructure, and have been through multiple audits are better equipped to deal with security threats – even if the specific application does not have compliance requirements. Providers that specialise in compliance use security best practices as the default for all instances.

7. Cloud integration

Increasingly, enterprises have realised that enlisting the help of multiple service providers for their infrastructure can lead to fragmentation, poor communication, excessive contract negotiation work, and wasted resources. They need a managed service provider who will be able to move them to AWS while still hosting some legacy applications in a private cloud that is connected to their AWS deployment and united under a single SLA and SOW. This facilitates the migration over time of applications to AWS.

As the above list outlines, infrastructure built by a managed services provider and powered by AWS is much more sophisticated than customer service on top of a smaller, less frequently updated cloud. With the right managed services provider, it is possible to create an enterprise-grade, self-healing, dedicated and highly secure infrastructure on AWS that has significant advantages over other solutions.


Private vs public vs hybrid cloud: Which one to choose?


Most enterprise IT departments now manage applications across multiple environments in a dizzyingly complex overall IT architecture. They also must constantly reevaluate their unique mix of on-premises, private cloud and public cloud infrastructure to meet new business goals and determine how applications can be migrated to the public cloud in a cost-effective way.

This is no small feat. Dozens or even hundreds of applications built at different times, in different languages, and by different teams need to be evaluated for migration to the cloud, which often requires deep knowledge of the existing IT infrastructure as well as the public cloud resources that could replace these functions.

Ultimately, enterprises must determine the hosting solution that suits each application: on-premises, private cloud, public cloud, or hybrid cloud. Below we outline some basic considerations and cloud comparisons, as well as best practices for how to integrate and manage these complex deployments.

Public cloud

By now, most organisations understand the cost benefits of an IaaS provider like Amazon Web Services, including a low and predictable cost of ownership and a shift from capital expenditure to operating expenditure. This makes it possible to significantly reduce an organisation’s upfront costs, its ongoing costs of IT labour and potentially its tax liability.

The technical benefits are equally attractive: scalability, automated deployments, and greater reliability, to name a few. There are also very few technical limitations that would prevent an organisation from moving its infrastructure to AWS; almost every function a traditional resource supports in the private cloud or in a data centre can be replicated in AWS.

These application tiers are especially well suited to the public cloud:

  • Long-term storage, including tape storage, which has significantly more cost-effective solutions in AWS (Glacier and Storage Gateway’s Virtual Tape Library); a minimal lifecycle sketch follows this list
  • Data storage of any kind, especially if you are currently hosting physical media that fails often or needs to be replaced (S3 is an infinitely expandable, low-cost storage resource)
  • The web tier of an application that is bursty or highly seasonal (EC2, Auto Scaling, ELBs)
  • The web tier of an application that is mission-critical or latency-intolerant (Custom Auto Scaling groups and automated deployments with Puppet scripts)
  • Any new application for which demand is uncertain, especially microsites or other interactive properties for marketing and ad campaigns
  • Testing environments, because it is much easier to spin instances up and down for load testing.
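To make the long-term storage point above concrete, here is a hedged boto3 sketch (bucket name, prefix, and retention periods are illustrative) that applies an S3 lifecycle rule to transition objects to Glacier after 90 days and expire them after roughly seven years, which is broadly how tape-style archival is replicated in AWS.

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the "archive/" prefix to Glacier after 90 days
# and expire them after about seven years (names and periods are examples).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-enterprise-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": "archive/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }]
    },
)
```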

Enterprises must then decide whether they want to manage their public cloud infrastructure themselves or outsource it to a managed cloud services provider. A managed cloud services provider can maintain the entire cloud infrastructure (web servers, application servers, load balancing, custom failover scripts) and some may also be able to integrate with on-premises or private cloud solutions to provide a single monitoring interface.

Note that compliance requirements no longer necessitate a private cloud solution rather than a public cloud solution. AWS has been on the leading edge of compliance in the cloud for several years, and while there is lingering skepticism, the adoption of the AWS cloud by the largest and most complex healthcare and financial institutions is an indication of the degree to which AWS ensures compliance and security in the cloud. We presented at Amazon re:Invent on the architecture required for HIPAA-compliant deployments here.

Private cloud

Although there are many advantages to the public cloud, enterprises very rarely deploy 100% of their applications into the public cloud. Logistically, it is often much simpler to move from your on-premises environment to a private cloud than from on-premises to public cloud.

Private cloud environments can be configured to support any application, just as your data centre currently hosts it. Private cloud is an especially attractive option if certain features in legacy applications prevent some applications from operating well in the public cloud.

Here are some indicators that your application would be a good candidate for maintenance in a private cloud:

  • You are using Oracle RAC (shared storage) and require dedicated infrastructure for compliance. The shared storage equivalent in AWS, RDS, is not HIPAA-compliant.
  • You need high performance access to a file system, as in a media company that creates or produces large video files.
  • An application is poorly written and infrequently used, and therefore not worth the effort of migrating to the public cloud.
  • The application has very predictable usage patterns and low storage costs.
  • An application is unstable and heavily trafficked, but current IT staff is unfamiliar with the application. This may instead be a case for partial rewriting in the cloud.
  • The engineering team responsible for maintaining the application is not equipped for migrating the application in a cost-effective time frame. This may instead be a case for bringing on a managed cloud service provider.

A private cloud solution can be implemented in your on-premises data centre with a virtualisation layer such as VMware, though many mid-sized and large enterprises let a managed private cloud services provider maintain servers, storage, network, and application infrastructure.

On-premises servers

While cloud-based infrastructure has many advantages, there are some applications that would see little to no cost benefit from migrating to the cloud. This is usually the case when you have invested significant capital in on-premises infrastructure, such as high-performance databases that are specially configured to support that application.

Here are some situations where on-premises infrastructure might work best for your application:

  • The cost savings of cloud storage and compute resources do not outweigh the significant capital already invested in on-premises solutions
  • Your application already sees high performance and high availability from custom infrastructure
  • You produce large multimedia files that your in-house staff needs low-latency access to for editing purposes
  • An email platform that is high-volume, time-sensitive, and confidential. For example, some brokerage houses send very large volumes of email early each trading day.

Applications that meet these requirements are often not well-suited to the cloud. Often it would be wiser financially to maintain the infrastructure until its value has depreciated.

Hybrid cloud

Ninety percent of enterprises say they are going to pursue a hybrid cloud solution this year. As explained above, enterprise architecture is often so complex that a hybrid cloud approach – where public, private or on-premises infrastructure supports a single application – is the best fit.

Hybrid architectures are especially attractive for large organisations that want to explore the flexibility and scalability of the public cloud. An audit will not always reveal how an application will perform in the public cloud, so enterprises choose to test a single tier in the public cloud while maintaining key infrastructure on their private cloud or dedicated infrastructure.

A hybrid system is also a good solution if there is institutional hesitancy about the security of the public cloud for sensitive data (whether this is justified or not). Frankly, it is often easier to convince internal executive or IT teams to experiment with cloud solutions rather than adopt them wholesale. Maintaining veteran IT staff and legacy applications on legacy infrastructure while opening new lines of business in the cloud is a cost-effective solution that also manages institutional risk.

Finally, an important thing to understand about hybrid environments is that they are only as strong as the integrations that unite them. Performance monitoring, regular testing, and data ingress and egress procedures will reveal future areas of difficulty as well as signal when and how to further evolve the application. The team orchestrating the infrastructure is almost always more important than the specific type of cloud solution you choose.

If you want to learn more about what cloud solution is right for you, contact Logicworks for Hybrid, Private, and Public Cloud solutions. 


Opinion: Sorry, Europe: Data localisation is not the killer app for privacy


By Kenneth N. Rashbaum, Esq.

This blog post is for informational and educational purposes only. Any legal information provided in this post should not be relied upon as legal advice. It is not intended to create, and does not create, an attorney-client relationship and readers should not act upon the information presented without first seeking legal counsel.

Edward Snowden has unleashed a torrent of activity in the name of data security and privacy protection. Some of that activity has resulted in the creation of jobs, especially in the field of encryption technology (the better to foil the NSA, the theory goes) and stimulation of local economies through the construction of local data centres in Europe. Alas, Virginia, there is no magic bullet for privacy in housing data within one country because data, it has been said, wants to be free. To mix metaphors, it will seek its own level. To put it bluntly, data localisation, as housing data within one’s own country is called, is an expensive fantasy that won’t move the privacy ball very far.

The Wall Street Journal reported on February 23, 2015 that Apple has agreed to build two data centres, one in Denmark and one in Ireland, at a cost of approximately two billion dollars. Construction of data centres means creation of many jobs and a good jolt to the local economies of the places where the centres are built, which would be a very good thing for Ireland (data centre construction has been a “tent pole” for the Irish economy for some time). The data centres could also go a long way to salving bad feelings across the pond with regard to Apple’s activities that have rubbed regulators the wrong way. Indeed, the sidebar to the Wall Street Journal article is entitled “Apple’s pitch to European lawmakers drips in honey.”

Apple, then, appears to follow other web technology companies such as Google, Amazon and Microsoft in catering to European fears of US access to personal data by attempting to implement “data localisation;” that is, assuring users that their data will be stored on servers within one’s home country. Russia has proposed a bill with this requirement and similar proposals have been advanced within the European Union.

What the construction of these expensive data centres will not do, though, is preserve privacy much better than current cloud hosting providers based in the US do presently.  And it will raise complexities that will no doubt stimulate the economy of a sector that has been lagging lately: the legal profession.

Data can be forwarded, replicated, retweeted, reposted or otherwise transmitted with the click of a mouse to almost any location in the world. Social media platforms rely on the free flow of data and, as a result, encourage users to send all manner of personal (and, often, company) data across borders. Hosting data on servers within the user’s home country, then, accomplishes little, but the complexities of data localisation are Byzantine.

Whose law applies to the data hosted in the home country once it is sent beyond that country’s borders? If the home country’s law won’t apply, was the construction of a data localisation centre just a very expensive marketing device aimed at attracting European and Asian (and, perhaps, Canadian) users concerned about US governmental surveillance? Indeed, at some point, the data will end up in the US, vulnerable, if Mr. Snowden is to be believed, to NSA international surveillance just as it was before the data localisation centres were built.

When one digs down and concludes that a data localisation strategy is a false premise as a privacy safeguard, the logical follow-up question, in the face of tightening privacy restrictions in Europe and elsewhere, is: how can a user rely upon any cloud hosting service to maintain his or her privacy?

Cloud hosting providers are well aware of the trends in the EU and elsewhere, and many have taken technical and administrative steps to maximise security and privacy protection. A comprehensive review of the provider’s master services agreement or service level agreement should indicate compliance with required data security and privacy levels, and most provide addenda regarding security and privacy in the EU and elsewhere to comply with local laws and regulations. Mid-sized and larger cloud hosting services providers retain third parties to audit administrative (data protection policy and procedure), physical (locks, keys and facility surveillance) and technical safeguards for security and privacy. A prospective customer can, and should, request reports of those audits.

Security and privacy, then, can be assured to the extent reasonably practicable the old-fashioned way: by due diligence into the cloud hosting provider.


Three steps to resilient AWS deployments


Hardware fails. Versions expire. Storms happen. An ideal infrastructure is fault-tolerant, so even the failure of an entire datacenter – or Availability Zone in AWS – does not affect the availability of the application.

In traditional IT environments, engineers might duplicate mission-critical tiers to achieve resiliency. This can cost thousands or hundreds of thousands of dollars to maintain and is not even the most effective way to achieve resiliency. On an IaaS platform like Amazon Web Services, it is possible to design failover systems with lower fixed costs and zero single points of failure with a custom mix of AWS and third-party tools.

Hundreds of small activities contribute to the overall resiliency of the system, but below are the most important foundational principles and strategies.

1. Create a loosely coupled, lean system

This basic system design principle bears repeating: decouple components such that each has little or no knowledge of other components. The more loosely coupled the system is, the better it will scale.

Loose coupling isolates the components of your system and eliminates internal dependencies so that the failure of a single component is invisible to the other components. This creates a series of agnostic black boxes that do not care whether they serve data from EC2 instance A or B, thus creating a more resilient system in the case of the failure of A, B, or another related component.

Best practices:

– Deploy vanilla templates. At Logicworks, our standard practice for managed AWS hosting is to use a “vanilla template” and configure at deployment time through Puppet and configuration management. This gives us fine-grained control over instances at the time of deployment so that if, for example, we need to deploy a security update to our instance configuration, we only touch the code once in the Puppet manifest, rather than having to manually patch every instance deployed from a golden template. Eliminating your new instances’ dependency on a golden template reduces the failure risk of the system and allows instances to be spun up more quickly.

– Simple Queue Service or Simple Workflow Service. When you use a queue or buffer to relate components, the system can support spillover during load spikes by distributing requests to other components. Put SQS between layers so that each layer can scale on its own as needed, based on the length of the queue (a minimal sketch follows this list). If everything were to be lost, a new instance would pick up queued requests when your application recovers.

– Make your applications as stateless as possible. Application developers have long employed a variety of methods to store session data for users. This almost always makes the scalability of the application suffer, particularly if session state is stored in the database. If you must store state, saving it on the client reduces database load and eliminates server-side dependencies.

– Minimise interaction with the environment using CI tools, like Jenkins.

– Elastic Load Balancers. Distribute instances across multiple Availability Zones (AZs) in Auto Scaling groups. Elastic Load Balancers (ELBs) should distribute traffic among healthy instances based on frequent health checks, whose criteria you control.

– Store static assets on S3. On the web serving front, best practice is to store static assets on S3 instead of serving them from the EC2 nodes themselves. Putting CloudFront in front of S3 lets you deliver static assets so that the throughput of those assets does not go through your application. This not only decreases the likelihood that your EC2 nodes will fail, but also reduces cost by allowing you to run leaner EC2 instance types that do not have to handle content delivery load.
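As a minimal sketch of the queue-based decoupling described in the SQS item above, the snippet below shows a producer and a worker sharing an SQS queue via boto3. The queue name and message body are placeholders, and in a real deployment the worker loop would run on instances in their own Auto Scaling group, scaled on queue depth.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="image-resize-jobs")["QueueUrl"]


def process(body):
    """Placeholder for whatever work a queue consumer actually does."""
    print("processing", body)


# The web tier enqueues work instead of calling the worker tier directly.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"image_id": "12345"}')

# A worker (in its own Auto Scaling group) drains the queue. Scaling that
# group on the ApproximateNumberOfMessagesVisible metric lets the tier
# absorb spikes without the web tier knowing or caring.
while True:
    messages = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
    ).get("Messages", [])
    if not messages:
        break
    for message in messages:
        process(message["Body"])
        sqs.delete_message(
            QueueUrl=queue_url,
            ReceiptHandle=message["ReceiptHandle"],
        )
```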

2. Automate your infrastructure

Human intervention is itself a single point of failure. To eliminate this, we create a self-healing, auto scaling infrastructure that dynamically creates and destroys instances and gives them the appropriate roles and resources with custom scripts. This often requires a significant upfront engineering investment.

However, automating your environment before build significantly cuts development and maintenance costs later. An environment that is fully optimised for automation can mean the difference between hours and weeks to deploy instances in new regions or create development environments.

Best practices:

– The infrastructure in action. If any instance fails, it is removed from the Auto Scaling group and another instance is spun up to replace it.

  • A CloudWatch alarm triggers the replacement; the new instance is spun up from an AMI stored in S3, which is copied to the volume that is about to be brought up.
  • The CloudFormation template allows us to automatically set up a VPC, a NAT gateway, and basic security, and to create the tiers of the application and the relationships between them. The goal of the template is to minimally configure the tiers and then get them connected to the Puppet master. The template can then be held in a repository, from which it can be checked out as needed, by version (or branch), making it reproducible and easily deployable as new instances when needed – i.e., when existing applications fail or experience degraded performance (a launch sketch follows this list).
  • This minimal configuration lets the tiers be configured by Puppet, a fully expressive language that allows close control of the machine. Writing the Puppet manifests and making sure the Puppet master knows the role of each instance it spins up is one of the more time-consuming and customised pieces of work a managed service provider can architect.

– Simple failover with RDS. RDS offers a simple option for Multi-AZ failover during disaster recovery. It also attaches the SQL Server instance to Elastic Block Store storage with provisioned IOPS for higher performance.
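To illustrate the launch step referenced in the CloudFormation item above, here is a hedged boto3 sketch that creates a stack from a template file held in version control and waits for it to finish. The template file name, stack name, and parameters are placeholders; the actual template contents (VPC, subnets, tiers) live in the repository.

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# Launch a stack from a template kept in version control; the template
# file name and parameter names here are illustrative.
with open("vpc-three-tier.template") as template:
    cloudformation.create_stack(
        StackName="staging-env",
        TemplateBody=template.read(),
        Parameters=[
            {"ParameterKey": "Environment", "ParameterValue": "staging"},
            {"ParameterKey": "KeyName", "ParameterValue": "ops-keypair"},
        ],
        Capabilities=["CAPABILITY_IAM"],  # if the template creates IAM roles
    )

# Block until the stack (VPC, subnets, security groups, tiers) is up.
cloudformation.get_waiter("stack_create_complete").wait(StackName="staging-env")
print("Stack ready; instances can now register with the Puppet master.")
```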

3. Break and destroy

If you know that things will fail, you can build mechanisms to ensure your system persists no matter what happens. In order to create a resilient application, cloud engineers must anticipate what could possibly fail or be destroyed and eliminate those weaknesses.

This principle is so crucial to the creation of resilient deployments that Netflix – true innovators in resiliency testing – has created an entire squadron of Chaos Engineers “entirely focused on controlled failure injection.” Implementing best practices and then constantly monitoring and updating your system is only the first step to creating a fail-proof environment.

Best practices:

– Performance testing. In software engineering as in IaaS, performance testing is often the last and most frequently ignored phase of testing. Subjecting your database or web tier to stress or performance tests from the very beginning of the design phase – and not just from a single location inside a firewall – will allow you to measure how your system will perform in the real world.

– Unleash the Simian Army. If you survive a Simian Army attack on your production environment with zero downtime or latency, it is proof that your system is truly resilient. Netflix’s open-source suite of chaotic destroyers is on GitHub. Induced failures prevent future failures.
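Short of adopting the full Simian Army, a team can start with a much smaller failure-injection exercise. The hedged sketch below uses boto3 to terminate one in-service instance in an Auto Scaling group and relies on the group to replace it; the group name is a placeholder, and this should only ever be pointed at a staging environment until the team is confident in its recovery behaviour.

```python
import random
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Pick one healthy instance from a non-production Auto Scaling group and
# terminate it; the group should replace it automatically.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["web-asg-staging"]
)["AutoScalingGroups"][0]

healthy = [i for i in group["Instances"] if i["LifecycleState"] == "InService"]
if not healthy:
    raise SystemExit("No InService instances to terminate.")

victim = random.choice(healthy)
autoscaling.terminate_instance_in_auto_scaling_group(
    InstanceId=victim["InstanceId"],
    ShouldDecrementDesiredCapacity=False,  # force a replacement launch
)
print(f"Terminated {victim['InstanceId']}; watch for the replacement.")
```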

Unfortunately, deploying resilient infrastructure is not just a set of to-dos. It requires a constant focus throughout the AWS deployment on optimising for automatic failover and on the precise configuration of various native and third-party tools.


Healthcare data security: Is cloud encryption alone enough?

By Kenneth N. Rashbaum, Esq., and Liberty McAteer, Esq.

This blog post is for informational and educational purposes only. Any legal information provided in this post should not be relied upon as legal advice. It is not intended to create, and does not create, an attorney-client relationship and readers should not act upon the information presented without first seeking legal counsel.

What if the data of 80 million Anthem subscribers were encrypted at rest? And access required two-factor authentication? Would the security breach still have occurred? These lines in the new cyber-security “anthem” are being sung with gusto by those following the bouncing cursor of a breach that may be larger than all healthcare security breaches of the last ten years combined. The questions need to be asked but, like many other things in information security, the answers are not always obvious, though sometimes they do follow simple basic information management common sense.

True, investigating a breach, especially one of this size, attracts attention that makes the Super Bowl and Academy Awards look like Saturday morning cartoons. The analysis is always retrospective, Monday-morning quarterbacking, and it’s hard not to come up with some weakness that, if addressed, maybe, possibly, perhaps could have prevented the breach. Here, most commentators, especially those in the mainstream press, have focused on data encryption at rest as the panacea that would have preserved the sensitive information of the millions of Anthem subscribers. Encrypted cloud storage is part of the answer, but not the whole answer, because attackers who can circumvent authentication protocols can get around encryption (and, as Edward Snowden stated, encryption often comes with back doors).

One reason why encryption alone isn’t a complete defense against a data security breach is that, as Professor Steven M. Bellovin of Columbia University wrote in an Ars Technica article:

In a case like the Anthem breach, the really sensitive databases are always in use. This means that they’re effectively decrypted: the database management systems (DBMS) are operating on cleartext, which means that the decryption key is present in RAM somewhere. It may be in the OS, it may be in the DBMS, or it may even be in the application itself (though that’s less likely if a large relational database is in use, which it probably is). (Emphasis added.)

This means that someone with access to a computer can access the database decryption key, or potentially even unencrypted database contents, from the RAM, or ‘working memory,’ of the computer. As a result, the robustness of the database encryption scheme becomes nearly irrelevant and would likely not have posed a substantial barrier to someone with the know-how to circumvent authentication protocols in the first place.

So, the first question that must be asked is how robust were the authentication protocols at Anthem? A combination of strong, perhaps multifactor, authentication protocols and database management system controls, plus encryption at rest, could have reduced the chances of a successful breach. It’s important, from a liability perspective, to note that neither HIPAA compliance nor other federal information security requirements require perfection. These regulations are not rules of strict liability. The metric is “reasonable steps,” though, of course, that is often in the eyes of the beholder with the benefit of hindsight.

And there are “reasonable steps” that can be taken to deter all but the most sophisticated hackers. One may be to store sensitive information with a cloud hosting provider who encrypts at rest and requires multifactor authentication. However, many healthcare plans and providers are skeptical due, among other things, to a perceived loss of control over the data in the healthcare cloud and, thereby, the ability to oversee data security. This is one reason, as Professor Bellovin notes, that it is appropriate for cloud hosting services to use robust database encryption, as you no longer control authentication protocols to your computer systems because “you don’t control the machine room and you don’t control the hypervisor (a program that allows multiple operating systems to share a single system or hardware processor).” On the other hand, cloud hosting provider systems administrators are often more experienced at securing their systems than most healthcare plan and provider IT personnel or, when they are large enough to have them, information security departments (HIPAA-compliant hosting requires the appointment of Security Officers, but they often are not sufficiently experienced to harden the OS and DBMS, let alone encrypt at rest).

The New York Times reported on February 6, 2015 that healthcare information is increasingly at risk of a data security breach because medical records, with their rich set of personal identifiers including Social Security Numbers and medical record numbers that can be used to obtain pharmaceuticals and even medical care for undocumented aliens, are of greater value on the black market than credit card numbers alone, as those accounts can be cancelled. The Times also noted that “health organizations are likely to be vulnerable targets because they are slower to adopt measures like keeping personal information in separate databases that can be closed off in the event of an attack” (subscription required).

As attackers get more and more brazen and sophisticated, especially in light of the recent series of successful attacks, healthcare organizations will look for means to better secure information, and those means will comprise more than just encryption. They will include hardened authentication and DBMS protocols as well and, if the organization cannot manage these controls itself, hosting of data in a healthcare cloud with reputable managed cloud hosting providers.


DevOps automation: Financial implications and benefits with AWS


Automation applied to efficient operations can lead to a gain in efficiency that directly translates to the bottom line of a business. This article is about how DevOps automation in an Amazon Web Services (AWS) environment has tangible financial implications and benefits.

It is no surprise that cloud computing brings with it monetary advantages, which are realised when an organisation trades fixed capital expenditure (CAPEX) for variable operational expenditure (OPEX) on a pay-as-you-go usage model (learn more about Capex vs. Opex). In AWS environments, savings are further realised because there are no wasted resources as a result of the auto scaling capabilities inherent in AWS. As a result of automation, the gap between predicted demand and actual usage is minimised, something that is rarely possible in traditional, on-premises technology deployments.

Moreover, automation allows the continuous deployment of new infrastructure to be accomplished within a matter of minutes, making a quick time to market possible without the significant collateral damage of failed deployments. These benefits compound as more of the environment is automated. DevOps solutions within the AWS service offerings, such as CloudFormation and OpsWorks, make this automation possible.

AWS CloudFormation automates the provisioning and management of AWS resources using pre-built or configurable templates. These templates are text-based, with configurable parameters that can be used to specify the template version, map the region where the resources need to be deployed, and define the resources and security groups that need to be provisioned automatically.

AWS OpsWorks makes it possible to automatically spin up new instances of AWS resources, as and when needed, and also change configuration settings based on system event triggers. Manual operations to perform some common activities – e.g. installing new applications, copying data, configuring ports, setting up firewalls, patching, DNS registrations, device mounting, starting and stopping services and rebooting – can all be automated using the AWS DevOps OpsWorks product, or by using additional configuration management frameworks such as Chef or Puppet.

In addition to these DevOps solutions, AWS also provides application management tools that can automate code deployment (AWS CodeDeploy), source control (AWS CodeCommit) and continuous deployment (AWS CodePipeline). In addition to making it possible to rapidly release new features, AWS CodeDeploy can be used to automate deployment, which reduces manual, error-prone operations that could be costly to recover from.
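As one hedged example of what automated deployment looks like from a pipeline, the boto3 sketch below asks CodeDeploy to roll out a revision that a build job has already uploaded to S3. The application, deployment group, bucket, and key names are placeholders, and both the application and its deployment group must already exist.

```python
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")

# Deploy an application revision that a build job has already uploaded to S3
# (application, deployment group, bucket, and key names are placeholders).
deployment = codedeploy.create_deployment(
    applicationName="billing-service",
    deploymentGroupName="billing-service-prod",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "release-artifacts",
            "key": "billing-service-1.4.2.zip",
            "bundleType": "zip",
        },
    },
    description="Automated release from the CI pipeline",
)
print("Deployment started:", deployment["deploymentId"])
```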

AWS CodeCommit can be used to host source control repositories and manage source code versions, ensuring that the correct version is deployed, thereby reducing the need to roll back, which can be expensive in terms of time lost, missed deadlines, and missed revenue opportunities. AWS CodePipeline makes infrastructure and application releases seamless and automated, reducing the operational costs of checking out code, building the code, deploying the application to a test and staging environment, and releasing it to production.

The automation and auto scaling of infrastructure and applications in AWS environments allows a business to benefit from both reduced time and cost. AWS automation in Amazon’s cloud computing service offerings not only makes the business operations more efficient, but also magnifies the efficiencies gained, which will have a direct impact on the bottom line – the financial posture of the business.


How DevOps can improve reliability when deploying with AWS


Reliability, in the cloud technology era, can be defined as the likelihood that a system will provide uninterrupted fault-free services, as it is expected to, within the constraints of the environment in which it operates. According to the Information Technology Laboratories (ITL) bulletin published by the National Institute of Standards and Technology (NIST) on cloud computing, reliability in cloud environments is composed of the following:

  • The hardware and software services offered by the cloud service provider
  • The provider’s and consumers’ personnel
  • The connectivity to the services

In other words, reliability is dependent primarily on connectivity, availability of the services when needed, and the personnel managing and using the system. The focus of this article is how DevOps can improve reliability when deploying in the cloud, particularly in Amazon Web Services (AWS) environments.

DevOps can, in fact, help improve the reliability of a system or software platform within the AWS cloud environment. For connectivity, AWS offers the Route 53 domain name system routing service, the Direct Connect service for dedicated connections, and Elastic IP addresses. For assurance of availability, AWS offers Multi Availability Zone configurations, Elastic Load Balancing, and secure backup storage with Amazon S3.

DevOps can be used to configure, deploy and manage these solutions, and the personnel-related reliability issues are practically non-existent because of the automation brought about by AWS DevOps offerings such as OpsWorks.

OpsWorks makes it possible to maintain the relationship between deployed resources persistently, meaning that an IP address that is elastically allocated to a resource (for example, an EC2 instance) is still maintained when it is brought back online, even after a period of inactivity. Not only does OpsWorks allow an organization to configure the instance itself, but it also permits the configuration of the software on-demand at any point in the lifecycle of the software itself, taking advantage of built-in and custom recipes.

Deployment of the application and the infrastructure is seamless with OpsWorks, as both can be configured and pointed at a repository from which the source is retrieved, automatically built based on the configuration templates, and deployed with minimal to no human interaction.
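A minimal, hedged sketch of triggering such a deployment with boto3 follows; the stack and app IDs are placeholders for resources that already exist in OpsWorks, which then pulls the configured repository and runs the deploy lifecycle event on each instance.

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Trigger a deploy of an app that is already defined on an OpsWorks stack;
# the stack and app IDs below are placeholders.
result = opsworks.create_deployment(
    StackId="11111111-2222-3333-4444-555555555555",
    AppId="66666666-7777-8888-9999-000000000000",
    Command={"Name": "deploy"},
    Comment="Automated deployment from the release pipeline",
)
print("OpsWorks deployment started:", result["DeploymentId"])
```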

Additionally, the auto-healing and automatic scaling features of AWS are perhaps what gives AWS deployments the greatest reliability, since when one instance fails, OpsWorks can automatically replace the failed instance with a new one and/or scale up or scale down as demand for the resource fluctuates. This ensures that there is uninterrupted and failure-free availability of services offered to connected consumers.

So, with a DevOps approach to deploying cloud apps in an AWS environment, it may very well be that while the shift to cloud computing may test the reliability of the technology, in reality it is not something that one needs to be overly concerned about. Try the DevOps approach to deployment and see for yourself how reliable it can be.


The importance of DevOps when deploying with AWS

The old adage that ‘time is money’ applies today just as it did in the past. In contemporary computing environments, business agility to address customer needs directly translates to the success of your company. This is where the emerging software development paradigm, “DevOps”, which builds on several principles of agile development, lean manufacturing, Kaizen and other continuous improvement processes, can be invaluable.

Companies that operate in the cloud are already familiar with the need for speed in their business operations. When these operations take advantage of DevOps, the combination can be extremely powerful in efficiently and effectively using time to maximize profits.

The importance of taking a DevOps approach to deployment, specifically within the context of an Amazon Web Services (AWS) environment, rests above all on business agility. There are many other benefits of leveraging DevOps for AWS, such as high availability, improved scalability, reliability, security and compliance.

AWS comes bundled with a few offerings as part of its solutions to deploy and manage applications and infrastructure as code, with an inherent DevOps bent. Primarily, these include CloudFormation and OpsWorks:

  • CloudFormation makes it possible to create AWS Resource templates that can be spun into working instances as needed.
  • OpsWorks is a full-fledged DevOps Application Management Service that makes it possible to automate the deployment of applications from source code repositories to a production environment.

Using CloudFormation, an organization can customize the settings of its applications and infrastructure, and blueprint its business needs as an AWS resource template. This can then be automatically instantiated based on the organization’s particular operating requirements. This significantly reduces the time needed for application onboarding (also known as standup time or load time). It also increases the portability of the applications to be deployed in different environments and/or geographic regions.

Using OpsWorks, an organization can group Amazon EC2 instances and related resources into what OpsWorks calls a stack. It is best to set up stacks for both pre-production (staging) and production environments. The CloudFormation template and other related resources can then be configured, using pre-built Chef recipes or custom JSON, as layers on top of the stack. All an organization then has to do is specify the location of its source code repository and define any additional configuration requirements, from which OpsWorks builds the application’s deployable artifacts. These artifacts are then deployed automatically to the production environment without any human interaction. This approach shortens the release cycle and makes it possible to deploy changes to code dynamically and frequently, in some cases literally within hours.
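A hedged boto3 sketch of that setup might look like the following; the IAM role ARNs, repository URL, and names are placeholders, and a real implementation would add separate staging and production stacks, instances for each layer, and custom recipes.

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Create a stack, a custom layer, and an app pointed at a Git repository.
# Role ARNs, the repository URL, and names are placeholders.
stack = opsworks.create_stack(
    Name="web-staging",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
)

layer = opsworks.create_layer(
    StackId=stack["StackId"],
    Type="custom",
    Name="app-servers",
    Shortname="app",
)

app = opsworks.create_app(
    StackId=stack["StackId"],
    Name="storefront",
    Type="other",
    AppSource={"Type": "git", "Url": "https://example.com/acme/storefront.git"},
)
print("Created:", stack["StackId"], layer["LayerId"], app["AppId"])
```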

Moreover, the likelihood of errors introduced by human interaction is practically non-existent in the deployment process itself, as long as the blueprints and configurations are tested and validated in staging stacks.

A DevOps approach to deployment brings with it several benefits, the most important of which is business agility. Companies today cannot afford not to be nimble in addressing industry trends and incorporating customer needs. Business agility is crucial for the continued success of companies, and a DevOps approach to deploying code and infrastructure as code into production is pivotal. After all, time is money.


Managed AWS: Fusing managed services expertise and AWS power


Those who implement cloud computing in general, and AWS in particular, are often confused by the role that managed services providers play.  Cost and simplicity are driving the movement to leverage AWS through managed services providers as enterprises look for more ways to consume AWS services.  These services are easier to mix and match with existing applications, and provide a better and more effective way to leverage AWS.

Managed services providers can place a well-defined layer of infrastructure, management, and governance between the cloud provider, in this case, AWS, and those who consume AWS as a managed service.  Managed service providers get enterprises on-boarded quickly to AWS, and avoid initial issues that many enterprises encounter when going directly to AWS.

What’s more, managed services providers offer another layer of support, including guidance around on-boarding applications and other best practices you should employ. Proactive monitoring of AWS services, including working around outages and other infrastructure issues, allows you to maintain an uptime record far better than if you leveraged AWS directly.

The management of costs is another benefit. You can monitor the utilization of resources, model the cost of leveraging these resources over time, or even monitor cost trends and predict AWS costs into the future for budgeting and cost forecasting.

Managed services providers typically offer security and governance capabilities, which can augment the governance and security capabilities of AWS. This means you can set policies and implement security that spans your AWS and non-AWS applications.

The value of leveraging managed services providers as a path to AWS comes down to a few key notions:

First, it gives you the ability to remove yourself from having to deal with AWS directly.  Managed services providers can quickly set up your environment, and manage that environment on your behalf.  This means quicker time-to-value, and more business agility as well.

Second, you have the ability to monitor costs more proactively.  There are no big billing surprises at the end of the month, and you have the ability to do better cloud expenditure planning.

Finally, AWS is only part of your infrastructure.  Typically, you need to manage services inside and outside of AWS.  Managed services providers allow you to manage both traditional applications, as well as those hosted in AWS and other public cloud providers, using the same management, security, and governance layers.

As time moves forward, I suspect that more enterprises will turn to managed services providers as a way of moving to AWS in a managed AWS capacity.  The lower risk and lower costs are just too compelling.
