All posts by jgardner

Analysing the next generation of machine learning tools for financial services

“Machine learning is so tantalizing for most every day developers and scientists. Still, there are a lot of constraints for builders….How do we turn machine learning from a capability of the few, into something that many more people can take advantage of?” – Andy Jassy, Keynote from AWS re:Invent 2017

The fintech industry has been hyped about the potential of machine learning technology for years. Despite all the noise, it’s still very early for most companies. Expert machine learning practitioners are rare, and even if you manage to find one, it usually takes more than a year to launch a machine learning app in production.

But all that’s set to change in 2018. First and most importantly, SaaS-based machine learning platforms are maturing and ready for use by fintech companies. Equally exciting are the tools made available by Amazon Web Services (AWS) — the platform most FinTech companies are already running on — to make the process of building your own machine learning algorithms much easier.

SaaS machine learning platforms for fintech

Creating a machine learning model isn’t easy. First you have to get your data in one place, then choose an algorithm, train your model, tune your model, deploy the model, and fine-tune it over time. Given the pace of change in the industry, algorithms need to be tuned constantly. But data analysis power is not enough. The tougher job is understanding how to communicate the insights of machine learning to consumers.

Given all of these challenges, finance companies usually begin by searching for a SaaS-based machine learning platform that solves an existing challenge, rather than building their own tool. For finance companies that ingest large amounts of financial data, machine learning means using data from thousands of consumers to pinpoint investment opportunities, uncover fraud, or underwrite a loan.

Here are some of the popular SaaS machine learning apps and APIs for finance:

User logins and facial recognition: User login is changing, and usernames and passwords for accessing your bank account might not be around forever. Technology for facial recognition and biometrics is finally reaching the mainstream: Facebook's facial recognition finds photos you appear in but haven't been tagged in; facial recognition cameras have been installed in apartment complexes in China to provide keyless entry; and facial scanning pilot programs are currently in use in six American airports. In late 2017, Amazon released AWS DeepLens, “the world’s first deep learning enabled video camera”, which will likely spur further innovation in facial recognition.

  • Kairos – a “human analytics” platform for face detection, identification, emotion detection, and more. It’s already used by companies such as Carnival, PepsiCo, and IPG.
  • Luxand FaceSDK – a system that detects faces and facial features, used for building authentication, video search, and even augmented reality. Used by large enterprises such as Universal, Samsung, and Ford.
  • IBM Watson Visual Recognition API – an API that allows you to tag and classify visual content, including faces.

Portfolio management: Companies like Betterment, Mint and others have proven that millennial customers don’t need to speak with a human advisor in order to feel comfortable investing. Instead, they trust algorithms that change their investments according to market changes. These complex, machine-learning led services are taking significant market share from more traditional advisory channels.

  • ai – a platform used by private wealth managers and institutions to provide clients with a digital experience to track investments, plus automated recommendations. Also provides analytics to the wealth managers across their client base.
  • BlackRock Aladdin Platform – an end-to-end investment platform that combines risk analytics with portfolio management and trading tools.
  • Clinc – a conversational AI platform for personal banking. Clinc can provide wealth managers’ clients with personalized insight into spending patterns, notify customers of unusual transactions, and recommend new financial products.

If you’re interested in learning how to build a machine learning portfolio management platform in-house, read this fascinating article about Man Group, which built its own AI tool and even has its own Institute at Oxford to experiment with different AI-built trading systems.

Fraud detection: According to the Association of Certified Fraud Examiners, the money lost by businesses to fraud is over $3.5 trillion every year. Machine learning-based platforms help warn companies of potential fraudsters or phishing attacks in real time.

  • Kount – A platform that allows you to identify fraud in real time. Kount AI Services combines their core platform with custom machine learning rules developed by their data science team.
  • IBM Trusteer – IBM’s Pinpoint Detect is a cloud-based platform that correlates a wide range of fraud indicators to detect phishing attacks, malware, and advanced evasion methods. It also learns each customer’s behavior across multiple sessions to help identify when fraudsters assume that customer’s identity.

Still want to build your own? Machine learning on AWS

Finance companies that want to build proprietary machine learning algorithms will not be satisfied with a one-size-fits-all SaaS tool. If you want to build your own machine learning app, AWS can significantly reduce the amount of time it takes to train, tune, and deploy your model.

AWS has always been at the forefront of machine learning; think of Amazon’s recommendation engine that displays products that customers like you have purchased, or Amazon Echo, the popular voice-controlled smart home hub. They’ve released a series of machine learning tools over the past 3 years for their AWS customers, including the technology behind Echo’s Alexa.

At re:Invent 2017, Amazon released a service that packages together many of their previously-announced machine learning capabilities into an easy-to-use, fully-managed service: Amazon SageMaker.

SageMaker is designed to empower any developer to use machine learning, making it easy to build and train models and deploy them to production. It automates the time-consuming parts of training and includes built-in machine learning algorithms so you can get up and running quickly. Essentially, it’s one-click machine learning for developers: you provide the data set, and it gives you useful outputs. This is a big deal for smaller companies that want to build machine learning applications but lack a fleet of data scientists. Granted, developers still have to understand what they’re doing and apply the model in a way that is useful to their customers.
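To make that concrete, here is a minimal sketch of that workflow using the SageMaker Python SDK (v2) and the built-in linear-learner algorithm, as you might for a simple fraud-classification model. The S3 paths, IAM role ARN, and hyperparameters are placeholders, not recommendations:

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
region = session.boto_region_name
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

# Use one of SageMaker's built-in algorithms (linear-learner) instead of writing your own.
container = image_uris.retrieve("linear-learner", region)

estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.m5.large",
    output_path="s3://example-bucket/fraud-model/output/",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(predictor_type="binary_classifier", mini_batch_size=100)

# Train on labeled transactions stored as CSV in S3, then deploy a real-time endpoint.
train_data = TrainingInput("s3://example-bucket/fraud-data/train/", content_type="text/csv")
estimator.fit({"train": train_data})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```

The heavy lifting (provisioning training instances, packaging the model, standing up the endpoint) happens inside the `fit` and `deploy` calls, which is exactly the "time-consuming training techniques" the service is meant to absorb.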

Machine learning will continue to be a huge force in finance in 2018. As the market matures, expect more SaaS products and more platforms like Amazon SageMaker that ease adoption of machine learning.

The post The Next Generation of Machine Learning Tools for Financial Services appeared first on Logicworks.

Why financial services companies love Docker containers

Tech-savvy banks were among the first and most enthusiastic supporters of Docker containers.

Goldman Sachs invested $95 million in Docker in 2015. Bank of America has its enormous 17,500-person development team running thousands of containers. Top fintech companies like Coinbase also run Docker containers on AWS cloud. Nearly a quarter of enterprises are already using Docker and an additional 35% plan to use it.

It may seem unusual that one of the most risk-averse and highly regulated industries should invest in such a new technology. But for now, it appears that the potential benefits far outweigh the risks.

Why containers?

Containers allow you to describe and deploy the template of a system in seconds, with all infrastructure-as-code, libraries, configs, and internal dependencies in a single package. This means your Docker image can be deployed on virtually any system; an application in a container running in an AWS-based testing environment will run exactly the same in a production environment on a private cloud.

In a market that is becoming increasingly skittish about cloud vendor lock-in, containers have removed one more hurdle to moving workloads across AWS, VMware, Cisco, etc. A survey of 745 IT professionals found that the top reason IT organizations adopt Docker containers is to build a hybrid cloud.

In practice, teams are usually not moving containers from cloud to cloud or OS to OS, but rather benefiting from the fact that developers have a common operating platform across multiple infrastructure platforms. Rather than moving the same container from VMware to AWS, they benefit from being able to simplify and unite process and procedures across multiple teams and applications. You can imagine that financial services companies that maintain bare metal infrastructure, VMware, and multiple public clouds benefit from utilising the container as a common platform.

Containers also are easier to automate, potentially reducing maintenance overhead. Once OS and package updates are automated with a service like CoreOS, a container becomes a maintenance-free, disposable “compute box” for developers to easily provision and run code. Financial services companies can leverage their existing hardware, gaining the agility and flexibility of disposable infrastructure without a full-scale migration to public cloud.

On large teams, these efficiencies — magnified across hundreds or thousands of engineers — can have a huge impact on the overall speed of technological innovation.

The big challenges: Security and compliance

One of the first questions enterprises ask about containers is: What is the security model? What is the fallout from containerization on your existing infrastructure security tools and processes?

The truth is that many of your current tools and processes will have to change. Often your existing tools and processes are not “aware” of containers, so you must apply creative alternatives to meet your internal security standards. This is why Bank of America only runs containers in testing environments. The good news is that these challenges are by no means insurmountable for companies that are eager to containerize. The International Securities Exchange, for example, runs two billion transactions a day in containers running CoreOS.

Here are just a few examples of the types of changes you’d have to make:

Monitoring: The most important impact of Docker containers on infrastructure security is that most of your existing security tools — monitoring, intrusion detection, etc. — are not natively aware of sub-virtual machine components, i.e. containers. Most monitoring tools on the market are just beginning to have a view of transient instances in public clouds, but are far behind offering functionality to monitor sub-VM entities. In most cases, you can satisfy this requirement by installing your monitoring and IDS tools on the virtual instances that host your containers. This will mean that logs are organized by instance, not by container, task, or cluster. If IDS is required for compliance, this is currently the best way to satisfy that requirement.

Incident response: Traditionally, if your IDS picks up a scan with the fingerprint of a known attack, the first step is usually to look at how traffic is flowing through the environment. Containers, by design, make you care less about any individual host, and you cannot easily track inter-container traffic or leave a machine up to inspect what is in memory (a container’s memory is gone once the container is destroyed). This can make it more difficult to identify the source of the alert and the data that was potentially accessed.

The use of containers is not yet well understood by the broader infosec and auditor community, which is a potential audit and financial risk. Chances are that you will have to explain Docker to your QSA — and you will find few external parties that can help you build a well-tested, auditable Docker-based system. Before you implement Docker on a broad scale, talk to your GRC team about the implications of containerization for incident response and work to develop new runbooks. Alternatively, you can try Docker in a non-compliance-driven or non-production workload first.

Patching: In a traditional virtualized or public cloud environment, security patches are installed independently of application code. The patching process can be partially automated with configuration management tools, so if you are running VMs in AWS or elsewhere, you can update the Puppet manifest or Chef recipe and “force” that configuration to all your instances from a central hub. Or you can utilize a service like CoreOS to automate this process.

A Docker image has two components: the base image and the application image. To patch a containerized system, you must update the base image and then rebuild the application image. So in the case of a vulnerability like Heartbleed, if you want to ensure that the new version of SSL is on every container, you would update the base image and recreate the container in line with your typical deployment procedures. A sophisticated deployment automation process (which is likely already in place if you are containerized) makes this fairly simple.

One of the most promising features of Docker is the degree to which application dependencies are coupled with the application itself, offering the potential to patch the system when the application is updated, i.e., frequently and potentially less painfully.

In short, to implement a patch, update the base image and then rebuild the application image. This requires systems and development teams to work closely together, with clear responsibilities.
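As a rough illustration, a rebuild step like the one below (a Python sketch that shells out to the Docker CLI; the image name and registry are hypothetical) can sit inside an existing deployment pipeline so that every rebuild pulls the freshly patched base image:

```python
import subprocess

def rebuild_and_push(image_tag: str, build_dir: str = ".") -> None:
    # --pull forces Docker to fetch the newest (patched) base image rather than
    # reusing a stale local copy; --no-cache ensures package layers are rebuilt.
    subprocess.run(
        ["docker", "build", "--pull", "--no-cache", "-t", image_tag, build_dir],
        check=True,
    )
    # Push the rebuilt application image so the deployment system can roll it out.
    subprocess.run(["docker", "push", image_tag], check=True)

if __name__ == "__main__":
    rebuild_and_push("registry.example.com/payments-api:patched-2018-01-15")
```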

Almost ready for prime time

If you are eager to implement Docker and are ready to take on a certain amount of risk, then the methods described here can help you monitor and patch containerized systems. At Logicworks, we manage containerized systems for financial services clients who feel confident that their environments meet regulatory requirements.

As public cloud platforms continue to evolve their container support and more independent software vendors enter the space, expect these “canonical” Docker security methods to change rapidly. Nine months from now, or even three months from now, a tool could emerge that automates much of what is manual or complex in Docker security today. When enterprises are this excited about a new technology, chances are that a whole new industry will follow.

The post Why Financial Services Companies Love Docker Containers appeared first on Logicworks.

Half of companies fail to meet PCI DSS compliance standards: Is your infrastructure up to it?

Only 55.4% of companies meet all PCI DSS compliance standards, according to a new report released by Verizon. While this number is up 7% from 2015, it still means that nearly half of retailers, IT services companies, payment software providers and hospitality organisations do not adequately protect credit cardholder information.

Companies had the greatest difficulty meeting the following requirements, many of which are related to infrastructure compliance and policies:

  • Requirement 3 – Protect stored cardholder data. Requirement 3 also saw the second highest use of compensating controls globally.
  • Requirement 6 – Develop and maintain secure systems, covering the security of applications, and particularly change management.
  • Requirement 11 – Test security systems and processes, including vulnerability scanning, penetration testing, file integrity monitoring, and intrusion detection.
  • Requirement 12 – Maintain information security policies. Control 12.8 (Manage service providers with whom cardholder data is shared) was the weakest of the Requirement 12 controls.

Additionally, 44.6% of companies fall out of PCI DSS compliance within nine months of validation.

At a time when 51% of compliance officers in financial services firms report a skills shortage in compliance, it is perhaps no wonder that many companies have fallen behind. Rather than hire more staff, 67% of IT leaders would prefer an automated approach to infrastructure compliance, which usually means a cloud-based solution. One of the many reasons cloud solutions are appealing is that the cloud platform (such as AWS or Azure) takes care of most physical security controls, reducing the overall cost and effort of building a compliant system. According to Gartner, more than 50 percent of new enterprise application adoptions in North America in 2017 will be composed of SaaS, PaaS, or IaaS solutions.

Infrastructure compliance automation on the public cloud (Amazon Web Services, Azure), which is referred to as Continuous Compliance or DevSecOps, has received increased attention in 2017. In basic terms, infrastructure compliance automation consists of several cloud-based tools, such as configuration management and infrastructure templates, that allow engineers to easily spin up compliant infrastructure, track configuration changes, and reduce manual compliance work.
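For example, if AWS Config is already recording resources in an account, a short script like this one can pull the configuration change history that would otherwise be assembled by hand during an audit (the security group ID below is a placeholder):

```python
import boto3

config = boto3.client("config")

# Placeholder resource ID; AWS Config must already be recording in this account.
history = config.get_resource_config_history(
    resourceType="AWS::EC2::SecurityGroup",
    resourceId="sg-0123456789abcdef0",
    limit=10,
)

# Each configuration item is a point-in-time snapshot of the resource's settings.
for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])
```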

The complexity of meeting infrastructure compliance requirements is growing, especially for companies that host large amounts of sensitive financial data. As companies explore cloud options, expect to see a shift in compliance management away from manual compliance work and towards cloud automation.

The post Nearly Half of Companies Fail to Meet PCI DSS Compliance Standards appeared first on Logicworks.

WannaCry and the public cloud: The CISO perspective

By Matthew Sharp, CISO, Logicworks

I recently attended a CISO Executive Summit here in NYC.  The room was packed with 175 CISOs and top-level security leaders from various industries.  There was broad agreement that WannaCry was a scramble for many of their teams, and created a long weekend for some.  We concurred that we were lucky the “kill switch” was triggered, and we soberly recognised that the exploit is being redeployed with newly weaponised malware.

The consensus among CISOs is that some key processes were tested, and those with critical structures in place fared much better than those with less mature programs.  At the same time, the incident highlighted the benefits of public cloud computing – and the need to apply automation in order to respond quickly and proactively to threats.

Implementing a strategy to protect against and respond to attacks like these goes beyond patching; it extends to automating provisioning that supports continuous integration / continuous delivery (CI/CD) pipelines and adopting the tenets of immutable infrastructure. When your infrastructure is designed to operate like a piece of software, you can reduce or eliminate the time it takes to respond to events such as WannaCry. We have found AWS indispensable in that regard.

In the best case, clients have a defence in depth strategy with strong endpoint technologies employing artificial intelligence, machine learning, statistical analysis or other buzz-wordy endpoint mitigation technologies.

This is then combined with the abstraction layer afforded by public cloud providers that empowers a clear use of automation, often driven via Infrastructure as Code (IaC) and purposeful orchestration.  The powerful result is that clients can perfectly define the intended state of every environment.  They can then provide assurance that the congruence between dev, stage, test, prod is precise.  By doing so, they accelerate their ability to deploy micro changes in addition to patches and configuration updates while understanding and mitigating many of the risks associated with change.

This year’s DevOps report again confirms that DevOps practices lead to better IT and organizational performance. High-performing IT departments achieve superior speed and reliability relative to lower-performing peers. The 2015 survey showed that high-performing teams deploy code 30 times more often and with 200 times shorter lead times than their peers. And they achieve this velocity and frequency without compromising reliability — in fact, they improve it. High-performing teams experience 60 times fewer failures.

In the case of WannaCry, the malware exploited a critical SMB remote code execution vulnerability for which Microsoft had already released a patch (MS17-010) in mid-March.

Clients already taking advantage of agile operations and public cloud technologies were unaffected, because the patch had been applied months earlier. Had it been a zero-day exploit, teams would still have had to scramble to patch, but the ability to implement configuration changes efficiently would have spared them the long weekend.
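On AWS, one hedged example of what that automated patch rollout can look like is Systems Manager Patch Manager: a few lines of boto3 push outstanding OS patches to an entire tagged fleet rather than to one server at a time. The tag key and value below are hypothetical.

```python
import boto3

ssm = boto3.client("ssm")

# Target instances by tag rather than by individual instance ID (hypothetical tag).
response = ssm.send_command(
    Targets=[{"Key": "tag:PatchGroup", "Values": ["windows-prod"]}],
    DocumentName="AWS-RunPatchBaseline",    # AWS-managed patching document
    Parameters={"Operation": ["Install"]},  # "Scan" reports only; "Install" applies patches
    Comment="Apply outstanding OS patches (e.g. MS17-010)",
)
print("Patch command:", response["Command"]["CommandId"])
```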

The post WannaCry and Public Cloud appeared first on Logicworks.

Why AWS and public clouds are a great fit for digital health companies

Global equity funding to private digital health startups grew for the 7th straight year in 2016, with a 12% increase from $5.9B in 2015 to $6.6B in 2016, according to CBInsights.

Not incidentally, the rise of digital health has coincided with rising familiarity and market acceptance of public cloud providers like Amazon Web Services (AWS). Public cloud is what has allowed growing healthcare software companies to get to market faster, scale, and meet compliance obligations at a fraction of the cost of custom-built on-premise systems.

Digital health go-to-market journey

Ten years ago, when digital technology was disrupting established companies in nearly every industry, health IT was still dominated by a handful of established enterprises and traditional software companies. In the scramble to meet Meaningful Use requirements for stimulus funding, healthcare providers and insurance companies moved en masse to adopt EMR, EHR, and HIE systems. A few years later, another scramble began as the insurance industry rushed to build HIX (Health Insurance Exchanges) under Obamacare.

Today, most healthcare software products are delivered as Software-as-a-Service platforms. Except for core systems, customers do not anticipate needing to add infrastructure to host new software products. They expect to access these services on the cloud, and be able to add or remove capacity on demand. While some legacy software products will struggle to modernize their code to run in the cloud, next generation cloud-native products benefit from the inherent competitive advantages of infrastructure-as-a-service.

In a public cloud like Amazon Web Services, you can:

  • Spin up servers in minutes
  • Get best-in-class infrastructure security off the shelf
  • Use compute/storage/network resources that have already been assessed to HIPAA standards, limiting the scope of your HIPAA assessment and removing the burden of maintaining most physical security controls (learn more about HIPAA compliance on AWS in our new eBook)
  • Leverage native analytics and data warehousing capabilities without having to build your own tools
  • Start small and scale fast as your business grows

Arguably the most important benefit for new companies is the ability to launch your software product into production in a short span of time. In order to comply with HIPAA, you still have to undergo a risk assessment prior to launch, but a good portion of that assessment can rely on AWS’ own risk assessment.

SaaS – not just for startups

The benefits of the SaaS delivery model are not limited to new startups. More established companies — who saw the market shift and took action early — have also benefited from the public cloud.

A top health insurance company recently launched an online wellness and population health management application for diabetes patients. The program combines a number of cloud-based technologies including Big Data, Internet of Things, and Live Media Streaming — all while maintaining HIPAA compliance.

This is all possible because the company hosted its new product on the AWS cloud.

The company also chose AWS because it supports the hyperscale growth of data that must be delivered seamlessly in patient-facing applications that monitor real-time health goals. This kind of data-crunching would be considerably more expensive in an on-premises data centre. AWS also takes care of a significant portion of the risk and cost of protecting physical access to sensitive health data.

They didn’t build the infrastructure for the application alone. They relied on cloud automation and a partner (Logicworks).

Cloud automation

One of the core benefits of AWS is that it has the potential to significantly reduce day-to-day IT operations tasks. IT can focus more on developing software, and less on building and maintaining infrastructure.

However, AWS is not maintenance-free out of the box. AWS is just rented servers, network, and storage; you still have to configure networks, set up encryption, build and maintain machine images — hundreds of tasks large and small that take up many man-hours per week. In order to make AWS “run itself”, you need automation.

Cloud automation is any software that orchestrates AWS. AWS officially recommends the following aspects of automation:

  • Each AWS environment is coded into a template that can be reused to produce new environments quickly (AWS CloudFormation; see the sketch after this list)
  • Developers can trivially launch new environments from a catalog of available AWS CloudFormation templates (AWS Service Catalog)
  • OS configuration is bootstrapped by a configuration management tool like Puppet or Chef, so that all configurations are consistently implemented and enforced. Or you can use AWS’ native service, AWS OpsWorks.
  • Deployment is automated. Ideally, an instance can be created, OS and packages are installed, it receives the latest version of code, and it is launched in an Auto Scaling Group or a container cluster without human intervention.
  • All CloudFormation, configuration management, etc. is versioned and maintained in a repository.
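As a minimal sketch of the first item above, assuming a CloudFormation template that is already versioned and published to S3 (the stack name, template URL, and parameters are placeholders), launching a new environment comes down to a few boto3 calls:

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Launch a new environment from a versioned template stored in S3 (placeholder URL).
cloudformation.create_stack(
    StackName="app-staging",
    TemplateURL="https://s3.amazonaws.com/example-bucket/templates/app-environment.yaml",
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "staging"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # required if the template creates IAM resources
)

# Block until the stack finishes building.
cloudformation.get_waiter("stack_create_complete").wait(StackName="app-staging")
```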

And yes, it is entirely possible to use these automation tools in a HIPAA-restricted environment. However, creating this software from scratch is time-consuming and complex. It requires vastly different skills from those required to launch AWS or write an application — and most healthcare companies don’t really have the time or resources for it, so hiring a partner is the best approach.

The value of external expertise for health IT on AWS

The AWS cloud is a new landscape for most risk-averse companies. Established healthcare companies struggle to understand the new responsibility model for security and compliance on AWS, while new healthcare companies just want to get HIPAA compliance “out of the way” so they can move on to growing their business. This is where a partner can help. An experienced AWS consulting partner can reduce the risk of migration and accelerate the process of getting a HIPAA audit-ready environment up and running quickly.

The good news is that AWS has a very robust partner ecosystem for healthcare companies. Visit the AWS healthcare partner page for more information. Or contact Logicworks — we currently manage AWS for companies like Orion Health, MassMutual, and Spring Venture Group with ePHI for more than 50 million Americans.

The post Why Digital Health Companies Belong in AWS Cloud appeared first on Gathering Clouds.

How to achieve HIPAA compliance on AWS: A guide


Healthcare companies that are accustomed to complete control over physical systems often struggle to understand their responsibilities in a cloud environment. Who is responsible for which aspects of compliance? Can healthcare companies trust Amazon with their mission-critical apps and sensitive data? What are the rules and boundaries for AWS compliance?

Mastering these intricacies can help you create compliance-ready systems on AWS. In this article, we will cover the basics, but for a deeper dive, download our eBook on Compliance on AWS.

Shared compliance responsibility on AWS (the short version)

By migrating to AWS, you have a shared compliance responsibility. This shared model means that AWS manages the infrastructure components from the host operating system (virtualisation layer) down to the physical security of AWS’ data centres. It is the customer’s responsibility to configure and secure AWS-provided services. In other words, AWS controls physical components; the customer owns and controls everything else. As AWS states repeatedly, “AWS manages security of the cloud, security in the cloud is the customer’s responsibility.”

Bottom line: Think of operating a compliant environment in AWS as similar to operating in a rented data centre. The data centre is responsible for controlling access to the data centre, locking the cages, and the physical security of the network — but you are responsible for everything else.

The AWS Shared Responsibility Model. Includes tasks performed by Logicworks. If you are managing your own environment, the tasks in blue would be your responsibility.

The same line of demarcation applies to IT controls. Customers in AWS shift the management of some IT controls to AWS, which results in a shared control environment. AWS manages controls associated with the physical and architectural infrastructure deployed in the AWS environment; the customer is responsible for network controls (Security Group configurations), access controls, encryption, and any control not directly managed by AWS.

For example, AWS provides the Identity and Access Management (IAM) tool and is responsible for the IT controls that govern access to the physical infrastructure holding your access policies; customers are responsible for setting up and maintaining roles and users in IAM. Inappropriate or unauthorised use of an AWS resource as a result of inadequate IAM controls is the customer’s responsibility.
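As a small, hedged illustration of the customer side of that split (the role name, policy, and bucket below are hypothetical), creating a least-privilege IAM role with boto3 looks roughly like this:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing EC2 instances to assume the role (hypothetical use case).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="app-reports-reader",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Least-privilege inline policy: read-only access to a single S3 prefix.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/reports/*",
    }],
}
iam.put_role_policy(
    RoleName="app-reports-reader",
    PolicyName="read-reports-only",
    PolicyDocument=json.dumps(read_only_policy),
)
```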

Physical security and environmental controls

Any customer can access a copy of AWS’ SOC 1 Type II report, which provides significant detail about physical security and environment controls. The report can be accessed through AWS Artifact, a repository of audit artifacts. This means that if an auditor requests specifics regarding the physical controls of a customer’s system, they can reference the AWS SOC 1 Type II report. AWS does not allow data centre tours, as independent reviews of data centre security are also part of the SOC, ISO 27001, and other audits.

Data privacy

AWS customers retain control and ownership of their data, and customers can move data on and off of AWS storage as required. AWS does not leverage any third-party providers to deliver services to customers and therefore does not provide any customer information or access to data to any other provider. Customers must control access to applications and data through the AWS Identity and Access Management service.

Client environments on AWS infrastructure are by default logically segregated from each other and have been designed to prevent customers from accessing instances not assigned to them. AWS has both instances that are dedicated to a single customer (Dedicated Instances) and instances hosted on shared infrastructure. AWS is responsible for patching the hypervisor and networking services, while customers patch their own guest operating systems, software, and applications.

Backups

AWS provides services to help customers perform their own backups. Amazon S3 and Glacier are the most popular options, and AWS provides data durability and redundancy guarantees. In this way, AWS provides services to enable disaster recovery and resiliency but does not automatically provide backups.
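A minimal sketch of that pattern, assuming boto3 and a pre-existing bucket (all names below are placeholders): upload backup artifacts to S3, then let a lifecycle rule transition older copies to Glacier automatically.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-backup-bucket"  # placeholder bucket name

# Upload a backup artifact produced elsewhere (e.g. a database dump).
s3.upload_file("db-backup.tar.gz", bucket, "backups/db-backup.tar.gz")

# Archive backups to Glacier after 30 days and expire them after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }],
    },
)
```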

HIPAA in AWS

New AWS customers often ask: Is AWS compliant with HIPAA? The answer to this question is complex. The short answer is that AWS is not “HIPAA compliant”, but it provides services that facilitate HIPAA compliance.

The U.S. Health Insurance Portability and Accountability Act (HIPAA) Privacy and Security Rules for protecting Protected Health Information (PHI) do not provide a certification or Attestation of Compliance to cloud providers or to healthcare companies. HIPAA is a set of federal regulations, not a security standard. A company and its business associates can be periodically audited for compliance with HIPAA regulations by the HHS Office for Civil Rights (OCR), and in the course of that audit it can meet or fail to meet those requirements, but it cannot be “Certified HIPAA Compliant”.

In order to process, store, or transmit PHI in AWS, a healthcare company (the “covered entity”) must sign a Business Associate Agreement (BAA) with AWS, meaning that AWS is performing functions or activities on behalf of the covered entity.

However, signing a BAA with AWS does not mean that the customer is “HIPAA compliant”. The customer can maintain compliance with HIPAA regulations through its own efforts to use cloud tools, architect applications, control access, etc. in a manner that complies with those regulations. AWS only assumes responsibility for physical hardware security controls of a limited number of covered services.

Covered services

For each compliance standard, there is a subset of AWS services/programs that are “in scope” of either the Attestation of Compliance, report, or contract. This means that these services have been audited by a third party for that specific compliance standard.

Customers may use any AWS service in an account designated as a HIPAA account, but they should only process, store and transmit PHI in the HIPAA-eligible services defined in the BAA. There are nine HIPAA-eligible services today.

Again, this does not preclude customers from using services that are not in scope. For example, AWS CloudFormation, an infrastructure as code service that we discuss in-depth in our eBook, is not included in the list of services in scope for a HIPAA BAA. But as long as no PHI is stored, processed or transmitted in AWS CloudFormation, a covered entity may use it. AWS CloudFormation is a templating service that can build out the core components of AWS architecture (networks, instances, etc) and therefore rarely if ever touches customer data in any use case, even in non-HIPAA regulated environments. Similarly, customers can use a service like AWS CloudWatch Logs, a logging service, if logs are scrubbed of customer information.

Compliance responsibility: Insource vs. outsource

Compliance management is a complex set of tasks with many interrelated components. By outsourcing to AWS, you are already removing some of the risk and cost of compliance, particularly as it relates to physical infrastructure security. Outsourcing infrastructure management to a third party Managed Services Provider further reduces your compliance burden.

Most companies that have compliance obligations and are new to AWS choose to work with a partner to plan, build, deploy, and operate their AWS environment in order to minimise risk, rapidly build out a compliance-ready environment, and minimise the time and effort of ongoing compliance maintenance.

Below is an example of a compliance responsibility matrix when you work with Logicworks:

Resources

AWS has published extensively on the topic of shared compliance responsibility. If you are considering hosting regulated data in AWS, we highly recommend that you utilise these resources:

General compliance information:

HIPAA-specific documentation:

The post How to Achieve HIPAA Compliance on AWS appeared first on Gathering Clouds.

‘Security by design’ and adding compliance to automation


By Jason McKay, CTO and SVP of Engineering, Logicworks

Security is “job zero” for every company. If you are putting your customers or users at risk,  you will not be in business for long. And that begins with taking a more proactive approach to infrastructure security — one that does not rely on the typical protective or reactive third party security tools, but builds security into your infrastructure from the ground up.

As your company moves to the cloud, it has an opportunity to start fresh and rethink who and what is responsible for security in your environment. You also want to be able to integrate security processes into your development pipeline and maintain consistent security configurations even as your applications constantly change. This has led to the rise of Security by Design.

The security by design approach

Security by design (SbD) is an approach to security that allows you to formalize infrastructure design and automate security controls so that you can build security into every part of the IT management process. In practical terms, this means that your engineers spend time developing software that controls the security of your system in a consistent way 24×7, rather than spending time manually building, configuring, and patching individual servers.

This approach to system design is not new, but the rise of public cloud has made SbD far simpler to execute. Amazon Web Services has recently been actively promoting the approach and formalizing it for the cloud audience. Other vendors promote similar or related concepts, often called Secure DevOps or Security Automation or Security-as-Code or SecOps. The practice becomes more important as your environment becomes more complex, and AWS actually has many native services that, if configured and orchestrated in the right way, create a system that is more secure than a manually-configured on-premises environment.

Does this mean that companies no longer need security professionals, just security-trained DevOps engineers? Not at all. When security professionals embrace this approach, they have far greater impact than in the past. This is actually an opportunity for security professionals to get what they have always dreamed of: introducing security earlier in the development process. Rather than retroactively enforcing security policies — and always being behind — they are part of the architecture planning process from Day 1, can code their desired specifications into templates, and always know that their desired configurations are enforced. They no longer need to be consulted on each and every infrastructure change, they only need to be consulted when the infrastructure templates change in a significant way. This means less repetitive busy-work, more focus on real issues.

Security by design in practice 

In practice, SbD is about coding standardized, repeatable, automated architectures so that your security and audit standards remain consistent across multiple environments. Your goals should be:

  • Controlled, standardized build process: Code architecture design into a template that can build out a cloud environment. In AWS, you do this with CloudFormation. You then code OS configurations into a configuration management tool like Puppet.
  • Controlled, standardized update process: Put your CloudFormation templates and Puppet manifests in a source code management tool like Git that allows you to version templates, roll back changes, see who did what, etc.
  • Automated infrastructure and code security testing as part of CI/CD pipeline: Integrate both infrastructure and code-level tests into code deployment process as well as the configuration management update process. At Logicworks, we often use AWS CodeDeploy to structure the code deployment process. You can also use Docker and AWS ECS.
  • Enforced configurations in production: Create configuration management scripts that continually run against all your environments to enforce configurations. These scripts are usually hosted in a central management hub, which typically calls for a hub-and-spoke VPC design (see the sketch after this list).
  • Mature monitoring tools with data subject to intelligent, well-trained human assessment: In compliant environments, your monitoring tools are usually mandated and logs must be subject to human review; we use native AWS tools like AWS CloudWatch, CloudTrail, and Inspector, as well as Alert Logic IDS and Log Manager and Sumo Logic, to meet most requirements. Sumo Logic helps us use machine learning to create custom alerts that notify our 24×7 Network Operations Center when unusual activity occurs, so that those engineers can take appropriate action with more accurate real-time data.
  • Little to no direct human intervention in the environment…ever: Once all these tools are in place, you should no longer need to directly modify individual instances or configurations. You should instead modify the template or script to update (or more ideally, relaunch) the environment.
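Here is a minimal sketch of that kind of enforcement script, using boto3 directly rather than Puppet or Chef: it finds security groups that allow SSH from anywhere and revokes the offending rule. Real enforcement logic would cover many more checks; this is illustrative only.

```python
import boto3

ec2 = boto3.client("ec2")

def revoke_open_ssh() -> None:
    """Find security groups that allow SSH (port 22) from 0.0.0.0/0 and revoke that rule."""
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for sg in page["SecurityGroups"]:
            for permission in sg.get("IpPermissions", []):
                if permission.get("FromPort") == 22 and permission.get("ToPort") == 22:
                    open_ranges = [
                        r for r in permission.get("IpRanges", [])
                        if r.get("CidrIp") == "0.0.0.0/0"
                    ]
                    if open_ranges:
                        ec2.revoke_security_group_ingress(
                            GroupId=sg["GroupId"],
                            IpPermissions=[{
                                "IpProtocol": permission["IpProtocol"],
                                "FromPort": 22,
                                "ToPort": 22,
                                "IpRanges": open_ranges,
                            }],
                        )
                        print("Revoked 0.0.0.0/0:22 on", sg["GroupId"])

if __name__ == "__main__":
    revoke_open_ssh()
```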

We have gone into significant technical depth into Logicworks’ security automation practices in other places; you can see our Sr. Solutions Architect’s talk about security automation here, watch him talk about our general automation practices here, or read this in-depth overview of our automation practices.

Here are some other great resources about Security by Design and Secure DevOps:

Compliance + security by design

As you can imagine, the SbD approach has a significant positive impact on compliance efforts. The hardest thing to achieve in infrastructure compliance is not getting security and logging tools set up and configured; it is maintaining those standards over time. In the old world, systems changed infrequently with long lead times, and GRC teams could always spend 2-3 weeks evaluating and documenting change manually (usually in a spreadsheet). In the cloud, when code gets pushed weekly and infrastructure is scalable, this manual compliance approach can severely limit the success of cloud projects, slow down DevOps teams, and frustrate both business and IT.

Running applications in the cloud requires a new approach to compliance. Ideally, we need a system that empowers developers and engineers to work in an agile fashion while still maintaining security and compliance standards; we need a toolchain that a) makes it easier to build out compliant environments, b) provides guardrails to prevent engineers and developers from launching resources outside of compliance parameters, and c) provides ongoing documentation about the configuration of infrastructure resources. The toolchain we have already described — templating, configuration management, monitoring — allows us to launch new compliant environments trivially, ensures very limited access to the environment, and documents every change. Together, this means a greatly reduced risk of undocumented configuration changes, errors, or gaps in knowledge about where sensitive data lives, and therefore a greatly reduced risk of compliance violations.

When systems are complex, there must be an equally powerful set of management tools and processes to enforce and maintain configurations. Continuous compliance is only possible if you treat your infrastructure as code. If your infrastructure can be controlled programmatically, your security and compliance parameters are just pieces of code, capable of being changed more flexibly, versioned in Git like any piece of software, and automated to self-correct errors. This is the future of any type of security in the cloud.
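One concrete form this takes on AWS is a custom AWS Config rule: a small Lambda function, versioned in Git like any other code, that evaluates each resource against a compliance parameter. The sketch below flags S3 buckets without default encryption; the configuration-item fields it reads are our assumption about the event shape, so treat it as illustrative rather than definitive.

```python
import json
import boto3

config = boto3.client("config")

def lambda_handler(event, context):
    """Custom AWS Config rule: flag S3 buckets that lack default encryption."""
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    # Assumption: encrypted buckets expose ServerSideEncryptionConfiguration
    # in the configuration item's supplementaryConfiguration block.
    encrypted = bool(
        item.get("supplementaryConfiguration", {}).get("ServerSideEncryptionConfiguration")
    )

    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": "COMPLIANT" if encrypted else "NON_COMPLIANT",
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```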

The future of SbD

SbD allows customers to automate the fundamental architecture and, as AWS says, “render[s] non-compliance for IT controls a thing of the past.”

Recent announcements out of AWS re:Invent 2016 are particularly exciting. AWS launched a major update to their EC2 Systems Manager tool, which is a management service that helps you automatically collect software inventory, apply OS patches, create system images, and configure Windows and Linux operating systems. Basically, AWS is filling the gaps in its existing SbD toolchain, stringing together a lot of the controls described above and allowing you to define and track system configurations, prevent drift, and maintain software compliance. Although EC2 Systems Manager was upstaged by several more headline-worthy releases, the service will make a significant difference to compliance teams in the cloud.

In the future, expect AWS and other cloud platforms to launch more comprehensive tools that make it easier for enterprises to achieve SbD in the cloud. The tools already exist, but assembling them into a robust framework can be a challenge for most IT teams. Expect enterprises to turn towards security-focused partners to fill the skills gap.

The post What is Security by Design? appeared first on Gathering Clouds.

Why only 3% of enterprises have an ‘optimised’ cloud strategy


The vast majority of enterprises still lack a mature cloud strategy, according to a recent survey of 6,159 executives conducted by IDC. Just 3% of respondents define their cloud strategies as “optimised,” the highest level of strategic maturity, while nearly half (47%) say their cloud strategies are usually “opportunistic or ad hoc.”

At a time when 68% of surveyed organisations are using cloud to help drive business outcomes (up from 42% just one year ago), this lack of strategy could have long-term effects on the success of cloud projects.

An “ad hoc” cloud strategy is likely the result of a number of factors: the project-by-project adoption of cloud, the speed of cloud adoption, and the staggered expiration of data centre contracts / equipment (which leads to intermittent cloud migration), to name a few. But no matter the cause, the result is often one or more of the following:

  • Isolated cloud projects without common, shared standards
  • Ad hoc security configurations, either due to lack of common benchmark or inconsistent application of that benchmark
  • Lack of cross-team shared resources and learnings, which could lead to further efficiencies
  • Lack of consistent financial data for tracking purposes

In some cases, an ad hoc cloud strategy is purposeful: the enterprise wants to take advantage of cloud technologies by experimenting with multiple platforms, usually as part of a Center of Excellence or an isolated DevOps team that works out the best strategy and then communicates it to the rest of the company. Anecdotally, many of the companies Logicworks works with get stuck at this step; they have one or two moderately successful projects, but do not know how to expand usage with the proper controls across the enterprise.

The IDC survey confirms the results of a Logicworks/Wakefield survey conducted in July 2016, which found that nearly half of IT decision makers believe their organisation’s IT workforce is not completely prepared to address the challenges of managing their cloud resources over the next five years.

The report further found that one of the top reasons for this lack of strategy is that many organisations mistake cloud vendors’ claims of “simplified” infrastructure maintenance for “little to no” infrastructure maintenance; in other words, they think maintaining their cloud systems will be easy. The survey found that 80% of IT decision makers feel that leadership underestimates the cost and effort of managing cloud systems, and as a result, they do not effectively plan for the staffing and resources IT requires to achieve highly available, scalable cloud systems.

Clearly enterprises have some work to do — not just to develop a strategy around where and when cloud is adopted, but how to manage and govern these cloud deployments going forward. Whether they choose to do so in-house, with a short term strategic consultant engagement, or with a long-term cloud management solution, most know that they cannot afford to maintain an “ad hoc” cloud strategy. As cloud matures, expect to see a mature, flexible framework for cloud management, automation, and large-scale DevOps as the next must-have for enterprise IT teams.

The post Only 3% of Enterprises Have Optimized Cloud Strategy, Survey Finds appeared first on Logicworks Gathering Clouds.

Advanced cloud security: Standards and automation in a multi-vendor world


Enterprise IT has long struggled to develop common standards for the security of cloud deployments. With multiple cloud vendors, fast-moving product teams, and a changing security landscape, it is perhaps no wonder that enterprises are left asking:

  • What is the right cloud security standard?
  • What level of security is “good enough”?
  • And most importantly — how do we apply these standards in a consistent way to existing and new cloud environments?

In July 2016, the Ponemon Institute published The 2016 Global Cloud Data Security Study, an independent survey of 3,400 technology and security professionals. More than half of the respondents did not have measures for complying with privacy and security requirements in the cloud. That is clearly a problem.

We sat down with Dan Rosenbloom, the lead AWS architect at Logicworks, to talk about his own struggle with standardization and how he enforces common configurations across 5,000+ VMs on AWS.

Why do you think central IT and GRC teams have struggled to keep up with the pace of cloud adoption?

Every company is different, but this struggle usually happens when the business goal of getting to the cloud and meeting deadlines trumps long-term goals, like setting up standards for consistency and manageability. In fast-moving companies, every team builds a cloud environment on a different platform following their own definition of cloud security best practices; security teams are often pressured to complete reviews of these custom, unfamiliar environments in short time frames. Both developers and security professionals end up unhappy.

Many central IT departments are in the process of reinventing themselves. You do not become a service-oriented IT team overnight — and when you add cloud adoption into the mix, you put further strain on already stretched processes and resources.

Have you seen a company go through this process well? Or is this struggle inevitable?

To some degree, every company struggles. But the ones that struggle less do some or all of the following:

  • They choose (and stick to) a limited set of cloud security guidelines for all projects on all platforms (e.g. NIST, IT consortia like the Cloud Council, and industry-specific associations like the Legal Cloud Computing Association)
  • Central IT maintains a strong role in cloud purchasing and management
  • Central IT commits to an automation-driven approach to cloud management, and develops modular templates and scripts rather than one-off environments

What standard do most organizations apply in cloud security?

From a strategic perspective, the issue boils down to who can access your data, how you control that access, and how you protect that data while it is being used, stored, transmitted and shared. At Logicworks, we build every cloud to at least the PCI DSS standard, which was developed by the credit card industry to protect consumer financial information. We choose PCI because a) it is specific and b) we believe it represents a high standard of excellence. Even clients with no PCI DSS compliance requirement meet at least these standards as a baseline.

If your company already has annual infrastructure audit processes and standards, a supplementary standard like the Cloud Council's can help orient your GRC team for cloud-specific technologies.

How does central IT enforce this standard?

One of the benefits of cloud technology is that you can change your infrastructure easily, but if your environment is complex, change management and governance quickly become a nightmare. Automation is the key.

One of Logicworks’ main functions as a managed service provider is to establish a resource’s ideal state and ensure that even as the system evolves, it never strays from that state. We use four key processes to do this:

  • Infrastructure automation: Infrastructure is structured and built into templates, where it can be versioned and easily replicated for future environments. Tools: AWS CloudFormation, Git
  • Configuration management: Configuration management scripts and monitoring tools catch anomalies and proactively correct failed/misconfigured resources. This means that our instances replace themselves in case of failure, errors are corrected, and ideal state is always maintained. Tools: Puppet, Chef, Jenkins, AWS Autoscaling, Docker, AWS Lambda, AWS Config Rules
  • Deployment automation: Code deployment processes are integrated with cloud-native tools, improving deployment velocity and reducing manual effort (and error). Tools: AWS CodeDeploy, AWS Lambda, Jenkins
  • Compliance monitoring: Systems are continuously monitored for adherence to security standards and potential vulnerabilities, and all changes are logged for auditing (a minimal custom-rule sketch follows this list). Tools: Git, AWS Config Rules, AWS CloudTrail, AWS Config, Amazon Inspector, Sumo Logic, configuration management (Puppet, Chef, Ansible)
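
To make the compliance monitoring piece concrete, here is a minimal sketch of a Lambda-backed AWS Config custom rule in Python (boto3). This is not Logicworks' actual tooling; the required tag name ("Environment") and the decision to evaluate only EC2 instances are assumptions made for illustration. A rule like this reports any resource that drifts from the standard back to AWS Config, where the result is available for alerting and audit.

# Illustrative AWS Config custom rule (Lambda handler): marks EC2 instances
# NON_COMPLIANT when a required tag is missing. The tag name "Environment"
# and the EC2-only scope are assumptions for this sketch.
import json
import boto3

config = boto3.client("config")

REQUIRED_TAG = "Environment"  # assumed tag required by the internal standard


def lambda_handler(event, context):
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    # Only evaluate EC2 instances here; everything else is out of scope.
    if item["resourceType"] != "AWS::EC2::Instance":
        compliance = "NOT_APPLICABLE"
    elif REQUIRED_TAG in (item.get("tags") or {}):
        compliance = "COMPLIANT"
    else:
        compliance = "NON_COMPLIANT"

    # Report the result back to AWS Config so dashboards and audits stay current.
    config.put_evaluations(
        Evaluations=[
            {
                "ComplianceResourceType": item["resourceType"],
                "ComplianceResourceId": item["resourceId"],
                "ComplianceType": compliance,
                "OrderingTimestamp": item["configurationItemCaptureTime"],
            }
        ],
        ResultToken=event["resultToken"],
    )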

It is easiest to see how this works together in an example. Let’s say central IT wants to build a standard, PCI DSS compliant environment that product teams can modify and use for their own individual projects. Essentially, they want to “build in” a baseline of security and availability standards into a repeatable template.

First, they would define the core resources (compute, storage, network) in a framework like AWS CloudFormation, with some basic rules and standards baked in at the infrastructure level. Then the OS would be configured by a configuration management tool like Puppet or Chef, which performs tasks such as enforcing multi-factor authentication and installing log shipping and monitoring software. Finally, those resources receive the latest version of the code and are deployed into dev/test/production.
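
As a rough illustration of that first step, the sketch below uses Python and boto3 to launch a stack from a small CloudFormation template kept under version control. The stack name, the single security group, and the "HTTPS only" rule are placeholders chosen for this example; an actual PCI DSS baseline would cover far more (logging, encryption, network segmentation, IAM controls, and so on).

# Illustrative sketch: a versioned baseline template (stored in Git) launched
# with boto3. Names and the single security group are hypothetical; a real
# PCI DSS baseline would include many more resources and controls.
import json
import boto3

BASELINE_TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Baseline web tier: inbound HTTPS only",
    "Parameters": {
        "VpcId": {"Type": "AWS::EC2::VPC::Id"}
    },
    "Resources": {
        "WebSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "Allow inbound HTTPS only",
                "VpcId": {"Ref": "VpcId"},
                "SecurityGroupIngress": [
                    {"IpProtocol": "tcp", "FromPort": 443,
                     "ToPort": 443, "CidrIp": "0.0.0.0/0"}
                ],
            },
        }
    },
}


def launch_baseline(vpc_id, stack_name="baseline-web"):
    """Create a stack from the shared baseline template and return its ID."""
    cfn = boto3.client("cloudformation")
    response = cfn.create_stack(
        StackName=stack_name,
        TemplateBody=json.dumps(BASELINE_TEMPLATE),
        Parameters=[{"ParameterKey": "VpcId", "ParameterValue": vpc_id}],
    )
    return response["StackId"]

From there, Puppet or Chef would take over at the OS layer and the deployment tooling would handle code releases, as described above.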

Obviously I am oversimplifying this process, but we have found that this is an extremely effective way to ensure that:

  • Product teams have access to an approved, “best practices” template, which ensures they are not launching completely insecure resources
  • You have a central source of truth and can manage multiple systems without having to log in and make the same change on hundreds of systems one by one
  • Security teams have a record of all changes made to the environment, because engineers apply changes through the configuration management tool (rather than one-off)
  • Standards are continually enforced, without human intervention

How do you bring engineers on board with an automation system?

I have been in IT for about 15 years, and I come from a very “traditional” systems engineering background. I always tried to follow the old school IT rule that you should never do the same thing twice — and the cloud just gives me many new opportunities to put this into practice.

I got introduced to configuration management a few years ago and was obsessed from day 1. Most engineers like scripting things if there is an annoying problem, and cloud automation takes care of some of the most annoying parts of managing infrastructure. You still get to do all the cool stuff without the repetitive, mindless work or constant firefighting. Once your engineers get a taste of this new world, they will never want to go back.

Any advice for IT teams that are trying to implement better controls?

The automation framework I just described (infrastructure automation, configuration management, deployment automation, and compliance monitoring) is complex and can take months or years to build and master. If you work with a Managed Service Provider that already has this framework in place, you can achieve this operational maturity in months; but if you are starting on your own, do not worry about getting everything done at once.

Start by selecting a cloud security standard or modifying your existing standard for the cloud. Then build these standards into a configuration management tool. Even if you do not build a fully automated framework, implementing configuration management will be a huge benefit for both your security team and your developers.
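
As a small, hedged example of what "building standards into code" can look like before a full configuration management rollout, the Python sketch below checks one control in the spirit of the multi-factor authentication requirement mentioned earlier: it lists IAM users who have a console password but no MFA device. It is an illustrative audit script, not a complete control or a replacement for tools like Puppet or Chef.

# Minimal illustration of encoding one control from a security standard as code:
# flag IAM users who can log in to the console but have no MFA device.
import boto3

iam = boto3.client("iam")


def users_missing_mfa():
    """Return IAM user names that have a console password but no MFA device."""
    offenders = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            try:
                iam.get_login_profile(UserName=name)  # raises if no console password
            except iam.exceptions.NoSuchEntityException:
                continue  # no console access, so this check does not apply
            if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
                offenders.append(name)
    return offenders


if __name__ == "__main__":
    for name in users_missing_mfa():
        print("User without MFA: " + name)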
