DataCentred ARM-based OpenStack cloud goes GA

DataCentred is moving its ARM-based OpenStack cloud into GA

It has been a big week for ARM in the cloud, with Manchester-based cloud services provider DataCentred announcing that its ARM AArch64-based OpenStack public cloud platform is moving into general availability. The move comes just days after OVH announced it would roll out an ARM-based cloud platform.

The company is running the platform on HP M400 ARM servers, and offering customers access to Intel and ARM architectures alongside one another within an OpenStack environment.

The platform, a product of DataCentred’s partnership with Codethink that originally launched in March, comes in response to increasing demand for ARM-based workload support in the cloud, according to DataCentred’s head of cloud services Mark Jarvis.

“The flexibility of OpenStack’s architecture has allowed us to make the integration with ARM seamless. When users request an ARM-based OS image, it gets scheduled onto an ARM node, and aside from this the experience is identical to requesting x86 resources. Our early adopters have provided invaluable testing and feedback, helping us to get to the point where we’re confident about stability and support,” Jarvis explained.

“The platform is attracting businesses who are interested in taking advantage of the cost savings the lower-power chips offer as well as developers who are targeting ARM platforms. Developers are particularly interested because virtualised ARM is an incredibly cost-effective alternative to deploying physical ARM hardware on every developer’s desk,” he added.

The company said the ARM architecture also offers environmental and space-saving benefits, because ARM chips can be deployed at higher density and require less power to run than more conventional x86 chips.

Mike Kelly, founder and chief executive of DataCentred, didn’t comment on customer numbers or revenue figures but stressed the move demonstrates the company has successfully commercialised OpenStack on ARM.

“The market currently lacks easy to use 64-bit ARM hardware and DataCentred’s innovation provides customers with large scale workloads across many cores. Open source software is the future of computing and the General Availability of DataCentred’s new development will make our services even more attractive to price-sensitive and environmentally-aware consumers,” Kelly said.

DataCentred isn’t alone in the belief that ARM has a strong future in the cloud. The move comes the same week French cloud and hosting provider OVH announced plans to add Cavium ARM-based processors to its public cloud platform by the end of next month.

The company, an early adopter of the Power architecture for cloud, said it will add Cavium’s flagship 48-core 64-bit ARMv8-A ThunderX workload-optimized processor to its RunAbove public cloud service.

Freeport-McMoRan moves its apps into hybrid cloud

Freeport-McMoRan has given itself five years to complete the cloud migration

Copper and gold producer Freeport-McMoRan is embarking on a five-year project aimed at migrating its core IT applications over to a hybrid cloud platform. The company said the move is aimed at helping it become more agile and reduce overall IT spending.

Freeport-McMoRan is migrating to a system developed by Accenture and based on Microsoft Azure; the company said its core applications will be deployed on a combination of private and public cloud platforms, with Avanade and Accenture offering up a series of tools helping the company automate and manage its workloads.

“This program brings innovation and cloud economics to bear as we work to become more agile, drive increased revenue, and continue our focus on items that impact mine production,” said Bertrand Odinet, vice president and chief information officer of Freeport-McMoRan.

“By partnering with Accenture, we will gain the ability to grow our service portfolio and scale our IT services in line with our global business requirements,” Odinet said.

Amy K. Dale, managing director and client account lead at Accenture, said: “We are collaborating with Freeport-McMoRan to help them evolve to an everything ‘as-a-service’ model, giving them the ability to easily provision new capabilities, reduce risk associated with vendor ‘lock-in’ and enable them to scale their IT services up and down as needed.”

Freeport-McMoRan is the latest natural resource firm to move its core applications into the cloud. In April this year Rio Tinto announced a partnership with Accenture that will see it move the bulk of its application landscape to Accenture’s public cloud service in a bid to save costs and switch to an “as-a-service” IT model.

CSA lends prototype compliance tool to six-year cloud security project

The CSA is part of the STRATUS project, a six-year cybersecurity project

The Cloud Security Alliance (CSA) said this week that it is lending a prototype data auditing and compliance regulation tool to the STRATUS initiative, a six-year multi-million dollar cybersecurity project funded by New Zealand’s Ministry of Business, Innovation, and Employment.

STRATUS, which stands for Security Technologies Returning Accountability, Transparency and User-centric Services in the Cloud, is a project led by the University of Waikato that intends to develop a series of security tools, techniques and capabilities to help give cloud users more control over how they secure the cloud services they use.

As part of the project the CSA showed how cloud data governance could be automated by applying auditing guidelines (the CSA Cloud Controls Matrix, ISO standards, etc.) and compliance regulations using a recently developed online tool.

The organisation, which is leading the data governance and accountability subproject within STRATUS, said it would also help support STRATUS’ commercialisation efforts.

“STRATUS’ approach to research commercialisation is different from typical scientific research grants,” said Dr. Ryan Ko, principal investigator of STRATUS, and CSA APAC research advisor.

“STRATUS understands that for cloud security innovation to reach a global audience, it will require a platform which will allow these cutting-edge cloud services to quickly align to global best practices and requirements – a core CSA strength given its strong research outputs such as the Cloud Controls Matrix and the Cloud Data Governance Working Group,” Ko said.

Aloysius Cheang, managing director for CSA APAC, said: “We have developed a prototype tool based on our work so far, that has received positive reviews. In addition, we are working to connect STRATUS and New Zealand to the CSA eco-system through our local chapter. More importantly, we are beginning to see some preliminary results of the efforts to connect the dots to commercialisation efforts as well as standardization efforts.”

The organisation reckons it should be able to show off the “fruit of these efforts” in November this year.

Salesforce Adds Security Service

Salesforce has recently announced Shield, a set of services that expands the security and compliance tool sets of developers creating apps on the Salesforce1 platform for regulated industries. The service adds auditing, encryption, archiving and monitoring services to Salesforce1 to make it easier for developers to ensure that cloud apps meet the security, compliance and governance requirements of their organization and industry standards. Shield’s features can be configured through a drag-and-drop interface rather than requiring app developers to search through code.

Tod Nielsen, executive vice president of Salesforce1 Platform, said “[Companies] in regulated industries have struggled to take full advantage of the cloud due to regulatory and compliance constraints. With Salesforce Shield, we are liberating these IT leaders and developers, and empowering them to quickly build the cloud apps their businesses need, with the trust Salesforce is known for.”

Shield will have three main features: Field Audit Trail, Data Archive and Platform Encryption. Field Audit Trail allows developers to monitor data exchanged through their apps to ensure that it is kept up to date and compliant with industry regulations; it may track data changes for up to 10 years. Data may be deleted when no longer needed. Data Archive allows historical data that needs to be kept for a long time to be stored, which helps ensure data is available when needed. Platform Encryption allows developers working on the Salesforce1 platform to encrypt data without affecting the way it is used by other areas of the business so that they do not need specialist hardware or software.

Analysing cloud as the target for disaster recovery

Analysis If we think about the basic functionality that we take advantage of in the cloud, a whole set of possibilities open up for how we plan for disaster recovery (DR).

Until recently, disaster recovery was an expensive, process-intensive service, reserved for only the most critical of corporate services. In most cases DR has meant large servers in two geographically distributed locations and large data sets being replicated on a timed basis. Many smaller or less critical services were relegated to a backup-and-restore DR solution, although in many cases, as these applications “grew up”, organisations realised that they too needed better protection. Unfortunately, the cost of standing up a legacy-style DR environment remained prohibitive for all but the largest and most critical services.

As the world has moved to virtual data centre (VDC) and cloud-based services, we have seen the paradigm shift. Many features of private, public and hybrid clouds provide a solid basis for developing and deploying highly resilient applications and services. Here are some of the features that make this possible.

  • Lower infrastructure cost. Deployment of services into a cloud environment has been shown to be highly effective in reducing acquisition, upgrade and retirement costs. While an organisation’s “mileage may vary”, planning and choice of appropriate cloud options can provide an environment to protect a much larger range of services.
  • Regionalisation. The ability to create a cloud environment based on multiple geographically distinct cloud infrastructure instances. This allows the “cloud” to operate as one while distributing load and risk to multiple infrastructures. These regions can be built in a number of ways to fit the organisation’s requirements.
  • Storage virtualisation and cloud based replication. The biggest issue facing any DR solution hasn’t changed just because we live in the cloud; data consistency is and will remain the number one headache for DR planners whether utilising legacy technologies or the cloud.

    Fortunately, over time the maturity of storage virtualisation and cloud-based replication technologies has increased in an attempt to keep up with the challenge. Once again, organisations need to understand their options in terms of hypervisor-based replication, such as Zerto, which replicates data on a hypervisor-by-hypervisor basis, or storage-based virtualisation, such as VPlex and ViPR from EMC, for storage-based replication.

The key concept in DR planning in the cloud is the creation of an extended logical cloud regardless of the physical location. Let us look at three possible solutions utilising varying cloud deployment models.

Multi-region private cloud

In this option, both the primary and secondary sites, as well as the DevOps and management environments sit within the organisation’s internal environment. The primary and secondary are configured as regions within the organisation’s private cloud.

The biggest benefits of this option are that data is replicated at wide-area-network speed and corporate security policies can remain unmodified. The downside is the need to acquire multiple sets of infrastructure to support the services.

Multi-region hybrid cloud

In this option the services are developed, managed and primarily deployed to an in-house private cloud environment, while the secondary site resides within a public cloud provider’s domain. This configuration reduces the need to purchase a secondary set of hardware, but it also increases the data replication load over the public Internet and the time required to move data to the public cloud.

Multi-region public cloud

In this option both primary and secondary sites reside in the public cloud and depend on the cloud provider’s internal networking and data network services. The service’s management and DevOps still reside within the organisation. This is the lowest-cost option and the quickest to scale, thanks to low acquisition and update costs, and it also provides the most flexibility. The possible downsides are data movement to and from the cloud, and the possible need to adjust the organisation’s security policies and procedures.

The takeaway

Many aspects of the above solutions need to be considered before beginning a project to use the cloud as a DR target.

There are plenty of items to think about – not least your disaster recovery operations mode, and whether it is active/active or active/passive. Just like legacy DR solutions, a decision needs to be made about the activity or non-activity of the DR resources versus the cost or benefit of utilising, or leaving idle, a set of resources. If reaching a wider geographic region is a benefit, then active/active might be a consideration, although keep in mind that active/active will require two-way replication of data, while active/passive will not require this level of coordination. Networking is also key; DNS, user access and management connectivity to the environment need to be thoroughly planned out.
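
Where an active/passive mode is chosen and DNS steers users to whichever region is live, a health-check-driven failover record is one common pattern. The following is a minimal sketch, assuming AWS Route 53 and boto3; the hosted zone, record name, addresses and health check identifier are hypothetical placeholders, and other DNS providers offer equivalent primitives.

```python
import boto3

# Minimal sketch of active/passive DNS failover with Amazon Route 53.
# Zone ID, record name, IP addresses and health check ID are hypothetical.
route53 = boto3.client("route53")

def upsert_failover_record(role, set_id, ip, health_check_id=None):
    """Create or update one half of a PRIMARY/SECONDARY failover pair."""
    record = {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,            # "PRIMARY" or "SECONDARY"
        "TTL": 60,                   # low TTL so failover propagates quickly
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    route53.change_resource_record_sets(
        HostedZoneId="Z_EXAMPLE_ZONE_ID",
        ChangeBatch={"Changes": [{"Action": "UPSERT",
                                  "ResourceRecordSet": record}]},
    )

# The primary points at the active region and is health-checked; the
# secondary points at the DR region and only answers when the check fails.
upsert_failover_record("PRIMARY", "primary-site", "198.51.100.10",
                       health_check_id="hc-primary-example")
upsert_failover_record("SECONDARY", "dr-site", "203.0.113.10")
```

With records like these in place, the DNS service answers with the secondary address whenever the primary’s health check fails, which is exactly the active/passive behaviour described above.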

The biggest concern I have heard from customers is, “How do I enforce the same security standards on a public cloud environment as I have for my in-house environments?” This is an excellent question and one not answered lightly.

In many cases corporate security policies (including perimeter controls, IDAM, logging) can be translated to the public cloud with a little flexibility and a good deal of innovation. For example, virtual perimeter firewalls can be implemented and controlled from the same SOC as their physical counterparts. Likewise, the same IDAM system that is utilised in-house can be modified and then accessed over the net in a public cloud based environment.
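
As a small illustration of the virtual perimeter firewall point, here is a minimal sketch, assuming AWS and boto3, that expresses an on-premises perimeter rule (only corporate HTTPS traffic in) as a cloud security group; the VPC ID and CIDR range are hypothetical placeholders.

```python
import boto3

# Sketch: translate a corporate perimeter rule ("HTTPS from the corporate
# range only") into a security group. VPC ID and CIDR are hypothetical.
ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="dr-perimeter-https",
    Description="Perimeter rule translated from corporate firewall policy",
    VpcId="vpc-0123456789abcdef0",
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "198.51.100.0/24",
                      "Description": "Corporate egress range only"}],
    }],
)
```

Because the rule lives in an API call rather than a box in a rack, it can be version-controlled, audited and monitored from the same SOC tooling as its physical counterpart.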

Keeping the applications that make up the service in sync across regions requires that when updates are made to the primary virtual machines, virtual machines in the secondary environments are also updated. The implementation of a cloud orchestration tool, such as CSC’s Agility suite, can help a great deal.

One decision point that an organisation needs to reach is between virtualisation of data and replication of data. This carefully considered decision depends on the chosen DR operations mode and the application architecture. Another viable option is for the application itself to maintain the consistency of the data. The best example of this is directory services (below). Directory services applications are built to maintain data consistency across their multiple controller groups.

It is still true that moving large amounts of data in the public cloud can be a slow and painful process. Unfortunately, most applications will need to have sizeable data sets deployed at some point. I have advised a number of customers to limit the number of large data moves to major version deployments and changes to the underlying structure.

Even if the number of large data moves is limited, proper data architecture and structure is critical. Data consistency based on the DR mode and data replication strategy – in other words, how soon the service needs to have data consistent across regions – is another aspect that needs to be understood.

As a high-level example, consider a hybrid solution for directory services.

The easy part of this solution is that the domain controllers are built by the software provider to stay synchronised. This reduces the data replication problem to providing a large enough network connection for the transactions.

Fortunately, the “add, change and delete” transactions typical of directory services are very small and even in a high volume environment do not need a very large pipe between private and public clouds. Also, while a physical firewall controls access to the primary private cloud environment, an equivalent virtual firewall is used in the public cloud.

Cloud Is No Longer Uncharted Territory By @vmTyler | @CloudExpo #Cloud

During my first trip out to the Blue Box office in Seattle this week, I thought about the coast-to-coast flight while relaxing at 40,000 feet. A journey of thousands of miles, made routinely in about five hours. I remembered one of my favorite computer games as a kid—The Oregon Trail. It tried to capture the experience of leading a wagon train of settlers to the West Coast in 1848. The trip took almost six months and death along the way wasn’t uncommon, as the game frequently reminded you.

It’s amazing how that trip has transformed from perilous to commonplace. The need for safe travel drove massive technology advancements. We’ve seen a similar transformative push in cloud computing. Cloud computing is already having a major impact on IT services. The need for on-demand, self-service offerings from the developers and lines of business has made this a critical area for CIOs to…

Security and advanced automation in the enterprise: Getting it right

Complexity is a huge security risk for the enterprise.

While security is always a top priority during the initial build phase of a cloud project, over time security tends to slip. As systems evolve, stacks change, and engineers come and go, it’s very easy to end up with a mash-up of legacy and cloud security policies piled on top of custom code that only a few engineers know how to work with.

The security of your system should not depend on the manual labour — or the memory — of your engineers. They shouldn’t have to remember to close XYZ security loophole when deploying a new environment. They don’t have time to manually ensure that every historical vulnerability is patched on every system across multiple clouds.

Security automation is the only long-term solution.

Automation significantly improves an engineer’s ability to “guarantee” that security policies are not only instituted, but maintained throughout the lifecycle of the infrastructure. Automated security policies encourage the adoption of evolving standards. And as vulnerabilities are exposed, changes can be implemented across hundreds or even thousands of complex systems, often simultaneously.

Why security automation?

No one can remember everything: The #1 reason to automate security is that human memory is limited. The bigger the infrastructure, the easier it is to forget to close XYZ security loophole when launching a new environment, or remember to require MFA, etc. Engineers are a smart group, but automation created by expert engineers is smarter.

Code it once and maintain templates, not instances: Manual security work is not only risky, but also extremely time-consuming. It is much wiser to focus engineering time on building and maintaining the automation scripts that ensure security than on the manual work required to hunt down, patch, and upgrade individual components on each of your virtual servers.

Standard naming conventions: Inconsistent or sloppy naming is a bigger security risk than most people think. Imagine an engineer being tasked with opening a port on one of two security groups named, say, prod-web-sg-2 and prod-web-sg-02: it would be fairly easy to mistake one security group for another.
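
A quick way to surface this class of risk is to scan existing group names for near-duplicates. The snippet below is a minimal sketch assuming AWS and boto3; the 0.9 similarity threshold is an arbitrary, illustrative cut-off rather than a recommendation.

```python
import boto3
from difflib import SequenceMatcher
from itertools import combinations

# Sketch: flag security groups whose names are confusingly similar.
ec2 = boto3.client("ec2")
groups = ec2.describe_security_groups()["SecurityGroups"]
names = [(g["GroupId"], g["GroupName"]) for g in groups]

for (id_a, name_a), (id_b, name_b) in combinations(names, 2):
    if SequenceMatcher(None, name_a, name_b).ratio() > 0.9:
        print(f"Possible mix-up: {name_a} ({id_a}) vs {name_b} ({id_b})")
```

Better still, generate names from a template in the automation itself, so the convention is enforced rather than merely checked after the fact.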

Ensure historical vulnerabilities continue to be patched: When a security vulnerability is identified, engineers must manually patch the vulnerability in the right places, across hundreds or even thousands of separate systems. No human can ensure that the patch is in place across all of these systems.

When something like Heartbleed happens, the engineer can (see the sketch after this list):

  1. Update SSL (or affected package) in a single configuration script
  2. Use a configuration management tool like Puppet to declaratively update all running and future instances, without human intervention
  3. See at first glance which instances are meeting core security objectives
  4. Guarantee that any new instances, whether created during an Auto Scaling event or due to failover, are protected against all historical vulnerabilities
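
Step 3 is the part that is hardest to do by hand. One hedged way to get that at-a-glance view on AWS is to run a version check across the fleet with Systems Manager Run Command, as in the sketch below; the tag filter is a hypothetical example of how instances might be labelled.

```python
import boto3

# Sketch: audit which instances still run an old OpenSSL build using
# AWS Systems Manager Run Command. The tag filter is hypothetical.
ssm = boto3.client("ssm")

resp = ssm.send_command(
    Targets=[{"Key": "tag:Role", "Values": ["web"]}],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["openssl version"]},
    Comment="Post-disclosure OpenSSL audit",
)
command_id = resp["Command"]["CommandId"]

# Once the command has finished, collect per-instance output to see at a
# glance which hosts are patched.
invocations = ssm.list_command_invocations(CommandId=command_id, Details=True)
for inv in invocations["CommandInvocations"]:
    print(inv["InstanceId"], inv["CommandPlugins"][0]["Output"].strip())
```

The remediation itself still belongs in the configuration management layer (step 2), so that instances created later are born patched rather than patched after the fact.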

Limit custom configurations: When different environments are built by different engineering teams at different times, manual security configurations often mean custom configurations. This makes it very difficult to gauge the impact of a feature change on security. A single or limited number of custom configurations not only reduces the risk of unexpected security implications, but also means your team is not relying on the memory of the one or two engineers that built the application’s infrastructure.

Our security automation tools

Infrastructure build out: AWS CloudFormation

Infrastructure build out should be the first thing an IT team automates. This includes networking, security groups, subnets, and network ACLs. At Logicworks, we use AWS CloudFormation to create templates of the foundational architecture of an environment.

CloudFormation allows us to spin up completely new environments in hours. This means no manual security group configuration and no AWS Identity and Access Management (IAM) role configuration. Because configuration is consistent across multiple environments, updates / security patches are near-simultaneous. It also ensures that the templated architecture meets compliance standards, which are usually crucial in the enterprise.
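
As a minimal sketch of what this looks like in practice, the snippet below launches a stack from a version-controlled template with boto3; the stack name, template file and parameter are hypothetical placeholders.

```python
import boto3

# Sketch: launch a templated environment from a version-controlled
# CloudFormation template. Names and parameters are hypothetical.
cfn = boto3.client("cloudformation")

with open("network-foundation.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="prod-network-foundation",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "production"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed when the template creates IAM roles
)

# Block until the foundation is fully built before configuration management takes over.
cfn.get_waiter("stack_create_complete").wait(StackName="prod-network-foundation")
```

Because every environment is launched from the same reviewed template, a security group change made once in the template reaches development, staging and production identically.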

There have been a number of tools released in the last year to build out templates of AWS resources. Our opinion is that CloudFormation is the best tool available, despite certain limitations.

Here are a few tasks that CloudFormation performs:

  • Build network foundation
  • Configure gateways and access points
  • Install management services, like Puppet
  • Allocate Amazon S3 buckets
  • Attach encrypted volumes
  • Control and manage access through IAM
  • Register DNS names with Amazon Route 53
  • Configure log shipping and retention

Configuration management: Puppet

Boot time is arguably the most crucial part of an instance’s lifetime. Puppet or another configuration management tool like Chef or Ansible not only simplifies and speeds up the bootstrap process, but, for security purposes, continually checks in on instances and rolls back non-authorized changes. Puppet manifests are therefore a living single source of truth on instance configuration across the environment. This means that engineers can ensure that no (permanent) changes are made at the instance level that compromise security.

Puppet is also used to install various security features on an instance, such as intrusion detection system agents, log shipping and monitoring software, as well as requiring MFA and binding the instance to central authentication.

If there is more experience on an IT team with tools like Chef or Ansible, these are equally powerful solutions for configuration management.

Iterative deployment process: AWS CodeDeploy / Jenkins

Ideally, enterprises want to get to a place where deployment is fully automated. This not only maintains high availability by reducing human error, but it also makes it possible for an organisation to respond to security threats quickly.

AWS CodeDeploy is one of the best tools for achieving automated deployments. Unlike Jenkins, which requires a bit more custom configuration, CodeDeploy can be used across multiple environments simultaneously. Any effort that removes custom work frees up engineering time to focus on more important features — whether that’s developing new code or maintaining the scripts that make security automation possible.
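
The following is a minimal sketch of triggering such a deployment through the CodeDeploy API with boto3; the application name, deployment group and S3 revision details are hypothetical placeholders.

```python
import boto3

# Sketch: kick off an automated deployment of a reviewed release bundle.
# Application, deployment group and S3 location are hypothetical.
codedeploy = boto3.client("codedeploy")

resp = codedeploy.create_deployment(
    applicationName="customer-portal",
    deploymentGroupName="production",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "example-release-artifacts",
            "key": "customer-portal-1.4.2.zip",
            "bundleType": "zip",
        },
    },
    description="Automated rollout of a security patch release",
)
print("Started deployment:", resp["deploymentId"])
```

Because the same call can be pointed at each environment’s deployment group in turn, an urgent fix follows the same audited path everywhere instead of an ad-hoc manual one.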

Monitoring: EM7, Alert Logic, CloudCheckr

By choosing the right third party monitoring tools, you can bake automated security monitoring into every deploy. ScienceLogic’s EM7 is the best tool we’ve found for automated reporting and trend analysis, while Alert Logic provides the most sophisticated intrusion detection tools. CloudCheckr not only provides excellent cost analysis, but it also has introduced governance features that help enterprises stay compliant. Enterprises are usually quite familiar with these tools, and they can function across public clouds and on-premises environments.

Coming soon to the enterprise?

Security automation is not easy.

In fact, for some enterprises, it may be more cost-effective in the short term to configure security manually; CloudFormation and Puppet take several weeks or even months to learn, and it may take a consulting engagement with a third party cloud expert to even understand the foundational security policies in place across different systems.

However, we expect that a manual security approach will be impossible in five years. Enterprises are already spanning on-premises data centres, on-premises virtualised data centres, colocation centres, some public clouds, etc. As the enterprise moves towards hybrid cloud on an application-by-application basis, this means even more complexity.

But complexity does not have to mean custom configuration. Security automation tools, combined with tools like containers, mean that engineers can escape manual configuration work on individual servers. As security is abstracted away from the underlying infrastructure, we have the opportunity to improve our overall security posture.

This is the next frontier: security as code.

Salesforce bakes security, compliance into native apps with Shield

Salesforce has launched Shield in a bid to improve confidence among highly regulated cloud adopters

Salesforce this week announced Salesforce Shield, a portfolio of “drag and drop” security and compliance assurance services that developers can bake into native Salesforce apps.

The Shield services include field audit trail and data integrity tracking, data encryption, archiving and event monitoring.

Salesforce said the services are already in use by some of the company’s clients in the financial services and healthcare services sectors.

“While many companies are leveraging the cloud to build apps at the speed of business, those in regulated industries have struggled to take full advantage of the cloud due to regulatory and compliance constraints,” said Tod Nielsen, executive vice president of Salesforce1 Platform, Salesforce.

“With Salesforce Shield, we are liberating these IT leaders and developers, and empowering them to quickly build the cloud apps their businesses need, with the trust Salesforce is known for.”

Salesforce said the move will help provide assurance to more heavily regulated sectors developing applications on the Salesforce platform, particularly those leaning more heavily on mobile.

Indeed, mobile security has been a big focus for the firm in recent months. In April the company acquired Toopher, a Texas-based mobile authentication startup, and towards the end of last year the company joined Verizon’s dark fibre cloud interconnection service to give its customers more secure options for linking to its cloud platform.

IoT platform Thread unveiled, Qualcomm joins

Another week, another IoT standard ecosystem

Thread, an IP-based wireless protocol designed for consumer IoT in the home, has been unveiled, with the organisation also confirming Qualcomm Technologies as a member of its board of directors, reports Telecoms.com.

The IoT protocol, according to Thread, is designed for consumers and devices in and around the home, and extends domestic M2M connections into the cloud using IP in a low-power mesh network. Having announced its formation in late 2014, Thread now comprises more than 160 member companies. Qualcomm has also been appointed to the group’s board of directors, where it will be more heavily involved in the development of Thread-compatible products, as well as the protocol itself.

Considering the group has only been operational for just over nine months, the progress being made within Thread reflects the rate of development within the wider IoT industry in general, a sentiment shared by Chris Boross, Thread Group’s president.

“In the nine months since opening membership, more than 160 companies have joined the Thread Group, and now the group is launching the Thread technical specification, which has now completed extensive interoperability testing,” he said. “Today’s announcement means that Thread products are on the way and will be in customers’ hands very shortly. I’m excited to see what kind of products and experiences Thread developers will build.”

Qualcomm’s arrival on the board of directors also shows how large and influential tech firms are hedging their bets on the development of IoT; the company also contributes to the AllSeen Alliance, another IoT platform development forum. Raj Talluri, Qualcomm’s SVP of product management, reckons the work being done at Thread will help further IoT development.

“When it comes to easily and securely connecting the smart home, the work of industry alliances like the Thread Group are essential,” he said. “Collaborating with the Thread Group allows for the integration of this technology into the world’s leading brands of household appliances, and to thereby speed innovation and market transformation.”

Thread coming to the fore serves to illustrate how progress in various aspects of IoT connectivity is accelerating. There’s a plethora of platforms addressing separate networking considerations, from Sigfox and its cellular IoT platform, to the Wireless IoT Forum deploying low-powered wide area networks for city-wide M2M connectivity. If the variety of industry stakeholders involved are indeed intent on open collaboration and cooperation to ensure the more altruistic progression of IoT, then sooner or later one would assume a level of convergence of these platforms is inevitable.

Rackspace offers up fanatical support for Microsoft Azure

Rackspace has announced it has expanded its service offering with Microsoft, working with customers to speed up their deployment of Azure cloud services.

The two companies are unveiling a self-service hybrid solution utilising Rackspace’s private cloud powered by the Microsoft Cloud Platform. The two companies, which have enjoyed a 13-year partnership, will aim to minimise costs and optimise performance through Rackspace’s feted Fanatical Support, offering a 100% network uptime guarantee and one-hour hardware replacement.

This move is targeted squarely at organisations that are not quite ready to go all out with the public cloud, serving customers who want public, private and hybrid cloud environments built on the Microsoft Azure stack. Alongside the support, Rackspace will offer architectural guidance to customers, helping them build applications – in some cases taking into account on-premises IT environments – and optimise databases.

Taylor Rhodes, Rackspace CEO, said following the announcement: “Our strategy at Rackspace has always been to provide the world’s best expertise and service for industry-leading technologies. We’re pleased to expand our relationship with Microsoft and the options we provide for our customers by offering Fanatical Support for Azure.”

Rackspace recently won the 2015 Microsoft Hosting Partner of the Year award. The company has had an uneasy last 12 months, in which it abandoned takeover plans in September and saw its shares plummet. Following the Microsoft deal, investors are being much kinder: Zacks upgraded the firm’s shares from a hold rating to a strong buy rating, while Wells Fargo & Co reaffirmed its outperform rating.

Elsewhere, the open cloud provider is making a “significant investment” in CrowdStrike’s $100 million Series C financing round. CrowdStrike, whose total funding now stands at $156m, counts Rackspace as a customer and provides a software-as-a-service endpoint protection platform.