Equinix to offer direct access to Alibaba’s cloud service

Equinix will offer direct links to Alibaba’s cloud

Equinix has signed an agreement with Alibaba that will see the American datacentre incumbent provide direct access to the Chinese ecommerce firm’s cloud computing service.

The deal will see Equinix add Aliyun, Alibaba’s cloud computing division, to the growing roster of cloud services integrated with its cloud interconnection service, and offer direct access to Aliyun’s IaaS and SaaS offerings in both Asia and North America.

Equinix said it’s aiming this primarily at large multinationals looking to expand their infrastructure into Asia.

“Our multi-national enterprise customers are increasingly asking for access to the Aliyun cloud platform, as they deploy cloud-based applications across Asia,” said Chris Sharp, vice president of cloud innovation, Equinix.

“By providing this access in two strategic markets, we’re empowering businesses to build secure, private clouds, without compromising network and application performance,” Sharp said.

Sicheng Yu, vice president of Aliyun, said: “Aliyun is very excited about our global partnership with Equinix, who not only has a global footprint of cutting-edge datacentres, but has also brought together the most abundant cloud players and tenants in the cloud computing ecosystem on its Equinix Cloud Exchange platform. Connecting the Equinix ecosystem with our Aliyun cloud services on Cloud Exchange will provide customers with the best-of-breed choices and flexibility.”

The move will see Equinix expand its reach in Asia, a fast-growing market for cloud services, and comes just one week after Equinix announced it would bolster its European footprint with the TelecityGroup merger.

The History of Windows in One Infographic

With the official release of Windows 10 just around the corner, we’re waxing nostalgic over our favorite versions of the OS. In fact, we even put together this multi-page slideshow of Windows over the years. However, if you prefer a visual summary, check out this awesome infographic detailing the history of Windows from humble start to OS superpower […]

Cisco Acquires Piston Cloud Computing

Cisco has announced its intent to acquire Piston Cloud Computing, a four-year-old company offering an OpenStack cloud distribution. The acquisition is intended to improve the product, delivery and operational capabilities of Cisco’s Intercloud services. Cisco also recently acquired Metacloud, a private cloud provider, for the same purpose.

Intercloud was launched in 2014 in an effort to create a connected cloud network. It is made up of the Intercloud Fabric (which allows workloads to be migrated among various public clouds) and the Application Centric Infrastructure software (which automatically provisions resources depending on the workload). Intercloud is part of Cisco’s Data Center division, which has seen rapid growth in recent years. In the 2014 fiscal year, divisional sales grew by 27% on the back of expansion in the company’s Unified Computing System products and cloud offerings; so far this year they have grown by 25%.

Cisco currently has over 350 data centers around the world, as well as partnerships with companies such as Microsoft, Telstra, Johnson Controls, Wipro and Red Hat. These partnerships are intended to expand its infrastructure-as-a-service offerings and accelerate its Internet of Everything concept. Intercloud uses Cisco’s Application Centric Infrastructure to improve application performance and security.

Cisco has stated that it plans to spend $1 billion over the next two years expanding its cloud business. It also estimates that the Internet of Everything market will be worth $19 trillion over the next 10 years.

Microservices vs Microsegmentation | @DevOpsSummit #DevOps #Docker #Microservices

Let’s just nip the conflation of these terms in the bud, shall we?

“Micro” is big these days. Both microservices and microsegmentation are having and will continue to have an impact on data center architecture, but not necessarily for the same reasons. There’s a growing trend in which folks – particularly those with a network background – conflate the two and use them to mean the same thing.

They are not.

One is about the application. The other, the network. There is a relationship, but it’s a voluntary one. They are two very different things and we need to straighten out the misconceptions that are rapidly becoming common.

Microservices

Microservices are the resulting set of services (mini applications, if you will) that arise from the process of decomposing an application into smaller pieces. If you take a monolithic application and segment it into many pieces, you end up with microservices. It is an application architecture; an approach to designing applications.
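
As a rough sketch of that decomposition (the service names and Flask-based split below are purely illustrative, not drawn from this post), each of the “many pieces” might become a small, independently deployable HTTP service that owns its own API and data:

```python
# A minimal, hypothetical sketch: the same shop functionality split into two
# independently deployable microservices, each owning its own API and, in a
# real system, its own datastore. Flask is used here purely for brevity.

from flask import Flask, jsonify

# --- catalog service: owns product data only -----------------------------
catalog = Flask("catalog")

@catalog.route("/products/<product_id>")
def get_product(product_id):
    # A real service would query the catalog's own datastore here.
    return jsonify({"id": product_id, "name": "widget", "price": 9.99})

# --- order service: owns orders, talks to the catalog over the network ---
orders = Flask("orders")

@orders.route("/orders/<order_id>")
def get_order(order_id):
    # The order service never reaches into the catalog's database; it would
    # call the catalog service's HTTP API (e.g. GET /products/<id>) instead.
    return jsonify({"id": order_id, "product_id": "42", "status": "shipped"})

if __name__ == "__main__":
    # Each service would normally run as its own process or container,
    # scaled and upgraded independently, e.g.:
    #   catalog.run(port=5000)   and, separately,   orders.run(port=5001)
    orders.run(port=5001)
```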

This architectural approach has a significant impact on the network architecture, as it forces application-affine services like load balancing, caching and acceleration to be distributed more broadly and located closer to the individual services. Microservices as an approach is a forcing factor in the bifurcation of the network, as it separates application-affine services from corporate-affine services.

Microservice architectures are beneficial in that they are highly efficient; they separate functional or object domains and thus lend themselves well to a more targeted and efficient scalability model. The approach is particularly useful when designing APIs, as in addition to the scalability benefits it also localizes capabilities and enables isolated upgrades and new features without necessarily disrupting other services (and the teams developing them). This lends itself well to agile methodologies while enabling a greater focus on API development as it relates to other services as well as to the applications that will use the service.

Microsegmentation

Microsegmentation is about the network; to be precise, at the moment it’s about the security functions in the network and where they reside. It’s a network architecture that, like microservices, breaks up a monolithic approach to something (in this case security) and distributes it into multiple services. You could say that microsegmentation is micro-security-services, in that it decomposes a security policy into multiple, focused security policies and distributes them in a resource-affine manner. That is, security policies peculiar to an application are physically located closer to that application, rather than at the edge of the network as part of a grandiose, corporate policy.

This approach, while initially focused on security, can be applied to other services as well. As noted above, as a result of a microservice approach to applications the network naturally bifurcates, and application-affine services (like security) move closer to the application. Which is kind of what microsegmentation is all about: smaller, distributed “segments” of security (and other application-affine services like load balancing and caching) logically deployed close to the application.
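
To make the contrast concrete, here is a loose sketch (the policy format and application names are hypothetical, not any vendor’s actual syntax) of one broad edge policy versus per-application “micro” policies deployed close to each workload:

```python
# Illustrative only: one broad edge policy versus per-application "micro"
# policies enforced next to each workload. The rule format and application
# names are hypothetical, not any vendor's syntax.

edge_policy = {
    # A single corporate policy at the network edge: every change here
    # potentially affects every application sitting behind it.
    "allow": ["tcp/443 from any to any-app", "tcp/80 from any to any-app"],
}

micro_policies = {
    # Per-application policies, logically deployed close to each workload.
    "billing-app": {
        "allow": ["tcp/443 from api-gateway to billing-app"],
        "deny":  ["any from any to billing-app"],  # default deny
    },
    "catalog-app": {
        "allow": ["tcp/443 from api-gateway to catalog-app",
                  "tcp/5432 from catalog-app to catalog-db"],
        "deny":  ["any from any to catalog-app"],  # default deny
    },
}

def blast_radius(policy_name: str) -> str:
    """Roughly how much is affected if a change to this policy goes wrong."""
    if policy_name == "edge":
        return "every application behind the edge"
    return f"only {policy_name}"

print(blast_radius("edge"))         # every application behind the edge
print(blast_radius("billing-app"))  # only billing-app
```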

Thus, if there is any relationship between the two approaches, it is that microservices tend to create an environment in which microsegmentation occurs.

There are other reasons for microsegmentation, including the reality that the scale required at the edge to support every application-specific service is simply pushing IT to the edge of its wits (pun only somewhat intended). The other driving factor (or maybe it’s a benefit?) is service isolation, which provides for fewer disruptions in the event of changes occurring in a single service. For example, a change to the core firewall is considered potentially highly disruptive because if it goes wrong, everything breaks. Changing the firewall rules on a localized, isolated service responsible for serving two or three applications has a much lower rate of disruption should something go wrong.

This is highly desirable in a complex environment in which stability is as important as agility.

COHABITATION

In a nutshell, microservices are to applications what microsegmentation is to network services. Both are about decomposing a monolithic architecture into its core components and distributing them topologically in a way that enables more scalable, secure and isolated domains of control.

The thing to remember is that dev deciding to leverage microservices does not in turn mean that the network somehow magically becomes microsegmented, nor does using microsegmentation to optimize the network service architecture mean that apps suddenly become microservices. Microsegmentation can be used to logically isolate monolithic applications as easily as it can microservices.

Each approach can be used independently of the other, although best practice in networking seems to indicate that if dev decides to go with microservices, microsegmentation is not going to be far behind. But the use of microsegmentation in the network does not mean dev is going to go all in with microservices.

Why human error is still the biggest risk to your cloud system going down

The number one risk to system availability remains human error, according to the latest disaster recovery industry report from CloudEndure.

The research examines the various protocols businesses have in place for downtime if – or when – it occurs. On a scale of one to 10, human error – including application bugs – scored 8.1, compared with network failures (7.2), cloud provider downtime (6.9) and external threats (6.7).

Even though the majority (83%) of organisations have an SLA goal of 99.9% or better, this doesn’t always translate into actual results. 44% of firms said they had at least one outage in the past three months, with 27% admitting their systems had gone down within the past month; only 9% of respondents said their systems had never gone down.
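
For a sense of what those goals permit, a quick back-of-the-envelope calculation (simple arithmetic, not a figure from the report) shows how little downtime a 99.9% target actually allows:

```python
# Rough back-of-the-envelope figures: downtime permitted per 30-day month
# by common availability goals. Simple arithmetic, not data from the report.

MINUTES_PER_MONTH = 30 * 24 * 60   # 43,200 minutes in a 30-day month

for availability in (0.999, 0.9999, 0.99999):
    allowed = MINUTES_PER_MONTH * (1 - availability)
    print(f"{availability:.3%} uptime -> ~{allowed:.1f} minutes of downtime/month")

# 99.900% uptime -> ~43.2 minutes of downtime/month
# 99.990% uptime -> ~4.3 minutes of downtime/month
# 99.999% uptime -> ~0.4 minutes of downtime/month
```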

Most intriguingly, more than a quarter of firms surveyed (28%) don’t measure service availability at all, and 15% said they do not share system availability numbers with customers. 37% said they meet their availability goals consistently, with 50% saying they hit their goals “most of the time.”

It’s worth noting what the accepted definition of ‘downtime’ is – as the report does not give a clear one. Half of respondents say downtime is simply where the system is not accessible, while roughly a quarter say it means the system is accessible but performance is highly degraded (26%) or some functions are not operational (24%).

Overwhelmingly, the respondents’ cloud provider of choice was Amazon Web Services (AWS). 59% of those polled said they used public cloud, with three quarters (74%) of that number opting for Amazon, ahead of Microsoft (7%), Google (6%) and Rackspace (4%). Not surprisingly, service availability was considered most critical to the customers of 33% of firms.

The report’s main claim is a “strong correlation” between the cost of downtime and the average hours per week invested in disaster recovery. 49% of respondents said they used their own measurement tools, with a quarter (24%) using some sort of third-party tool. According to respondents, remote storage backup (57%) is the most frequently used strategy to ensure system availability, ahead of storage replication (46%).

Previous reports from CloudEndure examined AWS and Microsoft Azure uptime figures for 2014: AWS showed a 41% reduction in performance issues quarter to quarter last year, while there were significantly more service interruptions in the last three quarters for Azure. 

Monitoring for #Microservices By @AppDynamics | @DevOpsSummit #DevOps #Docker #IoT

It’s no news that microservices are one of the top trends, if not the top trend, in application architectures today. Take large monolithic applications, which are brittle and difficult to change, and break them into smaller, manageable pieces to provide flexibility in deployment models, facilitating agile release and development to meet the needs of today’s rapidly shifting digital businesses. Unfortunately, with this change, application and infrastructure management becomes more complex due to size and technology changes, most often adding significantly more virtual machines and/or containers to handle the growing footprint of application instances.

G-Cloud: Much has been achieved, but the programme still needs work

The UK government is ahead of the curve in cloud, but work still needs doing

Thanks to G-Cloud, the once-stagnant public sector IT marketplace, previously dominated by a small number of large incumbent providers, is thriving. More and more SMEs are listing their assured cloud services on the framework, which is driving further competition and forcing down costs for public sector organisations, ultimately benefitting each and every UK taxpayer. But the programme still needs work.

G-Cloud initially aimed to achieve annual savings of more than £120m and to account for at least half of all new central Government spend by this year. The Government Digital Service has already estimated that G-Cloud is yielding efficiencies of at least 50 per cent, comfortably exceeding the initial target set when the Government’s Cloud Strategy was published in 2011.

According to the latest figures, total reported G-Cloud sales to date have now exceeded £591m, with 49 per cent of total sales by value and 58 per cent by volume having been awarded to SMEs. 76 per cent of total sales by value were through central Government and 24 per cent through the wider public sector, so while significant progress has been made, more work is clearly needed to educate local Government organisations on the benefits of G-Cloud and assured cloud services.

To provide an example of the significant savings achieved by a public sector organisation following a move to the cloud, the DVLA’s ‘View driving record’ platform, hosted on GOV.UK, secured online access to the driving records of up to 40 million drivers for the insurance industry, which it is hoped will help to reduce premiums. Thanks to innovative approaches including cloud hosting, the DVLA managed to save 66 per cent against the original cost estimate.

Contracts held within the wider public sector with an estimated total value of over £6bn are coming to an end. Therefore, continued focus must be placed on disaggregating large contracts to ensure that all digital and ICT requirements that can be based on the cloud are based on the cloud, and are sourced from the transparent and vendor-diverse Government Digital Marketplace.

Suppliers, especially SMEs and new players who don’t have extensive networks within the sector, also need much better visibility of downstream opportunities. Currently, G-Cloud is less transparent than conventional procurements in this respect, where pre-tender market engagements and prior information notices are now commonplace and expected.

However, where spend controls cannot be applied, outreach and education must accelerate, and G-Cloud terms and conditions must also meet the needs of the wider public sector. The G-Cloud two-year contract term is often cited as a reason for not procuring services through the framework, as is the perceived inability for buyers to incorporate local, but mandatory, terms and conditions.

The Public Contracts Regulations 2015 introduced a number of changes to EU procurement regulations, and implemented the Lord Young reforms, which aim to make public procurements more accessible and less onerous for SMEs. These regulations provide new opportunities for further contractual innovation, including (but not limited to) dynamic purchasing systems, clarification of what a material contract change means in practice, and giving buyers the ability to take supplier performance into account when awarding a contract.

The G-Cloud Framework terms and conditions must evolve to meet the needs of the market as a whole, introducing more flexibility to accommodate complex legacy and future requirements, and optimising the opportunities afforded by the new public contract regulations. The introduction of the Experian score as the sole means of determining a supplier’s financial health in the G-Cloud 6 Framework is very unfriendly to SMEs, and does not align with the Crown Commercial Service’s own policy on the evaluation of financial stability. The current drafting needs to be revisited for G-Cloud 7.

As all parts of the public sector are expected to be subject to ongoing fiscal pressure, and because digitising public services will continue to be a focus for the new Conservative Government, wider public sector uptake of G-Cloud services must continue to be a priority. Looking to the future of G-Cloud, the Government will need to put more focus on educating buyers on G-Cloud procurement and the very real opportunities that G-Cloud can bring, underlined by the many success stories to date, and on ensuring the framework terms and conditions are sufficiently flexible to support the needs of the entire buying community. G-Cloud demonstrates what is possible when Government is prepared to be radical and innovative, and to build on the significant progress that has been made, we hope that G-Cloud will be made a priority over the next five years.

Written by Nicky Stewart, commercial director at Skyscape Cloud Services

Philips health cloud lead: ‘Privacy, compliance, upgradability shaping IoT architecture’

Ad Dijkhoff says the company’s healthcare cloud ingests petabytes of data, experiencing 140 million device calls on its servers each day

Data privacy, compliance and upgradeability are having a deep impact on the architectures being developed for the Internet of Things, according to Ad Dijkhoff, platform manager for the HealthSuite Device Cloud at Philips.

Dijkhoff, who formerly helped manage the electronics giant’s infrastructure as the company’s datacentre programme manager, helped develop and now manages the company’s HealthSuite device cloud, which links over 7 million healthcare devices and sensors in conjunction with social media and electronic medical health record data to a range of backend data stores and front-end applications for disease prevention and social healthcare provision.

It collects all of the data for analysis and to help generate algorithms that improve the quality of the medical advice that can be generated from it; it also opens those datastores to developers, who can tap into the cloud service using purpose-built APIs.

“People transform from being consumers to being patients, and then back to being consumers. This is a tricky problem – because how do you deal with privacy? How do you deal with identity? How do you manage all of the service providers?” Dijkhoff said.

On the infrastructure side for its healthcare cloud service Philips is working with Rackspace and Alibaba’s cloud computing unit; it started in London and the company also has small instances deployed in Chicago, Houston and Hong Kong. It started with a private cloud, in part because the technologies used meant the most straightforward transition from its hosting provider at the time, and because it was the most feasible way to adapt the company’s existing security and data privacy policies.

“These devices are all different but they all share similar challenges. They all need to be identified and authenticated, first of all. Another issue is firmware downloadability – what we saw with consumer devices, and what we’re seeing in professional spaces, is that these devices will be updated a number of times during their lifetime, so you need that process to be cheap and easy.”

“Data collection is the most important service of them all. It’s around getting the behaviour of the device, or sensor behaviour, or the blood pressure reading or heart rate reading into a back end, but doing it in a safe and secure way.”
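
The article does not describe Philips’ actual API, but the pattern Dijkhoff outlines – identify and authenticate the device, then push a reading to a collection back end over a secure channel – might look roughly like the following sketch, in which the endpoint, token scheme and payload fields are entirely hypothetical:

```python
# Entirely hypothetical sketch of the pattern described above: a device
# authenticates itself, then posts a reading to a collection endpoint over
# TLS. The URL, token scheme and payload fields are illustrative only and
# are not Philips' actual API.

import requests  # common third-party HTTP client

COLLECT_URL = "https://device-cloud.example.com/v1/readings"  # placeholder

def send_reading(device_id: str, device_token: str, heart_rate_bpm: int) -> None:
    payload = {
        "device_id": device_id,   # the device must be identifiable...
        "metric": "heart_rate",
        "value": heart_rate_bpm,
        "unit": "bpm",
    }
    headers = {"Authorization": f"Bearer {device_token}"}  # ...and authenticated
    resp = requests.post(COLLECT_URL, json=payload, headers=headers, timeout=10)
    resp.raise_for_status()       # surface rejected or failed collections loudly

if __name__ == "__main__":
    # Would fail against the placeholder URL above; shown for shape only.
    send_reading("sensor-001", "example-token", 72)
```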

Dijkhoff told BCN that these issues had a deep influence architecturally, and explained that the company had to adopt a more modular approach to how it deployed each component, so that it could drop in cloud services where feasible – or use on-premise alternatives where necessary.

“Having to deal with legislation in different countries on data collection, how it can be handled, stored and processed, had to be built into the architecture from the very beginning, which created some pretty big challenges, and it’s probably going to be a big challenge for others moving forward with their own IoT plans,” he said. “How do you create something architecturally modular enough for that? We effectively treat data like a post office treats letters, but sometimes the addresses change and we have to account for that quickly.”
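
One way to picture that “post office” analogy (again purely illustrative, not Philips’ implementation) is a small routing layer that decides where a record may be stored based on its country of origin, so that only one table has to change when legislation does:

```python
# Purely illustrative: routing records to a storage region according to
# data-residency rules, in the spirit of the "post office" analogy above.
# The country-to-region table and region names are hypothetical.

RESIDENCY_RULES = {
    "NL": "eu-west",     # e.g. Dutch patient data kept in an EU region
    "DE": "eu-west",
    "US": "us-central",
    "HK": "apac-east",
}

def route_record(country_code: str, default_region: str = "eu-west") -> str:
    """Return the storage region a record should be written to.

    Like a post office, the router only inspects the 'address' (the country
    code); when the rules change, only the table above has to change.
    """
    return RESIDENCY_RULES.get(country_code, default_region)

assert route_record("DE") == "eu-west"
assert route_record("BR") == "eu-west"   # unknown origins fall back safely
```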

Day 3 Keynote at @CloudExpo New York | #Cloud #BigData #DevOps

In his keynote at 16th Cloud Expo, Rodney Rogers, CEO of Virtustream, discusses the evolution of the company from inception to its recent acquisition by EMC – including personal insights, lessons learned (and some WTF moments) along the way. Learn how Virtustream’s unique approach of combining the economics and elasticity of the consumer cloud model with proper performance, application automation and security into a platform became a breakout success with enterprise customers and a natural fit for the EMC Federation.

CSA tool helps cloud users evaluate data protection posture of providers

The CSA says the tool can help customers and providers improve their cloud data protection practices

The Cloud Security Alliance this week unveiled the next generation of a tool designed to enable cloud customers to evaluate the level of data protection precautions implemented by cloud service providers.

The Privacy Level Agreement (PLA) v2 tool aims to give customers a better sense of the extent to which their providers have practices, procedures and technologies in place to ensure data protection vis-à-vis European data privacy regulations.

It also provides guidance for cloud service providers on achieving compliance with privacy legislation in the EU, and on how they can disclose the level of personal data protection they offer to customers.

“The continued reliance and adoption of the PLA by cloud service providers worldwide has been an important building block for developing a modern and ethical privacy-rich framework to address the security challenges facing enterprises worldwide,” said Daniele Catteddu, EMEA managing director of CSA.

“This next version that addresses personal data protection compliance will be of significant importance in building the confidence of cloud consumers,” Catteddu said.

The tool, originally created in 2013, was developed by the PLA working group, which was organised to help transpose the recommendations on cloud computing from the Article 29 Working Party and EU national data protection regulators into an outline CSPs can use to disclose personal data handling practices.

“PLA v2 is a valuable tool to guide CSPs of any size to address EU personal data protection compliance,” said Paolo Balboni, co-chair of the PLA Working Group and founding partner of ICT Legal Consulting. “In a market where customers still struggle to assess CSP data protection compliance, PLA v2 aims to fill this gap and facilitate customer understanding.”