Category archive: Datacentre

Snooper’s charter a potential disaster, warns lobby of US firms

The ‘snooper’s charter’ could neutralise the contribution of Britain’s digital economy, according to a coalition of US tech corporations including Facebook, Google, Microsoft, Twitter and Yahoo.

In a collective submission to the Draft Investigatory Powers Bill Joint Committee, they argue that surveillance should be “targeted, lawful, proportionate, necessary, jurisdictionally bounded, and transparent.”

These principles, the collective informs the parliamentary committee, reflect the perspective of global companies that offer “borderless technologies to billions of people around the globe”.

The bill’s extraterritorial jurisdiction would create ‘conflicting legal obligations’ for them, the collective said. If the UK government instructs foreign companies what to do, then foreign governments may follow suit, they warned. A better long-term resolution might be the development of an ‘international framework’ with ‘a common set of rules’ to resolve jurisdictional conflicts.

“Encryption is a fundamental security tool, important to the security of the digital economy and crucial to the safety of web users worldwide,” the submission said. “We reject any proposals that would require companies to deliberately weaken the security of their products via backdoors, forced decryption or any other means.”

Another area of concern is the bill’s proposed legislation on Computer Network Exploitation which, the companies say, gives intelligence services legal powers to break into any system. This would be a very dangerous precedent to set, the submission argues, and “we would urge your Government to reconsider,” it said.

Finally, Facebook and its co-signatories registered concern that the new law would prevent any discussion of government surveillance, even in court. “We urge the Government to make clear that actions taken under authorization do not introduce new risks or vulnerabilities for users or businesses, and that the goal of eliminating vulnerabilities is one shared by the UK Government. Without this, it would be impossible to see how these provisions could meet the proportionality test.”

The group submission joins individual protests registered by Apple, EE, F-Secure, the Internet Service Providers’ Association, Mozilla, The Tor Project and Vodafone.

The interests of British citizens hang in a very tricky balance, according to analyst Clive Longbottom at Quocirca. “Forcing vendors to provide back door access to their systems and platforms is bloody stupid, as the bad guys will make just as much use of them. However, the problem with terrorism is that it respects no boundaries. Neither, to a greater extent, do any of these companies. They have built themselves on a basis of avoiding jurisdictions – only through such a means can they minimise their tax payments,” said Longbottom.

Apple to build new cloud infrastructure as Verizon sells off data centres – reports

Two US tech giants are heading in opposite directions on data centres, according to a couple of recent reports.

Local US news sources report that Apple has filed a permit application with Washoe County in Nevada to build a new cluster of data centre facilities near its original Reno site. The planning application for Apple’s new ‘Project Huckleberry’ involves the construction of the full shell of a new data centre, several data centre clusters and a support building. The new Huckleberry project will have essentially the same design as an earlier installation in Reno, dubbed Project Mills, according to Trevor Lloyd, senior planner for Washoe County Planning and Development’s Community Services.

Apple was first attracted to invest in the area in 2012 when it received an $89 million tax abatement incentive to locate in Reno Technology Park. Apple recently applied for permission to build a new substation to support further development as the existing site is reaching its capacity, according to Lloyd.

Permission for the site, based on past trends, should be granted by the end of January, according to Lloyd. Tax incentives for cloud infrastructure projects could make economic sense for regional development authorities given their long term impact, according to Mike Kazmierski, president of western Nevada’s Economic Development Authority. “When you put tens of hundreds of millions of dollars on a huge data centre project, you’re in it for the long haul,” said Kazmierski.

Cloud service provider Rackspace is also planning to build a data centre at Reno Technology Park.

The demands that data centres make on the local community are minor compared with the benefits that a cloud computing infrastructure brings to the community through economic investment – and owners of data centres should use this in negotiations, according to Kazmierski.

Meanwhile, a large stock of cloud infrastructure could come onto the market as telco Verizon Communications has reportedly begun a process to sell its global estate of 48 data centres. According to insiders quoted by Reuters, Verizon is aiming to raise over $2.5 billion and streamline its business. The colocation portfolio currently generates $275 million in EBITDA.

Telcos such as AT&T, CenturyLink and Windstream have also divested themselves of their data centre businesses in recent years.

Software defined storage and security drive cloud growth, say studies

Data centre builders and cloud service developers are at loggerheads over their priorities, according to two new reports.

The explosive growth of modern data centres is being catalysed by new hyperconverged infrastructures and software defined storage, says one study. Meanwhile another claims that enthusiasm for cloud projects to run over this infrastructure is being suffocated by security fears.

A global study by ActualTech Media for Atlantis Computing suggests that a large majority of data centres are now using hyperconverged infrastructure (HCIS) and software defined storage (SDS) techniques in the race to build computing capacity. Of the 1,267 leaders quizzed in 53 countries, 71 per cent said they are using or considering HCIS and SDS to beef up their infrastructure. However, another study, conducted on behalf of hosting company Rackspace, found that security was the overriding concern among the parties who will use these facilities.

The Hyperconverged Infrastructure and Software-Defined Storage 2016 Survey proves there is much confusion and hype in these markets, according to Scott D. Lowe, a partner at ActualTech Media, who said there is not enough data about real-world usage available.

While 75 per cent of data centres surveyed use disk-based storage, only 44 per cent plan to keep it in their long-term infrastructure, and 19 per cent will ditch it for HCIS or SDS. These decisions are motivated by the need for speed, convenience and money, according to the survey, with performance (72 per cent), high availability (68 per cent) and cost (68 per cent) as the top requirements.

However, the developers of software seem to have a different set of priorities, according to the Anatomy of a Cloud Migration study conducted for Rackspace by market researcher Vanson Bourne. The verdict from this survey group – 500 business decision makers rather than technology builders – was that security is the most important catalyst and can either speed up or slow down cloud adoption.

Company security was a key consideration in each of the top three motives named by the survey group. The biggest threat the survey group wanted to eliminate was escalating IT costs, named by 61 per cent of the group. The next biggest threat they want to avert is downtime, with 50 per cent identifying a need for better resilience and disaster recovery from the cloud. Over a third (38 per cent) identified IT itself as a source of threats (such as viruses and denial of service) that they would want a cloud project to address.

“Cloud has long been associated with a loss of control over information,” said Rackspace’s Chief Security Officer Brian Kelly, “but businesses are now realising this is a misconception.”

Toyota to build massive data centre and recruit partners to support smart car fleet

Car maker Toyota is to build a massive new IT infrastructure and data centre to support all the intelligence to be broadcast by its future range of smart cars. It is also looking for third-party partners to develop supporting services for its new fleet of connected vehicles.

The smart car maker unveiled its plans for a connected vehicle framework at the 2016 Consumer Electronics Show (CES) in Las Vegas.

A new data centre will be constructed and dedicated to collecting information from new Data Communication Modules (DCM), which are to be installed on the frameworks of all new vehicles. The Toyota Big Data Center (TBDC) – to be stationed in Toyota’s Smart Center – will analyse everything sent by the DCMs and ‘deploy services’ in response. As part of the connected car regime, Toyota cars could automatically summon the emergency services in response to all accidents, with calls being triggered by the deployment of an airbag. The airbag-induced emergency notification system will come as a standard feature, according to Toyota.

The new data comms modules will appear as a feature in 2017 on Toyota models in the US market only, but Toyota will roll the service out to other markets later as part of a plan to build a global DCM architecture by 2019. A global rollout is impossible until devices are standardised across the globe, it said.

Toyota said it is to invite third party developers to create services that will use the comms modules. It has already partnered with UIEvolution, which is building apps to provide vehicle data to Toyota-authorised third-party service providers.

Elsewhere at CES, Nvidia unveiled artificial-intelligence technology that will let cars sense the environment and decide their best course. NVIDIA CEO Jen-Hsun Huang promised that the DRIVE PX 2 will have ten times the performance of the first model. The new version will use an automotive supercomputing platform with 8 teraflops of processing power that can process 24 trillion deep learning operations a second.

Volvo said that next year it will lease out 100 XC90 luxury sports utility vehicles that will use DRIVE PX 2 technology to drive autonomously around Volvo’s hometown of Gothenburg. “The rear-view mirror is history,” said Huang.

ESI installs HPC data centre to support virtual prototyping

Manufacturing service provider ESI Group has announced that a new high performance computing (HPC) system is powering its cloud-based virtual prototyping service for a range of industries across Europe.

The new European HPC-driven data centre is based on the Teratec Campus in Paris, close to Europe’s biggest HPC centre, the Très Grand Centre de Calcul, the data centre of The French Alternative Energies and Atomic Energy Commission (CEA). The location was chosen in order to make collaborative HPC projects possible, according to ESI. The 13,000 square metre CEA campus has a supercomputer with a peak performance of 200 Teraflops and a CURIE supercomputer capable of 2 Petaflops.

ESI’s virtual prototyping, a product development process run on computer-aided design (CAD), computer-automated design (CAutoD) and computer-aided engineering (CAE) software in order to validate designs, is increasingly run on the cloud, it reports. Before manufacturers commit to making a physical prototype they create a 3D computer-generated model and simulate different test environments.

The launch of the new HPC data centre gives ESI a cloud computing point of delivery (PoD) to serve all 40 of ESI’s offices across Europe and the world. The HPC cloud PoD will also act as a platform for ESI’s new software development and engineering services.

The HPC facility was built by data centre specialist Legrand. The new HPC is needed to meet the change in workloads driven by virtualization and cloud computing, with annual data growth expected to rise from 50% in 2010 to 4,400% by 2020, according to Pascal Perrin, Datacenter Business Development Manager at Legrand.

Legrand subsidiary Minkels supplied and installed the supporting data centre hardware, including housing, UPS, cooling, monitoring and power distribution systems. The main challenge in supporting a supercomputer that can ramp up CPU activity by the petaflop, with petabytes of data moving in and out of memory, is securing the supporting resources, said Perrin. “Our solutions ensure the electrical and digital supply of the data centre at all times,” he said.

Skyhigh Networks opens European data centre to resolve Safe Harbour fears

Cloud security vendor Skyhigh Networks has opened a new data centre in Germany as it moves to strengthen its support of European customers and multinationals.

The Frankfurt facility is a response to increasing demand for data localisation within Europe, which has been stoked by the recent Safe Harbour ruling by the European Court of Justice.

In October BCN reported how a Court of Justice of the European Union (CJEU) ruling puts many companies at risk of prosecution by European privacy regulators if they transfer the data of EU citizens to the US without a demonstrable set of privacy safeguards.

The 4,000 firms that transfer their clients’ personal data to the United States currently have no means of demonstrating compliance with EU privacy regulations. As the legal situation currently stands, EU data protection law says companies cannot transfer EU citizens’ personal data to countries outside the EU that have insufficient privacy safeguards.

The new data centre will use a Hadoop cluster to analyse traffic and to identify and report on the risk of cloud services. It will provide interception, inspection, encryption and decryption services. The system will also run anomaly detection, reporting and data leak prevention services to secure Skyhigh’s clients’ cloud services.
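The article does not describe that pipeline in detail, but a toy example of the kind of Hadoop-based anomaly detection it alludes to might look like the following PySpark sketch; the log schema, HDFS path and three-sigma threshold are illustrative assumptions, not Skyhigh’s actual implementation.

```python
# Minimal sketch (not Skyhigh's pipeline): flag anomalous upload volumes
# in proxy logs stored on a Hadoop cluster, using PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("upload-anomaly-sketch").getOrCreate()

# Hypothetical log schema: user, cloud_service, bytes_uploaded, timestamp
logs = spark.read.parquet("hdfs:///proxy_logs/2016/01/")  # assumed path

# Total bytes each user uploads per day
daily = (logs
         .groupBy("user", F.to_date("timestamp").alias("day"))
         .agg(F.sum("bytes_uploaded").alias("daily_bytes")))

# Per-user baseline: mean and standard deviation of daily upload volume
stats = daily.groupBy("user").agg(
    F.mean("daily_bytes").alias("mu"),
    F.stddev("daily_bytes").alias("sigma"))

# Flag days more than three standard deviations above a user's own baseline
anomalies = (daily.join(stats, "user")
             .where(F.col("daily_bytes") > F.col("mu") + 3 * F.col("sigma")))

anomalies.show()
```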

Skyhigh said the new data centre gives customers a choice over where their data is processed and better performance, in addition to privacy and sovereignty. The data centre is on a site owned and managed by European employees.

“We are delighted that Skyhigh Networks has opened a data centre in Europe,” said David Cahill, Security Strategy and Architecture Manager at AIB, a bank with 2.6 million customers and 14,000 employees. Cahill said that conforming to existing European data protection laws and the General Data Protection Regulation expected in 2016 needs to be taken “very seriously”.

EMC launches new open source tech for the software defined datacentre

EMC is launching RackHD and revised versions of CoprHD and REX-Ray in its quest to be a top open source influence on tomorrow’s software defined datacentre industry.

RackHD is hardware management and orchestration software that promises to automate functions such as the discovery, description, provisioning and programming of servers. EMC says it will speed up the process of installing third platform apps by automatically updating firmware and installing operating systems.

Meanwhile, version 2.4 of storage automator CoprHD was improved with help from Intel and Oregon State University. It can now centralise and transform storage from multiple vendors into a simple management platform and interface, EMC claims.

The updated version of storage orchestration engine REX-Ray 0.3 has added storage platform support for Google Compute Engine in addition to EMC Isilon and EMC VMAX.

These products are aimed at modern data centres with a multi-vendor mix of storage, networking and servers and an increasing use of commodity hardware as the building blocks of software defined hyperscale infrastructure. In such environments, installing low-level operating systems or updating firmware and BIOS across numerous devices is a cumbersome manual task for data centre engineers, says EMC. RackHD was created to automate and simplify these fundamental tasks across a broad range of datacentre hardware.

According to EMC, developers can use the RackHD API as a component in a larger orchestration system or create a user interface for managing hardware services regardless of the underlying hardware in place.
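As a rough illustration of what that looks like in practice, the short Python sketch below polls a RackHD instance over its REST API and kicks off a workflow; the host, the 2.0-style endpoint paths and the workflow name are assumptions made for illustration rather than details taken from EMC’s announcement.

```python
# Minimal sketch: driving RackHD from a larger orchestration script.
# Endpoint paths follow the RackHD 2.0 REST layout; the host and the
# workflow name are illustrative assumptions.
import requests

RACKHD = "http://rackhd.example.com:9090"   # hypothetical RackHD endpoint

# Discover the nodes RackHD currently knows about
nodes = requests.get(f"{RACKHD}/api/2.0/nodes").json()

for node in nodes:
    print(node.get("id"), node.get("type"), node.get("name"))

# Kick off an OS-install workflow against the first compute node found
compute = [n for n in nodes if n.get("type") == "compute"]
if compute:
    node_id = compute[0]["id"]
    resp = requests.post(
        f"{RACKHD}/api/2.0/nodes/{node_id}/workflows",
        json={"name": "Graph.InstallCentOS"})   # workflow name assumed
    print(resp.status_code)
```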

Intel and Oregon State University have joined EMC’s CoprHD Community as the newest contributors to the storage vendor’s open source initiative. Intel is leading a project to integrate Keystone with CoprHD, allowing the use of the Cinder API and the CoprHD API to provide block storage services.
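A minimal sketch of the consuming side of that integration, assuming standard OpenStack client libraries (keystoneauth1 and python-cinderclient) and placeholder credentials: the caller authenticates against Keystone and provisions block storage through the Cinder API, and it makes no difference to the caller whether stock Cinder or CoprHD is serving the storage behind that API.

```python
# Sketch: authenticate against Keystone, then request block storage
# through the standard Cinder API. URLs and credentials are placeholders.
from keystoneauth1 import loading, session
from cinderclient import client as cinder_client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="http://keystone.example.com:5000/v3",   # placeholder
    username="demo", password="secret",
    project_name="demo",
    user_domain_id="default", project_domain_id="default")

sess = session.Session(auth=auth)
cinder = cinder_client.Client("3", session=sess)

# Request a 10 GB volume; where it lands is decided by the backend
vol = cinder.volumes.create(size=10, name="coprhd-demo-volume")
print(vol.id, vol.status)
```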

“We discovered how difficult it was to implement any kind of automation tooling for a mix of storage systems,” said Shayne Huddleston, Director of IT Infrastructure at Oregon State University. “Collaborating with the CoprHD community will allow us to avoid vendor lock-in and support our entire infrastructure.”

Red Hat helps Medlab share supercomputer in the cloud

A cloud of bioinformatics intelligence has been harmonised by Red Hat to create ‘virtual supercomputers’ that can be shared by the eMedLab collective of research institutes.

The upshot is that researchers at institutes such as the Wellcome Trust Sanger, UCL and King’s College London can carry out much more powerful data analysis when researching cancers, cardio-vascular conditions and rare diseases.

Since 2014 hundreds of researchers across eMedLab have been able to use a high performance computer (HPC) with 6,000 cores of processing power and 6 Petabytes of storage from their own locations. However, the cloud environment now collectively created by technology partners Red Hat, Lenovo, IBM and Mellanox, along with supercomputing integrator OCF, means none of the users have to shift their data to the computer. Each of the seven institutes can configure its share of the HPC according to its needs, by self-selecting the memory, processors and storage it will need.

The new HPC cloud environment uses a Red Hat Enterprise Linux OpenStack Platform with Lenovo Flex hardware to create virtual HPC clusters bespoke to each individual researcher’s requirements. The system was designed and configured by OCF, working with partners Red Hat, Lenovo, Mellanox and eMedLab’s research technologists.
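eMedLab’s actual tooling is not described, but on an OpenStack-based cloud a researcher’s self-selected virtual cluster could be carved out with a few openstacksdk calls along these lines; the cloud name, flavour, image and network below are hypothetical.

```python
# Sketch (not eMedLab's tooling): provisioning a small bespoke virtual
# cluster on an OpenStack cloud with the openstacksdk library.
import openstack

conn = openstack.connect(cloud="emedlab")   # credentials from clouds.yaml

flavor = conn.compute.find_flavor("m1.xlarge")        # researcher-chosen CPU/RAM
image = conn.compute.find_image("centos-7-bioinfo")   # assumed base image
network = conn.network.find_network("project-net")    # assumed project network

# Spin up four identical worker nodes for the researcher's own cluster
for i in range(4):
    conn.compute.create_server(
        name=f"hpc-worker-{i}",
        flavor_id=flavor.id,
        image_id=image.id,
        networks=[{"uuid": network.id}])
```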

With the HPC hosted at a shared data centre for education and research, the cloud configuration has made it possible to run a variety of research projects concurrently. The facility, aimed solely at the biomedical research sector, changes the way data sets are shared between leading scientific institutions internationally.

The eMedLab partnership was formed in 2014 with funding from the Medical Research Council. Original members University College London, Queen Mary University of London, London School of Hygiene & Tropical Medicine, the Francis Crick Institute, the Wellcome Trust Sanger Institute and the EMBL European Bioinformatics Institute have been joined recently by King’s College London.

“Bioinformatics is a very data-intensive discipline,” says Jacky Pallas, Director of Research Platforms at University College London. “We study a lot of de-identified, anonymous human data. It’s not practical for scientists to replicate the same datasets across their own, separate physical HPC resources, so we’re creating a single store for up to 6 Petabytes of data and a shared HPC environment within which researchers can build their own virtual clusters to support their work.”

In other news, Red Hat has announced a new upgrade of CloudForms with better hybrid cloud management through expanded support for Microsoft Azure, advanced container management and improvements to its self-service features.

Abraxas uses Huawei Cloud Fabric for SDN datacentre

Cloud service provider Abraxas has built a new virtualized multi-tenant cloud datacentre in Geneva, Switzerland using Huawei’s Cloud Fabric systems.

Huawei’s Cloud Fabric will give the datacentre the foundations on which to build a software defined network later, according to outsourcing giant Abraxas, which runs cloud computing services for enterprises, government agencies and scientific research institutions across Europe.

The Cloud Fabric is built out of a network of Huawei’s CloudEngine datacentre switches to create what Huawei describes as a Transparent Interconnection of Lots of Links (TRILL) and Ethernet Virtual Network (EVN). The Huawei equipment helped Abraxas build an ultra-large cross-datacentre Layer 2 network, which it says will give datacentre managers and cloud operators complete flexibility when installing Virtual Machine (VM) resources.

Virtualization of these core switches, using a technique that Huawei describes as “1:N”, helps to lower the running cost of the network and gives more service options with its variety of Virtual Switches (vSwitches), each of which can create completely independent autonomous sub-networks. The CloudEngine datacentre switches, when used with Huawei’s Agile Controller, can create the right conditions for a completely software defined network when the time comes.

Abraxas needed to make more efficient use of its IT resources and to create the foundation for a strategy to migrate services onto its datacentres, said Olaf Sonderegger, ICT Architect, Infrastructure Management at Abraxas. But it also had to prepare for the virtualised future, said Sonderegger. “In order to fulfil sustainable service development, our datacentre network architecture has to be flexible enough to evolve into SDN-enabled architecture,” said Sonderegger.

Google signs five deals to green power its cloud services

Cloud service giant Google has announced five new deals to buy 781MW of renewable energy from suppliers in the US, Sweden and Chile, according to a report on Bloomberg.

The deals add up to the biggest purchase of renewable energy ever made by a company that is not a utility, according to Michael Terrell, Google’s principal of energy and global infrastructure.

Google will buy 200 megawatts of power from Oklahoma-based Renewable Energy Systems Americas’ Bluestem wind project. From the same US state, another 200 megawatts will come from the Great Western wind project run by Electricite de France. In addition, Google will power its cloud services with 225 megawatts of wind power from independent power producer Invenergy.

Google’s data centres and cloud services in South America could become carbon free when the 80 megawatts of solar power that it has ordered from Acciona Energia’s El Romero farm in Chile comes online.

In Scandinavia the cloud service provider has agreed to buy 76 megawatts of wind power from Eolus Vind’s Jenasen wind project to be built in Vasternorrland County, Sweden.

In July, Google committed to tripling its purchases of renewable energy by 2025. At the time, it had contracts to buy 1.1 GW of sustainably sourced power.

Google’s first ever green power deal was in 2010, when it agreed to buy power from a wind farm in Iowa. Last week, it announced plans to buy 61 megawatts from a solar farm in North Carolina.