Category archive: Datacentre

Rackspace updates OpenStack-powered cloud server, OnMetal


Rackspace has updated its OpenStack-powered cloud server, OnMetal, focusing its new features on building connectivity between public cloud and dedicated hardware.

The company says OnMetal delivers enhanced compute power and is designed for customers aiming to run workloads such as Cassandra, Docker and Spark, which require intensive data processing as well as the ability to scale and deploy quickly.

“With the combination of new features and performance capabilities in the next generation of OnMetal, it can be a solution for many customers seeking OpenStack as the platform to run their most demanding workloads,” said Paul Voccio, VP Software Development at Rackspace.

The new servers, designed from Open Compute Project specs, feature the Intel Xeon E5-2600 v3 processors, and build on Rackspace’s journey to lead the OpenStack market. Last month, Rackspace added an OpenStack-as-a-Service option, in partnership with Red Hat, to its proposition while highlighting its ambitions “to deliver the most reliable and easy-to-use OpenStack private and hybrid clouds in the world.”

Rackspace claims application performance and reliability improve with OnMetal cloud servers. The bare metal offering, generally associated with increased security, has helped its customer Brigade avoid the performance limitations common to virtualised environments.

“OnMetal has played a significant role in our ability to deliver the Brigade app with optimal uptime, and to innovate and grow the application with the performance of a dedicated environment,” said John Thrall, CTO of Brigade.

Juniper Networks and Lenovo form global datacentre partnership

Lenovo and Juniper Networks have announced a global strategic partnership to drive development of next-generation datacentre infrastructure solutions.

The partnership will focus on next-generation converged, hyper-converged, and hyper-scale data centre infrastructure solutions for enterprise and web-scale customers. The aim of the union will be to deliver flexible and cheaper solutions for customers, with a strong focus on simplifying user experience.

“Partnering with Lenovo expands Juniper’s strategy to deliver a full-stack solution for a wide range of data centres, from the mid-range enterprise to private cloud and to hyper-scale customers,” said Juniper Networks CEO Rami Rahim. “We are excited about collaborating with Lenovo to leverage the full power of our IP-networking portfolio based on Junos OS and Contrail, in delivering the next generation of converged, hyper-converged, and hyper-scale solutions to customers in China and globally.”

As part of the partnership, customers will be able to purchase Juniper networking products directly through Lenovo, as well as receiving a consolidated support function for both companies. With the move to disaggregation of hardware and software in the datacentre, the two companies intend to bring open, flexible solutions to market, leveraging the ONIE (Open Network Install Environment) model.

“Lenovo is on a mission to become the market leader in datacentre solutions. We will continue to invest in the development and delivery of disruptive IT solutions to shape next-generation data centres,” said Gerry Smith, Executive VP and COO at Lenovo’s PC and Enterprise Business Group. “Our partnership with Juniper Networks provides Lenovo access to an industry-leading portfolio of products that includes Software Defined Networking solutions – essential for state-of-the-art data centre offerings.”

With a focus on the Chinese market, current plans centre on a joint go-to-market strategy, as well as a tailor-made resell model to address unique localisation requirements in China.

Spotify shifts all music from data centres to Google Cloud

Music streaming service Spotify has announced that it is shifting the way it stores music for customers, copying all the tunes from its data centres onto Google’s Cloud Platform.

In a blog post, Spotify’s VP of Engineering & Infrastructure Nicholas Harteau explained that though the company’s data centres had served it well, the cloud is now sufficiently mature to surpass the level of quality, performance and cost Spotify got from owning its infrastructure. Spotify will now get its platform infrastructure from Google Cloud Platform ‘everywhere’, Harteau revealed.

“This is a big deal,” he said. Though Spotify has taken a traditional approach to delivering its music streams, it no longer feels it needs to buy or lease data-centre space, server hardware and networking gear to guarantee being as close to its customers as possible, according to Harteau.

“Like good engineers, we asked ourselves: do we really need to do all this stuff? For a long time the answer was yes. Recently that balance has shifted,” he said.

Operating data centres has been a painful necessity for Spotify since it launched in 2008, because it was the only way to guarantee the quality, performance and cost of its service. However, these days the storage, computing and network services available from cloud providers are as high quality, high performance and low cost as anything Spotify could create from the traditional ownership model, said Harteau.

Harteau explained why Spotify preferred Google’s cloud service to that of runaway market leader Amazon Web Services (AWS). The decision was shaped by Spotify’s experience with Google’s data platform and tools. “Good infrastructure isn’t just about keeping things up and running, it’s about making all of our teams more efficient and more effective, and Google’s data stack does that for us in spades,” he continued.

Harteau cited Dataproc’s batch processing, event delivery with Pub/Sub and the ‘nearly magical’ capacity of BigQuery as the three most persuasive features of Google’s cloud service offering.
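As a rough, hypothetical illustration of the Google data stack Harteau refers to, the sketch below publishes an event to Pub/Sub and runs an ad-hoc BigQuery query with Google’s Python client libraries; the project, topic, dataset and table names are invented for the example and are not Spotify’s.

```python
# Hypothetical illustration only: project, topic, dataset and table names are
# invented and are not Spotify's. Requires google-cloud-pubsub and
# google-cloud-bigquery.
from google.cloud import bigquery, pubsub_v1

# Event delivery with Pub/Sub: publish one playback event.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "playback-events")
publisher.publish(topic_path, b'{"track_id": "abc123", "ms_played": 30000}').result()

# Ad-hoc analytics with BigQuery: top tracks by play count.
bq = bigquery.Client(project="example-project")
query = """
    SELECT track_id, COUNT(*) AS plays
    FROM `example-project.analytics.playback_events`
    GROUP BY track_id
    ORDER BY plays DESC
    LIMIT 10
"""
for row in bq.query(query).result():
    print(row.track_id, row.plays)
```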

Cohesity claims data silo fragmentation solution

Santa Clara-based start-up Cohesity claims it will be able to drastically reduce the escalating costs of secondary storage.

The new Cohesity Data Platform achieves this, it reckons, by consolidating all the diverse backup, archive, testing, development and replication systems onto a single, scalable entity.

In response to feedback from early adopters, it has now added site-to-site replication, cloud archive, and hardware-accelerated, 256-bit encryption to version 2.0 of the Data Platform (DP).

The system tackles one of the by-products of the proliferation of cloud systems: the creation of fragmented data silos. These are the after-effects of the rapid, unstructured growth of IT, which led to the adoption of endless varieties of individual systems for handling backup, file services, analytics and other secondary storage use cases. By unifying them, Cohesity claims it can cut the storage footprint of a data centre by 80%, and promises an immediate, tangible return on investment by obviating the need for separate backup products.

Among the time-saving features added to the system are automated virtual machine cloning for testing and development, and a new public cloud archival tier. The latter gives enterprise users the option of spilling over their least-used data to Google Cloud Storage Nearline, Microsoft Azure or Amazon S3 and Glacier in order to cut costs. The Cohesity Data Platform 2.0 also provides ‘adaptive throttling of backup streams’, which minimises the burden that storage places on the production infrastructure.
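The Cohesity archival tier itself is proprietary, but the underlying idea of tiering cold data to cheaper object storage can be illustrated with a generic sketch; the example below uses a hypothetical S3 bucket and an AWS lifecycle rule rather than Cohesity’s own API.

```python
# Generic illustration of cold-data tiering, not Cohesity's API: an S3
# lifecycle rule that moves objects untouched for 90 days to Glacier.
# The bucket name and prefix are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-secondary-storage",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-cold-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```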

“We manage data sprawl with a hyperconverged solution that uses flash, compute and policy-based quality of service,” said Cohesity CEO Mohit Aron.

Apple augments CloudKit with new APIs – eyes enterprise

Apple has given developers a new option in CloudKit: a web-based application programming interface (API) into its servers. The new feature was announced on the Apple developer news blog.

The new web interface gives users access to the same data as a developer’s app. It also makes it easier to read and write to the CloudKit public database from a server-side process or script with a server-to-server key, says Apple.

The interface is designed to help developers extend the use of the iCloud CloudKit database beyond user interaction with iOS, Mac or web apps, and to run independent code on servers that can add, delete and modify records in the CloudKit stack. Originally, any interaction with CloudKit was limited to the APIs that Apple provided in apps, but Apple has now granted developers greater licence to use CloudKit outside the confines of its own platforms.

Developers had complained that though the CloudKit stack was useful its limitations stopped them from putting the system to more advanced use. One of the complaints was that modern apps rely on servers to perform tasks whilst users are away. The addition of the web API means developers can create a wider portfolio of apps using CloudKit as the backend.

The restrictions had meant that even simple transactions were difficult to set up outside of the confines of Apple. According to specialist Apple blog 9to5Mac, an RSS reader app, for example, previously could not add new feed items to its CloudKit database unless a user opened the CloudKit-powered app, which the blog called ‘impractical’ and said forced developers to use other tools. With the more open API, it is much easier to add new feed items to the CloudKit stack from the server.
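To give a flavour of what server-to-server access looks like, here is a hedged Python sketch of a signed CloudKit Web Services call that creates a record from a server-side script. The container ID, key ID, record type and fields are placeholders, and the endpoint path and signature headers follow Apple’s documented server-to-server scheme; treat the details as indicative rather than definitive.

```python
# Hypothetical sketch of a server-to-server CloudKit Web Services call.
# Container ID, key ID and record fields are placeholders; the signing
# scheme follows Apple's published documentation as understood here.
import base64
import datetime
import hashlib
import json

import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

CONTAINER = "iCloud.com.example.feeds"      # placeholder container ID
KEY_ID = "YOUR_SERVER_TO_SERVER_KEY_ID"     # from CloudKit Dashboard
SUBPATH = f"/database/1/{CONTAINER}/development/public/records/modify"

body = json.dumps({
    "operations": [{
        "operationType": "create",
        "record": {
            "recordType": "FeedItem",
            "fields": {"title": {"value": "New article"}},
        },
    }]
})

# Signature is ECDSA-SHA256 over "date:base64(sha256(body)):subpath".
date = datetime.datetime.utcnow().replace(microsecond=0).isoformat() + "Z"
body_hash = base64.b64encode(hashlib.sha256(body.encode()).digest()).decode()
message = f"{date}:{body_hash}:{SUBPATH}".encode()

with open("eckey.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)
signature = base64.b64encode(
    private_key.sign(message, ec.ECDSA(hashes.SHA256()))
).decode()

resp = requests.post(
    "https://api.apple-cloudkit.com" + SUBPATH,
    data=body,
    headers={
        "X-Apple-CloudKit-Request-KeyID": KEY_ID,
        "X-Apple-CloudKit-Request-ISO8601Date": date,
        "X-Apple-CloudKit-Request-SignatureV1": signature,
    },
)
print(resp.status_code, resp.json())
```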

“Expect CloudKit adoption to rise with this announcement,” predicted blog author Benjamin Mayo. However, the lack of native software development kits for non-Apple platforms may continue to limit uptake, Mayo warned.

With rival cloud framework Parse due to close in 2017, Apple’s addition of a server-side request endpoint could position CloudKit as a replacement for Parse as a cloud database engine.

Meanwhile, there’s speculation among analysts that Apple is preparing for a move into cloud computing services for enterprise customers.

With Apple expected to invest $4 billion in 2016 on warehouse-sized data centres, analysts at investment banks Morgan Stanley and Oppenheimer Holdings have suggested that Apple may move its cloud business away from AWS as competition intensifies.

In a report, Oppenheimer analyst Tim Horan mooted the idea that Apple might start its own infrastructure as a service (IaaS) business as it targets the corporate market. IBM and Apple have already partnered on enterprise marketing.

Hitachi launches Hyper Scalable Platform with in-built Pentaho

Hitachi Data Systems (HDS) has launched a rapid-assembly datacentre infrastructure product that comes with a ready-made enterprise big data system built in.

The HDS Hyper Scalable Platform (HSP) is a building block for infrastructure that comes with computing, virtualisation and storage pre-configured, so that modules can be snapped together quickly without any need to integrate three different systems. HDS has taken the integration a stage further by embedding the big data technology it acquired when it bought Pentaho in 2015. As a consequence, the new HSP 400 creates a simple-to-install but sophisticated system for building enterprise big data platforms fast, HDS claims.

HDS claims that the HSP’s software-definition centralises the processing and management of large datasets and supports a pay-as-you-grow model. The systems can be supplied pre-configured, which means installing and supporting production workloads can take hours, whereas comparable systems can take months. The order of the day, says HDS, is to make it simple for clients to create elastic data lakes, by bringing all their data together and integrating it in preparation for advanced analytic techniques.

The system’s virtualised environments can work with open source big data frameworks, such as Apache Hadoop, Apache Spark and commercial open source stacks like the Hortonworks Data Platform (HDP).
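As a minimal, hypothetical illustration of the sort of Spark batch job such a platform would host (the input path and column names are invented for the example), consider:

```python
# Minimal, hypothetical PySpark sketch of the kind of batch job such a
# platform would host; the input path and column names are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

# Read raw JSON events from the shared data lake (placeholder path).
events = spark.read.json("hdfs:///datalake/raw/events/")

# Roll events up into counts per device per day.
daily = (
    events.withColumn("day", F.to_date("event_time"))
    .groupBy("device_id", "day")
    .count()
)

# Write the curated result back to the lake for downstream analytics.
daily.write.mode("overwrite").parquet("hdfs:///datalake/curated/daily_counts")
spark.stop()
```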

Few enterprises have the internal expertise to run analytics on complex big data sources in production environments, according to Nik Rouda, senior analyst at the Enterprise Strategy Group. Most want to avoid experimenting with still-nascent technologies and want a clear direction without risk and complexity. “HSP addresses the primary adoption barriers to big data,” said Rouda.

Hitachi will offer HSP in two configurations: one with Serial Attached SCSI (SAS) disk drives, generally available now, and an all-flash version, expected to ship in mid-2016. These will support all enterprise applications and performance eventualities, HDS claims.

“Our enterprise customers say data silos and complexity are major pain points,” said Sean Moser, senior VP at HDS. “We have solved these problems for them.”

Microsoft’s submarine datacentre makes a splash

Microsoft has released details of a new pilot project for an undersea datacentre designed to cut power costs with free water cooling.

Project Natick, which connects the undersea module using giant steel tubes linked by fibre optic cables, could also use turbines to convert tides and currents into electricity to power the computing and comms equipment. The new sea bed data centres could also improve cloud response times for users living near the coast.

A prototype was placed on the sea bed off the coast of California in August 2015 as part of an investigation into the environmental and technical issues involved in this form of low-power cloud service. Microsoft researchers believe that economies of scale through mass production would cut deployment time from two years to 90 days. The project is the latest initiative from Microsoft Research’s New Experiences and Technologies (NExT) group, which began investigating new ways to power cloud computing in 2014.

In the 105-day trial, an eight-foot-wide steel capsule was placed 30 feet underwater in the Pacific Ocean near San Luis Obispo, California. The underwater system had 100 sensors to measure pressure, humidity, motion and other conditions. The system stayed up throughout, which encouraged Microsoft to extend the experiment by running data-processing projects from its Azure cloud computing service.

In the next stage of the research, Microsoft said, it will create an underwater data centre system that will be three times as large. This will be built in partnership with an alternative energy vendor. The identity of the trial partner has yet to be decided, but the launch date is mooted for 2017 at a venue either in Florida or Northern Europe, where hydro power is more advanced.

This “refactoring” of traditional methods will help fuel other innovations even if it doesn’t accomplish its goal of establishing underwater data farms, according to Norman Whitaker, MD for special projects at Microsoft Research and the former deputy director at the Pentagon’s Defense Advanced Research Projects Agency. “The idea with refactoring is that it tickles a whole bunch of things at the same time,” said Whitaker.

Microsoft manages more than 100 data centres around the globe and is always looking for new venues to support its rapid expansion. The company has spent more than $15 billion on a global datacentre system that now provides more than 200 online services.

Qualcomm and Guizhou to make new server chipsets in China

San Diego-based chip maker Qualcomm and China’s Guizhou Huaxintong Semi-Conductor company have announced a joint venture to develop new server chipsets designed for the Chinese market.

The news comes only a week after chip maker AMD announced its new Opteron A1100 System-on-Chip (SoC) for ARM-based systems in the data centre. Both partnerships reflect how server design for data centres is evolving to suit the cloud industry.

The Qualcomm partnership, announced on its website, was formalised at the China National Convention Center in Beijing as officials from both companies and the People’s Government of Guizhou Province signed a strategic cooperation agreement. The $280 million joint venture will be 55% owned by the Guizhou provincial government’s investment arm, while 45% will belong to a Qualcomm subsidiary.

The plan is to develop advanced server chipsets in China, which is now the world’s second largest market for server technology sales.

The action is an important step for Qualcomm as it looks to deepen its level of cooperation and investment in China, said Qualcomm president Derek Aberle. In February 2015 BCN sister publication Telecoms.com reported how the chip giant had fallen foul of the Chinese authorities for violating China’s trading laws. It was fined 6 billion yuan (around $1 billion) after its marketing strategy was judged to be against the nation’s anti-monopoly law.

“The strategic cooperation with Guizhou represents a significant increase in our collaboration in China,” said Aberle. Qualcomm is to provide investment capital, license its server technology to the joint venture, help with research and development and provide implementation expertise. “This underscores our commitment as a strategic partner in China,” said Aberle.

Last week, AMD claimed the launch of its new Opteron A1100 SoC will catalyse a much more rapid development process for creating servers suited to hosting cloud computing in data centres.

AMD’s partner in chip development for servers, ARM, is better placed to create processors for the cloud market as it specialises in catering for a wider diversity of needs. Whereas Intel makes its own silicon and can only hope to ship 30 custom versions of its latest Xeon processor to large customers like eBay or Amazon, ARM can license its designs to 300 third-party silicon vendors, each developing their own use case for different clients and variants of server workloads, it claimed.

“The ecosystem for ARM in the data centre is approaching an inflection point and the addition of AMD’s high-performance processor is another strong step forward for customers looking for a data centre-class ARM solution,” said Scott Aylor, AMD’s general manager of Enterprise Solutions.

AWS, Azure and Google intensify cloud price war

As price competition intensifies among the top three cloud service providers, one analyst has warned that cloud buyers should not get drawn into a race to the bottom.

Following price cuts by AWS and Google, last week Microsoft lowered the price bar further with cuts to its Azure service. Though smaller players will struggle to compete on costs, the cloud services market is a long way from an oligopoly, according to Quocirca analyst Clive Longbottom.

Amazon Web Services began the bidding in early January as chief technology evangelist Jeff Barr announced the company’s 51st cloud price cut on his official AWS blog.

On January 8th, Google’s Julia Ferraioli argued via a blog post that Google is now the more cost-effective offering as a result of its discounting scheme. “Google is anywhere from 15 to 41% less expensive than AWS for compute resources,” said Ferraioli. The key to Google’s latest lead in cost effectiveness is automatic sustained usage discounts and custom machine types that AWS cannot match, claimed Ferraioli.
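To make the arithmetic behind such percentage claims concrete, here is a toy calculation with entirely made-up list prices; the only factual input is that GCE’s sustained-use discounts run up to roughly 30% for an instance used for a full month.

```python
# Toy comparison with hypothetical prices; these are NOT real provider rates.
aws_hourly = 0.100             # hypothetical AWS on-demand price per hour
gcp_hourly = 0.085             # hypothetical GCE list price per hour
sustained_use_discount = 0.30  # GCE's maximum sustained-use discount (full month)

gcp_effective = gcp_hourly * (1 - sustained_use_discount)
saving_vs_aws = 1 - gcp_effective / aws_hourly
print(f"Effective saving vs AWS with these inputs: {saving_vs_aws:.1%}")
```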

Last week Microsoft’s Cloud Platform product marketing director Nicole Herskowitz announced the latest round of price competition in a company blog post, revealing a 17% cut to the prices of its Dv2 Virtual Machines.

Herskowitz claimed that Microsoft offers better price performance because, unlike AWS EC2, its Azure Dv2 instances include load balancing and auto-scaling built in at no extra charge.

Microsoft is also aiming to change the perception of AWS’s superiority as an infrastructure service provider. “Azure customers are using the rich set of services spanning IaaS and PaaS,” wrote Herskowitz. “Today, more than half of Azure IaaS customers are benefiting by adopting higher-level PaaS services.”

Price is not everything in this market, warned Quocirca analyst Longbottom; an equally important side of any cloud deal is overall value. “Even though AWS, Microsoft and Google all offer high availability and there is little doubting their professionalism in putting the stack together, it doesn’t mean that these are the right platform for all workloads. They have all had downtime that shouldn’t have happened,” said Longbottom.

The level of risk the provider is willing to protect the customer from and the business and technical help they provide are still deal breakers, Longbottom said. “If you need more support, then it may well be that something like IBM SoftLayer is a better bet. If you want pre-prepared software as a service, then you need to look elsewhere. So it’s still horses for courses and these three are not the only horses in town.”

AWS adds hydro-powered Canadian region to its estate

AWS has announced it will add a new carbon-neutral Canadian region to its estate, as well as running a new free test drive service for cloud service buyers.

AWS chief technology evangelist Jeff Barr announced on the AWS official blog that a new AWS region in Montreal, Canada will run on hydro power.

The addition of data centre facilities in the Canada-Montreal region means that AWS partners and customers can run workloads and store data in Canada. AWS has four regions in North America, but they are all in the United States, with coverage in US East (Northern Virginia), US West (Northern California), US West (Oregon) and AWS GovCloud (US). There is also an additional region planned for Ohio some time in 2016. The Ohio and Montreal additions will give AWS 14 Availability Zones in North America.

AWS’s data centre estate now comprises 32 Availability Zones across 12 geographic regions worldwide, according to the AWS Global Infrastructure page. Another 5 AWS regions (and 11 Availability Zones) are in the pipeline, including new sites in China and India. These will come online “throughout the next year,” said Barr.

The Montreal facilities are not exclusive to Canadian customers and partners; they are open to all existing AWS customers who want to process and store data in Canada, said Barr.
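As a hedged sketch of what that means in practice, the snippet below pins resources to a specific AWS region using boto3; the region code ca-central-1 is an assumption about the eventual identifier for the Montreal region, and the bucket name is a placeholder.

```python
# Hedged sketch: pin resources to a specific AWS region so data stays
# in-country. "ca-central-1" is an assumed identifier for the Montreal
# region; the bucket name is a placeholder.
import boto3

# List the regions currently visible to this account.
ec2 = boto3.client("ec2", region_name="us-east-1")
print(sorted(r["RegionName"] for r in ec2.describe_regions()["Regions"]))

# Create an S3 bucket pinned to the Canadian region.
s3 = boto3.client("s3", region_name="ca-central-1")
s3.create_bucket(
    Bucket="example-montreal-data",
    CreateBucketConfiguration={"LocationConstraint": "ca-central-1"},
)
```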

Meanwhile, AWS announced a collaboration with data platform provider MapR to create a ‘try before you buy’ service. Through AWS facilities MapR is to offer free test drives of the Dataguise DgSecure, HPE Vertica, Apache Drill and TIBCO Spotfire services that it runs from its integrated Spark/Hadoop systems.

The AWS Test Drives for Big Data will provide private IT sandbox environments with preconfigured servers so that cloud service shoppers can launch, log in and learn about popular third-party big data IT services as they research their buying options. MapR claims that it has made the system so easy that the whole process, from launching to learning, can be achieved within an hour using its step-by-step lab manual and video. The test drives are powered by AWS CloudFormation.
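For illustration, a CloudFormation-backed environment of this kind is typically launched from a template, roughly as in the hedged sketch below; the stack name, template URL and parameters are placeholders, not MapR’s actual test drive artifacts.

```python
# Hedged sketch of launching a preconfigured CloudFormation stack, the
# mechanism behind such test drives. The stack name, template URL and
# parameters are placeholders, not MapR's actual artifacts.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(
    StackName="bigdata-test-drive",
    TemplateURL="https://s3.amazonaws.com/example-bucket/test-drive.template",
    Parameters=[{"ParameterKey": "KeyName", "ParameterValue": "my-keypair"}],
    Capabilities=["CAPABILITY_IAM"],
)

# Wait until the sandbox environment is ready to log in to.
cfn.get_waiter("stack_create_complete").wait(StackName="bigdata-test-drive")
print("Test drive environment is ready")
```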

MapR is currently the only Hadoop distribution on the AWS Cloud that is available as an option on Amazon Elastic MapReduce (EMR), AWS Marketplace and now via AWS Test Drive.