
Amazon Web Services buys HPC management specialist Nice

Amazon Web Services (AWS) has announced its intention to acquire high performance and grid computing specialist NICE.

Details of the takeover of the Asti-based software and services company were not revealed. However, in his company blog AWS chief evangelist Jeff Barr outlined the logic of the acquisition. “These [NICE] products help customers to optimise and centralise their high performance computing and visualization workloads,” wrote Barr. “They also provide tools that are a great fit for distributed workforces making use of mobile devices.”

The NICE brand and team will remain intact in Italy, said Barr. Their brief is to continue to develop and support the company’s EnginFrame and Desktop Cloud Visualization (DCV) products. The only difference, said Barr, is that they now have the backing of the AWS team. In future, however, NICE and AWS are to collaborate on projects to create better tools and services for high performance computing and visualisation.

NICE describes itself as a ‘Grid and Cloud Solutions’ developer, specialising in technical computing portals, grid and high performance computing (HPC) technologies. Its services include remote visualization, application grid-enablement, data exchange, collaboration, software as a service and grid intelligence.

The EnginFrame product is a grid computing portal designed to make it easier to submit analysis jobs to supercomputers and to manage and monitor the results. EnginFrame is an open framework based on Java, XML and Web Services. Its purpose is to make it easier to set up user-friendly, application- and data-oriented portals. It simplifies the submission and control of grid-enabled applications, and it monitors workloads, data and licenses from within the same user dashboard. By hiding the diversity and complexity of the native interfaces, it aims to let more users get the full benefit of high performance computing platforms whose operating systems would otherwise be off-puttingly complex.
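The portal pattern EnginFrame implements can be sketched in a few lines: collect a job request from the user, serialize it, and hand the result to the scheduler behind the portal. The class and element names below are illustrative assumptions for that pattern, not EnginFrame's actual API:

```python
# Illustrative sketch of a portal-style job submission: gather a job
# request, serialize it to XML, and pass it to a scheduler back end.
# All names here are hypothetical, not EnginFrame's real interface.
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class JobRequest:
    application: str   # an application exposed through the portal
    cores: int         # requested core count
    input_file: str    # input data for the run

def to_job_xml(req: JobRequest) -> str:
    """Serialize the request into the kind of XML job description
    a portal might hand to the underlying grid scheduler."""
    job = ET.Element("job")
    ET.SubElement(job, "application").text = req.application
    ET.SubElement(job, "cores").text = str(req.cores)
    ET.SubElement(job, "input").text = req.input_file
    return ET.tostring(job, encoding="unicode")

req = JobRequest(application="openfoam", cores=64, input_file="wing.case")
print(to_job_xml(req))
```

The point of the pattern is that the user fills in three fields while the portal, not the user, deals with the scheduler's native submission syntax.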

Desktop Cloud Visualization is a remote 3D visualization technology that enables technical computing users to connect to OpenGL and DirectX applications running in a data centre. NICE has customers in industries including aerospace, industrial manufacturing, energy and utilities.

The deal is expected to close by the end of March 2016.

Red Hat helps Medlab share supercomputer in the cloud

A cloud of bioinformatics intelligence has been harmonised by Red Hat to create ‘virtual supercomputers’ that can be shared by the eMedlab collective of research institutes.

The upshot is that researchers at institutes such as the Wellcome Trust Sanger, UCL and King’s College London can carry out much more powerful data analysis when researching cancers, cardio-vascular conditions and rare diseases.

Since 2014 hundreds of researchers across the eMedlab partnership have been able to use a high performance computing (HPC) facility with 6,000 cores of processing power and 6 petabytes of storage from their own locations. However, the cloud environment now collectively created by technology partners Red Hat, Lenovo, IBM and Mellanox, along with supercomputing integrator OCF, means none of the users have to shift their data to the computer. Each of the seven institutes can configure its share of the HPC facility according to its needs, self-selecting the memory, processors and storage required.

The new HPC cloud environment uses the Red Hat Enterprise Linux OpenStack Platform on Lenovo Flex hardware to create virtual HPC clusters bespoke to each individual researcher’s requirements. The system was designed and configured by OCF, working with partners Red Hat, Lenovo, Mellanox and eMedlab’s research technologists.
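The self-service step described above, where an institute picks the memory, processors and storage for its virtual cluster, amounts to checking a request against that institute's share of the pooled hardware. A minimal sketch, using the 6,000 cores and 6 PB quoted in the article and assuming (purely for illustration) an even split across the seven institutes:

```python
# Sketch of a self-service allocation check for a shared HPC cloud.
# Totals come from the article; the even per-institute quota policy
# is an assumption for illustration, not eMedlab's actual policy.
TOTAL_CORES = 6000
TOTAL_STORAGE_TB = 6000   # 6 petabytes expressed in terabytes
INSTITUTES = 7

QUOTA_CORES = TOTAL_CORES // INSTITUTES          # 857 cores per share
QUOTA_STORAGE_TB = TOTAL_STORAGE_TB // INSTITUTES

def can_provision(cores: int, storage_tb: int,
                  used_cores: int = 0, used_storage_tb: int = 0) -> bool:
    """Return True if the requested virtual cluster still fits within
    the institute's quota, given what it has already provisioned."""
    return (used_cores + cores <= QUOTA_CORES and
            used_storage_tb + storage_tb <= QUOTA_STORAGE_TB)

print(can_provision(cores=512, storage_tb=200))   # fits one share
print(can_provision(cores=2000, storage_tb=200))  # exceeds one share
```

A real OpenStack deployment enforces the same idea through per-project quotas rather than application code.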

With the HPC hosted at a shared data centre for education and research, the cloud configuration has made it possible to run a variety of research projects concurrently. The facility, aimed solely at the biomedical research sector, changes the way data sets are shared between leading scientific institutions internationally.

The eMedLab partnership was formed in 2014 with funding from the Medical Research Council. Original members University College London, Queen Mary University of London, London School of Hygiene & Tropical Medicine, the Francis Crick Institute, the Wellcome Trust Sanger Institute and the EMBL European Bioinformatics Institute have been joined recently by King’s College London.

“Bioinformatics is a very data-intensive discipline,” says Jacky Pallas, Director of Research Platforms at University College London. “We study a lot of de-identified, anonymous human data. It’s not practical for scientists to replicate the same datasets across their own, separate physical HPC resources, so we’re creating a single store for up to 6 Petabytes of data and a shared HPC environment within which researchers can build their own virtual clusters to support their work.”

In other news, Red Hat has announced an upgrade of CloudForms offering better hybrid cloud management through expanded support for Microsoft Azure, advanced container management and improvements to its self-service features.

IBM to create HPC and big data centre of excellence in UK

IBM and the UK’s Science & Technology Facilities Council (STFC) have jointly announced they will create a centre that tests how to use high performance computing (HPC) for big data analytics.

The Hartree Power Acceleration and Design Centre (PADC) in Daresbury, Cheshire is the first UK facility to specialise in modelling and simulation and their use in big data analytics. It was recently the subject of UK government investment in big data research and was tipped as a foundation for Chancellor George Osborne’s northern technology powerhouse.

The new facility launch follows the government’s recently announced investment and expansion of the Hartree Centre. In June Universities and Science Minister Jo Johnson unveiled a £313 million partnership with IBM to boost Big Data research in the UK. IBM said it will further support the project with a package of technology and onsite expertise worth up to £200 million.

IBM’s contributions will include access to the latest data-centric and cognitive computing technologies, with at least 24 IBM researchers to be based at the Hartree Centre to work side-by-side with existing researchers. It will also offer joint commercialization of intellectual property assets produced in partnership with the STFC.

The resident IBM researchers have a brief to help users coax the fullest possible performance out of every component of the POWER-based system, bringing specialised knowledge of architecture, memory, storage, interconnects and integration. The Centre will also be supported by the expertise of other OpenPOWER partners, including Mellanox, and will host a POWER-based system with the Tesla Accelerated Computing Platform. This will provide options for using energy-efficient, high-performance NVIDIA Tesla GPU accelerators and enabling software.

One of the target projects will be a search for ways to boost application performance while minimising energy consumption. In the race towards exascale computing significant gains can be made if existing applications can be optimised on POWER-based systems, said Dr Peter Allan, acting Director of the Hartree Centre.

“The Design Centre will help industry and academia use IBM and NVIDIA’s technological leadership and the Hartree Centre’s expertise in delivering solutions to real-world problems,” said Allan. “The PADC will provide world-leading facilities for Modelling and Simulation and Big Data Analytics. This will develop better products and services that will boost productivity, drive growth and create jobs.”

Penguin Computing Offers HPC Compute Clouds Built for Academia, Research

Penguin Computing today announced partnerships with multiple universities to enable easy, quick and unbureaucratic on-demand access to scalable HPC compute resources for academic researchers.

“Penguin Computing has traditionally been very successful with HPC deployments in academic environments, where widely varying workloads, many departments competing for resources and very limited budgets for capital expenses mean a cloud-based model for compute resources makes perfect sense,” says Tom Coull, Senior Vice President and General Manager of Software and Services at Penguin Computing. “The new partnerships help academic institutions offer flexible, cloud-based resource allocation to their researchers. At the same time, they present an opportunity for IT departments to create an ongoing revenue stream by offering researchers from other schools access to their cloud.”

Penguin has implemented three versions of academic HPC clouds:

Hybrid Clouds – A local, on-site cluster configured to use Penguin-on-Demand (POD) cloud resources as needed on a pay-as-you-go basis. Local compute resources can be provisioned for average demand, and utilization peaks are offloaded transparently. This model lowers the initial capital expense, while for temporary workload peaks excess cycles are provided cost-effectively by Penguin’s public HPC cloud. Examples of hybrid cloud deployments include the University of Delaware and the University of Memphis.

Channel Partnerships – Agreements between universities and Penguin Computing that allow educational institutions to become distributors for POD compute cycles. University departments with limited access to compute resources for research can use Penguin’s virtual supercomputer on demand and pay as they go, allowing them to fund computing from their operational budget. When departments use the university’s HPC cloud, the revenue can supplement funding for IT staff or projects, increasing the department’s capabilities. This model has been successfully implemented at the California Institute of Technology in conjunction with Penguin’s PODshell, a web-service-based tool that supports the submission and monitoring of HPC cloud compute jobs from any Linux system with internet connectivity.

Combination Hybrid / Channel – The benefits of the first two models have been combined at Indiana University (IU) as a public-private partnership. Penguin leverages the University’s HPC facilities and human resources, while IU benefits from fast access to local compute resources and Penguin’s HPC experience. IU can use POD resources and provide compute capacity to other academic institutions. The agreement between IU and Penguin also has the support of a group of founding user-partners, including the University of Virginia, the University of California, Berkeley and the University of Michigan, who along with IU will be users of the new service. The POD colocation offers access through the high-speed national research network Internet2 and is integrated with the XSEDE infrastructure that enables scientists to transparently share computing resources.

“This is a great example of a community cloud service,” said Brad Wheeler, vice president for information technology and CIO at Indiana University. “By working together in a productive private-public partnership, we can achieve cost savings through larger scales while also ensuring security and managing the terms of service in the interests of researchers.”

For more information about Penguin Computing’s HPC compute resources, please visit