Category archive: High Performance Computing

ESI installs HPC data centre to support virtual prototyping

Manufacturing service provider ESI Group has announced that a new high performance computing (HPC) system is powering its cloud-based virtual prototyping service to a range of industries across Europe.

The new European HPC-driven data centre is based on the Teratec Campus in Paris, close to Europe’s biggest HPC centre, the Très Grand Centre de Calcul, the data centre of the French Alternative Energies and Atomic Energy Commission (CEA). The location was chosen in order to make collaborative HPC projects possible, according to ESI. The 13,000 square metre CEA campus hosts a supercomputer with a peak performance of 200 Teraflops and the CURIE supercomputer, capable of a peak of 2 Petaflops.

Virtual prototyping, a product development process that uses computer-aided design (CAD), computer-automated design (CAutoD) and computer-aided engineering (CAE) software to validate designs, is increasingly run in the cloud, ESI reports. Before manufacturers commit to making a physical prototype they create a 3D computer-generated model and simulate different test environments.

The launch of the new HPC data centre gives ESI a cloud computing point of delivery (PoD) serving its 40 offices across Europe and the rest of the world. The HPC cloud PoD will also act as a platform for ESI’s new software development and engineering services.

The HPC facility was built by data centre specialist Legrand. The new HPC system is needed to meet the change in workloads driven by virtualization and cloud computing: annual data growth of around 50% since 2010 is expected to compound into a total increase of roughly 4,400% by 2020, according to Pascal Perrin, Datacenter Business Development Manager at Legrand.
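
Those two figures are roughly consistent with each other: compounding annual growth of close to 50% over a decade yields a total increase in the neighbourhood of 4,400%. A quick back-of-the-envelope check in Python:

```python
# Back-of-the-envelope check: what constant annual growth rate compounds
# to a 44x total increase (i.e. +4,400%) between 2010 and 2020?
factor = 44.0
years = 10
annual = factor ** (1 / years) - 1
print(f"required annual growth: {annual:.1%}")  # prints ~46.0%, close to 50%
```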

Legrand subsidiary Minkels supplied and installed the supporting data centre hardware, including housing, UPS, cooling, monitoring and power distribution systems. The main challenge with supporting a supercomputer that can ramp up CPU activity by the petaflop, with petabytes of data moving in and out of memory, is securing the supporting resources, said Perrin. “Our solutions ensure the electrical and digital supply of the data centre at all times,” he said.

Scyld Cloud Management Platform Moves HPC Applications to the Cloud

Penguin Computing today announced the availability of the Scyld Cloud Management Platform (SCMP) on its public HPC cloud, Penguin Computing on Demand (POD). SCMP is a comprehensive software suite that makes it easy to implement service-based, on-demand access to HPC applications. SCMP provides services for:

  • web-based user sign-up and account management
  • generating detailed resource usage reports
  • provisioning and managing of virtual servers
  • instantly allocating storage
  • managing users and user groups

SCMP’s storage system is based on the distributed open-source storage system Ceph, which supports file-based, block-based and object-based storage. The management of virtual servers leverages OpenStack, an open-source solution for creating and managing large groups of virtual servers in a cloud computing environment. All SCMP components are accessible through an intuitive web-based interface, as well as a web-service API.
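
The announcement does not document the web-service API itself, so the endpoint paths, payloads and credentials below are entirely hypothetical; this Python sketch (using the requests library) only illustrates the shape of the service-based, on-demand provisioning that SCMP describes:

```python
import requests

BASE = "https://scmp.example.com/api/v1"   # hypothetical SCMP endpoint
AUTH = ("alice", "s3cret")                 # hypothetical account credentials

# Provision a virtual login server for a new user (illustrative payload).
server = requests.post(f"{BASE}/servers", auth=AUTH, json={
    "flavor": "login-node-small",
    "image": "scyld-hpc",
}).json()

# Allocate a block-storage volume (backed by Ceph in SCMP's architecture).
volume = requests.post(f"{BASE}/volumes", auth=AUTH, json={
    "size_gb": 500,
    "attach_to": server["id"],
}).json()

# Pull a detailed resource-usage report for accounting.
report = requests.get(f"{BASE}/usage", auth=AUTH,
                      params={"period": "2013-01"}).json()
print(report)
```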

“As the first organization to offer commercial cluster management solutions for HPC and as one of the first to offer a public HPC cloud, we have a solid foundation on which we built SCMP,” says Tom Coull, Senior VP of Software and Services at Penguin Computing.

SCMP is also the foundation of Penguin Computing’s upcoming Scyld Cloud Manager (SCM), a packaged software suite that will enable customers to build their own public and private HPC clouds.

An early adopter of SCMP is the global biotechnology company Life Technologies. The Scyld Cloud Management Platform has enabled Life Technologies to offer cloud-based genomic sequencing analysis services through its Torrent Suite Cloud offering.

“SCMP is the core component of our Torrent Suite Cloud infrastructure,” says Matt Dyer, associate director of Bioinformatics at Life Technologies. “It enables us to offer a flexible solution for processing and managing genomic sequencing data to our customers. Typical use cases include software development and testing, as well as data sharing in collaborative projects.”


Numira Biosciences, Penguin Computing Partner for Cloud Pre-Clinical Imaging

Penguin Computing and Numira Biosciences announced a partnership to bring graphics-intensive data on demand to Life Sciences customers. The platform will leverage a convergence of enabling technologies from Penguin Computing’s HPC as a Service offering, POD; Numira Biosciences’ AltaPortal product; and Nvidia’s GPU technology to deliver advanced pre-clinical imaging services to researchers at pharmaceutical and biotechnology companies around the globe.

Researchers will benefit from the cutting-edge graphics, GPU power, global accessibility and reliability of Penguin Computing’s HPC as a Service platform, POD. Numira’s AltaPortal product provides a secure web interface for managing projects and exploring datasets, media and documents. By leveraging POD, AltaPortal will enable researchers to interactively navigate rich 3D data from microCT scans with overlaid analytics customized to the task at hand. The data will be delivered as a media stream to the client’s web browser, eliminating the need to transfer large and potentially proprietary datasets over networks, and allowing them to be secured in a central location.
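
Numira’s actual rendering pipeline is not described in detail here, but the “render server-side, stream only pixels to the browser” pattern it relies on can be sketched. In this hypothetical Python/Flask example, render_frame stands in for a GPU renderer, and the route and parameter names are all illustrative:

```python
from io import BytesIO

from flask import Flask, request, send_file
from PIL import Image

app = Flask(__name__)

def render_frame(dataset_id: str, angle: float) -> Image.Image:
    # Stand-in for a GPU renderer: in a real service this would render the
    # microCT volume at the requested view angle, with analytics overlaid.
    return Image.new("RGB", (512, 512), color=(30, 30, 30))

@app.route("/frame")
def frame():
    img = render_frame(request.args.get("dataset", "demo"),
                       float(request.args.get("angle", 0.0)))
    buf = BytesIO()
    img.save(buf, format="JPEG")
    buf.seek(0)
    # Only the rendered frame leaves the data centre; the raw dataset stays put.
    return send_file(buf, mimetype="image/jpeg")
```

The essential point matches the article: the large, potentially proprietary dataset never leaves the central location; only rendered frames travel to the client.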

“Penguin has enjoyed steady growth in the genomics sector with its POD offering. Adding Numira’s medical imaging software and services to our POD partner portfolio will enhance our breadth in the bioinformatics market and further establish POD as a destination for that set of users,” said Matt Jacobs, SVP of Corporate Development for Penguin Computing.

“At Numira we’ve built our reputation delivering rich quantitative analytics for preclinical medical imaging. With our AltaPortal web service, we’re taking it to the next level; our customers will be able to interactively explore and quantitatively assess their preclinical imaging data with our custom 3D visual analytic tools. Penguin’s POD service offers the ideal platform for our rendering-on-demand engine,” said David Weinstein, Numira’s Chief Technology Officer.


Google Fiber Has Far-reaching Implications

Reading this post on Google’s low-cost, super-fast fiber-to-the-home initiative (makes me sort of wish I lived in Kansas City) brought to mind all the other Google products and initiatives that might be empowered by it. Go read it, then come back here and consider:

Chrome OS: a new operating system takes a long time to build, and Chrome OS looks trivial today, but with widely available gigabit internet at the household and small-business level it begins to look like a realistic “the network is the computer” future.

Mobile OS: Google already has that covered with Android.

Add Google Drive: Ubiquitous very high-speed connectivity at a low price makes Drive viable for more than backup, sharing and sync. Actually, sync becomes easier if the only copy is on a server.

Add Google Compute Engine: A thin-client netbook running Chrome OS, or Android on tablets and handsets, becomes more appealing if you can quickly access network-based computing resources for high-performance computing tasks like video transcoding.

Add Google Voice: consider all those hypothetical hotspots. Combine them with Android and Voice. Can a Google competitor to cell phone providers be far behind, one that leverages the coming Google network? All it would take is a couple of extra capabilities in the fiber/WiFi box that seems inevitable. And don’t forget Google now owns Motorola, a top-notch mobile phone company.

YouTube/Google TV: Google is already dipping its toe into original programming, and fast fiber means TV will change dramatically.

Living in the cloud would become a real option for everyday consumers. What about effects on professionals and small businesses?

And what about those other seemingly sci-fi projects, self-driving cars and Glass? Hey, if the car drives itself, my brain then has the bandwidth for augmented reality. How might they benefit from the ability to hop from fiber-connected WiFi hotspot to hotspot?

All this based on a good search engine algorithm, and then ads next to search results? Who’d a thunk it?


Amazon Web Services Launches High Performance Storage Option for Amazon Elastic Block Store

Amazon Web Services today announced new features for customers looking to run high performance databases in the cloud with the launch of Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS. Provisioned IOPS (input/output operations per second) is a new EBS volume type designed to deliver predictable, high performance for I/O-intensive workloads, such as database applications, that rely on consistent and fast response times. With Provisioned IOPS, customers can flexibly specify both volume size and volume performance, and Amazon EBS will consistently deliver the desired performance over the lifetime of the volume. To get started with Amazon EBS, visit http://aws.amazon.com/ebs.

Provisioned IOPS volumes are engineered to allow customers to develop, test, and deploy production applications and be confident that they will receive their desired performance. With a few clicks in the AWS Management Console, customers can create an EBS volume provisioned with the storage and IOPS they need and attach it to their Amazon EC2 instance. Amazon EBS currently supports up to 1,000 IOPS per Provisioned IOPS volume, with plans to deliver higher limits soon. Customers can attach multiple Amazon EBS volumes to an Amazon EC2 instance and stripe across them to deliver thousands of IOPS to their application.
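
As an illustration, here is a minimal sketch using boto3, the current AWS SDK for Python (which post-dates this announcement); the region, availability zone, instance ID and device name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 100 GiB Provisioned IOPS volume delivering 1,000 IOPS
# (the per-volume maximum mentioned in the announcement).
vol = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="io1",   # the Provisioned IOPS volume type
    Iops=1000,
)

# Wait until the volume is ready, then attach it to a running instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```

To go beyond the 1,000 IOPS per-volume limit, several such volumes can be attached to one instance and striped together inside the guest, for example with mdadm or LVM.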

To enable Amazon EC2 instances to fully utilize the IOPS provisioned on an EBS volume, Amazon EC2 is introducing the ability to launch selected Amazon EC2 instance types as EBS-Optimized instances. EBS-Optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Megabits per second and 1,000 Megabits per second depending on the instance type used. The combination of EBS Provisioned IOPS and EBS-Optimized instances allows customers to run their most performance-sensitive applications on Amazon EC2, giving them predictable scaling with the same ease of use, durability, and flexibility of provisioning benefits they expect from Amazon EC2 and Amazon EBS.
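
Launching an EBS-Optimized instance is a single flag at launch time. Another hedged boto3 sketch, with a placeholder AMI and an instance type assumed to support the option:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an instance with dedicated throughput between EC2 and EBS enabled.
resp = ec2.run_instances(
    ImageId="ami-12345678",    # placeholder AMI
    InstanceType="m1.large",   # assumed to be an EBS-Optimized-capable type
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,
)
print(resp["Instances"][0]["InstanceId"])
```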

“AWS introduced Amazon EBS in 2008 to provide a highly scalable virtual storage service and now, four years later, our customers are running applications on Amazon EC2 using EBS volumes at tremendous scale,” said Peter De Santis, Vice President of Amazon EC2. “Customers have been asking for the ability to set their performance rate to achieve consistently high performance. With EBS Provisioned IOPS volumes, EBS-Optimized instances and the recently launched High I/O SSD-based EC2 instances, customers have a range of choices for running their most demanding applications and databases on AWS while achieving peak performance in a predictable manner.”

At NASA’s Jet Propulsion Laboratory, Amazon EBS is used to support various missions and research programs. Consistent I/O performance is a major requirement for numerous use cases across NASA, ranging from scientific computing to large-scale database deployments. JPL now routinely provisions cloud compute capacity in an elastic manner, but database latencies have proven difficult. To help meet this challenge, JPL’s missions and its Office of the CIO prototyped the new EBS Provisioned IOPS capability to provision flexible compute capacity and overcome database latency restrictions. The results were highly successful, and the release of EBS Provisioned IOPS, coupled with Amazon EC2 High I/O SSD-based instances, will introduce a whole new realm of I/O-intensive scientific applications for JPL, from radar data processing to the search for black holes.

Stratalux is a leader in building and managing tailored cloud solutions for customers of all sizes. “A common request we see from both our large and small customers is the need to support high performance database applications. Throughput consistency is critical for these workloads,” said Jeremy Przygode, CEO at Stratalux. “Based on positive results in our early testing, the combination of EBS Provisioned IOPS and EBS-Optimized instances will enable our customers to consistently scale their database applications to thousands of IOPS, enabling us to increase the number of I/O intensive workloads we support.”

Amazon EBS Provisioned IOPS volumes are currently available in the US East (N. Virginia), US West (N. California), US West (Oregon), EU West (Ireland), Asia Pacific (Singapore), and Asia Pacific (Tokyo) regions, with additional region launches coming soon.


Penguin Computing Offers HPC Compute Clouds Built for Academia, Research

Penguin Computing today announced partnerships with multiple universities to give academic researchers quick, easy, on-demand access to scalable HPC compute resources without bureaucratic overhead.

“Penguin Computing has traditionally been very successful with HPC deployments in academic environments. With widely varying workloads, many departments competing for resources and very limited budgets for capital expenses, a cloud-based model for compute resources makes perfect sense,” says Tom Coull, Senior Vice President and General Manager of Software and Services at Penguin Computing. “The new partnerships help academic institutions with a flexible, cloud-based resource allocation for their researchers. At the same time, they present an opportunity for IT departments to create an ongoing revenue stream by offering researchers from other schools access to their cloud.”

Penguin has implemented three versions of academic HPC clouds:

Hybrid Clouds – A local on-site cluster configured to use Penguin-on-Demand (POD) cloud resources as needed on a pay-as-you-go basis. Local compute resources can be provisioned for average demand, while utilization peaks are offloaded transparently. This model lowers the initial capital expense, and for temporary workload peaks excess cycles are provided cost-effectively by Penguin’s public HPC cloud (see the sketch after this list). Examples of hybrid cloud deployments include the University of Delaware and the University of Memphis.

Channel Partnerships – Agreements between universities and Penguin Computing that allow educational institutions to become distributors of POD compute cycles. University departments with limited access to compute resources for research can use Penguin’s virtual supercomputer on demand and pay as they go, allowing them to use their IT budget for operational expenses. When departments use the university’s HPC cloud, revenue can supplement funding for IT staff or projects, increasing the department’s capabilities. This model has been successfully implemented at the California Institute of Technology in conjunction with Penguin’s PODShell, a web-service-based solution that supports the submission and monitoring of HPC cloud compute jobs from any Linux system with internet connectivity.

Combination Hybrid / Channel – The benefits of the first two models combined, successfully implemented at Indiana University (IU) as a public-private partnership. Penguin leverages the university’s HPC facilities and human resources, while IU benefits from fast access to local compute resources and Penguin’s HPC experience. IU can use POD resources and provide compute capacity to other academic institutions. The agreement between IU and Penguin also has the support of a group of founding user-partners, including the University of Virginia, the University of California, Berkeley and the University of Michigan, who along with IU will be users of the new service. The POD colocation offers access through the high-speed national research network Internet2 and is integrated with the XSEDE infrastructure, which enables scientists to transparently share computing resources.
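
As referenced in the hybrid model above, the burst decision can be sketched in a few lines of Python. Everything here is an assumption for illustration: the queue-depth threshold, the local Slurm scheduler, and the exact `podsh submit` invocation (consult Penguin’s PODShell documentation for real usage):

```python
"""Hypothetical hybrid-cloud burst policy: run jobs locally while the
on-site cluster has headroom, and offload peaks to Penguin's public
HPC cloud via PODShell."""
import subprocess

QUEUE_THRESHOLD = 50  # hypothetical: burst to the cloud beyond this depth

def local_queue_depth() -> int:
    # Count pending jobs in the local Slurm queue (assumes Slurm on-site).
    out = subprocess.run(["squeue", "-h", "-t", "PENDING"],
                         capture_output=True, text=True, check=True)
    return len(out.stdout.splitlines())

def submit(job_script: str) -> None:
    if local_queue_depth() < QUEUE_THRESHOLD:
        subprocess.run(["sbatch", job_script], check=True)           # stay local
    else:
        subprocess.run(["podsh", "submit", job_script], check=True)  # burst to POD

submit("simulation.sh")
```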

“This is a great example of a community cloud service,” said Brad Wheeler, vice president for information technology and CIO at Indiana University. “By working together in a productive private-public partnership, we can achieve cost savings through larger scales while also ensuring security and managing the terms of service in the interests of researchers.”

For more information about Penguin Computing’s HPC compute resources, please visit www.penguincomputing.com.