<h1>Google Cloud secures support for NVIDIA's Tesla P4 GPUs with more machine learning goodness</h1>
<p><em>7 August 2018</em></p>
<p><img decoding="async" src="http://www.cloudcomputing-news.net/media/img/news/iStock-944347592_1.jpg"></p>
<p>Google has announced support for NVIDIA's Tesla P4 GPUs to help customers with graphics-intensive and machine learning applications.</p>
<p>The Tesla P4, according to NVIDIA's <a href="http://images.nvidia.com/content/pdf/tesla/184457-Tesla-P4-Datasheet-NV-Final-Letter-Web.pdf">data sheet</a>, is 'purpose-built to boost efficiency for scale-out servers running deep learning workloads, enabling smart responsive AI-based services.' The P4, built on NVIDIA's Pascal architecture, has 8GB of GPU memory and a memory bandwidth of 192 GB per second.</p>
<p>While not at the same performance level as the V100, which is built on the newer Volta architecture rather than Pascal, the P4 accelerators, now in beta, represent what Google called a 'good balance of price/performance for remote display applications and real-time machine learning inference.'</p>
<p>"Graphics-intensive applications that run in the cloud benefit greatly from workstation-class GPUs," wrote Ari Liberman, Google Cloud product manager, <a href="https://cloud.google.com/blog/products/gcp/introducing-nvidia-tesla-p4-gpus-accelerating-virtual-workstations-and-ml-inference-compute-engine">in a blog post</a>. "We now support virtual workstations with NVIDIA GRID on the P4 and P100, allowing you to turn any instance with one or more GPUs into a high-end workstation optimised for graphics-accelerated use cases.</p>
<p>"Now, artists, architects and engineers can create breathtaking 3D scenes for their next blockbuster film, or design a computer-aided photorealistic composition," Liberman added.</p>
<p>As is often the case with these announcements, a brand new, shiny customer was rolled out to explain how Google's services had improved its operations. Except this one wasn't quite so new: regular readers of this publication may remember <a href="https://www.cloudcomputing-news.net/news/2017/nov/22/google-announces-lower-prices-nvidia-tesla-gpus/">oilfield services provider Schlumberger</a> from Google's GPU price reduction news back in November. The company said it was using Google's workstations, powered by NVIDIA GPUs, to help visualise oil and gas scenarios for its customers.</p>
<p>The link with machine learning capabilities is again an irresistible one, with Google saying the P4 is ideal for use cases such as visual search, interactive speech, and video recommendations.</p>
<p>Whither NVIDIA, however? The big cloud providers are certainly a key opportunity for the graphics processor maker. Speaking <a href="https://www.cloudcomputing-news.net/news/2017/nov/14/nvidia-boasts-cloud-prowess-tesla-v100-gpu-results-soar/">at the end of last year</a>, the company said its V100 GPU had been chosen by every major cloud firm, with the applications for GPU servers having 'now grown to many markets.'</p>