November 12th, 2015 by Lyle Smith
NVIDIA Announces Tesla Accelerated Computing Platform
NVIDIA has announced the Tesla Accelerated Computing Platform, an end-to-end hyperscale data center platform that allows web-services companies to accelerate their huge machine learning workloads. The new NVIDIA hyperscale accelerator line includes two accelerators: the Tesla M40 GPU, which enables researchers to accelerate the design and training of new deep neural networks for the increasing number of applications they want to power with AI; and the Tesla M4 GPU, a power-efficient accelerator designed to deploy these networks across the data center. The Tesla GPU line is often found in server builds such as the recently reviewed Dell PowerEdge R730. The line also includes NVIDIA’s Hyperscale Suite, a collection of software tools optimized for machine learning and video processing.
The NVIDIA Tesla M40 GPU accelerator is designed to save scientists a significant amount of time while training their deep neural networks against massive amounts of data to improve overall accuracy.
- Optimized for Machine Learning - Reduces training time by 8X compared with CPUs (1.2 days vs. 10 days for a typical AlexNet training).
- Built for 24/7 reliability - Designed and tested for high reliability in data center environments.
- Scale-out performance - Support for NVIDIA GPUDirect enables fast multi-node neural network training.
The NVIDIA Tesla M4 accelerator is a power-efficient GPU specifically designed for hyperscale environments. It is also optimized for demanding, high-growth web services applications, such as video transcoding, image and video processing, and machine learning inference.
- Higher throughput: Transcodes, enhances and analyzes up to 5X more simultaneous video streams compared with CPUs.
- Low power consumption: With a user-selectable power profile, the Tesla M4 consumes 50-75 watts of power, and delivers up to 10X better energy efficiency than a CPU for video processing and machine learning algorithms.
- Small form factor: Low-profile PCIe design fits into enclosures required for hyperscale data center systems.
The NVIDIA hyperscale accelerator line was created to deliver both the supercomputing power needed to design and train the growing number of deep neural networks, and the processing throughput needed to respond instantly to the billions of queries from consumers using these services.
The new NVIDIA Hyperscale Suite offers an extensive set of tools for both developers and data center managers, all of which are designed for web-services deployments:
- cuDNN: Processes deep convolutional neural networks used for AI applications.
- GPU-accelerated FFmpeg multimedia software: Accelerates video transcoding and video processing.
- NVIDIA GPU REST Engine: Enables the easy creation and deployment of high-throughput, low-latency accelerated web services spanning dynamic image resizing, search acceleration, image classification and other tasks.
- NVIDIA Image Compute Engine: GPU-accelerated service with a REST API that resizes images up to five times faster than a CPU.
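To give a sense of how the suite's video path is used in practice, GPU-accelerated FFmpeg exposes NVIDIA's hardware encoder through standard command-line options. The invocation below is a minimal sketch, assuming an FFmpeg build compiled with NVENC support; the filenames are placeholders:

```shell
# Decode input.mp4 and re-encode it on the GPU with the NVENC H.264 encoder;
# requires an FFmpeg build configured with NVENC support and an NVENC-capable GPU.
ffmpeg -i input.mp4 -c:v h264_nvenc -preset fast -b:v 5M output.mp4
```

Because the encode runs on dedicated hardware, many such transcodes can run concurrently on a single GPU, which is the simultaneous-stream scenario behind the M4 throughput figures above.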
Mesosphere has also announced that it will be partnering with NVIDIA to bring GPU support to Apache Mesos and the Mesosphere Datacenter Operating System (DCOS). NVIDIA indicates that this will make it easier for web-services companies to build and deploy accelerated data centers for their next-generation applications.
NVIDIA will release the Tesla M40 GPU accelerator and Hyperscale Suite software later this year, while the Tesla M4 GPU is slated for release in the first quarter of 2016.