GPU-enabled hardware is a must-have for Deep Learning. As Deep Learning methods proliferate across industry, science, and society, the abundance of GPUs has increased dramatically as well. And yet, the total cost of ownership of a GPU cluster is often cited as an argument in favor of using cloud-based resources.
In this presentation, we share reproducible benchmarks of GPUs in the cloud versus GPUs in local servers and discuss their relevance for Deep Learning. We also present benchmarks of training state-of-the-art networks in the cloud and compare their performance to local GPU infrastructure.
Attendees should be familiar with the key performance challenges of a standard Convolutional Neural Network. Basic knowledge of modern CPU and GPU design is beneficial, but not required.
In this talk, we outline our approach to benchmarking hardware in the context of deep learning. We highlight the hardware and software used and discuss the results in depth. We hope this will enable attendees to transfer our findings to their own use cases.
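To make the notion of a reproducible benchmark concrete, the sketch below shows a minimal timing harness of the kind such comparisons rest on: repeat a workload several times and report mean and standard deviation rather than a single measurement. The harness and the toy workload are illustrative assumptions, not the speakers' actual benchmark code, which would time full training steps on GPU hardware.

```python
import statistics
import timeit


def benchmark(fn, repeats=5, number=10):
    """Run fn `number` times per trial, over `repeats` trials.

    Returns (mean, stdev) of the per-call wall time in seconds.
    Reporting spread across trials is what makes a result
    reproducible rather than a one-off measurement.
    """
    per_call = [
        timeit.timeit(fn, number=number) / number
        for _ in range(repeats)
    ]
    return statistics.mean(per_call), statistics.stdev(per_call)


# Toy CPU-bound workload standing in for a real training step
# (assumption: an actual benchmark would time a forward/backward
# pass of a network on the device under test).
def workload():
    sum(i * i for i in range(10_000))


if __name__ == "__main__":
    mean_s, stdev_s = benchmark(workload)
    print(f"{mean_s * 1e6:.1f} +/- {stdev_s * 1e6:.1f} us per call")
```

On a GPU, the same pattern applies, with the added caveat that asynchronous kernel launches must be synchronized before stopping the clock, or the measurement reflects launch overhead rather than compute time.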
// Peter Steinbach
is an IT specialist and Scientific Software Engineer at the Max Planck Institute of Molecular Cell Biology and Genetics (as a client of Scionics Computer Innovation GmbH), where he helps scientists port their code to GPUs and other parallel architectures, run it on HPC clusters, and provides general (big) data analysis and software development support.
is a research scientist at Zalando and a postdoctoral researcher and teacher at the Free University of Berlin. His current research interests include Deep Learning, Deep Reinforcement Learning, Approximate Algorithms for Big Data, High Performance Computing, and Distributed Systems. He is a cofounder of two startups in the areas of Geospatial Databases and Crowdsourced Logistics.