November 16 | Virtual & Free

SigOpt AI & HPC Summit 2021

Talk: How Can We Be So Slow? Winning the Hardware Lottery by Accelerating Sparse Networks

11:30 am - 12:00 pm

Most deep learning networks today rely on dense representations. This stands in stark contrast to our brains, which are extremely sparse in both connectivity and activations. Implemented correctly, the potential performance benefits of sparsity in weights and activations are massive. Unfortunately, the benefits observed to date have been extremely limited. It is challenging to optimize training to achieve highly sparse yet accurate networks: hyperparameters and best practices that work for dense networks do not apply to sparse ones. In addition, it is difficult to implement sparse networks on hardware platforms designed for dense computations. In this talk we present novel sparse networks that achieve high accuracy and leverage sparsity to run 100X faster than their dense counterparts. We discuss the hyperparameter optimization strategies used to achieve high accuracy, and describe the hardware techniques developed to achieve this speedup. Our results show that a careful evaluation of the training process, combined with an optimized architecture, can dramatically scale deep learning networks in the future.
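The abstract distinguishes two kinds of sparsity: sparse weights (most connections are zero) and sparse activations (only a few units fire at a time). The following minimal sketch illustrates both ideas; it is not code from the talk, and the layer sizes, density, and top-k values are illustrative assumptions rather than values used in Numenta's networks.

```python
# Minimal sketch of weight sparsity and activation sparsity (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 256, 128
weight_density = 0.1   # fraction of connections kept (assumed value)
k_active = 16          # number of units allowed to fire (assumed value)

# Weight sparsity: a fixed binary mask zeroes out most of the weight matrix.
weights = rng.standard_normal((n_in, n_out))
mask = rng.random((n_in, n_out)) < weight_density
sparse_weights = weights * mask

# Activation sparsity: a k-winners-take-all step keeps only the k largest
# pre-activations and silences the rest.
x = rng.standard_normal(n_in)
pre = x @ sparse_weights
kth = np.partition(pre, -k_active)[-k_active]
activations = np.where(pre >= kth, pre, 0.0)

print(f"weight density: {mask.mean():.2%}")
print(f"active units:   {(activations != 0).sum()} / {n_out}")
```

In this sketch the zeroed weights and silenced activations are still stored and computed densely; the speedups described in the talk come from hardware and software that skip that wasted work entirely.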

Speaker

Subutai Ahmad, PhD

VP Research, Numenta, Inc.