ICML 2017

Combined Group and Exclusive Sparsity for Deep Neural Networks

Jaehong Yoon and SungJu Hwang

The number of parameters in a deep neural network is usually very large, which helps its learning capacity but hinders its scalability and practicality due to memory/time inefficiency and overfitting. To resolve this issue, we propose a sparsity regularization method that exploits both positive and negative correlations among the features to enforce sparsity on the network, while at the same time removing redundancies among the features so that the network's capacity is fully utilized.
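The regularizer described above combines a group-sparsity penalty (which zeroes out entire groups of weights, exploiting positive correlations) with an exclusive-sparsity penalty (which promotes competition among weights within a group, exploiting negative correlations). Below is a minimal NumPy sketch of such a combined penalty on a weight matrix whose columns are treated as groups; the function names, the choice of columns as groups, and the mixing parameter `mu` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def group_sparsity(W):
    # (2,1)-norm: sum over groups (columns) of the l2 norm;
    # drives whole groups of weights to zero together.
    return np.sum(np.sqrt(np.sum(W ** 2, axis=0)))

def exclusive_sparsity(W):
    # (1,2)-style penalty: sum over groups of the squared l1 norm;
    # encourages sparsity *within* each group, so features compete.
    return 0.5 * np.sum(np.sum(np.abs(W), axis=0) ** 2)

def combined_regularizer(W, mu=0.5):
    # Convex combination of the two penalties; mu (a hypothetical
    # hyperparameter here) trades off within-group exclusivity
    # against group-wise sparsity.
    return mu * exclusive_sparsity(W) + (1.0 - mu) * group_sparsity(W)
```

In practice such a penalty would be added to the training loss and minimized jointly with it, e.g. via proximal gradient descent, so that redundant groups are pruned while the surviving groups stay internally sparse.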
