ICLR 2020: Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network
1) The slides present a new compression-based bound on the generalization error of large deep neural networks, one that applies even when the trained network is never explicitly compressed.
2) They show that if a trained network's weight matrices and layerwise covariance matrices are close to low rank, then the network has small intrinsic dimensionality and can be compressed efficiently (a minimal sketch of this idea follows the list).
3) This yields a generalization bound tighter than existing approaches, offering insight into why overparameterized networks generalize well despite having more parameters than training examples.
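To make point 2 concrete, here is a minimal sketch in Python with NumPy (not code from the slides) of the compressibility argument: when a trained weight matrix is close to low rank, a truncated SVD reproduces it with far fewer parameters, and that reduced parameter count is the kind of intrinsic dimensionality a compression-based bound depends on. The rank threshold and the toy weight matrix are illustrative assumptions.

import numpy as np

def effective_rank(W, tol=0.05):
    # Count singular values above tol * (largest singular value).
    s = np.linalg.svd(W, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

def low_rank_compress(W, r):
    # Best rank-r approximation of W via truncated SVD (Eckart-Young).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# Toy example: a 1000 x 1000 layer whose weights are nearly rank 20.
rng = np.random.default_rng(0)
W = rng.standard_normal((1000, 20)) @ rng.standard_normal((20, 1000))
W += 0.01 * rng.standard_normal((1000, 1000))  # small full-rank noise

r = effective_rank(W)
W_hat = low_rank_compress(W, r)
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(f"effective rank: {r} of {min(W.shape)}")
print(f"relative compression error: {rel_err:.4f}")
# A rank-r factorization stores r * (m + n) numbers instead of m * n,
# so a small effective rank means the trained network is compressible
# without actually being compressed, which is what the bound exploits.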