Large deviation analysis of function sensitivity in random deep neural networks

Bo Li, David Saad

Research output: Contribution to journal › Article › peer-review

Abstract

Mean field theory has been successfully used to analyze deep neural networks (DNNs) in the infinite-size limit. Given the finite size of realistic DNNs, we utilize large deviation theory and path integral analysis to study the deviation of functions represented by DNNs from their typical mean field solutions. The parameter perturbations investigated include weight sparsification (dilution) and binarization, which are commonly used in model simplification, for both ReLU and sign activation functions. We find that random networks with ReLU activation are more robust to parameter perturbations than their counterparts with sign activation, which arguably reflects the simplicity of the functions they generate.
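The perturbations studied in the abstract (weight dilution and binarization) can be illustrated numerically. The sketch below, which is not the paper's large deviation or path integral analysis but a minimal Monte Carlo illustration under assumed conventions (Gaussian weights with 1/√N pre-activation scaling; the function and perturbation names are ours), applies both perturbations to a random deep network and measures the relative deviation of the output for ReLU and sign activations:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, act):
    """Propagate x through a fully connected net; 1/sqrt(N) keeps pre-activations O(1)."""
    h = x
    for W in weights:
        h = act(W @ h / np.sqrt(len(h)))
    return h

def perturb_binarize(weights):
    """Binarize each weight to +/- its layer RMS magnitude (sign is preserved)."""
    return [np.sign(W) * np.sqrt(np.mean(W**2)) for W in weights]

def perturb_dilute(weights, p=0.3):
    """Sparsify by zeroing a random fraction p of the weights in each layer."""
    return [W * (rng.random(W.shape) > p) for W in weights]

N, depth = 200, 5
weights = [rng.standard_normal((N, N)) for _ in range(depth)]
x = rng.standard_normal(N)

activations = {"ReLU": lambda z: np.maximum(z, 0.0), "sign": np.sign}
perturbations = {"binarize": perturb_binarize, "dilute": perturb_dilute}

results = {}
for aname, act in activations.items():
    y0 = forward(x, weights, act)
    for pname, pert in perturbations.items():
        y1 = forward(x, pert(weights), act)
        # normalized output deviation as a simple sensitivity measure
        results[(aname, pname)] = np.linalg.norm(y1 - y0) / (np.linalg.norm(y0) + 1e-12)
        print(f"{aname} / {pname}: relative output deviation = {results[(aname, pname)]:.3f}")
```

Averaging such deviations over many random networks and inputs would give an empirical analogue of the typical (mean field) sensitivity; the paper's large deviation analysis characterizes the rare networks that deviate far from that typical behaviour.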
Original language: English
Article number: 104002
Journal: Journal of Physics A: Mathematical and Theoretical
Volume: 53
Issue number: 10
Early online date: 10 Jan 2020
DOIs
Publication status: Published - 20 Feb 2020

Bibliographical note

© 2020 The Author(s). Published by IOP Publishing Ltd. Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Keywords

  • deep neural networks
  • function sensitivity
  • large deviation theory
  • path integral
