A Variational Inequality Model for Learning Neural Networks
Patrick Combettes; Jean-Christophe Pesquet; Audrey Repetti (Heriot-Watt University)
Neural networks have become ubiquitous tools for solving signal and
image processing problems, and they often outperform standard
approaches. Nevertheless, training the layers of a neural network
is a challenging task in many applications. The prevalent training
procedure consists of minimizing highly non-convex objectives based
on data sets of huge dimension. In this context, current
methodologies are not guaranteed to produce global solutions. We
present an alternative approach which forgoes the optimization
framework and adopts a variational inequality formalism. The
associated algorithm guarantees convergence of the iterates to a
true solution of the variational inequality and it possesses an
efficient block-iterative structure. A numerical application is
presented.
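
To make the variational inequality viewpoint concrete, the sketch below solves a generic monotone variational inequality VI(F, C) with a classical fixed-step projection method. This is only an illustrative textbook scheme, not the block-iterative algorithm of the paper; the operator F, the constraint set C, and the step size gamma are hypothetical choices used for demonstration.

import numpy as np

# Illustrative sketch only: solve VI(F, C), i.e. find x* in C such that
#     <F(x*), x - x*> >= 0  for all x in C,
# with the classical projection iteration x_{k+1} = P_C(x_k - gamma F(x_k)).
# F, C, and gamma below are hypothetical; this is not the paper's algorithm.

rng = np.random.default_rng(0)

# Strongly monotone, Lipschitz operator F(x) = A x + b with A symmetric
# positive definite (eigenvalues in [mu, L]).
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)
F = lambda x: A @ x + b

mu = np.linalg.eigvalsh(A).min()   # strong monotonicity constant
L = np.linalg.eigvalsh(A).max()    # Lipschitz constant

# Constraint set C: the box [-1, 1]^n, whose projection is a simple clip.
proj_C = lambda x: np.clip(x, -1.0, 1.0)

# The fixed-step projection method converges for 0 < gamma < 2*mu/L**2
# in this strongly monotone, Lipschitz setting.
gamma = mu / L**2
x = np.zeros(n)
for k in range(5000):
    x_new = proj_C(x - gamma * F(x))
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new

# A solution is characterized by the fixed point x* = P_C(x* - gamma F(x*)).
print("iterations:", k, "residual:", np.linalg.norm(x - proj_C(x - gamma * F(x))))

In the simple setting above, convergence of the iterates to the unique solution of VI(F, C) is a standard result; the paper's contribution concerns a far more structured formulation adapted to neural network training, together with a block-iterative implementation.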