  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 1:21:45
05 Sep 2022

The operation of CNNs will be explained from the matched-filter perspective, and it will be shown that their very backbone, the convolution operation, represents a matched filter which examines the input for the presence of characteristic patterns in the data. This fact serves as a vehicle for a unifying account of the overall functionality of CNNs, whereby the convolution-activation-pooling chain and the learning strategies are also shown to admit a compact and elegant interpretation under the umbrella of matched filtering. Then, a review of graphs, as a basis for signals and signal processing on irregular domains, will be given.

Graph Convolutional Neural Networks (GCNNs) are becoming a model of choice for learning on irregular domains; yet, owing to the black-box nature of neural networks (NNs), their underlying principles are rarely examined in depth. To this end, we revisit the operation of GCNNs and show that, as in the standard CNN, the convolutional layer effectively performs graph matched filtering of its inputs with a set of predefined patterns (features). We then show how this serves as a basis for an analogous framework for understanding GCNNs which maintains physical relevance throughout the information flow, including nonlinear activations and max-pooling. Such an approach is quite general, and yields both standard CNNs and fully connected NNs as special cases. For enhanced intuition, a step-by-step numerical example is provided through which information propagation through GCNNs is visualized, illuminating all stages of GCNN operation and learning.
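As a rough illustration of the matched-filter view described above, the following NumPy sketch (with a made-up signal and pattern, not taken from the webinar) shows that cross-correlating an input with a template, which is the core operation of a convolutional layer, peaks exactly where the input contains that pattern, and that a ReLU activation followed by max-pooling then retains the strongest match:

```python
import numpy as np

# A "feature" the filter looks for, and a signal containing it at position 6.
pattern = np.array([1.0, -1.0, 1.0])
signal = np.zeros(16)
signal[6:9] = pattern

# A convolutional layer slides a kernel over the input; cross-correlation
# (convolution with the time-reversed template) acts as a matched filter.
response = np.correlate(signal, pattern, mode="valid")

# The matched-filter output peaks where the pattern occurs.
peak_location = int(np.argmax(response))   # -> 6

# ReLU keeps only positive (in-phase) matches; max-pooling keeps the strongest.
activated = np.maximum(response, 0.0)
pooled = activated.max()                   # -> 3.0, the full-match score
print(peak_location, pooled)
```

The same logic carries over to graphs: a graph convolutional layer correlates a localized template with the signal in each vertex's neighborhood, so its output can likewise be read as a graph matched-filter response.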
