Length: 00:13:45
10 Jun 2021

Weak submodular optimization underpins many problems in signal processing and machine learning. For such problems, under a cardinality constraint, a simple greedy algorithm is guaranteed to find a solution whose value is within a factor of $1-e^{-\gamma}$ of the optimal, where $\gamma$ is the submodularity ratio of the objective. Given the high cost of queries to large-scale signal processing models, the complexity of Greedy becomes prohibitive in modern applications. In this work, we study the tradeoff between performance and complexity when random sampling strategies are used to reduce the query complexity of Greedy. Specifically, we quantify the effect of uniform sampling on performance through two criteria: (i) the probability of identifying an optimal subset, and (ii) the suboptimality of the solution's value with respect to the optimal. Building upon these insights, we propose a simple progressive stochastic greedy algorithm, study its approximation guarantees, and consider its applications to dimensionality reduction and feature selection tasks.
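The progressive stochastic greedy algorithm itself is not spelled out in this abstract. As a point of reference, the Python sketch below shows the standard uniform-sampling (stochastic greedy) step that such methods build on, for a generic monotone set function; the function name stochastic_greedy, the toy coverage objective, and all parameter values are illustrative assumptions rather than the authors' implementation.

import math
import random

def stochastic_greedy(f, ground_set, k, eps=0.1, seed=0):
    # Select k elements that approximately maximize a monotone set function f
    # under a cardinality constraint. Instead of scanning every remaining
    # element at each step (as plain Greedy does), each iteration evaluates
    # marginal gains only on a uniformly sampled pool of roughly
    # (n / k) * log(1 / eps) elements, cutting the number of f-queries
    # from O(n k) to O(n log(1 / eps)).
    rng = random.Random(seed)
    selected = []
    remaining = list(ground_set)
    pool_size = max(1, math.ceil(len(remaining) / k * math.log(1.0 / eps)))
    for _ in range(k):
        if not remaining:
            break
        pool = rng.sample(remaining, min(pool_size, len(remaining)))
        base = f(selected)
        # Greedy step restricted to the sampled pool: pick the element with
        # the largest marginal gain f(S + e) - f(S).
        best = max(pool, key=lambda e: f(selected + [e]) - base)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: weighted coverage, a monotone submodular objective.
if __name__ == "__main__":
    weights = [3, 1, 4, 1, 5, 9, 2, 6]
    items = [{0, 1}, {1, 2, 3}, {3, 4}, {4, 5, 6}, {6, 7}, {0, 5}]

    def coverage(subset):
        covered = set().union(*(items[i] for i in subset)) if subset else set()
        return sum(weights[u] for u in covered)

    print(stochastic_greedy(coverage, range(len(items)), k=3))

The pool size follows the common (n/k) * log(1/eps) rule from the stochastic-greedy literature, which trades a small loss eps in the approximation factor for a large reduction in query count; the uniform-sampling analysis described in the abstract quantifies exactly this kind of tradeoff.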

Chairs:
Ignacio Santamaría
