21 Sep 2020

Hyper-parameter optimization remains the core issue in Gaussian process (GP) machine learning today. The benchmark method, maximum likelihood (ML) estimation combined with gradient descent (GD), is impractical for big data due to its O(n^3) complexity. Many sophisticated global and local approximation models have been proposed to address this complexity issue. In this paper, we propose two novel and exact GP hyper-parameter training schemes that replace ML with cross-validation (CV) as the fitting criterion and replace GD with a non-linearly constrained alternating direction method of multipliers (ADMM) as the optimization method. The proposed schemes have O(n^2) complexity for any covariance matrix without special structure. We conduct experiments on synthetic and real datasets, in which the proposed schemes show excellent performance in terms of convergence, hyper-parameter estimation, and computational time compared with traditional ML-based routines.
