ONLINE CACHING WITH FETCHING COST FOR ARBITRARY DEMAND PATTERN: A DRIFT-PLUS-PENALTY APPROACH
Shashank P (IIT Dharwad); Bharath Bettagere (IIT Dharwad)

In this paper, we investigate the problem of caching in a single-server setting from a stochastic optimization viewpoint. The goal is to optimize the time-average cache hit subject to a time-average constraint on the fetching cost and the cache capacity constraint, when the demands are non-stationary and possibly correlated across time. We propose a modified Drift-Plus-Penalty (DPP) algorithm in which, at each time slot, we greedily minimize the fetching cost minus an estimated cache hit scaled by a factor $V>0$. Since the problem does not admit an equilibrium optimal solution, we use a $T$-slot \emph{lookahead} metric: we benchmark the performance of the proposed algorithm against a genie-aided cache hit that has access to the demands of the next $T$ slots. We show that, with probability at least $1-\delta$, the cache hit of the proposed algorithm relative to the genie scales as $\mathcal{O}\left(\frac{T^2 + T \log R}{V \sqrt{R}} + \frac{T}{V}\right) + \texttt{mse}_{R,T}$, and a fetching cost of $\mathcal{O}\left(\frac{V \log(\frac{1}{\delta})}{R}\right)$ is achievable; here, time is divided into $R$ blocks of $T$ slots each, and $\texttt{mse}_{R,T}$ is the mean squared error (MSE) of the predictor. These bounds suggest the following guidelines for better performance: (i) the MSE of the predictor should be small; (ii) $V$ should be chosen large to achieve a better cache hit, at the expense of a higher fetching cost; (iii) a larger $R$ is needed to compensate for a larger $V$, i.e., more time is required to achieve a lower fetching cost. We corroborate these findings on a real-world dataset and show that the proposed algorithm outperforms several well-known caching algorithms.
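To make the per-slot greedy rule concrete, the following is a minimal illustrative sketch of a DPP-style cache update: each item is scored by $V$ times its predicted hit probability minus the cost of fetching it if it is not already cached, and the highest-scoring items are kept. The function name, arguments, and the single scalar `fetch_cost` are our own simplifying assumptions, not the paper's notation or exact algorithm.

```python
import numpy as np

def dpp_cache_step(prev_cache, predicted_popularity, fetch_cost, cache_size, V):
    """One slot of a greedy drift-plus-penalty style cache update (illustrative sketch).

    prev_cache           -- set of item indices currently cached
    predicted_popularity -- predictor's estimate of each item's hit probability
    fetch_cost           -- cost incurred per item fetched into the cache (assumed scalar)
    cache_size           -- maximum number of items the cache can hold
    V                    -- trade-off factor: larger V weights cache hit over fetching cost
    """
    n_items = len(predicted_popularity)
    scores = np.empty(n_items)
    for i in range(n_items):
        # Fetching is only paid for items not already in the cache.
        cost = 0.0 if i in prev_cache else fetch_cost
        # Greedy objective per item: V * (estimated hit) - (fetching cost).
        scores[i] = V * predicted_popularity[i] - cost
    # Keep the cache_size items with the largest scores.
    new_cache = set(int(i) for i in np.argsort(scores)[-cache_size:])
    return new_cache
```

With a large $V$ the update chases the predictor's popular items regardless of fetching cost; with a small $V$ it prefers to retain already-cached items, mirroring the cache-hit vs. fetching-cost trade-off described above.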