17 Oct 2022

This paper introduces a novel coding/decoding mechanism that mimics one of the most important properties of the human visual system: its ability to enhance visual perception quality over time. We propose a compression architecture built on neuroscience models; it first uses the leaky integrate-and-fire (LIF) model to transform the visual stimulus into a spike train, and then combines two kinds of spike interpretation mechanisms (SIM), the time-SIM and the rate-SIM, to encode the spike train. The time-SIM allows a high-quality interpretation of the neural code, while the rate-SIM allows a simple decoding mechanism by counting the spikes. For this reason, the proposed mechanism is called the Dual-SIM quantizer (Dual-SIMQ). We show that (i) the time dependency of Dual-SIMQ automatically controls the reconstruction accuracy of the visual stimulus, (ii) numerical comparison to the state of the art shows that the proposed algorithm performs similarly to the uniform quantization scheme while approximating the optimal behavior of the non-uniform quantization scheme, and (iii) from the perceptual point of view, the reconstruction quality of Dual-SIMQ is higher than the state of the art.
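The exact Dual-SIMQ construction is given in the paper; as a rough illustration of the ingredients the abstract names, the toy Python sketch below encodes a 1-D signal with a leaky integrate-and-fire neuron and then reads the resulting spike train in two ways: by spike count (the rate-SIM view) and by spike times (the time-SIM view). All function names and parameter values here are hypothetical and chosen only for illustration; they are not taken from the paper.

```python
import numpy as np

def lif_encode(stimulus, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Encode a 1-D stimulus (input drive over time) into a spike train
    using a leaky integrate-and-fire neuron (forward-Euler integration)."""
    v = v_rest
    spikes = np.zeros(len(stimulus), dtype=bool)
    for t, drive in enumerate(stimulus):
        # Leaky integration: decay toward rest plus the external drive.
        v += dt * (-(v - v_rest) / tau + drive)
        if v >= v_thresh:        # threshold crossing -> emit a spike
            spikes[t] = True
            v = v_reset          # reset the membrane potential
    return spikes

def rate_sim(spikes):
    """Rate-SIM-style reading: interpret the code by counting spikes."""
    return int(spikes.sum())

def time_sim(spikes, dt=1e-3):
    """Time-SIM-style reading: interpret the code from the spike timings."""
    return np.flatnonzero(spikes) * dt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stimulus = 40.0 * rng.random(1000)   # toy stand-in for a visual stimulus
    spikes = lif_encode(stimulus)
    print("spike count (rate-SIM):", rate_sim(spikes))
    print("first spike times (time-SIM):", time_sim(spikes)[:5])
```

In this sketch the same spike train supports both readings, which is the intuition behind combining the two SIMs: the spike count gives a cheap, coarse decode, while the spike times carry the finer temporal information.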
