MT-UNET: A NOVEL U-NET BASED MULTI-TASK ARCHITECTURE FOR VISUAL SCENE UNDERSTANDING

Ankit Jha, Awanish Kumar, Shivam Pande, Biplab Banerjee, Subhasis Chaudhuri

28 Oct 2020

We tackle the problem of deep end-to-end multi-task learning (MTL) for jointly performing image segmentation and depth estimation from monocular images. It is well established that learning several related tasks jointly yields better per-task performance than training each task independently. To this end, we follow a typical U-Net based encoder-decoder architecture (MT-UNet) in which a densely connected deep convolutional neural network (CNN) feature encoder is shared among the tasks, while soft-attention based task-specific decoder modules produce the desired outputs. Additionally, we encourage cross-talk (CT) between the tasks by introducing cross-task skip connections at the decoder end, together with adaptive weight learning for the task-specific loss functions in the final cost measure; a sketch of this setup is given below. We validate the proposed framework on the challenging CityScapes and NYUv2 datasets, where our method clearly outperforms the current state-of-the-art.
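
To make the architecture concrete, the following is a minimal PyTorch sketch of the setup the abstract describes: a shared encoder, segmentation and depth decoders that exchange features through cross-task skip connections, and learnable weights that balance the two task losses. All module names, channel widths, and the uncertainty-style weighting rule are illustrative assumptions, not the authors' implementation; the densely connected encoder and soft-attention decoders are simplified to plain convolution blocks for brevity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_block(in_ch, out_ch):
        """Two 3x3 convolutions with ReLU, as in a standard U-Net stage."""
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class MTUNetSketch(nn.Module):
        """Shared encoder + two task decoders with cross-task skip connections.

        The paper's densely connected encoder and soft-attention decoders are
        replaced by plain conv blocks here to keep the sketch short.
        """

        def __init__(self, n_classes=19):  # 19 classes as in CityScapes
            super().__init__()
            # Shared encoder (two stages for brevity).
            self.enc1 = conv_block(3, 32)
            self.enc2 = conv_block(32, 64)
            # Stage 1 of each decoder: upsampled bottleneck + encoder skip.
            self.dec1_seg = conv_block(64 + 32, 32)
            self.dec1_dep = conv_block(64 + 32, 32)
            # Stage 2: each decoder also receives the *other* task's features
            # (the cross-task skip connection), hence the doubled input width.
            self.dec2_seg = conv_block(32 + 32, 32)
            self.dec2_dep = conv_block(32 + 32, 32)
            self.head_seg = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits
            self.head_dep = nn.Conv2d(32, 1, 1)          # per-pixel depth
            # Learnable log-variances for adaptive task weighting (an assumed
            # uncertainty-style scheme; the paper's exact rule may differ).
            self.log_var_seg = nn.Parameter(torch.zeros(()))
            self.log_var_dep = nn.Parameter(torch.zeros(()))

        def forward(self, x):
            s1 = self.enc1(x)                    # encoder skip features
            s2 = self.enc2(F.max_pool2d(s1, 2))  # bottleneck features
            up = F.interpolate(s2, scale_factor=2, mode="bilinear",
                               align_corners=False)
            f_seg = self.dec1_seg(torch.cat([up, s1], dim=1))
            f_dep = self.dec1_dep(torch.cat([up, s1], dim=1))
            # Cross-talk: exchange decoder features between the two tasks.
            g_seg = self.dec2_seg(torch.cat([f_seg, f_dep], dim=1))
            g_dep = self.dec2_dep(torch.cat([f_dep, f_seg], dim=1))
            return self.head_seg(g_seg), self.head_dep(g_dep)

        def loss(self, seg_logits, depth_pred, seg_gt, depth_gt):
            """Combine the task losses with the learned adaptive weights."""
            l_seg = F.cross_entropy(seg_logits, seg_gt)
            l_dep = F.l1_loss(depth_pred, depth_gt)
            return (torch.exp(-self.log_var_seg) * l_seg + self.log_var_seg
                    + torch.exp(-self.log_var_dep) * l_dep + self.log_var_dep)

    # Usage: one forward/backward step on dummy data.
    model = MTUNetSketch()
    x = torch.randn(2, 3, 64, 128)
    seg, dep = model(x)  # shapes (2, 19, 64, 128) and (2, 1, 64, 128)
    loss = model.loss(seg, dep,
                      torch.randint(0, 19, (2, 64, 128)),  # segmentation labels
                      torch.rand(2, 1, 64, 128))           # depth targets
    loss.backward()

In this weighting scheme, each task loss is scaled by the exponential of a learned negative log-variance, with the log-variance itself added as a regularizer so that neither task weight collapses to zero during training.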
