Poster 10 Oct 2023

The task of image outpainting extends an image beyond its boundaries with semantically plausible content. Recently, the Scene Graph Transformer (SGT) introduced a transformer architecture that leverages scene graph guidance for image outpainting. Despite its success, we identify two shortcomings: (a) SGT uses a positional encoding originally proposed for 1D signals; (b) SGT uses a scene graph attention layer that propagates information only between neighboring nodes, which limits the model to learning local graph features. To address these issues, we propose incorporating Laplacian positional encoding and introducing multi-scale scene graph attention into SGT. Extensive experiments on MS-COCO and Visual Genome show that our approach generates more plausible outpainted images of higher quality.
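The abstract does not give implementation details, so the sketch below is only a rough illustration of how Laplacian positional encodings are commonly computed for graph nodes: the eigenvectors of the normalized graph Laplacian act as per-node coordinates, playing a role analogous to sinusoidal encodings on a 1D sequence. The adjacency construction, normalization choice, and encoding dimensionality here are assumptions, not the authors' implementation.

```python
# Minimal sketch (NumPy) of Laplacian positional encoding for scene-graph nodes.
# Assumes an undirected, unweighted scene graph; the paper's exact variant
# (normalization, number of eigenvectors, sign handling) may differ.
import numpy as np

def laplacian_positional_encoding(adj: np.ndarray, k: int) -> np.ndarray:
    """Return a (num_nodes, k) matrix of Laplacian eigenvector features.

    adj : symmetric 0/1 adjacency matrix of the scene graph (no self-loops).
    k   : number of positional-encoding dimensions per node.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros(n)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # eigh returns eigenvalues in ascending order; drop the trivial first mode.
    eigvals, eigvecs = np.linalg.eigh(lap)
    pe = eigvecs[:, 1:k + 1]
    # Pad with zeros when the graph has fewer than k + 1 nodes.
    if pe.shape[1] < k:
        pe = np.pad(pe, ((0, 0), (0, k - pe.shape[1])))
    # Eigenvectors are sign-ambiguous; a common remedy is random sign flipping
    # during training so the model does not latch onto one arbitrary sign.
    return pe

# Hypothetical 4-node scene graph (e.g., person-holds-racket, person-on-court).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
node_pe = laplacian_positional_encoding(adj, k=2)
print(node_pe.shape)  # (4, 2) -- one k-dimensional encoding per node
```

In practice, such encodings are typically added to or concatenated with the node (object) embeddings before the scene graph attention layers, giving the transformer a notion of each node's position in the graph structure rather than in a 1D token order.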