
Text2Poster: Laying out Stylized Texts on Retrieved Images

Chuhao Jin, Hongteng Xu, Ruihua Song, Zhiwu Lu

11 May 2022

Poster generation is important for a wide range of applications, but it is often time-consuming and requires substantial manual editing and artistic expertise. In this paper, we propose a novel data-driven framework, called Text2Poster, that automatically generates visually effective posters from textual information. Imitating the process of manual poster editing, our framework lays out stylized texts on retrieved images: it leverages a large-scale pretrained visual-textual model to retrieve background images for the given texts, lays out the texts on the images iteratively with cascaded auto-encoders, and finally stylizes the texts with a matching-based method. We train the modules of the framework with weakly- and self-supervised learning strategies, mitigating the demand for labeled data. Both objective and subjective experiments demonstrate that Text2Poster outperforms state-of-the-art methods, including academic work and commercial software, in the quality of the generated posters.
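
The retrieve-then-lay-out pipeline sketched in the abstract can be illustrated with a minimal, hypothetical Python example. It uses an off-the-shelf CLIP checkpoint as a stand-in for the paper's pretrained visual-textual retrieval model and a fixed top-left text placement as a stand-in for the cascaded layout auto-encoders; the checkpoint name, file paths, and placement rule are illustrative assumptions, not the authors' released implementation.

    # Hypothetical sketch of a Text2Poster-style pipeline:
    # (1) retrieve a background image by text-image matching with a pretrained
    #     visual-textual model (CLIP used here as a stand-in), then
    # (2) draw the poster text onto the retrieved image (a fixed placement
    #     stands in for the paper's cascaded layout auto-encoders).
    import torch
    from PIL import Image, ImageDraw, ImageFont
    from transformers import CLIPModel, CLIPProcessor

    def retrieve_background(query: str, candidate_paths: list[str]) -> Image.Image:
        """Return the candidate image whose CLIP embedding best matches the query text."""
        model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
        images = [Image.open(p).convert("RGB") for p in candidate_paths]
        inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
        with torch.no_grad():
            # logits_per_text has shape (1, num_images); higher means a better match.
            logits = model(**inputs).logits_per_text
        return images[int(logits.argmax())]

    def lay_out_text(background: Image.Image, text: str) -> Image.Image:
        """Naively place the text near the top-left corner (layout-model stand-in)."""
        poster = background.copy()
        draw = ImageDraw.Draw(poster)
        font = ImageFont.load_default()
        draw.text((poster.width // 10, poster.height // 10), text, fill="white", font=font)
        return poster

    if __name__ == "__main__":
        # The image paths and poster text below are placeholders.
        bg = retrieve_background("jazz night by the river", ["img1.jpg", "img2.jpg"])
        lay_out_text(bg, "Jazz Night\nMay 11, 8 PM").save("poster.png")

In the actual framework, the fixed placement above would be replaced by the learned layout prediction and the default font by the matching-based text stylization described in the paper.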