DIAG2GRAPH: REPRESENTING DEEP LEARNING DIAGRAMS IN RESEARCH PAPERS AS KNOWLEDGE GRAPHS
Aditi Roy, Ioannis Akrotirianakis, Amar V. Kannan, Dmitriy Fradkin, Arquimedes Canedo, Kaushik Koneripalli, Tugba Kulahcioglu
SPS
‘Which segmentation algorithms proposed during 2018-2019 in CVPR have a CNN architecture?’ Answering this question involves identifying and analyzing the deep learning architecture diagrams from several research papers. Retrieving such information poses a significant challenge, as most existing academic search engines are based only on text content. In this paper, we introduce Diag2Graph, an end-to-end framework for parsing deep learning diagram-figures that enables powerful search and retrieval of architectural details in research papers. Our proposed approach automatically localizes figures in research papers, classifies them, and analyzes the content of the diagram-figures. The key steps in analyzing figure content are extracting the individual components and identifying their structural relations. Finally, the extracted components and their relations are represented as a deep knowledge graph. A thorough evaluation on a real-world annotated dataset demonstrates the efficacy of our approach.
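To make the output representation concrete, the sketch below models a diagram-figure's extracted components and their structural relations as a small knowledge graph of (source, relation, target) triples. This is an illustrative assumption about the data model, not the authors' actual implementation; all class and relation names (`Component`, `DiagramGraph`, `feeds_into`) are hypothetical.

```python
# Hypothetical sketch of a Diag2Graph-style output: components extracted
# from a deep learning architecture diagram, linked by structural relations.
# Not the authors' API; names are illustrative only.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Component:
    name: str   # e.g. "conv1"
    ctype: str  # e.g. "Conv2D", "Pooling", "FC"

@dataclass
class DiagramGraph:
    nodes: set = field(default_factory=set)
    edges: set = field(default_factory=set)  # (src, relation, dst) triples

    def add_component(self, comp: Component) -> None:
        self.nodes.add(comp)

    def add_relation(self, src: Component, relation: str, dst: Component) -> None:
        self.edges.add((src.name, relation, dst.name))

    def components_of_type(self, ctype: str) -> list:
        """Support type-based queries, e.g. 'which figures contain Conv2D layers?'"""
        return sorted(c.name for c in self.nodes if c.ctype == ctype)

# Build a tiny graph for a CNN diagram: conv -> pool -> fc
conv = Component("conv1", "Conv2D")
pool = Component("pool1", "Pooling")
fc = Component("fc1", "FC")

g = DiagramGraph()
for c in (conv, pool, fc):
    g.add_component(c)
g.add_relation(conv, "feeds_into", pool)
g.add_relation(pool, "feeds_into", fc)

print(g.components_of_type("Conv2D"))               # ['conv1']
print(("conv1", "feeds_into", "pool1") in g.edges)  # True
```

A triple store like this is what makes structured queries (by layer type, by connectivity) possible over diagram content that plain-text search cannot reach.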