Geospatial-Temporal Convolutional Neural Network For Video-Based Precipitation Intensity Recognition
Chih-Wei Lin, Suhui Yang
SPS
Length: 00:06:30
In this work, we propose a new framework, called the Geospatial-Temporal Convolutional Neural Network (GT-CNN), and construct a video-based geospatial-temporal precipitation dataset from surveillance cameras at eight weather stations (sampling points) to recognize precipitation intensity. GT-CNN has three key modules: (1) a geospatial module, (2) a temporal module, and (3) a fusion module. In the geospatial module, we extract precipitation information from every sampling point simultaneously and use an LSTM to model the geospatial relationships among the sampling points. In the temporal module, we apply 3D convolution to a series of precipitation images from each sampling point to capture precipitation features together with their temporal dynamics. Finally, the fusion module fuses the geospatial and temporal features. We evaluate the framework with three metrics on the self-collected dataset and compare GT-CNN with state-of-the-art methods. Experimental results demonstrate that our approach surpasses the state-of-the-art methods on these metrics.
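The three-module structure described above can be sketched in PyTorch. This is a minimal illustration of the abstract's wording only, not the authors' implementation: all layer sizes, channel counts, the sequence length, and the number of intensity classes are assumptions, and the class names (`GeospatialModule`, `TemporalModule`, `GTCNN`) are hypothetical.

```python
# Hedged sketch of the GT-CNN structure from the abstract.
# All hyperparameters below are assumptions, not taken from the paper.
import torch
import torch.nn as nn

NUM_POINTS = 8    # eight weather stations (sampling points), per the abstract
NUM_FRAMES = 6    # assumed length of each precipitation image sequence
NUM_CLASSES = 4   # assumed number of precipitation-intensity levels
FEAT_DIM = 32     # assumed feature width

class GeospatialModule(nn.Module):
    """Per-point 2D CNN features, then an LSTM across the sampling points."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, FEAT_DIM))
        self.lstm = nn.LSTM(FEAT_DIM, FEAT_DIM, batch_first=True)

    def forward(self, x):                 # x: (B, P, 3, H, W)
        B, P = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(B, P, FEAT_DIM)
        out, _ = self.lstm(f)             # relationships between sampling points
        return out[:, -1]                 # (B, FEAT_DIM)

class TemporalModule(nn.Module):
    """3D convolution over a frame sequence from one sampling point."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, FEAT_DIM))

    def forward(self, x):                 # x: (B, 3, T, H, W)
        return self.net(x)

class GTCNN(nn.Module):
    """Fusion module: concatenate geospatial and temporal features, classify."""
    def __init__(self):
        super().__init__()
        self.geo = GeospatialModule()
        self.tmp = TemporalModule()
        self.fuse = nn.Linear(2 * FEAT_DIM, NUM_CLASSES)

    def forward(self, frames_per_point, frame_sequence):
        g = self.geo(frames_per_point)    # geospatial features
        t = self.tmp(frame_sequence)      # temporal features
        return self.fuse(torch.cat([g, t], dim=1))

model = GTCNN()
points = torch.randn(2, NUM_POINTS, 3, 32, 32)  # one frame per station
seq = torch.randn(2, 3, NUM_FRAMES, 32, 32)     # frame sequence, one station
logits = model(points, seq)
print(logits.shape)                              # torch.Size([2, 4])
```

The sketch fuses by simple concatenation followed by a linear classifier; the paper's actual fusion module may differ.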