Paper ID | MLR-APPL-IVASR-2.7
Paper Title | Geospatial-temporal Convolutional Neural Network for Video-Based Precipitation Intensity Recognition
Authors | Chih-Wei Lin, Suhui Yang, Fujian Agriculture and Forestry University, China
Session | MLR-APPL-IVASR-2: Machine learning for image and video analysis, synthesis, and retrieval 2
Location | Area D
Session Time | Monday, 20 September, 15:30 - 17:00
Presentation Time | Monday, 20 September, 15:30 - 17:00
Presentation | Poster
Topic | Applications of Machine Learning: Machine learning for image & video analysis, synthesis, and retrieval
IEEE Xplore Open Preview | Click here to view in IEEE Xplore
Abstract | In this work, we propose a new framework, the Geospatial-temporal Convolutional Neural Network (GT-CNN), and construct a video-based geospatial-temporal precipitation dataset from the surveillance cameras of eight weather stations (sampling points) to recognize precipitation intensity. GT-CNN has three key modules: (1) a geospatial module, (2) a temporal module, and (3) a fusion module. In the geospatial module, we extract precipitation information from each sampling point simultaneously and use an LSTM to model the geospatial relationships between the sampling points. In the temporal module, we apply 3D convolution to a series of precipitation images from each sampling point to extract precipitation features with temporal information. Finally, the fusion module fuses the geospatial and temporal features. We evaluate our framework with three metrics on the self-collected dataset and compare GT-CNN with state-of-the-art methods. Experimental results demonstrate that our approach surpasses the state of the art across these metrics.
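The abstract's three-module design (per-station 3D convolution for temporal features, an LSTM across stations for geospatial relationships, and a fusion stage) can be sketched as a minimal PyTorch module. This is an illustrative reconstruction under assumed layer sizes, channel counts, and class count, not the authors' published implementation; the class name `GTCNN` and all dimensions here are hypothetical.

```python
import torch
import torch.nn as nn


class GTCNN(nn.Module):
    """Illustrative sketch of the GT-CNN idea (hypothetical sizes, not the paper's).

    - Temporal module: 3D convolution over each station's image sequence.
    - Geospatial module: LSTM over the per-station feature vectors.
    - Fusion module: concatenate pooled temporal and geospatial features, then classify.
    """

    def __init__(self, num_stations: int = 8, num_classes: int = 4, feat_dim: int = 32):
        super().__init__()
        # Temporal module: 3D conv over (channels, time, height, width) per station.
        self.temporal = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> one 8-dim vector per clip
        )
        self.temporal_proj = nn.Linear(8, feat_dim)
        # Geospatial module: LSTM steps across the station dimension,
        # relating features from the different sampling points.
        self.geospatial = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        # Fusion module: combine temporal and geospatial summaries.
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, stations, channels, time, height, width)
        b, s = x.shape[:2]
        frames = x.flatten(0, 1)                       # (B*S, C, T, H, W)
        t = self.temporal(frames).flatten(1)           # (B*S, 8)
        t = self.temporal_proj(t).view(b, s, -1)       # (B, S, feat_dim)
        g, _ = self.geospatial(t)                      # (B, S, feat_dim)
        fused = torch.cat([t.mean(1), g.mean(1)], -1)  # (B, 2*feat_dim)
        return self.classifier(fused)                  # (B, num_classes) logits


# Usage: 2 samples, 8 stations, 3-channel clips of 4 frames at 16x16.
model = GTCNN(num_stations=8, num_classes=4)
logits = model(torch.randn(2, 8, 3, 4, 16, 16))  # -> shape (2, 4)
```

The design choice mirrored here is that the two modules consume the same per-station features: the temporal path summarizes each station's clip independently, while the LSTM path reads those summaries as a sequence over stations so that precipitation evidence at one site can inform the others before fusion.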