Multi-modal Spatio-temporal Forecasting in Sensor-less Regions: A Dual-stage Graph Approach from Disease to Crime
Published in Proceedings of the 33rd ACM International Conference on Advances in Geographic Information Systems, 2025
Spatio-temporal forecasting is critical for urban applications such as epidemic control and crime prevention, yet many existing methods assume dense, consistent sensor data that is often unavailable due to infrastructural or cost constraints. This work explores the challenge of forecasting in sensor-less regions, where direct temporal observations are missing. We build on two prior studies: the Multi-View Graph Fusion Approach with Approximation Module (MVGAM) for disease risk prediction, and the Graph Disentangler with POI Weighted Module (GDPW), a contrastive learning framework for enhancing POI embeddings. Extending these, we outline a new research direction. Our framework integrates large language models (LLMs), gated recurrent units (GRUs), and multi-layer perceptrons (MLPs) to encode multi-modal signals, with contrastive learning aligning the heterogeneous representations. A dual-stage graph propagation mechanism consolidates knowledge in sensor-rich areas and transfers it to sensor-less regions via localized subgraphs. Using crime forecasting in Chicago as an anticipated case study, this work lays the foundation for robust and interpretable forecasting in data-scarce urban settings.
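The dual-stage propagation idea can be illustrated with a minimal sketch (this is our own toy illustration, not the paper's implementation): stage one consolidates representations among sensor-rich nodes, and stage two transfers them to a sensor-less node through its localized subgraph. The graph, feature values, and residual mixing weight below are all hypothetical.

```python
import numpy as np

# Toy 4-node region graph. Nodes 0-2 are sensor-rich with observed
# feature vectors; node 3 is sensor-less and starts with zeros.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

X = np.array([[1.0, 0.2],
              [0.8, 0.4],
              [0.6, 0.6],
              [0.0, 0.0]])  # node 3 has no sensor readings

# Row-normalize the adjacency so each node averages over its neighbours.
P = A / A.sum(axis=1, keepdims=True)

H = X.copy()
rich = [0, 1, 2]

# Stage 1: consolidate knowledge in the sensor-rich part of the graph.
# One propagation step, with a residual term so observed signals are kept.
H = P @ H
H[rich] = 0.5 * X[rich] + 0.5 * H[rich]

# Stage 2: transfer the consolidated representations to the sensor-less
# node by averaging over its localized subgraph (its direct neighbours).
H[3] = P[3] @ H

print(H[3])  # estimated representation for the sensor-less region
```

In a learned model these propagation weights would be parameterized and trained; the fixed neighbourhood averaging here only shows how information can flow from sensor-rich to sensor-less regions in two stages.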
Recommended citation: Pei-Xuan Li, Hsun-Ping Hsieh. Multi-modal Spatio-temporal Forecasting in Sensor-less Regions: A Dual-stage Graph Approach from Disease to Crime. In Proceedings of the 33rd ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL '25).
Download Paper | Download Slides
