What is it about?

Due to confidentiality and privacy concerns, ride-hailing data are sometimes released to researchers with the spatial adjacency information of the zones removed, which hinders the detection of spatio-temporal dependencies. We propose a novel spatio-temporal deep learning architecture that integrates a feature importance layer with a 1D convolutional neural network (1D-CNN) and a zone-distributed independently recurrent neural network (IndRNN) to learn spatio-temporal dependencies from ride-hailing data with anonymised spatial adjacency information while forecasting demand and the supply-demand gap. Experiments with real-world datasets from Didi Chuxing show that the proposed approach outperforms conventional time-series and machine learning models. Additionally, the feature importance layer provides an interpretation of the model by revealing the contribution of each input feature to the prediction.


Why is it important?

Our proposed approach detects spatial dependencies among the anonymised zones in a multivariate manner through the 1D-CNN, and temporal dependencies within each zone independently through the zone-distributed IndRNN. Furthermore, the integrated feature importance layer determines a spatial weighting that adapts the model to accurately forecast both demand and the supply-demand gap, and also indicates the contribution of the corresponding input features to the spatio-temporal forecast.
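To make the pipeline concrete, here is a minimal numpy sketch of the three stages described above: a feature importance (spatial weighting) layer, a 1D convolution across the zone axis, and an independent per-zone recurrence in the spirit of IndRNN. All weights are untrained random stand-ins, and every name and shape (`T`, `Z`, `w_imp`, `conv1d_zones`, etc.) is a hypothetical choice for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: T time steps, Z anonymised zones (one feature per zone).
T, Z = 24, 10
X = rng.standard_normal((T, Z))  # stand-in demand series per zone

# 1) Feature importance layer: elementwise learnable weights on the Z zone
#    inputs; after training these reveal each zone's contribution.
w_imp = rng.random(Z)
X_w = X * w_imp  # broadcasts over the time axis

# 2) 1D-CNN across the zone axis: captures multivariate spatial dependencies
#    among the anonymised zones without an adjacency matrix.
kernel = rng.standard_normal(3)

def conv1d_zones(x, k):
    # 'same'-padded convolution along the zone dimension, per time step
    pad = len(k) // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    return np.stack([np.convolve(row, k, mode="valid") for row in xp])

S = conv1d_zones(X_w, kernel)  # shape (T, Z)

# 3) Zone-distributed IndRNN: each zone keeps its own scalar recurrent
#    weight, so hidden states evolve independently: h_t = relu(w*x_t + u*h_{t-1})
u = rng.uniform(0.0, 1.0, Z)      # per-zone recurrent weights
w_in = rng.standard_normal(Z)     # per-zone input weights
h = np.zeros(Z)
for t in range(T):
    h = np.maximum(0.0, w_in * S[t] + u * h)

# 4) Per-zone linear readout: one forecast value per zone
W_out = rng.standard_normal(Z)
y_hat = W_out * h  # demand (or supply-demand gap) forecast per zone
```

The same forward pass would be trained end to end in practice, so the feature importance weights `w_imp` are fitted jointly with the forecasting layers rather than set randomly as here.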


I hope this article opens the door to applying spatio-temporal deep learning to confidential data for city-level forecasting.

Md. Hishamur Rahman
International University of Business Agriculture and Technology

Read the Original

This page is a summary of: Using spatio‐temporal deep learning for forecasting demand and supply‐demand gap in ride‐hailing system with anonymised spatial adjacency information, IET Intelligent Transport Systems, May 2021, the Institution of Engineering and Technology (the IET), DOI: 10.1049/itr2.12073.