6.4. UCTB.model package¶
6.4.1. UCTB.model.ARIMA module¶
class UCTB.model.ARIMA.ARIMA(time_sequence, order=None, seasonal_order=(0, 0, 0, 0), max_ar=6, max_ma=4, max_d=2)¶
Bases: object
ARIMA is a generalization of an ARMA (Autoregressive Moving Average) model, used in predicting future points in time series analysis.
Since the series data may contain three kinds of history (closeness, period and trend), this class trains a separate ARIMA model for each kind on every node, and returns the average of the models' predicted values during prediction.
Parameters: - time_sequence (array_like) – The observation value of time_series.
- order (iterable) – It stores the (p, d, q) orders of the model: the number of AR parameters, differences, and MA parameters. If set to None, the ARIMA class calculates the orders for each series based on max_ar, max_ma and max_d. Default: None
- seasonal_order (iterable) – It stores the (P,D,Q,s) order of the seasonal ARIMA model for the AR parameters, differences, MA parameters, and periodicity. s is an integer giving the periodicity (number of periods in season).
- max_ar (int) – Maximum number of AR lags to use. Default: 6
- max_ma (int) – Maximum number of MA lags to use. Default: 4
- max_d (int) – Maximum number of degrees of differencing. Default: 2
Attributes: - order (iterable) – (p, d, q) orders of the ARIMA model.
- seasonal_order (iterable) – (P, D, Q, s) order of the seasonal ARIMA model.
- model_res – Result of fitting the likelihood-based model.
static adf_test(time_series, max_lags=None, verbose=True)¶
Augmented Dickey–Fuller test. The Augmented Dickey–Fuller test can be used to test for a unit root in a univariate process in the presence of serial correlation.
get_order(series, order=None, max_ar=6, max_ma=2, max_d=2)¶
If order is not None, it simply returns order; otherwise, it calculates the (p, d, q) orders for the series data based on max_ar, max_ma and max_d.
predict(time_sequences, forecast_step=1)¶
Parameters: - time_sequences – The input time_series features.
- forecast_step – The number of predicted future steps. Default: 1
Returns: Prediction results with shape (len(time_sequences)/forecast_step, forecast_step, 1). Type: np.ndarray
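A minimal usage sketch based only on the signatures documented above, assuming UCTB and statsmodels are installed; the synthetic series and the exact layout passed to predict() are illustrative assumptions, not values prescribed by the documentation:

    import numpy as np
    from UCTB.model.ARIMA import ARIMA

    # A univariate observation sequence, e.g. hourly demand at one station (synthetic here).
    history = np.sin(np.linspace(0, 20, 200)) + np.random.normal(0, 0.1, 200)

    # Optional stationarity check before modelling.
    ARIMA.adf_test(history, verbose=True)

    # Leave order=None so the class searches (p, d, q) itself within the given bounds.
    model = ARIMA(time_sequence=history, order=None, max_ar=6, max_ma=4, max_d=2)

    # Forecast one step ahead; the expected layout of time_sequences follows the
    # predict() docstring above and is an assumption in this sketch.
    predictions = model.predict(time_sequences=history, forecast_step=1)
    print(predictions.shape)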
6.4.2. UCTB.model.DCRNN module¶
class UCTB.model.DCRNN.DCRNN(num_node, num_diffusion_matrix, num_rnn_units=64, num_rnn_layers=1, max_diffusion_step=2, seq_len=6, use_curriculum_learning=False, input_dim=1, output_dim=1, cl_decay_steps=1000, target_len=1, lr=0.0001, epsilon=0.001, optimizer_name='Adam', code_version='DCRNN-QuickStart', model_dir='model_dir', gpu_device='0', **kwargs)¶
Bases: UCTB.model_unit.BaseModel.BaseModel
References
- Diffusion convolutional recurrent neural network: Data-driven traffic forecasting (Li Yaguang, et al., 2017).
- A TensorFlow implementation of Diffusion Convolutional Recurrent Neural Network (liyaguang).
Parameters: - num_node (int) – Number of nodes in the graph, e.g. number of stations in NYC-Bike dataset.
- num_diffusion_matrix – Number of diffusion matrices used in the model.
- num_rnn_units – Number of RNN units.
- num_rnn_layers – Number of RNN layers.
- max_diffusion_step – Maximum number of diffusion steps.
- seq_len – Input sequence length.
- use_curriculum_learning (bool) – Whether the decoder is fed the model’s own prediction (True) or the previous ground truth (False) during training.
- input_dim – Dimension of the input feature
- output_dim – Dimension of the output feature
- cl_decay_steps – When use_curriculum_learning=True, cl_decay_steps controls the ratio of using ground-truth labels; the ratio decays as training proceeds.
- target_len (int) – Output sequence length.
- lr (float) – Learning rate
- epsilon – epsilon in Adam
- optimizer_name (str) – ‘sgd’ or ‘Adam’ optimizer
- code_version (str) – Current version of this model code, which will be used as filename for saving the model
- model_dir (str) – The directory to store model files. Default:’model_dir’.
- gpu_device (str) – To specify the GPU to use. Default: ‘0’.
build(init_vars=True, max_to_keep=5)¶
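A hedged construction sketch using only the constructor arguments and the build() method documented above; the node count and sequence lengths are illustrative assumptions, and training/prediction goes through methods inherited from BaseModel that are not documented in this section:

    from UCTB.model.DCRNN import DCRNN

    model = DCRNN(
        num_node=278,            # e.g. number of stations in a bike-sharing dataset (assumed)
        num_diffusion_matrix=1,  # one diffusion (transition) matrix
        num_rnn_units=64,
        num_rnn_layers=1,
        max_diffusion_step=2,
        seq_len=6,               # input sequence length
        target_len=1,            # output sequence length
        lr=1e-4,
        code_version='DCRNN-QuickStart',
        model_dir='model_dir',
        gpu_device='0',
    )
    model.build(init_vars=True, max_to_keep=5)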
6.4.3. UCTB.model.DeepST module¶
class UCTB.model.DeepST.DeepST(closeness_len, period_len, trend_len, width, height, external_dim, kernel_size=3, num_conv_filters=64, lr=1e-05, code_version='QuickStart-DeepST', model_dir='model_dir', gpu_device='0')¶
Bases: UCTB.model_unit.BaseModel.BaseModel
Deep learning-based prediction model for Spatial-Temporal data (DeepST)
DeepST is composed of three components: 1) temporal dependent instances: describing temporal closeness, period and seasonal trend; 2) convolutional neural networks: capturing near and far spatial dependencies; 3) early and late fusions: fusing similar and different domains’ data.
Parameters: - closeness_len (int) – The length of closeness data history. The former consecutive closeness_len time slots of data will be used as closeness history.
- period_len (int) – The length of period data history. The data of exact same time slots in former consecutive period_len days will be used as period history.
- trend_len (int) – The length of trend data history. The data of exact same time slots in former consecutive trend_len weeks (every seven days) will be used as trend history.
- width (int) – The width of grid data.
- height (int) – The height of grid data.
- external_dim (int) – Number of dimensions of external data.
- kernel_size (int) – Kernel size in Convolutional neural networks. Default: 3
- num_conv_filters (int) – The number of filters in the convolution. Default: 64
- lr (float) – Learning rate. Default: 1e-5
- code_version (str) – Current version of this model code.
- model_dir (str) – The directory to store model files. Default:’model_dir’
- gpu_device (str) – To specify the GPU to use. Default: ‘0’
build()¶
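A hedged sketch of constructing DeepST from the signature above; the grid size, history lengths and external feature dimension are illustrative assumptions:

    from UCTB.model.DeepST import DeepST

    model = DeepST(
        closeness_len=6,       # last 6 time slots
        period_len=7,          # same slot in the last 7 days
        trend_len=4,           # same slot in the last 4 weeks
        width=16, height=16,   # grid shape of the study area (assumed)
        external_dim=5,        # e.g. weather / holiday features (assumed)
        kernel_size=3,
        num_conv_filters=64,
        lr=1e-5,
    )
    model.build()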
6.4.4. UCTB.model.GeoMAN module¶
class UCTB.model.GeoMAN.GeoMAN(total_sensers, input_dim, external_dim, output_dim, input_steps, output_steps, n_stacked_layers=2, n_encoder_hidden_units=128, n_decoder_hidden_units=128, dropout_rate=0.3, lr=0.001, gc_rate=2.5, code_version='GeoMAN-QuickStart', model_dir='model_dir', gpu_device='0', **kwargs)¶
Bases: UCTB.model_unit.BaseModel.BaseModel
Multi-level Attention Networks for Geo-sensory Time Series Prediction (GeoMAN)
GeoMAN consists of two major parts: 1) A multi-level attention mechanism (including both local and global spatial attentions in encoder and temporal attention in decoder) to model the dynamic spatio-temporal dependencies; 2) A general fusion module to incorporate the external factors from different domains (e.g., meteorology, time of day and land use).
References
- GeoMAN: Multi-level Attention Networks for Geo-sensory Time Series Prediction (Liang Yuxuan, et al., 2018).
- An easy implementation of GeoMAN using TensorFlow (yoshall & CastleLiang).
Parameters: - total_sensers (int) – The total number of sensors used in the global attention mechanism.
- input_dim (int) – The number of dimensions of the target sensor’s input.
- external_dim (int) – The number of dimensions of the external features.
- output_dim (int) – The number of dimensions of the target sensor’s output.
- input_steps (int) – The length of historical input data, a.k.a, input timesteps.
- output_steps (int) – The number of steps to be predicted from one piece of history data, a.k.a. output timesteps. Currently must be 1.
- n_stacked_layers (int) – The number of LSTM layers stacked in both encoder and decoder (These two are the same). Default: 2
- n_encoder_hidden_units (int) – The number of hidden units in each layer of encoder. Default: 128
- n_decoder_hidden_units (int) – The number of hidden units in each layer of decoder. Default: 128
- dropout_rate (float) – Dropout rate of LSTM layers in both encoder and decoder. Default: 0.3
- lr (float) – Learning rate. Default: 0.001
- gc_rate (float) – A clipping ratio for all the gradients. This operation normalizes all gradients so that their L2-norms are less than or equal to gc_rate. Default: 2.5
- code_version (str) – Current version of this model code. Default: ‘GeoMAN-QuickStart’
- model_dir (str) – The directory to store model files. Default:’model_dir’
- gpu_device (str) – To specify the GPU to use. Default: ‘0’
- **kwargs (dict) – Reserved for future use. May be used to pass parameters to class BaseModel.
build(init_vars=True, max_to_keep=5)¶
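A hedged construction sketch based on the documented signature; the sensor count and step lengths are assumptions. The helper functions documented below (input_transform, split_timesteps) reshape batches into the per-timestep lists the model consumes:

    from UCTB.model.GeoMAN import GeoMAN

    model = GeoMAN(
        total_sensers=207,        # sensors participating in global attention (assumed)
        input_dim=1,
        external_dim=5,           # assumed external feature dimension
        output_dim=1,
        input_steps=12,
        output_steps=1,           # currently must be 1 (see above)
        n_stacked_layers=2,
        n_encoder_hidden_units=128,
        n_decoder_hidden_units=128,
        dropout_rate=0.3,
        lr=0.001,
        gc_rate=2.5,
    )
    model.build(init_vars=True, max_to_keep=5)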
UCTB.model.GeoMAN.input_transform(local_features, global_features, external_features, targets)¶
Split the model’s inputs from matrices to lists on the timesteps axis.
UCTB.model.GeoMAN.split_timesteps(inputs)¶
Split the input matrix from (batch, timesteps, input_dim) to a step list ([[batch, input_dim], …, ]).
6.4.5. UCTB.model.HM module¶
6.4.6. UCTB.model.STMeta module¶
class UCTB.model.STMeta.STMeta(num_node, external_dim, closeness_len, period_len, trend_len, input_dim=1, num_graph=1, gcn_k=1, gcn_layers=1, gclstm_layers=1, num_hidden_units=64, num_dense_units=32, graph_merge_gal_units=32, graph_merge_gal_num_heads=2, temporal_merge_gal_units=64, temporal_merge_gal_num_heads=2, st_method='GCLSTM', temporal_merge='gal', graph_merge='gal', output_activation=<function sigmoid>, lr=0.0001, code_version='STMeta-QuickStart', model_dir='model_dir', gpu_device='0', **kwargs)¶
Bases: UCTB.model_unit.BaseModel.BaseModel
Parameters: - num_node (int) – Number of nodes in the graph, e.g. number of stations in NYC-Bike dataset.
- external_dim (int) – Dimension of the external feature, e.g. temperature and wind are two dimensions.
- closeness_len (int) – The length of closeness data history. The former consecutive closeness_len time slots of data will be used as closeness history.
- period_len (int) – The length of period data history. The data of exact same time slots in former consecutive period_len days will be used as period history.
- trend_len (int) – The length of trend data history. The data of exact same time slots in former consecutive trend_len weeks (every seven days) will be used as trend history.
- input_dim (int) – The dimension of input features. 1 if “with_tpe” (data_loader parameters) = False, otherwise 0.
- num_graph (int) – Number of graphs used in STMeta.
- gcn_k (int) – The highest order of Chebyshev Polynomial approximation in GCN.
- gcn_layers (int) – Number of GCN layers.
- gclstm_layers (int) – Number of STRNN layers; it applies to all st_method modes of STMeta such as GCLSTM and DCRNN.
- num_hidden_units (int) – Number of hidden units of RNN.
- num_dense_units (int) – Number of dense units.
- graph_merge_gal_units (int) – Number of units in GAL for merging different graph features. Only works when graph_merge=’gal’
- graph_merge_gal_num_heads (int) – Number of heads in GAL for merging different graph features. Only works when graph_merge=’gal’
- temporal_merge_gal_units (int) – Number of units in GAL for merging different temporal features. Only works when temporal_merge=’gal’
- temporal_merge_gal_num_heads (int) – Number of heads in GAL for merging different temporal features. Only works when temporal_merge=’gal’
- st_method (str) – Must be one of [‘GCLSTM’, ‘DCRNN’, ‘GRU’, ‘LSTM’], referring to different spatial-temporal modeling methods. ‘GCLSTM’: GCN for modeling the spatial feature, LSTM for modeling the temporal feature. ‘DCRNN’: diffusion convolution for modeling the spatial feature, GRU for modeling the temporal feature. ‘GRU’: ignore the spatial feature and model the temporal feature using GRU. ‘LSTM’: ignore the spatial feature and model the temporal feature using LSTM.
- temporal_merge (str) – Must be one of [‘gal’, ‘concat’], referring to different temporal merging methods. ‘gal’: merge using GAL; ‘concat’: merge by concatenation and a dense layer.
- graph_merge (str) – Must be one of [‘gal’, ‘concat’], referring to different graph merging methods. ‘gal’: merge using GAL; ‘concat’: merge by concatenation and a dense layer.
- output_activation (function) – activation function, e.g. tf.nn.tanh
- lr (float) – Learning rate. Default: 1e-4
- code_version (str) – Current version of this model code, which will be used as filename for saving the model
- model_dir (str) – The directory to store model files. Default:’model_dir’.
- gpu_device (str) – To specify the GPU to use. Default: ‘0’.
build(init_vars=True, max_to_keep=5)¶
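A hedged sketch using the documented constructor defaults; the node count, graph count and history lengths below are illustrative assumptions, and training/prediction relies on BaseModel methods not documented here:

    from UCTB.model.STMeta import STMeta

    model = STMeta(
        num_node=278,            # assumed number of stations
        external_dim=5,          # assumed external feature dimension
        closeness_len=6,
        period_len=7,
        trend_len=4,
        num_graph=2,             # e.g. a distance graph and a correlation graph (assumed)
        gcn_k=1, gcn_layers=1, gclstm_layers=1,
        st_method='GCLSTM',
        temporal_merge='gal',
        graph_merge='gal',
        lr=1e-4,
    )
    model.build(init_vars=True, max_to_keep=5)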
6.4.7. UCTB.model.ST_MGCN module¶
class UCTB.model.ST_MGCN.ST_MGCN(T, input_dim, num_graph, gcl_k, gcl_l, lstm_units, lstm_layers, lr, external_dim, code_version, model_dir, gpu_device)¶
Bases: UCTB.model_unit.BaseModel.BaseModel
References
- Spatiotemporal multi-graph convolution network for ride-hailing demand forecasting (Geng Xu, et al., 2019).
- A PyTorch implementation of the ST-MGCN model (shawnwang-tech).
Parameters: - T (int) – Input sequence length
- input_dim (int) – Input feature dimension
- num_graph (int) – Number of graphs used in the model.
- gcl_k (int) – The highest order of Chebyshev Polynomial approximation in GCN.
- gcl_l (int) – Number of GCN layers.
- lstm_units (int) – Number of hidden units of RNN.
- lstm_layers (int) – Number of LSTM layers.
- lr (float) – Learning rate
- external_dim (int) – Dimension of the external feature, e.g. temperature and wind are two dimensions.
- code_version (str) – Current version of this model code, which will be used as filename for saving the model
- model_dir (str) – The directory to store model files. Default:’model_dir’.
- gpu_device (str) – To specify the GPU to use. Default: ‘0’.
build(init_vars=True, max_to_keep=5)¶
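A hedged construction sketch; every value below is an illustrative assumption, since the constructor documented above defines no defaults:

    from UCTB.model.ST_MGCN import ST_MGCN

    model = ST_MGCN(
        T=6,                  # input sequence length
        input_dim=1,
        num_graph=3,          # e.g. neighborhood, functional-similarity, connectivity graphs (assumed)
        gcl_k=1,              # order of Chebyshev approximation
        gcl_l=1,              # number of GCN layers
        lstm_units=64,
        lstm_layers=1,
        lr=1e-4,
        external_dim=5,       # assumed external feature dimension
        code_version='ST_MGCN-QuickStart',
        model_dir='model_dir',
        gpu_device='0',
    )
    model.build(init_vars=True, max_to_keep=5)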
6.4.8. UCTB.model.ST_ResNet module¶
class UCTB.model.ST_ResNet.ST_ResNet(width, height, external_dim, closeness_len, period_len, trend_len, num_residual_unit=4, kernel_size=3, lr=5e-05, model_dir='model_dir', code_version='QuickStart', conv_filters=64, gpu_device='0')¶
Bases: UCTB.model_unit.BaseModel.BaseModel
ST-ResNet is an end-to-end deep learning model that exploits the unique properties of spatio-temporal data by means of convolution and residual units.
References
- Deep Spatio-Temporal Residual Networks for Citywide Crowd Flows Prediction (Junbo Zhang et al., 2016).
- Github repository (lucktroy).
Parameters: - width (int) – The width of grid data.
- height (int) – The height of grid data.
- external_dim (int) – Number of dimensions of external data.
- closeness_len (int) – The length of closeness data history. The former consecutive closeness_len time slots of data will be used as closeness history.
- period_len (int) – The length of period data history. The data of exact same time slots in former consecutive period_len days will be used as period history.
- trend_len (int) – The length of trend data history. The data of exact same time slots in former consecutive trend_len weeks (every seven days) will be used as trend history.
- num_residual_unit (int) – Number of residual units. Default: 4
- kernel_size (int) – Kernel size in Convolutional neural networks. Default: 3
- lr (float) – Learning rate. Default: 5e-5
- code_version (str) – Current version of this model code.
- model_dir (str) – The directory to store model files. Default:’model_dir’
- conv_filters (int) – The number of filters in the convolution. Default: 64
- gpu_device (str) – To specify the GPU to use. Default: ‘0’
build()¶
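A hedged sketch of building ST_ResNet with the documented defaults; the grid size, history lengths and external feature dimension are assumptions:

    from UCTB.model.ST_ResNet import ST_ResNet

    model = ST_ResNet(
        width=16, height=16,     # assumed grid shape
        external_dim=5,          # assumed external feature dimension
        closeness_len=6, period_len=7, trend_len=4,
        num_residual_unit=4,
        kernel_size=3,
        conv_filters=64,
        lr=5e-5,
    )
    model.build()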
6.4.9. UCTB.model.XGBoost module¶
class UCTB.model.XGBoost.XGBoost(n_estimators=10, max_depth=5, verbosity=0, objective='reg:squarederror', eval_metric='rmse')¶
Bases: object
XGBoost is an optimized distributed gradient boosting machine learning algorithm.
Parameters: - n_estimators (int) – Number of boosting iterations. Default: 10
- max_depth (int) – Maximum tree depth for base learners. Default: 5
- verbosity (int) – The degree of verbosity. Valid values are 0 (silent) to 3 (debug). Default: 0
- objective (string or callable) – Specify the learning task and the corresponding learning objective, or a custom objective function to be used. Default: 'reg:squarederror'
- eval_metric (str, list of str, or callable, optional) – If a str, should be a built-in evaluation metric to use. See more in the API Reference of the XGBoost library. Default: 'rmse'
fit(X, y)¶ Training method.
Parameters: - X (np.ndarray/scipy.sparse/pd.DataFrame/dt.Frame) – The training input samples.
- y (np.ndarray, optional) – The target values of training samples.
predict(X)¶ Prediction method.
Returns: Predicted values with shape [time_slot_num, node_num, 1]. Return type: np.ndarray
6.4.10. UCTB.model.AGCRN module¶
class UCTB.model.AGCRN.AGCRN(num_node, input_dim, hidden_dim, output_dim, pred_step, num_layers, default_graph, embed_dim, cheb_k)¶
Bases: torch.nn.modules.module.Module
References
- Adaptive graph convolutional recurrent network for traffic forecasting (Bai Lei, et al., 2020).
- A PyTorch implementation of the AGCRN model (LeiBAI).
Parameters: - num_node (int) – Number of nodes.
- input_dim (int) – Input feature dimension.
- hidden_dim (int) – Number of hidden units of RNN.
- output_dim (int) – Number of output dimensions.
- pred_step (int) – Number of steps to predict.
- num_layers (int) – Number of layers of AGCRNCell.
- default_graph (bool) – Whether to use default graph or not.
- embed_dim (int) – Number of embedding dimensions.
- cheb_k (int) – Order of the Chebyshev polynomial.
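A hedged instantiation sketch of the torch.nn.Module above; all values are illustrative assumptions, and the forward-pass input layout is not documented in this section, so only construction and inspection are shown:

    from UCTB.model.AGCRN import AGCRN

    model = AGCRN(
        num_node=307,        # assumed node count
        input_dim=1,
        hidden_dim=64,
        output_dim=1,
        pred_step=12,
        num_layers=2,
        default_graph=True,
        embed_dim=10,
        cheb_k=2,
    )
    print(sum(p.numel() for p in model.parameters()))  # inspect parameter count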
6.4.11. UCTB.model.ASTGCN module¶
class UCTB.model.ASTGCN.ASTGCN_submodule(DEVICE, num_blocks, in_channels, K, num_chev_filter, num_time_filter, time_strides, cheb_polynomials, pred_step, len_input, num_node)¶
Bases: torch.nn.modules.module.Module
References
- Attention based spatial-temporal graph convolutional networks for traffic flow forecasting (Guo Shengnan, et al., 2019).
- A PyTorch implementation of the ASTGCN model (guoshnBJTU).
Parameters: - DEVICE (torch.device) – Which device to use for training.
- num_blocks (int) – Number of blocks.
- in_channels (int) – Number of input channels.
- K (int) – Order of the Chebyshev polynomial.
- num_chev_filter (int) – Number of Chebyshev filters.
- num_time_filter (int) – Number of time filters.
- time_strides (int) – Number of time strides.
- cheb_polynomials (list) – Chebyshev polynomials.
- pred_step (int) – Number of steps of prediction.
- len_input (int) – Number of steps of sequence input.
- num_node (int) – Number of nodes.
forward(x)¶
Parameters: x – (B, N_nodes, F_in, T_in)
Returns: (B, N_nodes, T_out)
class UCTB.model.ASTGCN.Spatial_Attention_layer(DEVICE, in_channels, num_of_vertices, num_of_timesteps)¶
Bases: torch.nn.modules.module.Module
Compute spatial attention scores.
forward(x)¶
Parameters: x – (batch_size, N, F_in, T)
Returns: (B, N, N)
class UCTB.model.ASTGCN.cheb_conv(K, cheb_polynomials, in_channels, out_channels)¶
Bases: torch.nn.modules.module.Module
K-order Chebyshev graph convolution.
forward(x)¶
Chebyshev graph convolution operation.
Parameters: x – (batch_size, N, F_in, T)
Returns: (batch_size, N, F_out, T)
class UCTB.model.ASTGCN.cheb_conv_withSAt(K, cheb_polynomials, in_channels, out_channels)¶
Bases: torch.nn.modules.module.Module
K-order Chebyshev graph convolution.
forward(x, spatial_attention)¶
Chebyshev graph convolution operation.
Parameters: x – (batch_size, N, F_in, T)
Returns: (batch_size, N, F_out, T)
UCTB.model.ASTGCN.cheb_polynomial(L_tilde, K)¶
Compute a list of Chebyshev polynomials from T_0 to T_{K-1}.
Parameters: - L_tilde (np.ndarray) – Scaled Laplacian, shape (N, N).
- K (int) – The maximum order of Chebyshev polynomials.
Returns: cheb_polynomials
Return type: list(np.ndarray), length K, from T_0 to T_{K-1}
UCTB.model.ASTGCN.make_model(DEVICE, nb_block, in_channels, K, nb_chev_filter, nb_time_filter, time_strides, L_tilde, num_for_predict, len_input, num_of_vertices)¶
Parameters: - DEVICE –
- nb_block –
- in_channels –
- K –
- nb_chev_filter –
- nb_time_filter –
- time_strides –
- L_tilde –
- num_for_predict –
- len_input –
- num_of_vertices –
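A hedged sketch of calling make_model with the arguments named in its signature above; the random stand-in for the scaled Laplacian, the hyperparameter values and the forward call are illustrative assumptions (the input/output shapes follow the forward() entry of ASTGCN_submodule):

    import numpy as np
    import torch
    from UCTB.model.ASTGCN import make_model

    N = 50                                              # assumed number of vertices
    L_tilde = np.random.rand(N, N).astype(np.float32)   # stand-in for a scaled Laplacian
    L_tilde = (L_tilde + L_tilde.T) / 2                 # keep it symmetric

    net = make_model(
        DEVICE=torch.device('cpu'),
        nb_block=2, in_channels=1, K=3,
        nb_chev_filter=64, nb_time_filter=64, time_strides=1,
        L_tilde=L_tilde,
        num_for_predict=12, len_input=12,
        num_of_vertices=N,
    )
    x = torch.rand(8, N, 1, 12)   # (B, N_nodes, F_in, T_in) per forward() above
    y = net(x)                    # expected (B, N_nodes, T_out) per forward() above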
6.4.12. UCTB.model.GMAN module¶
UCTB.model.GMAN.GMAN(X, TE, SE, P, Q, T, L, K, d, bn, bn_decay, is_training)¶
References: - GMAN: A graph multi-attention network for traffic prediction (Zheng Chuanpan, et al., 2020).
Parameters:
UCTB.model.GMAN.STEmbedding(SE, TE, T, D, bn, bn_decay, is_training)¶
Spatio-temporal embedding.
SE: [N, D]; TE: [batch_size, P + Q, 2] (dayofweek, timeofday); T: num of time steps in one day; D: output dims. Return: [batch_size, P + Q, N, D]
UCTB.model.GMAN.alias_draw(J, q)¶
Draw a sample from a non-uniform discrete distribution using alias sampling.
UCTB.model.GMAN.alias_setup(probs)¶
Compute utility lists for non-uniform sampling from discrete distributions. Refer to https://hips.seas.harvard.edu/blog/2013/03/03/the-alias-method-efficient-sampling-with-many-discrete-outcomes/ for details.
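A hedged sketch of the alias-sampling helpers documented above: alias_setup precomputes the utility tables once, and alias_draw then samples in O(1) per draw. That alias_setup returns the (J, q) pair consumed by alias_draw is an assumption inferred from the parameter names in the signatures:

    import numpy as np
    from UCTB.model.GMAN import alias_setup, alias_draw

    probs = np.array([0.5, 0.3, 0.15, 0.05])     # any discrete distribution
    J, q = alias_setup(probs)                    # assumed return order (J, q)

    samples = [alias_draw(J, q) for _ in range(10000)]
    print(np.bincount(samples) / len(samples))   # should roughly match probs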
UCTB.model.GMAN.gatedFusion(HS, HT, D, bn, bn_decay, is_training)¶
Gated fusion.
HS: [batch_size, num_step, N, D]; HT: [batch_size, num_step, N, D]; D: output dims. Return: [batch_size, num_step, N, D]
UCTB.model.GMAN.spatialAttention(X, STE, K, d, bn, bn_decay, is_training)¶
Spatial attention mechanism.
X: [batch_size, num_step, N, D]; STE: [batch_size, num_step, N, D]; K: number of attention heads; d: dimension of each attention output. Return: [batch_size, num_step, N, D]
UCTB.model.GMAN.temporalAttention(X, STE, K, d, bn, bn_decay, is_training, mask=True)¶
Temporal attention mechanism.
X: [batch_size, num_step, N, D]; STE: [batch_size, num_step, N, D]; K: number of attention heads; d: dimension of each attention output. Return: [batch_size, num_step, N, D]
UCTB.model.GMAN.transformAttention(X, STE_P, STE_Q, K, d, bn, bn_decay, is_training)¶
Transform attention mechanism.
X: [batch_size, P, N, D]; STE_P: [batch_size, P, N, D]; STE_Q: [batch_size, Q, N, D]; K: number of attention heads; d: dimension of each attention output. Return: [batch_size, Q, N, D]
6.4.13. UCTB.model.GraphWaveNet module¶
class UCTB.model.GraphWaveNet.gwnet(device, num_node, dropout=0.3, supports=None, gcn_bool=True, addaptadj=True, aptinit=None, in_dim=2, out_dim=12, residual_channels=32, dilation_channels=32, skip_channels=256, end_channels=512, kernel_size=2, blocks=4, layers=2)¶
Bases: torch.nn.modules.module.Module
References
- Graph WaveNet for deep spatial-temporal graph modeling (Wu Zonghan, et al., 2019).
- A PyTorch implementation of the GraphWaveNet model (nnzhan).
Parameters: - device (torch.device) – Which device to use for training.
- num_node (int) – Number of nodes.
- dropout (float) – Dropout rate.
- supports (list) – List of support (adjacency) matrices used in graph convolution.
- gcn_bool (bool) – Whether to use graph convolution.
- addaptadj (bool) – Whether to add an adaptive adjacency matrix.
- aptinit (torch.tensor) – Initialization of the adaptive adjacency matrix.
- in_dim (int) – Number of input’s dimension.
- out_dim (int) – Number of output’s dimension.
- residual_channels (int) – Number of channels after residual module.
- dilation_channels (int) – Number of channels after dilation module.
- skip_channels (int) – Number of skip channels.
- end_channels (int) – Number of end channels.
- kernel_size (int) – Kernel Size for dilation convolution.
- blocks (int) – Number of blocks.
- layers (int) – Number of layers.
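A hedged instantiation sketch of the gwnet module above; the node count and the choice to rely solely on the adaptive adjacency matrix (supports=None with addaptadj=True) are illustrative assumptions:

    import torch
    from UCTB.model.GraphWaveNet import gwnet

    model = gwnet(
        device=torch.device('cpu'),
        num_node=207,        # assumed node count
        dropout=0.3,
        supports=None,       # no precomputed adjacency; learn an adaptive one
        gcn_bool=True,
        addaptadj=True,
        aptinit=None,
        in_dim=2,
        out_dim=12,
    )
    print(sum(p.numel() for p in model.parameters()))  # inspect parameter count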
6.4.14. UCTB.model.STGCN module¶
UCTB.model.STGCN.build_model(inputs, n_his, Ks, Kt, blocks, keep_prob)¶
References
- Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting (Yu Bing, et al., 2018).
- A TensorFlow implementation of the STGCN model (VeritasYin).
Parameters:
UCTB.model.STGCN.cheb_poly_approx(L, Ks, n)¶
Chebyshev polynomials approximation function.
Parameters: - L – np.matrix, [n_route, n_route], graph Laplacian.
- Ks – int, kernel size of spatial convolution.
- n – int, number of routes / size of graph.
Returns: np.ndarray, [n_route, Ks*n_route].
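An illustrative numpy sketch of what the Chebyshev approximation of a graph kernel looks like, stacking T_0..T_{Ks-1} of the scaled Laplacian side by side; it mirrors the documented output shape [n_route, Ks*n_route] but is not UCTB's exact implementation:

    import numpy as np

    def cheb_poly_stack(L, Ks, n):
        # T_0 = I, T_1 = L, T_k = 2 L T_{k-1} - T_{k-2} (Chebyshev recurrence)
        terms = [np.eye(n), np.asarray(L)]
        for _ in range(2, Ks):
            terms.append(2 * L @ terms[-1] - terms[-2])
        return np.concatenate(terms[:Ks], axis=1)   # shape (n, Ks*n)

    L_scaled = np.random.rand(4, 4)
    L_scaled = (L_scaled + L_scaled.T) / 2           # stand-in scaled Laplacian
    print(cheb_poly_stack(L_scaled, Ks=3, n=4).shape)   # (4, 12)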
UCTB.model.STGCN.fully_con_layer(x, n, channel, scope)¶
Fully connected layer: maps multi-channels to one.
Parameters: - x – tensor, [batch_size, 1, n_route, channel].
- n – int, number of route / size of graph.
- channel – channel size of input x.
- scope – str, variable scope.
Returns: tensor, [batch_size, 1, n_route, 1].
UCTB.model.STGCN.gconv(x, theta, Ks, c_in, c_out)¶
Spectral-based graph convolution function.
Parameters: - x – tensor, [batch_size, n_route, c_in].
- theta – tensor, [Ks*c_in, c_out], trainable kernel parameters.
- Ks – int, kernel size of graph convolution.
- c_in – int, size of input channel.
- c_out – int, size of output channel.
Returns: tensor, [batch_size, n_route, c_out].
UCTB.model.STGCN.gen_batch(inputs, batch_size, dynamic_batch=False, shuffle=False)¶
Data iterator in batch.
Parameters: - inputs – np.ndarray, [len_seq, n_frame, n_route, C_0], standard sequence units.
- batch_size – int, the size of batch.
- dynamic_batch – bool, whether to change the batch size of the last batch if its length is less than the default.
- shuffle – bool, whether to shuffle the batches.
UCTB.model.STGCN.layer_norm(x, scope)¶
Layer normalization function.
Parameters: - x – tensor, [batch_size, time_step, n_route, channel].
- scope – str, variable scope.
Returns: tensor, [batch_size, time_step, n_route, channel].
UCTB.model.STGCN.output_layer(x, T, scope, act_func='GLU')¶
Output layer: temporal convolution layers attached with one fully connected layer, which maps the outputs of the last st_conv block to a single-step prediction.
Parameters: - x – tensor, [batch_size, time_step, n_route, channel].
- T – int, kernel size of temporal convolution.
- scope – str, variable scope.
- act_func – str, activation function.
Returns: tensor, [batch_size, 1, n_route, 1].
UCTB.model.STGCN.spatio_conv_layer(x, Ks, c_in, c_out)¶
Spatial graph convolution layer.
Parameters: - x – tensor, [batch_size, time_step, n_route, c_in].
- Ks – int, kernel size of spatial convolution.
- c_in – int, size of input channel.
- c_out – int, size of output channel.
Returns: tensor, [batch_size, time_step, n_route, c_out].
UCTB.model.STGCN.st_conv_block(x, Ks, Kt, channels, scope, keep_prob, act_func='GLU')¶
Spatio-temporal convolutional block, which contains two temporal gated convolution layers and one spatial graph convolution layer in the middle.
Parameters: - x – tensor, [batch_size, time_step, n_route, c_in].
- Ks – int, kernel size of spatial convolution.
- Kt – int, kernel size of temporal convolution.
- channels – list, channel configs of a single st_conv block.
- scope – str, variable scope.
- keep_prob – placeholder, keep probability for dropout.
- act_func – str, activation function.
Returns: tensor, [batch_size, time_step, n_route, c_out].
UCTB.model.STGCN.temporal_conv_layer(x, Kt, c_in, c_out, act_func='relu')¶
Temporal convolution layer.
Parameters: - x – tensor, [batch_size, time_step, n_route, c_in].
- Kt – int, kernel size of temporal convolution.
- c_in – int, size of input channel.
- c_out – int, size of output channel.
- act_func – str, activation function.
Returns: tensor, [batch_size, time_step-Kt+1, n_route, c_out].
UCTB.model.STGCN.variable_summaries(var, v_name)¶
Attach summaries to a Tensor (for TensorBoard visualization). Ref: https://zhuanlan.zhihu.com/p/33178205
Parameters: - var – tf.Variable().
- v_name – str, name of the variable.
6.4.15. UCTB.model.STSGCN module¶
UCTB.model.STSGCN.construct_adj(A, steps)¶
Construct a bigger adjacency matrix from the given matrix.
Parameters: - A (np.ndarray) – Adjacency matrix, shape (N, N).
- steps (int) – How many times bigger the new adjacency matrix is than A.
Returns: new adjacency matrix
Return type: csr_matrix, shape is (N * steps, N * steps)
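An illustrative sketch of the "bigger adjacency" idea described above: the new matrix places A on each of the steps diagonal blocks and connects the same vertex across consecutive time steps. It mirrors the documented output shape (N * steps, N * steps) but is not necessarily UCTB's exact construction:

    import numpy as np

    def construct_adj_sketch(A, steps):
        N = A.shape[0]
        adj = np.zeros((N * steps, N * steps))
        for s in range(steps):
            adj[s * N:(s + 1) * N, s * N:(s + 1) * N] = A     # spatial edges inside each step
        idx = np.arange(N)
        for s in range(steps - 1):
            adj[s * N + idx, (s + 1) * N + idx] = 1           # same vertex, consecutive steps
            adj[(s + 1) * N + idx, s * N + idx] = 1
        np.fill_diagonal(adj, 1)                              # self-loops
        return adj

    A = np.eye(3)
    print(construct_adj_sketch(A, steps=3).shape)             # (9, 9)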
UCTB.model.STSGCN.gcn_operation(data, adj, num_of_filter, num_of_features, num_of_vertices, activation, prefix='')¶
Graph convolutional operation, a simple GCN as defined in the paper.
Returns: output with shape (3N, B, C’)
UCTB.model.STSGCN.get_adjacency_matrix(distance_df_filename, num_of_vertices, type_='connectivity', id_filename=None)¶
Returns: A
Return type: np.ndarray, adjacency matrix
UCTB.model.STSGCN.output_layer(data, num_of_vertices, input_length, num_of_features, num_of_filters=128, predict_length=12)¶
Returns: output with shape (B, T’, N)
UCTB.model.STSGCN.position_embedding(data, input_length, num_of_vertices, embedding_size, temporal=True, spatial=True, init=<mxnet.initializer.Xavier object>, prefix='')¶
Returns: data, with shape (B, T, N, C)
UCTB.model.STSGCN.sthgcn_layer_individual(data, adj, T, num_of_vertices, num_of_features, filters, activation, temporal_emb=True, spatial_emb=True, prefix='')¶
STSGCL, multiple individual STSGCMs.
Returns: output with shape (B, T-2, N, C’)
UCTB.model.STSGCN.sthgcn_layer_sharing(data, adj, T, num_of_vertices, num_of_features, filters, activation, temporal_emb=True, spatial_emb=True, prefix='')¶
STSGCL, sharing a single STSGCM across time steps.
Returns: output with shape (B, T-2, N, C’)
UCTB.model.STSGCN.stsgcl(data, adj, T, num_of_vertices, num_of_features, filters, module_type, activation, temporal_emb=True, spatial_emb=True, prefix='')¶
STSGCL
Parameters: - data (mx.sym.var, shape is (B, T, N, C)) –
- adj (mx.sym.var, shape is (3N, 3N)) –
- T (int, length of time series, T) –
- num_of_vertices (int, N) –
- num_of_features (int, C) –
- filters (list[int], list of C') –
- module_type (str, {'sharing', 'individual'}) –
- activation (str, {'GLU', 'relu'}) –
- temporal_emb (bool) – Whether to use temporal embedding.
- spatial_emb (bool) – Whether to use spatial embedding.
- prefix (str) –
Returns: output with shape (B, T-2, N, C’)
UCTB.model.STSGCN.stsgcm(data, adj, filters, num_of_features, num_of_vertices, activation, prefix='')¶
STSGCM, multiple stacked GCN layers with cropping and max operation.
Returns: output with shape (N, B, C’)
UCTB.model.STSGCN.stsgcn(data, adj, label, input_length, num_of_vertices, num_of_features, filter_list, module_type, activation, use_mask=True, mask_init_value=None, temporal_emb=True, spatial_emb=True, prefix='', rho=1, predict_length=12)¶
References
- Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting (Song Chao, et al., 2020).
- An MXNet implementation of the STSGCN model (Davidham3).
Parameters: - data (mxnet.sym) – Input data.
- adj (mxnet.sym) – Adjacency matrix.
- label (mxnet.sym) – Prediction label.
- input_length (int) – Length of input data.
- num_of_vertices (int) – Number of vertices in the graph.
- num_of_features (int) – Number of features of each vertex.
- filter_list (list) – Filters.
- module_type (str) – Whether to share weights; one of ‘sharing’ or ‘individual’.
- activation (str) – Which activation function to use.
- use_mask (bool) – Whether to use a mask.
- mask_init_value (int) – Initial value of mask.
- temporal_emb (bool) – Whether to use temporal embedding.
- spatial_emb (bool) – Whether to use spatial embedding.
- prefix (str) – String prefix of mask.
- rho (float) – Hyperparameter used to calculate the Huber loss.
- predict_length (int) – Length of prediction.