# TGN for Dynamic Graphs
Multiple choices can be considered for implementing the Message Aggregator module. The original paper considered only two efficient, non-learnable solutions: *most recent message* (keep only the most recent message for a given node) and *mean message* (average all messages for a given node). Our tgn-aa adds a learnable attention-based aggregator that attends over the messages from multiple events for nodes in the same batch; a minimal sketch follows the file list below. The changes touch the following files:
```
modules/
    message_aggregator.py   # add AttentionMessageAggregator
train_self_supervised.py    # add the `attention` option to the --aggregator argument
train_supervised.py         # add the `attention` option to the --aggregator argument
```
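The sketch below shows one plausible shape for such an aggregator. It assumes PyTorch and the `aggregate(node_ids, messages)` interface of the upstream TGN `MessageAggregator` classes, where `messages[node_id]` is a list of `(message, timestamp)` pairs; the dot-product query scoring and the CLS-token handling are illustrative assumptions, not the exact tgn-aa code.

```python
# Illustrative sketch only: the interface follows the upstream TGN
# MessageAggregator classes; scoring scheme and CLS handling are assumptions.
import torch
import torch.nn as nn


class AttentionMessageAggregator(nn.Module):
    def __init__(self, device, message_dim, learnable=True, add_cls_token=False):
        super().__init__()
        self.device = device
        self.learnable = learnable
        # Learnable query vector that scores each stored message of a node.
        self.query = nn.Parameter(torch.randn(message_dim))
        # Optional CLS-style token prepended to every node's message list.
        self.cls_token = nn.Parameter(torch.randn(message_dim)) if add_cls_token else None

    def aggregate(self, node_ids, messages):
        """Aggregate all pending messages of each node into a single message."""
        unique_ids = [n for n in set(node_ids) if len(messages[n]) > 0]
        agg_messages, agg_timestamps = [], []
        for node_id in unique_ids:
            msgs = torch.stack([m for m, _ in messages[node_id]])  # [k, d]
            if self.cls_token is not None:
                msgs = torch.cat([self.cls_token.unsqueeze(0), msgs], dim=0)
            if self.learnable:
                # Attention weights from a dot product with the query vector.
                weights = torch.softmax(msgs @ self.query, dim=0)  # [k]
            else:
                # Non-learnable fallback: uniform weights (mean aggregation).
                weights = torch.full((msgs.size(0),), 1.0 / msgs.size(0),
                                     device=msgs.device)
            agg_messages.append((weights.unsqueeze(1) * msgs).sum(dim=0))
            # The aggregated message keeps the most recent timestamp.
            agg_timestamps.append(messages[node_id][-1][1])
        if agg_messages:
            agg_messages = torch.stack(agg_messages)
            agg_timestamps = torch.stack(agg_timestamps)
        return unique_ids, agg_messages, agg_timestamps
```

With `learnable=False` this degenerates to uniform (mean) aggregation, matching the non-learnable run below; the `--learnable` and `--add_cls_token` flags switch on the query scoring and the extra token.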
### Data preprocessing
We use the dense `npy` format to save the features in binary form. If edge features or node features are absent, they are replaced by a vector of zeros.
```bash
python utils/preprocess_data.py --data wikipedia --bipartite
```
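For illustration, the saved features can be inspected with NumPy (the `ml_wikipedia*` file names are an assumption based on the upstream preprocessing script):

```python
import numpy as np

# File names are an assumption based on the upstream TGN preprocessing script.
edge_feats = np.load("data/ml_wikipedia.npy")       # one feature row per edge
node_feats = np.load("data/ml_wikipedia_node.npy")  # one feature row per node
print(edge_feats.shape, node_feats.shape)

# For a dataset without node features, a zero matrix of matching shape
# plays the same role:
zero_node_feats = np.zeros_like(node_feats)
```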
### tgn-aa
```bash
# TGN-attn with attention aggregator: self-supervised learning on the wikipedia dataset
# non-learnable attention aggregator
python train_self_supervised.py --aggregator attention --use_memory --prefix tgn-attn --n_runs 10
# learnable attention aggregator
python train_self_supervised.py --aggregator attention --learnable --use_memory --prefix tgn-attn --n_runs 10 --add_cls_token
```
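For context, one plausible way the `attention` option could be wired into the aggregator factory in `modules/message_aggregator.py` (the extra keyword arguments are assumptions for illustration; `LastMessageAggregator` and `MeanMessageAggregator` are the upstream classes):

```python
# Hypothetical wiring; the learnable / add_cls_token keyword arguments
# are assumptions, not the exact tgn-aa signature.
def get_message_aggregator(aggregator_type, device, message_dim=100,
                           learnable=False, add_cls_token=False):
    if aggregator_type == "last":
        return LastMessageAggregator(device=device)   # upstream class
    elif aggregator_type == "mean":
        return MeanMessageAggregator(device=device)   # upstream class
    elif aggregator_type == "attention":
        return AttentionMessageAggregator(device=device,
                                          message_dim=message_dim,
                                          learnable=learnable,
                                          add_cls_token=add_cls_token)
    raise ValueError(f"Message aggregator {aggregator_type} not implemented")
```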
### Baselines
```bash
# Jodie: self-supervised learning
python train_self_supervised.py --use_memory --memory_updater rnn --embedding_module time --prefix jodie_rnn --n_runs 10
# Jodie: supervised learning
python train_supervised.py --use_memory --memory_updater rnn --embedding_module time --prefix jodie_rnn --n_runs 10
```