- performance-law-planner: Training plan generator based on the performance law of large language models.
- Fastformer: A PyTorch & Keras implementation and demo of Fastformer.
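Fastformer replaces quadratic query-key interaction with additive attention: all queries are summarized into one global query, which is mixed element-wise into the keys, and likewise for a global key over the values, giving linear complexity in sequence length. Below is a minimal single-head sketch written from the paper's description; the class name and layer layout are illustrative and do not mirror the repository's actual code.

```python
# Minimal single-head sketch of Fastformer-style additive attention.
# Illustrative only; the official repo's multi-head details differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FastSelfAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.w_q = nn.Linear(dim, 1)   # additive-attention scorer for queries
        self.w_k = nn.Linear(dim, 1)   # additive-attention scorer for keys
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q, k, v = self.query(x), self.key(x), self.value(x)
        # Summarize all queries into one global query (linear complexity).
        alpha = F.softmax(self.w_q(q).squeeze(-1), dim=-1)     # (B, N)
        q_global = torch.einsum("bn,bnd->bd", alpha, q)        # (B, D)
        # Mix the global query into every key element-wise.
        p = k * q_global.unsqueeze(1)                          # (B, N, D)
        beta = F.softmax(self.w_k(p).squeeze(-1), dim=-1)
        k_global = torch.einsum("bn,bnd->bd", beta, p)         # (B, D)
        # Mix the global key into every value, then transform + residual.
        u = v * k_global.unsqueeze(1)
        return self.out(u) + q

x = torch.randn(2, 16, 64)
print(FastSelfAttention(64)(x).shape)  # torch.Size([2, 16, 64])
```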
- KDD-NPA: Resources for the paper "NPA: News Recommendation with Personalized Attention".
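NPA's core idea is personalized attention: the query vector used to pool word and news representations is projected from an embedding of the user's ID, so different users attend to different words and articles. A minimal sketch of that pooling step, with hypothetical names, assuming a simple linear projection of the user embedding:

```python
# Sketch of NPA-style personalized attention pooling; names and the
# tanh projection are assumptions, not the repo's exact API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PersonalizedAttention(nn.Module):
    def __init__(self, num_users: int, user_dim: int, hidden_dim: int):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, user_dim)
        self.to_query = nn.Linear(user_dim, hidden_dim)  # user-specific query

    def forward(self, user_ids, h):
        # h: (batch, seq_len, hidden_dim) word (or news) representations
        q = torch.tanh(self.to_query(self.user_emb(user_ids)))  # (B, H)
        scores = torch.einsum("bh,bnh->bn", q, h)                # (B, N)
        weights = F.softmax(scores, dim=-1)
        return torch.einsum("bn,bnh->bh", weights, h)            # (B, H)

pool = PersonalizedAttention(num_users=1000, user_dim=32, hidden_dim=64)
words = torch.randn(4, 20, 64)
print(pool(torch.tensor([1, 2, 3, 4]), words).shape)  # torch.Size([4, 64])
```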
- OpinionAttack: Source code for the paper "Risk of Opinion Distortion Attack in AI-based News Delivery and Its Defense".
- Flipformer: The PyTorch code for "Flipformer: Ultra-Efficient Transformer with Only Shift and Flip".
- EMUL: Source code for "Inference-efficient Machine Unlearning via Model Knowledge Assembling".
- User-as-Graph: Source code for our IJCAI 2021 paper "User-as-Graph: User Modeling with Heterogeneous Graph Pooling for News Recommendation".
- Sentiment-debiasing: Source code for sentiment debiasing in news recommendation.
- FedAttack: Source code of FedAttack.
- NRNF: Neural News Recommendation with Negative Feedback.
- DebiasGAN: Eliminating Position Bias in News Recommendation with Adversarial Learning.
- AAAI-FairRec: Source code for the paper "FairRec: Fairness-aware News Recommendation with Decomposed Adversarial Learning".
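FairRec decomposes user modeling into a bias-aware and a bias-free component and trains the bias-free one adversarially against a discriminator that tries to recover the sensitive attribute. A rough sketch of that idea using gradient reversal; the discriminator, loss weights, and orthogonality term here are illustrative assumptions, not the repository's exact formulation:

```python
# Hedged sketch of decomposed adversarial learning: gradient reversal
# pushes the bias-free embedding to fool an attribute discriminator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, g):
        return -g  # flip gradients so the encoder fools the discriminator

user_repr = torch.randn(16, 64, requires_grad=True)  # from a user encoder
bias_free = nn.Linear(64, 64)(user_repr)             # decomposed embedding 1
bias_aware = nn.Linear(64, 64)(user_repr)            # decomposed embedding 2
discriminator = nn.Linear(64, 2)                     # predicts sensitive attr
attr = torch.randint(0, 2, (16,))

adv_logits = discriminator(GradReverse.apply(bias_free))
adv_loss = F.cross_entropy(adv_logits, attr)
# Orthogonality penalty keeps the two decomposed embeddings disentangled;
# in practice this is added to the recommendation loss.
orth = F.cosine_similarity(bias_free, bias_aware, dim=-1).abs().mean()
loss = adv_loss + 0.1 * orth
loss.backward()
```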
- Reviews-Meet-Graphs: Source code for the paper "Reviews Meet Graphs: Enhancing User and Item Representations for Recommendation with Hierarchical Attentive Graph Neural Network".
- MT-BERT: "One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers".
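The idea here is to distill a student model from several pre-trained teachers at once rather than from a single one. The sketch below shows one plausible instantiation, weighting each teacher per example by how well it fits the label; the actual weighting scheme and loss composition in the paper may differ.

```python
# Hedged sketch of multi-teacher knowledge distillation: the student
# matches a per-example weighted mixture of teacher soft labels.
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels, T=2.0):
    # teacher_logits_list: list of (batch, num_classes) tensors
    teacher_logits = torch.stack(teacher_logits_list)            # (K, B, C)
    # Per-example cross-entropy of each teacher; lower loss -> larger weight.
    ce = torch.stack([F.cross_entropy(t, labels, reduction="none")
                      for t in teacher_logits_list])             # (K, B)
    w = F.softmax(-ce, dim=0)                                    # (K, B)
    soft_targets = F.softmax(teacher_logits / T, dim=-1)         # (K, B, C)
    mixed = (w.unsqueeze(-1) * soft_targets).sum(0)              # (B, C)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  mixed, reduction="batchmean") * T * T
    return kd + F.cross_entropy(student_logits, labels)

s = torch.randn(8, 3)
teachers = [torch.randn(8, 3) for _ in range(3)]
y = torch.randint(0, 3, (8,))
print(multi_teacher_kd_loss(s, teachers, y).item())
```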
- WWW19-NER: Neural Chinese named entity recognition via CNN-LSTM-CRF and joint training with word segmentation.
- IJCAI2020-CPRS: Resources for the paper "User Modeling with Click Preference and Reading Satisfaction for News Recommendation".
- EMNLP2019-NRMS: Source code for the paper "Neural News Recommendation with Multi-Head Self-Attention".
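NRMS builds both its news encoder (over the words of an article) and its user encoder (over the user's clicked news) from the same pattern: multi-head self-attention followed by additive attention pooling, with click scores computed as a dot product between news and user vectors. A minimal sketch of that shared block, with illustrative dimensions:

```python
# Sketch of the NRMS building block: self-attention + additive pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttendPool(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                   nn.Linear(dim, 1))

    def forward(self, x):
        # x: (batch, seq_len, dim); self-attention then additive pooling
        h, _ = self.mha(x, x, x)
        w = F.softmax(self.score(h).squeeze(-1), dim=-1)  # (B, N)
        return torch.einsum("bn,bnd->bd", w, h)           # (B, D)

news_encoder = SelfAttendPool(64)   # pools word vectors into a news vector
user_encoder = SelfAttendPool(64)   # pools clicked-news vectors into a user vector
words = torch.randn(2, 30, 64)      # one candidate article per batch row
news_vec = news_encoder(words)
clicked = torch.randn(2, 50, 64)    # 50 clicked-news vectors per user
user_vec = user_encoder(clicked)
score = (news_vec * user_vec).sum(-1)  # click score via dot product
print(score.shape)  # torch.Size([2])
```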
- NAACL2019-HUITA: Hierarchical user and item representation for recommendation with three-tier attention.
- IJCAI2019-NAML: Code for "Neural News Recommendation with Attentive Multi-view Learning".
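NAML encodes each view of a news article (e.g., title, body, category) with its own encoder and then fuses the resulting view vectors with an attention network. The sketch below shows only that view-level attention step, with the per-view encoders stubbed out as random tensors:

```python
# Sketch of NAML-style view-level attention over per-view vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                   nn.Linear(dim, 1))

    def forward(self, views):
        # views: list of (batch, dim) vectors, one per view
        v = torch.stack(views, dim=1)                     # (B, V, D)
        w = F.softmax(self.score(v).squeeze(-1), dim=-1)  # (B, V)
        return torch.einsum("bv,bvd->bd", w, v)           # (B, D)

attn = ViewAttention(64)
# Stand-ins for the title, body, and category encoder outputs.
title, body, category = (torch.randn(8, 64) for _ in range(3))
news_vec = attn([title, body, category])
print(news_vec.shape)  # torch.Size([8, 64])
```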
- PTUM: Resources for the paper "PTUM: Pre-training User Model from Unlabeled User Behaviors via Self-supervision".
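PTUM pre-trains a user model on unlabeled behavior sequences with self-supervised tasks; one of them is masked behavior prediction, where a behavior is replaced by a learned mask embedding and must be recovered from a candidate set. A hedged sketch of that task, using a stand-in transformer encoder and randomly sampled candidates rather than the repository's model:

```python
# Hedged sketch of masked behavior prediction: recover the masked
# behavior by scoring it against sampled negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, B, N, C = 64, 4, 10, 5
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=1)
mask_emb = nn.Parameter(torch.randn(dim))   # learned [MASK] embedding

behaviors = torch.randn(B, N, dim)          # embedded behavior sequence
pos = 3                                     # position chosen for masking
target = behaviors[:, pos, :].clone()       # the behavior to recover
masked = behaviors.clone()
masked[:, pos, :] = mask_emb                # replace with [MASK]

h = encoder(masked)[:, pos, :]              # contextual vector at the mask
negatives = torch.randn(B, C - 1, dim)      # sampled negative behaviors
candidates = torch.cat([target.unsqueeze(1), negatives], dim=1)  # (B, C, D)
logits = torch.einsum("bd,bcd->bc", h, candidates)
# True candidate sits at index 0 of every candidate list.
loss = F.cross_entropy(logits, torch.zeros(B, dtype=torch.long))
print(loss.item())
```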