Turning Dust into Gold [AAAI 2024]


This is the repo for the AAAI 2024 paper: Turning Dust into Gold: Distilling Complex Reasoning Capabilities from LLMs by Leveraging Negative Data. [arXiv]

The repo contains:

  • The synthetic data from ChatGPT and GPT4.
  • The training and inference code for this work.
  • The experimental results.
  • A list of current works related to the MATH dataset and math reasoning.

Data

We provide the synthetic samples generated by GPT-3.5-turbo/GPT-4 through in-context learning (ICL) on the MATH training set; they are saved in the data folders GPT3.5-turbo-MATH and GPT4-MATH. Eight samples are generated for each training problem.
The demonstrations used for generating rationales are given in our paper.
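A quick way to inspect the provided samples from the repo root (whether the two folders sit directly under data/ is an assumption based on the description above):

ls data/GPT3.5-turbo-MATH data/GPT4-MATH    # list the raw sample files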

Code

The training and inference code is used as follows:

Step 1:

Prepare a LLaMA-7B checkpoint and store it in the code directory.

Step 2:

Prepare the conda environment from requirements.txt.

Step 3:

conda activate llm
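Steps 2 and 3 together might look like the following sketch (the Python version is an assumption; the environment name llm matches the activation command above):

conda create -n llm python=3.9 -y     # create the environment used by the scripts (Python version assumed)
conda activate llm                    # same as Step 3
pip install -r requirements.txt       # install the dependencies from Step 2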

Step 4:

Train LoRA-neg:

cd code

bash run_neg.sh

Step 5:

Train LoRA-NAT:

bash run_NAT.sh

Step 6:

Train NCE:

bash run_NCE.sh

Step 7:

Train ASC:

bash run_ASC.sh
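For reference, the four training stages can be chained from the code directory; a minimal sketch, assuming each script exits cleanly before the next stage starts:

cd code
bash run_neg.sh    # Step 4: train LoRA-neg
bash run_NAT.sh    # Step 5: train LoRA-NAT
bash run_NCE.sh    # Step 6: train NCE
bash run_ASC.sh    # Step 7: train ASC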

Results

(Figure: experimental results)

A list of works related to MATH and math reasoning

We have also organized some works related to the MATH dataset and mathematical reasoning tasks to promote future research.

A. Works involving distillation on mathematical reasoning tasks

1. Teaching Small Language Models to Reason (Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn)

2. Specializing Smaller Language Models towards Multi-Step Reasoning (Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, Tushar Khot)

dataset: Google Drive

3. Large Language Models Are Reasoning Teachers (Namgyu Ho, Laura Schmid, Se-Young Yun)

dataset: Dropbox / Google Drive

4. PaD: Program-aided Distillation Specializes Large Models in Reasoning (Xuekai Zhu, Biqing Qi, Kaiyan Zhang, Xingwei Long, Bowen Zhou)

B. Experiments on the MATH dataset

1. Measuring Mathematical Problem Solving With the MATH Dataset (original paper) (Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, Jacob Steinhardt)

2. A Neural Network Solves, Explains, and Generates University Math Problems by Program Synthesis and Few-Shot Learning at Human Level (Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda Chen, Sunny Tran, Newman Cheng, Roman Wang, Nikhil Singh, Taylor L. Patti, Jayson Lynch, Avi Shporer, Nakul Verma, Eugene Wu, Gilbert Strang)

3. ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models (Zhipeng Chen, Kun Zhou, Beichen Zhang, Zheng Gong, Wayne Xin Zhao, Ji-Rong Wen)

dataset: MATH, HotpotQA

4. Deductive Verification of Chain-of-Thought Reasoning (Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, Hao Su)

dataset: MATH

5. CREATOR: Disentangling Abstract and Concrete Reasonings of Large Language Models through Tool Creation (Cheng Qian, Chi Han, Yi R. Fung, Yujia Qin, Zhiyuan Liu, Heng Ji)

dataset: MATH, TabMWP, Creation Challenge

6. An Empirical Study on Challenging Math Problem Solving with GPT-4 (Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang)

7. Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference (Chi Wang, Susan Xueqing Liu, Ahmed H. Awadallah)

C. Research works related to MATH

1. MiniF2F: A Cross-System Benchmark for Formal Olympiad-Level Mathematics (Kunhao Zheng, Jesse Michael Han, Stanislas Polu)

(Draws on the MATH dataset to propose the miniF2F benchmark)

2. Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs (Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, Guillaume Lample)

(MATH is used only as a source of informal data for mapping informal proofs to formal proofs)

3. LAMBADA: Backward Chaining for Automated Reasoning in Natural Language (Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin Xu, Deepak Ramachandran)

(References the post-pretraining method on MATH; reverse reasoning)

4. AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models (Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, Nan Duan)

(MATH is included as part of the AGIEval benchmark)

dataset: data/v1
