This repository has been archived by the owner on Jan 11, 2024. It is now read-only.

Commit

Add Large Language Models Are Reasoning Teachers
Timothyxxx authored Feb 9, 2023
1 parent d444c47 commit 2efa0da
Showing 1 changed file with 7 additions and 3 deletions.
10 changes: 7 additions & 3 deletions README.md
@@ -142,14 +142,18 @@ A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Langu
 
   *Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, Huan Sun* [[pdf](https://arxiv.org/abs/2212.10001)] [[code](https://github.com/sunlab-osu/Understanding-CoT)] 2022.12
 
-35. **Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning**
+35. **Large Language Models Are Reasoning Teachers**
+
+    *Namgyu Ho, Laura Schmid, Se-Young Yun* [[pdf](https://arxiv.org/abs/2212.10071)] 2022.12
+
+36. **Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning**
 
     *Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, Yongbin Li* [[pdf](https://arxiv.org/abs/2301.13808)] 2023.02
 
-36. **Multimodal Chain-of-Thought Reasoning in Language Models**
+37. **Multimodal Chain-of-Thought Reasoning in Language Models**
 
     *Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola* [[pdf](https://arxiv.org/abs/2302.00923)] 2023.02
 
-37. **Large Language Models Can Be Easily Distracted by Irrelevant Context**
+38. **Large Language Models Can Be Easily Distracted by Irrelevant Context**
 
     *Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, Denny Zhou* [[pdf](https://arxiv.org/abs/2302.00093)] 2023.02
