
Evaluating the Unseen Capabilities: How Many Theorems Do LLMs Know?

License: MIT

Official implementation of the paper "Evaluating the Unseen Capabilities: How Many Theorems Do LLMs Know?".

Overview

Accurate evaluation of large language models (LLMs) is crucial for understanding their capabilities and guiding their development. However, current evaluations often inconsistently reflect the actual capacities of these models. In this paper, we demonstrate that one of many contributing factors to this evaluation crisis is the oversight of unseen knowledge: information encoded by LLMs but not directly observed, or not yet observed, during evaluations. We introduce KnowSum, a statistical framework designed to provide a more comprehensive assessment by quantifying the unseen knowledge for a class of evaluation tasks. KnowSum estimates the unobserved portion by extrapolating from the appearance frequencies of observed knowledge instances. We demonstrate the effectiveness and utility of KnowSum across three critical applications: estimating total knowledge, evaluating information retrieval effectiveness, and measuring output diversity. Our experiments reveal that a substantial volume of knowledge is omitted when relying solely on observed LLM performance. Importantly, KnowSum yields significantly different comparative rankings for several common LLMs based on their internal knowledge.
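
The extrapolation step follows the classical unseen-species idea: from the counts-of-counts of items an LLM actually produced, infer how many distinct items it encodes but never surfaced in the sample. As a rough illustration only (a minimal sketch using the classical Chao1 lower bound, not the paper's exact estimator; the function name and toy data are hypothetical), the snippet below turns a dictionary of appearance counts into an estimate of the total number of distinct items:

from collections import Counter

def chao1_estimate(counts):
    """Estimate the total (seen + unseen) number of distinct items from
    appearance frequencies, via the classical Chao1 lower bound.

    `counts` maps each observed item (e.g., a theorem name) to the number
    of times it appeared across sampled LLM responses.
    """
    observed = len(counts)                   # distinct items actually seen
    freq_of_freq = Counter(counts.values())  # f_k: items seen exactly k times
    f1 = freq_of_freq.get(1, 0)              # singletons
    f2 = freq_of_freq.get(2, 0)              # doubletons
    if f2 > 0:
        unseen = f1 * f1 / (2 * f2)
    else:
        unseen = f1 * (f1 - 1) / 2           # bias-corrected form when f2 = 0
    return observed + unseen

# Toy example: 3 theorems seen once, 2 seen twice, 1 seen five times
toy_counts = {"Pythagorean theorem": 5, "Bayes' theorem": 2, "Stokes' theorem": 2,
              "Rolle's theorem": 1, "Green's theorem": 1, "Sard's theorem": 1}
print(chao1_estimate(toy_counts))  # 6 observed + 3*3/(2*2) = 8.25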

Application 1: Knowledge Estimation

Counting Math Objects

  • Response from open-source Hugging Face LLMs: run theorem/LLM_HF-math_objects-get_response.py
  • Response from Databricks/Azure Local LLMs: run theorem/LLM_local-math_objects-get_response.py
  • Response from Claude/Gemini APIs: run theorem/LLM_API-math_objects-get_response.py
  • Scripts that create figures: theorem/visualization
  • Detailed frequency data for each figure is available in the corresponding figure directory. For example, see theorem/visualization/figure_4/detailed_frequency_math_10_key['theorem'].json (a minimal loading sketch follows this list).
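
To inspect a frequency file directly, the sketch below loads the example JSON above and tabulates its counts-of-counts; it assumes the file is a flat mapping from each observed math object to its appearance count, which may differ from the actual layout.

import json
from collections import Counter

# Path taken from the bullet above; the layout (item -> count) is an assumption.
path = "theorem/visualization/figure_4/detailed_frequency_math_10_key['theorem'].json"

with open(path) as f:
    counts = json.load(f)                    # e.g., {"Pythagorean theorem": 12, ...}

freq_of_freq = Counter(counts.values())      # f_k: number of objects seen exactly k times
print("observed objects:", len(counts))
print("singletons f1 =", freq_of_freq.get(1, 0), "| doubletons f2 =", freq_of_freq.get(2, 0))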

Counting Human Diseases

  • Response from open-source Hugging Face LLMs: run human_diseases/LLM_HF-human_diseases-get_response.ipynb
  • Response from Databricks/Azure Local LLMs: run human_diseases/LLM_local-human_diseases-get_response.ipynb
  • Response from Claude/Gemini APIs: run human_diseases/API_local-human_diseases-get_response.ipynb
  • Scripts that create figures: human_diseases/visualization
  • Detailed frequency data for each figure is available in the corresponding figure directory. For example, see human_diseases/visualization/figure_4/detailed_frequency_disease_keyAnatomical.json.

Application 2: Information Retrieval

Subtask 1: Document Retrieval

  • Response from open-source Hugging Face LLMs: run biomed_IR/BioASQ_task12b_subtask1/run_all_HF_llms.ipynb
  • Response from Databricks/Azure Local LLMs: run biomed_IR/BioASQ_task12b_subtask1/run_all_local_llms.ipynb
  • Response from Claude/Gemini APIs: run biomed_IR/BioASQ_task12b_subtask1/run_all_api_llms.ipynb

Subtask 2: Question Answering

  • Response from open-source Hugging Face LLMs: run biomed_IR/BioASQ_task12b_subtask2/scripts/run_all_HF_llms.ipynb
  • Response from Databricks/Azure Local LLMs: run biomed_IR/BioASQ_task12b_subtask2/scripts/run_all_local_llms.ipynb
  • Response from Claude/Gemini APIs: run biomed_IR/BioASQ_task12b_subtask2/scripts/run_all_api_llms.ipynb

Application 3: Creativity

  • Response from open-source Hugging Face LLMs: run creativity/LLM_HF-creativity-get_response.py
  • Response from Databricks/Azure Local LLMs: run creativity/LLM_local-creativity-get_response.py
  • Response from Claude/Gemini APIs: run creativity/LLM_API-creativity-get_response.py

Citation

If you find this repository useful, please consider citing:

@article{li2025evaluating,
  title={Evaluating the Unseen Capabilities: How Many Theorems Do LLMs Know?},
  author={Li, Xiang and Xin, Jiayi and Long, Qi and Su, Weijie J},
  journal={arXiv preprint arXiv:2506.02058},
  year={2025}
}
