Folktexts is a Python package to evaluate and benchmark the calibration of large language models. It enables using any transformers model as a classifier for tabular data tasks, and extracting risk score estimates from the model's output log-odds.
Several benchmark tasks are provided based on data from the American Community Survey. Namely, each prediction task from the popular folktables package is made available as a natural-language prompting task.
Package documentation can be found here.
Install package from PyPI:
pip install folktexts
You'll need to go through the following steps to run the benchmark tasks:
- Create conda environment
conda create -n folktexts python=3.11
conda activate folktexts
- Install folktexts package
pip install folktexts
- Create results, models, and data folders
mkdir results
mkdir models
mkdir data
- Download transformers model and tokenizer
python -m folktexts.cli.download_models --model "google/gemma-2b" --save-dir models
- Run benchmark on a given task
python -m folktexts.cli.run_acs_benchmark --results-dir results --data-dir data --task-name "ACSIncome" --model models/google--gemma-2b
Run python -m folktexts.cli.run_acs_benchmark --help to get the full list of available benchmark flags:
To use one of the pre-defined survey prediction tasks, you only need a few lines of code, as shown in the snippet below.
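The snippet assumes a causal language model and its tokenizer have already been loaded. One possible way to do this is with Hugging Face transformers, sketched here with an example model name that mirrors the CLI walkthrough above:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Example only: any causal LM supported by transformers should work here
model_name = "google/gemma-2b"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

With model and tokenizer in hand, the task snippet is: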
from folktexts.acs import ACSDataset, ACSTaskMetadata
from folktexts import LLMClassifier  # assumed import path; adjust if your folktexts version exposes it elsewhere
acs_task_name = "ACSIncome"
# Create an object that classifies data using an LLM
clf = LLMClassifier(
model=model,
tokenizer=tokenizer,
task=ACSTaskMetadata.get_task(acs_task_name),
)
# Use a dataset or feed in your own data
dataset = ACSDataset(acs_task_name)
# Get risk score predictions out of the model
y_scores = clf.predict_proba(dataset)
# Optionally, you can fit the threshold based on a small portion of the data
clf.fit(dataset[0:100])
# ...in order to get more accurate binary predictions
clf.predict(dataset)
# Compute a variety of evaluation metrics on calibration and accuracy
from folktexts.benchmark import CalibrationBenchmark
benchmark_results = CalibrationBenchmark(clf, dataset).run(results_root_dir=".")
usage: run_acs_benchmark.py [-h] --model MODEL --task-name TASK_NAME --results-dir RESULTS_DIR --data-dir DATA_DIR [--few-shot FEW_SHOT] [--batch-size BATCH_SIZE] [--context-size CONTEXT_SIZE] [--fit-threshold FIT_THRESHOLD]
[--subsampling SUBSAMPLING] [--seed SEED] [--dont-correct-order-bias] [--chat-prompt] [--direct-risk-prompting] [--reuse-few-shot-examples] [--use-feature-subset [USE_FEATURE_SUBSET ...]]
[--use-population-filter [USE_POPULATION_FILTER ...]] [--logger-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]
Run an LLM as a classifier experiment.
options:
-h, --help show this help message and exit
--model MODEL [str] Model name or path to model saved on disk
--task-name TASK_NAME
[str] Name of the ACS task to run the experiment on
--results-dir RESULTS_DIR
[str] Directory under which this experiment's results will be saved
--data-dir DATA_DIR [str] Root folder to find datasets on
--few-shot FEW_SHOT [int] Use few-shot prompting with the given number of shots
--batch-size BATCH_SIZE
[int] The batch size to use for inference
--context-size CONTEXT_SIZE
[int] The maximum context size when prompting the LLM
--fit-threshold FIT_THRESHOLD
[int] Whether to fit the prediction threshold, and on how many samples
--subsampling SUBSAMPLING
[float] Which fraction of the dataset to use (if omitted will use all data)
--seed SEED [int] Random seed -- to set for reproducibility
--dont-correct-order-bias
[bool] Whether to avoid correcting ordering bias, by default will correct it
--chat-prompt [bool] Whether to use chat-based prompting (for instruct models)
--direct-risk-prompting
[bool] Whether to directly prompt for risk-estimates instead of multiple-choice Q&A
--reuse-few-shot-examples
[bool] Whether to reuse the same samples for few-shot prompting (or sample new ones every time)
--use-feature-subset [USE_FEATURE_SUBSET ...]
[str] Optional subset of features to use for prediction
--use-population-filter [USE_POPULATION_FILTER ...]
[str] Optional population filter for this benchmark; must follow the format 'column_name=value' to filter the dataset by a specific value.
--logger-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
[str] The logging level to use for the experiment
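For example, a run that uses 5-shot prompting, evaluates on a 10% subsample of the data, and fits the decision threshold on 100 samples could look like the command below (the flag values are illustrative, not recommendations):

python -m folktexts.cli.run_acs_benchmark --results-dir results --data-dir data --task-name "ACSIncome" --model models/google--gemma-2b --few-shot 5 --subsampling 0.1 --fit-threshold 100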
Code licensed under the MIT license.
The American Community Survey (ACS) Public Use Microdata Sample (PUMS) is governed by the U.S. Census Bureau terms of service.