# llmstep

`llmstep` is a Lean 4 tactic for suggesting proof steps using a language model. Calling `llmstep "prefix"` gives suggestions that start with `prefix`:
```lean
example (f : ℕ → ℕ) : Monotone f → ∀ n, f n ≤ f (n + 1) := by
  intro h n
  llmstep "exact"
```

The suggestions appear in the Lean Infoview:

```
Try This:
* exact h (Nat.le_succ _)
* exact h (Nat.le_succ n)
* exact h (Nat.le_add_right _ _)
```
Clicking a suggestion places it in the proof:

```lean
example (f : ℕ → ℕ) : Monotone f → ∀ n, f n ≤ f (n + 1) := by
  intro h n
  exact h (Nat.le_succ _) -- llmstep "exact"
```
`llmstep` checks the language model's suggestions in Lean, and highlights those that are valid and/or close the proof. By default, `llmstep` uses a language model fine-tuned on Mathlib4 data extracted with LeanDojo, and it supports other LMs as well.
First, install Lean 4 in VS Code and the Python requirements (`pip install -r requirements.txt`). Then start the server:

```bash
python python/server.py
```

Open `LLMstep/Examples.lean` in VS Code and try out `llmstep`.
`llmstep` has three parts:

1. The Lean tactic calls a Python script, which sends a request to the server.
2. The server calls the language model and returns the generated suggestions.
3. The suggestions are displayed by the tactic in VS Code.
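To make the tactic-to-server exchange concrete, here is a minimal sketch of a client request, assuming a local HTTP server that accepts JSON. The URL, port, and field names are illustrative assumptions, not the exact llmstep protocol; see `python/server.py` for the real implementation.

```python
# Minimal sketch of the tactic -> server exchange. The URL, port, and
# JSON field names are assumptions for illustration only.
import json
import urllib.request

def request_suggestions(tactic_state: str, prefix: str) -> list[str]:
    # Serialize the current goal (tactic state) and the user-provided prefix.
    payload = json.dumps({"tactic_state": tactic_state, "prefix": prefix}).encode()
    req = urllib.request.Request(
        "http://localhost:5000",  # assumed host/port
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # The server runs the language model and returns next-tactic candidates.
    with urllib.request.urlopen(req) as response:
        return json.loads(response.read())["suggestions"]

if __name__ == "__main__":
    goal = "f : ℕ → ℕ\nh : Monotone f\nn : ℕ\n⊢ f n ≤ f (n + 1)"
    print(request_suggestions(goal, "exact"))
```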
`llmstep` supports faster suggestions via vLLM. First, install vLLM (requires a supported GPU). Then start `llmstep`'s server using:

```bash
python python/server_vllm.py
```

Fast suggestions are optional; you can use `python/server.py` to run `llmstep` without vLLM.
By default, `llmstep` uses a Pythia 2.8b language model fine-tuned on LeanDojo Benchmark 4. The model is fine-tuned on sequences of the form:

```
[GOAL]tactic-state[PROOFSTEP]next-tactic[END]
```

This format corresponds to the proofstep objective from Han et al., ICLR 2022.
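For concreteness, here is a sketch of how a prompt could be assembled in this format; the helper function is hypothetical, not part of the repository:

```python
# Sketch of building a proofstep-format prompt. The model is trained to
# continue the prompt with the next tactic, terminated by [END].
def proofstep_prompt(tactic_state: str, prefix: str = "") -> str:
    # `prefix` constrains generation to suggestions starting with it,
    # as in `llmstep "exact"`.
    return f"[GOAL]{tactic_state}[PROOFSTEP]{prefix}"

prompt = proofstep_prompt(
    "f : ℕ → ℕ\nh : Monotone f\nn : ℕ\n⊢ f n ≤ f (n + 1)",
    prefix="exact",
)
# A fine-tuned model might complete this with, e.g., " h (Nat.le_succ n)[END]".
```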
The scripts in the `python/train` directory show how the model was fine-tuned, and can be used to fine-tune a model of your own.
Swap in other language models with the `--hf-model` argument:

```bash
python python/server.py --hf-model some/other-model-7B
```

We recommend using a fine-tuned model, though in principle fine-tuning is not strictly necessary. `llmstep` assumes the model uses the proofstep format described above, but this is easy to modify.
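As a sketch of what serving an arbitrary Hugging Face model under this format involves (the model id is the placeholder from above, and the decoding settings are illustrative, not the repository's server code):

```python
# Illustrative sketch: generating proofstep-format suggestions with an
# arbitrary Hugging Face causal LM. Decoding settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "some/other-model-7B"  # placeholder id, as passed to --hf-model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def suggest(tactic_state: str, prefix: str, n: int = 3) -> list[str]:
    # To support a model with a different prompt format, change this line.
    prompt = f"[GOAL]{tactic_state}[PROOFSTEP]{prefix}"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=64,
        num_beams=n,
        num_return_sequences=n,
    )
    # Keep only the newly generated tokens, and truncate at [END].
    completions = tokenizer.batch_decode(
        outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return [prefix + c.split("[END]")[0] for c in completions]
```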
Starting the server downloads and loads the default language model, so you will likely experience a delay the first time `llmstep` is run.
Roughly speaking, when `server.py` runs on a typical MacBook Pro, `llmstep` provides suggestions in a few seconds; with a GPU, suggestions take around 1 second, and with vLLM, less than 1 second. Actual suggestion latency is variable and depends on multiple factors.
- The `llmstep` tactic is inspired by gpt-f.
- Fine-tuning data for the default model is from the amazing LeanDojo.
- The fine-tuning code is based on the script from Stanford Alpaca.
- The tactic implementation adopts ideas and code from Mathlib4's `Polyrith` and `Std.Tactic.TryThis`.
`llmstep` was initially created for an IJCAI-2023 tutorial on neural theorem proving. It aims to provide LM-based suggestions built with open-source components.
If you find this repository useful in your work, please cite:

```
@misc{llmstep,
  author = {Sean Welleck},
  title = {llmstep: LLM proofstep suggestions in Lean},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/wellecks/llmstep}},
}
```

Naturally, please also cite LeanDojo, PACT, and other relevant resources.