
Conversation


@ttulttul ttulttul commented Oct 4, 2025

This is such a great project, but in my opinion the README.md doesn't adequately explain how this library actually works. I think the world needs to know! So I hereby contribute a HOWITWORKS.md that I humbly submit for your consideration.

Instead of answering the question directly, the system uses an LLM (e.g. `GPT-5`) to act as a programmer.

* **Prompting**: The user's question is embedded into a highly detailed prompt template. This prompt instructs the LLM on how to decompose the question and convert it into a JSON-based Domain-Specific Language (DSL) that represents the problem's logic.
* **Logical Decomposition**: The LLM breaks down the question. For "Would Nancy Pelosi publicly denounce abortion?", it reasons that:
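The prompt-then-decompose flow above can be sketched roughly as follows. This is an illustrative assumption, not the library's actual code: the prompt wording, the DSL operators (`AND`, `OR`, `NOT`, `FACT`), and the hardcoded "LLM output" are all made up for the example, and a tiny interpreter stands in for whatever evaluation the real system performs.

```python
import json

# Hypothetical prompt template; the real template is far more detailed.
PROMPT_TEMPLATE = """You are a programmer. Decompose the question below
into a JSON program using the operators AND, OR, NOT, and FACT.
Return only JSON.

Question: {question}
"""

def build_prompt(question: str) -> str:
    """Embed the user's question into the prompt template."""
    return PROMPT_TEMPLATE.format(question=question)

# In the real system an LLM (e.g. GPT-5) would generate this JSON;
# here we hardcode one plausible decomposition for illustration.
llm_output = """
{
  "op": "AND",
  "args": [
    {"op": "FACT",
     "claim": "Nancy Pelosi makes public statements about abortion"},
    {"op": "NOT", "args": [
      {"op": "FACT",
       "claim": "Nancy Pelosi supports abortion rights"}
    ]}
  ]
}
"""

def evaluate(node: dict, facts: dict) -> bool:
    """Tiny interpreter for the illustrative DSL above."""
    op = node["op"]
    if op == "FACT":
        return facts.get(node["claim"], False)
    if op == "NOT":
        return not evaluate(node["args"][0], facts)
    if op == "AND":
        return all(evaluate(a, facts) for a in node["args"])
    if op == "OR":
        return any(evaluate(a, facts) for a in node["args"])
    raise ValueError(f"unknown op: {op}")

program = json.loads(llm_output)
facts = {
    "Nancy Pelosi makes public statements about abortion": True,
    "Nancy Pelosi supports abortion rights": True,
}
print(evaluate(program, facts))  # False: the decomposition answers "no"
```

The point of the intermediate DSL is that the final answer comes from deterministic evaluation of the program, not from the LLM's free-form text, so each step of the reasoning is inspectable.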


Curious: in a large system, how can we make sure the LLM doesn't introduce biases or fallacies into this logical decomposition? For smaller examples we can indeed read and inspect all the decompositions.

This question is more toward the repo owner than the PR author.

@ttulttul (Author)


I've added a section in an attempt to answer your question thoughtfully.

@naveensd101

Really nice write-up.


ttulttul commented Oct 6, 2025

I have also added a note on the use of structured outputs when the LLM's API supports them (see https://platform.openai.com/docs/guides/structured-outputs, for example).
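As a sketch of what that looks like in practice, the request below constrains the model to emit schema-conforming JSON using the `response_format` shape from OpenAI's structured-outputs guide. The DSL field names and the schema itself are illustrative assumptions, not the library's actual schema, and no network call is made here; only the payload is assembled.

```python
# Illustrative JSON Schema for a recursive DSL program node.
# Field names are assumptions for the sake of the example.
dsl_schema = {
    "type": "object",
    "properties": {
        "op": {"type": "string", "enum": ["AND", "OR", "NOT", "FACT"]},
        "claim": {"type": "string"},
        "args": {"type": "array", "items": {"$ref": "#"}},
    },
    "required": ["op"],
    "additionalProperties": False,
}

def build_request(question: str) -> dict:
    """Assemble a chat-completions payload that asks the model for
    JSON conforming to dsl_schema (structured-outputs style)."""
    return {
        "model": "gpt-5",  # any model that supports structured outputs
        "messages": [{"role": "user", "content": question}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "dsl_program",
                "schema": dsl_schema,
            },
        },
    }

req = build_request("Would Nancy Pelosi publicly denounce abortion?")
print(req["response_format"]["type"])  # json_schema
```

When the API enforces the schema server-side, the parsing step can no longer fail on malformed JSON, which removes a whole class of retry logic from the pipeline.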
