This dataset code generates mathematical question and answer pairs, from a range of question types at roughly school-level difficulty. It is designed to test the mathematical learning and algebraic reasoning skills of learning models.
Original paper: Analysing Mathematical Reasoning Abilities of Neural Models (Saxton, Grefenstette, Hill, Kohli).
This repository now includes a Street Math Approximation Dataset specifically designed for training models on mental math estimation and approximation techniques. The dataset is converted to Alpaca format for instruction tuning.
To generate the Alpaca-formatted approximation dataset:

```
python combine_dataset.py
```

This creates the `street_math_hf_dataset/` directory with:

- `train.jsonl` - Training split (70%)
- `validation.jsonl` - Validation split (15%)
- `test.jsonl` - Test split (15%)
- `sample.jsonl` - Sample examples to preview the format
Each example follows the Alpaca instruction format:

```json
{
  "instruction": "Estimate the following calculation using basic mental math and rounding techniques. Provide your approximation and briefly explain your rounding strategy.",
  "input": "22 * -394",
  "output": "**Approximation: -8700**\n\n**Method:**\n- **Round 22** to **20** (easier to multiply)\n- **Round -394** to **-400** (round number)\n\n**Reasoning:**\nThis estimation method provides a quick, practical approximation by rounding to the nearest convenient numbers for mental calculation.",
  "metadata": {
    "exact_answer": "-8668.0",
    "lower_bound": "-9535.0",
    "upper_bound": "-7801.0",
    "difficulty": "train-easy",
    "module": "street_math__mul"
  }
}
```
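For a quick sanity check of the generated files, you can read a record back with Python's standard library (a minimal sketch, assuming `combine_dataset.py` has already been run from the repository root):

```python
import json

# Read the first record from the preview split and inspect its fields.
with open("street_math_hf_dataset/sample.jsonl") as f:
    example = json.loads(f.readline())

print(example["instruction"])
print(example["input"])                      # e.g. "22 * -394"
print(example["metadata"]["exact_answer"])   # e.g. "-8668.0"
```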
This dataset is optimized for training with Axolotl using the `alpaca` format. Simply point your Axolotl config to the generated JSONL files.
The approximation dataset emphasizes:

- **Mental math techniques** - rounding strategies for quick estimation
- **Practical approximation** - real-world estimation skills
- **Reasoning explanation** - understanding why approximations work
- **Bounded evaluation** - answers within reasonable tolerance ranges are considered correct (see the sketch below)
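As an illustration, bounded evaluation can be scored directly from the metadata fields shown above (a minimal sketch; `is_approximately_correct` is a hypothetical helper, not part of this repository):

```python
def is_approximately_correct(predicted: float, metadata: dict) -> bool:
    """Accept any prediction that falls inside the example's tolerance band."""
    lower = float(metadata["lower_bound"])
    upper = float(metadata["upper_bound"])
    return lower <= predicted <= upper

# For the example above: -8700 falls within [-9535.0, -7801.0], so it counts as correct.
assert is_approximately_correct(-8700, {"lower_bound": "-9535.0", "upper_bound": "-7801.0"})
```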
Example questions:

```
Question: Solve -42*r + 27*c = -1167 and 130*r + 4*c = 372 for r.
Answer: 4

Question: Calculate -841880142.544 + 411127.
Answer: -841469015.544

Question: Let x(g) = 9*g + 1. Let q(c) = 2*c + 1. Let f(i) = 3*i - 39. Let w(j) = q(x(j)). Calculate f(w(a)).
Answer: 54*a - 30

Question: Let e(l) = l - 6. Is 2 a factor of both e(9) and 2?
Answer: False

Question: Let u(n) = -n**3 - n**2. Let e(c) = -2*c**3 + c. Let l(j) = -118*e(j) + 54*u(j). What is the derivative of l(a)?
Answer: 546*a**2 - 108*a - 118

Question: Three letters picked without replacement from qqqkkklkqkkk. Give prob of sequence qql.
Answer: 1/110
```
This is the version released with the original paper. It contains 2 million (question, answer) pairs per module, with questions limited to 160 characters in length and answers to 30 characters in length. Note that the training data for each question type is split into "train-easy", "train-medium", and "train-hard", which allows training models via a curriculum. The data from these training sets can also be mixed together uniformly to obtain the results reported in the paper.

Categories:

- **algebra** (linear equations, polynomial roots, sequences)
- **arithmetic** (pairwise operations and mixed expressions, surds)
- **calculus** (differentiation)
- **comparison** (closest numbers, pairwise comparisons, sorting)
- **measurement** (conversion, working with time)
- **numbers** (base conversion, remainders, common divisors and multiples, primality, place value, rounding numbers)
- **polynomials** (addition, simplification, composition, evaluating, expansion)
- **probability** (sampling without replacement)
The easiest way to get the source is to use pip:

```
$ pip install mathematics_dataset
```

Alternatively, you can get the source by cloning the mathematics_dataset repository:

```
$ git clone https://github.com/deepmind/mathematics_dataset
$ pip install --upgrade mathematics_dataset/
```
Generated examples can be printed to stdout via the `generate` script. For example:

```
python -m mathematics_dataset.generate --filter=linear_1d
```

will generate example (question, answer) pairs for solving linear equations in one variable.
We've also included `generate_to_file.py` to write generated examples to files. It supports both the original text format and a JSON format compatible with Hugging Face datasets.

Text format (original):

```
python -m mathematics_dataset.generate_to_file --output_dir=./dataset_text
```

JSON format (Hugging Face compatible):

```
# Separate JSONL files per difficulty level
python -m mathematics_dataset.generate_to_file --output_dir=./dataset_json --format=json

# Single combined JSONL file
python -m mathematics_dataset.generate_to_file --output_dir=./dataset_json --format=json --single_file=True
```
JSON output format: each line in the JSONL files contains:

```json
{"input": "Solve -42*r + 27*c = -1167 and 130*r + 4*c = 372 for r.", "output": "4", "difficulty": "interpolate", "module": "algebra__linear_1d"}
```

This format can be used directly with Hugging Face's `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("json", data_files="interpolate.jsonl")
```
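To mix the per-difficulty files into named splits, `data_files` also accepts a mapping (a sketch; the JSONL file names here are illustrative and depend on what `generate_to_file` wrote to your output directory):

```python
from datasets import load_dataset

# Combine the three training difficulties into a single "train" split.
# File names are assumptions based on the train-easy/medium/hard split
# names described above.
dataset = load_dataset(
    "json",
    data_files={
        "train": ["train-easy.jsonl", "train-medium.jsonl", "train-hard.jsonl"],
        "test": "interpolate.jsonl",
    },
)
print(dataset["train"][0])  # {"input": ..., "output": ..., "difficulty": ..., "module": ...}
```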
Additional options (see the combined example below):

- `--per_train_module=N`: number of examples per training module (default: 10)
- `--per_test_module=N`: number of examples per test module (default: 10)
- `--filter=pattern`: generate only modules matching the pattern
- `--train_split=False`: don't split training data by difficulty
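For example, to generate 200 JSON examples per training module for algebra modules only, without splitting by difficulty (an illustrative combination of the flags above):

```
python -m mathematics_dataset.generate_to_file --output_dir=./dataset_algebra --format=json --filter=algebra --per_train_module=200 --train_split=False
```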
You can use these scripts directly, or adapt them for your generation and training needs.
The following table is necessary for this dataset to be indexed by search engines such as Google Dataset Search.
property | value
--- | ---
name | Mathematics Dataset
url | https://github.com/deepmind/mathematics_dataset
sameAs | https://github.com/deepmind/mathematics_dataset
description | This dataset consists of mathematical question and answer pairs, from a range of question types at roughly school-level difficulty, designed to test the mathematical learning and algebraic reasoning skills of learning models. It contains 2 million (question, answer) pairs per module, with questions limited to 160 characters in length and answers to 30 characters in length. The training data for each question type is split into "train-easy", "train-medium", and "train-hard", allowing training via a curriculum. Categories: algebra (linear equations, polynomial roots, sequences); arithmetic (pairwise operations and mixed expressions, surds); calculus (differentiation); comparison (closest numbers, pairwise comparisons, sorting); measurement (conversion, working with time); numbers (base conversion, remainders, common divisors and multiples, primality, place value, rounding numbers); polynomials (addition, simplification, composition, evaluating, expansion); probability (sampling without replacement).
provider | DeepMind
citation | https://identifiers.org/arxiv:1904.01557