Minerva: Unleashing the Secrets of Advanced Mathematics 🏛️🔢

Minerva is a language model implementation based on the paper "Minerva: Solving Quantitative Reasoning Problems with Language Models", focused on mathematical understanding and problem solving. Designed around an advanced-mathematics theme, Minerva invokes the spirit of renowned mathematicians such as Euclid, Pythagoras, and Archimedes, and aims to provide strong capabilities in mathematical reasoning and exploration.



Install

pip install minerva-torch

Usage

import torch
from minerva.model import Minerva

# Random token IDs standing in for tokenized text: batch size 1, sequence length 1024
text = torch.randint(0, 20000, (1, 1024))

# Initialize the model
model = Minerva()
output = model(text)
print(output)
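
The forward pass above presumably returns next-token logits. A minimal greedy-decoding sketch, assuming the output has shape (batch, seq_len, vocab_size) over a 20,000-token vocabulary (this shape is an assumption, not documented in this README):

import torch
from minerva.model import Minerva

model = Minerva()

# Assumption: model(tokens) -> logits of shape (batch, seq_len, vocab_size)
prompt = torch.randint(0, 20000, (1, 32))  # stand-in for a tokenized math problem
generated = prompt

with torch.no_grad():
    for _ in range(16):
        logits = model(generated)
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy choice
        generated = torch.cat([generated, next_token], dim=-1)

print(generated.shape)  # torch.Size([1, 48]) if the shape assumption holds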

Training

To train Minerva, follow these steps:

  1. Configure the training settings by setting the following environment variables:

    • ENTITY_NAME: Your wandb project name
    • OUTPUT_DIR: The output directory for saving the weights (e.g., ./weights)
  2. Launch the training process with Accelerate (which can be configured to use DeepSpeed):

accelerate config
accelerate launch train_distributed_accelerate.py
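
A minimal sketch of how train_distributed_accelerate.py might consume these variables (the variable names come from step 1; the defaults and usage shown here are illustrative assumptions, not the script's confirmed behavior):

import os

# Illustrative assumption: the training script reads these environment variables
# to configure experiment logging and checkpoint paths.
entity_name = os.environ.get("ENTITY_NAME", "minerva")   # wandb project name
output_dir = os.environ.get("OUTPUT_DIR", "./weights")   # where checkpoints are saved

os.makedirs(output_dir, exist_ok=True)
print(f"wandb project: {entity_name}")
print(f"saving weights to: {output_dir}")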

Dataset Building

To build a custom dataset for Minerva, you can preprocess the data using the build_dataset.py script. It handles tasks such as pre-tokenization, data chunking, and uploading to the Hugging Face Hub.

The training data comes from three sources:

    • Mathematical Web Pages: Web pages containing mathematical expressions in MathJax format, cleaned to preserve math notation
    • arXiv: 2 million arXiv papers up to February 2021, in LaTeX format
    • General Natural Language Data: The same dataset used to pretrain PaLM models

The mathematical web pages and arXiv datasets focus on technical and mathematical content, while the general natural language data provides broad coverage of everyday language.

The paper states that the mathematical web pages and arXiv papers each account for 47.5% of the total data. The remaining 5% is general natural language data, a subset of the corpus used for PaLM pretraining.
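
As a quick illustration of that 47.5% / 47.5% / 5% split, the sketch below samples data sources according to those weights (source names follow the table above; the paper's actual sampling pipeline is not reproduced here):

import random

# Mixture weights reported in the paper: math web pages and arXiv at 47.5% each,
# general natural language data at 5%.
MIXTURE = {
    "math_web_pages": 0.475,
    "arxiv": 0.475,
    "general_natural_language": 0.05,
}

def sample_source(rng: random.Random) -> str:
    """Pick a data source according to the reported mixture weights."""
    return rng.choices(list(MIXTURE), weights=list(MIXTURE.values()), k=1)[0]

rng = random.Random(0)
counts = {name: 0 for name in MIXTURE}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
print(counts)  # roughly 4750 / 4750 / 500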

Roadmap 🗺️📍

  • Create a dataset of arXiv papers
