The goal of this project is to create a learning-based system that takes an image of a math formula and returns the corresponding LaTeX code. As a physics student I often find myself writing down LaTeX code from a reference image. I wanted to streamline my workflow and began looking into solutions, but besides the freemium [Mathpix](https://mathpix.com/) I could not find anything ready-to-use that runs locally. That's why I decided to create it myself.
Convert images into LaTeX code. A basic desktop app for https://github.com/lukas-blecher/LaTeX-OCR
## Requirements

In order to render the math in many different fonts we use XeLaTeX, generate a PDF and finally convert it to a PNG. For the last step we need some third-party tools:

* [XeLaTeX](https://www.ctan.org/pkg/xetex)
* [ImageMagick](https://imagemagick.org/) with [Ghostscript](https://www.ghostscript.com/index.html) (for converting PDFs to PNGs)
* [Node.js](https://nodejs.org/) to run [KaTeX](https://github.com/KaTeX/KaTeX) (for normalizing LaTeX code)
* [`de-macro`](https://www.ctan.org/pkg/de-macro) >= 1.4 (only for parsing arXiv papers)
* Python 3.7+ & dependencies (`requirements.txt`)
## Usage

Follow the [usage instructions here](https://github.com/lukas-blecher/LaTeX-OCR#using-the-model) (note that this project has extra dependencies!) and run `main.py`.
## Using the model
1. Download or clone this repository
2. For now, install the Python dependencies specified in `requirements.txt` (see [Requirements](#requirements) above)
3. Download the `weights.pth` (and optionally `image_resizer.pth`) file from my [Google Drive](https://drive.google.com/drive/folders/1cgmyiaT5uwQJY2pB0ngebuTcK5ivKXIb) and place it in the `checkpoints` directory

The `pix2tex.py` script offers a quick way to get the model prediction for an image. First, copy the formula image to the clipboard, for example with a snipping tool (built into Windows as `Win`+`Shift`+`S`). Then call the script with `python pix2tex.py`. It prints the predicted LaTeX code for that image and also copies it to your clipboard.

**Note:** As of right now the model works best with images of smaller resolution, so don't zoom in all the way before taking a picture. Double-check the result carefully; if the prediction is wrong, you can retry at another resolution.

**Update:** I have trained an image classifier on randomly scaled images of the training data to predict the original size. This model automatically resizes a custom image to best resemble the training data, which increases performance on images found in the wild. To use this preprocessing step, all you have to do is download the second weights file mentioned above. You should then be able to take bigger (or smaller) pictures of a formula and still get a satisfying result.
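The effect of this resizing step can be sketched roughly as follows. The function name and the fixed `predicted_scale` value are illustrative assumptions; the real classifier estimates the scale from the image itself:

```python
def resized_dims(width: int, height: int, predicted_scale: float) -> tuple:
    """Dimensions after undoing the estimated scaling, so the input
    better resembles the rendered training data.
    Hypothetical helper for illustration only."""
    return (max(1, round(width / predicted_scale)),
            max(1, round(height / predicted_scale)))

# e.g. a zoomed-in screenshot that the classifier estimates is at 2x training scale
print(resized_dims(800, 200, 2.0))  # (400, 100)
```

In practice you would then downscale the actual image to these dimensions (e.g. with Pillow's `Image.resize`) before feeding it to the OCR model.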
## Training the model
1. First we need to combine the images with their ground-truth labels. I wrote a dataset class (which needs further improvement) that saves the relative paths to the images together with the LaTeX code they were rendered with. Run the dataset script to generate the dataset pickle file. You can also find my generated training data on the [Google Drive](https://drive.google.com/drive/folders/13CA4vAmOmD_I_dSbvLp-Lf0s6KiaNfuO) (formulae.zip contains the images, math.txt the labels). Repeat this step for the validation and test data; all of them use the same label text file.
2. Edit the `data` entry in the config file to point to the newly generated `.pkl` file. Change other hyperparameters if you want to. See `settings/default.yaml` for a template.
3. Now, for the actual training, run
```
python train.py --config path_to_config_file
```
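For orientation, a minimal config could look like the sketch below. Apart from the `data` entry mentioned in step 2, every key and value here is an illustrative assumption, so consult `settings/default.yaml` for the real template:

```yaml
# hypothetical minimal training config
data: path/to/dataset.pkl   # the pickle file generated in step 1
epochs: 10                  # illustrative hyperparameters only
batchsize: 16
lr: 1.0e-4
```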
## Model
The model consists of a ViT [[1](#References)] encoder with a ResNet backbone and a Transformer [[2](#References)] decoder.
### Performance

| BLEU score | normed edit distance |
|---|---|
| 0.88 | 0.10 |
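The normed edit distance above is, in essence, the Levenshtein distance between the predicted and ground-truth token sequences divided by the reference length (lower is better). A small sketch; the token-level splitting used here is a simplifying assumption about the exact evaluation procedure:

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance between two sequences
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def normed_edit_distance(pred_tokens, ref_tokens):
    # normalize by the reference length so scores are comparable across formulas
    return levenshtein(pred_tokens, ref_tokens) / max(len(ref_tokens), 1)

pred = r"\frac { a } { b }".split()
ref = r"\frac { a } { c }".split()
print(round(normed_edit_distance(pred, ref), 3))  # 1 substitution over 7 tokens: 0.143
```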
## Data
We need paired data for the network to learn. Luckily there is a lot of LaTeX code on the internet, e.g. on [Wikipedia](https://www.wikipedia.org) or [arXiv](https://www.arxiv.org). We also use the formulae from the [im2latex-100k](https://zenodo.org/record/56198#.V2px0jXT6eA) dataset. All of it can be found [here](https://drive.google.com/drive/folders/13CA4vAmOmD_I_dSbvLp-Lf0s6KiaNfuO).
### Fonts
Latin Modern Math, GFSNeohellenicMath.otf, Asana Math, XITS Math, Cambria Math
## TODO
- [ ] support handwritten formulae
- [ ] reduce model size (distillation)
- [ ] find optimal hyperparameters
- [ ] tweak model structure
- [x] add more evaluation metrics
- [ ] fix data scraping and scrape more data
- [ ] trace the model
- [ ] create a standalone application
## Contribution
Contributions of any kind are welcome.
## Acknowledgement
Code taken and modified from [lucidrains](https://github.com/lucidrains), [rwightman](https://github.com/rwightman/pytorch-image-models), [im2markup](https://github.com/harvardnlp/im2markup), and [arxiv_leaks](https://github.com/soskek/arxiv_leaks).
## References
[1] [An Image is Worth 16x16 Words](https://arxiv.org/abs/2010.11929)

[2] [Attention Is All You Need](https://arxiv.org/abs/1706.03762)