
Commit 802c22b

New model

- new architecture
- fixes lukas-blecher#7
- better performance

1 parent c7e0486 · commit 802c22b

2 files changed: +18 −10 lines

README.md (+8 −5)
````diff
@@ -4,8 +4,8 @@ The goal of this project is to create a learning based system that takes an imag
 ![header](https://user-images.githubusercontent.com/55287601/109183599-69431f00-778e-11eb-9809-d42b9451e018.png)
 
 ## Requirements
-### Evaluation
-* PyTorch (tested on v1.7.0)
+### Model
+* PyTorch (tested on v1.7.1)
 * Python 3.7+ & dependencies (`requirements.txt`)
 ```
 pip install -r requirements.txt
````
```diff
@@ -27,7 +27,8 @@ The `pix2tex.py` file offers a quick way to get the model prediction of an image
 
 **Note:** As of right now it works best with images of smaller resolution. Don't zoom in all the way before taking a picture. Double check the result carefully. You can try to redo the prediction with an other resolution if the answer was wrong.
 
-**Update:** I have trained an image classifier on randomly scaled images of the training data to predict the original size. This model will automatically resize the custom image to best resemble the training data and thus increase performance of images found in the wild. To use this preprocessing step, all you have to do is download the second weights file mentioned above. You should be able to take bigger (or smaller) images of the formula and still get a satisfying result
+**Update:** I have trained an image classifier on randomly scaled images of the training data to predict the original size.
+This model will automatically resize the custom image to best resemble the training data and thus increase performance of images found in the wild. To use this preprocessing step, all you have to do is download the second weights file mentioned above. You should be able to take bigger (or smaller) images of the formula and still get a satisfying result
 
 ## Training the model
 1. First we need to combine the images with their ground truth labels. I wrote a dataset class (which needs further improving) that saves the relative paths to the images with the LaTeX code they were rendered with. To generate the dataset pickle file run
```
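The automatic resizing described in the **Update** paragraph can be sketched roughly as follows. This is a hypothetical illustration, not the repository's actual code: `normalize_size` is a made-up helper, and the scale factor is passed in directly, whereas the real pipeline obtains it from the downloaded size-classifier weights.

```python
# Hypothetical sketch of the preprocessing step described above:
# rescale a user-supplied image so its resolution resembles the
# training data. In the real pipeline `scale` would come from the
# trained size classifier; here it is simply passed in.
from PIL import Image


def normalize_size(img: Image.Image, scale: float) -> Image.Image:
    """Rescale `img` by `scale` before running the recognition model."""
    w, h = img.size
    target = (max(1, round(w * scale)), max(1, round(h * scale)))
    return img.resize(target, Image.BILINEAR)
```

Because the scale is predicted per image, over- or under-zoomed photos get mapped back toward the training resolution before prediction.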
```diff
@@ -49,7 +50,9 @@ python train.py --config path_to_config_file
 The model consist of a ViT [[1](#References)] encoder with a ResNet backbone and a Transformer [[2](#References)] decoder.
 
 ### Performance
-BLEU score: 0.87
+|BLEU score | normed edit distance|
+|-|-|
+|0.88|0.10|
 
 ## Data
 We need paired data for the network to learn. Luckily there is a lot of LaTeX code on the internet, e.g. [wikipedia](www.wikipedia.org), [arXiv](www.arxiv.org). We also use the formulae from the [im2latex-100k](https://zenodo.org/record/56198#.V2px0jXT6eA) dataset.
```
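The "normed edit distance" added to the performance table is, in its usual definition, the Levenshtein distance between prediction and ground truth divided by the length of the longer string, so lower is better. A minimal sketch assuming that standard definition (not necessarily the repository's exact evaluation code):

```python
def normed_edit_distance(pred: str, truth: str) -> float:
    """Levenshtein distance normalized by the longer string's length."""
    m, n = len(pred), len(truth)
    if max(m, n) == 0:
        return 0.0
    # Classic dynamic-programming edit distance, kept to one row at a time.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == truth[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n] / max(m, n)
```

On this scale a perfect prediction scores 0.0, so the reported 0.10 means roughly one edit per ten characters of LaTeX.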
```diff
@@ -64,7 +67,7 @@ Latin Modern Math, GFSNeohellenicMath.otf, Asana Math, XITS Math, Cambria Math
 - [ ] reduce model size (distillation)
 - [ ] find optimal hyperparameters
 - [ ] tweak model structure
-- [ ] add more evaluation metrics
+- [x] add more evaluation metrics
 - [ ] fix data scraping and scape more data
 - [ ] trace the model
 - [ ] create a standalone application
```

settings/config.yaml (+10 −5)
```diff
@@ -4,20 +4,25 @@ channels: 1
 debug: false
 decoder_args:
   cross_attend: true
+  attn_on_attn: true
+  cross_attend: true
+  ff_glu: true
+  rel_pos_bias: false
+  use_scalenorm: false
 device: cuda
 dim: 256
 encoder_depth: 4
 eos_token: 2
 heads: 8
-backbone_layers:
-- 3
-- 4
-- 9
+backbone_layers:
+- 2
+- 3
+- 7
 max_dimensions:
 - 672
 - 192
 min_dimensions:
-- 32
+- 96
 - 32
 max_height: 192
 max_seq_len: 1024
```
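Since the settings file is plain YAML, the new keys can be inspected with PyYAML. The snippet below mirrors the values from the diff above but is only an illustration of reading such a file, not the project's own config loader:

```python
import yaml  # PyYAML

# Inline stand-in for settings/config.yaml after this commit.
config_text = """
decoder_args:
  attn_on_attn: true
  cross_attend: true
  ff_glu: true
  rel_pos_bias: false
  use_scalenorm: false
device: cuda
dim: 256
encoder_depth: 4
heads: 8
backbone_layers: [2, 3, 7]
min_dimensions: [96, 32]
"""

# safe_load parses the YAML into plain dicts/lists without executing anything.
config = yaml.safe_load(config_text)
```

Raising the first `min_dimensions` entry from 32 to 96 means inputs are now padded/filtered to at least 96 px wide, in line with the resolution notes in the README.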
