From 9b1f8d1f67bc76620e906adc978bad4ccfd655bd Mon Sep 17 00:00:00 2001
From: srush
Date: Mon, 19 Sep 2016 14:32:21 -0400
Subject: [PATCH] Requests for Research pull

Hi OpenAI,

We just put up a pre-print of our "solution" to the Im2Latex challenge. Can you include it on your page? We made a landing page for it here: lstm.seas.harvard.edu/latex/.

Cheers,
Sasha
---
 _requests_for_research/im2latex.html | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/_requests_for_research/im2latex.html b/_requests_for_research/im2latex.html
index dbf35e6..5ba4b1a 100644
--- a/_requests_for_research/im2latex.html
+++ b/_requests_for_research/im2latex.html
@@ -32,3 +32,7 @@
 <h3>Notes</h3>
 <p>A success here would be a very cool result and could be used to build a useful online tool.</p>
 <p>While this is a very non-trivial project, we've marked it with a one-star difficulty rating because we know it is solvable using current methods. It is still very challenging to pull off, as it requires getting several ML components to work together correctly.</p>
+
+<h3>Solutions</h3>
+
+<p>Results, data set, code, and a write-up are available at http://lstm.seas.harvard.edu/latex/. The model is trained on the data sets above and uses an extension of the Show, Attend and Tell model combined with a multi-row LSTM encoder. The code is written in Torch (building on the seq2seq-attn system), and the model is optimized with SGD. Additional experiments apply the model to generating HTML from small webpages.</p>
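For readers who want a feel for the architecture the write-up describes (a row-wise recurrent encoder over a CNN feature grid, followed by Show, Attend and Tell style attention), here is a minimal NumPy sketch. This is not the authors' Torch code: a plain tanh RNN stands in for the LSTM, and all function names, weight matrices, and sizes are illustrative.

```python
import numpy as np

def row_encode(grid, Wx, Wh):
    """Toy per-row recurrent encoder: a simple tanh RNN run left-to-right
    over each row of the feature grid (stand-in for the multi-row LSTM).
    grid: (rows, cols, feat_dim); Wx: (feat_dim, hid); Wh: (hid, hid)."""
    rows, cols, _ = grid.shape
    hid = Wh.shape[0]
    h = np.zeros((rows, hid))              # one hidden state per row
    out = np.zeros((rows, cols, hid))
    for t in range(cols):                  # sweep left to right
        h = np.tanh(grid[:, t] @ Wx + h @ Wh)
        out[:, t] = h
    return out

def attend(enc, query):
    """Dot-product attention over flattened grid positions: softmax the
    scores, then return the weighted context vector and the weights.
    enc: (positions, hid); query: (hid,)."""
    scores = enc @ query
    w = np.exp(scores - scores.max())      # stable softmax
    w /= w.sum()
    return w @ enc, w
```

A decoder would call `attend` once per output LaTeX token, using its current hidden state as the query and feeding the returned context vector into the next step; the row-wise encoding is what lets the attention distinguish vertically stacked symbols such as fractions.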