This repository has been archived by the owner on Aug 14, 2019. It is now read-only.
Add "evaluation" mode to model #50
Open
Right now, the model can "train" (fit on the training data while periodically measuring validation accuracy/loss) and it can "predict" (make predictions on an unlabeled test set). It would be great to have an "evaluation" mode as well: given a labeled train/val dataset, make predictions on it and write a file (optionally sorted by log-loss or similar) listing each question, the correct answer, and our answer. This would really help with error analysis.
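A minimal sketch of what such an evaluation mode might look like. All names here (`predict_proba`, `ToyModel`, the dataset shape) are illustrative assumptions, not this repository's actual API; the point is just the output format: one row per example with its log-loss, sorted worst-first for error analysis.

```python
import csv
import math

def evaluate(model, dataset, out_path):
    """Run the model on labeled examples and write an error-analysis CSV,
    sorted by log-loss (worst predictions first).

    `dataset` is assumed to be an iterable of (question, correct_answer)
    pairs; `model.predict_proba` is a hypothetical method returning a
    dict mapping each candidate answer to a probability.
    """
    rows = []
    for question, correct in dataset:
        probs = model.predict_proba(question)
        predicted = max(probs, key=probs.get)
        # Log-loss of the probability assigned to the correct answer,
        # clamped to avoid log(0).
        loss = -math.log(max(probs.get(correct, 0.0), 1e-15))
        rows.append((loss, question, correct, predicted))
    rows.sort(reverse=True)  # worst (highest-loss) examples first
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["log_loss", "question", "correct", "predicted"])
        writer.writerows(rows)

# Toy stand-in model, for demonstration only.
class ToyModel:
    def predict_proba(self, question):
        return {"A": 0.7, "B": 0.2, "C": 0.1}

evaluate(ToyModel(), [("Q1", "A"), ("Q2", "B")], "eval.csv")
```

Sorting by log-loss puts the examples the model is most confidently wrong about at the top of the file, which is usually where error analysis pays off first.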