
MLP Regression: make testing easier #43

Open
@weefuzzy


As it stands, we don't make it easy to evaluate regression performance against held-out test data. This is a shame, because it makes it harder to judge how well a network is going to perform in practice.

I propose a two-part remedy:

  1. A way to split a dataset into (shuffled) training and testing sets. scikit-learn does this with a function (`train_test_split`). We could either add a message to dataset, or a dedicated object (a minimal sketch of the idea follows this list).
  2. A way to retrieve the MSE for a supervised prediction. This is a pain to do in the CCE. Suggest adding methods to the regressor: `test <dataset inputs_in> <dataset outputs_in> <dataset loss_out>` and `testPoint <buffer input_in> <buffer output_in> -> double`. These take two inputs (i.e. input/output pairs) and report the loss. (Although, should the batch version just return a double as well, viz. the mean loss across the whole set? Maybe that makes more sense; see the second sketch below.)
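
A minimal sketch of what the split could do, assuming scikit-learn's `train_test_split` semantics (shuffle, then partition by a ratio). The function name, the id-list representation, and the seed parameter are all illustrative, not a proposed API:

```python
import random

def train_test_split(ids, test_ratio=0.2, seed=None):
    """Shuffle the dataset's point ids, then partition into (train, test)."""
    rng = random.Random(seed)  # seeded so splits can be reproduced
    shuffled = list(ids)
    rng.shuffle(shuffled)
    n_test = int(round(len(shuffled) * test_ratio))
    return shuffled[n_test:], shuffled[:n_test]

# e.g. an 80/20 split of 100 points, reproducible via the seed
train_ids, test_ids = train_test_split(range(100), test_ratio=0.2, seed=42)
```

Working over point ids rather than copying data keeps the split cheap, and would suit either a message on dataset or a dedicated object.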

In this way we should be able to enable a more principled workflow for trickier examples, where we can monitor the test loss alongside the training loss.
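
And a sketch of what the proposed `test` / `testPoint` might compute, assuming plain MSE and the "mean loss as a double" variant for the batch version; `predict` stands in for the regressor's forward pass, and everything here is illustrative rather than a committed signature:

```python
def test(predict, inputs, targets):
    """Mean squared error over a whole test set of input/output pairs."""
    total, count = 0.0, 0
    for x, y in zip(inputs, targets):
        y_hat = predict(x)
        total += sum((a - b) ** 2 for a, b in zip(y_hat, y))
        count += len(y)
    return total / count  # one double: mean loss across the whole set

def test_point(predict, x, y):
    """Per-pair variant, matching testPoint <input> <output> -> double."""
    y_hat = predict(x)
    return sum((a - b) ** 2 for a, b in zip(y_hat, y)) / len(y)
```

Returning a single double from the batch version (rather than a per-point loss dataset) would make it trivial to poll the test loss alongside the training loss while fitting.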

paging @g-roma @jamesb93 @tedmoore @tremblap
