Doc Fix: mlp_brulee() #1122

Merged (1 commit) May 23, 2024
Conversation

@kscott-1 (Contributor) commented May 23, 2024

The default value documented for penalty is incorrect. This is a comically small PR, but it tripped me up this week: I assumed the default was 0 when it wasn't.

Here is some quick reprex proof, both via parsnip:: and via brulee:: itself (see the 'weight decay' value in each printout):

data <- modeldata::two_class_dat
# - recipe
rec <-
    data |>
        recipes::recipe(Class ~ .)
# - default param brulee_mlp via parsnip
mod <-
    parsnip::mlp() |>
        parsnip::set_engine("brulee") |>
        parsnip::set_mode("classification")
# - workflow
wf <-
    workflows::workflow() |>
        workflows::add_recipe(rec) |>
        workflows::add_model(mod)
# - fit
wf |>
    parsnip::fit(data = data)
#> ══ Workflow [trained] ══════════════════════════════════════════════════════════
#> Preprocessor: Recipe
#> Model: mlp()
#> 
#> ── Preprocessor ────────────────────────────────────────────────────────────────
#> 0 Recipe Steps
#> 
#> ── Model ───────────────────────────────────────────────────────────────────────
#> Multilayer perceptron
#> 
#> relu activation
#> 3 hidden units,  17 model parameters
#> 791 samples, 2 features, 2 classes 
#> class weights Class1=1, Class2=1 
#> weight decay: 0.001 
#> dropout proportion: 0 
#> batch size: 712 
#> learn rate: 0.01 
#> validation loss after 4 epochs: 0.5
brulee::brulee_mlp(
    x = rec,
    data = data
)
#> Multilayer perceptron
#> 
#> relu activation
#> 3 hidden units,  17 model parameters
#> 791 samples, 2 features, 2 classes 
#> class weights Class1=1, Class2=1 
#> weight decay: 0.001 
#> dropout proportion: 0 
#> batch size: 712 
#> learn rate: 0.01 
#> validation loss after 30 epochs: 0.379

  - The docs currently define penalty = 0.0 as the default value; the actual default is 0.001, as the fits above show.

Signed-off-by: Kyle Scott <kms309@miami.edu>
@simonpcouch (Contributor) commented:

Thank you! :)

@simonpcouch simonpcouch merged commit 37d62d1 into tidymodels:main May 23, 2024
7 checks passed
@kscott-1 kscott-1 deleted the kscott-1/mlp_doc_fix branch May 23, 2024 20:05
github-actions bot commented Jun 7, 2024

This pull request has been automatically locked. If you believe you have found a related problem, please file a new issue (with a reprex: https://reprex.tidyverse.org) and link to this issue.

@github-actions github-actions bot locked and limited conversation to collaborators Jun 7, 2024