This is essentially the same thing as #36. We will use the clipping function because it preserves class predictions and constrains the predictions to the natural range of the outcome.
Since an ELM minimizes an MSE loss, we can do this for binary outcomes by clipping values outside of [0, 1], but this isn't really feasible for categorical variables. For example, if there are four categories (1, 2, 3, and 4), the model predicts 1.6, and the actual class is 4, you can't interpret the prediction as meaning class 3 is more likely than class 4 just because 1.6 is numerically closer to 3; these are arbitrary category labels, so numeric distance carries no probability meaning. Therefore, we will only support clipping for binary treatments and outcomes.
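For the binary case the clipping itself is trivial. Here is a minimal sketch assuming NumPy arrays; the function name `clip_binary_predictions` is illustrative, not an existing API in this project:

```python
import numpy as np

def clip_binary_predictions(raw_preds):
    """Constrain raw MSE-based predictions to the natural [0, 1] range."""
    return np.clip(raw_preds, 0.0, 1.0)

raw = np.array([-0.3, 0.2, 0.7, 1.4])
clipped = clip_binary_predictions(raw)  # -> [0.0, 0.2, 0.7, 1.0]

# Clipping never moves a value across the 0.5 decision threshold,
# so the implied class predictions are unchanged.
assert np.array_equal(raw >= 0.5, clipped >= 0.5)
```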