
Conversation

@dweindl dweindl commented May 14, 2024

Previously, bounds were only checked for non-negativity.

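A minimal sketch of the kind of check this PR adds (a hypothetical helper, not the actual libpetab-python implementation): for estimated parameters, bounds must be ordered, and on a log scale the lower bound must additionally be strictly positive.

```python
import pandas as pd

def check_parameter_bounds(parameter_df: pd.DataFrame) -> list[str]:
    """Return error messages for invalid bounds in a PEtab parameter table
    (hypothetical helper, sketching the check discussed in this PR)."""
    errors = []
    for parameter_id, row in parameter_df.iterrows():
        if row.get("estimate", 1) != 1:
            continue  # fixed parameters: bounds are not used
        lb, ub = row["lowerBound"], row["upperBound"]
        if not lb < ub:
            errors.append(f"{parameter_id}: lowerBound must be below upperBound")
        if row.get("parameterScale", "lin") in ("log", "log10") and lb <= 0:
            errors.append(f"{parameter_id}: non-positive lowerBound on log scale")
    return errors
```

A log10-scaled parameter with lowerBound=0 would be flagged by such a check, which is exactly the case debated in the comments below.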
@dweindl dweindl requested a review from a team as a code owner May 14, 2024 12:12

codecov-commenter commented May 14, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 76.09%. Comparing base (74ac20d) to head (2f92af9).

Additional details and impacted files
@@           Coverage Diff            @@
##           develop     #259   +/-   ##
========================================
  Coverage    76.09%   76.09%           
========================================
  Files           36       36           
  Lines         3233     3233           
  Branches       786      786           
========================================
  Hits          2460     2460           
  Misses         568      568           
  Partials       205      205           



@dilpath dilpath left a comment


Fine for me but it means tools cannot implement model selection via regularization with L1 (admittedly nontrivial on log-scale), because that would contradict the nonzero lower bound.


dweindl commented May 14, 2024

> Fine for me but it means tools cannot implement model selection via regularization with L1 (admittedly nontrivial on log-scale), because that would contradict the nonzero lower bound.

Thanks. No strong opinion. I'm fine with either:

  • allowing 0, but making that explicit in the specs and silencing all warnings related to log(0) in the library
  • excluding 0

Other opinions?
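For reference, the first option (allowing 0 while silencing the resulting warnings) can be done locally with NumPy's `errstate` context manager; a sketch:

```python
import numpy as np

lower_bound = 0.0
with np.errstate(divide="ignore"):
    # log10(0) is well-defined as -inf in IEEE floating point;
    # errstate suppresses the RuntimeWarning NumPy would otherwise emit.
    scaled_lb = np.log10(lower_bound)
```

`scaled_lb` is then `-inf`, with no warning raised inside the context.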


dilpath commented May 14, 2024

Also no strong opinion.

We support nominal values of 0 regardless of parameter scale and bounds, so no issue for PEtab Select.

Since excluding 0 would be a possibly-breaking change, I would be in favor of allowing 0.


dweindl commented May 27, 2024

> Since excluding 0 would be a possibly-breaking change, I would be in favor of allowing 0.

Okay, I guess we'll leave it as is for PEtab v1.

For PEtab v2 we'll probably allow 0. For further discussion -> PEtab-dev/PEtab#579


dilpath commented Jun 13, 2024

Sampling (e.g., of initial start vectors for multi-start optimization) when the lower bound is 0 and the parameter is log10-scaled would mean sampling in (-inf, upperBound], which isn't sensible.

We could require that, if a user specifies a lowerBound of 0 for a log10-scaled parameter, then they should also specify initializationPriorParameters such that the initial sampling of start vectors can occur on parameterScale within finite bounds.

A PEtab problem where samples are drawn uniformly from non-finite bounds should be considered invalid.

from discussion with @dweindl
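To illustrate the two cases: with lowerBound=0 on log10 scale, the scaled sampling interval is unbounded below, whereas explicit initializationPriorParameters keep the sampling finite. A sketch with hypothetical prior parameters (-3 and 2 on log10 scale):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Case 1: lowerBound = 0 on log10 scale -> scaled bound is -inf,
# so uniform sampling over (-inf, log10(upperBound)] is undefined.
with np.errstate(divide="ignore"):
    scaled_lb = np.log10(0.0)

# Case 2: explicit initializationPriorParameters (hypothetical values -3;2)
# allow drawing start points uniformly on parameterScale within finite limits.
prior_lb, prior_ub = -3.0, 2.0
start_points = 10.0 ** rng.uniform(prior_lb, prior_ub, size=5)
```

The start points then all lie in [1e-3, 1e2] on the linear scale, independent of the degenerate lower bound.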

dweindl added a commit to dweindl/libpetab-python that referenced this pull request Jun 26, 2024
Check the parameter table for positive bounds for log-scaled estimated
parameters that don't have an explicit initialization prior.

See discussion in PEtab-dev#259

Supersedes and closes PEtab-dev#259

dweindl commented Jun 26, 2024

Closing in favor of #278

@dweindl dweindl closed this Jun 26, 2024
@dweindl dweindl deleted the pos_log branch June 26, 2024 18:19
dweindl added a commit that referenced this pull request Jun 27, 2024
Check the parameter table for positive bounds for log-scaled estimated
parameters that don't have an explicit initialization prior or that have initializationPriorType=parameterScaleUniform.

See discussion in #259

Supersedes and closes #259