This repository has been archived by the owner on Mar 22, 2024. It is now read-only.

add regression metrics #27

Merged
merged 9 commits into from
Jan 11, 2021

Conversation

manonreau (Collaborator)

No description provided.

@NicoRenaud NicoRenaud (Member) left a comment

Good job, and sorry it took me so long to review it.
The main thing I would change is the try/except statements.

@@ -116,17 +117,102 @@ def __init__(self, prediction, y, target, threshold=4, binary=True):

self.accuracy = (true_positive+true_negative)/(true_positive+false_positive+false_negative+true_negative)

def hitrate(self):
# regression metrics
if target == 'fnat' or target == 'irmsd' or target == 'lrmsd':

You could change that to:

if target in ['fnat', 'irmsd', 'lrmsd']:

Comment on lines 123 to 127
try:
# Explained variance regression score function
self.explained_variance = metrics.explained_variance_score(self.y, self.prediction)
except:
self.explained_variance = None

I don't quite get the try/except construction here. What exception would you expect? In general, an except without an error type is not the best thing to do (but I do that as well sometimes).

In any case I would remove the try/except. You could initialize all the metrics to None before the if statement. And then call the sklearn functions in the if branch.

self.explained_variance = None
...
if target in [...]:
    self.explained_variance = metrics.xxxx
else:
    print('Warning : ...')
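The suggested refactor can be sketched as follows. `RegressionMetrics` is an illustrative stand-in for the PR's class, and the plain-Python `explained_variance_score` (computing 1 - Var(residuals) / Var(y_true)) stands in for sklearn's function of the same name, so the sketch is self-contained:

```python
def explained_variance_score(y_true, y_pred):
    # explained variance: 1 - Var(y_true - y_pred) / Var(y_true)
    n = len(y_true)
    err = [t - p for t, p in zip(y_true, y_pred)]
    mean_err = sum(err) / n
    var_err = sum((e - mean_err) ** 2 for e in err) / n
    mean_y = sum(y_true) / n
    var_y = sum((t - mean_y) ** 2 for t in y_true) / n
    return 1.0 - var_err / var_y

class RegressionMetrics:
    def __init__(self, prediction, y, target):
        # initialize every metric to None up front, so no try/except
        # is needed for non-regression targets
        self.explained_variance = None
        if target in ['fnat', 'irmsd', 'lrmsd']:
            self.explained_variance = explained_variance_score(y, prediction)
        else:
            print('Warning: no regression metrics for target %r' % target)
```

A caller then checks `metric is None` instead of catching a bare exception.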


def hitrate(self):

idx, ground_truth_bool = self.format_score()
hitrate = np.cumsum(ground_truth_bool[idx])

You can directly return the np call:

return np.cumsum(ground_truth_bool[idx])
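As a standalone sketch of that suggestion: the function below uses the same rank-then-cumulate logic, with `itertools.accumulate` standing in for `np.cumsum`. The names and the ascending sort (lower score = better, as for rmsd-like scores) are assumptions, not the PR's actual `format_score` behaviour:

```python
from itertools import accumulate

def hitrate(scores, ground_truth_bool):
    # rank candidates by ascending score, then return the running
    # count of true hits along that ranking in a single expression
    idx = sorted(range(len(scores)), key=lambda i: scores[i])
    return list(accumulate(ground_truth_bool[i] for i in idx))
```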

try:
# Mean squared logarithmic error regression loss
self.mean_squared_log_error = metrics.mean_squared_log_error(self.y, self.prediction)
except:

Nice replacement of the excepts. Could you use:

try:
    self.mean_squared_log_error = ....
except ValueError:
    print('Warning ....')
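The `except ValueError` pattern asked for above can be sketched like this. The plain-Python `mean_squared_log_error` is a stand-in for sklearn's, which likewise raises ValueError when inputs contain negative values (the likely failure mode the bare except was hiding):

```python
import math

def mean_squared_log_error(y_true, y_pred):
    # stand-in for sklearn.metrics.mean_squared_log_error; like sklearn,
    # refuse negative values, where log(1 + x) is not well defined
    if any(v < 0 for v in list(y_true) + list(y_pred)):
        raise ValueError('mean_squared_log_error requires non-negative values')
    n = len(y_true)
    return sum((math.log1p(t) - math.log1p(p)) ** 2
               for t, p in zip(y_true, y_pred)) / n

msle = None
try:
    # targets like irmsd are non-negative, but a model's raw
    # predictions need not be, hence the explicit ValueError catch
    msle = mean_squared_log_error([3.0, 0.5], [2.5, -1.0])
except ValueError:
    print('Warning: mean_squared_log_error could not be computed')
```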

@NicoRenaud NicoRenaud (Member) left a comment

Nice! It might be time to start adding docstrings to the functions/methods/classes and start documenting the code a bit more. We can do that gradually in the next PR :)

@manonreau manonreau merged commit 27c77ce into master Jan 11, 2021
@manonreau manonreau deleted the regression branch January 18, 2021 15:05