Conversation
Good job, and sorry it took me so long to review it.
The main thing I would change is the try/except statements.
graphprot/Metrics.py
Outdated
@@ -116,17 +117,102 @@ def __init__(self, prediction, y, target, threshold=4, binary=True):

        self.accuracy = (true_positive+true_negative)/(true_positive+false_positive+false_negative+true_negative)

    def hitrate(self):

        # regression metrics
        if target == 'fnat' or target == 'irmsd' or target == 'lrmsd':
You could change that to:

if target in ['fnat', 'irmsd', 'lrmsd']:
graphprot/Metrics.py
Outdated
        try:
            # Explained variance regression score function
            self.explained_variance = metrics.explained_variance_score(self.y, self.prediction)
        except:
            self.explained_variance = None
I don't quite get the try/except construction here. What exception would you expect? In general, an except without an error type is not the best thing to do (but I do that as well sometimes).
In any case I would remove the try/except. You could initialize all the metrics to None before the if statement, and then call the sklearn functions in the if branch:
self.explained_variance = None
...
if target in [...]:
    self.explained_variance = metrics.xxxx
else:
    print('Warning: ...')
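To make the idea concrete, here is a minimal sketch of that pattern; the class name, attribute set, and warning text are assumptions based on the snippets in this diff, not the real file:

from sklearn import metrics

class Metrics:

    def __init__(self, prediction, y, target, threshold=4, binary=True):
        self.prediction = prediction
        self.y = y
        self.target = target

        # initialize every regression metric to None so the attributes always exist
        self.explained_variance = None
        self.mean_squared_log_error = None

        if target in ['fnat', 'irmsd', 'lrmsd']:
            # regression metrics, only computed for regression targets
            self.explained_variance = metrics.explained_variance_score(self.y, self.prediction)
            self.mean_squared_log_error = metrics.mean_squared_log_error(self.y, self.prediction)
        else:
            print('Warning: %s is not a regression target, regression metrics stay None' % target)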
graphprot/Metrics.py
Outdated
    def hitrate(self):

        idx, ground_truth_bool = self.format_score()
        hitrate = np.cumsum(ground_truth_bool[idx])
You can directly return the np call:

return np.cumsum(ground_truth_bool[idx])
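Applied to the method quoted above, the body would collapse to a single return; this is just a sketch and assumes format_score() keeps returning the index order and the boolean ground truth:

import numpy as np  # assumed to already be imported in Metrics.py

def hitrate(self):
    # rank the boolean ground truth by the predicted order and return the cumulative hit count directly
    idx, ground_truth_bool = self.format_score()
    return np.cumsum(ground_truth_bool[idx])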
graphprot/Metrics.py
Outdated
        try:
            # Mean squared logarithmic error regression loss
            self.mean_squared_log_error = metrics.mean_squared_log_error(self.y, self.prediction)
        except:
Nice replacement of the excepts. Could you use:

try:
    self.mean_squared_log_error = ....
except ValueError:
    print('Warning ....')
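Something like this, for example; the warning text and the None fallback are just one possible choice:

try:
    # mean_squared_log_error raises a ValueError when the inputs contain negative values
    self.mean_squared_log_error = metrics.mean_squared_log_error(self.y, self.prediction)
except ValueError:
    self.mean_squared_log_error = None
    print('Warning: mean_squared_log_error could not be computed for this target')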
Nice! It might be time to start adding docstrings to the functions/methods/classes and to start documenting the code a bit more. We can do that gradually in the next PR :)
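As a starting point, a docstring on one of the new methods could look like this; the summary wording is only a guess at the intent of hitrate:

def hitrate(self):
    """Cumulative number of hits as a function of the predicted rank.

    Returns
    -------
    numpy.ndarray
        Cumulative sum of the boolean ground truth, ordered by the predictions.
    """
    idx, ground_truth_bool = self.format_score()
    return np.cumsum(ground_truth_bool[idx])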