Evaluation, Reproducibility, Benchmarks Meeting 9
AReinke edited this page Apr 1, 2021
Date: 24th March 2021
Present: Paul, Jens, Carole, Jorge, Lena
- Brainstorming
- How to present recommendations (metric mapping)?
- Use a matrix (metrics vs. “problem characteristics”, e.g. lesion size) for recommendations, with cells colored green, ..., red
- Question: Do we need multiple matrices to account for different views (e.g. clinical vs. technical)?
- Problem characteristics (Jorge: “Axes”)
- see Decathlon data set description and previous Delphi answers for initial list
- Should collect more from Delphi participants, this time on a per-task basis
- Do experiments on correlation of metrics?
- Evidence for the paper’s claims is desired, but
- It is hard to do comprehensive experiments
- Can incorporate prior work, e.g. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7025187
- Metric vs clinical problem
- A metric may be relevant if it properly assesses at least one important property
- Typically need multiple metrics
- Careful with correlation
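The point about metric correlation can be illustrated with a minimal sketch: two overlap metrics (here Dice and IoU, which are monotone transforms of each other) computed over simulated per-case scores, then checked for Pearson correlation. The scores are randomly generated stand-ins, not real evaluation data.

```python
import numpy as np

# Simulated per-case IoU scores (placeholders for real evaluation results).
rng = np.random.default_rng(0)
iou = rng.uniform(0.1, 0.95, size=50)

# Dice is a monotone function of IoU: Dice = 2*IoU / (1 + IoU),
# so the two metrics are expected to be highly correlated.
dice = 2 * iou / (1 + iou)

# Pearson correlation between the two metric columns.
r = np.corrcoef(iou, dice)[0, 1]
print(f"Pearson r between IoU and Dice: {r:.3f}")
```

A high correlation like this is exactly why reporting both metrics adds little information, whereas combining a weakly correlated pair (e.g. an overlap metric with a boundary-distance metric) covers more distinct properties.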
- Conclusion on next steps:
  - Generate shortlist of metrics for each task (Delphi coordinators, based on round 1)
  - Generate shortlist of problem characteristics (Delphi coordinators):
    - For each task:
      - Collect initial list of “problem characteristics” by
        - gathering results from the first Delphi round
        - incorporating suggestions from MONAI discussions
  - Finalize list of problem characteristics (Delphi participants):
    - For each task:
      - Agree/disagree on items of the list
      - Complement the list
  - Generate metric pools (Delphi participants):
    - For each task and each metric:
      - Decide whether the metric should be discussed for the respective task
  - In a subsequent round: converge on a list of problem characteristics and metrics for each task
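The recommendation matrix discussed under brainstorming could be sketched as a simple lookup structure mapping metrics to traffic-light ratings per problem characteristic. All metric names, characteristics, and ratings below are invented placeholders for illustration, not actual recommendations.

```python
# Hypothetical sketch of the proposed recommendation matrix:
# rows = metrics, columns = problem characteristics,
# cells = traffic-light rating. Ratings here are made up.
matrix = {
    "Metric A": {"small lesions": "yellow", "multiple structures": "green"},
    "Metric B": {"small lesions": "green", "multiple structures": "yellow"},
}

def recommend(characteristic: str) -> list[str]:
    """Return all metrics rated 'green' for a given problem characteristic."""
    return [metric for metric, cells in matrix.items()
            if cells.get(characteristic) == "green"]

print(recommend("small lesions"))  # ['Metric B']
```

Multiple such matrices (e.g. one per stakeholder view, clinical vs. technical) could reuse the same structure, addressing the open question raised above.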