Evaluation, Reproducibility, Benchmarks Meeting 25
Date: 24th January 2024
- Nick Heller
- Anne Mickan
- Olivier Colliot
- In March: revisit the technical paper
- A link to the visualization would be great
- We need community testing
- MONAI started a newsletter, which will be distributed roughly every three months. We can advertise community testing there
- We should add our examples to the general MONAI tutorials
- After users get metric recommendations, they could be provided with a code snippet from the MONAI implementation (see the sketch after these notes)
- It would be great if challenge organizers shared their datasets, so that metrics, and ideally full results, could be computed directly by the system
- We could potentially leverage Grand Challenge
- MONAI implementation team can support
- From the challenge organizers' perspective:
- Rankings Reloaded in Python/MONAI
- From metrics to direct output
- Metrics Playground
- We need consensus guidelines
- Also address reproducibility (which is related to variability)
- Address the question "When is Algorithm A better than Algorithm B?" (a minimal statistical sketch follows these notes)
- TODO: What do we want to have consensus on/recommendations for?
- Annika will create a folder and brainstorming document; the link will be shared with the WG
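
As a possible example of the code snippet idea mentioned above, the sketch below computes a Dice score with MONAI's `DiceMetric` on small toy one-hot tensors. The tensor shapes, the choice of Dice, and the parameter settings are illustrative assumptions, not WG recommendations.

```python
# Minimal sketch (assumes MONAI and PyTorch are installed): how a metric
# recommendation could be paired with a runnable MONAI snippet.
import torch
from monai.metrics import DiceMetric

# Toy one-hot segmentations: (batch, channel, H, W); channel 0 = background, 1 = foreground.
y_pred = torch.zeros(1, 2, 4, 4)
y_true = torch.zeros(1, 2, 4, 4)
y_pred[0, 1, 1:3, 1:3] = 1  # predicted foreground patch
y_true[0, 1, 1:4, 1:4] = 1  # reference foreground patch
y_pred[0, 0] = 1 - y_pred[0, 1]
y_true[0, 0] = 1 - y_true[0, 1]

dice = DiceMetric(include_background=False, reduction="mean")
dice(y_pred=y_pred, y=y_true)   # accumulate per-batch results
print(dice.aggregate().item())  # mean Dice over all accumulated batches
dice.reset()
```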
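On the question of when Algorithm A is better than Algorithm B, one common illustration is a paired test on per-case metric values. The sketch below uses SciPy's Wilcoxon signed-rank test on made-up per-case Dice scores; the choice of test and the data are assumptions for illustration, not consensus guidance.

```python
# Illustrative sketch only: paired comparison of two algorithms on the same
# test cases via a Wilcoxon signed-rank test. All scores are made up.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-case Dice scores for the same 8 test cases.
dice_a = np.array([0.91, 0.88, 0.93, 0.85, 0.90, 0.87, 0.92, 0.89])
dice_b = np.array([0.90, 0.86, 0.90, 0.81, 0.85, 0.81, 0.85, 0.905])

stat, p_value = wilcoxon(dice_a, dice_b)
print(f"Wilcoxon statistic = {stat:.2f}, p-value = {p_value:.3f}")
# A single p-value does not capture ranking variability or multiple comparisons;
# these are among the points the consensus guidelines would need to address.
```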