
Evaluation, Reproducibility, Benchmarks Meeting 25


Minutes of meeting 25

Date: 24th January 2024

Present: Annika, Michela, Carole, Lena, Stephen, Anne, Olivier, Nick

TOP 1: Welcome to new members

  • Nick Heller
  • Anne Mickan
  • Olivier Colliot

TOP 2: Metric implementation

  • In March: revisit the technical paper
  • A link to the visualization would be great
  • We need community testing
  • MONAI has started a newsletter that will be distributed roughly every three months; we can use it to advertise community testing
  • We should add our examples to the general MONAI tutorials

TOP 3: Metrics Reloaded Toolkit

  • After users get metric recommendations, they could be provided with a code snippet from the MONAI implementation (see the sketch after this list)
  • It would be great if challenge organizers shared their datasets, so that metrics, and ideally full results, could be computed directly by the system
  • We could potentially leverage Grand Challenge
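
As an illustration of the kind of snippet that could accompany a metric recommendation, below is a minimal sketch using MONAI's existing DiceMetric class on a toy one-hot example; the exact metric and parameters shown here are placeholders, not the recommendation output itself.

```python
import torch
from monai.metrics import DiceMetric

# One-hot encoded prediction and reference, shape [batch, classes, H, W].
# Toy 2-class example (background + foreground) for demonstration only.
y_pred = torch.zeros(1, 2, 4, 4)
y_pred[:, 1, 1:3, 1:3] = 1          # predicted foreground square
y_pred[:, 0] = 1 - y_pred[:, 1]     # background channel

y = torch.zeros(1, 2, 4, 4)
y[:, 1, 1:4, 1:4] = 1               # reference foreground square
y[:, 0] = 1 - y[:, 1]

dice_metric = DiceMetric(include_background=False, reduction="mean")
dice_metric(y_pred=y_pred, y=y)     # accumulate scores for this batch
print(dice_metric.aggregate().item())  # mean Dice over accumulated batches
dice_metric.reset()
```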

TOP 4: Validation framework

  • The MONAI implementation team can provide support
  • From the challenge organizers' perspective:
    • Rankings Reloaded in Python/MONAI
    • From metrics to direct output
    • Metrics Playground
  • We need consensus guidelines
  • Also address reproducibility (which is related to variability)
  • Address the question "When is Algorithm A better than Algorithm B?"
  • TODO: What do we want to have consensus on/recommendations for?
    • Annika will create a folder and a brainstorming document; the link will be shared with the WG