Task_2/README.md
Details for steps 1 and 2 are given in the guide in the [singularity_example](singularity_example). Regarding step 3, each participating team will be provided with a GitLab project where they can upload their submission. A few simple steps are necessary for that:
1. Register for the challenge as described on the [challenge website](https://fets-ai.github.io/Challenge/) (if not already done).
2. Sign up at [https://gitlab.hzdr.de/](https://gitlab.hzdr.de/) **using the same email address as in step 1**, either by clicking *Helmholtz AAI* (login via your institutional email) or via your GitHub login. Both buttons are in the lower box on the right.
3. Send an email to [challenge@fets.ai](mailto:challenge@fets.ai) asking for a Task 2 GitLab project and stating your GitLab handle (@your-handle) and team name. We will create a project for you and invite you to it within a day.
4. Follow the instructions in the newly created project to make a submission.
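Once your project exists, the upload typically boils down to a few git commands. The handle, project name, and file names below are placeholders and not the official layout of your assigned project; the commands are only printed, so the sketch runs without network access, and whether large `.sif` files need Git LFS depends on your project's settings.

```shell
# Placeholders (assumptions, not the official project layout):
HANDLE="your-handle"
PROJECT="your-team-project"
REPO_URL="https://gitlab.hzdr.de/${HANDLE}/${PROJECT}.git"

# Typical upload flow, echoed rather than executed so no network is needed:
echo "git clone ${REPO_URL}"
echo "cd ${PROJECT}"
echo "cp /path/to/your_container.sif ."
echo "git add your_container.sif"
echo "git commit -m 'Task 2 submission'"
echo "git push origin main"
```

Always defer to the instructions inside your actual project (step 4), which override this sketch.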
In the testing phase of Task 2, we are going to perform a federated evaluation at multiple remote institutions with limited computational capacity. To finish the evaluation before the MICCAI conference, we have to restrict the inference time of the submitted algorithms. As the number of participants is not known in advance, we decided on the following rules in that regard:
- For each final submission, we are going to check the validity of the algorithm output and measure the execution time of the container on a small dataset using a pre-defined hardware setup (CPU: E5-2620 v4, GPU: RTX 2080 Ti 10.7 GB, RAM: 40 GB).
- Each submission is given **180 seconds per case** to produce a prediction (we will check only the total runtime for all cases, though). Submissions that fail to predict all cases within this time budget will not be included in the federated evaluation.
- If the number of participants is extremely high, we reserve the right to limit the number of participants in the final MICCAI ranking in the following way: algorithms will be evaluated on the federated test set in the chronological order they were submitted in. This means the later an algorithm is submitted, the higher the risk that it cannot be evaluated on all federated test sets before the end of the testing phase. Note that this is a worst-case rule and we will work hard to include every single valid submission in the ranking.
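Before submitting, you can sanity-check your own runtime against the total budget (180 seconds times the number of cases, since only the sum over all cases is checked). This is an illustrative sketch: the case count is a placeholder (the size of the timing dataset is not stated here) and the commented-out container command must be replaced with your own.

```shell
NUM_CASES=5                      # placeholder: number of cases in your local test set
BUDGET=$((180 * NUM_CASES))      # total budget in seconds; only the total is checked

start=$(date +%s)
# singularity run --nv your_container.sif /path/to/test_data   # placeholder command
end=$(date +%s)
elapsed=$((end - start))

echo "budget: ${BUDGET}s, elapsed: ${elapsed}s"
if [ "$elapsed" -le "$BUDGET" ]; then
  echo "within budget"
else
  echo "over budget"
fi
```

Note that your local hardware will differ from the evaluation setup above, so leave yourself some margin.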