Thank you for your excellent work! While fine-tuning RDT, I ran into an issue: training on 8 GPUs fails with the following error:
[2025-08-15 16:52:01,935] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -8) local_rank: 0 (pid: 3175446) of binary: /vla/miniconda3/envs/rdt/bin/python3.10
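
For reference, my understanding is that a negative exitcode from torch.distributed.elastic means the worker process was killed by that signal number, and signal 8 is SIGFPE (floating-point exception) on Linux. A minimal check of that mapping (the exit code value is taken from the log above):

```python
import signal

# torchrun reports the worker's returncode; a negative value means
# the process was terminated by the signal with that number.
exitcode = -8
print(signal.Signals(-exitcode).name)  # prints "SIGFPE" on Linux
```
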
Thank you for your help!
