
How to train and evaluate algorithms on the robot_env? #6

Open
007lbx opened this issue Jun 30, 2024 · 3 comments

Comments

@007lbx

007lbx commented Jun 30, 2024

  1. I followed the scripts to train the algorithms, but I want to know what train_new.py is used for.
  2. When I use eval.py to evaluate the algorithms, I run into the following error (a possible workaround is sketched after this list):
    Traceback (most recent call last):
    File "src/eval.py", line 117, in <module>
    main(args)
    File "src/eval.py", line 55, in main
    action_repeat=args.action_repeat,
    AttributeError: 'Namespace' object has no attribute 'action_repeat'
    I found that this eval.py is identical to \DMC\src\eval.py, but the two settings are obviously different.
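For reference, the missing attribute corresponds to a command-line option that the DMC-style script expects. A minimal, hypothetical sketch of the argparse declaration that would make `args.action_repeat` available is below; the argument name, default value, and parser layout are assumptions for illustration, not the repo's actual CLI:

```python
# Hypothetical sketch: eval.py reads args.action_repeat, so the parser used for
# the robot_env must declare that flag (or provide a default), otherwise the
# AttributeError above is raised.
import argparse

parser = argparse.ArgumentParser()
# ... other robot_env arguments would be declared here ...
parser.add_argument("--action_repeat", type=int, default=2,
                    help="number of simulator steps taken per policy action")
args = parser.parse_args()
```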
@DavidBert
Collaborator

Hey!
Sorry, these two files should not have been included in the public repo; it was a mistake.
The train_new.py file was just a sandbox we used to get random pictures of the environment. If you look closely, it's not a training loop but just a script that performs a random trajectory and saves an image of the last state.
The eval file is just a plain copy of the one in \DMC without any modification. It is a leftover file that should not be present in this folder. If you want an equivalent evaluate.py file for the robot_env, you should take inspiration from the one in \DMC and modify the evaluate method to match the one implemented here.
Sorry for our oversight in not deleting these files. It's now done; thank you for pointing this out to us.
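For readers landing here, a minimal sketch of what such a sandbox script does, based on the description above: roll out a random trajectory and save a picture of the final state. The gym-style environment interface (`action_space.sample()`, `step`, `render(mode="rgb_array")`) and the image-saving utility are assumptions for illustration, not the repo's actual code:

```python
import numpy as np
import imageio

def random_rollout_snapshot(env, num_steps=100, out_path="last_state.png"):
    """Take random actions for num_steps and save an image of the last state."""
    obs = env.reset()
    for _ in range(num_steps):
        action = env.action_space.sample()         # random action, no learning involved
        obs, reward, done, info = env.step(action)
        if done:
            obs = env.reset()
    frame = env.render(mode="rgb_array")            # grab the final state as an RGB array
    imageio.imwrite(out_path, np.asarray(frame, dtype=np.uint8))
```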

@007lbx
Author

007lbx commented Jul 1, 2024

Thanks for your reply!
Appendix D provides additional results on a vision-based robotic environment: "We modified the original simulator to include three testing environments for both tasks...". Is "Test 1-3" equivalent to the robotic manipulation setup introduced by Jangir et al. (2022)? Can I adopt four test environments, "Test 1-4", for each task?

@DavidBert
Collaborator

It's been a while since I last interacted with it, but yes, I think so.
