Description
Hello. Like many folks here, I use a custom AirSim env for reinforcement learning (RL) experiments. I apply imitation learning (IL) to multirotors, which means my pipeline is: train an RL algorithm -> generate expert demonstrations -> train an IL algorithm (a rough sketch is below). That is essentially double the time typically spent on training: on my custom env, the RL stage alone takes 1.5 days for a million timesteps.
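For context, here is a minimal sketch of that pipeline, assuming stable-baselines 2.x (which ships GAIL and the expert-trajectory helpers) and a hypothetical `MyAirSimEnv` gym wrapper around the AirSim client:

```python
from stable_baselines import PPO2, GAIL
from stable_baselines.gail import ExpertDataset, generate_expert_traj

from my_airsim_env import MyAirSimEnv  # hypothetical gym.Env wrapping the AirSim API

env = MyAirSimEnv()

# Stage 1: train the expert RL policy (this is the ~1.5-day run).
expert = PPO2('MlpPolicy', env, verbose=1)
expert.learn(total_timesteps=1_000_000)

# Stage 2: roll out the trained expert to record demonstrations
# (saved to expert_multirotor.npz).
generate_expert_traj(expert, 'expert_multirotor', env=env, n_episodes=10)

# Stage 3: train GAIL on the demonstrations -- effectively a second
# full training run against the real-time simulator.
dataset = ExpertDataset(expert_path='expert_multirotor.npz', traj_limitation=10, verbose=1)
model = GAIL('MlpPolicy', env, dataset, verbose=1)
model.learn(total_timesteps=1_000_000)
```

Both `learn` calls step the simulator, which is where the doubled wall-clock cost comes from.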
I have spent almost the whole of 2020 trying to get GAIL to work on my custom env. The lack of quick results is due not only to the real-time run, but also to the unavailability of a good set of hyperparameters to start from. If anyone has been in a similar situation and can offer guidance, that would be great.
I came across several issues requesting better support for multi-threading, clock speed control, and general speed-ups (to mention a few: #563, #64, #17, #901). With the NeurIPS Drone Racing challenge underway, will we be seeing any improvements to the real-time run bottleneck?
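For reference, the only speed-up I am aware of is the `ClockSpeed` setting in `settings.json`, which scales simulation time relative to wall-clock time, for example:

```json
{
  "SettingsVersion": 1.2,
  "SimMode": "Multirotor",
  "ClockSpeed": 5.0
}
```

My understanding is that higher values trade physics fidelity for speed, so this does not fully remove the bottleneck for training runs like mine.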