This repository contains the official code of the Online Grounding of Action Models in Unknown Situations (OGAMUS) algorithm, presented at the 19th International Conference on Principles of Knowledge Representation and Reasoning (KR 2022, Special Session on KR and Robotics). For details about the method, please see the paper.
The following instructions have been tested on Ubuntu 20.04.
- Clone this repository:
```shell
git clone https://github.com/LamannaLeonardo/OGAMUS.git
```
- Create a Python 3.9 virtual environment, e.g. using conda:
```shell
conda create -n ogamus python=3.9
```
- Activate the environment:
```shell
conda activate ogamus
```
- Install pip in the conda environment:
```shell
conda install pip
```
- Install AI2-THOR:
```shell
pip install ai2thor
```
- Install the following dependencies:
```shell
pip install matplotlib
```
- Download the pretrained neural network models available at this link, and move all the downloaded files into the directory "Utils/pretrained_models".
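Before running anything, it can help to verify that the downloaded files actually landed in the expected directory. The sketch below is a hypothetical helper (`check_pretrained_models` is not part of the repository) that only checks the directory exists and is non-empty:

```python
from pathlib import Path

def check_pretrained_models(models_dir="Utils/pretrained_models"):
    """Return True if the pretrained-models directory exists and is non-empty.

    Hypothetical helper, not part of OGAMUS: it only checks that the
    downloaded model files were moved into the expected directory.
    """
    d = Path(models_dir)
    return d.is_dir() and any(d.iterdir())

if not check_pretrained_models():
    print("Warning: Utils/pretrained_models is missing or empty")
```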
- Check that everything is correctly installed by executing:
```shell
python main.py
```
- This GitHub repository already contains the FastForward planner in the directory "OGAMUS/Plan/PDDL/Planners/FF". If you face any issue, you can compile it from scratch as follows: download FF-v2.3.tgz from the official FastForward site (you can directly download it from this link), move it into the "Planners/FF" directory, and run:
```shell
tar -xf FF-v2.3.tgz   # extract the archive
cd FF-v2.3            # enter the installation directory
make                  # compile FastForward
mv ff ../             # move the "ff" executable to the parent directory
cd ../                # go back to the parent directory
rm -r FF-v2.3         # delete unnecessary files
rm FF-v2.3.tgz
```
The OGAMUS algorithm can be run on the following tasks: on, open, close, and object goal navigation (for further details about the tasks, please see the paper). To run OGAMUS on a specific task, with or without ground truth object detections, there are two options:
a) `-t xxx`, where "xxx" is the task you want to test; available tasks are: on, open, close, ogn, ogn_ithor
b) `-obj` (or `-o`); when you pass this option, the agent uses ground truth object detections

For example, to run OGAMUS on the task "on" with ground truth object detections, execute:
```shell
python main.py -t on -o
```
When you execute OGAMUS, a new directory containing all logs and results is created in the "Results" folder. The logs and results are stored in the folder "Results/test_set_X_stepsY", where X is the task name provided as input and Y is the number of steps (which equals 200 for all tasks except object goal navigation in RoboTHOR). One subdirectory is created for each episode, i.e., a run in a single environment; each episode subdirectory contains the evaluation and log files of that episode.

To generate a summarized evaluation of all episodes in a directory named "DIR", open the script "Utils/ResultsPlotter.py" and set the "DIR" variable (at the beginning of the script) to the path of the results directory you want to evaluate. For example, after running "python main.py -t on -o", set DIR = "Results/test_set_on_steps200" in ResultsPlotter.py and execute the command "python ResultsPlotter.py".
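To get a quick overview of a results directory without running the plotter, something like the following can be used. This is a hypothetical helper (`list_episode_dirs` is not part of the repository), and it makes no assumption about how the episode folders are named, only that each episode is one subdirectory:

```python
from pathlib import Path

def list_episode_dirs(results_dir):
    """Return the episode subdirectories of a results directory, sorted by name.

    Hypothetical helper, not part of OGAMUS: it only relies on the fact that
    one subdirectory is created per episode under e.g.
    "Results/test_set_on_steps200".
    """
    root = Path(results_dir)
    if not root.is_dir():
        return []
    return sorted(p for p in root.iterdir() if p.is_dir())

# Example (assuming the "on" task was run with 200 steps):
episodes = list_episode_dirs("Results/test_set_on_steps200")
print(f"Found {len(episodes)} episode directories")
```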
For the object goal navigation task, if you want to compute the additional SPL metric, go to the end of the file ResultsPlotter.py, comment out "generate_plots()", uncomment "ogn_metrics()", and then run the script as above.
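For reference, SPL (Success weighted by Path Length, Anderson et al., 2018) is the standard object goal navigation metric. The sketch below implements the standard formula and is not the repository's own implementation:

```python
def spl(successes, shortest_paths, actual_paths):
    """Success weighted by Path Length (Anderson et al., 2018).

    successes:      per-episode success indicators (1 if the goal was reached)
    shortest_paths: per-episode shortest path length to the goal
    actual_paths:   per-episode length of the path the agent actually took

    SPL = (1/N) * sum_i  S_i * l_i / max(p_i, l_i)
    """
    n = len(successes)
    total = 0.0
    for s, l, p in zip(successes, shortest_paths, actual_paths):
        total += s * l / max(p, l)
    return total / n

# Example: two episodes, one success along the shortest path, one failure.
print(spl([1, 0], [5.0, 4.0], [5.0, 10.0]))  # → 0.5
```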
If you use this code, please cite the paper:
```bibtex
@inproceedings{lamannaonline,
  title={Online Grounding of Symbolic Planning Domains in Unknown Environments},
  author={Lamanna, Leonardo and Serafini, Luciano and Saetti, Alessandro and Gerevini, Alfonso and Traverso, Paolo},
  booktitle={19th International Conference on Principles of Knowledge Representation and Reasoning},
  year={2022}
}
```
This project is licensed under the MIT License - see the LICENSE file for details.