Chimera is a modular library for developing robotics systems. It provides wrappers for third-party software, as well as original software, for building robotics systems. Click here to learn more about Chimera.
- Assuming you have conda installed, let's prepare a conda env:
  conda create -n chimera python=3.9 cmake=3.14.0
  conda activate chimera
- Follow the instructions at https://pytorch.org/get-started/locally/ to install PyTorch according to your CUDA environment:
  conda install pytorch=2.1.0 torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
  (This command is for cuda=11.8.) We recommend installing pytorch=2.1.0 for compatibility. A short sanity-check snippet is shown after this list.
- Git clone this repository.
- Install modules via install.sh:
  bash install.sh
  If you want to install all modules, run with the all option instead:
  bash install.sh all
  If you want to install a specific module, run with the module class and name as follows:
  bash install.sh Simulator Habitat
  Note that this command does not install chimera.
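To confirm that the environment from the steps above is set up as expected (in particular, the PyTorch/CUDA install from the second step), a quick check like the sketch below can help. This is a generic sanity check using only standard PyTorch calls; it is not part of Chimera itself:

```python
# Generic sanity check for the conda env and PyTorch/CUDA install (not part of Chimera).
# Run it inside the activated `chimera` environment.
import sys

import torch

print("Python  :", sys.version.split()[0])     # expected: 3.9.x
print("PyTorch :", torch.__version__)          # expected: 2.1.0
print("CUDA    :", torch.version.cuda)         # e.g. 11.8, matching your install command
print("GPU ok  :", torch.cuda.is_available())  # should be True if CUDA is set up correctly
```

If the last line prints False, revisit the PyTorch step and make sure the pytorch-cuda version matches your driver.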
For running the examples, let's download the Habitat test scenes using the download script:
bash script/download_habitat_test_scenes.sh
If you want to run examples of Object-goal Navigation tasks, please use the HM3D datasets by following the link:
After getting access to the dataset via the link above, you can download the HM3D datasets using the download script:
bash script/download_habitat_hm3d.sh --username <api-token-id> --password <api-token-secret>
If you want to run examples using the OpenAI API, please get an OpenAI API Key here and set OPENAI_API_KEY in your environment variables:
export OPENAI_API_KEY=<your-openai-api-key>
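Before launching the OpenAI-based examples, it can be worth verifying that the key is actually visible to Python in the current shell. The sketch below only inspects the environment variable; it is not part of Chimera and does not call the OpenAI API:

```python
# Check that OPENAI_API_KEY is exported in the current shell (not part of Chimera).
import os

key = os.environ.get("OPENAI_API_KEY")
if not key:
    raise SystemExit("OPENAI_API_KEY is not set; export it before running the OpenAI examples.")
print(f"OPENAI_API_KEY found (starts with {key[:6]}...).")
```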
Run the demo scripts in example:
python example/demo_pointnav.py
To learn more about the examples, please click here.
If you use Chimera in your research, please use the following BibTeX entry.
@misc{taguchi2024chimera,
title={Chimera},
author={Shun Taguchi and Hideki Deguchi},
howpublished={\url{https://github.com/ToyotaCRDL/chimera}},
year={2024}
}
Our recent research in this repository is as follows:
Shun Taguchi and Hideki Deguchi
CLIPMapper is an online method for embedding multi-scale CLIP features into 3D maps.
By harnessing CLIP, this method surpasses the constraints of conventional vocabulary-limited methods and enables the incorporation of semantic information into the resultant maps.
Hideki Deguchi, Kazuki Shibata and Shun Taguchi
L2M creates a topological map with actions (forward, turn left, turn right) at each node based on natural language path instructions. L2M then generates a path instruction in response to user queries about a destination.
Copyright (C) 2024 TOYOTA CENTRAL R&D LABS., INC. All Rights Reserved.