[Project Page] [Paper]
🔥 DecisionNCE has been accepted at ICML 2024 and selected as an outstanding paper at the MFM-EAI workshop @ ICML 2024
DecisionNCE mirrors an InfoNCE-style objective but is distinctively tailored for decision-making tasks. It provides an embodied representation learning framework that elegantly extracts both local and global task progression features, enforcing temporal consistency through implicit time contrastive learning and grounding trajectory-level instructions via multimodal joint encoding. Evaluation on both simulated and real robots demonstrates that DecisionNCE effectively facilitates diverse downstream policy learning tasks, offering a versatile solution for unified representation and reward learning.
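For intuition, the sketch below illustrates the InfoNCE-style objective at a high level: the implicit reward is the agreement between the visual feature change over a trajectory segment and the instruction embedding, and matched segment-instruction pairs are contrasted against mismatched ones within a batch. This is a minimal, simplified sketch based on the description above, not the training code in this repo; the function name, feature shapes, and temperature are placeholders.

```python
import torch
import torch.nn.functional as F

def decisionnce_style_loss(img_feat_start, img_feat_end, text_feat, temperature=0.07):
    """Simplified InfoNCE-style objective (illustrative only).

    img_feat_start / img_feat_end: (B, D) visual features of the first and last
        frames of each sampled trajectory segment.
    text_feat: (B, D) language features of the paired instructions.
    """
    # Implicit reward: agreement between the visual progression (feature
    # difference across the segment) and the instruction embedding.
    progress = F.normalize(img_feat_end - img_feat_start, dim=-1)
    lang = F.normalize(text_feat, dim=-1)
    logits = progress @ lang.t() / temperature  # (B, B) segment-instruction scores
    labels = torch.arange(logits.size(0), device=logits.device)
    # Contrast matched pairs (diagonal) against mismatched ones, symmetrically.
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
```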
- Clone this repository and navigate to the DecisionNCE folder

```bash
git clone https://github.com/2toinf/DecisionNCE.git
cd DecisionNCE
```

- Install the package

```bash
conda create -n decisionnce python=3.8 -y
conda activate decisionnce
pip install -e .
```
```python
import DecisionNCE
import torch
from PIL import Image

# Load your DecisionNCE model
device = "cuda" if torch.cuda.is_available() else "cpu"
model = DecisionNCE.load("DecisionNCE-P", device=device)

image = Image.open("Your Image Path Here")
text = "Your Instruction Here"
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

reward = model.get_reward(image, text)  # note: the number of images and texts should be the same
```
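Continuing the example above, one way to label every frame of a trajectory with a reward for a single instruction is to pass equal-length lists. This is a hedged sketch: the frame paths are placeholders, and it assumes `get_reward` accepts equal-length lists of PIL images and strings, consistent with the note above.

```python
from PIL import Image
import torch

# Placeholder frame paths and instruction -- replace with your own trajectory.
frames = [Image.open(p) for p in ["frame_000.png", "frame_050.png", "frame_099.png"]]
instruction = "Your Instruction Here"
with torch.no_grad():
    rewards = model.get_reward(frames, [instruction] * len(frames))  # one reward per (frame, instruction) pair
```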
`DecisionNCE.load(name, device)` returns the DecisionNCE model specified by the model name returned by `DecisionNCE.available_models()`. It will download the model as necessary. The `name` argument should be `DecisionNCE-P` or `DecisionNCE-T`. The device to run the model on can be optionally specified; the default is to use the first CUDA device if there is any, otherwise the CPU.

The model returned by `DecisionNCE.load()` supports the following methods:
- `model.encode_image(image)`: Given a batch of images, returns the image features encoded by the vision portion of the DecisionNCE model.
- `model.encode_text(text)`: Given a batch of texts, returns the text features encoded by the language portion of the DecisionNCE model.
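As a rough illustration of how these pieces fit together, a DecisionNCE-style reward can be viewed as the agreement between the change in image features along a trajectory and the instruction features. The snippet below is an assumption-laden sketch using the methods above, not the exact implementation behind `get_reward`; the frames and instruction are placeholders.

```python
import torch
import torch.nn.functional as F
from PIL import Image

# Placeholder frames and instruction -- replace with your own data.
first_frame = Image.open("frame_first.png")
last_frame = Image.open("frame_last.png")
instruction = "Your Instruction Here"

with torch.no_grad():
    feat_first = model.encode_image(first_frame)  # features of the initial frame
    feat_last = model.encode_image(last_frame)    # features of the later frame
    feat_text = model.encode_text(instruction)    # features of the instruction

# Illustrative reward: agreement between visual progression and the instruction.
reward = F.cosine_similarity(feat_last - feat_first, feat_text, dim=-1)
```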
We pretrain the vision and language encoders jointly with DecisionNCE-P/T on the EpicKitchen-100 dataset. We provide the training code and scripts in this repo. Please follow the instructions below to start training.
- Data preparation
Please follow the official instructions and download the EpicKitchen-100 RGB images here. We also provide our training annotations, reorganized according to the official version.
- Start training

We use Slurm for multi-node distributed finetuning.

```bash
sh ./script/slurm_train.sh
```

Please fill in your image and annotation paths at the specified locations in the script.
| Models | Pretraining Methods | Params (M) | Iters | Pretrain ckpt |
|---|---|---|---|---|
| RN50-CLIP | DecisionNCE-P | 386 | 20k | link |
| RN50-CLIP | DecisionNCE-T | 386 | 20k | link |
- Simulation
- Real robot
We provide a Jupyter notebook to visualize the reward curves. Please install Jupyter Notebook first.

```bash
conda install jupyter notebook
```
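If you prefer a quick script over the notebook, a minimal sketch of a reward-curve plot might look like the following. It assumes a model has already been loaded with `DecisionNCE.load()` as in the usage example above and that matplotlib is installed; the trajectory directory and instruction are placeholders.

```python
import glob
import torch
import matplotlib.pyplot as plt
from PIL import Image

# Placeholder trajectory directory and instruction -- replace with your own.
frames = [Image.open(p) for p in sorted(glob.glob("your_trajectory_dir/*.png"))]
instruction = "Your Instruction Here"

with torch.no_grad():
    rewards = model.get_reward(frames, [instruction] * len(frames))

plt.plot([float(r) for r in rewards])
plt.xlabel("Frame index")
plt.ylabel("DecisionNCE reward")
plt.title(instruction)
plt.show()
```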
To be updated.
If you find our code and paper helpful, please cite our paper:
```bibtex
@inproceedings{lidecisionnce,
  title={DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning},
  author={Li, Jianxiong and Zheng, Jinliang and Zheng, Yinan and Mao, Liyuan and Hu, Xiao and Cheng, Sijie and Niu, Haoyi and Liu, Jihao and Liu, Yu and Liu, Jingjing and others},
  booktitle={Forty-first International Conference on Machine Learning},
  year={2024}
}
```