- KAIST
- South Korea
- https://junsu-kim97.github.io/
- @JunsuKim97
Stars
[ICLR 2025] LAPA: Latent Action Pretraining from Videos
Evaluating Safety of Autonomous Agents in Mobile Device Control
A benchmark for offline goal-conditioned RL and offline RL
Visual Representation Learning with Stochastic Frame Prediction (ICML 2024)
Generates a zip archive that is uploadable to arXiv.
Skeleton for scalable and flexible JAX RL implementations
Official implementation of our paper "Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning"
Official Code for the paper "SuRe: Summarizing Retrievals using Answer Candidates for Open-domain QA of LLMs" (ICLR 2024)
Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024)
Foundation Policies with Hilbert Representations (ICML 2024)
[NeurIPS 2023 Spotlight] LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios (awesome MCTS)
Open-source codebase for EfficientZero, from "Mastering Atari Games with Limited Data" at NeurIPS 2021.
Code for "Learning to Model the World with Language." ICML 2024 Oral.
An official implementation of "Diffusion Video Autoencoders: Toward Temporally Consistent Face Video Editing via Disentangled Video Encoding" (CVPR 2023) in PyTorch.
CLIPort: What and Where Pathways for Robotic Manipulation
Train robotic agents to learn pick and place with deep learning for vision-based manipulation in PyBullet. Transporter Nets, CoRL 2020.
Code for the paper "Multi-scale Diffusion Denoised Smoothing" (NeurIPS 2023)
Modality-Agnostic Self-Supervised Learning with Meta-Learned Masked Auto-Encoder (NeurIPS 2023)
Code for the paper "RoAST: Robustifying Language Models via Adversarial Perturbation with Selective Training" (EMNLP 2023)
METRA: Scalable Unsupervised RL with Metric-Aware Abstraction (ICLR 2024)
Guide Your Agent with Adaptive Multimodal Rewards (NeurIPS 2023)
HIQL: Offline Goal-Conditioned RL with Latent States as Actions (NeurIPS 2023)
Code for Jaehyung Kim et al.'s ICML 2023 paper "Prefer to Classify: Improving Text Classifiers via Auxiliary Preference Learning"
Collaborative Score Distillation for Consistent Visual Synthesis (NeurIPS 2023)
S-CLIP: Semi-supervised Vision-Language Pre-training using Few Specialist Captions