KAIST
- Daejeon, South Korea
Stars
Robot kinematics implemented in PyTorch
An Efficient Trajectory Planner for Car-like Robots on Uneven Terrain
[SIGMOD '25] A fast parallel kd-tree implementation
An emoji cheat sheet in Markdown
Trajectory Planner in Multi-Agent and Dynamic Environments
The Replica Dataset v1 as published in https://arxiv.org/abs/1906.05797.
Matplotlib styles for scientific plotting
Cosmos is a world model development platform consisting of world foundation models, tokenizers, and a video processing pipeline to accelerate the development of Physical AI at robotics & AV labs. C…
Publishes full PCD data for Isaac Sim
Writing AI Conference Papers: A Handbook for Beginners
A new codebase for popular Scene Graph Generation methods (2020). Visualization & scene graph extraction on custom images/datasets are provided. It's also a PyTorch implementation of the paper "Unbiase…
Curvature Corrected Moving Average: An accurate and model-free path smoothing algorithm.
A generative world for general-purpose robotics & embodied AI learning.
Gaussian Process-based Traversability Analysis for Terrain Mapless Navigation (ICRA 2024)
[IROS'24 Oral] Simultaneous Exploration and Photographing with Heterogeneous UAVs for Fast Autonomous Reconstruction
Open-source library for single-object tracking in point clouds.
A lightweight differential flatness-based trajectory planner for car-like robots
Voxelmap++: Mergeable Voxel Mapping Method for Online LiDAR(-inertial) Odometry
C++ implementation of a fast hash map and hash set using robin hood hashing
[T-RO 2024] FALCON: Fast Autonomous Aerial Exploration using Coverage Path Guidance.
DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding
Bridging LiDAR and text through image intermediaries
A ROS wrapper for trajectory planning based on motion primitives
Official repository of "SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory"
The repository provides code associated with the paper VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation (ICRA 2024)