Deep Reinforcement Learning for mobile robot navigation in the ROS Gazebo simulator. Using a Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
Deep Reinforcement Learning for mobile robot navigation in the IR-SIM simulator. Using DRL (SAC, TD3, PPO, DDPG) neural networks, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
Deep Reinforcement Learning for mobile robot navigation in ROS2 Gazebo simulator. Using DRL (SAC, TD3) neural networks, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
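The DRL navigation projects above all train a policy against some reward that balances goal progress, collision avoidance, and terminal outcomes. The sketch below is a typical dense reward shaping for this kind of setup; it is an illustrative assumption, not the exact code of any repository listed here, and the function name, weights, and thresholds are hypothetical.

```python
import math

def navigation_reward(robot_xy, goal_xy, min_laser, collided, reached,
                      goal_bonus=100.0, crash_penalty=-100.0):
    """Hypothetical dense reward for goal-directed navigation with obstacle
    avoidance. Terminal outcomes dominate; otherwise the agent is rewarded for
    being near the goal and penalized for skirting close to obstacles."""
    if reached:
        return goal_bonus          # reached the goal region
    if collided:
        return crash_penalty       # hit an obstacle
    dist = math.dist(robot_xy, goal_xy)
    # Proximity penalty kicks in when the closest laser return is under 1 m
    proximity_penalty = max(0.0, 1.0 - min_laser)
    return -dist / 10.0 - proximity_penalty
```

A shaped reward like this gives the agent a gradient to follow long before it ever reaches the goal, which is usually what makes sparse goal-reaching tasks trainable with SAC/TD3-style algorithms.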
ENPM 661 Project 3 Phase 2: A rigid robot traverses a configuration space to find the goal node using the A* search algorithm while avoiding the obstacles in the map.
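For reference, A* search on a discretized configuration space can be sketched as below. This is a minimal 4-connected grid version with a Manhattan-distance heuristic, not the project's actual implementation (which may use a finer action set and robot-radius inflation).

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 4-connected occupancy grid (1 = obstacle, 0 = free)."""
    rows, cols = len(grid), len(grid[0])
    # Manhattan distance: admissible heuristic for 4-connected unit-cost moves
    h = lambda r, c: abs(r - goal[0]) + abs(c - goal[1])
    open_set = [(h(*start), start)]          # priority queue ordered by f = g + h
    g_cost, parent, closed = {start: 0}, {start: None}, set()
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:                     # backtrack through parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        if node in closed:
            continue
        closed.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[node] + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    parent[(nr, nc)] = node
                    heapq.heappush(open_set, (ng + h(nr, nc), (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))  # routes around the obstacle row
```

With an admissible heuristic, A* returns a cost-optimal path while expanding far fewer nodes than uninformed search such as Dijkstra on the same grid.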
This project simulates a robot navigating through an environment using Pygame. The robot can move, avoid obstacles, and update its position and heading based on sensor inputs.
Autonomous wheeled robot implementing RRT path planning, real-time obstacle avoidance with IR sensors, PI-controlled motion, and IMU-based error correction. Features PyQt5 GUI and a separate TensorFlow object detection module. Complete with demos for obstacle detection and angle correction.
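The core RRT loop, sampling a point, extending the nearest tree node toward it by a fixed step, and rejecting colliding states, can be sketched as follows. This is a simplified 2D version with circular obstacles and endpoint-only collision checks (a real planner would also check the connecting segment); all parameters and names are illustrative, not the repository's code.

```python
import math
import random

def rrt(start, goal, obstacles, bounds, step=0.5, goal_tol=0.5,
        max_iters=5000, seed=0):
    """Rapidly-exploring Random Tree in a 2D workspace.
    obstacles: list of (cx, cy, radius) circles; bounds: ((xmin, xmax), (ymin, ymax))."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}

    def collides(p):
        return any(math.dist(p, (cx, cy)) <= r for cx, cy, r in obstacles)

    for _ in range(max_iters):
        # Goal bias: sample the goal itself 10% of the time
        sample = goal if rng.random() < 0.1 else (
            rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        # Extend from the nearest tree node toward the sample by one step
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = sample if d <= step else (
            near[0] + step * (sample[0] - near[0]) / d,
            near[1] + step * (sample[1] - near[1]) / d)
        if collides(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) <= goal_tol:
            path, k = [], len(nodes) - 1   # backtrack to the root
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (9.0, 9.0),
           obstacles=[(5.0, 5.0, 1.5)], bounds=((0, 10), (0, 10)))
```

The resulting path is feasible but typically jagged; planners like the one described often post-process it (shortcutting or smoothing) before handing waypoints to the PI motion controller.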
A ROS2-based obstacle avoidance system using Bash scripting and ROS2 parameters, developed for The Construct’s Linux for Robotics course. The robot uses laser scan data to navigate around obstacles with simple logic.
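The "simple logic" for laser-scan-based avoidance usually amounts to: drive straight while the forward sector is clear, otherwise stop and turn toward the side with more room. A minimal sketch of that decision, separated from any ROS2 node plumbing, is below; the function name, sector widths, and velocity values are assumptions for illustration.

```python
def avoidance_command(ranges, front_width=30, min_clearance=0.8):
    """Pick a (linear, angular) velocity from a 360-beam laser scan.
    Beam 0 faces forward; indices wrap around, one beam per degree."""
    n = len(ranges)
    # Beams within +/- front_width degrees of straight ahead
    front = [ranges[i % n] for i in range(-front_width, front_width + 1)]
    if min(front) < min_clearance:
        # Obstacle ahead: stop and turn toward the clearer side
        left = min(ranges[front_width:front_width + 60])
        right = min(ranges[n - front_width - 60:n - front_width])
        return (0.0, 0.5) if left > right else (0.0, -0.5)
    return (0.3, 0.0)  # path clear: drive forward
```

In a ROS2 node this would run inside the `/scan` subscription callback, with the returned pair published as a `geometry_msgs/Twist` on `/cmd_vel`; thresholds like `min_clearance` are natural candidates for the ROS2 parameters the course exercise uses.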
The line-following and obstacle avoidance robot is an intelligent autonomous vehicle capable of moving along a predefined path while detecting and avoiding obstacles in its way.