- Build systems: updated to C++20 standard, `CMake` 3.29.2, `Clang` 17, `GCC` 13, `Python` 3.12
- The `autonomysim` Python package has undergone a complete overhaul! `AutonomyLib` is next.
- Windows: we now provide separate Batch/Command and PowerShell build systems. Both are tested in CI/CD.
- Documentation: a new system has been rolled out that also generates Python and C++ API docs.
- Support for `Unity Engine`, `Gazebo`, and `ROS1` has been deprecated to focus on `Unreal Engine`, `ROS2`, `ArduPilot/PX4`, `qGroundControl`, `PyTorch`, and real-time applications of `AutonomyLib` via software- and hardware-in-the-loop.
- The `master` branch supports `Unreal Engine` version 5.03 and above. For version 4.27, you can use the `ue4.27` branch.
- `Unreal Engine` version 5.4 brought new features including animation and sequencing.
- `Unreal Engine` version 5.2 brought native support for Apple/ARM M-series silicon.
- `Unreal Engine` version 5.0 brought powerful new features including Nanite and Lumen, while deprecating support for the PhysX backend.
- The `Omniverse Unreal Engine Connector` enables you to sync `Unreal Engine` data with an `Omniverse Nucleus` server, which can then sync with any `Omniverse Connect` application including `IsaacSim`.
For a complete list of changes, view the change log.
"A central challenge in the branch of artificial intelligence (AI) known as machine learning (ML) is the massive amount of high-fidelity labeled data needed to train models. Datasets for real-world systems are either hand-crafted or automatically labeled using other models, introducing biases and errors into data and downstream models, and limiting learning to the offline case. While game engines have long used hardware-accelerated physics engines of Newtonian dynamics, accelerators for physics-based rendering (PBR) have recently made real-time ray-tracing a reality, extending physical realism to the visual domain. In parallel, physical fidelity with the real world has skyrocketed with the rapid growth and falling cost of Earth observation data. For the first time in history, the average user can generate high-fidelity robotic system models and real-world labeled datasets with known physics for offline or online learning of intelligent agents. This will revolutionize AI for robotics, where the data and safety requirements are otherwise intractable, while enabling low-cost hardware prototyping in silico." -Dr. Adam Erickson, 2024
`AutonomySim` is a high-fidelity, photorealistic simulator for multi-agent and -domain autonomous systems, intelligent robotic systems, or embodiment as it is known in the AI research community. `AutonomySim` is based on `Unreal Engine` and Microsoft's former `AirSim`. `AutonomySim` is an open-source, cross-platform, modular simulator for robotic intelligence that supports software-in-the-loop (SITL) and hardware-in-the-loop (HITL) operational modes for popular robotics controllers (e.g., `Pixhawk/PX4`, `APM/ArduPilot`). Future support is planned for SITL and HITL ground control software (GCS) such as `qGroundControl`. `AutonomySim` is developed as an `Unreal Engine` plugin that can be dropped into any Unreal environment or downloaded from the Unreal Engine Marketplace. The aim of `AutonomySim` is to provide physically realistic multi-modal simulations of robotic systems with first-class support for popular AI and control systems libraries, in order to develop new perception, actuation, communication, navigation, and coordination AI models for diverse real-world environments.
We hope that you find `AutonomySim` enjoyable to use and develop. Unlike other projects, we intend to make public any and all improvements to the software framework. We merely ask that you share your improvements in return, although you are not obligated to do so in any way. Together, we will build a foundation for robotic general intelligence (RGI) by providing the best simulation system for AI in robotics.
Robotics companies interested in having Nervosys model their hardware/software and develop related AI models in `AutonomySim` can reach us directly at opensource@nervosys.ai. We are delighted to offer our services, so that we may continue to support and improve this essential open-source robotics project for the benefit of the community.
- Windows 10
- Windows 11
- Windows Server 2019 (untested)
- Windows Server 2022 (untested)
- Ubuntu 20.04 LTS (Focal Fossa)
- Ubuntu 22.04 LTS (Jammy Jellyfish)
- Ubuntu Server 22.04 LTS (untested)
- Ubuntu Core 22 (untested)
- Botnix 1.0 (Torbjörn) (coming soon!)
Note: `Unreal Engine` versions 5.2 and up natively support Apple/ARM M-series silicon.
- macOS 11 (Big Sur)
- macOS 12 (Monterey)
- macOS 13 (Ventura)
- macOS 14 (Sonoma)
Coming soon. In the meantime, please see our GitHub Workflows for how to build the project.
Below are explanations and examples to help you get started.
- Project structure
- Development workflow
- Settings
- API examples
- Image APIs
- C++ API usage
- Camera views
- Sensors
- Voxel grids
- Robot controllers
- Radio controllers
- Wired controllers
- Adding new APIs
- Simple flight controller
- ROS
Project documentation and autogenerated API documentation:
Overview of the `AutonomySim` architecture:

Figure 1. Overview of the simulation architecture from Shah et al. (2017).
Based on `AirSim`, the predecessor to `AutonomySim`.
- Setting up AirSim with Pixhawk Tutorial by Chris Lovett
- Using AirSim with Pixhawk Tutorial by Chris Lovett
- Using off-the-shelf environments with AirSim by Jim Piavis
- Harnessing high-fidelity simulation for autonomous systems by Sai Vemprala
- Reinforcement Learning with AirSim by Ashish Kapoor
- The Autonomous Driving Cookbook by Microsoft Deep Learning and Robotics Garage Chapter
- Using TensorFlow for simple collision avoidance by Simon Levy and WLU team
Mirroring real-world robotic systems, `AutonomySim` will support three different operational modes:
- Human operation
- Machine operation
- Hybrid human-machine operation
If you have a wired or remote controller, you can manually control vehicles in the simulator as shown below. For ground vehicles, you can use the arrow keys for control inputs (i.e., steering, accelerating, decelerating). See more details here.
`AutonomySim` exposes Application Programming Interfaces (APIs) for programmatic interaction with the simulated vehicles and environment. These APIs can be used to control vehicles and the environment (e.g., weather), generate imagery, audio, or video, record control inputs along with vehicle and environment state, et cetera. The APIs are exposed through a remote procedure call (RPC) interface and are accessible through a variety of languages, including C++, Python, and Rust.
The APIs are also available as part of a separate, independent, cross-platform library, so that they can be deployed on embedded systems running on your vehicle. That way, you can write and test your code in simulation, where mistakes are relatively cheap, before deploying it to real-world systems. Moreover, a core focus of `AutonomySim` is the development of simulation-to-real (sim2real) domain adaptation AI models, a form of transfer learning. These metamodels map from models of simulations to models of real-world systems, leveraging the universal function approximation abilities of artificial neural networks (ANNs) to implicitly represent real-world processes not explicitly represented in simulation.
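As a minimal sketch of programmatic control, the snippet below connects to the simulator over RPC, takes off, and flies to a waypoint. It assumes the `autonomysim` Python package exposes an AirSim-style multirotor client (`MultirotorClient`, `enableApiControl`, `takeoffAsync`, `moveToPositionAsync`); the exact names may differ in the current release.

```python
# Sketch of programmatic vehicle control over the RPC interface.
# Assumes an AirSim-style client API in the `autonomysim` package;
# adjust names to match the installed release.
import autonomysim as asim

client = asim.MultirotorClient()   # connect to the simulator's RPC server
client.confirmConnection()
client.enableApiControl(True)      # switch from manual to API control
client.armDisarm(True)

client.takeoffAsync().join()                     # take off and wait
client.moveToPositionAsync(10, 0, -5, 3).join()  # fly to (x, y, z) at 3 m/s

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```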
Note: The Sim Mode setting or the new Computer Vision mode can be used to specify the default vehicle, so you don't get prompted each time you start `AutonomySim`. See this for more details.
Using a form of hardware-in-the-loop (HITL), `AutonomySim` is capable of operating in hybrid human-machine mode. The classical example is a semi-autonomous aircraft stabilization program, which maps human control inputs (or lack thereof) into optimal control outputs to provide level flight.
There are two general approaches to generating labeled data with `AutonomySim`:

- Manual: using the `record` button
- Programmatic: using the APIs
The first method, using the `record` button, is the easiest. Simply press the big red button in the lower right corner to begin recording. This will record the vehicle pose/state and image for each frame. The data logging code is simple and easy to customize to your application.
Human/manual data recording mode.
The second method, using the APIs, is a more precise and repeatable method for generating labeled data. The APIs allow you to be in full control of the how, what, where, and when of data logging.
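As a sketch of the programmatic approach, the snippet below captures an RGB frame and the corresponding vehicle pose on each iteration. It assumes AirSim-style image and state APIs (`simGetImages`, `simGetVehiclePose`, `ImageRequest`) in the `autonomysim` package; check the package for the exact names.

```python
# Sketch of API-driven data logging: grab an RGB frame plus vehicle pose per step.
# Assumes AirSim-style APIs in the `autonomysim` package; names may differ.
import time
import autonomysim as asim

client = asim.MultirotorClient()
client.confirmConnection()

for step in range(100):
    responses = client.simGetImages([
        asim.ImageRequest("front_center", asim.ImageType.Scene),  # RGB camera
    ])
    pose = client.simGetVehiclePose()
    with open(f"frame_{step:05d}.png", "wb") as f:
        f.write(responses[0].image_data_uint8)       # compressed PNG bytes
    print(step, pose.position, pose.orientation)     # or append to a CSV log
    time.sleep(0.1)
```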
It is possible to use `AutonomySim` with vehicles and physics disabled. This is known as `Computer Vision Mode` and it supports both human and machine control. In this mode, you can use the keyboard or APIs to position cameras in arbitrary poses and collect imagery including depth, disparity, surface normals, or object segmentation masks. As the name implies, this is useful for generating labeled data for learning computer vision models. See this for more details.
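For machine control in this mode, a minimal sketch could place the camera rig at an arbitrary pose and request a depth image, as below. It assumes AirSim-style `VehicleClient`, `simSetVehiclePose`, and `simGetImages` calls; these names are assumptions about the current `autonomysim` API.

```python
# Sketch of Computer Vision Mode usage: place the camera rig at an arbitrary
# pose and capture a depth image. Assumes AirSim-style APIs; names may differ.
import autonomysim as asim

client = asim.VehicleClient()      # no vehicle dynamics in this mode
client.confirmConnection()

pose = asim.Pose(
    asim.Vector3r(0.0, 0.0, -10.0),        # x, y, z in NED meters
    asim.to_quaternion(0.0, 0.0, 1.57),    # pitch, roll, yaw in radians
)
client.simSetVehiclePose(pose, ignore_collision=True)

responses = client.simGetImages([
    asim.ImageRequest("front_center", asim.ImageType.DepthPerspective,
                      pixels_as_float=True),
])
print("depth image size:", responses[0].width, "x", responses[0].height)
```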
We plan on supporting the following sensors and data modalities:
- RGB imagery
- Depth
- Disparity
- Surface normals
- Object panoptic, semantic, and instance segmentation masks
- Object bounding boxes (coming soon)
- Audio (coming soon)
- Video (coming soon)
- Short- or long-wavelength infrared imagery (see)
- Multi- and hyper-spectral (TBD)
- LiDAR (see; GPU acceleration coming soon)
- RaDAR (TBD)
- SoNAR (TBD)
We also plan on providing autolabeling systems in the future.
- Automobile
- BoxCar (coming soon)
- ClearPath Husky (coming soon)
- Pioneer P3DX (coming soon)
- Multirotor aircraft: Quadcopter
- Rotor-wing aircraft (TBD)
- Fixed-wing aircraft (TBD)
- Hybrid aircraft (TBD)
The weather system supports human and machine control. Press the `F10` key to see the available weather effect options. You can also control the weather using the APIs, as shown here.
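For illustration, weather control via the API might look like the sketch below, assuming AirSim-style `simEnableWeather` and `simSetWeatherParameter` calls and a `WeatherParameter` enum; these names are assumptions about the `autonomysim` package.

```python
# Sketch of weather control through the API: enable the weather system and
# dial in rain and fog intensities (0.0 to 1.0). Assumes AirSim-style calls.
import autonomysim as asim

client = asim.MultirotorClient()
client.confirmConnection()

client.simEnableWeather(True)
client.simSetWeatherParameter(asim.WeatherParameter.Rain, 0.75)
client.simSetWeatherParameter(asim.WeatherParameter.Fog, 0.25)
```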
Press the `F1` key to see other available options.
Unreal Engine includes built-in support.
- Learning Perception, Communication, Planning, and Control Models
- Imitation or Apprenticeship Learning
- An example of recording control inputs and vehicle state for learning control systems.
- Neural Radiance Fields
- Learning compressed 3-D radiative transfer models.
- Large Language Models
- An example of using a large language model (LLM) to parse text commands into planning and control inputs for robotic systems. See Eureka.
- Robotics Foundation Models
- Learning Surrogate Models or Emulators
- Learning World Models
- Sensor System Development
- Locomotion System Development
- An example of learning structure, actuator, and locomotion models. This is useful, for example, for developing robotic systems that are robust to major structural failures, such as the loss of motors or legs.
- Communication System Development
- Data Randomization via Procedural Modeling
- A class of data augmentation to generate large amounts of diverse training data.
For updates or answers to your questions, join our GitHub Discussion group here or our Discord channel here.
For information on becoming a contributor, see the below section.
Community contributions are strongly encouraged via GitHub Issues and Pull Requests. If you are looking for areas to contribute, please take a look at the open issues. For more information about contributing to the project, please visit the contributing page.
Our GitHub Insights page provides a sense of the project activity.
The `AutonomySim` repository consists of multiple projects within a project, the core of which is `AutonomyLib`. Additional projects include `DroneServer`, `DroneShell`, `HelloCar`, `HelloDrone`, `MavLinkCom`, `Examples`, and `LogViewer`.
It provides wrappers for `Unreal Engine`, `Python`, and `ROS2`, as well as build scripts for `Docker` and `Azure`.
The build system uses `Visual Studio 2022` for Windows and `CMake` for cross-platform support. Pre-build scripts are run beforehand to prepare the target project for compilation.
For more information, see the following pages:
Below is a comparison with `AirSim` and its other forks.
| Project | Origin | Year | New Features | Updated | Framework | Server | SaaS | Organization |
|---|---|---|---|---|---|---|---|---|
| AirSim | original | 2017 | - | 2022 | open-source | closed-source | Project AirSim | Microsoft |
| Cosys-AirSim | fork | 2020 | Sensors, Matlab | 2024 | open-source | - | - | Cosys Lab |
| Colosseum | fork | 2022 | Unreal Engine 5 | 2023 | open-source | closed-source | SWARM | Codex Labs |
| AirGen | fork | 2023 | - | - | closed-source | closed-source | GRID | Scaled Foundations |
| AirSim-Client | original | 2022 | Rust | 2023 | open-source | - | - | Kristoffer Solberg Rakstad |
| AutonomySim | fork | 2023 | Major refactoring | 2024 | open-source | open-source | - | Nervosys |
Compared to other simulation engines for robotic systems, `AutonomySim` is open-source and built on top of a state-of-the-art game engine with the best available features and performance. It also has batteries-included support for popular machine learning workflows.
`AutonomySim` has been designed from the ground up for robotic general intelligence (RGI) or general robotic intelligence (GRI) based on multi-modal, high-dimensional sensing combined with state-of-the-art AI modeling techniques, terms and concepts that Nervosys rightfully invented.
A subset of the organizations, people, and projects that have used `AutonomySim` or its predecessor, `AirSim`, are listed here.
If you would like to be featured on this list, please submit a request here.
- Nervosys: "Accelerating the development of robotic general intelligence"
`AutonomySim` is made possible by Nervosys, NVIDIA, Epic Games, Microsoft, the Linux Foundation, and countless contributors. We need your support to ensure the success of `AutonomySim`.
Reach out to us at opensource@nervosys.ai to learn how you can support this project.
- Focus on Unreal Engine, deprecate support for ROS1, Unity, Gazebo
- Project reorganization and modernization
- Add support for the latest `Unreal Engine` version 5.4
- Update Python library
- Update C++ library
- Add API, RPC support for Rust, deprecate support for Java and C#
- Update automated tests
- Add the JSBSim flight dynamics model (FDM) plugin for Unreal Engine per Project Antoinette
- Add libraries and tools for artificial intelligence (AI)
- CUDA Toolkit, CuDNN, TensorRT, JetPack
- Mojo, PyTorch, JAX-Flax, OpenCV
- LLMs: LLaMA 3, Mistral/Mixtral, OpenHermes, SD, LLaVA
- Robotics foundation models (multimodal)
- Interpretability, explainability, and hard bounds or guardrails
- Testing, safety, cybersecurity tools
- Add headless server mode for control via external program, container, virtual machine, or local network
- Add support for SITL and HITL of companion computers (NVIDIA JetPack)
- Create generic interface for control software
- Add flight control software (FCS): BetaFlight, OpenPilot, LibrePilot, dRehmFlight, Flightmare/flightlib
- Add MavLink-based ground control software (GCS): qGroundControl, Mission Planner, Auterion Mission Control
- Add self-driving car (i.e., rover) software: openpilot, Autoware, CARLA, Vista, Aslan, OpenPodcar/ROS
- Add large labeled robotics datasets
For technical aspects on the design of `AutonomySim`, refer to the original `AirSim` manuscript:
@techreport{shah2017,
author = {Shital Shah and Debadeepta Dey and Chris Lovett and Ashish Kapoor},
year = 2017,
title = {{Aerial Informatics and Robotics Platform}},
number = {MSR-TR-2017-9},
institution = {Microsoft Research},
url = {https://www.microsoft.com/en-us/research/project/aerial-informatics-robotics-platform/},
eprint = {https://www.microsoft.com/en-us/research/wp-content/uploads/2017/02/aerial-informatics-robotics.pdf},
note = {AirSim draft manuscript}
}
A list of manuscripts related to the design and implementation of `AutonomySim` and its predecessors can be found here. Please open a GitHub Issue to add your manuscript.
A manuscript on the design and implementation of `AutonomySim` is forthcoming.
For other questions, see the FAQ and feel free to post issues in the repository here.
The AutonomySim Code of Conduct is based on the Contributor Covenant version 2.1, itself inspired by the Mozilla standards. The original unmodified covenant can be found here. The changes made better reflect the core value of our organization in preserving freedom.
For answers to common questions about this code of conduct, see the FAQ. Translations are available here.
Contact us through GitHub Discussions with any additional questions or comments, so that we may maintain transparency in adopting community guidelines.
This project is released under the Apache 2.0 License, a permissive license often preferred for commercial use.
Any and all sublicenses can be found here.
"Accelerating the development of robotic general intelligence"
TM 2024 © Nervosys, LLC