Merged CameraCalibrationDev branch
thomasfermi committed Aug 24, 2021
2 parents ddd8099 + 89016fb commit ea74ce1
Showing 67 changed files with 5,971 additions and 2,935 deletions.
3 changes: 3 additions & 0 deletions CONTRIBUTORS.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,3 @@
# Contributors
# Contributors
The initial version of this book, which contained the chapters on lane detection and control, was written by [Mario Theers](https://github.com/thomasfermi).
The chapter on camera calibration was written by [Mankaran Singh](https://github.com/MankaranSingh), with minor contributions from [Mario Theers](https://github.com/thomasfermi).
10 changes: 6 additions & 4 deletions README.md
@@ -3,11 +3,13 @@ Algorithms for Automated Driving

![](book/Introduction/carla_vehicle_lanes.jpg "")

Each chapter of this (mini-)book guides you in programming one important software component for automated driving. Currently, this book contains two chapters: **Lane Detection**, and **Control**. You will implement software that
* detects lane boundaries from an image using deep learning
Each chapter of this (mini-)book guides you in programming one important software component for automated driving.
Currently, this book contains three chapters: **Lane Detection**, **Control** and **Camera Calibration**. You will implement software that
* detects lane boundaries from a camera image using deep learning
* controls steering wheel and throttle to keep the vehicle within the detected lane at the desired speed
* determines how the camera is positioned and oriented with respect to the vehicle (a prerequisite to properly join the lane detection and the control module)

The software you will write is in python, and you will apply it in the [open-source driving simulator Carla](https://carla.org/). Ideally, your computer is powerful enough to run Carla, but if it is not, you can still work through the exercises using a simplistic simulator I created for this course. I recommend to work through the chapters in order, but each chapter is self-contained and can be studied on its own.
The software you will write is in Python, and you will apply it in the [open-source driving simulator CARLA](https://carla.org/). Ideally, your computer is powerful enough to run CARLA, but if it is not, you can still work through the exercises. For the exercise on control there is a simplistic simulator that comes with this course. We recommend working through the chapters in order, but if you prefer, you can read the **Control** chapter before the **Lane Detection** chapter.

To work through this book, you
* should understand the following math and physics concepts: derivative, integral, trigonometry, sine/cosine of an angle, matrix, vector, coordinate system, velocity, acceleration, angular velocity, cross product, rotation matrix
@@ -23,4 +25,4 @@ Please follow this [link](https://thomasfermi.github.io/Algorithms-for-Automated
As of 2021, we have a Discord server 🥳. Please follow this [link](https://discord.gg/57YEzkCFHN) to join the community!

## Help wanted!
Are you interested in contributing to the book by adding a new chapter? Or do you have other ideas for improvements? Please let me know by joining the discussion [on github](https://github.com/thomasfermi/Algorithms-for-Automated-Driving/discussions/4)!
Are you interested in contributing to the book by adding a new chapter? Or do you have other ideas for improvements? Please let us know by joining the discussion [on github](https://github.com/thomasfermi/Algorithms-for-Automated-Driving/discussions/4)!
27 changes: 20 additions & 7 deletions book/Appendix/ExerciseSetup.md
@@ -23,24 +23,37 @@ Open [Google Drive](https://drive.google.com/drive/my-drive). In the top left na
## Python environment


````{tabbed} Local installation
`````{tabbed} Local installation
If you do not have anaconda, please [download and install it](https://www.anaconda.com/products/individual).
Please create a conda environment called `aad` (Algorithms for Automated Driving) for this course using the environment.yml file within "Algorithms-for-Automated-Driving/code"
```bash
````bash
cd Algorithms-for-Automated-Driving/code
conda env create -f environment.yml
````
````{admonition} Tip: Use mamba!
:class: tip, dropdown
You may find that creating a conda environment takes a lot of time. I recommend installing mamba:
```bash
conda install mamba -n base -c conda-forge
```
Installing mamba takes some time, but afterwards setting up environments like the one for this book is much faster. Just write `mamba` instead of `conda`:
```bash
mamba env create -f environment.yml
```
````
Be sure to activate that environment before working with it:
```bash
conda activate aad
```
If you are working on Windows, consider [adding anaconda to your PowerShell](https://www.scivision.dev/conda-powershell-python/).
````
`````


````{tabbed} Google Colab
`````{tabbed} Google Colab
When you run code in Google Colab, you will have most of the libraries you need already installed. Just import whatever you need. If it is missing, you will get an error message that explains how to install it.
````
`````


## Navigating the exercises
@@ -56,7 +69,7 @@ cd Algorithms-for-Automated-Driving
jupyter lab
```
In the book's exercise sections, I typically tell you to start working on the exercise by opening some jupyter notebook (.ipynb file).
When you open the .ipynb file be sure to select the "aad" conda environment as your python kernel.
When you open the .ipynb file with VS Code, be sure to select the "aad" conda environment as your Python kernel.
Once you have opened the notebook, read through it cell by cell. Execute each cell by pressing Ctrl+Enter. Typically the first section of the notebook is for setting up Google Colab; it won't do anything on your machine. You can also delete these Colab-specific cells if you want.
````

@@ -66,4 +79,4 @@ Open [Google Drive](https://drive.google.com/drive/my-drive) and navigate to the
````

## Getting help
If you have a question, feel free to ask it by [raising an issue on github](https://github.com/thomasfermi/Algorithms-for-Automated-Driving/issues). Please add the "question" label, when creating the issue.
If you have a question about the exercises, feel free to ask it on [github discussions](https://github.com/thomasfermi/Algorithms-for-Automated-Driving/discussions) or on the [discord server](https://discord.gg/57YEzkCFHN).
1 change: 0 additions & 1 deletion book/Appendix/NextChapters.md
@@ -3,7 +3,6 @@
I would like to add the following chapters to this book in the future

* **Model Predictive Control** The pure pursuit controller does not work well when driving curves at high speeds. In this case the assumption of zero slip for the kinematic bicycle model does not apply. In this chapter we will design a model predictive controller based on the dynamic bicycle model, which accounts for nonzero side slip angles.
* **Camera Calibration** How do we estimate the camera height above the road, as well as the camera roll, pitch, and yaw angle? In the chapter on Lane Detection, we got these parameters directly from the simulation. Of course, we cannot do this in the real world. In this chapter we will implement a camera calibration module to estimate the camera extrinsics.
* **HD Map Localization** CARLA has a very nice API to access a high definition (HD) map of the road. How can we use our detected lane boundaries, a GPS sensor, a yaw rate sensor, and a speedometer to estimate our position on the HD map? This is relevant for navigation, and can also be used for improved vehicle control.

If you have some additional wishes for future chapters, please raise an issue on the [book's github repo](https://github.com/thomasfermi/Algorithms-for-Automated-Driving). If you want to motivate me to continue working on this book, please star the [book's github repo](https://github.com/thomasfermi/Algorithms-for-Automated-Driving) 😉.
33 changes: 33 additions & 0 deletions book/CameraCalibration/Discussion.md
@@ -0,0 +1,33 @@
<!-- #region -->
# Discussion

## Limitations

The method we presented simply assumed that the roll angle is zero, and we did not estimate the height $h$ of the camera. In the real world you could measure the height with a tape measure and would probably only make an error of around 5 percent. Assuming zero roll does not seem to cause practical problems: the same assumption is made in the [source code](https://github.com/commaai/openpilot/blob/d74def61f88937302f7423eea67895d5f4c596b5/selfdrive/locationd/calibrationd.py#L5) of openpilot, which is known to perform really well. As a bonus exercise, you can run experiments with `code/tests/camera_calibration/carla_sim.py` in which you change the roll of the camera or slightly modify the height, and investigate how this affects the control of the vehicle. Regarding the estimation of height and roll, we also recommend having a look at [this paper](https://arxiv.org/abs/2008.03722).
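To get a feeling for why the height matters, note that back-projecting a road pixel onto a flat ground plane scales linearly with the assumed camera height, so a 5 percent height error directly becomes a 5 percent distance error. Here is a minimal sketch of this; the function name and the zero pitch/roll, flat-road assumptions are ours, not part of the chapter:

```python
import numpy as np

def ground_point_from_pixel(u, v, K, h):
    """Back-project pixel (u, v) onto a flat road plane, for a camera at
    height h above the road with zero pitch and roll (assumed for brevity).
    Camera axis convention assumed: x right, y down, z forward."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Intersect the viewing ray with the ground plane y = h below the camera.
    scale = h / ray[1]
    return scale * ray  # reconstructed 3D point in camera coordinates
```

For a camera with focal length 1000 px, a pixel 200 px below the principal point lands 7.5 m ahead when `h = 1.5` m, but 7.875 m ahead when `h` is overestimated by 5 percent; the whole reconstructed point simply scales with `h`.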

Another limitation: the method we discussed in this chapter only works if your autonomous-vehicle software stack detects lanes in image space and if it is used in areas with good lane markings. But what if your software does not predict lanes in image space? Maybe it predicts lanes in world space, as [openpilot](https://github.com/commaai/openpilot) does, or maybe it does not predict lanes at all and instead makes predictions [end-to-end](https://developer.nvidia.com/blog/deep-learning-self-driving-cars/). With either approach, this method of camera calibration is not going to work. As an alternative, we can use visual-odometry (VO) based camera calibration.



## Alternative: VO-based camera calibration

In this approach, visual odometry is performed to find the motion of the camera. It also needs to be carried out while the car is aligned with the lane, i.e. while it is driving straight and fast. Since the output of VO is the motion of the camera, this information can be used to find the orientation of the camera with respect to the car: see how openpilot does this in its [calibrationd](https://github.com/commaai/openpilot/blob/master/selfdrive/locationd/calibrationd.py#L148) module. While the vehicle is driving straight, its forward axis is more or less identical to the direction of the translation vector. Knowing the vehicle's forward axis in the camera reference frame lets you estimate how the optical axis (the z-axis) of the camera is tilted with respect to the vehicle's forward direction, and hence you obtain the extrinsic rotation matrix. However, you still need to assume that the roll is zero in this approach. The fastest way to get started with visual odometry is [PySlam](https://github.com/luigifreda/pyslam). It offers many methods for visual odometry, including novel approaches based on deep learning.
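The geometry just described can be sketched in a few lines: given the translation direction from VO while the car drives straight, the pitch and yaw of the camera relative to the vehicle follow from simple trigonometry. This is an illustrative sketch, not openpilot's actual code; the function name and the camera axis convention (x right, y down, z forward) are our assumptions:

```python
import numpy as np

def angles_from_vo_translation(t):
    """Estimate camera pitch and yaw relative to the vehicle from a
    visual-odometry translation vector t, assuming the vehicle is driving
    straight so that t points along the vehicle's forward axis.
    Camera convention assumed: x right, y down, z forward (optical axis)."""
    t = np.asarray(t, dtype=float)
    t = t / np.linalg.norm(t)                        # direction only
    yaw = np.arctan2(t[0], t[2])                     # sideways tilt of optical axis
    pitch = np.arctan2(-t[1], np.hypot(t[0], t[2]))  # up/down tilt of optical axis
    return pitch, yaw  # roll is unobservable from a single direction; assume 0
```

If the camera looks exactly along the driving direction, `t = (0, 0, 1)` and both angles come out as zero; any misalignment of the optical axis shows up directly as nonzero pitch or yaw.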

## Further reading

- A great paper to get started with more advanced methods is Ref. {cite}`lee2020online`: [Online Extrinsic Camera Calibration for Temporally Consistent IPM Using Lane Boundary Observations with a Lane Width Prior](https://arxiv.org/abs/2008.03722). This paper also discusses estimation of roll and height.
- Visual odometry: [PySlam](https://github.com/luigifreda/pyslam) and the resources mentioned in the repo.
- VO [blog post](http://avisingh599.github.io/vision/monocular-vo/) by Avi Singh.
- [Minimal Python implementation](https://github.com/yoshimasa1700/mono_vo_python/) of VO.
- [VO Lectures with exercises](https://web.archive.org/web/20200709104300/http://rpg.ifi.uzh.ch/teaching.html) by David Scaramuzza


## References
The formalism of how to compute the camera orientation from the vanishing point was adapted from Ref. {cite}`ShiCourseraCalibration`. The idea to use lane boundaries to determine the vanishing point can be found in the paper by Lee et al. {cite}`lee2020online` and within the references of that paper.
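As a rough illustration of that formalism: the vanishing point of the lane boundaries back-projects, via the inverse intrinsic matrix, to the vehicle's forward direction expressed in camera coordinates, from which pitch and yaw follow under the zero-roll assumption. The function name and sign conventions below are our own, not taken from the cited references:

```python
import numpy as np

def orientation_from_vanishing_point(u, v, K):
    """Sketch of the vanishing-point formalism: the pixel (u, v) where the
    lane boundaries meet back-projects to the vehicle's forward direction
    in camera coordinates (x right, y down, z forward; zero roll assumed)."""
    forward = np.linalg.inv(K) @ np.array([u, v, 1.0])
    forward /= np.linalg.norm(forward)
    yaw = np.arctan2(forward[0], forward[2])  # optical axis vs. forward, sideways
    pitch = -np.arcsin(forward[1])            # optical axis vs. forward, up/down
    return pitch, yaw
```

If the vanishing point coincides with the principal point, the optical axis is aligned with the driving direction and both angles are zero; an offset vanishing point directly encodes the camera's pitch and yaw.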

```{bibliography}
:filter: docname in docnames
```


<!-- #endregion -->