When it comes to data sets that can be used to train neural networks, there are two options: either an already available data set, whether paid or free, is used, or a new data set is created. Currently, machine learning and especially deep learning are popular topics. As a result, the availability of free and publicly available data sets has increased, especially for image processing tasks like segmentation, facial recognition or object detection. Free data sets consisting of aerial imagery include \cite{VolodymyrMnih.2013}, \cite{spacenet}, \cite{isprs-vaihingen}, \cite{isprs-potsdam}, \cite{Helber.20170831}, \cite{deepsat}.
Despite these available data sets, we decided to create our own, consisting solely of open data, namely Microsoft Bing for the imagery and OpenStreetMap for the vector data. For this purpose, a tool named Airtiler \cite{airtiler} was developed, which is described in detail in \autoref{chp:theoretical_and_experimental_results}.
It can be assumed that, in the future, more and more Swiss cantons will make high resolution orthophotos publicly available. At the time of this writing, the canton of Zurich in particular takes a pioneering role and makes several of its data sources publicly and freely available\footnote{https://geolion.zh.ch/ (15.06.18)}. However, using these images for this work was not an option, because it would have led to a rather small data set.
\section{Prediction accuracy}
\subsection{Class probability}
After the first training of the neural network, the results were not quite as expected. Even though buildings were predicted as buildings in most cases, other classes, like tennis courts, were predicted as buildings as well. Due to this, the network was retrained with the additional, incorrectly predicted classes like tennis courts. However, instead of correctly distinguishing between buildings and tennis courts, the overall prediction accuracy got worse.
This might be because the network now has to solve a more complex task, deciding which class an object belongs to instead of making a simple yes-no decision. Additionally, the training data is highly imbalanced, as there are a lot more samples of buildings than of tennis courts.
As a result, one solution could be to train the network several times separately, to get multiple models, each trained for a specific class. Another solution could be to weight the loss of each class according to its relative frequency in the whole data set.
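To illustrate the class weighting idea, here is a minimal Python sketch (not the implementation used in this thesis) that derives per-class loss weights inversely proportional to each class's frequency in the training labels:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class loss weights, inversely proportional to how often
    each class occurs in the training labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: total / count for cls, count in counts.items()}

# Imbalanced toy data: far more buildings than tennis courts.
labels = ["building"] * 90 + ["tennis_court"] * 10
weights = inverse_frequency_weights(labels)
# The rare class receives a 9x larger weight than the frequent one.
```

Such weights would then scale the loss contribution of each class, so that rare classes like tennis courts influence the gradient as much as frequent ones.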
\subsection{Outline}
\autoref{fig:challenges:small_predictions} shows that the predictions are in most cases a bit too small when compared to the corresponding orthophoto. This might be the result of slightly misaligned masks, since the masks and the images are generated separately.
With the increasing computational power of recent graphics cards, increasingly complex neural networks can be applied to increasingly challenging tasks. Especially in the area of image processing, deep learning is gaining popularity, not only due to the great availability of data sets but also because companies recognize the amount of knowledge and information that can be retrieved with such technologies.
The following sections give a brief introduction to image segmentation using convolutional neural networks.
\subsection{Object detection and segmentation}
Object detection has existed since long before deep learning became as popular as it is today. In object detection, the goal is to determine whether an object of a specified class (for example 'car') is present in an image. Another variant combines detection with classification, which means finding all objects in an image together with their class and a probability that the object actually belongs to the determined class. \autoref{fig:neural_networks:object_detection} shows an example of object detection and classification.
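As a sketch of what such detector output looks like, consider the following hypothetical Python structure (the boxes, classes and scores are made up for illustration and do not come from any particular detector):

```python
# Hypothetical detector output: bounding box, predicted class and the
# probability that the object actually belongs to that class.
detections = [
    {"box": (10, 10, 50, 40), "class": "car", "score": 0.92},
    {"box": (60, 20, 90, 55), "class": "car", "score": 0.31},
    {"box": (5, 60, 30, 80), "class": "building", "score": 0.77},
]

def filter_by_score(dets, threshold=0.5):
    """Keep only detections whose class probability exceeds the threshold."""
    return [d for d in dets if d["score"] >= threshold]

confident = filter_by_score(detections)  # drops the low-confidence car
```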
\section{QGIS Plugin}
\begin{figure}
\centering
% figure image not contained in this excerpt
\caption{Changes have attributes showing the predicted class and the type of change (added, deleted, changed)}
\label{fig:plugin:change_attributes}
\end{figure}
\section{Prediction Accuracy}
Normally, the accuracy of predictions of objects on orthophotos is measured using \textit{Intersection over Union} (IoU), also called the \textit{Jaccard coefficient} \cite{Liu.2011}, which is a measure of similarity between objects. Its calculation is shown in \autoref{fig:results:iou}.
\begin{figure}
\centering
% figure image not contained in this excerpt
\caption{The calculation of Intersection over Union (IoU)\\Source: https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/ (23.06.2018)}
\label{fig:results:iou}
\end{figure}
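As a concrete illustration, the IoU of two binary masks can be computed when each mask is represented as a set of pixel coordinates. This is a minimal sketch for illustration only, not the evaluation code used in this thesis:

```python
def iou(mask_a, mask_b):
    """Intersection over Union (Jaccard coefficient) of two binary
    masks, each given as a set of (row, col) pixel coordinates."""
    a, b = set(mask_a), set(mask_b)
    union = a | b
    if not union:
        return 1.0  # two empty masks are considered identical
    return len(a & b) / len(union)

# Two 2x2 squares overlapping in a 2x1 strip:
pred  = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (0, 2), (1, 1), (1, 2)}
# intersection = 2 pixels, union = 6 pixels, so IoU = 1/3
```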
However, due to its non-differentiability, the IoU can not directly be used as a loss coefficient during the training of the neural network. Despite that, there are options for using the IoU during training, as shown in \cite{Bebis.2016}, \cite{Yu.20160804}.
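The core idea in such approaches is to make the IoU differentiable by replacing hard pixel counts with predicted probabilities. The following simplified plain-Python sketch shows this "soft" IoU loss idea; it is an illustration under that assumption, not the exact formulation from the cited papers:

```python
def soft_iou_loss(probs, targets, eps=1e-6):
    """Differentiable IoU surrogate: predicted probabilities replace
    hard 0/1 memberships, so intersection and union become sums of
    (products of) per-pixel values."""
    inter = sum(p * t for p, t in zip(probs, targets))
    union = sum(p + t - p * t for p, t in zip(probs, targets))
    return 1.0 - inter / (union + eps)

perfect = soft_iou_loss([1.0, 0.0, 1.0], [1.0, 0.0, 1.0])  # near 0
poor = soft_iou_loss([0.1, 0.9, 0.1], [1.0, 0.0, 1.0])     # near 1
```

Because every term is a smooth function of the predicted probabilities, gradients can flow through this loss during backpropagation.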
Since the goal of this thesis is not to get the most accurate predictions but to reduce the false positives and false negatives as much as possible, it does not really matter whether a prediction is extremely accurate, but whether all objects of the corresponding classes are found. Due to this, we introduce a new metric called \textbf{Hit rate}, which simply counts whether an object was found (hit) or not. The corresponding precision and recall can be calculated as shown.
\begin{equation}
Precision = \dfrac{|TP|}{|TP| + |FP|}
\end{equation}
and
\begin{equation}
Recall = \dfrac{|TP|}{|TP| + |FN|}
\end{equation}
where:
\begin{itemize}[label=]
\item $TP$: True positive predictions
\item $FP$: False positive predictions
\item $FN$: False negative predictions
\end{itemize}
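These two formulas translate directly into code; a minimal sketch with made-up counts:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP), Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts: 9 hits, 1 false alarm, 3 missed objects.
p, r = precision_recall(tp=9, fp=1, fn=3)  # p = 0.9, r = 0.75
```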
Finally, according to these metrics, our predictions have a \textbf{precision of 95.33\%} and a \textbf{recall of 88.96\%}. These values were evaluated using a randomly selected batch of 150 images from the test data set.
\section{Training data}
\subsection{Airtiler - A data set generation tool}
For the training, we wanted to use publicly and freely available data, not only because highly resolved orthophotos are quite expensive, but also to make it possible for others to reproduce the results.
As a result, OpenStreetMap was chosen for the vector data and Microsoft Bing Maps for the imagery. A data set consisting of satellite imagery and images for the ground truths can be created using the Python module Airtiler \cite{airtiler}. This tool has been developed by the author during this master thesis. It allows configuring one or more bounding boxes together with several other options like the zoom level and OpenStreetMap attributes.
\autoref{lst:results:airtiler_config} shows a sample configuration as it is being used by Airtiler.
\subsection{Publicly available data sets}
Furthermore, there are several different data sets publicly available: \cite{VolodymyrMnih.2013}, \cite{spacenet}, \cite{isprs-vaihingen}, \cite{isprs-potsdam}, \cite{Helber.20170831}, \cite{deepsat}.
\section{Mapping Challenge}
At the time of this writing, the platform crowdAI hosted a challenge called Mapping Challenge \cite{mappingchallenge}, which was about detecting buildings on satellite imagery. In order to gain additional knowledge regarding the performance of Mask R-CNN, we decided to participate in the challenge.
\chapter*{Acknowledgments}
\begin{description}
\item[Prof. Stefan Keller] for his creativity, his visions and his support not only throughout this thesis, but also in the projects before.
\item[My beloved wife Nadine] for supporting me whenever needed by listening, with thoughts, ideas and sometimes by doing a bit of additional housework.
\item[My family] for always trying to understand what my thesis is actually about.
\end{description}
@misc{airtiler,
author = {{Martin Boos}},
title = {Airtiler},
url = {https://github.com/mnboos/airtiler},
urldate = {2018-05-15}
}
@proceedings{Bebis.2016,
abstract = {We consider the problem of learning deep neural networks~(DNNs) for object category segmentation, where the goal is to label each pixel in an image as being part of a given object (foreground) or not (background). Deep neural networks are usually trained with simple loss functions (e.g., softmax loss). These loss functions are appropriate for standard classification problems where the performance is measured by the overall classification accuracy. For object category segmentation, the two classes (foreground and background) are very imbalanced. The intersection-over-union (IoU) is usually used to measure the performance of any object category segmentation method. In this paper, we propose an approach for directly optimizing this IoU measure in deep neural networks. Our experimental results on two object category segmentation datasets demonstrate that our approach outperforms DNNs trained with standard softmax loss.},
year = {2016},
title = {Optimizing Intersection-Over-Union in Deep Neural Networks for Image Segmentation: Advances in Visual Computing},
publisher = {{Springer International Publishing}},
isbn = {978-3-319-50835-1},
editor = {Bebis, George and Boyle, Richard and Parvin, Bahram and Koracin, Darko and Porikli, Fatih and Skaff, Sandra and Entezari, Alireza and Min, Jianyuan and Iwai, Daisuke and Sadagic, Amela and Scheidegger, Carlos and Isenberg, Tobias and Rahman, Md Atiqur and Wang, Yang}
}
@misc{cocoformat,
title = {COCO Data Format: Common Objects in Context}
}
0 commit comments