A combination of Deep Neural Networks (DNN), Reinforcement Learning (RL), and semi-supervised learning is used to exploit unlabeled data collected by IoT devices and help users navigate inside buildings. A deep Q reinforcement learning agent is trained on both labeled and unlabeled data to help users of smart cities navigate indoors. The main aim of the project is to guide the user as close to the target as possible. The semi-supervised learning model guides the RL agent closer to the target than the purely supervised learning model.
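As a rough illustration of the deep Q component, the sketch below builds a small fully connected Q-network that maps a state (for example, the vector of iBeacon RSSI readings) to Q-values over discrete movement actions. The layer sizes, state dimension, and action count are illustrative assumptions, not the project's actual configuration.

```python
# Minimal sketch of a deep Q-network (illustrative only; layer sizes,
# state_size, and action_size are assumptions, not the project's settings).
import numpy as np
from tensorflow.keras import layers, models, optimizers

state_size = 13   # e.g., one RSSI reading per iBeacon b3001-b3013 (assumed)
action_size = 4   # e.g., move up/down/left/right on the grid map (assumed)

def build_q_network(state_size, action_size):
    model = models.Sequential([
        layers.Input(shape=(state_size,)),
        layers.Dense(64, activation='relu'),
        layers.Dense(64, activation='relu'),
        layers.Dense(action_size, activation='linear'),  # one Q-value per action
    ])
    model.compile(loss='mse', optimizer=optimizers.Adam(learning_rate=1e-3))
    return model

q_network = build_q_network(state_size, action_size)
q_values = q_network.predict(np.zeros((1, state_size)), verbose=0)
```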
This project requires Python and the following Python libraries installed:
You will also need to have software installed to run and execute a Jupyter Notebook.
If you do not have Python installed yet, it is highly recommended that you install the Anaconda distribution of Python, which already has the above packages and more included.
The complete code is provided in the Navigation_Project.ipynb notebook file. The following files are necessary to run Navigation_Project.ipynb:
Task.py
Agent.py
ReplayBuffer.py
QNetwork.py
VAE.py
VAE_action.py
environment.py
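The sketch below shows one common way these modules fit together in a deep Q-learning training loop. The class names come from the file list above, but the constructors and method names (reset, act, step) are assumptions based on typical DQN layouts, not the project's actual interfaces.

```python
# Hypothetical training loop; constructor arguments and method names
# (reset, act, step) are assumptions, not the actual interfaces
# defined in these files.
from environment import Environment   # assumed class name
from Agent import Agent                # assumed class name

env = Environment()                    # assumed to build the indoor grid from RSSI data
agent = Agent(state_size=env.state_size, action_size=env.action_size)

for episode in range(1000):
    state = env.reset()
    done = False
    while not done:
        action = agent.act(state)                    # epsilon-greedy action from the Q-network
        next_state, reward, done = env.step(action)  # environment transition
        agent.step(state, action, reward, next_state, done)  # store in replay buffer and learn
        state = next_state
```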
In a terminal or command window, navigate to the top-level project directory Indoor Navigation and Localization/
(that contains this README) and run one of the following commands:
ipython notebook Navigation_Project.ipynb
or
jupyter notebook Navigation_Project.ipynb
This will open the Jupyter Notebook software and project file in your browser.
There are two datasets: iBeacon_RSSI_labeled and iBeacon_RSSI_unlabeled. Both are available on the UCI Machine Learning Repository.
Features
location: The location of receiving RSSIs from iBeacons b3001 to b3013; symbolic values showing the column and row of the location on the map (e.g., A01 stands for column A, row 1).
Date: Datetime in the format dd-mm-yyyy hh:mm:ss.
b3001 - b3013: RSSI readings corresponding to the iBeacons; numeric, integers only.
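For example, the labeled dataset can be loaded with pandas and the symbolic location split into grid coordinates. The CSV file name and exact column names below follow the feature description above but are assumptions about the actual file layout.

```python
# Illustrative loading/parsing snippet; the CSV file name and exact column
# names are assumptions based on the dataset description above.
import pandas as pd

df = pd.read_csv('iBeacon_RSSI_Labeled.csv')   # assumed file name

# Split a symbolic location such as 'A01' into a column letter and row number.
df['grid_col'] = df['location'].str[0]                 # e.g., 'A'
df['grid_row'] = df['location'].str[1:].astype(int)    # e.g., 1

# RSSI columns b3001-b3013 hold the integer signal strengths from each iBeacon.
rssi_columns = [f'b30{i:02d}' for i in range(1, 14)]
print(df[['location', 'grid_col', 'grid_row'] + rssi_columns].head())
```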