Characterizing the performance of the SPHERE exoplanet imager at the Very Large Telescope using deep learning
Taking direct pictures of extrasolar planetary systems is an important, yet challenging goal of modern astronomy, which requires specialized instrumentation. The high-contrast imaging instrument SPHERE, installed since 2014 at the Very Large Telescope, has been collecting a wealth of data over the last eight years. An important aspect for the exploitation of the large SPHERE database, for the scheduling of future observations, and for the preparation of new instruments is to understand how instrumental performance depends on environmental parameters such as the strength of atmospheric turbulence, the wind velocity, the duration of the observation, the pointing direction, etc. With this project, we propose to use deep learning techniques to study how these parameters drive the instrumental performance, in an approach similar to the one used by Xuan et al. 2018. This project will make use of first-hand access to the large SPHERE database through the SPHERE Data Center at IPAG/LAM (Grenoble/Marseille).
A detailed explanation of the whole project can be found in this manuscript.
To get a local copy up and running, follow these simple steps.
If you are using Windows, you should install wget and add its path to your environment variables.
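To verify that wget is actually reachable after updating the environment variables, a quick check from Python could look like the snippet below (a convenience snippet for illustration only, not part of the repository):

```python
import shutil

# Quick sanity check that wget is visible on the PATH
# (useful on Windows after editing the environment variables).
wget_path = shutil.which("wget")
if wget_path is None:
    raise RuntimeError("wget was not found on the PATH; install it or update the PATH.")
print("wget found at:", wget_path)
```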
- Download SPHERE Data:
  - You can either retrieve the SPHERE data download script from the SPHERE client or simply use the one located in `Dataset_creation/Download Scripts`.
    In order to retrieve the data from the SPHERE client, you need to follow these steps:
    i. Browse Process
    ii. Use the following options:
       - Process/recipe/ird_specal_dc
       - Process/Preset/cADI_softsorting / ird_specal_dc / production
       - Observation/Parameters/Filter/DB_H23
    iii. Generate the download script: select all the processes, then right click and select `Download script/Selected - outputs only`.
    iv. Parse the file: in the `Dataset_creation` folder there is a file named `sphere_dl_parser.py` that can be used to remove the unwanted files from the download script (a minimal sketch of this filtering step is given after this list).
    v. Execute the script: launch a terminal in the script folder and execute it.
       - Linux: `./parsed_sphere_dl_script_contrast_curves.sh`
       - Windows: `sh parsed_sphere_dl_script_contrast_curves.sh`
- Create Data Folders:
  - After downloading the data, create the following folder structure:
    ```
    SPHERE_DC_DATA
    ├── contrast_curves
    └── timestamps
    ```
  - These folders are where the (raw) data from the SPHERE client should be placed.
- Move Data to Dataset Creation Folder:
  - The observations will be downloaded in the folder `SPHERE_DC_DATA` (located in the same directory as `parsed_sphere_dl_script_contrast_curves.sh`).
  - Move the `SPHERE_DC_DATA` folder inside the `Dataset_creation` folder.
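The parsing step (iv) above boils down to keeping only the download commands for the products of interest. A minimal sketch of this idea is shown below; the input file name and the keyword list are assumptions made for illustration and may differ from what `sphere_dl_parser.py` actually does.

```python
# Minimal sketch of the filtering idea behind sphere_dl_parser.py.
# Both the input file name and KEEP_PATTERNS are assumptions for illustration;
# the repository script may use different names and keywords.
KEEP_PATTERNS = ("contrast_curve", "timestamp")

with open("sphere_dl_script.sh") as src, \
        open("parsed_sphere_dl_script_contrast_curves.sh", "w") as dst:
    for line in src:
        # Keep the shell boilerplate (shebang, comments, options) untouched and
        # copy only the download commands that match one of the wanted products.
        if not line.lstrip().startswith("wget") or any(p in line for p in KEEP_PATTERNS):
            dst.write(line)
```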
Weights and Biases was used to track the training of the models and to tune their hyperparameters. In order to run the code, you therefore need to provide your login key in the `main` of the different files.
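Assuming the standard `wandb` Python package, providing the key in the `main` of a script looks roughly like the sketch below; the key and project name are placeholders, not values from this repository.

```python
import wandb

if __name__ == "__main__":
    # Placeholder credentials; replace with your own W&B API key and project name.
    wandb.login(key="YOUR_WANDB_API_KEY")
    run = wandb.init(project="sphere-contrast-prediction")

    # ... training code goes here and logs its metrics through the active run ...
    run.log({"val_loss": 0.0})  # dummy metric, for illustration only

    run.finish()
```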
- `rf_single_nsigma.py`: Random forest that predicts a contrast value at a given separation.
- `nn_single_nsigma.py`: Neural network that predicts a contrast value at a given separation.
- `nn_vector_nsigma.py`: Neural network that predicts the whole contrast vector at once.
- `nn_single_uncertainty.py`: Neural network that also computes the aleatoric uncertainty of its predictions (a minimal sketch of this idea is given after this list).
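To make the last item more concrete: a common way to estimate aleatoric uncertainty is to have the network output both a mean and a log-variance and to train it with a Gaussian negative log-likelihood. The sketch below illustrates this technique with PyTorch; it is only an illustration under assumed choices (layer sizes, framework, dummy data), not the exact architecture used in `nn_single_uncertainty.py`.

```python
import torch
import torch.nn as nn

class ContrastNet(nn.Module):
    """Toy network that predicts a contrast value and its aleatoric log-variance."""

    def __init__(self, n_features: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                  nn.Linear(64, 64), nn.ReLU())
        self.mean_head = nn.Linear(64, 1)      # predicted contrast
        self.logvar_head = nn.Linear(64, 1)    # predicted log-variance

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mu, logvar, y):
    # Gaussian negative log-likelihood: 0.5 * (log var + (y - mu)^2 / var)
    return 0.5 * (logvar + (y - mu) ** 2 / logvar.exp()).mean()

# Dummy training step on random data, just to show how the loss is used.
model = ContrastNet(n_features=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 8), torch.randn(32, 1)

optimizer.zero_grad()
mu, logvar = model(x)
loss = gaussian_nll(mu, logvar, y)
loss.backward()
optimizer.step()
```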
Ludo Bissot - ludo.bissot@uclouvain.be
Project Link: https://github.com/lbissot/Master-Thesis
- Julien Milli's GitHub has been used to query Simbad.