28 years old, Computer Engineer and Master's student in Computer Engineering. Scholarship holder of PRH22-ANP, developing an air-underwater Next-Best-View planner for the inspection of large, partially submerged structures, as part of a project closely tied to the development of a Hybrid Unmanned Aerial Underwater Vehicle. I am also a volunteer scholarship holder at the robotics team FBOT, which focuses on building an autonomous drone for indoor tasks and on participating in robotics competitions.
I am currently developing a next-best-view planner for the coverage of large, partially submerged structures, intended for use on a Hybrid Unmanned Aerial Underwater Vehicle.
For the navigation stack, a package was developed to interface with MAVROS and send commands to the vehicle, allowing it to operate autonomously. A mission planner was developed to issue commands and query the vehicle's status. Complementary packages were also developed to inspect the environment, running on the Jetson Nano board alongside the mission_planner_package. These complementary packages handle the vision processing that drives the vehicle's navigation. For each detection task, the search for the target and the execution of a specific action are driven by processing images from the Logitech C920 camera, resulting in displacement changes, movements that signal the completion of a task, and a visual display of the processing performed.
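As a rough illustration of this interface, the sketch below shows a minimal Python node that talks to MAVROS using its standard topics and services (`/mavros/state`, `/mavros/setpoint_position/local`, `/mavros/cmd/arming`, `/mavros/set_mode`). The class layout, node name, and placeholder mission are assumptions for the example and do not reproduce the actual mission_planner_package; the flight mode name also depends on the firmware in use.

```python
#!/usr/bin/env python3
# Minimal sketch of a MAVROS command interface (illustrative only; the real
# mission planner package is more elaborate). Topic/service names are the
# standard MAVROS ones; everything else is a placeholder.
import rospy
from geometry_msgs.msg import PoseStamped
from mavros_msgs.msg import State
from mavros_msgs.srv import CommandBool, SetMode


class MavrosInterface:
    def __init__(self):
        self.state = State()
        rospy.Subscriber("/mavros/state", State, self._state_cb)
        self.setpoint_pub = rospy.Publisher(
            "/mavros/setpoint_position/local", PoseStamped, queue_size=10)
        rospy.wait_for_service("/mavros/cmd/arming")
        rospy.wait_for_service("/mavros/set_mode")
        self.arm_srv = rospy.ServiceProxy("/mavros/cmd/arming", CommandBool)
        self.mode_srv = rospy.ServiceProxy("/mavros/set_mode", SetMode)

    def _state_cb(self, msg):
        # Keep the latest flight-controller state (armed flag, flight mode).
        self.state = msg

    def arm_and_set_mode(self, mode):
        self.mode_srv(custom_mode=mode)
        self.arm_srv(value=True)

    def goto(self, x, y, z):
        # Publish a local position setpoint; it must be streamed continuously
        # for offboard-style control to remain active.
        sp = PoseStamped()
        sp.header.stamp = rospy.Time.now()
        sp.pose.position.x, sp.pose.position.y, sp.pose.position.z = x, y, z
        self.setpoint_pub.publish(sp)


if __name__ == "__main__":
    rospy.init_node("mission_planner_example")
    iface = MavrosInterface()
    rate = rospy.Rate(20)
    # Stream a few setpoints before switching mode, as PX4-style firmware expects.
    for _ in range(50):
        iface.goto(0.0, 0.0, 2.0)
        rate.sleep()
    iface.arm_and_set_mode("OFFBOARD")   # mode name depends on the firmware
    while not rospy.is_shutdown():
        iface.goto(0.0, 0.0, 2.0)        # hover at 2 m as a placeholder mission
        rate.sleep()
```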
The vehicle's navigation relies on reading QR codes and on estimating its position through image processing, interpreting the extent of the road ahead in a manner similar to a line follower.
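The snippet below sketches how such a QR-code plus line-follower cue could be extracted with OpenCV. The steering logic (Otsu threshold on the lower image band, normalized centroid offset) and the camera index are assumptions chosen for illustration, not the project's exact pipeline.

```python
# Illustrative sketch of the QR-code-based navigation cue described above.
import cv2
import numpy as np

detector = cv2.QRCodeDetector()

def process_frame(frame):
    """Return (qr_text, lateral_offset) extracted from a BGR frame."""
    # 1) Try to decode a QR code; its payload could encode the next waypoint/task.
    text, points, _ = detector.detectAndDecode(frame)

    # 2) Estimate the lateral offset of the "road" like a line follower:
    #    threshold the lower part of the image and compare the blob centroid
    #    with the image centre.
    h, w = frame.shape[:2]
    roi = cv2.cvtColor(frame[int(0.6 * h):, :], cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    m = cv2.moments(mask)
    offset = 0.0
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]
        offset = (cx - w / 2.0) / (w / 2.0)   # normalised to [-1, 1]

    return (text if text else None), offset

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)                 # e.g. the Logitech C920
    ok, frame = cap.read()
    if ok:
        qr, offset = process_frame(frame)
        print("QR payload:", qr, "| lateral offset:", round(offset, 3))
    cap.release()
```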
The current work proposes an approach to the development of this system, aiming to recognize a target on the water's surface and subsequently navigate towards it to perform emergency landings of UAVs. The approach is motivated by the use of photorealistic environments to enhance the performance of deep reinforcement learning.
This approach involves the characterization of active perception, focusing particularly on visual perception. For this purpose, the agent's observations are generated directly from visual processing, without convolutional neural networks or contrastive learning. The results demonstrated the effectiveness of the learned policy during the conducted tests, enabling successful landings on targets placed at various positions and orientations within the agent's field of view.
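A minimal sketch of such a hand-crafted observation is shown below: the landing target is segmented classically and summarized as a small feature vector that a compact policy network can consume. The HSV colour range and the exact feature set are assumptions for illustration only, not the features actually used in the work.

```python
# Hedged sketch: a low-dimensional observation from classical image processing
# (no CNN / contrastive encoder), as described above.
import cv2
import numpy as np

def target_observation(frame_bgr):
    """Return [u_err, v_err, area_ratio, visible] for a coloured landing target."""
    h, w = frame_bgr.shape[:2]
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Hypothetical HSV range for the landing target marker.
    mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:
        # Target not visible: zero features plus a visibility flag of 0.
        return np.array([0.0, 0.0, 0.0, 0.0], dtype=np.float32)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    u_err = (cx - w / 2.0) / (w / 2.0)          # horizontal offset in [-1, 1]
    v_err = (cy - h / 2.0) / (h / 2.0)          # vertical offset in [-1, 1]
    area_ratio = float(np.count_nonzero(mask)) / (h * w)   # rough distance cue
    return np.array([u_err, v_err, area_ratio, 1.0], dtype=np.float32)
```

Keeping the observation this compact lets the policy be a small MLP, which is one reason classical processing can stand in for a learned visual encoder in this setting.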
...
- Programming Languages: Python, C++, Java, C
- Tools: Git, Docker, ROS, OpenCV, Matplotlib, NumPy, SciPy, PyTorch
- Robotics
- Autonomous Navigation
- Deep Learning
- 🥉 Competição Brasileira de Robótica - Simulation, 2021;
- 🥉 Competição Brasileira de Robótica - São Paulo, Brazil, 2022;
- 4th place - FIRA RoboWorld Cup - Wolfenbüttel, Germany, 2023;
- Master's Degree: Computer Engineering
- Duration: 2021 - Present
- Degree: Bachelor of Computer Engineering
- Duration: 2018 - 2021