About | Features | Technologies | Requirements | Installation | Development | Deploy changes | CNC worker | License | Authors
This repository comprises two related applications:
- Desktop: A small desktop app to monitor and control an Arduino-based CNC machine, optimized for touchscreen.
- API: REST API to integrate the app's functionalities in a remote client.
You can see further information in their respective folders.
✔️ PostgreSQL database management
✔️ G-code files management
✔️ Real time monitoring of CNC status
✔️ Communication with GRBL-compatible CNC machine via USB
✔️ Long-running process delegation via message broker
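The USB feature above relies on GRBL's line-based send/acknowledge protocol. The sketch below illustrates that cycle; a real implementation would use pyserial's `serial.Serial`, but a `FakeSerial` stand-in is used here so the example is self-contained, and the G-code line is arbitrary.

```python
# Illustrative sketch of the GRBL send/acknowledge cycle.
# A real implementation would use pyserial's serial.Serial;
# FakeSerial stands in here so the example is self-contained.

class FakeSerial:
    """Pretends to be a GRBL controller: answers 'ok' to every line."""
    def __init__(self):
        self._responses = []

    def write(self, data: bytes) -> None:
        # GRBL acknowledges each newline-terminated command with "ok\r\n"
        if data.endswith(b"\n"):
            self._responses.append(b"ok\r\n")

    def readline(self) -> bytes:
        return self._responses.pop(0) if self._responses else b""

def send_gcode(port, command: str) -> str:
    """Send one G-code line and wait for the controller's acknowledgement."""
    port.write((command.strip() + "\n").encode("ascii"))
    return port.readline().decode("ascii").strip()

port = FakeSerial()
print(send_gcode(port, "G0 X10 Y5"))  # -> ok
```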
The following tools were used in this project:
- Programming language: Python
- API framework: FastAPI
- UI (desktop) framework: PyQt
- Database: PostgreSQL
- ORM: SQLAlchemy
- DB migrations: Alembic
- Tasks queue: Celery
- Message broker: Redis
- Containerization: Docker
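In this stack, Celery and Redis handle the "long-running process delegation" mentioned above: the API enqueues a job and returns immediately, while a separate worker consumes and executes it. As a self-contained stand-in (no Celery or Redis required), the same producer/worker pattern can be sketched with a plain in-process queue; the job name is hypothetical.

```python
# Minimal stand-in for the Celery/Redis pattern: the API enqueues a
# long-running job and returns immediately, while a background worker
# consumes and executes it.
import queue
import threading

broker: "queue.Queue[str]" = queue.Queue()  # plays the role of Redis
results: dict[str, str] = {}

def worker() -> None:
    # plays the role of the Celery worker process
    while True:
        job = broker.get()
        results[job] = "done"   # e.g. stream a G-code file to the CNC
        broker.task_done()

threading.Thread(target=worker, daemon=True).start()

broker.put("carve_part.gcode")  # what an API endpoint would do
broker.join()                   # wait here only for demonstration purposes
print(results["carve_part.gcode"])  # -> done
```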
Before starting 🏁, you need to have Python and Docker installed.
There is a folder for each subproject in docs, which contains instructions to start using them in production:
You can execute the script deployment/db_schema.py in production with the adminer service, or copy it to the Raspberry and follow these steps.
The easiest way to run the needed services is with Docker. This will start the API and the following services:
- PostgreSQL DB.
- Adminer, to manage the DB.
- Message broker (Redis).
- Flower, to monitor the Celery worker.
$ docker compose up -d
If you also want to start the CNC worker (Celery) in a container (Linux only, see this section):
$ docker compose --profile=worker up -d
Open http://localhost:8000 with your browser to check if the API works.
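If you prefer to check from a script rather than the browser, a small reachability probe can do the same; the `http://localhost:8000` address is the compose default from above, and the function name is just illustrative.

```python
# Quick reachability check for the API (default compose address assumed).
import urllib.error
import urllib.request

def api_is_up(url: str = "http://localhost:8000", timeout: float = 2.0) -> bool:
    """Return True if the URL answers with a non-5xx HTTP response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (urllib.error.URLError, OSError):
        return False

print(api_is_up())
```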
You can find instructions to run locally (without Docker) and further information in each subproject's folder:
You can also run a worker with a mocked version of the GRBL device, which runs the GRBL simulator. NOTE: This version of the worker can also run on Windows.
$ docker compose --profile=simulator up
For the worker/app to use the mocked port, update your environment (or ini file) to use a virtual port:
SERIAL_PORT=/dev/ttyUSBFAKE
Initiate the virtual port inside the worker's container:
$ docker exec -it remote-cnc-worker-sim /bin/bash simport.sh
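The worker picks up the port from its configuration; a plausible way to resolve it is an environment lookup with a fallback. `SERIAL_PORT` comes from the docs above, while the `/dev/ttyUSB0` default is only an assumption for illustration.

```python
# How the worker might resolve the serial port: environment variable
# first, falling back to a default. SERIAL_PORT comes from the docs;
# the /dev/ttyUSB0 fallback is an assumed default for illustration.
import os

def resolve_serial_port(default: str = "/dev/ttyUSB0") -> str:
    return os.environ.get("SERIAL_PORT", default)

os.environ["SERIAL_PORT"] = "/dev/ttyUSBFAKE"  # as set for the simulator
print(resolve_serial_port())  # -> /dev/ttyUSBFAKE
```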
To see your database, you can either use the adminer container, which serves an admin panel at http://localhost:8080 when running the docker-compose.yaml; or connect to it with a client like DBeaver.
You can manage database migrations by using the following commands inside the core folder.
- Apply all migrations:
$ alembic upgrade head
- Revert all migrations:
$ alembic downgrade base
- Seed DB with initial data:
$ python seeder.py
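A seeder like the one above is typically written to be idempotent, so re-running it does not duplicate rows. The sketch below shows that idea; the real seeder.py targets PostgreSQL through SQLAlchemy, while sqlite3 (in memory) is used here only to keep the example self-contained, and the table and rows are hypothetical.

```python
# Sketch of an idempotent seeder. The real seeder.py targets PostgreSQL
# via SQLAlchemy; an in-memory sqlite3 database is used here only to
# keep the example self-contained. Table and row names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT PRIMARY KEY, role TEXT)")

SEED_USERS = [("admin", "admin"), ("operator", "user")]

# INSERT OR IGNORE makes re-running the seeder safe: existing rows are kept
conn.executemany("INSERT OR IGNORE INTO users VALUES (?, ?)", SEED_USERS)
conn.executemany("INSERT OR IGNORE INTO users VALUES (?, ?)", SEED_USERS)  # no-op
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # -> 2
```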
More info about Alembic usage here.
If you are using docker compose, you can run the following commands to apply database migrations and the seeder:
$ docker exec remote-cnc-api bash -c "cd core && alembic upgrade head"
$ docker exec remote-cnc-api bash -c "cd core && python seeder.py"
There is a folder for each subproject in docs, which contains instructions to deploy changes to production:
If we modify the Docker image for the API or worker, or we just need to update the version of one of the other services, we have to follow these steps.
- If not logged in, log in to your Docker account:
$ docker login
- In the server, stop, update and restart the project in production mode:
$ cd /home/username/adminapp
$ docker compose -f docker-compose.yaml -f docker-compose.production.yaml stop
$ docker compose -f docker-compose.yaml -f docker-compose.production.yaml rm -f
$ docker compose -f docker-compose.yaml -f docker-compose.production.yaml pull
$ docker compose -f docker-compose.yaml -f docker-compose.production.yaml up -d
NOTE: Take into account that you may need to add --profile=worker to each command above if you are using the worker service. The same applies to other optional services (ngrok).
- Generate a SQL script for the migration following these steps.
- You can either execute the migration script in production with the adminer service, or copy it to the Raspberry and follow these steps.
If you are using the worker service and you have made changes to the code, you must generate a Docker image for the architecture of the Raspberry (ARM 32 v7) to pull it in production. The easiest way to achieve that is by using buildx.
The first time we generate the image, we must create a custom builder.
$ docker buildx create --name raspberry --driver=docker-container
Then, the command to generate the image and push it to the remote repository is the following:
$ docker buildx build --platform linux/arm/v7,linux/amd64 --tag {{your_dockerhub_user}}/cnc-worker:latest --builder=raspberry --target production --file core/Dockerfile.worker --push core
NOTE: You may have to log in with docker login before running the build command.
Then, follow the guide to update Docker containers in the Raspberry.
The CNC worker should start automatically when running docker compose --profile=worker up, under certain conditions:
- It only works with Docker CE without Docker Desktop, because the latter can't mount devices. You can view a discussion about it here.
- Therefore, and given that devices on Windows work in a completely different way (there is no /dev folder), you won't be able to run the worker service on Windows. For that reason, on Windows you'll have to follow the steps in Start the Celery worker manually (Windows).
In case you don't use Docker or just want to run the worker manually, you can follow these steps.
# 1. Move to worker folder
$ cd core/worker
# 2. Start Celery's worker server
$ celery --app tasks worker --loglevel=INFO --logfile=logs/celery.log
Optionally, if you are going to make changes in the worker's code and want to see them in real time, you can start the Celery worker with auto-reload.
# 1. Move to worker folder
$ cd core/worker
# 2. Start Celery's worker server with auto-reload
$ watchmedo auto-restart --directory=./ --pattern=*.py -- celery --app tasks worker --loglevel=INFO --logfile=logs/celery.log
Due to a known problem with Celery's default pool (prefork), starting the worker on Windows is not as straightforward. To do so, we have to explicitly tell Celery to use another pool. You can read more about this issue here.
- solo: The solo pool is a simple, single-threaded execution pool. It simply executes incoming tasks in the same process and thread as the worker.
$ celery --app worker worker --loglevel=INFO --logfile=logs/celery.log --pool=solo
- threads: Threads in the threads pool are managed directly by the operating system kernel. As long as Python's ThreadPoolExecutor supports Windows threads, this pool type works on Windows.
$ celery --app worker worker --loglevel=INFO --logfile=logs/celery.log --pool=threads
- gevent: The gevent package officially supports Windows, so it remains a suitable option for IO-bound task processing on Windows. The downside is that you have to install it first.
# 1. Install gevent
# Option 1: If you use Conda
$ conda install -c anaconda gevent
# Option 2: If you use pip
$ pip install gevent
# 2. Start Celery's worker server
$ celery --app worker worker --loglevel=INFO --logfile=logs/celery.log --pool=gevent
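The pool choice above can also be made automatically at launch time. This sketch builds the Celery command line and appends a Windows-safe pool only when needed (the threads pool is picked here as an arbitrary example; the command is constructed but not executed):

```python
# Sketch: build the Celery command line, picking a Windows-safe pool
# automatically. App and log arguments mirror the commands above; the
# choice of the threads pool over solo/gevent is arbitrary.
import sys

def celery_argv(platform: str = sys.platform) -> list:
    argv = ["celery", "--app", "worker", "worker",
            "--loglevel=INFO", "--logfile=logs/celery.log"]
    if platform.startswith("win"):
        # prefork does not work on Windows; fall back to the threads pool
        argv.append("--pool=threads")
    return argv

print(" ".join(celery_argv("win32")))
```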
You can use the following command to execute tests (unit, linter, type check):
$ make run-tests
This project is under the MIT license. For more details, see the LICENSE file.
Made with ❤️ by Leandro Bertoluzzi and Martín Sellart.