The Foodplanner API is a backend service for the GIRAF Foodplanner application, providing a robust system to manage meal plans and user roles. Built with ASP.NET Core, the API connects to a PostgreSQL database and exposes endpoints for seamless integration with the Flutter-based frontend.
- User Management: Role-based access control for teachers and parents.
- Meal Planning: Endpoints for creating, updating, and managing meal plans.
- Database Integration: Utilizes PostgreSQL for reliable data storage.
- Database Migration: Utilizes FluentMigrator for versioned database migrations.
- RESTful API: Follows REST principles for easy integration and development.
- Framework: ASP.NET Core
- Database: PostgreSQL
- Image Database: Minio
- Authentication: JWT (JSON Web Token)
- Containerization: Docker (optional)
src/
├── FoodplannerApi/ # Contains main functionality
│ └── Controller/ # API controllers for handling HTTP requests
├── FoodplannerDataAccessSQL/ # Data access layer
│ ├── Account/ # Account repositories
│ ├── FeedbackChat/ # Feedback repositories
│ ├── Image/ # Image repositories
│ ├── LunchBox/ # LunchBox repositories
│ └── Migrations/ # Database migrations
├── FoodplannerModels/ # Model layer
│ ├── Account/ # Account models
│ ├── FeedbackChat/ # Feedback models
│ ├── Image/ # Image models
│ └── LunchBox/ # LunchBox models
├── FoodplannerServices/ # Service Layer
│ ├── Account/ # Account services
│ ├── Auth/ # Authentication services
│ ├── FeedbackChat/ # Feedback services
│ ├── Image/ # Image services
│ └── LunchBox/ # LunchBox services
└── Test/ # Tests
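As a hedged sketch of how these layers interact (the types below are illustrative, not the project's actual classes), a request flows from a controller in FoodplannerApi through a service in FoodplannerServices to a repository implemented in FoodplannerDataAccessSQL:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Illustrative types only; the real project defines its own models, interfaces, and implementations.
public record MealPlan(int Id, string Name);

public interface IMealPlanRepository               // implemented in the data access layer
{
    Task<MealPlan?> GetByIdAsync(int id);
}

public class MealPlanService                       // service layer
{
    private readonly IMealPlanRepository _repository;
    public MealPlanService(IMealPlanRepository repository) => _repository = repository;

    public Task<MealPlan?> GetMealPlanAsync(int id) => _repository.GetByIdAsync(id);
}

[ApiController]
[Route("api/[controller]")]
public class MealPlansController : ControllerBase  // API layer (Controller folder)
{
    private readonly MealPlanService _service;
    public MealPlansController(MealPlanService service) => _service = service;

    [HttpGet("{id}")]
    public async Task<IActionResult> Get(int id)
    {
        var mealPlan = await _service.GetMealPlanAsync(id);
        return mealPlan is null ? NotFound() : Ok(mealPlan);
    }
}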
When making changes to the database, such as creating tables or adding new relations, a new migration should be added that defines the change.
Migration files are versioned, which enables rollback to a previous database version. Version numbers increase sequentially starting at 1, meaning the next migration should have version 2, and so on. Documentation is found at https://fluentmigrator.github.io/articles/intro.html.
New migrations are added by creating a new file in the Migrations folder. The class must inherit from Migration and implement the Up and Down methods, which must be each other's reverse, as in the sketch below.
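As a hedged example (the version number, class name, and table are hypothetical and chosen purely for illustration), a FluentMigrator migration could look like this:

using FluentMigrator;

namespace FoodplannerDataAccessSQL.Migrations;

// Version must be one higher than the latest existing migration (3 is only an example).
[Migration(3)]
public class AddExampleTable : Migration
{
    public override void Up()
    {
        // Applies the change: create a new table.
        Create.Table("example")
            .WithColumn("id").AsInt32().PrimaryKey().Identity()
            .WithColumn("name").AsString(100).NotNullable();
    }

    public override void Down()
    {
        // Reverses Up exactly: drop the table again.
        Delete.Table("example");
    }
}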
Ensure you have the following installed:
- ASP.NET Core SDK
- Docker (optional, for containerized deployment)
An active Infisical project must exist for managing secrets: either use the existing project, create a new one, or overwrite all secrets. Refer to installation step 4 (Setup development environment) for how to overwrite secrets locally.
- Clone the repository:
git clone https://github.com/aau-giraf/foodplanner-api.git
- Navigate to the project directory:
cd foodplanner-api/foodplannerApi
- Install dependencies:
dotnet restore
- Setup development environment:
Make sure to have a JSON file called appsettings.Development.json in the same directory as appsettings.json, containing the following properties.
{
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft.AspNetCore": "Warning"
}
},
"Infisical": {
"ClientId": "<ClientId>",
"ClientSecret": "<ClientSecret>",
"Workspace": "<Workspace>"
}
}
Overwriting environment variables is possible; this is done by adding them to the "Infisical" group.
Example
...
"Infisical": {
"ClientId": "<ClientId>",
"ClientSecret": "<ClientSecret>",
"Workspace": "<Workspace>",
"DB_HOST": "anotherValue"
}
...
Ensure GitHub Actions is set up correctly.
TODO: this still needs to be written once we know what needs to be done with regard to Docker Hub.
The server will consist of Docker containers: a staging and a production API, a PostgreSQL database, and Minio for image storage. It integrates with GitHub Actions to streamline development and automatically update the staging and production APIs to follow the newest releases. To set this up correctly, follow these steps.
- The first step is to decide where to host the server. A great option is AAU's own hosting platform, https://strato-new.claaudia.aau.dk; the most important thing is to pick a server running Ubuntu.
Important
If Strato is chosen, remember to open all the ports that are expected to be used. This can be done in Security Groups under the Network section. These ports could be 5432, 8080, 8081, 9000, and 9001.
- Install the required tooling on the server. The deployment setup below relies on Docker (with the compose plugin), curl, jq, and cron.
- Create a new file called docker-compose.yml and open it using the following commands:
touch docker-compose.yml
nano docker-compose.yml
Paste the following code into the file. Remember to update the variables with your own information.
version: "3.8" services: minio: image: minio/minio:latest container_name: minio_giraf restart: unless-stopped ports: - "9000:9000" - "9001:9001" volumes: - ./minio/data:/mnt/data environment: - MINIO_ROOT_USER=<insert here> - MINIO_ROOT_PASSWORD=<insert here> - MINIO_VOLUMES=/mnt/data command: server /mnt/data --console-address ":9001" postgres: image: postgres:latest container_name: postgres_giraf restart: unless-stopped ports: - "5432:5432" volumes: - ./postgres/data:/var/lib/postgresql/data environment: - POSTGRES_PASSWORD=<insert here> - POSTGRES_USER=<insert here> - POSTGRES_DB=<insert here>
Go ahead and run the docker compose file using the following command:
docker compose -f docker-compose.yml up
This will create two containers: Minio for image storage and PostgreSQL for data storage. The Minio console can now be accessed and managed at http://<server-ip>:9001
- Connect to the newly created PostgreSQL container using your preferred PostgreSQL database tool, e.g. pgAdmin.
Tip
Host name/address: This is your server-ip
Port: 5432 unless changed in docker-compose.yml
Username: The one you wrote in docker-compose.yml
Password: The one you wrote in docker-compose.yml
When connected, create two new databases called `giraf_foodplanner_db_stage` and `giraf_foodplanner_db_prod`. You don't have to create any tables; these will be generated automatically when the dotnet application runs.
- Create a new file called docker-auto-deploy.sh and open it using the following commands:
touch docker-auto-deploy.sh
nano docker-auto-deploy.sh
Paste the following code into the file. Remember to update the variables with your own information.
# Variables
DOCKER_IMAGE="<docker-hub-name>/foodplanner-api" # Replace with your Docker image name
CONTAINER_NAME="foodplanner-api" # Name of your running container
LAST_IMAGE_FILE="/var/tmp/last_image_version_stage.txt" # File to store the last pulled image version
LAST_IMAGE_FILE_PROD="/var/tmp/last_image_version_prod.txt"
CLIENT_ID="<client-id>"
CLIENT_SECRET="<client-secret>"
WORKSPACE="<workspace>"

# Function to pull and deploy
pull_and_deploy() {
    echo "New image found, pulling and deploying..."
    sudo docker pull $DOCKER_IMAGE:staging

    # Stop the existing container
    sudo docker stop $CONTAINER_NAME-stage
    sudo docker rm $CONTAINER_NAME-stage

    # Start a new container with the updated image
    sudo docker run -d --name $CONTAINER_NAME-stage -p 8080:8080 -e CLIENT_ID=$CLIENT_ID -e CLIENT_SECRET=$CLIENT_SECRET -e WORKSPACE=$WORKSPACE -e ASPNETCORE_ENVIRONMENT=Staging $DOCKER_IMAGE:staging

    # Store the new image digest in the file
    echo $LATEST_DIGEST > $LAST_IMAGE_FILE
    echo "Deployment successful."
}

pull_and_deploy_prod() {
    echo "New image found, pulling and deploying..."
    sudo docker pull $DOCKER_IMAGE:prod

    # Stop the existing container
    sudo docker stop $CONTAINER_NAME-prod
    sudo docker rm $CONTAINER_NAME-prod

    # Start a new container with the updated image
    sudo docker run -d --name $CONTAINER_NAME-prod -p 8081:8080 -e CLIENT_ID=$CLIENT_ID -e CLIENT_SECRET=$CLIENT_SECRET -e WORKSPACE=$WORKSPACE -e ASPNETCORE_ENVIRONMENT=Production $DOCKER_IMAGE:prod

    # Store the new image digest in the file
    echo $LATEST_DIGEST_PROD > $LAST_IMAGE_FILE_PROD
    echo "Deployment successful."
}

# Get the current latest image digest from Docker Hub
LATEST_DIGEST=$(curl -s https://hub.docker.com/v2/repositories/$DOCKER_IMAGE/tags/staging/ | jq -r '.images[0].digest')
LATEST_DIGEST_PROD=$(curl -s https://hub.docker.com/v2/repositories/$DOCKER_IMAGE/tags/prod/ | jq -r '.images[0].digest')

# For staging
# Check if the last image version file exists
if [ ! -f "$LAST_IMAGE_FILE" ]; then
    echo "No previous image found, pulling the latest version..."
    pull_and_deploy
else
    # Read the last pulled image digest
    LAST_DIGEST=$(cat $LAST_IMAGE_FILE)

    # Compare the latest digest with the last pulled one
    if [ "$LATEST_DIGEST" != "$LAST_DIGEST" ]; then
        pull_and_deploy
    else
        echo "No new image found."
    fi
fi

# For production
if [ ! -f "$LAST_IMAGE_FILE_PROD" ]; then
    echo "No previous image found, pulling the latest version..."
    pull_and_deploy_prod
else
    # Read the last pulled image digest
    LAST_DIGEST_PROD=$(cat $LAST_IMAGE_FILE_PROD)

    # Compare the latest digest with the last pulled one
    if [ "$LATEST_DIGEST_PROD" != "$LAST_DIGEST_PROD" ]; then
        pull_and_deploy_prod
    else
        echo "No new image found."
    fi
fi
- Last but not least, we need to set up a cron job to run the docker-auto-deploy.sh script periodically. Make sure the script is executable (chmod +x docker-auto-deploy.sh), then open the crontab using the following command:
crontab -e
Then add the following line to the end of the file.
* * * * * $HOME/docker-auto-deploy.sh >> $HOME/docker-deploy.log 2>&1
This will run the bash script once every minute and write the output to docker-deploy.log
Have Docker installed.
- Create a file in a local folder named docker-compose-example.yaml (this file name is what you pass to docker compose when starting the containers).
- Paste in the following:
version: '3.8'
services:
  minio:
    image: minio/minio:latest
    container_name: minio_giraf_local
    restart: unless-stopped
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - ./minio/data:/mnt/data
    environment:
      - MINIO_ROOT_USER=girafminio
      - MINIO_ROOT_PASSWORD=girafminio
      - MINIO_VOLUMES=/mnt/data
    command: server /mnt/data --console-address ":9001"
  postgres:
    image: postgres:latest
    container_name: postgres_giraf_local
    restart: unless-stopped
    ports:
      - "7654:5432"
    volumes:
      - ./postgres/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=giraf
  adminer:
    image: adminer
    restart: unless-stopped
    container_name: adminer_giraf_local
    ports:
      - "8000:8080"
- Open cmd.
- Change directory to the folder containing the yml file:
cd <path to your folder>
- Run your yml file:
docker compose -f docker-compose-example.yaml up
Important
The credentials must match the development secrets configured in Infisical.
Note
Minio and Adminer are web interfaces for managing the Minio bucket and the PostgreSQL database. (You can also use pgAdmin or any other database tool.)
Minio: http://localhost:9001/login
Username: girafminio
Password: girafminio
Adminer: http://localhost:8000/
Server: postgres
Username: postgres
Password: postgres
Database: giraf
- Start the API locally:
dotnet run
- The API will be available at https://localhost:8080
The API is documented using Swagger, available at:
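As a rough illustration (this is the standard Swashbuckle.AspNetCore wiring, not necessarily this project's exact Program.cs), Swagger is typically enabled in an ASP.NET Core project like this:

// Program.cs sketch; the real project may configure Swagger differently.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
// Swashbuckle registrations that generate the OpenAPI document.
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    // Serves the generated OpenAPI JSON and the interactive Swagger UI.
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.MapControllers();
app.Run();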
Contributions are welcome! Follow these steps:
- Create a branch for your feature or bugfix:
git checkout -b feature-name
- Commit your changes:
git commit -m "Add feature name"
- Push to the branch:
git push origin feature-name
- Open a pull request to the staging branch, test it, and then create a new pull request for main.