A centralized platform for managing module proposals, reviews, and approvals within the School of Computation, Information and Technology (CIT) at TUM.
The Module Management System streamlines the creation, review, and approval of academic modules. It replaces an inefficient email-based workflow with a structured digital platform that provides clear guidance and feedback mechanisms for all stakeholders.
- Module Proposal Creation: Professors can create and save module proposals with all necessary fields.
- Structured Feedback Process: Reviewers can provide granular feedback on specific sections.
- Version Management: Support for creating new module versions based on feedback while maintaining version history.
- AI-Assisted Description Generation: Help professors create standardized module descriptions.
- Module Overlap Detection: Identify potential overlaps between proposed modules and the existing curriculum (a sketch of the idea follows this list).
- PDF Export: Export module information for offline use.
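As a rough illustration of the overlap-detection idea, the snippet below compares module descriptions with Sentence Transformers (listed in the technology stack). The model name and threshold are illustrative assumptions, not the project's actual values.

```python
# Minimal sketch of embedding-based overlap detection.
# Model choice and threshold are illustrative, not the project's values.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

proposed = "Distributed systems: consensus, replication, and fault tolerance."
existing = [
    "Fundamentals of distributed computing and consensus protocols.",
    "Linear algebra for machine learning.",
]

# Cosine similarity between the proposed description and each existing one
scores = util.cos_sim(model.encode(proposed), model.encode(existing))[0]
for description, score in zip(existing, scores.tolist()):
    if score > 0.6:  # illustrative threshold
        print(f"Potential overlap ({score:.2f}): {description}")
```

Descriptions scoring above the threshold would be flagged for manual review rather than rejected automatically.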
Creating a Module Proposal:
- Log in to the system with professor credentials
- Navigate to "Create New Proposal"
- Fill in all required module information
- Save your progress at any time
- Use AI-assistance for generating standardized descriptions
- Check for potential module overlaps
- Submit when ready for review
Handling Feedback:
- Review consolidated feedback from all stakeholders
- Create a new module version addressing the feedback
- Resubmit for approval
Reviewing Module Proposals:
- Log in with reviewer credentials
- View list of pending module proposals
- Provide specific feedback for each field
- Approve, request changes, or reject proposals
The system implements a modular client-server architecture with three primary components:
- Angular Client: Provides role-specific user interfaces with responsive design
- Spring Boot Server: Implements core business logic, workflow, and data persistence
- Python AI Service: Delivers module description generation and overlap detection capabilities
Technology Stack:
- Client-side: Angular 19, TypeScript, Tailwind CSS
- Server-side: Java Spring Boot, Hibernate, PostgreSQL
- AI Service: Python FastAPI, LangChain, Sentence Transformers
- Authentication: Keycloak integration
- Deployment: Docker containerization
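To make the three-component split concrete, here is a hypothetical sketch of what an endpoint on the Python AI service could look like. The route name and schema are invented for illustration; the real implementation lives in the AI directory.

```python
# Hypothetical sketch of an AI-service endpoint (route and schema invented
# for illustration; see the AI directory for the real implementation).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class DescriptionRequest(BaseModel):
    title: str
    keywords: list[str]

@app.post("/generate-description")
async def generate_description(req: DescriptionRequest) -> dict:
    # The real service delegates to an LLM via LangChain; this stub only
    # shows the request/response shape of such an endpoint.
    return {"description": f"'{req.title}' covers {', '.join(req.keywords)}."}
```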
Make sure you have the following installed:
- Docker and Docker Compose
- Node.js v20.19+ and npm
- Angular CLI
- Java JDK 21
- Python 3.11+
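A quick way to confirm the toolchain is in place:

```bash
docker --version && docker-compose --version
node -v && npm -v
ng version
java -version
python3 --version
```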
- Copy the example environment file to create your `.env` file:

  ```bash
  cp .example.env .env
  ```

- Edit `.env` and update the values as needed.
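The authoritative key list is in `.example.env`. Purely as a hypothetical illustration (these variable names are invented, apart from the local-LLM settings documented later), an edited `.env` might contain:

```
# Hypothetical example only - use the keys from .example.env
POSTGRES_PORT=5432
KEYCLOAK_PORT=8081
AI_SERVICE_PORT=5001
```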
From the project root directory, start PostgreSQL, Keycloak, and the AI service:
```bash
docker-compose -f docker/docker-compose.dev.yaml --env-file .env up
```

Ports are configured in your `.env` file.
From the Server directory:
```bash
cd Server
./gradlew bootRun
```

Note: Make sure `gradlew` has execute permissions. If not, run:

```bash
chmod +x gradlew
```

The server will start on http://localhost:8080.
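Once it is up, any HTTP response (even a 401 or 404) confirms the server is reachable:

```bash
curl -i http://localhost:8080
```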
From the Client directory:
```bash
cd Client
npm install --legacy-peer-deps  # First time only
npm start
```

The client will start on http://localhost:4200.
Development Mode: The client uses `environment.development.ts`, which points to your local server and Keycloak instances. URLs are configured in the environment file.
If you want to run the AI service locally (outside Docker) for development:
- Navigate to the AI directory:

  ```bash
  cd AI
  ```

- Create a virtual environment (using Python 3.11):

  Option A: Using pyenv (Recommended)

  If you have `pyenv` installed:

  ```bash
  pyenv install 3.11   # If not already installed
  pyenv local 3.11     # Sets local Python version for this directory
  python -m venv .venv # Creates venv using the pyenv-managed Python
  ```

  Option B: Using system Python 3.11

  If `python3.11` is available on your system:

  ```bash
  python3.11 -m venv .venv
  ```
- Activate the virtual environment:

  On macOS/Linux:

  ```bash
  source .venv/bin/activate
  ```

  On Windows:

  ```bash
  .venv\Scripts\activate
  ```
- Install dependencies:

  ```bash
  pip install --upgrade pip
  pip install -r requirements.txt
  ```

- Run the service locally:

  ```bash
  uvicorn app.main:app --host 0.0.0.0 --port 5001 --reload
  ```

  The `--reload` flag enables auto-reload on code changes during development.
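With the service running, FastAPI's built-in interactive docs (served at `/docs` by default) offer a quick smoke test:

```bash
curl -s http://localhost:5001/openapi.json | head -c 200
# or open http://localhost:5001/docs in a browser
```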
The AI service supports using local LLMs via LM Studio or other OpenAI-compatible local servers. This is useful for development when you don't want to use Azure OpenAI.
Prerequisites:
- LM Studio installed and running
- A model loaded in LM Studio
Setup Steps:
- Start LM Studio:
  - Open LM Studio
  - Load a model of your choice
  - Start the local server (usually runs on http://localhost:1234)

- Configure Environment Variables:

  In your `.env` file, set:

  ```
  USE_LOCAL_LLM=true
  LOCAL_LLM_BASE_URL=http://host.docker.internal:1234/v1
  LOCAL_LLM_MODEL=your-model-name
  ```
Important Notes:
- Use `host.docker.internal` instead of `localhost` or `127.0.0.1` when running in Docker, as containers can't access `localhost` on the host machine
- If running the AI service locally (not in Docker), you can use `http://localhost:1234/v1`
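For reference, this is how an OpenAI-compatible client such as LangChain's `ChatOpenAI` can be pointed at LM Studio. How the AI service wires these variables internally is an assumption here; the snippet only demonstrates the pattern:

```python
# Sketch only: pointing an OpenAI-compatible client at LM Studio.
# The actual wiring inside the AI service may differ.
import os
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url=os.environ["LOCAL_LLM_BASE_URL"],  # e.g. http://localhost:1234/v1
    model=os.environ["LOCAL_LLM_MODEL"],
    api_key="lm-studio",  # LM Studio ignores the key, but the client requires one
)
print(llm.invoke("Summarize this module in one sentence: ...").content)
```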
The Keycloak realm includes test users (see `module-management-realm.json`), which are also seeded to the database when you run the server:
Professors:
- `module_management_test_professor1` / `test` - Role: PROFESSOR (Max Mustermann)
- `module_management_test_professor2` / `test` - Role: PROFESSOR (Alice Wonderland)
Academic Program Advisor:
- `module_management_test_apa1` / `test` - Role: ACADEMIC_PROGRAM_ADVISOR (Academic Program Advisor)
Quality Management:
- `module_management_test_qm1` / `test` - Role: QUALITY_MANAGEMENT (Quirin Moos)
Examination Board:
- `module_management_test_eb1` / `test` - Role: EXAMINATION_BOARD (Erik Bert)
If the API changes, regenerate the TypeScript client:
```bash
cd Client
npm run api:update
```

This requires the Spring Boot server to be running on port 8080.
This project is licensed under the MIT License.
This project was developed as part of a Master's thesis by Kilian Wimmer at the Technical University of Munich.

