An offline AI chat application that runs entirely on your local machine, providing privacy and control over your conversations.
LokiChat AI is a comprehensive chat application that combines:
- Backend: Spring Boot with the Spring AI framework
- AI Model: Ollama for local AI model execution
- Frontend: Next.js for a modern chat interface
- Database: PostgreSQL for conversation persistence
- Deployment: Docker Compose for seamless local deployment
Key features:

- 🤖 Offline AI chat with local model execution
- 💬 Create and manage multiple conversations
- 💾 Persistent chat history stored in PostgreSQL
- 🐳 Docker containerization for easy deployment
- 🌐 Modern web interface built with Next.js
- 🔒 Complete privacy - no data leaves your machine
To run LokiChat AI you will need:

- Docker and Docker Compose
- Node.js (for frontend development)
- PM2 (for process management)
- Apache (for production frontend serving)
First, create a .env file in your project root:

```env
DB_USER=your_db_username
DB_PASSWORD=your_secure_password
```

Keep sensitive credentials in the .env file for security, and never commit the file to version control.
Create or update your docker-compose.yml:
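The compose file itself is not reproduced in this guide, so the following is a minimal sketch assembled from the names and ports used throughout it; the backend service name, build context, and image tags are assumptions:

```yaml
# docker-compose.yml - minimal sketch. The database service and Ollama
# container names match the commands used elsewhere in this guide; the
# backend service name, build context, and image tags are assumptions.
services:
  lokichat-db:
    image: postgres:16
    environment:
      POSTGRES_DB: lokichat
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data

  ollama:
    image: ollama/ollama
    container_name: lokichat_ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama

  backend:
    build: ./backend   # assumed build context
    ports:
      - "8081:8081"
    environment:
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
    depends_on:
      - lokichat-db
      - ollama
    # Note: inside the Compose network the backend must address the
    # database as lokichat-db:5432 and Ollama as ollama:11434,
    # not localhost.

volumes:
  db_data:
  ollama_data:
```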
```bash
# Start all services
docker-compose up -d

# View logs
docker-compose logs -f
```

After starting the containers, you need to pull your preferred AI model into Ollama:

```bash
# Access the Ollama container
docker exec -it lokichat_ollama ollama pull llama2
# Or pull other models like:
docker exec -it lokichat_ollama ollama pull mistral
docker exec -it lokichat_ollama ollama pull codellama
# List available models
docker exec -it lokichat_ollama ollama list
```

Popular model options:
- llama2 - General-purpose chat
- mistral - Fast and efficient
- codellama - Code-focused conversations
- phi - Lightweight option
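After pulling a model, a quick sanity check from inside the container confirms it actually generates text (llama2 here is just the example model from above):

```bash
# Send a one-off prompt; the reply prints to the terminal
docker exec -it lokichat_ollama ollama run llama2 "Say hello in one short sentence."
```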
For frontend development:

```bash
# Navigate to frontend directory
cd frontend
# Install dependencies
npm install
# Run development server
npm run dev
```

For production, install PM2 globally:

```bash
npm install -g pm2
```

Then build the frontend:

```bash
# In frontend directory
npm run build
```

Create ecosystem.config.js in your frontend directory:

```javascript
module.exports = {
  apps: [{
    name: 'lokichat-frontend',
    script: 'npm',
    args: 'start',
    cwd: '/path/to/your/frontend',
    instances: 1,
    autorestart: true,
    watch: false,
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'production',
      PORT: 3000
    }
  }]
};
```

```bash
# Start the application
pm2 start ecosystem.config.js
# Save PM2 configuration
pm2 save
# Setup PM2 to start on boot
pm2 startup
```
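The prerequisites list Apache for production frontend serving, but this guide does not include a configuration for it. A common setup is a reverse proxy in front of the PM2-managed Next.js server; here is a minimal sketch, assuming a Debian-style Apache layout and a placeholder domain:

```apache
# /etc/apache2/sites-available/lokichat.conf
<VirtualHost *:80>
    # Placeholder domain - replace with your own
    ServerName lokichat.example.com

    # Forward all traffic to the Next.js server started by PM2 (port 3000)
    ProxyPreserveHost On
    ProxyPass / http://localhost:3000/
    ProxyPassReverse / http://localhost:3000/
</VirtualHost>
```

Enable the required modules and the site with sudo a2enmod proxy proxy_http, sudo a2ensite lokichat, then reload Apache.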
To run the backend:

```bash
# Navigate to backend directory
cd backend
# Run with Maven
./mvnw spring-boot:run
# Or with Gradle
./gradlew bootRun
```

Update application.yml or application.properties:

```yaml
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/lokichat
    username: ${DB_USER:lokichat}
    password: ${DB_PASSWORD:password}
    driver-class-name: org.postgresql.Driver
  jpa:
    hibernate:
      ddl-auto: update
    show-sql: false
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
  ai:
    ollama:
      base-url: http://localhost:11434
      chat:
        options:
          model: llama2
          temperature: 0.7
```
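If you use application.properties instead, the same settings translate line for line:

```properties
spring.datasource.url=jdbc:postgresql://localhost:5432/lokichat
spring.datasource.username=${DB_USER:lokichat}
spring.datasource.password=${DB_PASSWORD:password}
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=false
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.chat.options.model=llama2
spring.ai.ollama.chat.options.temperature=0.7
```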
For a quick start from a fresh clone:

```bash
# Clone the repository
git clone <your-repo-url>
cd LokiChat-AI
# Configure environment
cp .env.example .env
# Edit .env with your database credentials
# Start all services
docker-compose up -d
# Pull AI model
docker exec -it lokichat_ollama ollama pull llama2
# Your application is now running at:
# - Backend: http://localhost:8081
# - Frontend: http://localhost:3000 (if running separately)
# - Ollama: http://localhost:11434
```

If you run the components manually instead of through Docker Compose, bring them up in this order (a full command sequence is sketched after the list):

- Database: Start PostgreSQL
- Backend: Run Spring Boot application
- Ollama: Start Ollama service and pull models
- Frontend: Build and deploy with PM2 + Apache
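One possible command sequence for that order, assuming the Compose database service from this guide, a native Ollama install, and Maven for the backend; adapt paths and tools to your setup:

```bash
# 1. Database - reuse the Compose service (a native PostgreSQL works too)
docker-compose up -d lokichat-db

# 2. Ollama - start the server in the background and pull a model
ollama serve &
ollama pull llama2

# 3. Backend - run Spring Boot (Gradle: ./gradlew bootRun)
(cd backend && ./mvnw spring-boot:run) &

# 4. Frontend - build, then hand the server off to PM2
cd frontend
npm run build
pm2 start ecosystem.config.js
```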
Common issues and quick checks:

- Database Connection Issues

  ```bash
  # Check if PostgreSQL is running
  docker-compose ps

  # View database logs
  docker-compose logs lokichat-db
  ```
- AI Model Not Responding

  ```bash
  # Check Ollama status
  docker exec -it lokichat_ollama ollama list

  # Test Ollama directly
  curl http://localhost:11434/api/generate -d '{
    "model": "llama2",
    "prompt": "Hello world"
  }'
  ```
- Frontend Not Loading

  ```bash
  # Check PM2 status
  pm2 list

  # View frontend logs
  pm2 logs lokichat-frontend
  ```
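- Backend Not Responding

  A basic reachability check; the Compose service name backend is an assumption here, so match it to your compose file:

  ```bash
  # Is anything listening on the backend port?
  curl -i http://localhost:8081

  # Tail the backend logs via Compose
  docker-compose logs backend
  ```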
Security recommendations:

- Change default database credentials in .env
- Use environment variables for sensitive configuration
- Consider using HTTPS in production
- Regularly update Docker images
- Monitor resource usage
To contribute:

- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
For issues and questions, please create an issue in the GitHub repository.