The Ollama Chat Bot App is a feature-rich chat application that enables users to interact seamlessly with a chatbot. Designed to be versatile and user-friendly, the app connects to local and remote Ollama APIs, offering a customizable and dynamic chat experience. With its ability to process and respond to user inputs, it serves as a robust platform for chatbot interactions, supporting multiple models and advanced features like executing Python code snippets.
- Intuitive Chat Interface: Communicate with the chatbot through a responsive web interface.
- Server Configuration: Easily configure the server URL for the backend service.
- Multi-Model Support: Connect to multiple Ollama API models for diverse functionalities.
- Code Execution: Run Python code snippets directly within the chat (see the sketch after this list).
- Syntax Highlighting: View code blocks with enhanced readability using syntax highlighting.
- One-Click Code Actions: Copy and execute code snippets straight from the chat interface.
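As a rough illustration of the Code Execution feature, the hypothetical Next.js API route below shells out to a local Python interpreter. The route name, request shape, and use of `python3` are assumptions for illustration only, not necessarily how this repository implements it.

```typescript
// pages/api/run-python.ts (hypothetical route name; the real app may wire this differently)
import { execFile } from "node:child_process";
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  // The chat UI would POST the snippet as JSON, e.g. { "code": "print(1 + 1)" }.
  const code: string = req.body?.code ?? "";
  // "python3 -c <code>" runs the snippet; the timeout guards against runaway scripts.
  execFile("python3", ["-c", code], { timeout: 10_000 }, (error, stdout, stderr) => {
    if (error) {
      res.status(400).json({ error: stderr || error.message });
      return;
    }
    res.status(200).json({ output: stdout });
  });
}
```

The chat interface can then render the returned `output` (or `error`) next to the code block it came from.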
Before setting up the project, ensure the following requirements are met:
- Node.js and a package manager (npm, yarn, or pnpm): Installed on your system. Download Node.js.
- Python: Installed on your machine. Download Python.
- Ollama API: Installed and configured on a local or remote server.
- Models: Required models downloaded for the Ollama API.
Follow these steps to set up and run the Ollama Chat Bot App:
Clone the repository:
```bash
git clone https://github.com/your-username/ollama-chat-bot-app.git
cd ollama-chat-bot-app
```
Create a `.env` file in the root directory and add the following variable:
```env
NEXT_PUBLIC_OLLAMA_URL=http://your-server-ip:port
# Example:
# NEXT_PUBLIC_OLLAMA_URL=http://192.168.178.230:11434
```
Replace `http://your-server-ip:port` with the URL of your Ollama API server.
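Because the variable is prefixed with `NEXT_PUBLIC_`, Next.js inlines it into the client bundle at build time, so both server and browser code can read it from `process.env`. A minimal sketch of how the app could look it up (the helper name is illustrative and not necessarily part of this codebase):

```typescript
// Reads the Ollama base URL configured in .env.
// NEXT_PUBLIC_* variables are embedded into the browser bundle by Next.js at build time.
export function getOllamaBaseUrl(): string {
  const url = process.env.NEXT_PUBLIC_OLLAMA_URL;
  if (!url) {
    throw new Error("NEXT_PUBLIC_OLLAMA_URL is not set; check your .env file");
  }
  return url;
}
```

Note that changes to `.env` only take effect after restarting the development server.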
Install the necessary dependencies using your preferred package manager:
```bash
npm install
# or
yarn install
# or
pnpm install
```
Run the development server:
```bash
npm run dev
# or
yarn dev
# or
pnpm dev
```
Open http://localhost:3000 in your browser to view the app. The application supports hot-reloading, allowing real-time updates as you edit the source code.
To leverage the full potential of the chatbot, set up the Ollama API on your local or remote server:
- Install Ollama API: Follow the installation guide on the Ollama API GitHub repository.
- Download Models: Acquire the required models specified in the Ollama API documentation.
- Start the API Server: Run the Ollama API server and ensure it is accessible at the URL defined in your `.env` file.
Example:
```bash
# Starts the Ollama API server in the foreground.
ollama serve
```
Verify the server is running by accessing the API URL in your browser or via a tool like Postman.
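For a scripted check instead of Postman, the snippet below queries Ollama's `/api/tags` endpoint, which lists the models installed on the server. The response shape shown is a simplification and may vary between Ollama versions.

```typescript
// Quick reachability check for the Ollama API server (requires Node 18+ for global fetch).
const OLLAMA_URL = process.env.NEXT_PUBLIC_OLLAMA_URL ?? "http://localhost:11434";

async function checkOllama(): Promise<void> {
  // /api/tags returns the models that are available on the server.
  const res = await fetch(`${OLLAMA_URL}/api/tags`);
  if (!res.ok) {
    throw new Error(`Ollama responded with HTTP ${res.status}`);
  }
  const data = (await res.json()) as { models?: Array<{ name: string }> };
  console.log("Available models:", (data.models ?? []).map((m) => m.name).join(", "));
}

checkOllama().catch((err) => console.error("Ollama server not reachable:", err));
```

If the list comes back empty, pull at least one model (for example with `ollama pull`) before starting the app.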
The project is organized as follows:
- `pages/`: Contains the main application files and routes.
- `components/`: Reusable UI components for the chat interface.
- `utils/`: Utility functions, including API calls and helper methods (see the sketch below).
- `styles/`: Styling files for the application.
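As a rough sketch of how these pieces fit together, a helper in `utils/` might wrap Ollama's `/api/chat` endpoint as shown below. This is hypothetical code based on the public Ollama API; the actual implementation in this repository may differ.

```typescript
// Hypothetical chat helper; assumes the NEXT_PUBLIC_OLLAMA_URL variable from .env.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

const OLLAMA_URL = process.env.NEXT_PUBLIC_OLLAMA_URL ?? "http://localhost:11434";

export async function sendChat(model: string, messages: ChatMessage[]): Promise<string> {
  // stream: false asks Ollama for a single JSON response instead of a token stream.
  const res = await fetch(`${OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages, stream: false }),
  });
  if (!res.ok) {
    throw new Error(`Chat request failed: HTTP ${res.status}`);
  }
  const data = (await res.json()) as { message?: ChatMessage };
  return data.message?.content ?? "";
}
```

Supporting multiple models then comes down to passing a different `model` name per request, matching whichever models you pulled for the Ollama API.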
Contributions are welcome! To contribute:
- Fork the repository.
- Create a new feature branch: `git checkout -b feature/your-feature-name`
- Commit your changes: `git commit -m "Add your message here"`
- Push to the branch: `git push origin feature/your-feature-name`
- Open a pull request.
This project is licensed under the MIT License. See the `LICENSE` file for details.