This project allows you to run a ChatGPT-style code reviewer locally using Ollama and React. It lets you paste custom CSS and JavaScript code and receive a structured, Excel-style analysis output.
- Vite + React
- Tailwind CSS for styling
- Ollama for running local LLMs (CodeLlama, Mistral, etc.)
- Install Node.js (v16 or higher)
- Install Ollama
- Mac (Homebrew):

  ```bash
  brew install ollama
  ```

- Windows:
  - Go to https://ollama.com/download
  - Download the `.exe` installer and run it
  - After installation, open Command Prompt and run the following to confirm it's working:

    ```bash
    ollama --version
    ```
- Ubuntu/Linux:
  - Download the Linux `.deb` package from https://ollama.com/download
  - Then install it via terminal:

    ```bash
    sudo apt install ./ollama_<version>_amd64.deb
    ```

  - Confirm the install:

    ```bash
    ollama --version
    ```
Clone the repository and install dependencies:

```bash
git clone https://github.com/dhruvildave22/ollama-code-reviewer.git
cd ollama-code-reviewer
npm install
```
First, start the Ollama server:

```bash
ollama serve
```

This starts the Ollama server at http://localhost:11434.

Then load a model. You can use `codellama`, `mistral`, or any other local model; this project assumes `codellama`:

```bash
ollama run codellama
```
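Before sending review requests, the frontend could verify the server is actually reachable. A minimal sketch, assuming a hypothetical helper that is not part of the repo (the fetch function is injected so it can be stubbed without a running server):

```javascript
// Hypothetical helper: returns true if the Ollama server responds.
// `fetchFn` is passed in so tests can stub the network call.
async function isOllamaUp(fetchFn, baseUrl = "http://localhost:11434") {
  try {
    // GET /api/tags lists the locally installed models
    const res = await fetchFn(`${baseUrl}/api/tags`);
    return res.ok;
  } catch {
    return false; // server not started (`ollama serve`) or unreachable
  }
}
```

In the app this would be called as `isOllamaUp(fetch)` before enabling the review button.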
💡 You can switch models by changing the `model` value in `App.jsx`.
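For illustration, switching models only changes the `model` field of the request body sent to `/api/generate`. A sketch (the helper name `buildRequestBody` is hypothetical, not necessarily how `App.jsx` organizes this):

```javascript
// Hypothetical helper: the only change needed to switch models is
// the `model` value included in the /api/generate request body.
function buildRequestBody(code, model = "codellama") {
  return JSON.stringify({
    model,         // e.g. "mistral" after running `ollama pull mistral`
    prompt: `You are a code reviewer. Analyze the following code:\n${code}`,
    stream: false, // ask for the whole reply in one response
  });
}
```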
To list available models:

```bash
ollama list
```

To pull other models (optional):

```bash
ollama pull mistral  # optional: only if you want to test other models
```
Start the development server:

```bash
npm run dev
```

Visit http://localhost:5173 in your browser.
- Paste CSS and JS code to analyze customization logic
- Uses local LLM to output structured, Excel-like insights
- Table-formatted output that's easily readable
- Visual summary panel for copied code
- Styled with Tailwind (light, calm theme)
- Loading animation with disabled state
The app sends this prompt to Ollama:

```
You are a code reviewer. Analyze the following HTML, CSS, or JavaScript code and respond using this Markdown table format only:

| Column | Entry |
|--------|-------|
| Color/Font Changes? | ✅ Yes — ... |
| UI Text/Message Changes? | ❌ No |
...and so on.

Functionality Description: A short summary of what the code does.
```

This ensures clean, parseable Markdown tables.
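To turn that reply into rows for the UI, the table can be parsed line by line. A sketch, assuming a helper that is not taken from the repo:

```javascript
// Hypothetical parser: converts the model's Markdown table into
// { column, entry } objects; the |---|---| separator row is skipped.
function parseMarkdownTable(markdown) {
  return markdown
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.startsWith("|") && !/^\|[\s\-|]+\|$/.test(line))
    .map((line) => {
      const cells = line.split("|").slice(1, -1).map((c) => c.trim());
      return { column: cells[0], entry: cells[1] };
    });
}
```

Note that the header row (`| Column | Entry |`) comes back as the first element, so it can be rendered as the table head or dropped.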
```js
const response = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "codellama",
    prompt: "<your full prompt here>",
    stream: false,
  }),
});
const data = await response.json();
// With stream: false, the model's full reply is in data.response
```
- Use `stream: false` to ensure full table rendering
- Run `ollama run codellama` in a separate terminal
- Tailor the prompt output to match your table logic
MIT License — feel free to modify and build upon it.