doc: update release notes
ex3ndr committed Jan 16, 2024
1 parent 987d668 commit f1452ca
Showing 2 changed files with 8 additions and 3 deletions.
9 changes: 7 additions & 2 deletions README.md
@@ -6,7 +6,7 @@ Llama Coder is a better and self-hosted Github Copilot replacement for VS Studio

## Features
* 🚀 As good as Copilot
-* ⚡️ Fast. Works well on consumer GPUs. RTX 4090 is recommended for best performance.
+* ⚡️ Fast. Works well on consumer GPUs. Apple Silicon or RTX 4090 is recommended for best performance.
* 🔐 No telemetry or tracking
* 🔬 Works with any language, coding or human.

@@ -27,10 +27,11 @@ Install [Ollama](https://ollama.ai) on dedicated machine and configure endpoint
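The dedicated-machine setup mentioned above can be sketched as a short shell session, assuming a standard Ollama install (`OLLAMA_HOST` is Ollama's documented way to change the listen address; the network-touching commands are left commented out):

```shell
# Sketch: run Ollama on a dedicated machine so Llama Coder can reach it
# over the LAN. OLLAMA_HOST makes the server listen on all interfaces
# instead of only localhost; 11434 is Ollama's default port.
OLLAMA_HOST="0.0.0.0:11434"
export OLLAMA_HOST
# ollama serve &                         # start the API server
# ollama pull codellama:7b-code-q4_K_M   # fetch a model ahead of time
echo "Ollama endpoint: http://${OLLAMA_HOST}"
```

Point the extension's endpoint setting at that machine's address and port.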

## Models

-Currently Llama Coder supports only Codellama. Model is quantized in different ways, but our tests shows that `q4` is an optimal way to run network. When selecting model the bigger the model is, it performs better. Always pick the model with the biggest size and the biggest possible quantization for your machine. Default one is `codellama:7b-code-q4_K_M` and should work everywhere, `codellama:34b-code-q4_K_M` is the best possible one.
+Currently Llama Coder supports only Codellama. The model is quantized in several ways, but our tests show that `q4` is the optimal way to run the network. When selecting a model, bigger is better: always pick the largest model, at the highest quantization, that fits on your machine. The default is `stable-code:3b-code-q4_0`; it should work everywhere and outperforms most other models.

| Name | RAM/VRAM | Notes |
|---------------------------|----------|-------|
+| stable-code:3b-code-q4_0  | 3GB      |       |
| codellama:7b-code-q4_K_M | 5GB | |
| codellama:7b-code-q6_K | 6GB | m |
| codellama:7b-code-fp16 | 14GB | g |
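The selection rule above (largest model that fits your machine) can be sketched as a small helper. The `pick_model` function is hypothetical, and the thresholds come from the RAM/VRAM column of the table:

```shell
# Hypothetical helper: print the largest model from the table above
# that fits in the given amount of VRAM (in whole GB).
pick_model() {
  vram_gb="$1"
  if   [ "$vram_gb" -ge 14 ]; then echo "codellama:7b-code-fp16"
  elif [ "$vram_gb" -ge 6  ]; then echo "codellama:7b-code-q6_K"
  elif [ "$vram_gb" -ge 5  ]; then echo "codellama:7b-code-q4_K_M"
  else                             echo "stable-code:3b-code-q4_0"
  fi
}

pick_model 6    # prints codellama:7b-code-q6_K
```

The chosen name can then be passed straight to `ollama pull`.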
@@ -48,6 +49,10 @@ Most of the problems could be seen in output of a plugin in VS Code extension output

## Changelog

+## [0.0.11]
+- Added Stable Code model
+- Pause download only for specific model instead of all models

## [0.0.10]
- Adding ability to pick a custom model
- Asking user if they want to download model if it is not available
2 changes: 1 addition & 1 deletion package.json
@@ -2,7 +2,7 @@
"name": "llama-coder",
"displayName": "Llama Coder",
"description": "Better and self-hosted Github Copilot replacement",
"version": "0.0.10",
"version": "0.0.11",
"icon": "icon.png",
"publisher": "ex3ndr",
"repository": {
