
Commit

Typo fixed
"MacBooc" > "MacBook"
fuad00 authored Dec 31, 2023
1 parent 241d47b commit 84f3334
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -14,7 +14,7 @@ Llama Coder is a better and self-hosted Github Copilot replacement for VS Studio

Minimum required RAM: 16GB is a minimum, more is better since even smallest model takes 5GB of RAM.
The best way: dedicated machine with RTX 4090. Install [Ollama](https://ollama.ai) on this machine and configure endpoint in extension settings to offload to this machine.
-Second best way: run on MacBooc M1/M2/M3 with enougth RAM (more == better, but 10gb extra would be enougth).
+Second best way: run on MacBook M1/M2/M3 with enougth RAM (more == better, but 10gb extra would be enougth).
For windows notebooks: it runs good with decent GPU, but dedicated machine with a good GPU is recommended. Perfect if you have a dedicated gaming PC.
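The dedicated-machine setup this diff context describes (install Ollama on a remote box, point the extension at its endpoint) can be sketched roughly as follows. This is a hedged illustration, not part of the commit: the host address and the `codellama` model name are placeholders, and the extension-setting name may differ between plugin versions.

```shell
# On the dedicated machine (e.g. the RTX 4090 box):
# bind Ollama to all interfaces so other machines can reach it
# (by default it listens on localhost only; 11434 is the default port).
OLLAMA_HOST=0.0.0.0:11434 ollama serve &

# Pull a code-completion model; codellama is one common choice.
ollama pull codellama:7b-code

# From the client machine, verify the endpoint is reachable.
# Replace <machine-ip> with the dedicated machine's address.
curl http://<machine-ip>:11434/api/tags
```

With the endpoint responding, the extension's Ollama endpoint setting would be set to `http://<machine-ip>:11434` so completions are offloaded to the remote machine.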

## Local Installation
@@ -69,4 +69,4 @@ Most of the problems could be seen in output of a plugin in VS Code extension ou

## [0.0.4]

-- Initial release of Llama Coder
+- Initial release of Llama Coder
