# llama-wiiu.cpp

This is just a quick little vibe-coded project I made to run LLMs on the CPU of the Wii U. The model I tested with was `qwen2.5-0.5b-instruct-q4_0.gguf`. Currently the prompt has to be hardcoded before building; it uses this template:

```
### Instruction
You are a helpful assistant. Reply conversationally and directly.
### Input:
Hi
### Response:
```

To change the "Hi" and other settings, edit this block in `wiiu/main.cpp`:

```cpp
//CONFIG ######################
const int n_predict = 32;                 // number of tokens to generate
const std::string textInput = "Hi!";      // user text spliced into the prompt template
const bool use_buggy_keyboard = false;    // opt into the buggy keyboard input path
//#############################
```
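For context, here is a minimal sketch of how `textInput` might get spliced into the template shown above. `build_prompt` is a hypothetical helper for illustration; the actual assembly in `wiiu/main.cpp` may differ.

```cpp
#include <string>

// Hypothetical helper (not from the repo): splice the user text into the
// hardcoded Alpaca-style template shown above.
std::string build_prompt(const std::string& textInput) {
    return "### Instruction\n"
           "You are a helpful assistant. Reply conversationally and directly.\n"
           "### Input:\n"
           + textInput + "\n"
           "### Response:\n";
}
```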

## Running on hardware

- Copy a single model (`.gguf`) to `sd:/model/`; don't put multiple models in there (the sketch after this list shows why).
- Run `llama-wiiu.wuhb`.
- A log is also written to `sd:/llama-wiiu.log`.
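As a rough illustration of why `sd:/model/` should contain exactly one model: a loader along these lines would simply take the first `.gguf` it finds and ignore the rest. `find_model` and the `fs:/vol/external01` mount path are assumptions, not taken from the repo.

```cpp
#include <dirent.h>
#include <string>

// Hypothetical model discovery (not from the repo): return the first .gguf
// in sd:/model/. Under WUT the SD card is commonly mounted at
// fs:/vol/external01 (an assumption here).
std::string find_model() {
    std::string found;
    DIR* dir = opendir("fs:/vol/external01/model");
    if (!dir) return found;
    while (dirent* entry = readdir(dir)) {
        std::string name = entry->d_name;
        if (name.size() > 5 && name.compare(name.size() - 5, 5, ".gguf") == 0) {
            found = "fs:/vol/external01/model/" + name;
            break;  // first match wins; extra models are silently ignored
        }
    }
    closedir(dir);
    return found;
}
```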

## Build

Requirements:

- devkitPro with WUT

Build the RPX/WUHB:

```sh
make
```

Outputs:

- `build-wiiu/llama-wiiu.rpx`
- `build-wiiu/llama-wiiu.wuhb`
