LA-llama.cpp

Let's play with LLMs on LoongArch!

Overview

The project aims to port and optimize llama.cpp, a C++ LLM inference framework, for LoongArch. In particular, we want to tackle the following challenges:

  • Potential problems when porting the code to the LoongArch platform.
  • Inference performance optimization via SIMD, initially targeting the 3A6000 platform.
  • LLM evaluation on the LoongArch platform.
  • Interesting applications with accompanying presentations.

Project Structure

The overall directory structure of this project is organized as follows:

  • llama.cpp-b2430/: The original llama.cpp code, pinned at release b2430. During development, we try to keep changes within this directory to a minimum, revising only the build system (Makefile) and some conditionally compiled code (macros that splice in our work; see the sketch after this list). Most of the real work is in the src/ directory.
  • src/: This is where we put the real optimization code, i.e., loongarch_matmul.[cpp|h].
  • test/: The benchmark code, adapted from llama.cpp-b2430/examples/benchmark/benchmark-matmult.cpp. This means the performance measurements are directly comparable with results previously reported in the community.
  • docs/: Documentation produced alongside the project.
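
To make the splice-in approach concrete, below is a minimal sketch of the conditional-compilation pattern. `LA_MATMUL` and `la_dot_f32` are illustrative names for this sketch, not actual llama.cpp symbols; only `loongarch_matmul.h` corresponds to a real file in src/.

```cpp
// A minimal sketch of the macro-splice pattern: the stock code path
// stays intact and our code is pulled in behind a build flag.
// LA_MATMUL and la_dot_f32 are illustrative names.
#include <cstddef>

#ifdef LA_MATMUL
#include "loongarch_matmul.h"   // would declare la_dot_f32(...)
#endif

// Reference scalar kernel, as in unmodified llama.cpp/ggml.
static float dot_f32_ref(const float *x, const float *y, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; ++i) s += x[i] * y[i];
    return s;
}

float dot_f32(const float *x, const float *y, size_t n) {
#ifdef LA_MATMUL
    return la_dot_f32(x, y, n);   // optimized path from src/
#else
    return dot_f32_ref(x, y, n);  // stock path, untouched
#endif
}
```

Building with the flag defined (e.g., an assumed `make LA_MATMUL=1`) would select the optimized path; without it, the original llama.cpp code compiles unchanged.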

Plan

Based on the above challenges, the project can be divided into the following five stages:

Setup

  • Task: Build the basic environments and get familiar with the codebase.
  • Objective: Environment setup and self warm-up.

Porting

  • Task: Port llama.cpp to LoongArch platform.
  • Objective: Compile and run llama.cpp on 3A6000.

Optimization

  • Task: Optimize the efficiency of llama.cpp on LoongArch (focus on CPU).
  • Objective: Apply programming optimization techniques and document the improvements.

Evaluation

  • Task: Benchmark LLMs of various sizes.
  • Objective: Output a technical report.

Application

  • Task: Deploy usable LLM applications on LoongArch platforms.
  • Objective: Output well-written deployment documents and visual demos.

Miscellaneous

Progress and TODO list

Setup Stage

At this stage, we get familiar with the concept of cross compilation, build the project natively, and prepare the emulation environment (a small target-check sketch follows the list).

  • Compile and run original llama.cpp on x86 CPU.
  • Cross compile llama.cpp to RISCV64 and run with QEMU on x86 CPU (refer to ggerganov/llama.cpp#3453).
  • Set up cross compilation tools and QEMU environment for LoongArch.
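
As a quick sanity check for the toolchain and QEMU setup, a tiny program like the one below can be cross compiled and run under user-mode emulation. The macros are predefined by GCC for LoongArch targets; the toolchain and QEMU binary names further down are assumptions that vary by distribution.

```cpp
// A tiny target-check program: cross compile it for LoongArch and run
// it under QEMU user-mode emulation to validate the setup. The macros
// below are predefined by GCC for LoongArch targets.
#include <cstdio>

int main() {
#if defined(__loongarch64)
    std::printf("loongarch64 build");
#if defined(__loongarch_asx)
    std::printf(" (LASX enabled)");
#elif defined(__loongarch_sx)
    std::printf(" (LSX enabled)");
#endif
    std::printf("\n");
#else
    std::printf("not a loongarch64 build\n");
#endif
    return 0;
}
```

Assuming a typical toolchain naming, something like `loongarch64-linux-gnu-g++ -mlasx check.cpp -o check` followed by `qemu-loongarch64 -L /usr/loongarch64-linux-gnu ./check` should print the LASX-enabled line.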

Porting Stage

  • Alter the Makefile for LoongArch cross compilation.
  • Cross compile llama.cpp to LoongArch64.

Optimization Stage

Thanks to the excellent work from the Loongson team, we have a great opportunity to learn about SIMD acceleration with the LoongArch LSX/LASX vector instruction sets. Part of our work is based on theirs (a hedged intrinsics sketch follows the list).

  • Identify performance bottlenecks in llama.cpp.
  • Add LSX/LASX SIMD acceleration for llama.cpp.
  • Add LASX GEMM acceleration for llama.cpp.
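
For illustration, here is a sketch of what an LASX-accelerated fp32 dot product (the innermost matmul primitive) might look like. The intrinsic names follow GCC's <lasxintrin.h> builtins; treat the exact signatures as assumptions to verify against your toolchain.

```cpp
// A hedged sketch of an LASX fp32 dot product. Intrinsic names follow
// GCC's <lasxintrin.h> builtins; verify signatures against the toolchain.
#include <cstddef>

#if defined(__loongarch_asx)
#include <lasxintrin.h>
#endif

float dot_f32(const float *x, const float *y, size_t n) {
    float sum = 0.0f;
    size_t i = 0;
#if defined(__loongarch_asx)
    __m256 acc = (__m256)__lasx_xvldi(0);             // zeroed accumulator
    for (; i + 8 <= n; i += 8) {                      // 8 floats per 256-bit register
        __m256 vx = (__m256)__lasx_xvld((void *)(x + i), 0);
        __m256 vy = (__m256)__lasx_xvld((void *)(y + i), 0);
        acc = __lasx_xvfmadd_s(vx, vy, acc);          // acc += vx * vy (fused)
    }
    float lanes[8];
    __lasx_xvst((__m256i)acc, lanes, 0);              // spill and reduce horizontally
    for (int k = 0; k < 8; ++k) sum += lanes[k];
#endif
    for (; i < n; ++i) sum += x[i] * y[i];            // scalar tail / portable fallback
    return sum;
}
```

The scalar loop doubles as the portable fallback, so the same translation unit still builds and runs on targets without LASX.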

Benchmark Stage

Benchmarking goes hand in hand with optimization because we always want to know the exact improvement (see the throughput sketch after this list).

  • Measure performance improvement on Loongson 3A6000 processor.
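
For reference, the throughput figure such a benchmark reports boils down to counting 2*M*N*K floating point operations for an MxK by KxN matmul (one multiply and one add per inner step) and dividing by wall time. Below is a self-contained sketch; the naive kernel stands in for the real ggml call that test/ actually times.

```cpp
// Throughput calculation sketch: 2*M*N*K FLOPs divided by wall time.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const size_t M = 512, N = 512, K = 512;
    std::vector<float> A(M * K, 1.0f), B(K * N, 1.0f), C(M * N, 0.0f);

    auto t0 = std::chrono::steady_clock::now();
    // Naive reference matmul, standing in for the benchmarked kernel.
    for (size_t i = 0; i < M; ++i)
        for (size_t k = 0; k < K; ++k)
            for (size_t j = 0; j < N; ++j)
                C[i * N + j] += A[i * K + k] * B[k * N + j];
    auto t1 = std::chrono::steady_clock::now();

    double sec = std::chrono::duration<double>(t1 - t0).count();
    std::printf("%.2f gFLOPS (%.3f s)\n", 2.0 * M * N * K / sec / 1e9, sec);
    return 0;
}
```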

Finish Stage

Output a well-organized technical report.

  • Complete the technical report.