With the increasing prevalence of Large Language Models (LLMs), the High-Performance Computing (HPC) community is keen to leverage LLMs for a range of challenges, from code analysis and generation to performance optimization and question answering. LM4HPC is designed around the needs of HPC users: it provides both internal components and external APIs tailored to HPC-specific tasks.
LM4HPC utilizes the OpenAI API and the Hugging Face API. If you haven't configured your access tokens yet, execute the following commands:
echo "HUGGINGFACEHUB_API_TOKEN='YOUR_HUGGINGFACE_API_TOKEN'" >> ~/.bashrc
echo "OPENAI_API_KEY='YOUR_OPENAI_API_KEY'" >> ~/.bashrc
Note: After adding the tokens, reload your shell configuration (e.g., source ~/.bashrc) so the variables take effect. If you're using a shell other than bash, adjust the configuration file accordingly (e.g., ~/.zshrc for Zsh).
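Once the tokens are in your environment, a quick sanity check can confirm that both are visible to Python before running LM4HPC. The small helper below is illustrative only (it is not part of LM4HPC); the variable names match those exported above:

```python
import os

def missing_tokens(env=None):
    """Return the names of required API tokens absent from the environment."""
    if env is None:
        env = os.environ
    required = ("HUGGINGFACEHUB_API_TOKEN", "OPENAI_API_KEY")
    return [name for name in required if not env.get(name)]

# Report any tokens that still need to be configured.
absent = missing_tokens()
if absent:
    print("Missing tokens:", ", ".join(absent))
else:
    print("All required API tokens are set.")
```

If the script reports a missing token, re-check the corresponding line in your shell configuration file and reload your shell.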
For a seamless experience, we recommend installing LM4HPC inside Conda or another virtual environment. As of September 1, 2023, LM4HPC is in its pre-alpha phase and receives frequent updates. We advise an editable installation so you can readily access the latest features before the beta release.
The subsequent steps are demonstrated using Conda. Adjust them based on your preferred virtual environment:
conda create -n lm4hpc python=3.10 openssl=1.1.1
conda activate lm4hpc
git clone git@github.com:LChenGit/LM4HPC.git
pip install -e /PATH/TO/LM4HPC
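After the editable install completes, you can verify that the package is importable from the active environment. The snippet below is a generic check, assuming the repository exposes a package named lm4hpc:

```python
import importlib.util

def is_installed(package: str) -> bool:
    """Return True if `package` can be imported in the current environment."""
    return importlib.util.find_spec(package) is not None

# "lm4hpc" is the assumed package name from the editable install above.
if is_installed("lm4hpc"):
    print("lm4hpc is importable.")
else:
    print("lm4hpc not found -- check that the environment is activated "
          "and rerun pip install -e.")
```

Run this inside the lm4hpc Conda environment; if the import check fails, confirm the environment is activated before reinstalling.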
To get started with LM4HPC, refer to the following example tested on Google Colab: https://colab.research.google.com/drive/1-Z6-7dXpjjJgG3-S90t1AN_eCPY-1No6?usp=sharing