installation:
Mac:
- open terminal (command + space, type "terminal")
- cd to the folder:
  cd ~/Downloads/research-main
- run:
  chmod +x start.command
- run:
  ./start.command (opens the UI; keep the terminal open)
- next time: just run ./start.command from the folder
PC:
- open the folder where you downloaded the repository
- double-click start.bat (opens the UI; keep the window open)
- next time: double-click start.bat again
/endpoint → type in the endpoint where Ollama is running on your computer, with /api/generate added at the end. for example, if your local host is http://localhost:11434/, type http://localhost:11434/api/generate in the window. if you're unsure, search 'help me find my local ollama endpoint'.
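for reference, here is a minimal sketch (in Python, using the requests library) of the kind of call Ollama's /api/generate endpoint accepts. the model name and prompt are placeholders; this only illustrates the endpoint you are pointing the app at, not the app's own request structure, which isn't disclosed.

  import requests

  # minimal sketch: a non-streaming request to Ollama's /api/generate
  # "gpt-oss:20b" and the prompt are placeholders for illustration
  resp = requests.post(
      "http://localhost:11434/api/generate",
      json={"model": "gpt-oss:20b", "prompt": "say hello", "stream": False},
  )
  print(resp.json()["response"])

if this script prints a reply, the endpoint you typed into /endpoint is correct.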
/model → type in the name of the model you plan to use, exactly as it appears at https://ollama.com/library. select your model from the drop-down menu in the Ollama window, then type anything into the Ollama chat to trigger the download. if the model you plan to use is not in the Ollama drop-down, run it from a terminal instead: open a new terminal window, run the model's execution line (e.g. 'ollama run gpt-oss:20b'), and keep that terminal open. then navigate back to the research window and you are ready to go. always close the running terminal before changing models.
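if you want to confirm the exact model name before setting it with /model, one option is to ask your local Ollama which models it already has installed. this sketch assumes the default endpoint and uses Ollama's /api/tags listing; the printed names are the exact strings /model expects.

  import requests

  # assumes Ollama is running on the default local port
  tags = requests.get("http://localhost:11434/api/tags").json()

  # print the exact model names (e.g. "gpt-oss:20b") known to your local Ollama
  for m in tags.get("models", []):
      print(m["name"])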
/release → exports the current conversation to your browser's downloads folder. you can drag and drop these releases into the research window as references from previous conversations; just make sure to give the model some context when you do. sometimes, if a conversation is too long or the material is too dense, it's better to condense your thoughts and formulate a new idea as a starting point.
/recover → deletes the last dropped file or the last response from the model, whichever came last. it's simply a way to undo the most recent event. confirm with 0: yes, 1: no.
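the app's internals aren't disclosed, but conceptually /recover behaves like popping the most recent event, whether a dropped file or a model response, off a single timeline. a purely illustrative Python sketch, not the app's actual code:

  # illustration only: one timeline of events, newest last
  timeline = [
      ("file", "previous_release.txt"),
      ("response", "here is a summary of your notes..."),
  ]

  def recover(events):
      # undo whatever happened last, file drop or model response
      if events:
          kind, item = events.pop()
          print(f"removed last {kind}: {item}")

  recover(timeline)  # removes the model response, since it came last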
/intro → add a prompt prefix. session-bound (not persistent), not release-bound.
/outro → add a prompt postfix. session-bound (not persistent), not release-bound.
/polarity → 0: explanatory, full vocabulary. 1: concise, direct answers (yes or no, if applicable).
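to make the intent of these three settings concrete, here is a small illustrative Python sketch of how a prefix, a postfix, and a concise/explanatory toggle could wrap a prompt. the function name and structure are assumptions for illustration, not the app's actual prompt format.

  # illustration only: combining an intro prefix, an outro postfix, and polarity
  def build_prompt(user_text, intro="", outro="", polarity=0):
      # polarity 1 asks for a concise, direct answer; 0 leaves the style open
      style = "answer concisely, yes or no if applicable." if polarity == 1 else ""
      parts = [intro, user_text, outro, style]
      return "\n".join(p for p in parts if p)

  print(build_prompt("is water wet?", intro="you are a careful reviewer.", polarity=1))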
DISCLAIMER: prompts are sent to a private render server and then returned to your local Ollama model; the specific structure isn't disclosed. your data is not stored or logged. prerequisites: Ollama must be downloaded separately. on Mac, the .command will automatically install Python 3.6+ for you.
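if the UI won't start, a quick way to check both prerequisites yourself is the sketch below: it verifies that your Python is 3.6 or newer and that a local Ollama server is reachable. it assumes the default port and is not part of the app.

  import sys
  import urllib.request

  # prerequisite 1: python 3.6+
  assert sys.version_info >= (3, 6), "python 3.6+ is required"

  # prerequisite 2: a local Ollama server (default port assumed)
  try:
      with urllib.request.urlopen("http://localhost:11434/", timeout=3) as r:
          print("ollama reachable, status:", r.status)
  except OSError as e:
      print("ollama not reachable:", e)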
For help learning Ollama, here is the fastest way.
https://youtu.be/UtSSMs6ObqY?si=SZlghpMhHZPMAvP9
TIP: I have found that you can easily check and change model names from the Ollama desktop app.
You will still need to set the model you want to use with the /model command, but the app's list is a quick reference for validating the exact model name and spelling.
TYPO → /RECOVER
/MODEL → CORRECTION → SOLVED
POLARITY: 1
/RELEASE → DRAG & DROP: drag and drop the release to re:search
/ENDPOINT → the Ollama default endpoint is already set. if yours happens to be different, you will have to find it independently; any search engine will help you locate it with minimal hassle. I hope you enjoy, and best of luck with your re:search!