researchUSAI/research


installation:

Mac:

  1. open terminal (command + space, type "terminal")
  2. cd to folder: cd ~/Downloads/research-main
  3. run: chmod +x start.command
  4. run: ./start.command (opens UI; keep terminal open)
  5. next time: just run ./start.command from the folder

PC:

  1. open the folder where you downloaded the repository
  2. double-click start.bat (opens UI; keep window open)
  3. next time: double-click start.bat again
Screenshot 2025-12-20 at 12 37 45 AM Screenshot 2025-12-20 at 12 36 34 AM

/endpoint → type in the endpoint where Ollama is running on your computer, with /api/generate added at the end. for example, if your local host is http://localhost:11434/, type http://localhost:11434/api/generate in the window. if you're unsure what to do, search 'help me find my local ollama endpoint'.
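the endpoint is just the base URL with /api/generate appended. a minimal sketch (http://localhost:11434/ is the Ollama default base URL; yours may differ):

```python
# build the /api/generate endpoint from the Ollama base URL
# (http://localhost:11434/ is the default; substitute your own)
base = "http://localhost:11434/"
endpoint = base.rstrip("/") + "/api/generate"
print(endpoint)  # http://localhost:11434/api/generate
```

the rstrip("/") keeps the result correct whether or not your base URL ends with a slash.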

/model → type in the name of the model you plan to use, exactly as it appears at https://ollama.com/library. select your model from the Ollama window's drop-down menu, then type in the Ollama chat to trigger the download. if the model you plan to use is not available in the Ollama drop-down, run it from a terminal window: open a new terminal window, run the model's execution line (e.g. 'ollama run gpt-oss:20b'), and keep the terminal open. then navigate to the research window. you are ready to go! always close the running terminal before changing models.
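if you want to sanity-check your model name and endpoint outside the UI, here is a hedged sketch of a raw /api/generate request ('gpt-oss:20b' is just a placeholder; the actual request only succeeds once Ollama is running locally, so the send is left commented out):

```python
import json
import urllib.request

# placeholder model name; use the exact spelling from ollama.com/library
payload = {"model": "gpt-oss:20b", "prompt": "hello", "stream": False}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",   # default Ollama endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# uncomment once Ollama is running and the model is downloaded:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["response"])
```

with "stream": False the server returns a single JSON object instead of a stream of chunks, which is easier to inspect by hand.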

/release → exports the current conversation to your browser's downloads folder. you can drag and drop these releases into the research window as references from previous conversations; just make sure to give the model some context when you do. sometimes, if a conversation is too long or the material is too dense, it's better to condense your thoughts and formulate a new idea as a starting point.

/recover → deletes the last dropped file or the last response from the model, whichever came last. it's simply a way to undo whatever happened most recently. 0: yes, 1: no.

/intro → add a prompt prefix. session-bound (not persistent), not release-bound.

/outro → add a prompt postfix. session-bound (not persistent), not release-bound.

/polarity → 0: explanatory, full vocabulary. 1: concise, direct answers (yes or no, if applicable).
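conceptually, /intro and /outro just wrap each prompt you send during the session. a hypothetical illustration (these variable names are illustrative, not the tool's internals):

```python
# illustrative only: how a session prefix and postfix wrap a prompt
intro = "answer concisely."          # set with /intro
outro = "cite sources if possible."  # set with /outro
prompt = "why is the sky blue?"

full_prompt = "\n".join([intro, prompt, outro])
print(full_prompt)
```

because both are session-bound, the wrapping disappears when the session ends and is not carried into releases.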

DISCLAIMER: prompts are sent to a private render server, then returned to your local ollama model. the specific prompt structure isn't disclosed. your data is not stored or logged. prerequisites: ollama must be downloaded separately; on mac, the .command script will automatically install python 3.6+ for you.

For help learning Ollama, here is the fastest way:

https://youtu.be/UtSSMs6ObqY?si=SZlghpMhHZPMAvP9

TIP: I have found that you can easily check model names from the Ollama Desktop App.

Screenshot 2025-12-20 at 12 44 26 AM

You will still need to set the name of the model you would like to use with the /model command, but the list is a quick reference for validating the proper model name and spelling.

TYPO:

Screenshot 2025-12-20 at 12 49 02 AM

/RECOVERY:

Screenshot 2025-12-20 at 12 49 30 AM

/MODEL:

Screenshot 2025-12-20 at 12 49 59 AM

CORRECTION:

Screenshot 2025-12-20 at 12 50 15 AM

SOLVED:

Screenshot 2025-12-20 at 12 50 43 AM

/POLARITY: 1

Screenshot 2025-12-20 at 12 54 56 AM Screenshot 2025-12-20 at 12 55 33 AM

/RELEASE:

Screenshot 2025-12-20 at 12 56 10 AM Screenshot 2025-12-20 at 12 57 03 AM Screenshot 2025-12-20 at 12 57 31 AM

DRAG&DROP:

Drag and drop the release into re:search

Screenshot 2025-12-20 at 12 59 43 AM

/ENDPOINT:

Screenshot 2025-12-20 at 12 59 43 AM

The Ollama default endpoint is already set. If yours happens to be different, you will have to look it up independently; any search engine can help you locate it with little trouble. I hope you enjoy, and best of luck with your re:search!