This is a boilerplate for getting started building applications with the Gemini LLM (by Google).
First:
- Rename the `.env.example` file to `.env`.
- Open the newly renamed `.env` file and add your `GEMINI_API_KEY`.
- You can obtain your API key here.
With this done, we can continue 😎
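After renaming, the `.env` file needs a single entry. The value below is a placeholder, not a real key:

```shell
# .env — replace the placeholder with your real key
GEMINI_API_KEY=your_api_key_here
```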
Next, add your own prompt to the `prompts` folder. Let's add `my_custom_prompt.go`:
```
prompts
├── my_custom_prompt.go
├── personal_assistant.go
├── python_interpreter.go
└── talk_file.go
```
Now our prompts folder contains these files.
Important: `prompts/my_custom_prompt.go` must contain a prompt template and a function that receives the prompt written by the user and converts it into useful instructions for the LLM. This is a basic example, so we will receive the answer as plain Markdown.
```go
// prompts/my_custom_prompt.go
package prompts

import (
	"app/gemini"
	"fmt"
)

var YOUR_OWN_PROMPT_TEMPLATE = `Context: %s
User Prompt: %s
[YOUR_RESPONSE_HERE]`

// var YOUR_OWN_PROMPT = `HERE YOUR OWN PROMPT (JSON, MARKDOWN, YAML, ...)`

func YourOwnHandler(userPrompt string) string {
	return fmt.Sprintf(YOUR_OWN_PROMPT_TEMPLATE, gemini.BuildHistory(), userPrompt)
}
```
Now that we have `YourOwnHandler`, we must use it in our `main.go` (line 16):
```go
// main.go
package main

import (
	"app/gemini"
	"app/prompts"
	"fmt"
	"log"
)

func main() {
	if err := loadEnv("./.env"); err != nil {
		log.Fatal(err)
	}

	for {
		userPrompt := readPrompt("Prompt: ")
		prompt := prompts.YourOwnHandler(userPrompt)
		jsonMode := false // our template expects a Markdown response

		response, err := gemini.Request(prompt)
		if err != nil {
			fmt.Println("Error gemini request:", err)
			continue
		}

		gemini.AddMessage(prompt)
		processResponse(response, jsonMode)
	}
}
```
The gemini package includes two functions for managing the history (memory):

- `AddMessage`: if you want the LLM to remember something that you wrote, you must call this function after the request has been validated: `gemini.AddMessage(message)`
- `AddResponse`: if you want the LLM to remember something that it answered in the past, use: `gemini.AddResponse(response)`
There is also a helper for handling the LLM's reply. When the LLM responds with a JSON object, set the second parameter of `processResponse` to `true` to enable jsonMode, which makes the function process the response as JSON, for example: `processResponse(response, true)`. The function will then look for the "result" key in the JSON returned by the LLM. If the LLM's response is in Markdown format, set the second parameter to `false`. If you're using a custom JSON structure, make sure to update the `parseResponse` function and adjust the struct accordingly to correctly handle the response.
Note: `AddMessage` and `AddResponse` are used by default, so the LLM has "memory". The memory is limited to 100 messages, but you can modify this limit in `history.go` (`MAX_MEMORY`).
Start building an app with LLM now ❤️