
Boolean arguments to tools #173

@ultronozm


Evaluating

;; Requires the llm-claude and exec-path-from-shell packages.
(require 'llm-claude)
(require 'exec-path-from-shell)

(let* ((provider (make-llm-claude
                  :key (exec-path-from-shell-getenv "ANTHROPIC_KEY")
                  :chat-model "claude-3-7-sonnet-20250219"))
       (results nil)
       (tool (llm-make-tool
              :name "setNotifications"
              :description "Enables or disables notification alerts"
              :function (lambda (enable)
                          (concat
                           (if enable
                               "Notifications enabled"
                             "Notifications disabled")
                           (format " (arg = %s)" enable)))
              :args '((:name "enable"
                             :type boolean
                             :description "Whether to enable notifications"
                             :required t))))
       (instructions '("Enable notifications."
                       "Disable notifications.")))
  (dolist (instruction instructions)
    (let* ((prompt (llm-make-chat-prompt
                    instruction
                    :tools (list tool)))
           (response (llm-chat provider prompt)))
      (push (list :instruction instruction) results)
      (push (list :response response) results)
      (pp-display-expression
       (nreverse (copy-sequence results)) "*test*"))))

yields

((:instruction "Enable notifications.")
 (:response (("setNotifications" . "Notifications enabled (arg = t)")))
 (:instruction "Disable notifications.")
 (:response
  (("setNotifications" . "Notifications enabled (arg = :json-false)"))))

Note that the second response still says "enabled": :json-false is a non-nil keyword, so the tool's (if enable ...) takes the true branch. With OpenAI, we get something similar, but with :false rather than :json-false.

I think it'd be a bit annoying to have to check for equality with :false or with :json-false every time we want to determine that a boolean argument in a tool call takes the value nil, so this seems worth fixing at the level of llm rather than patching at the level of each tool call.
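
For illustration, without a fix every tool function taking a boolean would need a provider-specific truthiness check along these lines (the sentinel list here just covers the two values observed above):

(lambda (enable)
  ;; ENABLE may arrive as t, :false (OpenAI), or :json-false (Claude).
  ;; Both keywords are non-nil, so a plain (if enable ...) misfires.
  (if (and enable (not (memq enable '(:false :json-false))))
      "Notifications enabled"
    "Notifications disabled"))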

In the case of Claude, the issue is already present in the parsed response returned by plz-media-type-request. With OpenAI, it arises in the call to json-parse-string inside llm-provider-extract-tool-uses:

(json-parse-string "{\"enable\":true}") => ("enable" t)
(json-parse-string "{\"enable\":false}") => ("enable" :false)

I think one could address this by adding a preprocessing step to each llm-provider-extract-tool-uses, but figured I'd check if you had other ideas or suggestions.
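
For concreteness, here is a minimal sketch of such a preprocessing step; the function name is hypothetical, and the sentinel list just covers the two values seen above:

(defun my/llm-normalize-json-false (value)
  "Replace JSON false sentinels in VALUE with nil, recursing into conses."
  ;; Hypothetical helper, not part of llm.
  (cond ((memq value '(:false :json-false)) nil)
        ((consp value)
         (cons (my/llm-normalize-json-false (car value))
               (my/llm-normalize-json-false (cdr value))))
        (t value)))

Each provider's llm-provider-extract-tool-uses could run its decoded arguments through something like this before the tool function is called.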
