docs: add Frequently Asked Questions to README.md (sashabaranov#462)
* docs: add Frequently Asked Questions to README.md

* Update README.md

Co-authored-by: Alexander Baranov <677093+sashabaranov@users.noreply.github.com>

vvatanabe and sashabaranov authored Jul 30, 2023
1 parent 62dc817 commit 71a2493
Showing 1 changed file with 31 additions and 0 deletions.
31 changes: 31 additions & 0 deletions README.md
@@ -694,6 +694,37 @@ OPENAI_TOKEN=XXX go test -v -tags=integration ./api_integration_test.go

If the `OPENAI_TOKEN` environment variable is not available, integration tests will be skipped.

## Frequently Asked Questions

### Why don't I get the same answer when I set the temperature field to 0 and ask the same question?

Setting the temperature field to 0 does not guarantee that you will always get the same response. Several factors come into play.

1. Go OpenAI Behavior: When you set the temperature field to 0 in Go OpenAI, the `omitempty` struct tag causes the field to be removed from the request (see the sketch below). Consequently, the OpenAI API applies its default value of 1.
2. Token Count for Input/Output: Even with a temperature of 0, a large number of input and output tokens can still lead to non-deterministic behavior. Requests approaching 32k tokens are the most likely to vary.

Due to the factors mentioned above, different answers may be returned even for the same question.
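
To see why the field disappears, here is a minimal, self-contained sketch. The struct below is a simplified stand-in for Go OpenAI's real request type, not the actual definition:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-in for the chat completion request struct in Go OpenAI.
// The real type also tags Temperature with `json:"temperature,omitempty"`.
type request struct {
	Model       string  `json:"model"`
	Temperature float32 `json:"temperature,omitempty"`
}

func main() {
	b, _ := json.Marshal(request{Model: "gpt-3.5-turbo", Temperature: 0})
	// Prints {"model":"gpt-3.5-turbo"}: the zero temperature is omitted,
	// so the API falls back to its default of 1.
	fmt.Println(string(b))
}
```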

**Workarounds:**
1. Using `math.SmallestNonzeroFloat32`: By specifying `math.SmallestNonzeroFloat32` in the temperature field instead of 0, you can mimic the behavior of setting it to 0.
2. Limiting Token Count: By limiting the number of tokens in the input and output and especially avoiding large requests close to 32k tokens, you can reduce the risk of non-deterministic behavior.

By adopting these strategies, you can expect more consistent results.
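
For example, workaround 1 can be applied like this (a minimal sketch based on the usual chat completion example; replace the API key and adjust the model to your setup):

```go
package main

import (
	"context"
	"fmt"
	"math"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	client := openai.NewClient("your token") // replace with your API key
	resp, err := client.CreateChatCompletion(
		context.Background(),
		openai.ChatCompletionRequest{
			Model: openai.GPT3Dot5Turbo,
			// A literal 0 would be dropped by omitempty; the smallest
			// positive float32 effectively behaves like temperature 0.
			Temperature: math.SmallestNonzeroFloat32,
			Messages: []openai.ChatCompletionMessage{
				{Role: openai.ChatMessageRoleUser, Content: "Hello!"},
			},
		},
	)
	if err != nil {
		fmt.Printf("ChatCompletion error: %v\n", err)
		return
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```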

**Related Issues:**
[omitempty option of request struct will generate incorrect request when parameter is 0.](https://github.com/sashabaranov/go-openai/issues/9)

### Does Go OpenAI provide a method to count tokens?

No, Go OpenAI does not offer a feature to count tokens, and there are no plans to provide such a feature in the future. However, if there's a way to implement a token counting feature with zero dependencies, it might be possible to merge that feature into Go OpenAI. Otherwise, it would be more appropriate to implement it in a dedicated library or repository.

For counting tokens, you might find the following links helpful:
- [Counting Tokens For Chat API Calls](https://github.com/pkoukk/tiktoken-go#counting-tokens-for-chat-api-calls)
- [How to count tokens with tiktoken](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
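
As an illustration, counting tokens with the third-party `tiktoken-go` library (not part of Go OpenAI) might look roughly like this; check that project's README for the current API:

```go
package main

import (
	"fmt"

	"github.com/pkoukk/tiktoken-go"
)

func main() {
	// Pick the encoding that matches the model you are calling.
	tkm, err := tiktoken.EncodingForModel("gpt-3.5-turbo")
	if err != nil {
		fmt.Printf("EncodingForModel error: %v\n", err)
		return
	}
	tokens := tkm.Encode("Hello, how are you?", nil, nil)
	fmt.Println("token count:", len(tokens))
}
```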

**Related Issues:**
[Is it possible to join the implementation of GPT3 Tokenizer](https://github.com/sashabaranov/go-openai/issues/62)

## Thank you

We want to take a moment to express our deepest gratitude to the [contributors](https://github.com/sashabaranov/go-openai/graphs/contributors) and sponsors of this project:
