0.9.7 release
Gemini Translator Improvements:
- Default Model: Switched to `gemini-1.5-flash`.
- New Options: Added support for `use_context`, `temperature`, `prompt`, custom model lists (`--model_list`), and key/model rotation, matching other translators.
- Output Limit: Increased `max_output_tokens` from 2048 to 8192.
- Safety Settings: Relaxed thresholds (`BLOCK_NONE`) to reduce `BlockedPromptException` occurrences (though not entirely eliminated).
- Exponential Backoff: Implemented for retries.
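For reference, the relaxed configuration has roughly this shape when passed to the `google-generativeai` SDK. This is a sketch: the category names follow that SDK's conventions and are not copied from `gemini_translator.py`.

```python
# Sketch of the relaxed settings (assumed SDK-style names, not the
# literal code in gemini_translator.py). BLOCK_NONE disables client-side
# blocking for each harm category; the API can still refuse a prompt
# server-side, which is why BlockedPromptException is reduced but not gone.
SAFETY_SETTINGS = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
]

# Raised output limit from this release.
GENERATION_CONFIG = {"max_output_tokens": 8192}
```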
Retry Strategy: The retry mechanism prioritizes key rotation on the first retry, switching to model rotation only on subsequent retries to minimize potential inconsistencies in translation quality.
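A minimal sketch of that retry order, with exponential backoff between attempts. The names `translate_once`, `keys`, and `models` are illustrative placeholders, not the actual implementation:

```python
import itertools
import time

def translate_with_retries(text, keys, models, translate_once,
                           max_retries=4, base_delay=1.0):
    """Retry a translation with exponential backoff.

    Rotation order matches the release notes: the first retry only
    rotates the API key (same model, so translation quality stays
    consistent); later retries rotate the model as well.
    `translate_once` is a hypothetical callable standing in for the
    actual Gemini request.
    """
    key_cycle = itertools.cycle(keys)
    model_cycle = itertools.cycle(models)
    key, model = next(key_cycle), next(model_cycle)
    for attempt in range(max_retries + 1):
        try:
            return translate_once(text, key=key, model=model)
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
            key = next(key_cycle)   # every retry: rotate the key
            if attempt >= 1:        # second retry onward: rotate the model too
                model = next(model_cycle)
```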
New Option:
- Interval Option: Added a `--interval [float]` (seconds) option to control request frequency. Exponential backoff is now implemented, but the original fixed sleep time is retained. This addresses Gemini's lower RPM limits (1,000-2,000 vs. OpenAI's 10,000) and free-tier restrictions. Suggested values: 0.03s for Gemini Flash (2,000 RPM limit) and 4s for the free tier (15 RPM).
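The suggested values fall out of simple arithmetic: the minimum safe interval is 60 seconds divided by the RPM cap.

```python
def min_interval(rpm: float) -> float:
    """Smallest --interval (seconds between requests) that respects an RPM cap."""
    return 60.0 / rpm

# 2,000 RPM (Gemini Flash) -> 0.03 s
# 15 RPM (free tier)       -> 4.0 s
```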
Testing:
- Uncomment line 90 in `gemini_translator.py` (`# print(model)`) to inspect the model object and verify parameter changes.
- Default (Gemini Flash): `python3 make_book.py --book_name test_books/animal_farm.epub --model gemini --test`
- Gemini Pro: `python3 make_book.py --book_name test_books/animal_farm.epub --model geminipro --test`
- All Options (Free Tier API with Retries): `python3 make_book.py --book_name test_books/animal_farm.epub --model gemini --model_list gemini-1.5-pro-002,gemini-1.5-flash-002 --test --test_num 30 --use_context --temperature 0.5 --prompt prompt_template_sample.json --interval 0.1`
Thanks @risin42