0.9.7 release

@yihong0618 yihong0618 released this 21 Oct 05:44
· 5 commits to main since this release
9261d92

Gemini Translator Improvements:

  • Default Model: Switched to gemini-1.5-flash.
  • New Options: Added support for use_context, temperature, prompt, custom model lists (--model_list), and key/model rotation, matching other translators.
  • Output Limit: Increased max_output_tokens from 2048 to 8192.
  • Safety Settings: Relaxed thresholds (BLOCK_NONE) to reduce BlockedPromptException occurrences (though not entirely eliminated).
  • Exponential Backoff: Implemented for retries.
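The relaxed settings above can be sketched as plain configuration data. This is a hypothetical illustration (the category names mirror the google-generativeai SDK; the dict layout is an assumption, not the project's actual code):

```python
# Hypothetical sketch of the relaxed Gemini configuration described above.
GENERATION_CONFIG = {
    "max_output_tokens": 8192,  # raised from 2048 in this release
}

# BLOCK_NONE on every harm category to reduce BlockedPromptException.
SAFETY_SETTINGS = [
    {"category": category, "threshold": "BLOCK_NONE"}
    for category in (
        "HARM_CATEGORY_HARASSMENT",
        "HARM_CATEGORY_HATE_SPEECH",
        "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "HARM_CATEGORY_DANGEROUS_CONTENT",
    )
]
```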

Retry Strategy: The retry mechanism rotates the API key on the first retry and switches to model rotation only on subsequent retries, minimizing potential inconsistencies in translation quality.
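The key-first, model-later retry order can be sketched as a small loop. The function name, parameters, and `base_delay` knob are hypothetical, chosen for illustration; only the rotation order and exponential backoff come from the notes above:

```python
import itertools
import time


def translate_with_retry(text, keys, models, call, max_retries=3, base_delay=1.0):
    """Hypothetical retry loop: rotate the API key on the first retry,
    switch to model rotation only on later retries, with exponential backoff."""
    key_cycle = itertools.cycle(keys)
    model_cycle = itertools.cycle(models)
    key, model = next(key_cycle), next(model_cycle)
    for attempt in range(max_retries + 1):
        try:
            return call(text, key=key, model=model)
        except Exception:
            if attempt == max_retries:
                raise
            if attempt == 0:
                key = next(key_cycle)      # first retry: try another key
            else:
                model = next(model_cycle)  # later retries: switch model
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
```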

New Option:

  • Interval Option: Added a --interval [float] (seconds) option to control request frequency. Exponential backoff is now implemented, but the original fixed sleep time is retained. This addresses Gemini's lower RPM limits (1000-2000 vs. OpenAI's 10,000) and free-tier restrictions. Suggested values: 0.03s for Gemini Flash (2000 RPM limit) and 4s for the free tier (15 RPM).
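The suggested values follow directly from the per-minute limits: the minimum interval is 60 seconds divided by the RPM cap. A minimal sketch (the helper name is hypothetical):

```python
def suggested_interval(rpm_limit: float) -> float:
    """Minimum seconds between requests to stay under a requests-per-minute limit."""
    return 60.0 / rpm_limit


# Gemini Flash at 2000 RPM -> 0.03s; free tier at 15 RPM -> 4.0s
```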

Testing:

  1. Uncomment line 90 in gemini_translator.py (# print(model)) to inspect the model object and verify parameter changes.
  2. Default (Gemini Flash): python3 make_book.py --book_name test_books/animal_farm.epub --model gemini --test
  3. Gemini Pro: python3 make_book.py --book_name test_books/animal_farm.epub --model geminipro --test
  4. All Options (Free Tier API with Retries): python3 make_book.py --book_name test_books/animal_farm.epub --model gemini --model_list gemini-1.5-pro-002,gemini-1.5-flash-002 --test --test_num 30 --use_context --temperature 0.5 --prompt prompt_template_sample.json --interval 0.1

Thanks @risin42