
Batch LLM API Optimization #70

@iAmGiG


Feature Request: Batch LLM API Processing

Problem

The current system makes a separate LLM call for each trading day. For large date ranges, this is:

  • Inefficient (many API calls)
  • Expensive (per-call costs add up)
  • Slow (network latency per call)

Proposed Solution

Batch-process multiple days in a single LLM call:

  • Group days by week (5 trading days per batch)
  • Send the whole week's GEX data in one prompt
  • Get a single batch analysis response
  • Parse the response back into individual days
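The grouping step can be sketched as a simple chunking helper. This is a minimal illustration, not the project's actual code; `chunk_days` and the fixed 5-day batch size are assumptions:

```python
from itertools import islice

def chunk_days(trading_days, batch_size=5):
    """Split an ordered list of trading days into fixed-size batches.

    The final batch may be shorter when the date range is not a
    multiple of batch_size (e.g. a partial trading week).
    """
    it = iter(trading_days)
    while batch := list(islice(it, batch_size)):
        yield batch

# 12 trading days -> batches of 5, 5, and 2
days = [f"2024-01-{d:02d}" for d in range(2, 14)]
batches = list(chunk_days(days))
```

Chunking by a fixed count rather than by calendar week keeps the helper trivial; a calendar-aware version would need to account for market holidays.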

Example Batch Prompt

Analyze the following 5-day trading sequence:

Day 1 (2024-01-02): GEX=-5.3B, regime=NEGATIVE_GAMMA_LOW...
Day 2 (2024-01-03): GEX=-8.9B, regime=NEGATIVE_GAMMA_LOW...
Day 3 (2024-01-04): GEX=+2.1B, regime=POSITIVE_GAMMA_LOW...
...

Provide WHO/WHOM/WHAT analysis for each day.
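A prompt of that shape could be assembled from per-day records like so. The record keys (`date`, `gex`, `regime`) are hypothetical placeholders for whatever the real GEX data structure uses:

```python
def build_batch_prompt(day_records):
    """Assemble one prompt covering a whole batch of trading days.

    day_records: list of dicts with assumed keys 'date', 'gex',
    and 'regime'; the production record format may differ.
    """
    lines = [f"Analyze the following {len(day_records)}-day trading sequence:", ""]
    for i, rec in enumerate(day_records, start=1):
        lines.append(
            f"Day {i} ({rec['date']}): GEX={rec['gex']}, regime={rec['regime']}"
        )
    lines.append("")
    lines.append("Provide WHO/WHOM/WHAT analysis for each day.")
    return "\n".join(lines)
```

Numbering the days explicitly (`Day 1`, `Day 2`, …) matters later: the parser can key off those headers to split the batched response back into per-day results.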

Benefits

  • Reduce API calls by 5x (weekly batches)
  • Lower costs through bulk processing
  • Faster execution (fewer network round trips)
  • LLM can see day-to-day context patterns

Implementation Notes

  • Need robust response parsing
  • Handle batch failures gracefully
  • Maintain individual day result format
  • Test with different batch sizes (3, 5, 10 days)
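One way to get both robust parsing and graceful failure handling: split the response on the `Day N (date):` headers the prompt establishes, and signal a mismatch so the caller can fall back to single-day calls. This is a sketch under that header convention, not the project's parser:

```python
import re

def parse_batch_response(text, expected_days):
    """Split a batched LLM response back into per-day analyses.

    Assumes the model echoes a 'Day N (YYYY-MM-DD):' header per day,
    a convention the prompt would need to enforce. Returns None on a
    count mismatch so the caller can retry day-by-day.
    """
    parts = re.split(r"(?m)^Day \d+ \(\d{4}-\d{2}-\d{2}\):", text)
    analyses = [p.strip() for p in parts[1:]]  # drop any preamble before Day 1
    if len(analyses) != expected_days:
        return None  # batch failed -> fall back to individual calls
    return analyses
```

Returning `None` instead of raising keeps the failure path explicit at the call site, and the per-day strings preserve the individual-day result format the rest of the pipeline expects.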

Priority

Medium. Batching would significantly improve efficiency for large-scale testing.


Labels

api-integration (Alpha Vantage API related tasks) · enhancement (New feature or request) · llm-integration (LLM integration and prompt engineering)
