
Set Timeout but didn't stop #79

@TinaXinweiLi

Description

Thanks a lot for this cool repo!

I would like to check how to correctly set the timeout in config.yaml (my current config is below). My timeout is currently set to 600 s (10 min) and population_size is 6, so by my understanding the maximum running time of one iteration should be about 1 hour (6 × 10 min = 1 h). However, according to my log file (Evaluated program 51590846-626a-463d-87a1-3bf749e000a0 in 40497.26s), one iteration ran for around 11 hours before moving on to the next one. Has anyone run into the same issue before, or does anyone have a possible solution? Thanks a lot!
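For reference, here is the arithmetic behind my expectation as a small Python sketch (assuming evaluations run strictly sequentially, which is what I take parallel_evaluations: 1 below to mean, and that evaluator.timeout is a hard per-evaluation limit):

# Rough sanity check: expected worst-case wall time per iteration vs. observed
population_size = 6                  # database.population_size in config.yaml
timeout_s = 600                      # evaluator.timeout in config.yaml

expected_max_s = population_size * timeout_s
observed_single_eval_s = 40497.26    # from the log line further down

print(f"expected max per iteration: {expected_max_s / 3600:.1f} h")           # 1.0 h
print(f"observed single evaluation: {observed_single_eval_s / 3600:.1f} h")   # ~11.2 h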

config.yaml (api key removed):

# EVOLUTION PROCESS
max_iterations: 30               
checkpoint_interval: 5            # Save progress every 5 generations
log_level: "INFO"                 # Log verbosity

# LLM CONFIGURATION
llm:
  primary_model: o3-mini          # Efficient model for code mutations
  primary_model_weight: 1.0       # Use only one model (simpler/faster)
  secondary_model: o3-mini        # Same model, effectively disabled via weight
  secondary_model_weight: 0.0     # No secondary model
  api_base:                 
  api_key:                 

# PROMPT ENGINEERING
prompt:
  system_message: "You are an expert in financial optimization algorithms. Improve the feature vector selection method for report type tagging to maximize the weighted score of 0.1*accReturn + 0.9*CalmarRatio. Focus on improving the search_algorithm function to reliably find vectors for the report_type_factor_map. Focus on achieving higher values for both accReturn and CalmarRatio, where these metrics are derived from the evaluation_function(report_type_factor_map, EuropeRegion()), so that you can achieve higher weighted score of 0.1*accReturn + 0.9*CalmarRatio. Do not change evaluation_function."
  num_top_programs: 2             # Show top 2 performers in prompt
  use_template_stochasticity: true # Vary prompt phrasing for diversity

# POPULATION MANAGEMENT
database:
  population_size: 6             # Small population (due to long evals)
  archive_size: 2                 # Keep historical best
  num_islands: 2                  # Two islands
  elite_selection_ratio: 0.3      # Preserve top ~30% (about 2 of 6) each gen
  exploitation_ratio: 0.7         # Heavy focus on refining good solutions

# EVALUATION STRATEGY
evaluator:
  timeout: 600                    # 10min timeout 
  cascade_evaluation: false       # Single-stage evaluation only
  parallel_evaluations: 1         # Run evaluations sequentially
  use_llm_feedback: false         # No additional LLM critique

# EVOLUTIONARY OPERATORS
diff_based_evolution: true        # Modify existing code with diffs
allow_full_rewrites: false        # Prevent complete overhauls
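
For context on what I expected evaluator.timeout to do: my mental model is a hard wall-clock limit per evaluation, roughly like the sketch below. This is only a generic illustration, not openevolve's actual implementation, and run_evaluation / evaluate_with_hard_timeout and the program path are made-up names:

import multiprocessing as mp

def run_evaluation(program_path, result_queue):
    # Placeholder: call the real evaluation here and push its metrics onto the queue.
    result_queue.put({"primary_score": 0.0})

def evaluate_with_hard_timeout(program_path, timeout_s=600.0):
    queue = mp.Queue()
    proc = mp.Process(target=run_evaluation, args=(program_path, queue))
    proc.start()
    proc.join(timeout_s)
    if proc.is_alive():
        # A timeout enforced in the parent cannot interrupt a blocking call inside
        # the child, but terminating the child process always stops the evaluation.
        proc.terminate()
        proc.join()
        return {"error": f"evaluation exceeded {timeout_s}s"}
    return queue.get()

if __name__ == "__main__":
    print(evaluate_with_hard_timeout("initial_program.py"))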

Log output of the iteration that exceeded the timeout:

2025-06-16 19:12:39,090 - httpx - INFO - HTTP Request: POST https://moham-m6qbowu5-eastus2.cognitiveservices.azure.com/openai/deployments/o3-mini/chat/completions?api-version=2024-12-01-preview "HTTP/1.1 200 OK"
2025-06-17 06:27:36,349 - openevolve.evaluator - INFO - Evaluated program 51590846-626a-463d-87a1-3bf749e000a0 in 40497.26s: primary_score=0.0000, value_score=0.0000, stability_score=0.0000, mean_accReturn=0.0000, mean_calmar=0.0000, reliability=0.0000, time_score=0.0000, error=All trials failed
