Refine traffic simulator typing and fairness handling #5
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
# Step 1: enqueue arrivals
for input_idx, destinations in enumerate(arrivals):
    for output_idx in destinations:
        if not 0 <= output_idx < self.num_outputs:
            continue  # ignore malformed destinations
        queue = queues[input_idx]
        if self.queue_limit is not None and len(queue) >= self.queue_limit:
            if slot >= self.warmup_slots:
                total_dropped += 1
            continue
        active = slot >= self.warmup_slots
        queue.append((output_idx, active))
        if slot >= self.warmup_slots:
            total_generated += 1
```
Count dropped packets in generated totals
Inside the simulator loop, total_generated (and the per-input/flow generated counts) are incremented only after a packet is successfully enqueued; packets dropped because the queue is full only bump total_dropped. The denominator used for throughput and drop rate (total_generated) therefore excludes dropped arrivals, so a scenario where half the arrivals are dropped reports throughput=1.0 and drop_rate=1.0 simultaneously. The generated counters should be incremented before the queue-capacity check so that both served and dropped packets contribute to the offered load.
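A minimal sketch of the suggested fix, extracted into a standalone function (the parameter names mirror the snippet above but are otherwise hypothetical): every valid post-warmup arrival is counted as generated before the capacity check, so dropped packets still contribute to the offered load.

```python
def enqueue_arrivals(arrivals, queues, num_outputs, queue_limit, slot, warmup_slots):
    """Enqueue one slot's arrivals; count generated packets before the drop check."""
    total_generated = 0
    total_dropped = 0
    for input_idx, destinations in enumerate(arrivals):
        for output_idx in destinations:
            if not 0 <= output_idx < num_outputs:
                continue  # ignore malformed destinations
            active = slot >= warmup_slots
            if active:
                total_generated += 1  # offered load includes packets dropped below
            queue = queues[input_idx]
            if queue_limit is not None and len(queue) >= queue_limit:
                if active:
                    total_dropped += 1
                continue  # queue full: packet is dropped, but already counted
            queue.append((output_idx, active))
    return total_generated, total_dropped
```

With this ordering, an input that offers two packets to a queue of capacity one reports generated=2, dropped=1, so throughput and drop rate sum sensibly instead of both reading 1.0.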
Summary
Testing
pytest -q
https://chatgpt.com/codex/tasks/task_e_68fcc3cbd31c8330bf142a9599a0dacb