[BUG] [RAG] Error when cloning with changed chunk_size #189

@richard-rfai

Bug Description

When using the "clone-modify" IC Op to clone a RAG run with a changed chunk_size, whether through the IC Ops panel or directly with a call to the dispatcher, the cloned run will immediately fail with the following error:

================================================================================
⚠️  Run 5 (Pipeline 5) FAILED
================================================================================
Shard: 1/4
Error: ray::QueryProcessingActor.process_batch() (pid=2950, ip=192.168.10.10, actor_id=442f9bb3c78745a02ce0251901000000, repr=<rapidfireai.evals.actors.query_actor.QueryProcessingActor object at 0x7b0508aa8800>)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/rapidfireai/evals/actors/query_actor.py", line 324, in process_batch
    raise RuntimeError(
RuntimeError: Error processing batch: AttributeError: 'NoneType' object has no attribute 'get_context'
================================================================================
This run has been marked as FAILED. The experiment will continue with other runs.
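For illustration, the traceback suggests a retrieval component on the cloned run is still `None` when `process_batch` first uses it, i.e. the clone-modify path changes `chunk_size` without rebuilding whatever object serves `get_context`. A minimal Python sketch of that failure mode (all class and attribute names here are hypothetical stand-ins, not the actual rapidfireai internals):

```python
class ClonedRun:
    """Hypothetical stand-in for a cloned RAG run whose retriever was
    invalidated by the changed chunk_size but never rebuilt."""

    def __init__(self, chunk_size: int):
        self.chunk_size = chunk_size
        self.retriever = None  # rebuild step skipped after clone-modify

    def process_batch(self, query: str) -> str:
        try:
            # self.retriever is None, so attribute access raises
            # AttributeError: 'NoneType' object has no attribute 'get_context'
            return self.retriever.get_context(query)
        except AttributeError as exc:
            raise RuntimeError(f"Error processing batch: AttributeError: {exc}") from exc


run = ClonedRun(chunk_size=512)
try:
    run.process_batch("What is FiQA?")
except RuntimeError as err:
    print(err)  # mirrors the reported error message
```

This reproduces the same `RuntimeError: Error processing batch: AttributeError: 'NoneType' object has no attribute 'get_context'` shape as the log above, consistent with the retriever never being re-initialized for the new chunk size.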

To Reproduce

Steps to reproduce the behavior:

  1. Run the Colab RAG notebook: https://colab.research.google.com/github/RapidFireAI/rapidfireai/blob/main/tutorial_notebooks/rag-contexteng/rf-colab-rag-fiqa-tutorial.ipynb
  2. As the experiment is running, clone run 1, changing the chunk_size from 256 to 512.
  3. See the cloned run fail and produce the error.

Expected Behavior

If the chunk_size can be changed, the run should successfully clone. If not, either chunk_size should not be presented as an option for cloning, or the server should reject the attempt and explain that chunk_size cannot be changed.
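If rebuilding for a new `chunk_size` is not feasible, the second option could look like a small pre-dispatch validation step. A sketch of that idea (function and parameter names are hypothetical, purely to illustrate the expected behavior, not an existing rapidfireai API):

```python
# Hypothetical guard: reject clone-modify requests that change
# parameters the clone path cannot currently rebuild.
IMMUTABLE_ON_CLONE = {"chunk_size"}

def validate_clone_config(original: dict, modified: dict) -> None:
    """Raise with a clear message if any clone-immutable parameter changed."""
    changed = sorted(
        k for k in IMMUTABLE_ON_CLONE if original.get(k) != modified.get(k)
    )
    if changed:
        raise ValueError(
            f"Cannot clone run: {changed} cannot be changed on a clone. "
            "Start a new run to use a different value."
        )

try:
    validate_clone_config({"chunk_size": 256}, {"chunk_size": 512})
except ValueError as err:
    print(err)
```

Rejecting the request up front with a message like this would at least turn the silent downstream actor failure into an actionable error for the user.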

Environment

  • Python version: 3.12
  • RapidFire AI version: 0.14.0

Error Logs

Same output as shown in the Bug Description above.
