
Commit 26c322b
Merge pull request #31 from remichu-ai/dev_stop_midstream
Bugfix for llama cpp
remichu-ai authored Sep 24, 2024
2 parents 121a759 + 08db59c commit 26c322b
Showing 1 changed file with 0 additions and 4 deletions.
src/gallama/backend/chatgenerator.py (0 additions, 4 deletions)

@@ -998,10 +998,6 @@ async def generate(
             g_queue.get_queue().put_nowait(gen_type)
 
-        # Create a task to check for disconnection
-        disconnect_check_task = None
-        if request:
-            disconnect_check_task = asyncio.create_task(self.check_disconnection(request, job, gen_queue_list))
 
 
         # llama cpp python generator is not async, hence running fake async..
         # Run the synchronous generator in a separate thread
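The four deleted lines created a watcher task for client disconnects in the llama.cpp code path. For context, a watcher of that shape could look like the sketch below; the body is an assumption inferred from the deleted call site, not the repository's actual check_disconnection implementation (request.is_disconnected() is the Starlette/FastAPI API; job.cancel() and the None sentinel are hypothetical):

import asyncio

async def check_disconnection(request, job, gen_queue_list):
    # Hypothetical body: poll until the client drops, then stop the job
    # and unblock any consumers waiting on the generation queues.
    while not await request.is_disconnected():   # Starlette/FastAPI API
        await asyncio.sleep(0.5)
    job.cancel()                                 # assumed job method
    for g_queue in gen_queue_list:
        g_queue.get_queue().put_nowait(None)     # assumed end-of-stream sentinel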

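The surviving context comments note that the llama-cpp-python generator is synchronous and is driven "fake async" in a separate thread. A minimal sketch of that pattern, assuming a blocking token generator (fake_async and llm_stream are illustrative names, not from the repository):

import asyncio

async def fake_async(sync_gen):
    # Pull from a blocking generator on a worker thread so the event
    # loop stays responsive between tokens.
    sentinel = object()
    while True:
        chunk = await asyncio.to_thread(next, sync_gen, sentinel)
        if chunk is sentinel:
            break
        yield chunk

# Usage (illustrative): async for token in fake_async(llm_stream): ...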