**Question:**

I want to obtain the raw text output of the LLM to do some analysis, but it comes back wrapped in CrewAI's result objects.

**Replies: 1 comment**
Great question! There are a few ways to capture the raw LLM output in CrewAI:

**1. Using Callbacks (Recommended)**

You can create a custom callback handler to intercept LLM responses:

```python
from crewai import Agent, Task, Crew
from langchain.callbacks.base import BaseCallbackHandler
class LLMOutputCapture(BaseCallbackHandler):
    def __init__(self):
        self.outputs = []

    def on_llm_end(self, response, **kwargs):
        # Capture the raw LLM output
        self.outputs.append(response.generations[0][0].text)
        print(f"Raw LLM Output: {response.generations[0][0].text}")

# Use it
capture = LLMOutputCapture()
agent = Agent(
    role="Researcher",
    goal="Research topics",
    backstory="You are a researcher",
    callbacks=[capture],  # callback support on Agent varies by CrewAI version
)
```
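If your CrewAI version routes agent callbacks through to the underlying LangChain LLM, a run like the following sketch would populate `capture.outputs`. The `Task` fields below are placeholders:

```python
# Minimal end-to-end sketch, assuming the handler above was registered
# successfully; on LangChain-backed setups you can also attach it to the
# LLM object directly, e.g. ChatOpenAI(callbacks=[capture]).
task = Task(
    description="Research a topic and report the key points",  # placeholder
    expected_output="A short bullet-point summary",            # placeholder
    agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])
crew.kickoff()

# Every raw completion the handler observed, in call order
for i, text in enumerate(capture.outputs):
    print(f"--- LLM call {i} ---\n{text}")
```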
**2. Using Verbose Mode + Custom Logging**

```python
import logging
logging.basicConfig(level=logging.DEBUG)
crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,  # This shows all LLM interactions
)
```
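If you want to keep those traces for offline analysis rather than just reading the console, one pure standard-library option (nothing CrewAI-specific) is to point the root logger at a file; the filename below is arbitrary:

```python
import logging

# Route DEBUG-level records, including any the framework emits via the
# logging module, to a file for later analysis.
logging.basicConfig(
    level=logging.DEBUG,
    filename="crew_debug.log",  # hypothetical path
    filemode="w",
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
```

Note that anything printed directly to stdout (which some of the verbose output is) will not be caught by the logging module.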
**3. Accessing Task Results Directly**

After execution, you can access results per task:

```python
result = crew.kickoff()

# Each task result contains the raw output
for task_output in result.tasks_output:
    print(f"Task: {task_output.description}")
    print(f"Raw Output: {task_output.raw}")
    print(f"Pydantic: {task_output.pydantic}")  # If you used output_pydantic
```
The callback approach gives you the most granular control: you can capture every single LLM call, including tool use and intermediate reasoning steps. Hope this helps!