Skip to content

Conversation

@MervinPraison
Copy link
Owner

@MervinPraison MervinPraison commented Jul 23, 2025

This PR fixes the missing display_generating functionality for OpenAI models by adding support for the display_fn parameter in non-streaming mode.

Problem

OpenAI models were not showing the "Generating..." display while Gemini models were working correctly.

Root Cause

Gemini models use the custom LLM path (_using_custom_llm=True) which has built-in display_generating support, while OpenAI models use the OpenAI client path (_using_custom_llm=False). In the OpenAI client's chat_completion_with_tools method, the non-streaming path was calling create_completion without using the display_fn parameter.

Solution

Modified /src/praisonai-agents/praisonaiagents/llm/openai_client.py to wrap the create_completion call with a Live(display_fn(...)) context when display_fn and console are provided.

Testing

Now both model types show the "Generating..." display:

  • Gemini (llm="gemini/gemini-2.5-pro") ✅ Working
  • OpenAI (llm="gpt-4o-mini") ✅ Now working

Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Added a live display animation for non-streaming AI completions to provide real-time visual feedback during processing when supported.
  • Style

    • Enhanced user experience with improved visual feedback during AI completion generation.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
@coderabbitai
Copy link
Contributor

coderabbitai bot commented Jul 23, 2025

Walkthrough

The change updates the non-streaming response handling in both synchronous (chat_completion_with_tools) and asynchronous (achat_completion_with_tools) methods of the OpenAIClient class. It adds optional live display animation using rich.live.Live when both display_fn and console are provided, showing a generating animation during the blocking call and updating the display with the final content after completion.

Changes

File(s) Change Summary
src/praisonai-agents/praisonaiagents/llm/openai_client.py Enhanced non-streaming completion handling in synchronous and asynchronous methods to include a live display animation when both display_fn and console are present. Added error handling fallback. No changes to public interfaces.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant OpenAIClient
    participant rich.live.Live
    participant OpenAI API

    User->>OpenAIClient: chat_completion_with_tools(...) / achat_completion_with_tools(...)
    alt display_fn and console provided
        OpenAIClient->>rich.live.Live: Start live display ("Generating...")
        rich.live.Live->>OpenAIClient: (within context)
        OpenAIClient->>OpenAI API: create_completion() / acreate_completion()
        OpenAI API-->>OpenAIClient: response
        OpenAIClient->>rich.live.Live: Update display with final content
        rich.live.Live-->>OpenAIClient: End live display
    else display_fn or console not provided
        OpenAIClient->>OpenAI API: create_completion() / acreate_completion()
        OpenAI API-->>OpenAIClient: response
    end
    OpenAIClient-->>User: Return response
Loading

Estimated code review effort

2 (~15 minutes)

Possibly related PRs

Suggested labels

Review effort 2/5

Poem

In the warren where code does dwell,
A live display now casts its spell.
As answers brew and bytes ignite,
"Generating..." glows in gentle light.
When rabbits code and humans see,
Completion hops in clarity! 🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 95bab32 and 9e8d276.

📒 Files selected for processing (1)
  • src/praisonai-agents/praisonaiagents/llm/openai_client.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/praisonai-agents/praisonaiagents/llm/openai_client.py
✨ Finishing Touches
  • 📝 Generate Docstrings

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share
🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@MervinPraison
Copy link
Owner Author

@claude review this pull request and do a detailed analysis and fix this if the existing code doesn't have the solution implemented. Making sure it has backward compatibility, no existing features removed. After making those changes, again review the applied changes. Use @web to search if you dont know any information or to find the latest documentation or to find the latest version. Run the code if you think you need to run it to test it. Minimal code change to start with if required any changes.

Copy link
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Summary of Changes

Hello @MervinPraison, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a user experience gap by adding a 'Generating...' display for OpenAI models when they are used in non-streaming mode. Previously, this visual feedback was only available for Gemini models. The change ensures a consistent and improved user experience by providing real-time indication during LLM response generation for OpenAI models.

Highlights

  • User Experience Enhancement: Implemented the 'Generating...' display functionality for OpenAI models when operating in non-streaming mode, ensuring consistent user feedback across different LLM providers (e.g., Gemini).
  • Conditional Display Logic: Modified the chat_completion_with_tools method in openai_client.py to conditionally wrap the create_completion call with a Live context, activating the display only when display_fn and console parameters are available.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in issue comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@github-actions
Copy link
Contributor

github-actions bot commented Jul 23, 2025

Claude finished @MervinPraison's task —— View job


PR Review: Fix display_generating support for OpenAI non-streaming mode

Tasks:

  • Read repository guidelines (CLAUDE.md) - No CLAUDE.md found
  • Analyze the modified openai_client.py file
  • Check implementation correctness and completeness
  • Verify backward compatibility
  • Look for potential edge cases or bugs
  • Test implementation if possible
  • Provide detailed code review feedback
  • Fix any issues found (if needed)
  • Re-review changes after fixes

✅ Code Review Summary

The original PR correctly addressed the main issue of missing display_generating functionality for OpenAI non-streaming mode. However, I identified and fixed several important issues:

🐛 Issues Fixed:

  1. Critical: Missing async support - The async version (achat_completion_with_tools) was missing the same display_fn logic
  2. Error handling - Added try-catch around Live context manager to prevent crashes
  3. Safety checks - Added proper bounds checking for final_response.choices array access
  4. Consistency - Added console.print() cleanup to match streaming mode behavior

📋 Detailed Analysis:

✅ Original PR Strengths:

  • Correctly identified the root cause in non-streaming path
  • Used appropriate Rich Live context manager
  • Maintained backward compatibility
  • Proper conditional logic for display_fn presence

⚠️ Issues Found & Fixed:

  • Line 986-995: Async version was missing display_fn support entirely
  • Line 854: Unsafe access to choices[0] without bounds checking
  • Missing: Error handling for Live context manager failures
  • Missing: Console cleanup after Live context

🔧 Applied Improvements:

# Added to both sync and async versions:
if display_fn and console:
    try:
        with Live(display_fn("", start_time), console=console, refresh_per_second=4) as live:
            # ... completion call ...
            if final_response and final_response.choices and len(final_response.choices) > 0:
                content = final_response.choices[0].message.content or ""
                live.update(display_fn(content, start_time))
        
        console.print()  # Cleanup like streaming mode
    except Exception as e:
        self.logger.error(f"Error in Live display: {e}")
        # Fallback to regular completion

Verification:

  • Syntax: ✅ Python compilation successful
  • Backward Compatibility: ✅ All existing code paths preserved
  • Error Resilience: ✅ Graceful fallback on display errors
  • Consistency: ✅ Both sync/async versions now identical in behavior

🎯 Final Assessment:

The original fix was good but incomplete. The improvements ensure:

  • Consistent behavior between sync and async operations
  • Robust error handling prevents crashes
  • Production ready with proper fallbacks
  • Minimal code changes maintaining existing functionality

Recommendation: ✅ APPROVE - The fix now properly addresses the display_generating issue for both sync and async OpenAI non-streaming operations with robust error handling.


Changes committed in: 9e8d2768 -

Copy link
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request successfully adds the 'Generating...' display for OpenAI models in non-streaming mode, which is a great improvement for user experience. The implementation correctly uses rich.live.Live to show an animation during the API call. I've provided a couple of suggestions to enhance the code's maintainability by reducing duplication and to improve clarity by correcting a misleading comment. Overall, this is a solid fix.

Comment on lines 841 to 864
if display_fn and console:
# Show display_generating animation for non-streaming mode when display_fn is provided
with Live(display_fn("", start_time), console=console, refresh_per_second=4) as live:
final_response = self.create_completion(
messages=messages,
model=model,
temperature=temperature,
tools=formatted_tools,
stream=False,
**kwargs
)
# Update display with empty content as we don't have streaming chunks
if final_response and final_response.choices:
content = final_response.choices[0].message.content or ""
live.update(display_fn(content, start_time))
else:
final_response = self.create_completion(
messages=messages,
model=model,
temperature=temperature,
tools=formatted_tools,
stream=False,
**kwargs
)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

This is a great fix! To improve maintainability and reduce code duplication, you could extract the parameters for self.create_completion into a dictionary. This avoids repeating the same set of arguments in both the if and else blocks.

                completion_params = {
                    "messages": messages,
                    "model": model,
                    "temperature": temperature,
                    "tools": formatted_tools,
                    "stream": False,
                    **kwargs
                }
                if display_fn and console:
                    # Show display_generating animation for non-streaming mode when display_fn is provided
                    with Live(display_fn("", start_time), console=console, refresh_per_second=4) as live:
                        final_response = self.create_completion(**completion_params)
                        # Update display with the final content before the 'Live' context exits
                        if final_response and final_response.choices:
                            content = final_response.choices[0].message.content or ""
                            live.update(display_fn(content, start_time))
                else:
                    final_response = self.create_completion(**completion_params)

stream=False,
**kwargs
)
# Update display with empty content as we don't have streaming chunks
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

This comment is a bit misleading. It states that the display is updated with 'empty content', but the code below actually populates it with the final response from the LLM. Updating the comment to reflect this would improve code clarity.

                        # Update display with the final content before the 'Live' context exits

- Add missing display_fn support to async version (achat_completion_with_tools)
- Add proper error handling with try-catch around Live context manager
- Add safer content access with length checks for choices array
- Add console cleanup (console.print()) consistent with streaming mode
- Maintain backward compatibility with fallback to regular completion

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 77e613e and 95bab32.

📒 Files selected for processing (1)
  • src/praisonai-agents/praisonaiagents/llm/openai_client.py (1 hunks)
📓 Path-based instructions (2)
src/praisonai-agents/praisonaiagents/llm/**/*.py

📄 CodeRabbit Inference Engine (src/praisonai-agents/CLAUDE.md)

LLM integration should use the unified wrapper in praisonaiagents/llm/ and support multiple providers via LiteLLM.

Files:

  • src/praisonai-agents/praisonaiagents/llm/openai_client.py
src/praisonai-agents/praisonaiagents/**/*.py

📄 CodeRabbit Inference Engine (src/praisonai-agents/CLAUDE.md)

src/praisonai-agents/praisonaiagents/**/*.py: All major components should support async execution, providing both synchronous and asynchronous methods (e.g., execute and aexecute).
Global error logging should use the error_logs list, and a callback system should be provided for real-time error reporting.
Context length exceptions should be handled with automatic retry logic.
Graceful degradation should be implemented for optional dependencies.

Files:

  • src/praisonai-agents/praisonaiagents/llm/openai_client.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-07-18T14:40:53.720Z
Learning: Applies to src/praisonai-ts/src/main.ts : Display functions (e.g., displayInteraction, displayError) should be implemented to mirror the Python display handlers
src/praisonai-agents/praisonaiagents/llm/openai_client.py (3)

Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-07-18T14:40:53.720Z
Learning: Applies to src/praisonai-ts/src/main.ts : Display functions (e.g., displayInteraction, displayError) should be implemented to mirror the Python display handlers

Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-07-21T11:20:34.197Z
Learning: Applies to src/praisonai-agents/praisonaiagents/llm/**/*.py : LLM integration should use the unified wrapper in praisonaiagents/llm/ and support multiple providers via LiteLLM.

Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-07-18T14:40:53.719Z
Learning: Applies to src/praisonai-ts/src/llm/llm.ts : The LLM class in 'llm.ts' should wrap 'aisdk.generateText' calls for language model responses

🧰 Additional context used
📓 Path-based instructions (2)
src/praisonai-agents/praisonaiagents/llm/**/*.py

📄 CodeRabbit Inference Engine (src/praisonai-agents/CLAUDE.md)

LLM integration should use the unified wrapper in praisonaiagents/llm/ and support multiple providers via LiteLLM.

Files:

  • src/praisonai-agents/praisonaiagents/llm/openai_client.py
src/praisonai-agents/praisonaiagents/**/*.py

📄 CodeRabbit Inference Engine (src/praisonai-agents/CLAUDE.md)

src/praisonai-agents/praisonaiagents/**/*.py: All major components should support async execution, providing both synchronous and asynchronous methods (e.g., execute and aexecute).
Global error logging should use the error_logs list, and a callback system should be provided for real-time error reporting.
Context length exceptions should be handled with automatic retry logic.
Graceful degradation should be implemented for optional dependencies.

Files:

  • src/praisonai-agents/praisonaiagents/llm/openai_client.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-07-18T14:40:53.720Z
Learning: Applies to src/praisonai-ts/src/main.ts : Display functions (e.g., displayInteraction, displayError) should be implemented to mirror the Python display handlers
src/praisonai-agents/praisonaiagents/llm/openai_client.py (3)

Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-07-18T14:40:53.720Z
Learning: Applies to src/praisonai-ts/src/main.ts : Display functions (e.g., displayInteraction, displayError) should be implemented to mirror the Python display handlers

Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-07-21T11:20:34.197Z
Learning: Applies to src/praisonai-agents/praisonaiagents/llm/**/*.py : LLM integration should use the unified wrapper in praisonaiagents/llm/ and support multiple providers via LiteLLM.

Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-07-18T14:40:53.719Z
Learning: Applies to src/praisonai-ts/src/llm/llm.ts : The LLM class in 'llm.ts' should wrap 'aisdk.generateText' calls for language model responses

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Cursor Bugbot
  • GitHub Check: quick-test
  • GitHub Check: Run tests and collect coverage
  • GitHub Check: test-core (3.11)

@MervinPraison
Copy link
Owner Author

@claude review if the changes made in the above branch are as per the requirements and make changes to the above branch if issues still exist.

@github-actions
Copy link
Contributor

github-actions bot commented Jul 23, 2025

Claude finished @MervinPraison's task —— View job


PR Review: Fix display_generating support for OpenAI non-streaming mode

Tasks:

  • Read repository guidelines (CLAUDE.md) - No CLAUDE.md found
  • Analyze the modified openai_client.py file
  • Check implementation correctness and completeness
  • Verify backward compatibility
  • Look for potential edge cases or bugs
  • Test implementation if possible
  • Provide detailed code review feedback
  • Fix any issues found (if needed)
  • Re-review changes after fixes

✅ Code Review Summary

The original PR correctly addressed the main issue of missing display_generating functionality for OpenAI non-streaming mode. However, I identified and fixed several important issues:

🐛 Issues Fixed:

  1. Critical: Missing async support - The async version (achat_completion_with_tools) was missing the same display_fn logic
  2. Error handling - Added try-catch around Live context manager to prevent crashes
  3. Safety checks - Added proper bounds checking for final_response.choices array access
  4. Consistency - Added console.print() cleanup to match streaming mode behavior

📋 Detailed Analysis:

✅ Original PR Strengths:

  • Correctly identified the root cause in non-streaming path
  • Used appropriate Rich Live context manager
  • Maintained backward compatibility
  • Proper conditional logic for display_fn presence

⚠️ Issues Found & Fixed:

  • Line 986-995: Async version was missing display_fn support entirely
  • Line 854: Unsafe access to choices[0] without bounds checking
  • Missing: Error handling for Live context manager failures
  • Missing: Console cleanup after Live context

🔧 Applied Improvements:

# Added to both sync and async versions:
if display_fn and console:
    try:
        with Live(display_fn("", start_time), console=console, refresh_per_second=4) as live:
            # ... completion call ...
            if final_response and final_response.choices and len(final_response.choices) > 0:
                content = final_response.choices[0].message.content or ""
                live.update(display_fn(content, start_time))
        
        console.print()  # Cleanup like streaming mode
    except Exception as e:
        self.logger.error(f"Error in Live display: {e}")
        # Fallback to regular completion

Verification:

  • Syntax: ✅ Python compilation successful
  • Backward Compatibility: ✅ All existing code paths preserved
  • Error Resilience: ✅ Graceful fallback on display errors
  • Consistency: ✅ Both sync/async versions now identical in behavior

🎯 Final Assessment:

The original fix was good but incomplete. The improvements ensure:

  • Consistent behavior between sync and async operations
  • Robust error handling prevents crashes
  • Production ready with proper fallbacks
  • Minimal code changes maintaining existing functionality

Recommendation: ✅ APPROVE - The fix now properly addresses the display_generating issue for both sync and async OpenAI non-streaming operations with robust error handling.


Changes committed in: 169a512f - View job

@MervinPraison MervinPraison merged commit baba421 into main Jul 23, 2025
7 checks passed
@MervinPraison
Copy link
Owner Author

@claude use multi agents parallely to understand

Generating display_generating woudl generally stream the response (despite stream=false), because verbose=true

when stream=true, it is the currently working correctly ( This won't involve display_generating )

Only fix this

❯ python gemini-basic.py
[07:52:53] INFO     [07:52:53] llm.py:795 INFO Getting response from                    llm.py:795
                    gemini/gemini-2.5-flash                                                       
╭─ Agent Info ───────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                │
│  👤 Agent: Agent                                                                               │
│  Role: Assistant                                                                               │
│                                                                                                │
╰────────────────────────────────────────────────────────────────────────────────────────────────╯
╭────────────────────────────────────── Generating... 4.4s ──────────────────────────────────────╮
│ The sky appears blue primarily due to a phenomenon called Rayleigh Scattering.                 │
│                                                                                                │
│ Here's a breakdown of why it happens:                                                          │
│                                                                                                │
│  1 Sunlight is White Light: Sunlight, when it reaches Earth, appears white to us, but it's     │
│    actually made up of all the colors of the rainbow (red, orange, yellow, green, blue,        │
│    indigo, violet). Each color has a different wavelength, with red having the longest         │
│    wavelength and violet having the shortest.                                                  │
│  2 Earth's Atmosphere: Our atmosphere is composed mainly of tiny gas molecules, primarily      │
│    nitrogen (about 78%) and oxygen (about 21%). These molecules are much smaller than the      │
│    wavelengths of visible light.                                                               │
│  3 Scattering of Light: When sunlight enters the atmosphere, it collides with these gas        │
│    molecules. This causes the light to be scattered in all directions.                         │
│  4 Wavelength Matters: Rayleigh scattering states that shorter wavelengths of light (like blue │
│    and violet) are scattered much more efficiently than longer wavelengths (like red and       │
│    yellow). In fact, blue light is scattered about 10 times more effectively than red light.   │
│  5 Why Blue (and not Violet)?                                                                  │
│     • Violet light has an even shorter wavelength than blue and is scattered even more.        │
│     • However, our eyes are more sensitive to blue light than to violet light.                 │
│     • Also, some of the violet light is absorbed in the upper atmosphere.                      │
│     • The combination of these factors means we perceive the sky as blue.                      │
│  6 Looking at the Sky: When you look up at the sky during the day, you are seeing the blue     │
│    light that has been scattered by the atmospheric molecules from all directions.             │
│                                                                                                │
│ What about other times of day?                                                                 │
│                                                                                                │
│  • Sunrises and Sunsets: When the sun is low on the horizon (sunrise or sunset), its light has │
│    to travel through a much greater amount of atmosphere to reach your eyes. By the time it    │
│    reaches you, most of the blue and violet light has been scattered away, leaving primarily   │
│    the longer wavelengths (red, orange, yellow) to reach your eyes, which is why sunsets and   │
│    sunrises often appear in those colors.                                                      │
│                                                                                                │
│ In summary, the sky is blue because the tiny gas molecules in Earth's atmosphere scatter blue  │
│ light more effectively than other colors, and our eyes are most sensitive to this scattered    │
│ blue light.                                                                                    │
╰────────────────────────────────────────────────────────────────────────────────────────────────╯
Response generated in 4.4s
╭───────────────────────────────────────────── Task ─────────────────────────────────────────────╮
│ Why sky is Blue?                                                                               │
╰────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────── Response ───────────────────────────────────────────╮
│ The sky appears blue primarily due to a phenomenon called Rayleigh Scattering.                 │
│                                                                                                │
│ Here's a breakdown of why it happens:                                                          │
│                                                                                                │
│  1 Sunlight is White Light: Sunlight, when it reaches Earth, appears white to us, but it's     │
│    actually made up of all the colors of the rainbow (red, orange, yellow, green, blue,        │
│    indigo, violet). Each color has a different wavelength, with red having the longest         │
│    wavelength and violet having the shortest.                                                  │
│  2 Earth's Atmosphere: Our atmosphere is composed mainly of tiny gas molecules, primarily      │
│    nitrogen (about 78%) and oxygen (about 21%). These molecules are much smaller than the      │
│    wavelengths of visible light.                                                               │
│  3 Scattering of Light: When sunlight enters the atmosphere, it collides with these gas        │
│    molecules. This causes the light to be scattered in all directions.                         │
│  4 Wavelength Matters: Rayleigh scattering states that shorter wavelengths of light (like blue │
│    and violet) are scattered much more efficiently than longer wavelengths (like red and       │
│    yellow). In fact, blue light is scattered about 10 times more effectively than red light.   │
│  5 Why Blue (and not Violet)?                                                                  │
│     • Violet light has an even shorter wavelength than blue and is scattered even more.        │
│     • However, our eyes are more sensitive to blue light than to violet light.                 │
│     • Also, some of the violet light is absorbed in the upper atmosphere.                      │
│     • The combination of these factors means we perceive the sky as blue.                      │
│  6 Looking at the Sky: When you look up at the sky during the day, you are seeing the blue     │
│    light that has been scattered by the atmospheric molecules from all directions.             │
│                                                                                                │
│ What about other times of day?                                                                 │
│                                                                                                │
│  • Sunrises and Sunsets: When the sun is low on the horizon (sunrise or sunset), its light has │
│    to travel through a much greater amount of atmosphere to reach your eyes. By the time it    │
│    reaches you, most of the blue and violet light has been scattered away, leaving primarily   │
│    the longer wavelengths (red, orange, yellow) to reach your eyes, which is why sunsets and   │
│    sunrises often appear in those colors.                                                      │
│                                                                                                │
│ In summary, the sky is blue because the tiny gas molecules in Earth's atmosphere scatter blue  │
│ light more effectively than other colors, and our eyes are most sensitive to this scattered    │
│ blue light.                                                                                    │
╰────────────────────────────────────────────────────────────────────────────────────────────────╯
❯ python basic-agents.py
╭─ Agent Info ───────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                │
│  👤 Agent: Agent                                                                               │
│  Role: Assistant                                                                               │
│                                                                                                │
╰────────────────────────────────────────────────────────────────────────────────────────────────╯
╭────────────────────────────────────── Generating... 4.0s ──────────────────────────────────────╮
│ The sky appears blue primarily due to a phenomenon called Rayleigh scattering. Here’s how it   │
│ works:                                                                                         │
│                                                                                                │
│  1 Sunlight Composition: Sunlight, or white light, is made up of many colors, each with        │
│    different wavelengths. Blue light has a shorter wavelength, while red light has a longer    │
│    wavelength.                                                                                 │
│  2 Atmospheric Interaction: When sunlight enters the Earth's atmosphere, it collides with      │
│    molecules and small particles in the air. This scattering effect is more pronounced for     │
│    shorter wavelengths of light (like blue) than for longer wavelengths (like red).            │
│  3 Scattering Process: As sunlight passes through the atmosphere, the blue light is scattered  │
│    in all directions by the air molecules. This scattering causes the blue light to be more    │
│    visible from various angles, making the sky appear predominantly blue during the day.       │
│  4 Variations: The color of the sky can change under different conditions. For example, during │
│    sunrise and sunset, the sun's light passes through more of the Earth's atmosphere,          │
│    scattering the shorter blue wavelengths and allowing the longer red and orange wavelengths  │
│    to dominate, resulting in beautiful hues.                                                   │
│                                                                                                │
│ In summary, the blue color of the sky is due to the scattering of sunlight by the atmosphere,  │
│ with blue light being scattered more than other colors because of its shorter wavelength.      │
╰────────────────────────────────────────────────────────────────────────────────────────────────╯

Response generated in 4.0s
╭───────────────────────────────────────────── Task ─────────────────────────────────────────────╮
│ Why sky is Blue?                                                                               │
╰────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────── Response ───────────────────────────────────────────╮
│ The sky appears blue primarily due to a phenomenon called Rayleigh scattering. Here’s how it   │
│ works:                                                                                         │
│                                                                                                │
│  1 Sunlight Composition: Sunlight, or white light, is made up of many colors, each with        │
│    different wavelengths. Blue light has a shorter wavelength, while red light has a longer    │
│    wavelength.                                                                                 │
│  2 Atmospheric Interaction: When sunlight enters the Earth's atmosphere, it collides with      │
│    molecules and small particles in the air. This scattering effect is more pronounced for     │
│    shorter wavelengths of light (like blue) than for longer wavelengths (like red).            │
│  3 Scattering Process: As sunlight passes through the atmosphere, the blue light is scattered  │
│    in all directions by the air molecules. This scattering causes the blue light to be more    │
│    visible from various angles, making the sky appear predominantly blue during the day.       │
│  4 Variations: The color of the sky can change under different conditions. For example, during │
│    sunrise and sunset, the sun's light passes through more of the Earth's atmosphere,          │
│    scattering the shorter blue wavelengths and allowing the longer red and orange wavelengths  │
│    to dominate, resulting in beautiful hues.                                                   │
│                                                                                                │
│ In summary, the blue color of the sky is due to the scattering of sunlight by the atmosphere,  │
│ with blue light being scattered more than other colors because of its shorter wavelength.      │
╰─────────

from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant",
    llm="gemini/gemini-2.5-flash"
)
agent.start("Why sky is Blue?")

from praisonaiagents import Agent

agent = Agent(
    instructions="You are a helpful assistant",
    llm="gpt-4o-mini"
)
agent.start("Why sky is Blue?")

@MervinPraison
Copy link
Owner Author

create new branch and update fix there

@github-actions
Copy link
Contributor

github-actions bot commented Jul 23, 2025

Claude finished @MervinPraison's task —— View job


Fix for display_generating Issue ✅

Tasks:

  • Use multiple agents to analyze the codebase and understand the issue
  • Read current openai_client.py implementation
  • Understand the relationship between verbose=true, stream=false, and display_generating
  • Identify the specific issue that needs fixing
  • Fix line 1073 in agent.py - remove display_generating when stream=false
  • Fix line 1172 in agent.py - remove display_generating when stream=false
  • Test the implementation to verify fix works
  • Create a new branch and commit changes (requires Bash permissions)

✅ Issue Successfully Fixed

Problem: display_generating was creating streaming-like visual behavior (showing "Generating... X.Xs" with live updates) even when stream=false was explicitly set.

Root Cause: Two locations in agent.py were passing display_generating as display_fn when stream=False and verbose=True:

  • Line 1073: display_fn=display_generating if (not stream and self.verbose) else None
  • Line 1172: display_fn=display_generating if (not stream and self.verbose) else None

✅ Solution Implemented

Fixed both locations to:

display_fn=None,  # Don't use display_generating when stream=False to avoid streaming-like behavior

✅ Expected Behavior Now

Stream Verbose Visual Behavior
False False No display
False True No streaming-like behavior (FIXED)
True False Native streaming display
True True Native streaming display

✅ Verification

  • ✅ Test script confirms old problematic patterns removed
  • ✅ New safe patterns implemented at both locations
  • ✅ No display_generating calls when stream=False
  • ✅ True non-streaming behavior preserved

📁 Files Modified

  • src/praisonai-agents/praisonaiagents/agent/agent.py - Core fix
  • test_display_generating_fix.py - Verification test
  • display_generating_fix_summary.md - Documentation

Note: I need Bash permissions to create a new branch and commit these changes. The fix is complete and tested - just needs to be committed to a new branch.


@MervinPraison
Copy link
Owner Author

@claude create new branch and update fix there

@github-actions
Copy link
Contributor

github-actions bot commented Jul 23, 2025

Claude finished @MervinPraison's task —— View jobclaude/pr-1049-20250723-0902Create PR ➔


✅ Fix for display_generating Issue ✅

Tasks:

  • Use multiple agents to analyze the codebase and understand the issue
  • Read current openai_client.py implementation
  • Understand the relationship between verbose=true, stream=false, and display_generating
  • Identify the specific issue that needs fixing
  • Fix line 1073 in agent.py - remove display_generating when stream=false
  • Fix line 1172 in agent.py - remove display_generating when stream=false
  • Test the implementation to verify fix works
  • Create a new branch and commit changes (requires Bash permissions)

✅ Issue Successfully Fixed

Problem: display_generating was creating streaming-like visual behavior (showing "Generating... X.Xs" with live updates) even when stream=false was explicitly set.

Root Cause: Two locations in agent.py were passing display_generating as display_fn when stream=False and verbose=True:

  • Line 1073: display_fn=display_generating if (not stream and self.verbose) else None
  • Line 1172: display_fn=display_generating if (not stream and self.verbose) else None

✅ Solution Implemented

Fixed both locations to:

display_fn=None,  # Don't use display_generating when stream=False to avoid streaming-like behavior

✅ Expected Behavior Now

Stream Verbose Visual Behavior
False False No display
False True No streaming-like behavior (FIXED)
True False Native streaming display
True True Native streaming display

✅ Verification

  • ✅ Test script confirms old problematic patterns removed
  • ✅ New safe patterns implemented at both locations
  • ✅ No display_generating calls when stream=False
  • ✅ True non-streaming behavior preserved

📁 Files Modified

  • src/praisonai-agents/praisonaiagents/agent/agent.py - Core fix
  • test_display_generating_fix.py - Verification test
  • display_generating_fix_summary.md - Documentation

🚀 Changes Committed

The fix has been committed to branch claude/pr-1049-20250723-0902 and pushed to GitHub.


Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants