feat(run/chat): add Markdown rendering #153
Conversation
Signed-off-by: Dorin Geman <dorin.geman@docker.com>
In the CLI case, we may not want to blindly add color/Markdown; we should check that the terminal supports color, that we are not redirecting to a file, etc., because in those cases you want plain old text. An example of how to do this is in ggml-org/llama.cpp#15792 (the `common_log_should_use_colors_auto` function); grep, systemd, and ramalama would be other reference implementations of this.
Pull Request Overview
This PR adds Markdown rendering functionality to the chat feature, replacing the previous text output with styled Markdown rendering using the glamour library.
- Removes the `fatih/color` dependency and replaces it with `charmbracelet/glamour` for Markdown rendering
- Refactors the `Chat` method to return response content instead of directly printing to stdout
- Adds terminal-aware Markdown rendering with automatic width detection and fallback to plain text
Reviewed Changes
Copilot reviewed 4 out of 783 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| go.mod | Adds glamour and related dependencies, removes fatih/color dependency |
| desktop/desktop.go | Refactors Chat method to return response content and removes direct console output |
| desktop/desktop_test.go | Updates test to handle Chat method's new return signature |
| commands/run.go | Adds Markdown rendering functionality with terminal width detection and fallback handling |
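The refactor described in the overview can be pictured with a small sketch; the type and method names below are assumptions for illustration, not the actual `desktop` package API:

```go
package main

import "fmt"

// Client stands in for the desktop client; the real type and
// signature in desktop/desktop.go may differ.
type Client struct{}

// Chat returns the response content instead of printing it, so the
// caller (e.g. commands/run.go) decides how to render it: plain
// text or Markdown.
func (c *Client) Chat(prompt string) (string, error) {
	return "**hello** from " + prompt, nil
}

func main() {
	c := &Client{}
	out, err := c.Chat("model")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Separating "produce the content" from "render the content" is what makes the terminal-aware rendering and plain-text fallback possible in one place.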
```go
var (
	markdownRenderer *glamour.TermRenderer
	lastWidth        int
)
```
Copilot
AI
Sep 18, 2025
Global variables markdownRenderer and lastWidth are not thread-safe. If multiple goroutines call getMarkdownRenderer() concurrently, this could lead to race conditions when checking and updating these variables.
Good point. Thanks!
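One way to address the race is to guard the check-then-update sequence with a mutex. Sketched here with a stand-in renderer type so the example stays self-contained; the real code would construct a `glamour.TermRenderer` instead:

```go
package main

import (
	"fmt"
	"sync"
)

// renderer stands in for glamour.TermRenderer in this sketch.
type renderer struct{ width int }

var (
	mu               sync.Mutex
	markdownRenderer *renderer
	lastWidth        int
)

// getMarkdownRenderer rebuilds the cached renderer only when the
// terminal width changes; the mutex makes the check and the update
// a single atomic step for concurrent callers.
func getMarkdownRenderer(width int) *renderer {
	mu.Lock()
	defer mu.Unlock()
	if markdownRenderer == nil || width != lastWidth {
		markdownRenderer = &renderer{width: width}
		lastWidth = width
	}
	return markdownRenderer
}

func main() {
	r1 := getMarkdownRenderer(80)
	r2 := getMarkdownRenderer(80)
	fmt.Println(r1 == r2) // same width reuses the cached renderer
}
```

A `sync.Once` would also work if the width never changed, but since the width is re-checked on every call, a mutex (or a `sync.RWMutex`) fits better here.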
If it were me, I would do both like llama-server/grep do, with auto being the default. I do wonder about Markdown for the CLI in general; it makes sense to me for the UI, but for the CLI, people may not want that. Syntax highlighting does make sense though, for code blocks etc. Whether you want --color and --markdown to be one combined flag or separate flags, I'll let you make that choice, but I suggest making it a flag with three options, with auto being the default.
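A sketch of validating such a three-state flag value; `parseColorMode` is a hypothetical helper for illustration, not code from this PR:

```go
package main

import "fmt"

// parseColorMode validates the value of a hypothetical --color flag
// with three modes, auto being the default, mirroring the behavior
// of grep's --color=auto|always|never.
func parseColorMode(v string) (string, error) {
	switch v {
	case "", "auto":
		return "auto", nil
	case "always", "never":
		return v, nil
	}
	return "", fmt.Errorf("invalid --color value %q (want auto, always, or never)", v)
}

func main() {
	for _, v := range []string{"", "auto", "always", "never", "sometimes"} {
		mode, err := parseColorMode(v)
		fmt.Println(mode, err)
	}
}
```

With `auto` as the default, the terminal-detection logic only runs when the user has not forced a mode explicitly.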
I also think https://github.com/fatih/color/blob/1c8d8706604ee5fb9a464e5097ba113101828a75/color.go#L16-L23 is worth a look. You could either add it back as a dependency (I'm sure we can find other uses) and use its
Yup, that's super similar to what we did in llama-server |
This breaks the streaming experience. I need to come up with a solution to keep streaming the text and only block briefly while, for example, a Markdown code block is completed.
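One possible approach, sketched below: stream lines through untouched, but buffer from the opening of a code fence until it closes, then flush the complete block for rendering. `streamChunks` is hypothetical, not the PR's code:

```go
package main

import (
	"fmt"
	"strings"
)

// fence is the Markdown code-fence marker (three backticks).
var fence = strings.Repeat("`", 3)

// streamChunks emits plain lines immediately, but once a code fence
// opens it buffers lines until the fence closes, then emits the whole
// block at once — so a renderer can style the complete block without
// stalling the rest of the stream.
func streamChunks(lines []string, emit func(string)) {
	var buf []string
	inFence := false
	for _, line := range lines {
		if strings.HasPrefix(strings.TrimSpace(line), fence) {
			if inFence {
				buf = append(buf, line)
				emit(strings.Join(buf, "\n")) // flush the complete block
				buf = nil
				inFence = false
			} else {
				inFence = true
				buf = append(buf, line)
			}
			continue
		}
		if inFence {
			buf = append(buf, line)
		} else {
			emit(line)
		}
	}
	// Flush any leftovers if the stream ended mid-block.
	if len(buf) > 0 {
		emit(strings.Join(buf, "\n"))
	}
}

func main() {
	var out []string
	streamChunks(
		[]string{"hello", fence + "go", "x := 1", fence, "bye"},
		func(s string) { out = append(out, s) },
	)
	fmt.Println(len(out)) // 3: "hello", the whole code block, "bye"
}
```

A real implementation would also need to handle chunks that split mid-line and other block constructs (tables, lists), but the idea is the same: delay only what must be rendered whole.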
To try it locally, check out this branch and run:
E.g.,

In the second commit I added a `--color` flag to choose whether you want the output to be colored and formatted. I decided to also display the token usage:
