Stream LLM output to terminal before opening $EDITOR #12

Open
@ivy

Description

OpenAI's official Go SDK should work. I used GPT-4o to evaluate some options for redrawing the screen efficiently. While raw ANSI escape sequences would work, I don't want to go down the rabbit hole of terminfo and other low-level terminal handling. Bubble Tea looks like a good fit for this use case, and GPT-4o was able to write a working example.


Design idea:

  • Make a tui package
  • Implement a tui.Stream struct to manage LLM output
    • Consumers can write to the stream
    • Consumers can close the stream
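
A minimal sketch of what the design above could look like. All names here (Stream, NewStream, Write, Close, Wait, Text) are hypothetical — the issue only specifies that consumers can write to and close the stream; the Bubble Tea wiring would sit on top of this:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
	"sync"
)

// Stream collects chunks of LLM output and signals completion.
// A Bubble Tea model could poll Text() on each tick, or Write
// could be extended to push a tea.Msg into a running program.
type Stream struct {
	mu     sync.Mutex
	chunks []string
	closed bool
	done   chan struct{}
}

func NewStream() *Stream {
	return &Stream{done: make(chan struct{})}
}

// Write appends a chunk of model output; it fails after Close.
func (s *Stream) Write(chunk string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.closed {
		return errors.New("tui: write to closed stream")
	}
	s.chunks = append(s.chunks, chunk)
	return nil
}

// Close marks the stream finished and unblocks Wait.
// Calling Close more than once is safe.
func (s *Stream) Close() {
	s.mu.Lock()
	defer s.mu.Unlock()
	if !s.closed {
		s.closed = true
		close(s.done)
	}
}

// Wait blocks until the producer has closed the stream.
func (s *Stream) Wait() { <-s.done }

// Text returns everything written so far.
func (s *Stream) Text() string {
	s.mu.Lock()
	defer s.mu.Unlock()
	return strings.Join(s.chunks, "")
}

func main() {
	s := NewStream()
	go func() {
		s.Write("Hello, ")
		s.Write("world!")
		s.Close()
	}()
	s.Wait()
	fmt.Println(s.Text())
}
```

The channel-plus-mutex split keeps the producer (SDK goroutine) and consumer (render loop) decoupled: the producer only calls Write/Close, the consumer only calls Text/Wait.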

Bubble Tea example
package main

import (
	"fmt"
	"time"

	tea "github.com/charmbracelet/bubbletea"
)

// Model stores the counter value.
type model struct {
	counter int
	quit    bool
}

// Init starts the timer loop.
func (m model) Init() tea.Cmd {
	return tick()
}

// Update handles messages.
func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	switch msg := msg.(type) {
	case tickMsg:
		if m.quit {
			return m, tea.Quit // Exit if quit flag is set
		}
		m.counter++
		return m, tick() // Continue updating
	case tea.KeyMsg:
		if msg.String() == "q" || msg.String() == "ctrl+c" {
			m.quit = true
			return m, tea.Quit
		}
	}
	return m, nil
}

// View renders the updated output.
func (m model) View() string {
	return fmt.Sprintf("Updated Line: %d\nPress 'q' to quit.", m.counter)
}

type tickMsg struct{}

func tick() tea.Cmd {
	return tea.Tick(time.Second, func(time.Time) tea.Msg {
		return tickMsg{}
	})
}

func main() {
	p := tea.NewProgram(model{})
	// Run replaces the deprecated Start in current Bubble Tea releases.
	if _, err := p.Run(); err != nil {
		panic(err)
	}
}

Metadata

Assignees: No one assigned
Labels: No labels
Status: Ready