Releases: qduc/chat

0.16.1

30 Jan 16:26

Added

  • User context to draft management for improved conversation handling

Changed

  • Extracted web-fetch to standalone package for reusability

Fixed

  • Tool outputs are now deduplicated by tool_call_id or name to prevent double rendering during streaming (see the sketch after this list)
  • Tool output that previously required a double-click to expand now expands on a single click
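
A minimal sketch of the deduplication idea, keeping only the latest streamed output per tool_call_id (or name as a fallback). The ToolOutput shape and function name are illustrative, not the project's actual code.

```ts
// Illustrative sketch only — the real implementation in qduc/chat may differ.
interface ToolOutput {
  tool_call_id?: string;
  name?: string;
  content: string;
}

// Keep the last occurrence per tool_call_id (falling back to name), so a
// later streamed update replaces the earlier partial output instead of
// both being rendered.
function dedupeToolOutputs(outputs: ToolOutput[]): ToolOutput[] {
  const byKey = new Map<string, ToolOutput>();
  for (const output of outputs) {
    const key = output.tool_call_id ?? output.name ?? JSON.stringify(output);
    byKey.set(key, output);
  }
  return Array.from(byKey.values());
}
```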

Full Changelog: v0.16.0...v0.16.1

0.16.0

29 Jan 14:48

Added

  • Explicit OpenAI provider types: openai-responses for the official OpenAI Responses API and openai-completions for OpenAI-compatible providers such as OpenRouter or local LLMs (see the configuration sketch after this list)
  • Periodic log cleanup for upstream logs in non-test environments
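
A hypothetical configuration sketch for the two provider types. Only the openai-responses and openai-completions values come from these notes; the field and provider names are assumptions.

```ts
// Hypothetical shape — field names are illustrative.
type ProviderType = 'openai-responses' | 'openai-completions';

interface ProviderConfig {
  name: string;
  type: ProviderType; // explicit type replaces the old URL-based heuristic
  baseUrl: string;
  apiKey?: string;
}

const providers: ProviderConfig[] = [
  // Official OpenAI Responses API
  { name: 'openai', type: 'openai-responses', baseUrl: 'https://api.openai.com/v1' },
  // OpenAI-compatible chat completions (OpenRouter, local LLMs, ...)
  { name: 'openrouter', type: 'openai-completions', baseUrl: 'https://openrouter.ai/api/v1' },
  { name: 'local', type: 'openai-completions', baseUrl: 'http://localhost:11434/v1' },
];
```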

Changed

  • Replaced URL-based provider heuristic with specific provider type configuration

Fixed

  • Error message no longer persists when changing conversations
  • Error message clears without requiring user interaction

Removed

  • Unused quality_level parameter
  • Unused dependencies and exports

Full Changelog: v0.15.3...v0.16.0

0.15.3

29 Jan 10:57

Added

  • Reddit content fetching via JSON API for improved reliability
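
Reddit serves most pages as JSON when .json is appended to the URL, which avoids scraping HTML. A minimal fetch sketch, with the function name invented and error handling and response parsing simplified.

```ts
// Sketch only — query strings and edge cases are not handled here.
async function fetchRedditThread(postUrl: string): Promise<unknown> {
  const jsonUrl = postUrl.replace(/\/?$/, '.json');
  const res = await fetch(jsonUrl, {
    // Reddit tends to reject requests without a user agent.
    headers: { 'User-Agent': 'chat-web-fetch' },
  });
  if (!res.ok) {
    throw new Error(`Reddit fetch failed: ${res.status}`);
  }
  return res.json();
}
```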

Fixed

  • Draft content now persists when switching conversations or starting a new chat
  • Stale cache and data leakage issues that occurred when switching users

Full Changelog: v0.15.2...v0.15.3

0.15.2

29 Jan 04:56

Fixed

  • Tool call ID collisions between users in multi-user environments
  • Different users could not reuse the same provider name or ID
  • Gemini provider not fetching its model list in the background
  • Hung providers blocking background refresh and batch model list calls
  • Code blocks inside thinking blocks causing the thinking block to close prematurely
  • Session index drift, covered by a new migration consistency test

Changed

  • Cleaned up config.provider design for improved maintainability

Full Changelog: v0.15.1...v0.15.2

0.15.1

26 Jan 18:26

Added

  • Customizable storage keys for favorites and recent models in ModelSelector and JudgeModal (see the sketch after this list)
  • Bias-mitigation masking mechanism for the model evaluation (judgment) feature
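
A sketch of what customizable storage keys could look like, assuming localStorage-backed favorite and recent model lists. The prop names and keys are hypothetical.

```ts
// Hypothetical props — the actual ModelSelector / JudgeModal APIs may differ.
interface ModelSelectorProps {
  // Separate keys let the judge selector keep its own favorites and
  // recents, independent of the main chat selector.
  favoritesStorageKey?: string; // e.g. 'judge:favorite-models'
  recentsStorageKey?: string;   // e.g. 'judge:recent-models'
}

function loadModelList(key: string): string[] {
  try {
    return JSON.parse(localStorage.getItem(key) ?? '[]');
  } catch {
    return [];
  }
}

function saveModelList(key: string, models: string[]): void {
  localStorage.setItem(key, JSON.stringify(models));
}
```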

Changed

  • Refactored comparison feature into ModelSelector using contextual actions per row for improved UX
  • Decoupled judge model selection from the primary chat model with persistence support
  • Judge model selection now moves the selected model to the top of the list for easier recognition

Fixed

  • Title generation being requested multiple times in comparison mode
  • Provider could not be changed in the judge model selector
  • Gemini provider models could not be used as the judge
  • Type errors across the codebase
  • API integration cleaned up for better maintainability

Full Changelog: v0.15.0...v0.15.1

0.15.0

26 Jan 06:14

Added

  • N-way judge evaluation for comparing responses from multiple models with configurable judge models
  • Real-time conversation title updates for new and existing conversations

Changed

  • Improved backend test coverage with additional test cases
  • Improved frontend test coverage with additional test cases
  • Enhanced useChat hook for better maintainability and reliability
  • Refactored MessageList.tsx component for improved code quality
  • Optimized mobile UI header row layout
  • Judge evaluation now uses actual model names instead of generic 'primary' label
  • Removed category parameters from SearXNG search tool for better search result quality

Fixed

  • Type errors in the codebase
  • Ensured messageId is correctly assigned in useChat hook for non-primary conversation cases
  • ChatHeader dropdown styling for consistent behavior across breakpoints
  • Reduced test output noise with --silent flag

Full Changelog: v0.14.1...v0.15.0

0.14.1

24 Jan 08:13

Changed

  • Improved visual presentation of judge response display
  • Updated delete icon to use consistent Trash component across UI

Full Changelog: v0.14.0...v0.14.1

0.14.0

24 Jan 07:40

Added

  • Judge/Evaluation System - Compare model responses with automated judge model evaluation, scoring, and reasoning
  • Custom Request Parameters - User-defined request parameters with multi-select support for advanced API configuration (see the sketch after this list)
  • Usage Tracking with Timing Metrics - Comprehensive performance insights including prompt tokens, cached tokens, and timing data
  • Judge Response Management - Delete judge responses from evaluation comparisons
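
A minimal sketch of how user-defined parameters with multi-select might be merged into an outgoing request; the types and function are illustrative, not the project's API.

```ts
// Illustrative only.
interface CustomParameter {
  id: string;       // auto-generated IDs arrived in 0.13.2
  key: string;      // e.g. 'temperature', 'top_p'
  value: unknown;
  enabled: boolean; // multi-select: several parameters can be active at once
}

// Merge the enabled custom parameters into the request body.
function applyCustomParameters(
  body: Record<string, unknown>,
  params: CustomParameter[],
): Record<string, unknown> {
  const extras = Object.fromEntries(
    params.filter((p) => p.enabled).map((p) => [p.key, p.value]),
  );
  return { ...body, ...extras };
}
```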

Changed

  • Message ID Protocol - Unified to use UUIDs consistently for assistant responses across frontend and backend
  • OpenAI API Compatibility - Updated response_format parameter handling (moved to text.format for compatibility; see the sketch after this list)
  • Judge Response Format - Enhanced judge evaluation response structure for better display and usability
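
A sketch of the response_format move, contrasting the Chat Completions shape with the Responses API shape, where the same structured-output settings live under text.format. Field layouts follow the public OpenAI documentation and may differ from this project's internal handling.

```ts
const schema = {
  type: 'object',
  properties: { answer: { type: 'string' } },
  required: ['answer'],
  additionalProperties: false,
};

// Chat Completions style: structured output via response_format.
const completionsBody = {
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Reply as JSON.' }],
  response_format: { type: 'json_schema', json_schema: { name: 'reply', schema } },
};

// Responses API style: the same settings move under text.format.
const responsesBody = {
  model: 'gpt-4o-mini',
  input: [{ role: 'user', content: 'Reply as JSON.' }],
  text: { format: { type: 'json_schema', name: 'reply', schema } },
};
```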

Fixed

  • Custom Parameters UI - Improved width consistency in custom request parameter popup items
  • Message ID Handling - Resolved issue where frontend mixed sequential (integer-based) and UUID formats in judge requests

Full Changelog: v0.13.2...v0.14.0

0.13.2

23 Jan 09:01

Added

  • Clear button for custom parameters and tools to quickly reset configurations
  • Copy button for custom parameters settings to duplicate existing configurations
  • Auto-generated IDs for custom parameter settings for better tracking and management

Full Changelog: v0.13.1...v0.13.2

0.13.1

22 Jan 16:35

Added

  • Show content of custom parameters on hover for better visibility

Changed

  • Improved tooltip visual styling

Fixed

  • Tooltips no longer remain visible after clicking buttons that open popups
  • Corrected prompt_tokens calculation in timings to properly account for cached tokens
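
A sketch of the corrected accounting, assuming cached tokens are reported as part of prompt_tokens (as in the OpenAI usage format) and should be excluded when computing a prompt-processing rate. The field and function names on the app side are assumptions.

```ts
interface Usage {
  prompt_tokens: number;
  prompt_tokens_details?: { cached_tokens?: number };
}

// Cached tokens are included in prompt_tokens, so a timing calculation
// should only count the tokens actually processed for this request.
function promptTokensProcessed(usage: Usage): number {
  const cached = usage.prompt_tokens_details?.cached_tokens ?? 0;
  return Math.max(usage.prompt_tokens - cached, 0);
}

// e.g. prompt_tokens = 1200, cached_tokens = 1000 -> 200 processed tokens;
// divide by the prompt phase duration to get tokens per second.
```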

Full Changelog: v0.13.0...v0.13.1