A powerful tool for viewing and analyzing AI conversation transcripts from various platforms, including ChatGPT, Claude, and SpecStory exports from VS Code and Cursor AI.
I use this tool to view transcripts downloaded with the HumainLabs Firefox Extension for ChatGPT and Claude JSON Download.
The AI Transcript Viewer provides a comprehensive interface for analyzing and exploring AI conversation transcripts. Designed to work with multiple AI platforms, it offers powerful capabilities for navigating complex conversations, viewing AI thinking processes, and analyzing token usage patterns.
Feature | Description |
---|---|
Multi-Platform Support | Seamlessly view transcripts from ChatGPT, Claude, PaLM/Gemini, and LLaMA |
Three-Message View | See conversation context with previous, current, and next messages |
Thinking Process Visualization | View AI reasoning and thought processes when available |
Message Navigation | Navigate between messages with keyboard shortcuts or UI buttons |
Semantic Search | Find content based on meaning, not just keywords |
Token Count Analysis | Get accurate approximations of token usage by message and conversation |
Metadata Display | Examine detailed message metadata for technical analysis |
Voice Mode Support | Special handling for voice conversation transcripts |
Message Combination | Intelligent merging of thinking and response messages |
Message Export | Star important messages and export them as JSON for use in AI context windows |
The viewer supports transcript formats from the following AI platforms:
- ChatGPT - Including thinking process from a8km123 tool messages
- Claude - Multiple formats including JSON and SpecStory variants
This application uses built-in approximation methods to estimate token counts for different AI models:
Model | Tokenization Method |
---|---|
GPT models | Character and word-based approximation with adjustment for special characters and code blocks |
Claude models | Custom approximation considering Claude's tokenization patterns |
The token counts provided are approximations and may not match the exact counts used by AI providers. However, they are sufficiently accurate for most practical purposes such as:
- Estimating the cost of API calls
- Checking if a prompt fits within token limits
- Analyzing conversation length and distribution
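As a rough illustration of this kind of estimate (not the viewer's exact algorithm; the constants below are assumptions), a blended character- and word-based count might look like this:

```javascript
// Rough, character- and word-based token estimate -- illustrative only.
// Assumed heuristics: ~4 characters or ~0.75 words per token for English prose,
// with a small surcharge for code blocks and special characters.
function approximateTokenCount(text) {
  if (!text) return 0;

  const words = text.trim().split(/\s+/).length;
  const chars = text.length;

  // Average the word-based and character-based estimates.
  let estimate = Math.round((words / 0.75 + chars / 4) / 2);

  // Code and punctuation-heavy text tends to use more tokens per character.
  const codeChars = (text.match(/`{3}[\s\S]*?`{3}/g) || []).join('').length;
  const specialChars = (text.match(/[^A-Za-z0-9\s]/g) || []).length;
  estimate += Math.round(codeChars * 0.05 + specialChars * 0.1);

  return estimate;
}

// Example: a short greeting lands in the mid-single digits of tokens.
console.log(approximateTokenCount('Hello, how are you today?'));
```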
- Download the repository
- Open the `index.html` file in any modern web browser
- Upload a JSON transcript file
For developers who want advanced features:
```bash
# Make sure you have Node.js installed
# Navigate to the project directory
node serve.js

# The application will automatically open in your default browser
```
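The repository ships its own serve.js; purely for orientation, here is a minimal sketch of what a static server of this kind can look like using only Node built-ins (the port, MIME table, and browser-opening logic are assumptions, not the project's actual code):

```javascript
// Minimal static file server sketch -- illustrative only; the real serve.js may differ.
const http = require('http');
const fs = require('fs');
const path = require('path');
const { exec } = require('child_process');

const PORT = 8080; // assumed port

const MIME = {
  '.html': 'text/html',
  '.js': 'text/javascript',
  '.css': 'text/css',
  '.json': 'application/json',
};

http.createServer((req, res) => {
  // Map "/" to index.html and serve files relative to the project directory.
  const filePath = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
  fs.readFile(filePath, (err, data) => {
    if (err) {
      res.writeHead(404);
      res.end('Not found');
      return;
    }
    res.writeHead(200, { 'Content-Type': MIME[path.extname(filePath)] || 'application/octet-stream' });
    res.end(data);
  });
}).listen(PORT, () => {
  console.log(`Serving on http://localhost:${PORT}`);
  // Open the default browser (macOS: "open", Windows: "start", Linux: "xdg-open").
  const opener = process.platform === 'darwin' ? 'open' : process.platform === 'win32' ? 'start' : 'xdg-open';
  exec(`${opener} http://localhost:${PORT}`);
});
```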
← / → Arrow Keys - Navigate between messages
↑ / ↓ Arrow Keys - Jump to next/previous user message
Space - Toggle between single and three-message view
- Click the upload button or drag-and-drop a transcript file
- The viewer will automatically detect the format and display the conversation (a detection sketch follows this list)
- Use navigation controls to explore the transcript
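The real detection logic lives in the platform handlers; as a sketch, and assuming typical export shapes (a top-level `mapping` object for ChatGPT and a `chat_messages` array for Claude are both assumptions here), format sniffing can be as simple as checking top-level keys:

```javascript
// Illustrative format sniffing -- the field names below are assumptions about
// common ChatGPT/Claude export shapes, not the viewer's actual handler logic.
function detectTranscriptFormat(json) {
  if (json && typeof json === 'object') {
    if ('mapping' in json) return 'chatgpt';                 // ChatGPT exports keep messages in a "mapping" tree
    if (Array.isArray(json.chat_messages)) return 'claude';  // Claude exports use a flat "chat_messages" array
    if (Array.isArray(json) && json.every(m => m && typeof m === 'object' && 'role' in m)) {
      return 'generic';                                      // plain role/content arrays
    }
  }
  return 'unknown';
}
```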
- Click the search button in the top navigation
- Enter search terms
- Results will highlight matching messages
- Use navigation buttons to move between matches
Click on the token count in the header to:
- View detailed token usage for each message
- See total token consumption for the entire conversation
- Clear the token cache for fresh calculations
Click the metadata button on a message to examine:
- Raw message data
- Platform-specific properties
- Technical details about the message
This feature allows you to select specific messages for export to use in AI context windows:
- Click the star icon (⭐) on any message you want to save
- Star multiple messages across the conversation as needed
- Click the "Copy Starred" button in the top navigation bar
- A JSON array containing the content of all starred messages will be copied to your clipboard
- Paste this directly into an AI session to provide rich context from previous conversations
This is particularly useful for:
- Continuing conversations across different AI sessions
- Providing key insights from previous discussions to a new AI model
- Creating curated context for complex prompting techniques
- Building composite prompts from multiple transcript segments
The exported JSON preserves the role (user/assistant) and content of each message, making it ready to use in most AI context windows.
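For illustration, starring one user message and one assistant reply might put something like this on the clipboard (the message text here is made up, and fields beyond role and content are not guaranteed):

```json
[
  { "role": "user", "content": "Summarize the key decisions from our architecture discussion." },
  { "role": "assistant", "content": "We agreed to keep the platform handlers decoupled from the viewer UI..." }
]
```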
Key | Function |
---|---|
Space | Toggle view mode |
←/→ | Previous/Next message |
↑/↓ | Previous/Next user message |
Esc | Close modals |
The application uses a platform handler system that can be extended (a handler sketch follows these steps):
- Create a new handler in the `js/platforms/` directory
- Implement the required interface methods
- Register the handler in the platform interface
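The method names in the sketch below (canHandle, parse, getThinking) are assumptions chosen for illustration; the existing handlers in `js/platforms/` define the actual interface:

```javascript
// js/platforms/myplatform.js -- illustrative handler skeleton.
// Method and field names here are assumptions; mirror the existing handlers
// in js/platforms/ for the real contract.
class MyPlatformHandler {
  // Return true if this handler recognizes the uploaded JSON.
  canHandle(json) {
    return Boolean(json && json.source === 'myplatform'); // hypothetical marker field
  }

  // Normalize the platform-specific structure into the viewer's message list.
  parse(json) {
    return json.messages.map(m => ({
      role: m.author,
      content: m.text,
      metadata: m,
    }));
  }

  // Return the thinking/reasoning text for a message, if the platform provides it.
  getThinking(message) {
    return message.metadata.thinking || null;
  }
}

// Registration step -- the actual mechanism lives in the platform interface.
// registerPlatformHandler(new MyPlatformHandler());
```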
- `js/` - JavaScript files
  - `platforms/` - Platform-specific handlers
  - `tokenizers/` - Token counting implementations
- `css/` - Stylesheets
- `lib/` - External libraries
For developers who want to implement more precise token counting with external libraries, the code includes commented sections for loading tokenization libraries. This is an advanced option and requires serving the application from a proper web server.
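As one possible approach (the package name, version, and CDN URL below are assumptions; the repo's commented sections may reference something different), an exact tokenizer could be loaded at runtime like this:

```javascript
// Assumption: the gpt-tokenizer package loaded from a CDN ESM build. This requires
// serving the app over HTTP (e.g. node serve.js) and a <script type="module"> context.
const { encode } = await import('https://cdn.jsdelivr.net/npm/gpt-tokenizer@2/+esm');

function exactTokenCount(text) {
  return encode(text).length; // length of the BPE token array
}
```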