WIP: This introduces a new version and a few changes #27
Open
rokon12 wants to merge 24 commits into workshop from update-version
Conversation
- Update parent POM langchain4j.version to 1.4.0 (latest stable)
- Update step-03-tools module from hardcoded 0.36.2 to 1.4.0
- Update MCP and embeddings modules to compatible beta versions
- Migrate deprecated API methods:
  - chatLanguageModel() -> chatModel()
  - streamingChatLanguageModel() -> streamingChatModel()
  - onNext()/onComplete() -> onPartialResponse()/onCompleteResponse()
- Ensure all modules use consistent LangChain4j versions
- Move from beta to stable release for better reliability
- Replace maxTokens with maxCompletionTokens for the LangChain4j 1.4.0 API
- Add model-specific parameter validation (O1 and GPT-5 model restrictions)
- Refactor LangChainService to use a builder pattern with conditional parameters
- Update configuration property names for consistency
- Add support for the new GPT-5 model family in the allowed models list
- Improve code organization by extracting a createModel() method
- Replace maxTokens with maxCompletionTokens for the LangChain4j 1.4.0 API
- Add model-specific parameter validation for personality-based chat
- Support O1 and GPT-5 model restrictions in streaming chat models
- Update streaming API calls: onNext() -> onPartialResponse(), onComplete() -> onCompleteResponse()
- Migrate streamingChatLanguageModel() -> streamingChatModel() in the AiServices builder
- Add topP parameter support and allowed models configuration
- Enhance configuration with GPT-5 model family support
- Improve code organization with model validation helper methods
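The onNext()/onComplete() to onPartialResponse()/onCompleteResponse() rename can be illustrated with a small stand-in handler. FakeTokenStream below is a hypothetical simplification written for this sketch, not LangChain4j's actual TokenStream type; only the callback names come from the commit message:

```java
import java.util.function.Consumer;

// Hypothetical stand-in for a token stream. LangChain4j 1.x renamed the
// streaming callbacks from onNext/onComplete to
// onPartialResponse/onCompleteResponse; this class mimics the new shape.
class FakeTokenStream {
    private Consumer<String> onPartial = t -> {};
    private Consumer<String> onComplete = r -> {};

    FakeTokenStream onPartialResponse(Consumer<String> c) { this.onPartial = c; return this; }
    FakeTokenStream onCompleteResponse(Consumer<String> c) { this.onComplete = c; return this; }

    // Simulate a model emitting three tokens, then the complete response.
    void start() {
        String[] tokens = {"Hello", ", ", "world"};
        StringBuilder full = new StringBuilder();
        for (String t : tokens) {
            onPartial.accept(t);   // streamed chunk (was onNext)
            full.append(t);
        }
        onComplete.accept(full.toString()); // final response (was onComplete)
    }
}

public class StreamingDemo {
    public static void main(String[] args) {
        StringBuilder ui = new StringBuilder();
        new FakeTokenStream()
                .onPartialResponse(ui::append)            // was .onNext(...)
                .onCompleteResponse(r -> ui.append("!"))  // was .onComplete(...)
                .start();
        System.out.println(ui); // Hello, world!
    }
}
```

The migration itself is mostly mechanical: the handler bodies stay the same, only the registration method names change.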
- Replace maxTokens with maxCompletionTokens for the LangChain4j 1.4.0 API
- Add model-specific parameter validation for persistent chat memory
- Support O1 and GPT-5 model restrictions in streaming chat with memory
- Update streaming API calls: onNext() -> onPartialResponse(), onComplete() -> onCompleteResponse()
- Migrate streamingChatLanguageModel() -> streamingChatModel() in the AiServices builder
- Add topP parameter support and allowed models configuration
- Enhance persistent memory with better model compatibility
- Improve the Java concurrency chatbot with smart parameter handling
- Maintain all database persistence and memory management features
…support
- Add comprehensive error handling for chat model responses
- Add null response protection and logging
- Add exception handling with user-friendly error messages
- Enhance model parameter validation with a supportsTopP() method
- Fix frequency penalty support for O1 models (not supported)
- Improve parameter validation logic for better model compatibility
- Refactor step-01 and step-02 POM files to inherit from the parent
- Remove duplicate dependency versions in favor of parent management
- Add comprehensive model types including the GPT-5, O1, and GPT-4.5 families
- Update the streaming response handler to use ChatResponse for better type safety
- Improve code organization by leveraging parent POM dependency management
- Maintain all LangChain4j 1.4.0 compatibility improvements
- Add NoCacheFilter for a better development experience with static resources
- Prevent browser caching of CSS/JS during development iterations
- Enhance logback configuration with structured logging levels
- Add dedicated loggers for LangChain4j and application packages
- Add logback-classic dependency for proper logging in step-01
- Improve debugging capabilities with targeted log levels
- Better separation of concerns with component-specific logging
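A development-time no-cache filter usually just sets standard HTTP caching headers on every static-resource response. The sketch below shows a conventional header set such a filter might apply; the NoCacheFilter name comes from the commit, but the servlet wiring is omitted and the exact header values are an assumption, not taken from this PR:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the headers a development no-cache filter would add so browsers
// always re-fetch CSS/JS instead of serving stale cached copies.
public class NoCacheHeaders {
    public static Map<String, String> headers() {
        Map<String, String> h = new LinkedHashMap<>();
        // HTTP/1.1: forbid storing or reusing the response.
        h.put("Cache-Control", "no-store, no-cache, must-revalidate, max-age=0");
        // HTTP/1.0 fallback for old proxies.
        h.put("Pragma", "no-cache");
        // Mark the response as already expired.
        h.put("Expires", "0");
        return h;
    }

    public static void main(String[] args) {
        // In a servlet filter these would be applied via response.setHeader(k, v)
        // inside doFilter(...) before continuing the chain.
        headers().forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```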
- Enhance MavenUtility with better system properties configuration
- Add proper working directory validation and creation
- Improve Maven CLI invocation with better error capturing
- Add fallback suggestions for Maven configuration issues
- Enhance logging with structured output capture
- Add helpful user guidance when Maven archetype generation fails
- Fix NoSuchElementException by properly setting the Maven environment
- Improve error messages with actionable alternatives
- Optimize import usage with Arrays.stream() for better performance
- Simplify lambda expression in the response handler
- Update default model to gpt-5-mini for better performance
- Clean up code style and remove unnecessary verbose generics
- Add MavenProcessUtility for an alternative Maven execution approach
- Improve JakartaEEProjectGeneratorTool with enhanced error handling
- Update POM dependencies for better Maven integration
- Enhance MavenUtility with robust environment configuration
- Add fallback Maven execution strategies for better reliability
- Maintain backward compatibility while improving tool robustness
- Update LangChain4JConfig with maxCompletionTokens and model parameter support
- Migrate LangChainService to use the new streaming APIs and model validation
- Update JakartaEEAgent with enhanced streaming response handling
- Add comprehensive model parameter validation for tool integration
- Enhance configuration with allowed models and smart parameter handling
- Maintain all existing tool functionality while upgrading to the latest APIs
- Improve error handling and logging for a better development experience
- Add comprehensive GitHubTool with repository operations
- Support repository creation, search, and management
- Implement code analysis and issue tracking capabilities
- Add file operations (read, write, create) for GitHub repositories
- Include branch management and pull request operations
- Provide commit history analysis and contributor insights
- Support GitHub API integration with proper authentication
- Enable automated code review and repository insights
- Update LangChain4JConfig with GPT-5 model parameter support
- Add a comprehensive allowed models list including gpt-5, gpt-5-mini, gpt-5-nano
- Set the default model to gpt-5-nano for optimal tools integration performance
- Implement model-specific parameter validation for the GPT-5 family
- Add smart parameter handling: GPT-5 models don't support temperature/frequency penalty
- Enhance LangChainService with maxCompletionTokens and topP support
- Maintain backward compatibility while enabling the latest model features
- Improve configuration organization with better documentation
…tures
- Transform JakartaEEAgent into a Java expert with a stand-up comedian personality
- Add comprehensive GitHub roast capabilities with multiple tone options
- Implement an advanced roast intelligence matrix for different developer patterns
- Add special effects: badges, awards, ASCII art, emoji support
- Enhance command syntax with tone, length, style, and extras options
- Add haiku, limerick, tweet, and error-log style roast formats
- Improve user engagement with humor while maintaining technical accuracy
- Update default model to gpt-5-mini for a better performance balance
- Add comprehensive safety protocols and humor style guidelines
- Enhance output formatting with visual elements and structure
- Change default model from gpt-5-mini to gpt-5
- Enable maximum AI capabilities for complex tool integrations
- Improve performance for GitHub analysis and Jakarta EE project generation
- Enhance comedy and roasting capabilities with full model intelligence
- Better handling of complex technical explanations and code analysis
- Optimal performance for multi-tool workflows and advanced reasoning
…pport
- Update LangChainService.java with LangChain4j 1.4.0 API compatibility
- Replace maxTokens with maxCompletionTokens
- Add model parameter validation helpers (supportsTemperature, supportsFrequencyPenalty, supportsTopP)
- Implement conditional parameter setting for the GPT-5 and O1 model families
- Add a safeLower utility for model name normalization
- Update microprofile-config.properties with GPT-5 models
- Set the default model to gpt-5 for maximum capabilities
- Add a topP configuration parameter
- Include a comprehensive allowed-models list: gpt-5, gpt-5-mini, gpt-5-nano, o1-preview, o1-mini
- Maintain consistency with the other updated modules (step-00 through step-03)
- Preserve RAG functionality with the in-memory embedding store
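The validation helpers can be sketched as plain predicates over the model name. The method names (supportsTemperature, supportsFrequencyPenalty, supportsTopP, safeLower) and the allowed-models list come from the commit messages; the exact restriction rules below (which prefixes reject which parameters) are illustrative assumptions based on the commits' descriptions of O1 and GPT-5 behavior:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the model-specific parameter validation described in the commits:
// the O1 and GPT-5 families reject some classic sampling parameters, so the
// service only sets a parameter when the chosen model supports it.
public class ModelParams {
    static final List<String> ALLOWED_MODELS = Arrays.asList(
            "gpt-5", "gpt-5-mini", "gpt-5-nano", "o1-preview", "o1-mini");

    // safeLower utility: normalize the configured name before matching.
    static String safeLower(String model) {
        return model == null ? "" : model.trim().toLowerCase();
    }

    static boolean isAllowed(String model) {
        return ALLOWED_MODELS.contains(safeLower(model));
    }

    // Assumed rule: O1 and GPT-5 families don't accept temperature.
    static boolean supportsTemperature(String model) {
        String m = safeLower(model);
        return !m.startsWith("o1") && !m.startsWith("gpt-5");
    }

    // Assumed rule: frequency penalty unsupported on O1 and GPT-5 models.
    static boolean supportsFrequencyPenalty(String model) {
        String m = safeLower(model);
        return !m.startsWith("o1") && !m.startsWith("gpt-5");
    }

    static boolean supportsTopP(String model) {
        String m = safeLower(model);
        return !m.startsWith("o1") && !m.startsWith("gpt-5");
    }

    public static void main(String[] args) {
        System.out.println(isAllowed(" GPT-5-Mini "));           // true
        System.out.println(supportsTemperature("gpt-5"));        // false
        System.out.println(supportsFrequencyPenalty("o1-mini")); // false
    }
}
```

In the service, each builder call would then be guarded, e.g. set temperature only when supportsTemperature(model) is true, which is the "conditional parameter setting" the commit refers to.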
- Update ConfigProp.java with GPT-5 model parameters and LangChain4j 1.4.0 API compatibility
- Add conditional parameter validation for the GPT-5 and O1 model families
- Migrate from maxTokens to maxCompletionTokens and away from deprecated API methods
- Update configuration properties to include GPT-5 model support and the allowed models list
- Maintain PgVector RAG functionality with improved content retrieval configuration
…8, and step-10
- step-06-advanced-rag: Update to LangChain4j 1.4.0 with GPT-5 support
- step-07-multi-model: Fix compilation issues and remove deprecated StreamingChatModel
- step-08-advanced-tools-and-rag: Upgrade dependencies and configuration
- step-10-mcp-client: Update to latest LangChain4j version

Key API migrations:
- .maxTokens() → .maxCompletionTokens() for Anthropic and Mistral models
- Remove dev.langchain4j.model.chat.StreamingChatModel import (deprecated)
- Update method signatures to use Object type for compatibility
- Add GPT-5 model family support with parameter restrictions
- Update chat.xhtml for improved UI compatibility
- Add multi-agent architecture with BookCreationOrchestrator, IllustrationAgent, ContentRefinementAgent, and PDFGenerationAgent
- Implement a comprehensive CLI interface using PicoCLI with configurable model selection
- Support age-appropriate content generation (2-3, 4-5, 6-8 years) with educational goals
- Add DALL-E 3 integration for professional illustration generation
- Create PDF output using iText7 with styled formatting and layout
- Use modern Java 21 features including records, virtual threads, and text blocks
- Provide configurable model settings (gpt-4o-mini, gpt-4o) with temperature control
- Include a dry-run mode for testing without API costs
- Add comprehensive error handling and progress tracking
- Support multiple illustration styles (watercolor, cartoon, digital)
- Implement caching and async processing for performance optimization
…alog integration
- Add CharacterDesignAgent for detailed, consistent character descriptions across illustrations
- Implement dialog text overlay on DALL-E generated images for an immersive reading experience
- Enhance IllustrationAgent with character consistency prompts and dialog integration
- Add unique book naming with topic-based filenames and timestamps
- Improve story generation with real narrative content instead of placeholders
- Add verbose logging to track dialog text processing and character design
- Update the workflow with a 5-step process: character design → story outline → content refinement → illustrations → PDF
- Enhance prompts to ensure actual story progression rather than generic content
- Add text cleaning and formatting for optimal readability on illustrations

Key improvements:
- The character appears identical across all pages (same face, hair, clothing, etc.)
- Story dialog text displays directly at the bottom of each illustration with readable formatting
- Each book gets a unique filename in topic-timestamp.pdf format
- Real story content with meaningful dialog and narrative progression
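The topic-timestamp.pdf naming takes only a few lines; only that filename shape comes from the commit, while the slugging rules and timestamp pattern below are illustrative assumptions:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch of unique, topic-based book filenames (topic-timestamp.pdf).
public class BookFileNames {
    static String fileName(String topic, LocalDateTime now) {
        // Slug the topic: lowercase, runs of non-alphanumerics become hyphens,
        // leading/trailing hyphens stripped.
        String slug = topic.toLowerCase()
                .replaceAll("[^a-z0-9]+", "-")
                .replaceAll("(^-|-$)", "");
        String stamp = now.format(DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss"));
        return slug + "-" + stamp + ".pdf";
    }

    public static void main(String[] args) {
        LocalDateTime t = LocalDateTime.of(2024, 5, 1, 9, 30, 0);
        System.out.println(fileName("The Brave Little Robot!", t));
        // the-brave-little-robot-20240501-093000.pdf
    }
}
```

Including the timestamp to the second means two runs on the same topic never overwrite each other's PDF.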
340c284 to d3460b9
- Implement concurrent image generation with up to 3 parallel DALL-E requests
- Add a --parallel CLI flag to enable/disable parallel processing (default: enabled)
- Use virtual threads (Java 21) for lightweight concurrent execution
- Implement smart rate limiting with a Semaphore to respect API quotas
- Add real-time progress tracking showing completion status and timing
- Create separate methods for parallel and sequential generation modes
- Add thread-safe error handling with CopyOnWriteArrayList for failed pages
- Include 3-minute timeout protection for the overall generation process
- Display the generation mode (parallel/sequential) in workflow output

Performance improvements:
- 2-3x faster generation for multi-page books
- Maintains a 1-second delay between API calls for rate limiting
- Individual page failures don't block other pages
- Progress indicators show page completion order and timing
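The concurrency scheme this commit describes (virtual threads, a Semaphore capping three in-flight requests, CopyOnWriteArrayList for failures) can be sketched with a simulated generator standing in for the DALL-E call; the class and method names below are illustrative, not the PR's actual code:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

// Sketch of parallel page-image generation: Java 21 virtual threads,
// a Semaphore limiting concurrent "DALL-E" calls to 3, and a
// thread-safe list collecting pages that failed.
public class ParallelIllustrator {
    static final Semaphore permits = new Semaphore(3);          // max 3 in flight
    static final List<Integer> failedPages = new CopyOnWriteArrayList<>();

    // Stand-in for the real DALL-E request.
    static void generateImage(int page) throws InterruptedException {
        Thread.sleep(50); // simulate network latency
        if (page == 4) throw new IllegalStateException("simulated API error");
    }

    public static void main(String[] args) throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int page = 1; page <= 8; page++) {
                final int p = page;
                executor.submit(() -> {
                    try {
                        permits.acquire();                      // rate limit
                        try {
                            generateImage(p);
                            System.out.println("page " + p + " done");
                        } finally {
                            permits.release();
                        }
                    } catch (Exception e) {
                        failedPages.add(p);                     // don't block other pages
                    }
                });
            }
        } // close() waits for all tasks (the real code adds a 3-minute timeout)
        System.out.println("failed: " + failedPages);           // failed: [4]
    }
}
```

Because virtual threads are cheap, one thread per page is fine even for large books; the Semaphore, not the thread count, is what bounds pressure on the API.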
- Add BookCreationSupervisor with an autonomous planning agent
- Implement @Tool annotations for agent execution
- Create BookCreationTools with working memory
- Add AgenticBookCreatorCLI with a plan-only mode
- Remove old workflow classes (BookCreationOrchestrator, CharacterDesignAgent, etc.)
- Enable the supervisor agent to plan and execute autonomously
- Support execution plan visualization
No description provided.