A Flutter library for natural language interface components that integrate with LLMs using LangChain.dart. Build voice- and text-enabled AI interfaces with an MVVM architecture. LLM and speech integration enable voice interaction from within the app as well as from external assistants via MCP.

It showcases an architecture for interfacing between GUIs and LLM-based conversational assistants, as described in https://arxiv.org/abs/2510.06223.
- Voice-enabled LLM integration with speech-to-text and text-to-speech
- MVVM architecture with `GenericScreenViewModel` base class
- Dependency injection via get_it for flexible configuration
- Multi-provider LLM support (OpenAI, Groq, Ollama, OpenRouter)
- Auto-generated tools from GoRouter navigation routes
- Conversation history with persistent storage
- Natural language input via `LangField` widget
- Cross-platform Flutter support
- MCP (Model Context Protocol) support, e.g. for Claude Desktop integration
The following video demonstrates the MCP connection between Claude Desktop and the app in this project's example directory. Turn the sound ON to hear the TTS:
Produce.2.mp4
To run the example application, create a `.env` file in the example directory:

```
OPENAI_API_KEY=your_openai_key_here
# Or other provider keys as needed
```
Then initialize the library in your app's `main()`:

```dart
import 'package:flutter_dotenv/flutter_dotenv.dart';
import 'package:langbar_core/langbar_core.dart';

void main() async {
  await dotenv.load();

  // Setup LLM with dependency injection
  setupLLMDependencyInjection(
    Service.openai,
    systemPrompt: "You are a helpful assistant.",
  );

  // Configure navigation routes for tool generation
  setRoutes(yourAppRoutes);

  runApp(MyApp());
}
```
Use the `LangField` input and conversation `HistoryView` in your screens:

```dart
import 'package:flutter/material.dart';
import 'package:langbar_core/langbar_core.dart';

class MyScreen extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('AI Assistant')),
      body: Column(
        children: [
          Expanded(child: HistoryView()),
          LangField(showHistoryButton: true),
        ],
      ),
    );
  }
}
```
Give the screen a ViewModel that extends `GenericScreenViewModel`:

```dart
import 'package:langbar_core/langbar_core.dart';

class MyScreenViewModel extends GenericScreenViewModel<MyScreenState> {
  MyScreenViewModel(super.initialState, {required super.context});

  // TTS service automatically available via inherited 'tts' property
  // Voice interaction enabled via SpeechEnabled mixin
  void handleUserAction() {
    // Your business logic here
    tts.speak("Action completed!");
  }
}
```
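The state type and the wiring of the ViewModel into the widget tree are not prescribed by the snippet above. A minimal sketch using standard flutter_bloc conventions might look as follows; the `MyScreenState` fields and the `BlocProvider` placement are illustrative assumptions, not part of the library's API:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';

// Hypothetical state class for the screen; the field is illustrative only.
class MyScreenState {
  final String statusMessage;
  const MyScreenState({this.statusMessage = ''});
}

// Standard flutter_bloc wiring: since GenericScreenViewModel extends Cubit,
// the ViewModel can be provided and observed like any other Cubit.
class MyScreenPage extends StatelessWidget {
  const MyScreenPage({super.key});

  @override
  Widget build(BuildContext context) {
    return BlocProvider(
      create: (context) =>
          MyScreenViewModel(const MyScreenState(), context: context),
      child: BlocBuilder<MyScreenViewModel, MyScreenState>(
        // Rebuilds whenever the ViewModel emits a new MyScreenState.
        builder: (context, state) => MyScreen(), // screen from the example above
      ),
    );
  }
}
```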
To apply the same pattern to your own app with the help of an AI coding CLI:

- Clone this repo, e.g. to ~/myprojects/langbar_core.
- Get a subscription to an AI code CLI (Claude Code, Cursor CLI, etc.).
- Open the CLI in your own project.
- Issue a prompt like:

  "First read ~/myprojects/langbar_core/readme.md. Then make a plan to modify the Flutter app in this package to include LLM-based assistance by carefully analyzing the code of the example project in ~/myprojects/langbar_core/example. Analyze carefully how you can apply the mechanism employed there to the current project. Pay special attention to the routes and viewmodels for screens in the example project. Make sure routes and screens in the project in the current directory follow the same strategy for exposing their functionality and handling LLM responses."

- After the plan is created, execute it and see how far you get.
mcp-banking-github_3.mp4
Langbar Core is a Flutter library for natural language interface components that integrate with LLMs using LangChain.dart. The architecture follows the MVVM pattern, where ViewModels serve as orchestrators between the GUI and LLM assistants.
Standard Flutter commands apply:

- `flutter pub get` - Install dependencies
- `flutter analyze` - Run static analysis
- `flutter test` - Run tests
- `flutter pub deps` - Show dependency tree
### MVVM Architecture

- `GenericScreenViewModel<State>` in `lib/ui/cubits/generic_screen_view_model.dart` - Base ViewModel that extends Cubit and mixes in SpeechEnabled
- ViewModels register with `CurrentScreenCubit` for coordination
- Uses flutter_bloc for state management
### LLM Integration

- `send_to_llm.dart` - Main LLM orchestration with support for OpenAI, OpenRouter, Ollama, and Groq
- `Service` enum defines available LLM providers
- System prompt configuration via dependency injection
- Route-based tool generation via `setRoutes()`
- Dependency injection using get_it for LLM instance and configuration management (see the sketch after this list)
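For illustration, switching providers is just a matter of passing a different `Service` value to the same entry point shown in the quick start. This is a sketch; the helper function name and the prompt text are not part of the library:

```dart
import 'package:flutter_dotenv/flutter_dotenv.dart';
import 'package:langbar_core/langbar_core.dart';

Future<void> initLlm() async {
  // Provider keys are read from the environment, as in the example app.
  await dotenv.load();

  // Any Service value can be passed here; Groq is chosen purely to
  // illustrate switching providers.
  setupLLMDependencyInjection(
    Service.groq,
    systemPrompt: "You are a helpful assistant.",
  );
}
```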
### Natural Language Input

- `LangField` widget in `lib/ui/langfield/langfield.dart` - Primary input component
- `LangBarState` provider manages input state
- Speech-to-text integration via `speech.dart`
### Tool System

- `GenericScreenTool` - Creates LLM tools from GoRouter routes
- `RetrieverTool` - Vector database integration
- Tools are auto-generated from router configuration, as sketched below
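A minimal sketch of the route side of this mechanism, assuming plain go_router routes are handed to `setRoutes()` as in the quick-start example. The screens and paths are assumptions, and the exact metadata the library expects for tool documentation is not shown here (see `lib/documented_route.dart` for the real route documentation support):

```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
import 'package:langbar_core/langbar_core.dart';

// Placeholder screens, only so the sketch is self-contained.
class AccountsScreen extends StatelessWidget {
  const AccountsScreen({super.key});
  @override
  Widget build(BuildContext context) => const Placeholder();
}

class SettingsScreen extends StatelessWidget {
  const SettingsScreen({super.key});
  @override
  Widget build(BuildContext context) => const Placeholder();
}

// Illustrative routes; a real project should document its routes
// (see lib/documented_route.dart) so the generated tools carry
// descriptions the LLM can act on.
final yourAppRoutes = [
  GoRoute(
    path: '/accounts',
    name: 'accounts',
    builder: (context, state) => const AccountsScreen(),
  ),
  GoRoute(
    path: '/settings',
    name: 'settings',
    builder: (context, state) => const SettingsScreen(),
  ),
];

void configureTools() {
  // Each registered route becomes an LLM tool the assistant can use
  // to navigate to that screen.
  setRoutes(yourAppRoutes);
}
```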
### Project Structure

- `lib/ui/` - UI components, cubits, and scaffolds
- `lib/tools/` - LLM tool implementations
- `lib/data/` - Data models and LangChain integration utilities
- `lib/function_calling_v3/` - Latest function calling implementation
- `lib/utils/` - General utilities and extensions
### Configuration

- `lib/llm_keys.dart` - API keys and provider configurations (git-ignored with skip-worktree)
- `lib/documented_route.dart` - Route documentation for tool generation
### Core Services

- `lib/send_to_llm.dart` - Main LLM service orchestrator
- `lib/my_conversation_buffer_memory.dart` - Custom conversation memory implementation
- `lib/langbar_history_storage.dart` - Persistent conversation history
| Provider | Service Enum | Environment Variable |
|---|---|---|
| OpenAI | `Service.openai` | `OPENAI_API_KEY` |
| Groq | `Service.groq` | `GROQ_API_KEY` |
| OpenRouter | `Service.openrouter` | `OPENROUTER_API_KEY` |
| Ollama | `Service.ollama` | Local installation |
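A small defensive check before initialization, assuming keys are supplied through `.env` as in the example app. The mapping simply mirrors the table above; the `requireKeyFor` helper is illustrative and not part of the library:

```dart
import 'package:flutter_dotenv/flutter_dotenv.dart';
import 'package:langbar_core/langbar_core.dart';

/// Hypothetical helper: fails fast when the environment variable
/// required by the selected provider (per the table above) is missing.
void requireKeyFor(Service service) {
  const requiredKeys = {
    Service.openai: 'OPENAI_API_KEY',
    Service.groq: 'GROQ_API_KEY',
    Service.openrouter: 'OPENROUTER_API_KEY',
    // Service.ollama needs no key; it runs against a local installation.
  };
  final key = requiredKeys[service];
  if (key != null && (dotenv.env[key] ?? '').isEmpty) {
    throw StateError('Missing $key in .env for $service');
  }
}
```

Call it right after `dotenv.load()` and before `setupLLMDependencyInjection(...)`.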
Key external dependencies:

- `langchain: ^0.7.6` - Core LLM framework
- `langchain_openai: ^0.7.6+1` - OpenAI integration
- `langchain_ollama: ^0.3.3+2` - Local Ollama integration
- `flutter_bloc: ^9.1.1` - State management
- `go_router: ^16.2.1` - Navigation
- `speech_to_text: ^7.3.0` - Voice input
- `flutter_tts: ^4.2.3` - Text-to-speech output
- `get_it: ^7.7.0` - Dependency injection
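For orientation, this is how the constraints above would appear as a pubspec.yaml dependency block; the package's own pubspec.yaml is authoritative:

```yaml
# Sketch of the dependency block implied by the list above.
dependencies:
  flutter:
    sdk: flutter
  langchain: ^0.7.6
  langchain_openai: ^0.7.6+1
  langchain_ollama: ^0.3.3+2
  flutter_bloc: ^9.1.1
  go_router: ^16.2.1
  speech_to_text: ^7.3.0
  flutter_tts: ^4.2.3
  get_it: ^7.7.0
```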
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.