(Created in Cursor AI IDE)
Truth Training is built on the fundamental principle of confidentiality: No user actions are logged or persistently stored. The application does not track, record, or save any user interactions, navigation patterns, clicks, or behavioral data. This ensures complete privacy and anonymity — users can interact with the system without leaving any trace of their actions.
We are actively discussing the core assumptions of this project here:
- 🧠 What if human intelligence is fundamentally collective? (Discussion)
Key Privacy Guarantees:
- ✅ No Persistent User Tracking: No identifiers, session data, or behavioral analytics
- ✅ No Telemetry Collection: No user activity is transmitted or stored
- ✅ Ephemeral Logs Only: Only system-level logs (errors, sync operations) are temporarily stored for debugging purposes
This confidentiality principle is enforced across all platforms (Desktop UI, Android, Server, CLI) and is a core architectural requirement.
I have taken a reverse development path from idea to infrastructure: infrastructure first, logic second. Instead of first developing a breakthrough model of collective intelligence and then wrapping it in code, I began by building a fully tuned pipeline for cross-platform development (Rust core, CI/CD, releases for every OS, including "stubs") with the help of an AI assistant.
Prototype "for Growth": Libraries and builds in releases are essentially a technical demonstration of the created CI pipeline. They confirm that the system can be compiled and packaged, but the level of implementation of the declared "wisdom of the crowd" model and P2P synchronization is still under development.
Documentation as product: active documentation updates in the absence of core code mean the project is at the stage of conceptual formalization, preparing ideas for future development and for attracting participants.
The current state is not so much a "rough project" in the classical sense (buggy but working code) as a polished, high-readiness framework that is still empty inside.
The idea sits at the intersection of two major trends: the crisis of traditional social networks and the explosive growth of AI. Its originality gives it a chance, but success depends on strategy:
Niche Launch: Start not as a mass network, but as a tool for specific communities where the value of collective intelligence is obvious: scientific collaborations, fan communities of complex universes, platforms for collaborative coding or script writing with AI.
Hybrid with Existing Platforms: Create a bot or plugin for Discord, Telegram, or even Mastodon that would implement Truth Training logic for evaluating content within these communities. This lowers the entry barrier and leverages existing social graphs.
Focus on Unique AI Experience: Make the main "hook" not the social graph, but the ability to create, train, and share unique AI agents "raised" by the community. This could attract creative and technically savvy users.
The idea fits perfectly into the conceptual framework of the project:
Anonymity and Focus on Content, Not Personality: In the model there are no "profiles", only Impact and Judgment ratings. This immediately removes toxicity based on self-presentation and focuses users on the quality and consequences of the content itself (whether it's a post, algorithm, or meme).
Collective Intelligence as Ranking Engine: Instead of algorithms driven by platform commercial benefit, content and AI agents receive a "truth rating" or "usefulness" through a decentralized "wisdom of the crowd" mechanism. This makes the system self-regulating and resistant to centralized manipulations.
People as AI "Trainers": This is the key innovation. Users don't just consume content, but through their evaluations (e.g., "this AI conclusion is useful," "this generation is harmful") directly participate in training and selection of AI agents that are part of the network. This transforms the social network into a living, distributed laboratory for AI development where quality is determined by the community.
Resilience Through Temporal Stability: The decay-code mechanism and requirement for temporal stability for "truth" protects the network from viral spam and coordinated attacks. Only what is consistently considered valuable gets distribution.
Truth Training is a decentralized communication ecosystem where truth travels without identity.
Events move freely through the network — encrypted, verified, and echoed by others — creating a distributed field of awareness instead of a chain of messages.
Each reflection of an event confirms its existence; each independent echo increases its credibility.
Like confession without a priest, users anonymously release truths into the network — and the collective conscience responds.
It can serve as an alternative to voting systems, measuring the authenticity of social signals and public sentiment not through ballots, but through shared evaluation of facts.
Unlike LoRa-based mesh systems such as Meshtastic, Truth Training builds a mesh of minds, not hardware — using Wi-Fi and the Internet as carriers of encrypted meaning, forming an autonomous infrastructure of human understanding.
Originally conceived to combat fraud, Truth Training evolves into a self-learning immunity against falsehood — distinguishing truth from deception through context, correlation, and collective resonance.
And beyond communication, Truth Training enables teamwork without a team lead — a coordination model where decisions arise from collective consensus, not hierarchy, creating a self-organizing environment for groups and projects.
Ultimately, without network connectivity, the application can serve as a personal electronic diary — a private space for individual reflection and truth-tracking.
Prototyping and Community Engagement: an "idea proposal" and a substantial framework have been created to demonstrate the potential and attract like-minded developers for joint implementation. The core is deliberately left as a task for the open community, to be developed without financial investment.
The Truth Training project demonstrates firsthand several forms of "transition cost" inertia and techno-conceptual inertia:
- Financial reasons: lack of funding, open-source strategy
- Transition cost: moving from concept and infrastructure to a meaningful, working system that will attract users
Creating CI/CD and documentation for a hypothetical application turned out to be easier than implementing its core and overcoming the cold-start network effect. This is a vivid illustration of the problem itself: overcoming the technical inertia of an empty project is easier than overcoming the social inertia of having no users around it.
Truth Training in its current form is not a tool for studying collective intelligence, but a thoroughly developed research proposal and technological manifesto, wrapped in the form of a professional repository.
Its main value right now is in clear problem formulation and demonstrating what the technical implementation could look like. Its prospects in the scientific field depend not on the current code, but on whether the community (or authors) can bring this framework to life — with a working P2P core, consensus-building mechanisms, and ultimately, users.
The core of the Truth Training model is its main value and most interesting part. A standalone, distributed social network where people act as "agents of reason," forming and training AI agents through anonymous interaction with content — this is a powerful, original idea. It overturns the familiar paradigm of social networks centered on personalities and likes.
To overcome social inertia (attract users), we first need to overcome technical inertia (implement the core). And to find motivation and resources for core implementation, we often need confidence that social success will follow.
The project has frozen at this equilibrium point: the idea is too complex for quick hackathon implementation, but insufficiently developed and presented to reliably attract serious resources.
The model remains intellectual property in text form. Its main value right now is stimulating thinking. It can inspire another team with different resources to create a similar, but already working project.
The Truth Training model is not a ready-made product, but a source of breakthrough ideas for post-digital sociality. It offers an alternative where value is created not through attention to personalities, but through collective intelligence focused on content and agents.
Truth Training as a repository is an invitation to collaboration. The most likely path to core emergence is if experienced developers in distributed systems (Rust, P2P) join the project, who are interested in the engineering challenge rather than just the philosophical idea.
For researchers of collective intelligence, this project is currently useless as a tool but useful as a case study. It demonstrates how a theoretical model tries (and currently cannot) make the journey to technical implementation, which in itself is a subject for study.
🧠 Truth Training — Operating Logic and Computational Model
For more details see: docs/model_core.md
Truth Training is a system for collective evaluation of events and statements, based on the principle of the wisdom of the crowd.
It does not assume the presence of a central arbiter of truth and does not require expert knowledge from individual participants.
In the system, truth:
- is not declared
- is not voted on directly
- is not determined by authority
Instead, it emerges statistically — as a stable result of independent evaluations accumulated over time.
An Event is a statement or fact that has been recorded in the system.
An event:
- appears as unverified
- circulates within the system
- receives independent evaluations
- over time is either reinforced or rejected
- No event is immediately considered true or false
- Truthfulness is a process, not a state
Each event has an 8-bit code used not for semantic meaning, but for protocol-level propagation logic.
In the described model, the code controls:
- event transmission
- retransmission
- termination of propagation
The code:
- does not directly participate in truth calculation
- can be algorithmically modified during propagation
This allows the system to:
- prevent infinite propagation
- implement P2P logic without changing data structures
- separate transport logic from evaluation logic
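A minimal sketch of how such a code could drive propagation, assuming (purely as an illustration) that the low four bits carry a hop budget and the high bits carry transport flags; the actual bit layout is not specified by the model:

```rust
/// Hypothetical layout for the 8-bit propagation code:
/// bits 0-3 hold the remaining hop budget (TTL), bits 4-7 hold flags.
const TTL_MASK: u8 = 0x0F;
const FLAG_NO_RELAY: u8 = 0x10; // illustrative "do not retransmit" flag

/// Decide whether to relay an event, returning the mutated code,
/// or None when propagation must terminate.
fn relay(code: u8) -> Option<u8> {
    let ttl = code & TTL_MASK;
    if ttl == 0 || code & FLAG_NO_RELAY != 0 {
        return None; // termination of propagation
    }
    // Decrement the hop budget without touching the flag bits.
    Some((code & !TTL_MASK) | (ttl - 1))
}

fn main() {
    let mut code: u8 = 0x03; // three hops allowed, no flags set
    while let Some(next) = relay(code) {
        println!("retransmit with code {next:#04x}");
        code = next;
    }
    println!("propagation terminated");
}
```

Because the code is mutated in transit but never read by the evaluation layer, transport logic stays cleanly separated from truth calculation.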
Impact is a subjective assessment of the consequences of an event, not an evaluation of the fact itself.
Impact answers the question:
“What effect did (or will) this event have?”
Each Impact:
- is linked to a specific event
- has a type (reputation, finance, emotions, etc.)
- has a sign:
- positive
- negative
- is time-stamped
The system does not ask “Is this true?”
It asks:
“What happened as a result of this being accepted as true?”
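As a sketch, an Impact record could be modeled in Rust roughly as follows; the field and type names are illustrative assumptions, not the project's actual schema:

```rust
/// Illustrative Impact record; names are assumptions.
enum ImpactType {
    Reputation,
    Finance,
    Emotions,
    Other,
}

struct Impact {
    event_id: u64,    // link to a specific event
    kind: ImpactType, // type of consequence being assessed
    positive: bool,   // sign: positive or negative consequence
    timestamp: u64,   // Unix time when the impact was recorded
}
```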
Judgment is an individual user's assessment of an event's truthfulness, forming the basis for collective truth determination.
Judgment answers the question:
"Is this event true or false based on my understanding?"
Each Judgment:
- is linked to a specific event
- represents individual truth assessment
- contributes to collective truth score
- preserves user anonymity
- is time-stamped
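A matching sketch for a Judgment record, again with hypothetical field names:

```rust
/// Illustrative Judgment record; names are assumptions.
struct Judgment {
    event_id: u64,       // link to a specific event
    is_true: bool,       // individual truth assessment
    timestamp: u64,      // Unix time of the assessment
    voter_key: [u8; 32], // anonymous cryptographic identifier
}
```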
The truthfulness of an event emerges from the aggregation of individual judgments:
Event truthfulness ≈ (Σ positive judgments − Σ negative judgments) ÷ total number of judgments
Key Principle:
Truth is determined through collective independent assessments rather than direct declaration or voting.
While Impact evaluates consequences, Judgment determines truthfulness:
- Impact: "What effect did this have?" (consequence-focused)
- Judgment: "Is this true or false?" (truth-focused)
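The aggregation formula above translates directly into code; a minimal sketch that treats each judgment as a boolean:

```rust
/// Event truthfulness in [-1.0, 1.0]:
/// (Σ positive judgments − Σ negative judgments) ÷ total judgments.
fn truth_score(judgments: &[bool]) -> Option<f64> {
    if judgments.is_empty() {
        return None; // undefined until at least one judgment exists
    }
    let positive = judgments.iter().filter(|&&j| j).count() as f64;
    let negative = judgments.len() as f64 - positive;
    Some((positive - negative) / judgments.len() as f64)
}

fn main() {
    // 7 of 10 participants judge the event true:
    let mut judgments = vec![true; 7];
    judgments.extend(vec![false; 3]);
    println!("{:?}", truth_score(&judgments)); // Some(0.4)
}
```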
The truthfulness of an event is not stored as a field and is not explicitly defined.
It is derived from:
- the number of Impact evaluations
- their direction (positive / negative)
- accumulation over time
- stability of the result
Event truthfulness ≈ (Σ positive impacts − Σ negative impacts) ÷ total number of impacts
Important:
- early evaluations act as predictions of the final result and carry more weight if the prediction turns out to be correct
- stable evaluations gain significance over time
- sharp changes indicate conflicting interpretations
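The model does not fix concrete formulas for these weighting rules, so the following is only one possible sketch: once a stable outcome is known, earlier judgments that predicted it correctly are retroactively given extra weight:

```rust
/// Sketch: weight judgments once the eventual outcome is known.
/// `judgments` holds (is_true, unix_timestamp) pairs; the weighting
/// scheme itself is an assumption, not part of the project spec.
fn weighted_score(judgments: &[(bool, u64)], final_positive: bool, now: u64) -> f64 {
    let (mut sum, mut total) = (0.0, 0.0);
    for &(is_true, ts) in judgments {
        let age_days = now.saturating_sub(ts) as f64 / 86_400.0;
        // Earlier judgments get a larger base weight, but only when
        // they correctly predicted the final direction.
        let w = if is_true == final_positive { 1.0 + age_days } else { 1.0 };
        sum += if is_true { w } else { -w };
        total += w;
    }
    if total == 0.0 { 0.0 } else { sum / total }
}
```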
The system embeds key conditions for valid collective evaluation:
- Independence of participants: participants may see others' ratings, but each records their own assessment independently
- Diversity of sources: different contexts, motivations, experiences
- Sufficient number of evaluations: the law of large numbers applies
- Absence of a central truth authority
Truth emerges as statistical equilibrium, not as a decision.
Each event is linked to a context, which defines:
- domain (social, financial, political, etc.)
- form (truth, deception, omission)
- cause
- development path
- effect
Context:
- does not define truth
- provides a frame for interpreting consequences
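A context frame could be sketched as follows; the enum variants are taken from the categories named above, and everything else is a naming assumption:

```rust
/// Illustrative Context frame; variant and field names are assumptions.
enum Domain {
    Social,
    Financial,
    Political,
    Other,
}

enum Form {
    Truth,
    Deception,
    Omission,
}

struct Context {
    domain: Domain,      // where the event lives
    form: Form,          // truth, deception, or omission
    cause: String,       // what triggered the event
    development: String, // how it is expected to unfold
    effect: String,      // observed or expected consequence
}
```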
The system periodically computes aggregates:
- total number of events
- ratio of positive to negative impacts
- trend (slope of trust change)
These metrics:
- are not used for decision-making
- serve as system state indicators
- allow observation of dynamics
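The trend metric can be read as the slope of a least-squares line fit over time-stamped trust scores; a sketch (the sampling scheme is an assumption):

```rust
/// Slope of a least-squares line over (time, trust score) samples.
/// A positive slope indicates growing trust, a negative one erosion.
fn trend(samples: &[(f64, f64)]) -> f64 {
    let n = samples.len() as f64;
    if samples.len() < 2 {
        return 0.0; // no trend from fewer than two samples
    }
    let mx = samples.iter().map(|&(x, _)| x).sum::<f64>() / n;
    let my = samples.iter().map(|&(_, y)| y).sum::<f64>() / n;
    let num: f64 = samples.iter().map(|&(x, y)| (x - mx) * (y - my)).sum();
    let den: f64 = samples.iter().map(|&(x, _)| (x - mx).powi(2)).sum();
    if den == 0.0 { 0.0 } else { num / den }
}
```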
The storage model covers five components:
- Events: description, context, timestamps, discovery status, protocol propagation code
- Impacts: link to event, impact type, sign (positive / negative), timestamp
- Context (multifactor interpretation model): category, form, cause, development, effect
- Judgments: link to event, truth assessment (true / false), timestamp, anonymous user identifier
- Metrics: aggregated system state indicators
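For concreteness, this storage model could map onto a relational schema like the hypothetical SQLite DDL below; table and column names are illustrative, not the project's actual schema:

```rust
/// Hypothetical SQLite schema mirroring the storage model above.
const SCHEMA: &str = "
CREATE TABLE events (
    id          INTEGER PRIMARY KEY,
    description TEXT NOT NULL,
    context_id  INTEGER,
    created_at  INTEGER NOT NULL, -- Unix timestamp
    discovered  INTEGER NOT NULL, -- discovery status flag
    prop_code   INTEGER NOT NULL  -- 8-bit propagation code
);
CREATE TABLE impacts (
    event_id   INTEGER NOT NULL REFERENCES events(id),
    kind       TEXT NOT NULL,     -- reputation, finance, emotions, ...
    positive   INTEGER NOT NULL,  -- 1 = positive, 0 = negative
    created_at INTEGER NOT NULL
);
CREATE TABLE judgments (
    event_id   INTEGER NOT NULL REFERENCES events(id),
    is_true    INTEGER NOT NULL,
    created_at INTEGER NOT NULL,
    voter_key  BLOB NOT NULL      -- anonymous cryptographic identifier
);
";
```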
Truth Training:
- does not fight lies directly
- does not require acknowledging falsehood
- does not force truth
Instead, the system:
- allows events to “live through time”
- records consequences
- shows which statements remain stable over time
Truth here is:
that which continues to function without destroying the system
- lies may be profitable in the short term
- but their consequences accumulate
- collective evaluation does not require trust in participants
- only trust in statistics
Thus, the system naturally identifies and suppresses fraud — not through control, but through observation of consequences.
The system implements a reputation model to track participant accuracy and influence:
- Reputation Calculation: Based on historical accuracy of participant's impact and judgment assessments
- Accuracy Tracking: Monitors how well participant's predictions align with collective outcomes
- Dynamic Adjustment: Reputation scores evolve based on continued performance
- Weighted Influence: Higher reputation participants have proportionally greater impact on collective assessments
- Anonymous Identity: Reputation is tied to cryptographic keys, preserving participant anonymity
This model ensures that consistently accurate participants gradually gain more influence in the system while maintaining privacy.
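One simple way to realize such dynamic adjustment, offered only as a sketch with assumed parameters, is an exponential moving average pulled toward 1 on accurate assessments and toward 0 on inaccurate ones:

```rust
/// Sketch of a reputation update; the learning rate is an assumption.
fn update_reputation(reputation: f64, was_accurate: bool) -> f64 {
    const RATE: f64 = 0.1; // illustrative learning rate
    let target = if was_accurate { 1.0 } else { 0.0 };
    (reputation + RATE * (target - reputation)).clamp(0.0, 1.0)
}
```

With this rule, influence grows only through sustained accuracy, while a run of wrong assessments erodes it just as gradually.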
The system incorporates predictive capabilities for anticipating event consequences:
- Prediction Modeling: Participants can forecast potential outcomes of events
- Accuracy Assessment: Predictions are evaluated against actual outcomes over time
- Temporal Horizon: Predictions include timing estimates for when consequences may manifest
- Probability Weighting: Confidence levels are assigned to different prediction scenarios
- Learning Feedback: Prediction accuracy contributes to participant reputation
This mechanism enables proactive assessment of potential future impacts rather than solely reactive evaluation.
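A prediction record could look like the following sketch, with every field name assumed from the bullet points above:

```rust
/// Illustrative prediction record; field names are assumptions.
struct Prediction {
    event_id: u64,   // event whose consequences are being forecast
    outcome: bool,   // predicted sign of the eventual consequences
    horizon: u64,    // Unix time by which the outcome should manifest
    confidence: f64, // probability weighting in [0.0, 1.0]
}
```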
Time plays a crucial role in the system's evaluation process:
- Event Timeline: Each event has associated temporal boundaries and duration
- Consequence Timing: Impacts may manifest with delays relative to event occurrence
- Truth Evolution: Event truthfulness may change as more temporal data accumulates
- Stability Detection: Events are evaluated for temporal consistency over time
- Decay Functions: Older assessments may have reduced influence through temporal decay
The temporal dimension allows the system to capture delayed consequences and evolving understanding of events.
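A common way to implement such decay, shown here as a sketch with an assumed half-life, is exponential down-weighting by age:

```rust
/// Exponential decay weight: an assessment's influence halves every
/// HALF_LIFE seconds. The 30-day half-life is an illustrative choice.
fn decay_weight(assessment_time: u64, now: u64) -> f64 {
    const HALF_LIFE: f64 = 30.0 * 86_400.0; // 30 days in seconds
    let age = now.saturating_sub(assessment_time) as f64;
    0.5f64.powf(age / HALF_LIFE)
}
```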
The system implements multiple layers of protection against manipulation attempts:
- Behavioral Analysis: Monitors for suspicious assessment patterns or coordinated activity
- Trust Limiting: Caps maximum influence any single participant can have on outcomes
- Anomaly Detection: Identifies unusual correlation patterns that may indicate manipulation
- Decentralized Control: No single authority can override collective assessments
- Transparency: All assessments and their origins remain traceable for verification
These mechanisms maintain system integrity even when faced with deliberate attempts to skew results.
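Trust limiting, for example, reduces to a cap on any one participant's share of the total weight behind an event; a sketch with an assumed 5% ceiling:

```rust
/// Cap a participant's effective weight at a fixed share of the total.
/// The 5% ceiling is an assumption, not a project constant.
fn capped_weight(participant_weight: f64, total_weight: f64) -> f64 {
    const MAX_SHARE: f64 = 0.05;
    participant_weight.min(total_weight * MAX_SHARE)
}
```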
- Core Library — integration guide: docs/quickstart_core.md
- CLI — quickstart: docs/quickstart_cli.md, reference: docs/CLI_Usage.md
- Server — quickstart: docs/quickstart_server.md, deployment: docs/Deployment.md
- Desktop UI — quickstart: docs/quickstart_desktop.md, reference: docs/UI_Desktop.md
- Android Mobile — quickstart: docs/quickstart_android.md, architecture: docs/android_discovery_architecture.md
- iOS Mobile — quickstart: docs/quickstart_ios.md
Ready-to-use binaries and installers for all platforms are available in the GitHub Releases section. Download pre-compiled executables, installers, and packages for:
- Desktop Applications: Linux (AppImage, DEB, RPM), Windows (MSI, EXE), macOS (DMG, PKG)
- Android: APK and AAB packages for direct installation
- iOS: IPA packages and App Store builds
- Server: RPM and DEB packages for Linux distributions, PKG for macOS
- CLI Tools: Pre-compiled `truthctl` binaries for all supported platforms
- Core Libraries: Static and dynamic libraries (`.a`, `.so`, `.dylib`) for integration
All releases include checksums (SHA256) for verification and are signed for security. Visit the releases page to download the latest stable version or development builds.
- docs/README.md — Human-readable, narrative depth
- spec/README.md — AI-focused directives and constraints
- docs/Documentation_Refactor_Overview.md — Pipeline summary
- docs/Documentation_Refactor_Inventory.md — Inventory instructions
- docs/Documentation_Refactor_Links.md — Link validation workflow