Olares: An Open-Source Personal Cloud to Reclaim Your Data
Powerful search page powered by LLMs and SearXNG
A curated list of academic events on AI Security & Privacy
When the stakes are high, intelligence is only half the equation - reliability is the other
[ICML 2024 Spotlight] Differentially Private Synthetic Data via Foundation Model APIs 2: Text
Convert Word docs to Markdown privately - 100% offline, no uploads. Ideal for processing sensitive documents with Ollama, LM Studio, GPT4All, and other local AI tools. Just double-click the standalone/word-to-markdown.html file to use it.
OfflineAI is an artificial intelligence assistant that operates offline and uses machine learning to perform various tasks on the code you provide. It is built on two powerful AI models from Mistral AI.
Local RAG system with a built-in governance agent that filters sensitive or restricted information, with separate agent logging to preserve privacy and security.
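A governance agent like the one described above might, at its simplest, redact restricted spans from retrieved passages while recording each hit to a dedicated audit logger. The sketch below is illustrative only (the patterns and the `governance.audit` logger name are assumptions, not taken from the repository):

```python
import logging
import re

# Hypothetical pattern set; a real deployment would load its
# restricted-term policy from configuration.
RESTRICTED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

# Separate logger so governance events stay out of the main app log.
audit = logging.getLogger("governance.audit")

def govern(passage: str) -> str:
    """Redact restricted spans from a retrieved passage and audit each hit."""
    for pattern in RESTRICTED_PATTERNS:
        for match in pattern.finditer(passage):
            audit.warning("redacted span at %d-%d", match.start(), match.end())
        passage = pattern.sub("[REDACTED]", passage)
    return passage

print(govern("Contact alice@example.com, SSN 123-45-6789."))
# -> Contact [REDACTED], SSN [REDACTED].
```

Keeping the audit logger separate from the application log is what lets redaction events be reviewed without exposing the surrounding user content.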
[ICCV 2025] Geminio is a VLM-powered gradient inversion attack in federated learning (FL). It allows the adversary (the FL server) to describe the data of value and reconstruct the victim client's private data matching the description.
Comprehensive taxonomy of AI security vulnerabilities, LLM adversarial attacks, prompt injection techniques, and machine learning security research. Covers 71+ attack vectors including model poisoning, agentic AI exploits, and privacy breaches.
The LLM Unlearning repository is an open-source project dedicated to unlearning in Large Language Models (LLMs). It addresses data-privacy and ethical-AI concerns by exploring and implementing unlearning techniques that allow models to forget unwanted or sensitive data, helping ensure that AI models comply with privacy requirements.
An awesome list of multi-agent security resources.
🔒 Detect security leaks in AI-assisted codebases. Static analysis tool for Python & JS/TS with cross-file taint tracking.
Semantic PII Masking & Anonymization for LLMs (RAG). GDPR-compliant, reversible, and context-aware. Supports LangChain & OpenAI
A transparent, local-only tool to sanitize sensitive info for AI.
A centralized repository for sharing ideas and implementations of AI security techniques.
A complete, menu-driven AI model interface for Windows that simplifies running local GGUF language models with llama.cpp. This tool automatically manages dependencies, provides multiple interaction modes, and prioritizes user privacy through fully offline operation.
Sentinel AI: A high-performance security proxy that sanitizes PII from LLM requests in real-time. Features stable token mapping, response rehydration, and a full observability dashboard.
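The stable-token-mapping-and-rehydration approach that this entry describes can be sketched roughly as follows. All names here are illustrative, not Sentinel AI's actual API: PII values are replaced with deterministic placeholders before a request reaches the LLM, and the original values are restored in the response.

```python
import hashlib
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def stable_token(value: str) -> str:
    # Same input always yields the same placeholder ("stable" mapping),
    # so the LLM sees a consistent pseudonym across a conversation.
    digest = hashlib.sha256(value.encode()).hexdigest()[:8]
    return f"<PII_{digest}>"

def sanitize(text: str) -> tuple[str, dict[str, str]]:
    """Replace PII with stable tokens; return text plus a reverse mapping."""
    mapping: dict[str, str] = {}
    def repl(m: re.Match) -> str:
        token = stable_token(m.group(0))
        mapping[token] = m.group(0)
        return token
    return EMAIL.sub(repl, text), mapping

def rehydrate(text: str, mapping: dict[str, str]) -> str:
    """Restore original values in the LLM response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

clean, mapping = sanitize("Email bob@corp.example with the report.")
# `clean` contains a placeholder instead of the address; after the LLM
# responds, rehydrate(response, mapping) restores the original value.
assert rehydrate(clean, mapping) == "Email bob@corp.example with the report."
```

A proxy built this way never sends raw identifiers upstream, yet responses still read naturally to the end user once rehydrated.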
An AI platform that is truly private