android_ipc_eng
After MMKV was ported to the Android platform, many users requested support for multi-process access, a scenario not previously considered (iOS does not support multi-process access), so it required comprehensive design and careful implementation.
When discussing IPC, the primary concern is architecture selection, as different architectures yield vastly different results.
The first option that comes to mind on Android is `ContentProvider`: a separate process manages the data, so data synchronization is less error-prone, and it is simple and easy to use. However, its major drawback is slowness: slow startup and slow access. This is a common pain point of Binder-based CS-architecture components on Android. Other CS architectures such as traditional sockets, pipes, or message queues are even slower, since they require at least two memory copies.
MMKV prioritizes extreme access speed, so we must minimize inter-process communication; a CS architecture is unsuitable. Considering that MMKV uses `mmap` under the hood, a decentralized architecture is a natural choice: by mapping the file into each accessing process's memory space, adding appropriate process locks, and handling data synchronization, we can achieve multi-process concurrent access.
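To make the decentralized approach concrete, here is a minimal sketch (not MMKV's actual code) of mapping a file with `MAP_SHARED` so that every process sees the same bytes; the path, size handling, and lack of error checking are purely illustrative.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Map a file into this process's address space. Because the mapping is
// MAP_SHARED, writes made by one process become visible to every other
// process that has mapped the same file.
void* mapSharedFile(const char* path, size_t size) {
    int fd = open(path, O_RDWR | O_CREAT, 0660);
    ftruncate(fd, static_cast<off_t>(size)); // ensure the file is large enough to map
    void* ptr = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                               // the mapping stays valid after close
    return ptr;
}
```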
Implementing a decentralized architecture is non-trivial. Android, being a stripped-down Linux, has limited IPC component support. For example, the first thing that comes to mind for a process lock is `pthread_mutex` from the pthread library: a `pthread_mutex` created in shared memory can serve as a process lock. However, Android's `pthread_mutex` does not guarantee robustness: if a process holding a `pthread_mutex` is killed, the system will not clean up after it, the lock is left behind forever, and other processes waiting on it starve. Other IPC components such as semaphores and condition variables share this issue. And Android is notorious for killing processes aggressively.
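For reference, this is roughly what the shared-memory `pthread_mutex` approach looks like, sketched with an illustrative file path and no error handling; the missing piece on Android is a robust-mutex guarantee, so a dead owner leaves the mutex locked forever.

```cpp
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

// Place a pthread_mutex_t in a shared file mapping so it can be locked across processes.
// In practice only the process that creates the file should initialize the mutex.
pthread_mutex_t* createSharedMutex(const char* path) {
    int fd = open(path, O_RDWR | O_CREAT, 0660);
    ftruncate(fd, sizeof(pthread_mutex_t));
    auto* mutex = static_cast<pthread_mutex_t*>(mmap(
        nullptr, sizeof(pthread_mutex_t), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    close(fd);

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    // PTHREAD_PROCESS_SHARED lets the mutex be locked from different processes,
    // but nothing unlocks it if the holding process is killed.
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(mutex, &attr);
    pthread_mutexattr_destroy(&attr);
    return mutex;
}
```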
After research, the only robust options are open file descriptors and components built on them: file locks and Binder's death notifications (yes, Binder also relies on this cleanup mechanism; the open file in its case is `/dev/binder`).
We have two choices:
- File locks: Pros: naturally robust. Cons: no support for recursive locking or read-write lock upgrades/downgrades, which must be implemented manually.
- `pthread_mutex`: Pros: pthread supports recursive locking and read-write lock upgrades/downgrades. Cons: not robust, requiring manual cleanup.
For mutex cleanup, one potential solution is using Binder death notifications: Processes A and B register each other's death notifications to clean up when the other dies. However, a problematic scenario arises if only Process A exists—its death notification won't be handled, leaving a permanently locked mutex. Binder mandates that death notifications cannot be handled by the process itself, requiring another process, making this issue challenging.
After weighing options, we first use file locks as simple mutexes for MMKV's multi-process development and address recursive locking and lock upgrades/downgrades later.
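As a starting point, a file lock can be wrapped into a simple, non-recursive mutex like the sketch below (illustrative, not MMKV's actual class); the kernel releases the lock automatically if the holding process dies, which gives us the robustness `pthread_mutex` lacks.

```cpp
#include <fcntl.h>
#include <unistd.h>

// A simple cross-process mutex built on an fcntl() record lock over the whole file.
class FileMutex {
    int m_fd;
public:
    explicit FileMutex(const char* path) : m_fd(open(path, O_RDWR | O_CREAT, 0660)) {}
    ~FileMutex() { if (m_fd >= 0) close(m_fd); }

    void lock() {
        struct flock lk = {};
        lk.l_type = F_WRLCK;          // exclusive lock
        lk.l_whence = SEEK_SET;       // l_start = 0, l_len = 0 covers the whole file
        fcntl(m_fd, F_SETLKW, &lk);   // blocking acquire
    }
    void unlock() {
        struct flock lk = {};
        lk.l_type = F_UNLCK;
        lk.l_whence = SEEK_SET;
        fcntl(m_fd, F_SETLK, &lk);
    }
};
```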
Let's briefly review MMKV's original logic. MMKV essentially `mmap`s a file into a block of memory and appends new key-values to that memory. When capacity is reached, it defragments and rewrites the data to free space; if space is still insufficient, it doubles the memory size. For duplicate keys in the memory file, MMKV uses the last written value. To maintain consistency, other processes must handle three scenarios: write pointer growth, memory defragmentation, and memory expansion. But first, how do other processes detect these changes?
- Write Pointer Synchronization: Each process caches its own write pointer. When writing a key-value, the latest write pointer position is also written to the `mmap` memory. Each process compares its cached pointer with the `mmap` write pointer; if they differ, another process has written data. MMKV already stores the valid memory size in the file header, which coincides with the write pointer's memory offset, so we reuse this value to synchronize the write pointer.
- Detecting Memory Defragmentation: Use a monotonically increasing sequence number, incremented on each defragmentation and stored in `mmap` memory; each process caches this number. Comparing sequence numbers reveals whether another process has triggered defragmentation.
- Detecting Memory Expansion: MMKV attempts defragmentation before expanding memory, so memory expansion can be treated like defragmentation. The new size is obtained from the file size, avoiding extra storage in `mmap` memory.
State synchronization pseudocode:
```cpp
void checkLoadData() {
    if (m_sequence != mmapSequence()) {
        m_sequence = mmapSequence();
        if (m_size != fileSize()) {
            m_size = fileSize();
            // Handle memory expansion
        } else {
            // Handle memory defragmentation
        }
    } else if (m_actualSize != mmapActualSize()) {
        auto lastPosition = m_actualSize;
        m_actualSize = mmapActualSize();
        // Handle write pointer growth
    } else {
        // No changes
        return;
    }
}
```
When a process detects `mmap` write pointer growth, another process has appended new key-values. These new entries sit after the original pointer position and may overwrite existing keys. The process reads the new entries, inserts or replaces them in its cache, and syncs its write pointer.
```cpp
auto lastPosition = m_actualSize;
m_actualSize = mmapActualSize();
// Handle write pointer growth
auto bufferSize = m_actualSize - lastPosition;
auto buffer = Buffer(lastPosition, bufferSize);
map<string, Buffer> dictionary = decodeMap(buffer);
for (auto& itr : dictionary) {
    // m_cache remains valid
    m_cache[itr.first] = itr.second;
}
```
When defragmentation has occurred, all previously cached key-values become invalid. The simplest approach is to reload everything from scratch.
```cpp
// Handle memory defragmentation
m_actualSize = mmapActualSize();
auto buffer = Buffer(0, m_actualSize);
m_cache = decodeMap(buffer);
```
Memory expansion always follows defragmentation, so the handling is identical to defragmentation.
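A rough sketch of the expansion branch, assuming a hypothetical `remapFile()` helper that re-runs `mmap` with the new file size; after remapping, the reload is the same as in the defragmentation case above.

```cpp
// Handle memory expansion (sketch)
m_size = fileSize();
remapFile(m_size);                 // hypothetical helper: munmap the old region, mmap the new size
m_actualSize = mmapActualSize();
auto buffer = Buffer(0, m_actualSize);
m_cache = decodeMap(buffer);
```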
Now that data synchronization is complete, we address locking: recursive locks and lock upgrades/downgrades.
- Recursive Locks: A process/thread already holding the lock can lock it again without deadlocking, and unlocking the inner lock does not release the outer one. File locks satisfy the first requirement but not the second: they are state-based locks without a counter, so a single unlock releases everything, which makes recursive functions risky.
- Lock Upgrades/Downgrades: Upgrading a shared (read) lock to an exclusive (write) lock is supported but prone to deadlock: if two processes hold read locks and both attempt to upgrade, they deadlock. File locks also cannot be downgraded, since a downgrade fully releases the lock.
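To make the first problem concrete, here is a small illustrative snippet showing why plain `fcntl()` locks are not recursive: the kernel keeps lock state, not a count, so a single unlock drops the lock no matter how many times it was "acquired".

```cpp
#include <fcntl.h>
#include <unistd.h>

void recursionPitfall(int fd) {
    struct flock lk = {};
    lk.l_type = F_WRLCK;
    lk.l_whence = SEEK_SET;
    fcntl(fd, F_SETLKW, &lk);   // first acquire
    fcntl(fd, F_SETLKW, &lk);   // "recursive" acquire: succeeds, but no counter is kept

    lk.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &lk);    // one unlock releases the lock entirely,
                                // including the outer "acquire"
}
```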
To solve these, encapsulate file locks with read/write lock counters. Logic:
| Read Counter | Write Counter | Add Read Lock | Add Write Lock | Remove Read Lock | Remove Write Lock |
|---|---|---|---|---|---|
| 0 | 0 | Add read | Add write | - | - |
| 0 | 1 | +1 | +1 | - | Release write |
| 0 | N | +1 | +1 | - | -1 |
| 1 | 0 | +1 | Release read, add write | Release read | - |
| 1 | 1 | +1 | +1 | -1 | Add read |
| 1 | N | +1 | +1 | -1 | -1 |
| N | 0 | +1 | Release read, add write | -1 | - |
| N | 1 | +1 | +1 | -1 | Add read |
| N | N | +1 | +1 | -1 | -1 |
Key points:
- When adding a write lock while already holding a read lock, first try the write lock. If `try_lock` fails (another process also holds a read lock), release our read lock before blocking on the write lock, to avoid deadlock.
- When releasing a write lock while still holding a read lock, add a read lock to downgrade back to a shared lock.
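Putting the table and the two rules together, a counter-based wrapper might look like the following sketch (not MMKV's actual class); `FileLock` with `sharedLock()`, `exclusiveLock()`, `tryExclusiveLock()`, and `unlock()` is an assumed fcntl-based primitive.

```cpp
// Wrap a file lock with read/write counters to get recursive locking plus
// upgrade/downgrade, following the table above.
class InterProcessLock {
    FileLock m_fileLock;     // assumed fcntl-based primitive
    int m_readCount = 0;
    int m_writeCount = 0;
public:
    void readLock() {
        // only the first read lock touches the underlying file lock,
        // and not at all if we already hold the exclusive lock
        if (m_readCount++ == 0 && m_writeCount == 0) {
            m_fileLock.sharedLock();
        }
    }
    void writeLock() {
        if (m_writeCount++ == 0) {
            if (m_readCount > 0) {
                // upgrade: if the try fails, another process also holds a read lock,
                // so release our read lock first, then block on the write lock
                if (!m_fileLock.tryExclusiveLock()) {
                    m_fileLock.unlock();
                    m_fileLock.exclusiveLock();
                }
            } else {
                m_fileLock.exclusiveLock();
            }
        }
    }
    void readUnlock() {
        if (--m_readCount == 0 && m_writeCount == 0) {
            m_fileLock.unlock();
        }
    }
    void writeUnlock() {
        if (--m_writeCount == 0) {
            if (m_readCount > 0) {
                m_fileLock.sharedLock();   // downgrade back to a shared lock
            } else {
                m_fileLock.unlock();
            }
        }
    }
};
```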
A simple test created two Services to benchmark MMKV, `MultiProcessSharedPreferences`, and SQLite. Code: git repo.
Test environment: Pixel 2 XL 64G, Android 8.1.0; unit: ms. Each test looped 1000 times. `MultiProcessSharedPreferences` used `apply()`; SQLite had WAL enabled.
| Test Case | MMKV | MultiProcessSharedPreferences | SQLite |
|---|---|---|---|
| Write 1000 times | 25 | 98 | 207 |
| Read 1000 times | 4 | 36 | 17 |
MMKV significantly outperforms alternatives in both read and write operations.