Fix used_memory_dataset underflow due to miscalculated used_memory_overhead #3005

Merged

enjoy-binbin merged 6 commits into valkey-io:unstable on Jan 15, 2026
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
## unstable #3005 +/- ##
============================================
- Coverage 74.34% 74.26% -0.09%
============================================
Files 129 129
Lines 70908 70914 +6
============================================
- Hits 52714 52661 -53
- Misses 18194 18253 +59
Member
We'll backport it as needed after the main PR is submitted.
madolson
reviewed
Jan 5, 2026
…overhead Signed-off-by: Ace Breakpoint <chemistudio@gmail.com>
Corrected the spelling of `necessary` in comments. Signed-off-by: bpint <chemistudio@gmail.com> Signed-off-by: Ace Breakpoint <chemistudio@gmail.com>
Signed-off-by: Ace Breakpoint <chemistudio@gmail.com>
Co-authored-by: Binbin <binloveplay1314@qq.com> Signed-off-by: bpint <chemistudio@gmail.com>
Co-authored-by: Binbin <binloveplay1314@qq.com> Signed-off-by: bpint <chemistudio@gmail.com>
madolson
reviewed
Jan 5, 2026
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
enjoy-binbin
approved these changes
Jan 6, 2026
Member
enjoy-binbin
left a comment
Would you be able to write a TCL test case to cover this? If not, I can take a look later.
Contributor
Author
Really sorry, but I am familiar with neither the Tcl language nor the principles of the test cases.
Member
@bpint Merged, thank you.
Contributor
Author
Respect to @madolson, @enjoy-binbin, and all maintainers for your efficient and awesome work.
arshidkv12
pushed a commit
to arshidkv12/valkey
that referenced
this pull request
Jan 23, 2026
…erhead (valkey-io#3005) The metric `used_memory_dataset` turned into an insanely large number close to 2^64 (actually overflowed negative value), as reported in valkey-io#2994. ## Double-Counted database memory When server starts, the global variable `server.initial_memory_usage` is used to record a memory baseline in InitServerLast. This `server.initial_memory_usage` has clearly included initial database memory, since databases are created in initServer. In function getMemoryOverheadData, the `mem_total` is firstly assigned the baseline, which includes initial database memory. And then all extra memory usage of databases are added to mem_total. The initial database memory are therefore counted TWICE. This eventually caused wrongly larger `used_memory_overhead`. For a database with only a couple of keys, the `used_memory_overhead` is easily larger than `used_memory` and causes an overflowed `used_memory_dataset`. ## Missed Empty Databases In function getMemoryOverheadData(), kvstores without any allocated hashtable are ignored from calculation: ```c if (db == NULL || !kvstoreNumAllocatedHashtables(db->keys)) continue; ``` However, even the kvstore has no allocated hashtable, there are still some memory allocated by kvstoreCreate(), including `hashtable_size_index`, which can be larger than 128 KiB. On the contrary, this caused wrongly smaller `used_memory_overhead` for an empty database. When we insert only ONE key to the database, the database is suddenly taken into account, and `used_memory_overhead` will increase (for `used_memory_dataset` decrease) by more than 128 KiB due to the single key insertion. Signed-off-by: Ace Breakpoint <chemistudio@gmail.com> Signed-off-by: bpint <chemistudio@gmail.com> Signed-off-by: Madelyn Olson <madelyneolson@gmail.com> Co-authored-by: Binbin <binloveplay1314@qq.com> Co-authored-by: Madelyn Olson <madelyneolson@gmail.com> Signed-off-by: arshidkv12 <arshidkv12@gmail.com>
The same commit (with the message above) was subsequently referenced by pushes to other branches and forks:

zuiderkwast pushed a commit to zuiderkwast/placeholderkv on Jan 29, 2026
roshkhatri pushed a commit to roshkhatri/valkey on Jan 29, 2026 (five pushes) and Jan 30, 2026
zuiderkwast pushed a commit to zuiderkwast/placeholderkv on Jan 30, 2026
zuiderkwast pushed a commit that referenced this pull request on Feb 3, 2026
roshkhatri pushed a commit to roshkhatri/valkey on Feb 4, Feb 18, and Feb 20, 2026
harrylin98 pushed a commit to harrylin98/valkey_forked on Feb 19, 2026
madolson added two commits that referenced this pull request on Feb 24, 2026
Describe the bug
The metric `used_memory_dataset` turned into an insanely large number close to 2^64 (actually an overflowed negative value), as reported in #2994 (comment).

Versions affected include 8.x and 9.x. (Should I submit multiple PRs?)
Analysis
We found that a miscalculated `used_memory_overhead` is the core reason.

Double-Counted Database Memory
When the server starts, the global variable `server.initial_memory_usage` is used to record a memory baseline in InitServerLast(). This `server.initial_memory_usage` already includes initial database memory, since databases are created in initServer().

In function getMemoryOverheadData(), `mem_total` is first assigned the baseline, which includes initial database memory, and then all extra memory usage of the databases is added to `mem_total`. The initial database memory is therefore counted TWICE.

This eventually caused a wrongly larger `used_memory_overhead`. For a database with only a couple of keys, `used_memory_overhead` is easily larger than `used_memory`, causing an underflowed `used_memory_dataset`.

Missed Empty Databases
In function getMemoryOverheadData(), kvstores without any allocated hashtable are excluded from the calculation via `if (db == NULL || !kvstoreNumAllocatedHashtables(db->keys)) continue;`.
However, even if the kvstore has no allocated hashtable, some memory is still allocated by kvstoreCreate(), including `hashtable_size_index`, which can be larger than 128 KiB.

Conversely, this caused a wrongly smaller `used_memory_overhead` for an empty database. When we insert only ONE key into the database, the database is suddenly taken into account, and `used_memory_overhead` will increase (and `used_memory_dataset` decrease) by more than 128 KiB due to that single key insertion.
In this PR,
a) We record the zmalloc_used_memory() change before and after database creation during server initialization. And later when we decide the baseline
server.initial_memory_usage, the initial database memory usage is excluded.b) We remove the condition kvstoreNumAllocatedHashtables() in getMemoryOverheadData(), to make the
used_memory_overheadreasonable for both empty and non-empty kvstores.