Redis keys and values cannot be larger than 512 MB. The values of Cactus disk records have no such limit, so Cactus crashes with a segfault whenever a record larger than 512 MB is stored.
Kyoto Tycoon also has an effective record size limit, which Sonlib assumes to be 10 MB. Records bigger than this are split up with special "split record" logic.
I suspect that this splitting logic, which is not advertised as being perfectly robust, leads to periodic crashes and database corruption; the bigger the record, the greater the risk is likely to be.
I think we can mirror this splitting logic in Redis using its native features, in particular its ability to store a list of values under a single key, and hopefully be more robust.
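A minimal sketch of what that list-based splitting could look like. The chunk size, key handling, and the in-memory stub below are all illustrative, not Cactus/Sonlib code; a real implementation would issue the same `RPUSH`/`LRANGE`/`DEL` commands through redis-py against a live server.

```python
# Sketch: store an oversized record as a Redis list of fixed-size chunks,
# keeping every individual value well under Redis's 512 MB cap.
# A dict-backed stub stands in for the Redis list commands so the
# splitting logic itself is self-contained and testable.

CHUNK_SIZE = 10 * 1024 * 1024  # 10 MiB, mirroring Sonlib's assumed limit

class FakeRedisLists:
    """Minimal stand-in for the three Redis commands used below."""
    def __init__(self):
        self.store = {}

    def rpush(self, key, value):
        # RPUSH appends to the tail of the list at key
        self.store.setdefault(key, []).append(value)

    def lrange(self, key, start, end):
        # Redis LRANGE uses inclusive indices; -1 means the last element
        items = self.store.get(key, [])
        return items if end == -1 else items[start:end + 1]

    def delete(self, key):
        self.store.pop(key, None)

def set_record(r, key, value):
    """Split value into CHUNK_SIZE pieces and push them onto one list key."""
    r.delete(key)  # drop any stale chunks from a previous write
    for i in range(0, len(value), CHUNK_SIZE):
        r.rpush(key, value[i:i + CHUNK_SIZE])

def get_record(r, key):
    """Reassemble the record by concatenating the list elements in order."""
    return b"".join(r.lrange(key, 0, -1))
```

Because the chunks live under a single key, reads and deletes stay atomic per record at the Redis level, which is the robustness win over hand-rolled split-record bookkeeping spread across multiple keys.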
For the record, this is the ktserver error I get with this data:
RuntimeError: Command /usr/bin/time -v cactus_halGenerator --logLevel INFO --cactusDisk '<st_kv_database_conf type="kyoto_tycoon">
<kyoto_tycoon database_dir="fakepath" host="172.31.3.191" port="26354" />
</st_kv_database_conf>
' --secondaryDisk '<st_kv_database_conf type="kyoto_tycoon">
<kyoto_tycoon database_dir="fakepath" host="172.31.3.191" port="26181" />
</st_kv_database_conf>
' --referenceEventString Anc0 exited 128: stdout=None, stderr=Set up the flower disk
Set up the secondary database
Exception: ST_KV_DATABASE_EXCEPTION: An unknown database error occurred when we tried to bulk remove records from the database
caused by: ST_KV_DATABASE_EXCEPTION: stKVDatabase_bulkRemoveRecords with 20037496 records to update
caused by: ST_KV_DATABASE_EXCEPTION: kyoto tycoon bulk remove record failed: network error
Uncaught exception
Command exited with non-zero status 128