This repository has been archived by the owner on Apr 26, 2024. It is now read-only.
Feature request: delete old state event data #8253
Labels: A-Admin-API, A-Disk-Space (things which fill up the disk), T-Enhancement (new features, changes in functionality, improvements in performance, or user-facing enhancements)
Hi,
In our production use of Matrix, some rooms have bots that post state events like this:
PUT /_matrix/client/r0/rooms/{roomId}/state/{eventType}/{stateKey}
More info here: https://matrix.org/docs/spec/client_server/latest#put-matrix-client-r0-rooms-roomid-state-eventtype-statekey
Every x seconds, the bots either update the content of the event with the same {stateKey} or post new events with other {stateKey} values.
Over time the database is bound to grow and consume all available disk space, since the JSON data and the rows corresponding to these state events are never deleted.
So, to stop the database from growing while still being able to use the room, we tried:
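The pattern above can be sketched as follows. All identifiers (homeserver, room id, event type, state key, token) are hypothetical, and the request is only built, not sent; a real client would also percent-encode the room id:

```python
import json
import urllib.request

# Hypothetical identifiers, for illustration only.
homeserver = "https://matrix.example.com"
room_id = "!abc123:example.com"
event_type = "com.example.sensor_reading"
state_key = "sensor-42"

url = (f"{homeserver}/_matrix/client/r0/rooms/{room_id}"
       f"/state/{event_type}/{state_key}")
body = json.dumps({"value": 17}).encode()

# Each PUT with the same state key replaces the room's *current* state,
# but the homeserver keeps every superseded state event in its database.
req = urllib.request.Request(
    url,
    data=body,
    method="PUT",
    headers={"Authorization": "Bearer SECRET_TOKEN",
             "Content-Type": "application/json"},
)
```

Repeating this PUT every x seconds accumulates one stored state event per call, which is exactly the growth described above.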
(K.O.) Purge history as described here:
https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst#purge-history-api
With a body specifying the timestamp to purge up to, the purge reports "complete", and all messages up to that timestamp are indeed deleted, but the state events are still there.
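For concreteness, the request body for the linked purge history admin API can be built like this (room id and timestamp are hypothetical; the parameter names are the documented ones):

```python
import json

# Hypothetical room id; endpoint and parameters per the linked
# purge history admin API docs.
room_id = "!abc123:example.com"
url = f"https://matrix.example.com/_synapse/admin/v1/purge_history/{room_id}"

body = json.dumps({
    # Purge events sent before this timestamp (in milliseconds)...
    "purge_up_to_ts": 1598000000000,
    # ...including events sent by local users (off by default).
    "delete_local_events": True,
})
```

Even with such a body, only message history before the timestamp is removed; the accumulated state events survive the purge.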
Correct me if I am wrong, but the purge history API does not delete state events from rooms?
(K.O.) Deleting the data directly in the database:
Deleting the rows corresponding to the state events directly in the database makes the room unusable, as in:
#2919 (comment)
Isn't there a way to delete event rows from the necessary tables without breaking the room?
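A naive direct deletion would look roughly like this (the table names are assumed from Synapse's schema, and the room id is hypothetical; do NOT run this, since as the linked comment shows it leaves the room unusable):

```sql
-- Naive sketch only: removes the stored JSON of every state event
-- in one room, which breaks state resolution for that room.
DELETE FROM event_json
 WHERE event_id IN (SELECT event_id
                      FROM state_events
                     WHERE room_id = '!abc123:example.com');
```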
(K.O.) Message retention is not useful, since "Retention is only considered for non-state events.", quoted from:
https://github.com/matrix-org/matrix-doc/blob/matthew/msc1763/proposals/1763-configurable-retention-periods.md#room-admin-specified-per-room-retention
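For reference, the server-side retention configuration from that proposal looks like this in homeserver.yaml (values illustrative); per the quote above, it only ever expires non-state events, so it cannot help here:

```yaml
retention:
  enabled: true
  default_policy:
    min_lifetime: 1d
    max_lifetime: 1y
```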
Is there really no way to delete state events from rooms and the database to prevent ever-growing disk usage?
Original post: #8114