This repository has been archived by the owner on Apr 26, 2024. It is now read-only.
Description
These lines appear in our log files several times a day (around 5 per day). Our Synapse instance is very small. So far I have not noticed any problems with our Synapse server that seem to be related to this. However, the log level is WARN, so do I need to worry?
16:22:32 Starting db txn 'update_presence' from sentinel context
16:22:32 Starting db connection from sentinel context: metrics will be lost
16:23:27 [TXN OPERROR] {claim_e2e_one_time_keys-19645} could not serialize access due to concurrent update
16:23:27 0/5
16:23:27 [TXN OPERROR] {claim_e2e_one_time_keys-19645} could not serialize access due to concurrent update
16:23:27 1/5
16:23:27 [TXN OPERROR] {claim_e2e_one_time_keys-19648} could not serialize access due to concurrent update
16:23:27 0/5
16:23:27 [TXN OPERROR] {add_messages_to_device_inbox-19655} could not serialize access due to concurrent update
16:23:27 0/5
16:23:27 [TXN OPERROR] {add_messages_to_device_inbox-19654} could not serialize access due to concurrent update
16:23:27 0/5
16:23:27 [TXN OPERROR] {add_messages_to_device_inbox-19652} could not serialize access due to concurrent update
16:23:27 0/5
16:23:27 [TXN OPERROR] {add_messages_to_device_inbox-19655} could not serialize access due to concurrent update
16:23:27 1/5
16:23:32 Starting db txn 'update_presence' from sentinel context
16:23:32 Starting db connection from sentinel context: metrics will be lost
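For context on what I think is happening: "could not serialize access due to concurrent update" is PostgreSQL's standard serialization-failure error (SQLSTATE 40001), raised when two transactions touch the same rows under REPEATABLE READ or SERIALIZABLE isolation, and the "0/5", "1/5" lines look like retry attempts out of a limit of 5. Below is a minimal, hypothetical sketch of that kind of retry loop using psycopg2; it is not Synapse's actual code, and the names (run_with_retry, MAX_RETRIES) are mine for illustration.

```python
import psycopg2
from psycopg2 import errorcodes

MAX_RETRIES = 5  # assumed limit, matching the "N/5" counters in the log


def run_with_retry(conn, txn_fn):
    """Run txn_fn inside a transaction, retrying on serialization failures."""
    for attempt in range(MAX_RETRIES):
        try:
            with conn.cursor() as cur:
                txn_fn(cur)
            conn.commit()
            return
        except psycopg2.OperationalError as e:
            conn.rollback()
            # Only retry the specific "could not serialize access" case;
            # anything else is a real error and should propagate.
            if e.pgcode != errorcodes.SERIALIZATION_FAILURE:
                raise
            print(f"[TXN OPERROR] could not serialize access, attempt {attempt}/{MAX_RETRIES}")
    raise RuntimeError(f"transaction failed after {MAX_RETRIES} retries")
```

If that reading is right, each WARN line would simply mean one attempt lost the race and was retried, rather than data being lost.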
OS: Ubuntu 18.04
Version of package matrix-synapse-py3: 0.99.3+bionic1
The DB is PostgreSQL, and it is also accessed by MXISD.