
Threads


db_converter supports parallel deployment of packets. To implement this, a pair of threads is created for each processed database when db_converter is launched:

* lock_observer - observes locks and manages the queries being executed
* worker_db_func - performs steps and actions, and decides whether to continue or to interrupt packet execution when various kinds of exceptions occur

The --seq option disables parallel execution when --db-name contains a comma-separated list of several databases or the value ALL.

```python
class DBCCore:
    @threaded
    def lock_observer(self, thread_name, db_conn_str, db_name, app_name_postfix):
        ...

    @threaded
    def worker_db_func(self, thread_name, db_conn_str, db_name, packet_name, read_only):
        ...
```
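
The @threaded decorator itself is not shown on this page. As an illustration only (an assumption about its behaviour, not the project's actual code), such a decorator typically wraps the call in a daemon threading.Thread and returns the thread object:

```python
import threading


def threaded(fn):
    """Sketch of a @threaded decorator: run the wrapped function in its own
    daemon thread and return the Thread object so the caller can join it.
    This is only an illustration; db_converter's real decorator may differ."""
    def wrapper(*args, **kwargs):
        thread = threading.Thread(target=fn, args=args, kwargs=kwargs, daemon=True)
        thread.start()
        return thread
    return wrapper
```

With such a decorator, calling lock_observer(...) and worker_db_func(...) returns immediately, so both threads for a database run concurrently.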

For each processed database, one lock_observer thread is created. Its tasks are:

* tracking db_converter's queries that block other queries from being executed, and canceling the blocking query when the cancel_blocker_tx_timeout timeout is reached
* tracking db_converter's queries that are waiting for the processed relation to be unlocked, and canceling the waiting queries when the cancel_wait_tx_timeout timeout is reached

The lock_observer thread runs as long as the worker_db_func thread for the same database is running.
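
As a rough illustration of these tasks (a sketch under assumptions, not db_converter's actual implementation; the worker_is_alive callable and the use of application_name with app_name_postfix to identify db_converter's own sessions are hypothetical), a lock observer for one database could periodically inspect pg_stat_activity and cancel its own blocking or waiting queries:

```python
import time

import psycopg2

# Illustrative values; in db_converter these timeouts come from the configuration.
CANCEL_BLOCKER_TX_TIMEOUT = 30  # seconds, corresponds to cancel_blocker_tx_timeout
CANCEL_WAIT_TX_TIMEOUT = 30     # seconds, corresponds to cancel_wait_tx_timeout


def lock_observer_sketch(db_conn_str, app_name_postfix, worker_is_alive):
    """Cancel this tool's queries that either block others or wait on a lock too long.

    Simplified sketch: the real lock_observer also logs what it cancels and
    coordinates with the paired worker_db_func thread.
    """
    conn = psycopg2.connect(db_conn_str)
    conn.autocommit = True
    try:
        while worker_is_alive():  # run only while the paired worker_db_func is running
            with conn.cursor() as cur:
                # 1. Cancel our sessions that hold locks other sessions are waiting on
                #    and whose transaction is older than cancel_blocker_tx_timeout.
                cur.execute("""
                    SELECT pg_cancel_backend(a.pid)
                    FROM pg_stat_activity a
                    WHERE a.application_name LIKE %s
                      AND a.pid IN (SELECT unnest(pg_blocking_pids(w.pid))
                                    FROM pg_stat_activity w)
                      AND now() - a.xact_start > make_interval(secs => %s)
                """, ('%' + app_name_postfix, CANCEL_BLOCKER_TX_TIMEOUT))
                # 2. Cancel our sessions that have been waiting on a lock longer
                #    than cancel_wait_tx_timeout.
                cur.execute("""
                    SELECT pg_cancel_backend(a.pid)
                    FROM pg_stat_activity a
                    WHERE a.application_name LIKE %s
                      AND a.wait_event_type = 'Lock'
                      AND now() - a.xact_start > make_interval(secs => %s)
                """, ('%' + app_name_postfix, CANCEL_WAIT_TX_TIMEOUT))
            time.sleep(1)
    finally:
        conn.close()
```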

An interrupted query is retried every conn_exception_sleep_interval seconds unless --skip-step-cancel or --skip-action-cancel is enabled. The number of attempts is not limited.
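
A hedged sketch of this retry behaviour (run_step is a hypothetical callable standing in for one step or action; only conn_exception_sleep_interval, --skip-step-cancel and --skip-action-cancel come from db_converter itself):

```python
import time

import psycopg2

CONN_EXCEPTION_SLEEP_INTERVAL = 10  # seconds, corresponds to conn_exception_sleep_interval


def run_with_retries(run_step, skip_step_cancel=False, skip_action_cancel=False):
    """Repeat a cancelled step until it succeeds, unless retries are disabled.

    run_step is a hypothetical callable that executes one step of the packet and
    raises psycopg2.errors.QueryCanceled when its query has been cancelled.
    """
    while True:
        try:
            return run_step()
        except psycopg2.errors.QueryCanceled:
            if skip_step_cancel or skip_action_cancel:
                return None  # --skip-step-cancel / --skip-action-cancel: give up on this step
            # Otherwise wait and repeat; the number of attempts is not limited.
            time.sleep(CONN_EXCEPTION_SLEEP_INTERVAL)
```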


The higher the values of the cancel_wait_tx_timeout and cancel_blocker_tx_timeout timeouts, the greater the impact on the performance of regular queries when locks intersect.
