
[Not a bug] Proper usage of with_advisory_lock #36

Open
donatoaz opened this issue Dec 26, 2018 · 0 comments
Hi! First off, thanks for the attention and for the gem.

Rails: 5.0.0
psql (PostgreSQL) 10.3 (Debian 10.3-1.pgdg90+1)

My rails server is run inside a docker container with

bundle exec puma -t 0:5 -p ${PORT:-3000} -e ${RAILS_ENV:-development}

DB config has a pool of 5 connections as well.

I am trying to figure out what I might be doing wrong in using this gem.

This is an excerpt from a service object:

  def call
    Rails.logger.debug "Running thread #{Thread.current.object_id}"
    ActiveRecord::Base.with_advisory_lock_result("backlog-lock", timeout_seconds: 5) do
      if servicing.work_order.in_progress?
        servicing.update(status: :open, redone: true, approval_status: nil)
      else
        servicing.rescheduled!
        add_duplicated_servicing
        backlog_work_order
      end
    end
  end

This code gets executed as a result of a controller action, and to test I am executing two curl requests in parallel.

The behaviour I don't understand: the second thread to reach with_advisory_lock_result seems to block the first one, instead of simply polling for up to 5 seconds while the first thread keeps working.
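To make the expectation concrete, here is the behaviour I assumed, simulated in plain Ruby with a Mutex standing in for the Postgres advisory lock (this is only my mental model of the gem's polling, not its actual implementation): the waiter's non-blocking try_lock attempts fail repeatedly while the holder keeps making progress, so the two threads' log lines interleave.

```ruby
require "thread"

lock = Mutex.new # stands in for the Postgres advisory lock
log  = Queue.new

lock.lock # the main thread plays the first request: it already holds the lock

waiter = Thread.new do
  deadline = Time.now + 1.0
  acquired = false
  loop do
    acquired = lock.try_lock # non-blocking attempt, like pg_try_advisory_lock
    break if acquired || Time.now > deadline
    log << "waiter: try_lock failed, polling again"
    sleep 0.05
  end
  lock.unlock if acquired
  log << "waiter: acquired=#{acquired}"
end

# The lock holder keeps doing its work; the waiter's polling never blocks it.
3.times { log << "holder: still working while waiter polls"; sleep 0.1 }
lock.unlock # the first request finishes and releases the lock
waiter.join

entries = []
entries << log.pop until log.empty?
puts entries
```

Running this prints the holder's three work lines interleaved with the waiter's failed polls, ending with the waiter acquiring the lock — the kind of interleaving I expected to see in the Rails log below.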

The log output from executing the code above is:

Started POST "/api/servicings/92/reschedule?redo_reason=testing" for 172.18.0.1 at 2018-12-26 20:46:17 +0000
Started POST "/api/servicings/91/reschedule?redo_reason=testing" for 172.18.0.1 at 2018-12-26 20:46:17 +0000
Processing by API::ServicingsController#reschedule as */*
  Parameters: {"redo_reason"=>"testing", "id"=>"91"}
Processing by API::ServicingsController#reschedule as */*
  Parameters: {"redo_reason"=>"testing", "id"=>"92"}
  User Load (1.6ms)  SELECT  "users".* FROM "users" WHERE "users"."deleted_at" IS NULL AND "users"."id" = $1 LIMIT $2  [["id", 1], ["LIMIT", 1]]
  User Load (7.7ms)  SELECT  "users".* FROM "users" WHERE "users"."deleted_at" IS NULL AND "users"."id" = $1 LIMIT $2  [["id", 1], ["LIMIT", 1]]
  Servicing Load (4.7ms)  SELECT  "servicings".* FROM "servicings" WHERE "servicings"."deleted_at" IS NULL AND "servicings"."id" = $1 LIMIT $2  [["id", 92], ["LIMIT", 1]]
  CACHE (0.0ms)  SELECT  "users".* FROM "users" WHERE "users"."deleted_at" IS NULL AND "users"."id" = $1 LIMIT $2  [["id", 1], ["LIMIT", 1]]
  CACHE (0.3ms)  SELECT  "servicings".* FROM "servicings" WHERE "servicings"."deleted_at" IS NULL AND "servicings"."id" = $1 LIMIT $2  [["id", 92], ["LIMIT", 1]]
  Servicing Load (9.5ms)  SELECT  "servicings".* FROM "servicings" WHERE "servicings"."deleted_at" IS NULL AND "servicings"."id" = $1 LIMIT $2  [["id", 91], ["LIMIT", 1]]
  CACHE (0.0ms)  SELECT  "users".* FROM "users" WHERE "users"."deleted_at" IS NULL AND "users"."id" = $1 LIMIT $2  [["id", 1], ["LIMIT", 1]]
  CACHE (0.5ms)  SELECT  "servicings".* FROM "servicings" WHERE "servicings"."deleted_at" IS NULL AND "servicings"."id" = $1 LIMIT $2  [["id", 91], ["LIMIT", 1]]
   (2.0ms)  BEGIN
   (2.8ms)  BEGIN
   (0.3ms)  COMMIT
   (0.4ms)  COMMIT
Running thread 47076409369740
Running thread 69856980992120
   (1.5ms)  SELECT pg_try_advisory_lock(986825911,0) AS t3f11a179e210d135eade537bf1cf4340 /* backlog-lock */
   (0.8ms)  SELECT pg_try_advisory_lock(986825911,0) AS t27331309003857f07af7ba8c5d482a7f /* backlog-lock */
  WorkOrder Load (11.5ms)  SELECT  "work_orders".* FROM "work_orders" WHERE "work_orders"."deleted_at" IS NULL AND "work_orders"."id" = $1 LIMIT $2  [["id", 39], ["LIMIT", 1]]
   (1.3ms)  BEGIN
   (1.3ms)  COMMIT
  Location Load (3.5ms)  SELECT  "locations".* FROM "locations" WHERE "locations"."deleted_at" IS NULL AND "locations"."id" = $1 LIMIT $2  [["id", 1], ["LIMIT", 1]]
  WorkOrder Load (2.6ms)  SELECT  "work_orders".* FROM "work_orders" WHERE "work_orders"."deleted_at" IS NULL AND "work_orders"."status" IN ('open', 'in_progress') AND "work_orders"."backlog" = $1 AND "work_orders"."location_id" = 1 LIMIT $2  [["backlog", true], ["LIMIT", 1]]
   (0.4ms)  BEGIN
  SQL (5.4ms)  INSERT INTO "work_orders" ("contact", "location_id", "description", "backlog", "starts_at", "timezone", "ends_at", "created_at", "updated_at") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9) RETURNING "id"  [["contact", "Donato test"], ["location_id", 1], ["description", "Test"], ["backlog", true], ["starts_at", 2018-12-27 12:00:00 UTC], ["timezone", "America/New_York"], ["ends_at", 2018-12-27 18:00:00 UTC], ["created_at", 2018-12-26 20:46:18 UTC], ["updated_at", 2018-12-26 20:46:18 UTC]]
   (3.9ms)  COMMIT
   (0.7ms)  BEGIN
   (0.5ms)  COMMIT
  Area Load (11.5ms)  SELECT  "areas".* FROM "areas" WHERE "areas"."deleted_at" IS NULL AND "areas"."id" = $1 LIMIT $2  [["id", 2], ["LIMIT", 1]]
   (99.2ms)  SELECT pg_try_advisory_lock(986825911,0) AS t00e16279f8d89a1b21ffe48bd6306f09 /* backlog-lock */
   (1.1ms)  SELECT pg_try_advisory_lock(986825911,0) AS tf3d6a44a3253fb07f24c8015b242f75b /* backlog-lock */
   (1.3ms)  SELECT pg_try_advisory_lock(986825911,0) AS tccae0e9b109ac8a5fa3dacabd26ea120 /* backlog-lock */
   (0.7ms)  SELECT pg_try_advisory_lock(986825911,0) AS t705f95fd6f580325093a5c300580671e /* backlog-lock */
   (0.8ms)  SELECT pg_try_advisory_lock(986825911,0) AS tbfeb8cd03aef66de28ea5193a5772ce2 /* backlog-lock */
   (0.6ms)  SELECT pg_try_advisory_lock(986825911,0) AS t331e00c084d8fd5f7bebfffb5e1ad7f0 /* backlog-lock */
   (0.7ms)  SELECT pg_try_advisory_lock(986825911,0) AS t33c88295291c7d1a319c46a7043d5115 /* backlog-lock */
   (0.6ms)  SELECT pg_try_advisory_lock(986825911,0) AS t1e5a62030123421fa68169c6bc3629c9 /* backlog-lock */
   (0.6ms)  SELECT pg_try_advisory_lock(986825911,0) AS t52bc53f618514af84bc297cdacc9d25a /* backlog-lock */

This last query repeats for the full 5 seconds (the timeout passed to with_advisory_lock_result), then gives up with no lock acquired, and only then does the first thread resume its work.
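The give-up path can be sketched the same way (again plain Ruby with a Mutex as a stand-in, assuming the gem loops on pg_try_advisory_lock until the timeout elapses): when the holder never releases within the timeout window, the waiter's loop simply returns false; at no point should it block the holder.

```ruby
lock    = Mutex.new # stands in for the Postgres advisory lock
started = Queue.new

holder = Thread.new do
  lock.synchronize do
    started << true # signal that the lock is now held
    sleep 1 # hold the lock for longer than the waiter's timeout
  end
end
started.pop # make sure the holder really owns the lock before polling starts

deadline = Time.now + 0.3 # stands in for timeout_seconds: 5
attempts = 0
acquired = false
until acquired || Time.now > deadline
  acquired = lock.try_lock # non-blocking, like pg_try_advisory_lock
  attempts += 1
  sleep 0.05 unless acquired
end
lock.unlock if acquired

puts "acquired=#{acquired} after #{attempts} attempts"
holder.join
```

Here the waiter makes several failed attempts and then reports acquired=false, mirroring the repeated pg_try_advisory_lock lines in my log — except that in my log the holder appears to be stalled the whole time.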

From the lines

Running thread 47076409369740
Running thread 69856980992120
   (1.5ms)  SELECT pg_try_advisory_lock(986825911,0) AS t3f11a179e210d135eade537bf1cf4340 /* backlog-lock */
   (0.8ms)  SELECT pg_try_advisory_lock(986825911,0) AS t27331309003857f07af7ba8c5d482a7f /* backlog-lock */

I presume there are indeed two threads running in parallel (different Thread.current.object_id values), almost in sync with each other.

I expected to see logs of the first thread doing its work intermingled with logs of the second thread querying pg_try_advisory_lock, which does not seem to be the case.

Perhaps I am missing a parameter, or a DB config in Rails....

@donatoaz donatoaz changed the title [Not a bug] Proper usage of transaction option [Not a bug] Proper usage of with_advisory_lock Dec 26, 2018