Async PG Fix v2 #712

Open · wants to merge 6 commits into main


Conversation

@ishantd commented Mar 25, 2024

@levkk did all the heavy lifting; I just merged with main and tested with both Python and Node async clients.
I also removed a redundant variable and method.

Can fix #396.
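For context on the error in #396 ("Unexpected code: H"): in the PostgreSQL wire protocol the byte `H` means Flush when sent by a client but CopyOutResponse when sent by the backend, so code that only expects one direction's meaning can trip on async clients. A quick sketch of the relevant message type bytes, taken from the PostgreSQL protocol documentation (this table is illustrative, not PgCat code):

```javascript
// Selected PostgreSQL wire-protocol message type bytes. The same byte
// can mean different things depending on direction, which is relevant
// to the "Unexpected code: H" error in #396.
const backendMessages = {
  H: 'CopyOutResponse',
  G: 'CopyInResponse',
  W: 'CopyBothResponse',
  d: 'CopyData',
  c: 'CopyDone',
};

const frontendMessages = {
  H: 'Flush',
  S: 'Sync',
  d: 'CopyData',
  c: 'CopyDone',
};

console.log(`backend 'H' = ${backendMessages.H}, frontend 'H' = ${frontendMessages.H}`);
```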

@ishantd ishantd changed the title Ishant/async fix Async PG Fix v2 Mar 25, 2024
@epollia commented May 3, 2024

It works for me.
Can you fix the tests?

@ishantd (Author) commented May 3, 2024

> It works for me. Can you fix the tests?

Not sure if it's a problem in the code; I'm getting this in the test runs. The actual tests didn't even start running.

[screenshot of the failing test run]

@@ -1073,6 +1088,10 @@ impl Server {
'H' => {
self.in_copy_mode = true;
self.data_available = true;
A Contributor commented on this diff:

Is this line still meant to be here?
If so, the following conditional assignment is unnecessary.

@smcgivern (Contributor) commented

I am trying this with the example script from pg-query-stream: https://www.npmjs.com/package/pg-query-stream

const pg = require('pg')
var pool = new pg.Pool()
const QueryStream = require('pg-query-stream')
const JSONStream = require('JSONStream')

//pipe 1,000,000 rows to stdout without blowing up your memory usage
pool.connect((err, client, done) => {
  if (err) throw err
  const query = new QueryStream('SELECT * FROM generate_series(0, $1) num', [1000000])
  const stream = client.query(query)
  //release the client when the stream is finished
  stream.on('end', done)
  stream.pipe(JSONStream.stringify()).pipe(process.stdout)
})

If I connect directly to Postgres, I see output immediately, as expected (ignore the error, that's just because there is no error handler in the sample script):

$ time node ./ | head
[
{"num":0}
,
{"num":1}
,
{"num":2}
,
{"num":3}
,
{"num":4}
node:events:492
      throw er; // Unhandled 'error' event
      ^

Error: write EPIPE
    at afterWriteDispatched (node:internal/stream_base_commons:160:15)
    at writeGeneric (node:internal/stream_base_commons:151:3)
    at Socket._writeGeneric (node:net:952:11)
    at Socket._write (node:net:964:8)
    at writeOrBuffer (node:internal/streams/writable:447:12)
    at _write (node:internal/streams/writable:389:10)
    at Writable.write (node:internal/streams/writable:393:10)
    at Stream.ondata (node:internal/streams/legacy:20:31)
    at Stream.emit (node:events:514:28)
    at drain (/Users/smcgivern/Code/forks/pgcat/tests/node/node_modules/through/index.js:36:16)
Emitted 'error' event on Socket instance at:
    at Socket.onerror (node:internal/streams/legacy:62:12)
    at Socket.emit (node:events:514:28)
    at emitErrorNT (node:internal/streams/destroy:151:8)
    at emitErrorCloseNT (node:internal/streams/destroy:116:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -32,
  code: 'EPIPE',
  syscall: 'write'
}

Node.js v20.9.0

real	0m0.209s
user	0m0.040s
sys	0m0.020s

If I connect to PgCat running this commit (having merged in main), it simply never terminates or logs any items, even with a single row. Given that this works fine with PgBouncer, I suspect there's something not quite right here.

@pchaseh commented Aug 15, 2024

> [quotes @smcgivern's comment above in full]

It works for me somewhat in session mode (instead of transaction mode). However, if I run a synchronous query after performing async queries, I get DEALLOCATE ALL instead of my database rows, and afterwards the response randomly alternates between that and SET/RESET. As for log output, I only see this:

2024-08-15T18:29:40.127180895Z 2024-08-15T18:29:40.126615Z  INFO ThreadId(05) pgcat::server::cleanup: Server returned with session state altered, discarding state (SET: true, PREPARE: true) for application pgcat
2024-08-15T18:29:52.105634825Z 2024-08-15T18:29:52.105240Z  INFO ThreadId(07) pgcat::server::cleanup: Server returned with session state altered, discarding state (SET: true, PREPARE: false) for application pgcat
2024-08-15T18:29:56.960420756Z 2024-08-15T18:29:56.959998Z  INFO ThreadId(07) pgcat::server::cleanup: Server returned with session state altered, discarding state (SET: true, PREPARE: false) for application pgcat
2024-08-15T18:30:19.725779443Z 2024-08-15T18:30:19.725516Z  INFO ThreadId(07) pgcat::server::cleanup: Server returned with session state altered, discarding state (SET: true, PREPARE: false) for application pgcat
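The log lines above come from PgCat's server cleanup path: a connection returned with altered session state gets reset before reuse. As a rough illustration of how the (SET, PREPARE) flags in those lines could map to reset statements (the function name and exact statements are assumptions for illustration, not PgCat's actual code):

```javascript
// Hypothetical sketch: map the (SET, PREPARE) flags from the cleanup
// log lines above to reset statements a pooler might issue. This is an
// illustration of the idea, not PgCat's actual implementation.
function cleanupStatements({ set, prepare }) {
  const stmts = [];
  if (set) stmts.push('RESET ALL');          // undo SET-style session changes
  if (prepare) stmts.push('DEALLOCATE ALL'); // drop prepared statements
  return stmts;
}

// Matches the first log line's flags: (SET: true, PREPARE: true)
console.log(cleanupStatements({ set: true, prepare: true }).join('; '));
```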

@NoamLiber-Reco commented

Is there any plan to merge this PR one day?

@ericalves commented

Hi!

I have already migrated two Node.js projects, about 90% of my databases' traffic, and it's working perfectly!
This problem occurred in only one application, preventing the adoption of PgCat across all of our databases.
Please, this feature is very important!

Development

Successfully merging this pull request may close these issues.

Pgcat is not working with asyncpg python client and I am getting this error in logs ERROR pgcat::client] Unexpected code: H
9 participants