- asyncpg version: 0.9.0
- PostgreSQL version: 9.6
- Python version: 3.6
- Platform: Heroku, ubuntu locally
- Do you use pgbouncer?: no
- Did you install asyncpg with pip?: yes
- If you built asyncpg locally, which version of Cython did you use?: -
- Can the issue be reproduced under both asyncio and uvloop?: yes
In a websocket handler with aiohttp (2.0.4) I have (simplified for brevity):
```python
ws = WebSocketResponse()
await ws.prepare(request)
async with request.app['pg'].acquire(timeout=5) as conn:
    await conn.add_listener('events', send_event_)
    ...  # do stuff, including executing queries
    async for msg in ws:
        ...  # do stuff with msg, including executing queries
return ws
```
This is causing problems: if a browser is closed without calling `websocket.close()` in JS, the handler is terminated with a `CancelledError`; `__aexit__` also gets cancelled and the connection is never returned to the pool.
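To show why the release is lost, the `async with` above is roughly equivalent to the following (a simplified desugaring, not real asyncpg or aiohttp internals): the release done in `__aexit__` is awaited by the same handler task that aiohttp cancels, so it can be cancelled along with everything else.

```python
# roughly what the async-with above expands to (simplified sketch)
mgr = request.app['pg'].acquire(timeout=5)
conn = await mgr.__aenter__()
try:
    await conn.add_listener('events', send_event_)
    async for msg in ws:
        ...  # do stuff with msg, including executing queries
finally:
    # this await runs inside the handler task, so when aiohttp cancels the
    # handler it can be cancelled too and the connection is never released
    await mgr.__aexit__(None, None, None)
```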
If I use
```python
ws = WebSocketResponse()
await ws.prepare(request)
conn = await request.app['pg']._acquire(timeout=5)
try:
    await conn.add_listener('events', send_event_)
    ...  # do stuff, including executing queries
    async for msg in ws:
        ...  # do stuff with msg, including executing queries
finally:
    async def release(app, conn):
        await app['pg'].release(conn)
    request.app.loop.create_task(release(request.app, conn))
```
The connection is correctly released and the pool doesn't run out of connections.
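Another variant I've been considering (just a sketch, not something asyncpg documents) is to shield the release rather than pushing it onto a fire-and-forget task, so a cancellation of the handler can't interrupt it. The names here mirror the snippets above (`send_event_`, the `'pg'` app key, the private `_acquire`):

```python
import asyncio

from aiohttp.web import WebSocketResponse


async def handler(request):
    ws = WebSocketResponse()
    await ws.prepare(request)
    pool = request.app['pg']
    conn = await pool._acquire(timeout=5)  # private API, as above
    try:
        await conn.add_listener('events', send_event_)  # listener callback as above
        async for msg in ws:
            ...  # do stuff with msg, including executing queries
    finally:
        # shield() lets the release keep running even if this await is
        # cancelled while the handler is being torn down
        await asyncio.shield(pool.release(conn))
    return ws
```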
(actual code is here if that helps.)
I'm not sure whether asyncpg can resolve this or whether it's a problem with aiohttp.
Would it be possible for the pool manager to somehow recover connections which weren't released? Or could there be an alternative `acquire`, used as `with await pool.acquire() as conn:`, so that `__exit__` would not be a coroutine?
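To illustrate that second idea, something along these lines (entirely hypothetical, not asyncpg API; `acquire_sync_release` and `_SyncReleaseContext` are made-up names, and it reuses the same private `_acquire` as the workaround above):

```python
import asyncio


class _SyncReleaseContext:
    """Hypothetical wrapper: __exit__ is a plain method, so it cannot be
    cancelled; it just schedules the release as its own task."""

    def __init__(self, pool, conn, loop):
        self._pool = pool
        self._conn = conn
        self._loop = loop

    def __enter__(self):
        return self._conn

    def __exit__(self, *exc):
        # release runs in its own task, independent of the (cancelled) handler
        self._loop.create_task(self._pool.release(self._conn))
        return False


async def acquire_sync_release(pool, timeout, loop=None):
    # made-up helper, for illustration only
    loop = loop or asyncio.get_event_loop()
    conn = await pool._acquire(timeout)
    return _SyncReleaseContext(pool, conn, loop)


# usage would then match the shape in the question:
# with await acquire_sync_release(request.app['pg'], 5) as conn:
#     ...
```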