Describe the bug
When using the context manager on a client with buffering enabled, buffering is permanently disabled on that client as soon as the context manager exits.
To Reproduce
import socket

import datadog

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setblocking(False)
sock.bind(("localhost", 0))
host, port = sock.getsockname()

def read_sock():
    # Drain everything currently queued on the UDP socket.
    data = b""
    while True:
        try:
            data += sock.recv(1024)
        except BlockingIOError:
            break
    print(" ", data)

client = datadog.DogStatsd(
    host=host,
    port=port,
    disable_buffering=False,
    max_buffer_len=10000000,
    flush_interval=10000000,
)

print("increment with buffering")
client.increment("foo")
read_sock()

print("flush")
client.flush()
read_sock()

print("increment with context manager")
with client:
    client.increment("foo")
    read_sock()

print("context manager __exit__")
read_sock()

print("increment again")
client.increment("foo")  # Here the metric is sent immediately, but it should be buffered
read_sock()

sock.close()
Outputs:
increment with buffering
b''
flush
b'foo:1|c\n'
increment with context manager
b''
context manager __exit__
b'foo:1|c\n'
increment again
b'foo:1|c\n'
The read after "increment again" should print b'', because the metric should still go to the buffer.
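Until this is fixed, a possible workaround is to re-enter buffer mode after the context manager exits. This is an untested sketch; it assumes open_buffer(), the apparent counterpart of the close_buffer() mentioned below, is safe to call directly:

# Untested workaround sketch: re-enable buffering after __exit__ has
# reset _send to _send_to_server. Assumes open_buffer() can be called
# directly, as the context manager's __enter__ presumably does.
with client:
    client.increment("foo")
client.open_buffer()  # restore the buffered sends that close_buffer() disabled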
Expected behavior
The buffering configuration of the client shouldn't be changed by the context manager.
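Concretely, with buffering preserved, the end of the repro should read (expected output, not captured):

increment again
b''

and foo:1|c should only appear on the socket after a later explicit client.flush().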
Screenshots
N/A
Environment and Versions (please complete the following information):
- library version: 0.43.0
- python version: 3.10
Additional context
The bug seems to be caused by close_buffer() in datadog/dogstatsd/base.py (line 522 at commit ee5ac16): this function, which is called when the context manager exits, always sets the _send function to _send_to_server, regardless of the client's buffering configuration.
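For illustration, here is a minimal sketch of the suspected logic and a possible fix. Only _send and _send_to_server come from the report; _disable_buffering and _send_to_buffer are assumed internal names, so treat this as a sketch rather than a verified patch.

# Sketch of the suspected close_buffer() logic and a possible fix.
# _disable_buffering and _send_to_buffer are assumed internal names;
# only _send and _send_to_server are confirmed by the report above.
def close_buffer(self):
    # ... the pending buffer is flushed here (observed in the repro output) ...
    # Bug: the send path is then reset unconditionally, which leaves
    # buffering disabled after the context manager exits:
    self._send = self._send_to_server

def close_buffer_fixed(self):
    # ... flush the pending buffer as before ...
    # Possible fix: restore whichever send function matches the client's
    # configured buffering instead of always using _send_to_server.
    if self._disable_buffering:
        self._send = self._send_to_server
    else:
        self._send = self._send_to_buffer

With a guard like this, exiting the context manager would still flush pending metrics but would keep buffered sends for clients constructed with disable_buffering=False.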