
bpo-30768: Recompute timeout on interrupted lock #4103

Merged (7 commits) on Oct 24, 2017

Conversation

@vstinner (Member) commented Oct 24, 2017

Fix the pthread implementation of PyThread_acquire_lock_timed() when called with timeout > 0 and intr_flag=0: recompute the timeout if sem_timedwait() is interrupted by a signal (EINTR).

See also PEP 475.

https://bugs.python.org/issue30768

if (microseconds > 0)

if (!intr_flag) {
_PyTime_t timeout = _PyTime_FromNanoseconds(microseconds * 1000);
Member:

What if microseconds * 1000 overflows?

Member Author:

I added _PyTime_FromMicroseconds() to detect overflow in "us * 1000", but I chose not to detect overflow in "_PyTime_GetMonotonicClock() + timeout".
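The kind of check being discussed can be sketched as follows; mul_check_overflow() here is a hypothetical stand-in for CPython's _PyTime_check_mul_overflow(), assuming a positive multiplier (which holds for US_TO_NS == 1000):

```c
#include <stdint.h>

#define US_TO_NS 1000

/* Sketch of a multiply-overflow check for signed 64-bit values.
   Assumes b > 0. Returns nonzero if a * b would overflow int64_t. */
static int mul_check_overflow(int64_t a, int64_t b)
{
    return a > INT64_MAX / b || a < INT64_MIN / b;
}
```

The division-based comparison avoids performing the potentially overflowing multiplication itself, which would be undefined behaviour for signed integers in C.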

Python/pytime.c Outdated
_PyTime_t t = (_PyTime_t)us;

if (_PyTime_check_mul_overflow(t, US_TO_NS)) {
_PyTime_overflow();
Member:

I don't think it's safe to raise an exception from a PyThread function (which can be called without the GIL).

Member Author:

Oops, you're right. I forgot that.

Python/pytime.c Outdated

t += (_PyTime_t)tv->tv_usec * US_TO_NS;
if (_PyTime_FromMicroseconds(&us, tv->tv_usec) < 0) {
if (!raise) {
Member:

Isn't it too late? You've already raised the exception.

Member Author:

I removed _PyTime_FromMicroseconds(); it was a bad idea.

@vstinner (Member Author):
I read the code again. The PyThread_acquire_lock_timed() doc is explicit about the timeout limit: "microseconds must be less than PY_TIMEOUT_MAX. Behaviour otherwise is undefined."

I removed _PyTime_FromMicroseconds(); it was a bad idea. Instead, I adjusted PY_TIMEOUT_MAX.

By the way, the main consumer of PyThread_acquire_lock_timed() is threading.Lock.acquire(), and that function is careful about integer overflow: it already stores the timeout internally as a _PyTime_t and explicitly ensures that "microseconds <= PY_TIMEOUT_MAX".
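A minimal sketch of that kind of bound check, with hypothetical names and simplified conversion (the real check lives in the _thread module and raises OverflowError):

```c
#include <stdint.h>

typedef int64_t _PyTime_t;

/* Largest timeout, in microseconds, whose nanosecond value still
   fits in a signed 64-bit integer: (2**63 - 1) / 1000. */
#define PY_TIMEOUT_MAX (INT64_MAX / 1000)

/* Hypothetical, simplified version of the bound check done before
   calling PyThread_acquire_lock_timed(): convert a timeout in seconds
   to microseconds and reject anything above PY_TIMEOUT_MAX, rather
   than letting a later "us * 1000" multiplication overflow.
   Returns 0 on success, -1 if the timeout is too large. */
static int timeout_to_us(double timeout_s, _PyTime_t *us)
{
    double scaled = timeout_s * 1e6;
    if (scaled > (double)PY_TIMEOUT_MAX)
        return -1;     /* CPython raises OverflowError in this case */
    *us = (_PyTime_t)scaled;
    return 0;
}
```

Rejecting oversized timeouts at this boundary means the lower-level lock code never needs to worry about the multiplication overflowing.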

@vstinner (Member Author):

My patch doesn't change the value of threading.TIMEOUT_MAX; it's still larger than 292 years :-)

>>> import threading
>>> threading.TIMEOUT_MAX
9223372036.0
>>> (2**63-1) // 10**9
9223372036
>>> threading.TIMEOUT_MAX / (3600 * 24 * 365.25) # years
292.27102301822697

If you ignore the PyThread_acquire_lock_timed() doc and pass a timeout larger than 292 years, you get an overflow. Sorry for you. But why would you use such a crazy timeout? :-)

The pthread implementation of PyThread_acquire_lock() now fails with a fatal error if the timeout is larger than PY_TIMEOUT_MAX, as done in the Windows implementation.

The check prevents any risk of overflow in PyThread_acquire_lock().

Also add the PY_DWORD_MAX constant.
}
else if (dt > 0) {
_PyTime_t realtime_deadline = _PyTime_GetSystemClock() + dt;
if (_PyTime_AsTimespec(realtime_deadline, &ts) < 0) {
Member Author:

Oh, this function raises an exception on overflow. This bug is now fixed in the new commit.

Member:

Which new commit? :-)

Member Author:

The 4th commit of this PR, commit 7c428b7, which added the following code at the top of the function:

+    if (microseconds > PY_TIMEOUT_MAX) {
+        Py_FatalError("Timeout larger than PY_TIMEOUT_MAX");
+    }

Member:

Ok, thank you for adding the comments :-)

@pitrou (Member) commented Oct 24, 2017

> My patch doesn't change the value of threading.TIMEOUT_MAX, it's still a value larger than 292 years :-)

I see, thank you. I guess it's large enough :-)

# define PY_TIMEOUT_MAX (PY_LLONG_MAX / 1000)
#elif defined (NT_THREADS)
/* In the NT API, the timeout is a DWORD and is expressed in milliseconds */
# if 0xFFFFFFFFLL * 1000 < PY_LLONG_MAX
Member:

Which integer width does the C preprocessor use? :-) Can 0xFFFFFFFFLL * 1000 be truncated?

Member Author:

I didn't write this code, I only moved it.

The "LL" suffix means "long long", which should be at least a 64-bit signed integer, no?

Are you suggesting replacing the code with "0xFFFFFFFFLL < PY_LLONG_MAX / 1000" to prevent any risk of integer overflow?

Member:

Ah, ok. No, nevermind.

@vstinner (Member Author):

Oh, I forgot to regenerate the generated files using "make clinic" when I added the PY_DWORD_MAX constant. It's now fixed. I also merged master into this PR.

@pitrou (Member) commented Oct 24, 2017

Hmm... I think this PR looks ok now.

@vstinner (Member Author):

I tested the PR manually using the files attached to the issue: "Apply PR 4103, apply the attached test.patch, recompile Python, and run interrupted_lock.py to test the PR." It works well. Result:

haypo@selma$ ./python interrupted_lock.py 
acquire(timeout=1.0) took 1.073 seconds and got 105 signals

I modified interrupted_lock.py with:

def burn_cpu():
    for i in range(10**4 * 3):
        pass
