refactor(profiling): remove redundant locks from memalloc
We added locking to memalloc, the memory profiler, in #11460 in order to
address crashes. These locks made the crashes go away, but significantly
increased the baseline overhead of the profiler and introduced subtle
bugs. The locks we added turned out to be fundamentally incompatible
with the global interpreter lock (GIL), at least with the implementation
from #11460. This PR refactors the profiler to use the GIL exclusively
for locking.
First, we should acknowledge no-GIL and subinterpreters. As of right
now, our module supports neither. A module has to explicitly opt
in to support them, so there is no risk of those modes being
enabled under our feet. Supporting either mode is likely a repo-wide
project. For now, we can assume the GIL exists.
This work was motivated by overhead. We currently acquire and release
locks in every memory allocation and free. Even when the locks aren't
contended, allocations and frees are very frequent, and the extra work
adds up. Our locking alone adds roughly 8x overhead to the baseline cost of allocation,
just with our locking, not including the cost of actually sampling an
allocation. We can't get rid of this overhead just by reducing sampling
frequency.
There are a few rules to follow in order to use the GIL correctly for
locking:
1) The GIL is held when a C extension function is called, _except_
possibly in the raw allocator, which we do not profile
2) The GIL may be released during Python C API calls. Even if it is
released, though, it will be held again after the call
3) Thus, the GIL creates critical sections only between Python C API
calls, and the beginning and end of C extension functions. Modifications
to shared state across those points are not atomic.
4) If we take a lock of our own in C extension code (e.g. a
pthread_mutex), and the extension code releases the GIL, then the
program can deadlock due to lock-order inversion. We can only safely
take our own locks in C extension code when the GIL is released.
The crashes that #11460 addressed were due to breaking the first three
rules. In particular, we could race on accessing the shared scratch
buffer used when collecting tracebacks, which led to double-frees.
See #13185 for more details.
Our mitigation involved using C locks around any access to the shared
profiler state, and we nearly broke rule 4 in the process. However, we
used try-locks, specifically out of fear of introducing deadlocks.
With a try-lock, we attempt to acquire the lock but return a failure
if it is already held. This prevented deadlocks but introduced bugs.
For example:
- If we failed to take the lock when trying to report allocation
profile events, we'd raise an exception, even though reporting should
not have been able to fail there. See #12075.
- memalloc_heap_untrack, which removes tracked allocations, was guarded
with a try-lock. If we couldn't acquire the lock, we would fail to
remove a record for an allocation and effectively leak memory.
See #13317
- We attempted to make our locking fork-safe. The first attempt was
inefficient; we made it less inefficient but the fix only "worked"
because of try-locks. See #11848
Try-locks hide concurrency problems and we shouldn't use them. Using our
own locks requires releasing the GIL before acquisition, and then
re-acquiring the GIL. That adds unnecessary overhead. We don't
inherently need to do any off-GIL work, so we should just use
the GIL itself for locking as long as it is available.
The basic refactor is actually pretty simple. In a nutshell, we
rearrange the memalloc_add_event and memalloc_heap_track functions so
that they make the sampling decision, then take a traceback, then insert
the traceback into the appropriate data structure. Collecting a
traceback can release the GIL, so we make sure that modifying the data
structure happens completely after the traceback is collected. We also
safeguard against the possibility that the profiler was stopped during
sampling, if the GIL was released. This requires a small rearrangement
of memalloc_stop to make sure that the sampling functions don't see
partially-freed profiler data structures.
For testing, I have mainly used the code from test_memalloc_data_race_regression.
I also added a debug mode, enabled by compiling with
MEMALLOC_TESTING_GIL_RELEASE, which releases the GIL at places where it
would be expected. For performance I examined the overhead of profiling
on a basic flask application.