perf(profiling): improve scaling of memory profiler for large heaps #13317


Merged
nsrip-dd merged 13 commits into main from nick.ripley/memalloc-heap-map on May 7, 2025

Conversation

@nsrip-dd nsrip-dd commented May 1, 2025

The heap profiler component of memalloc uses a plain array to track
allocations. It performs a linear scan of this array on every free to
see if the freed allocation is tracked. This means the cost of freeing
an allocation increases substantially for programs with large heaps.

Switch to a hash table for tracking allocations. This should have roughly
constant overhead even for larger heaps, where we track more allocations.

This PR uses the C implementation of the Abseil SwissTable
from https://github.com/google/cwisstable. This hash table is designed
for memory efficiency, both in terms of memory used (1 byte per entry of
metadata) and in terms of access (cache-friendly layout). The table
is designed in particular for fast lookup and insertion. We do a lookup
for every single free, so it's important that it's fast.
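
To make the data-structure change concrete, here is a minimal, self-contained sketch in C of the idea: an open-addressing hash table keyed by allocation address, so the lookup done on every free stays roughly constant-time instead of scanning an array of tracked allocations. This is only an illustration, not the PR's code; the PR uses cwisstable, and the names below (heap_map_track, heap_map_untrack, the pointer-mixing hash) are invented for the example.

#include <stddef.h>
#include <stdint.h>

#define HEAP_MAP_CAP (1u << 17)         /* power of two, ~2x the existing 2^16 entry cap,
                                           so an empty slot always exists */
#define TOMBSTONE ((void*)(uintptr_t)-1)

typedef struct { void* key; void* traceback; } heap_map_slot_t;
static heap_map_slot_t heap_map[HEAP_MAP_CAP];

static size_t heap_map_slot(void* key) {
    /* Cheap pointer mix for the example; the real table uses a proper hash. */
    uint64_t h = (uint64_t)(uintptr_t)key;
    h ^= h >> 33;
    h *= 0xff51afd7ed558ccdULL;
    h ^= h >> 33;
    return (size_t)(h & (HEAP_MAP_CAP - 1));
}

/* Track a sampled allocation; overwrites the traceback if ptr is already present. */
void heap_map_track(void* ptr, void* traceback) {
    size_t i = heap_map_slot(ptr);
    size_t reuse = HEAP_MAP_CAP;        /* first tombstone seen along the probe chain */
    while (heap_map[i].key != NULL) {
        if (heap_map[i].key == ptr) {
            heap_map[i].traceback = traceback;
            return;
        }
        if (heap_map[i].key == TOMBSTONE && reuse == HEAP_MAP_CAP)
            reuse = i;
        i = (i + 1) & (HEAP_MAP_CAP - 1);
    }
    if (reuse == HEAP_MAP_CAP)
        reuse = i;                      /* no tombstone found: use the empty slot */
    heap_map[reuse].key = ptr;
    heap_map[reuse].traceback = traceback;
}

/* Called on every free: expected O(1), independent of how many allocations are tracked. */
void* heap_map_untrack(void* ptr) {
    size_t i = heap_map_slot(ptr);
    while (heap_map[i].key != NULL) {
        if (heap_map[i].key == ptr) {
            void* traceback = heap_map[i].traceback;
            heap_map[i].key = TOMBSTONE;    /* tombstone keeps later probe chains intact */
            heap_map[i].traceback = NULL;
            return traceback;
        }
        i = (i + 1) & (HEAP_MAP_CAP - 1);
    }
    return NULL;                        /* not a sampled allocation */
}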

This PR previously tried to use a C++ std::unordered_map, but it
proved far too difficult to integrate with the existing C memalloc
implementation.

This PR includes the single cwisstable.h header with the following
modifications, which can be found in the header by searching for
BEGIN MODIFICATION:

  • Fix CWISS_Mul128: for 64-bit Windows we need a different intrinsic,
    and for 32-bit systems we can't use it at all
  • Fix a multiple declaration and invalid C++ syntax in CWISS_LeadingZeroes64
    for 32-bit Windows

The PR also supplies a 32-bit compatible hash function since the
one in cwisstable.h couldn't be easily made to work. We use xxHash
instead. Note that cwisstable.h requires the hash function to return
a size_t, which is 32 bits on 32-bit platforms, but the SwissTable design
expects 64-bit hashes. I don't know if this is a huge problem in practice
or if people even use our profiler on 32-bit systems, but it's something to
watch out for.
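
As a rough sketch of what a 32-bit-friendly hash over the pointer key might look like (assuming the standard xxHash one-shot API, XXH32/XXH64; this is illustrative, not necessarily how the PR wires the hash into cwisstable):

#include <stddef.h>
#include <stdint.h>
#include "xxhash.h"

/* Hash the bytes of the pointer value itself. Returns size_t as cwisstable
 * requires; on 32-bit platforms that is only a 32-bit hash, which is weaker
 * than the 64-bit hashes the SwissTable design assumes. */
static size_t hash_allocation_address(const void* addr) {
#if UINTPTR_MAX > 0xFFFFFFFFu
    return (size_t)XXH64(&addr, sizeof(addr), 0);
#else
    return (size_t)XXH32(&addr, sizeof(addr), 0);
#endif
}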

Note that the cwisstable.h hash table implementation basically never
gets rid of backing memory unless we completely clear the table. This
is not a big deal in practice. Each entry in the table is an 8 byte void* key
and an 8 byte traceback_t*, plus 1 byte of control metadata. If we assume
a target load factor of 50%, and we keep the existing 2^16 element cap,
then a completely full table is only about 2MiB. Most of the heap profiler
memory usage comes from the tracebacks themselves, which we do
free when they're removed from the profiler structures.
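
A quick back-of-the-envelope check of that bound, using the numbers above (the ~2x slot count at a 50% load factor and the 17 bytes per slot are assumptions about the table layout, not measured values):

#include <stdio.h>

int main(void) {
    const double entries = 1 << 16;           /* existing element cap */
    const double slots = entries / 0.5;       /* ~50% target load factor => ~2x slots */
    const double bytes_per_slot = 8 + 8 + 1;  /* void* key + traceback_t* value + 1 control byte */
    printf("%.2f MiB\n", slots * bytes_per_slot / (1024.0 * 1024.0));  /* prints about 2.12 MiB */
    return 0;
}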

This PR also fixes a potential memory leak. Since we use try-locks, it is theoretically
possible to fail to un-track an allocation if we can't acquire the lock. When this
happens, the previous implementation would just leak an entry. With the hash
table, if we find an existing entry for a given address, we can free the old traceback
and replace it with the new one. The real fix is to improve how we handle locking,
but this PR improves the situation.
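
A hedged sketch of that mitigation (the heap_map_* and traceback_free names are illustrative, not the actual functions added by the PR): when tracking a newly sampled address, any stale entry left behind by a failed try-lock is cleaned up rather than leaked.

#include <stddef.h>

typedef struct traceback_t traceback_t;
typedef struct heap_map_t heap_map_t;

/* Illustrative declarations only; the PR's actual wrapper (_memalloc_heap_map.h) will differ. */
traceback_t* heap_map_find(heap_map_t* map, void* ptr);
void heap_map_insert(heap_map_t* map, void* ptr, traceback_t* tb);
void traceback_free(traceback_t* tb);

void heap_track_sampled(heap_map_t* map, void* ptr, traceback_t* tb) {
    traceback_t* stale = heap_map_find(map, ptr);
    if (stale != NULL) {
        /* An old entry for this address means we failed to un-track it when it
         * was freed; free the stale traceback instead of leaking it. */
        traceback_free(stale);
    }
    heap_map_insert(map, ptr, tb);  /* insert or overwrite the entry for ptr */
}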

I've tested this locally and see very good performance even for larger heaps,
and this has also been tested in #13307.

Fixes #13307

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy

The heap profiler component of memalloc uses a plain array to track
allocations. It performs a linear scan of this array on every free to
see if the freed allocation is tracked. This means the cost of freeing
an allocation increases substantially for programs with large heaps.

Switch to an unordered map for tracking allocations. This should have
roughly constant overhead even for larger heaps, where we track more
allocations.

Rather than find or implement a map in C, use C++. We can wrap a
std::unordered_map in a thin compatibility layer and use it from the
rest of memalloc without having to completely refactor memalloc.

Making this work requires some awkward modifications to setup.py to
build memalloc. Basically, the Extension class can have sources from
multiple languages, but it isn't possible to pass different flags per
language, such as the language standard. Instead, we can use a method
borrowed from a Stack Overflow answer and use the build_clib command,
extended to support compiler arguments. This lets us build a static
library, which we can link from memalloc.

Note that with the way we build the library, it'll be passed to the link
step for other C extensions as well. This should be harmless, though.
The other extensions will only include symbols they actually use, which
in practice will be none. We'll only have trouble if there are
duplicated definitions, but the functions in the map library here are
all "namespaced".

Note also that we might get slightly worse performance due to the extra
cost of map access for very small heaps, compared to using a simple
array.
TODO - demonstrate the difference in performance with this change for
large and small heaps.

Fixes #13307

github-actions bot commented May 1, 2025

CODEOWNERS have been resolved as:

ddtrace/profiling/collector/_memalloc_heap_map.c                        @DataDog/profiling-python
ddtrace/profiling/collector/_memalloc_heap_map.h                        @DataDog/profiling-python
ddtrace/profiling/collector/vendor/cwisstable.h                         @DataDog/profiling-python
releasenotes/notes/profiling-memalloc-faster-heap-map-470b7f3980135e8a.yaml  @DataDog/apm-python
ddtrace/profiling/collector/_memalloc_heap.c                            @DataDog/profiling-python
setup.py                                                                @DataDog/python-guild


github-actions bot commented May 1, 2025

Bootstrap import analysis

Comparison of import times between this PR and base.

Summary

The average import time from this PR is: 234 ± 4 ms.

The average import time from base is: 240 ± 3 ms.

The import time difference between this PR and base is: -5.5 ± 0.2 ms.

Import time breakdown

The following import paths have shrunk:

ddtrace.auto 2.430 ms (1.04%)
ddtrace.bootstrap.sitecustomize 1.673 ms (0.71%)
ddtrace.bootstrap.preload 1.673 ms (0.71%)
ddtrace.internal.products 1.673 ms (0.71%)
ddtrace.internal.remoteconfig.client 0.732 ms (0.31%)
ddtrace 0.757 ms (0.32%)
ddtrace._logger 0.081 ms (0.03%)
logging 0.081 ms (0.03%)
traceback 0.081 ms (0.03%)
contextlib 0.081 ms (0.03%)


pr-commenter bot commented May 1, 2025

Benchmarks

Benchmark execution time: 2025-05-06 21:51:05

Comparing candidate commit c161949 in PR branch nick.ripley/memalloc-heap-map with baseline commit c5d75cd in branch main.

Found 0 performance improvements and 5 performance regressions! Performance is the same for 502 metrics, 5 unstable metrics.

scenario:iast_aspects-ospathsplit_aspect

  • 🟥 execution_time [+807.787ns; +1006.961ns] or [+17.020%; +21.217%]

scenario:iast_aspects-split_aspect

  • 🟥 execution_time [+104.422ns; +132.928ns] or [+7.005%; +8.917%]

scenario:iast_aspects-upper_aspect

  • 🟥 execution_time [+223.227ns; +257.671ns] or [+9.802%; +11.314%]

scenario:otelspan-start

  • 🟥 execution_time [+2.980ms; +5.070ms] or [+7.139%; +12.144%]

scenario:telemetryaddmetric-1-distribution-metric-1-times

  • 🟥 execution_time [+290.617ns; +355.229ns] or [+9.745%; +11.911%]

@taegyunkim (Contributor) commented:

    import django
    from django.test import Client

    django.setup()
    client = Client()

    def _(loops):
        for _ in range(loops):
            client.get(self.path)

Benchmark results look quite promising, given that the django simple scenario is as simple as the snippet above.

nsrip-dd added 2 commits May 2, 2025 09:20
To get memalloc_map to build in all our setups. Some setups in
multiple_os_tests fail with errors finding `llvm-ar`. This might be due
to how the Python interpreter there was built, which affects what the AR
value is from sysconfig. Just set it to plain `ar`. This is desperate...

nsrip-dd commented May 2, 2025

Currently investigating broken tests, failing with this error: ImportError: /go/src/github.com/DataDog/apm-reliability/dd-trace-py/ddtrace/profiling/collector/_memalloc.cpython-313-x86_64-linux-gnu.so: undefined symbol: __gxx_personality_v0. Something wrong with how we're linking the C++ library, I'm guessing...

nsrip-dd added 3 commits May 5, 2025 19:59
static_cast isn't a thing in C. Kinda worries me since that means this
has 32-bit Windows code that clearly hasn't been tested.
The cwisstable maps basically never shrink unless we completely clear
them. That's probably okay though since even for a very very large heap
the table itself only takes 1-2MiB. Most of the heap profiler memory use
comes from tracebacks, and we do free those when we're done with them.
@nsrip-dd nsrip-dd marked this pull request as ready for review May 6, 2025 19:26
@nsrip-dd nsrip-dd requested review from a team as code owners May 6, 2025 19:26
@nsrip-dd nsrip-dd requested review from avara1986 and quinna-h May 6, 2025 19:26
If we fail to acquire the heap lock when untracking an allocation, we
won't remove it. The next time we sample an allocation with the same
address, it'll collide with the existing entry. When this happens, free
the old traceback, which corresponds to a freed allocation, and replace
it with the new one.
@nsrip-dd nsrip-dd merged commit bb24ee3 into main May 7, 2025
723 of 726 checks passed
@nsrip-dd nsrip-dd deleted the nick.ripley/memalloc-heap-map branch May 7, 2025 16:21

github-actions bot commented May 7, 2025

The backport to 2.21 failed:

The process '/usr/bin/git' failed with exit code 1

To backport manually, run these commands in your terminal:

# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add .worktrees/backport-2.21 2.21
# Navigate to the new working tree
cd .worktrees/backport-2.21
# Create a new branch
git switch --create backport-13317-to-2.21
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 bb24ee310b9728614a10fa3f435e68a3e1f82cd0
# Push it to GitHub
git push --set-upstream origin backport-13317-to-2.21
# Go back to the original working tree
cd ../..
# Delete the working tree
git worktree remove .worktrees/backport-2.21

Then, create a pull request where the base branch is 2.21 and the compare/head branch is backport-13317-to-2.21.

nsrip-dd added a commit that referenced this pull request May 7, 2025
…backport 2.21] (#13317)

taegyunkim pushed a commit that referenced this pull request May 12, 2025
…backport 2.21] (#13317) (#13350)

Backports #13317 to 2.21, including the fix from #13353

nsrip-dd added a commit that referenced this pull request May 14, 2025
We added locking to memalloc, the memory profiler, in #11460 in order to
address crashes. These locks made the crashes go away, but significantly
increased the baseline overhead of the profiler and introduced subtle
bugs. The locks we added turned out to be fundamentally incompatible
with the global interpreter lock (GIL), at least with the implementation
from #11460. This PR refactors the profiler to use the GIL exclusively
for locking.

First, we should acknowledge no-GIL and subinterpreters. As of right
now, our module does not support either. A module has to explicitly
opt-in to support either, so there is no risk of those modes being
enabled under our feet. Supporting either mode is likely a repo-wide
project. For now, we can assume the GIL exists.

This work was motivated by overhead. We currently acquire and release
locks in every memory allocation and free. Even when the locks aren't
contended, allocations and frees are very frequent, and the extra work
adds up. We add about ~8x overhead to the baseline cost of allocation
just with our locking, not including the cost of actually sampling an
allocation. We can't get rid of this overhead just by reducing sampling
frequency.

There are a few rules to follow in order to use the GIL correctly for
locking:

1) The GIL is held when a C extension function is called, _except_
   possibly in the raw allocator, which we do not profile
2) The GIL may be released during C Python API calls. Even if it is
   released, though, it will be held again after the call
3) Thus, the GIL creates critical sections only between C Python API
   calls, and the beginning and end of C extension functions. Modifications
   to shared state across those points are not atomic.
4) If we take a lock of our own in C extension code (e.g. a
   pthread_mutex), and the extension code releases the GIL, then the
   program will deadlock due to lock order inversion. We can only safely
   take locks in C extension code when the GIL is released.

The crashes that #11460 addressed were due to breaking the first three
rules. In particular, we could race on accessing the shared scratch
buffer used when collecting tracebacks, which led to double-frees.
See #13185 for more details.

Our mitigation involved using C locks around any access to the shared
profiler state. We nearly broke rule 4 in the process. However, we used
try-locks specifically out of a fear of introducing deadlocks. Try-locks
mean that we attempt to acquire the lock, but return a failure if the
lock is already held. This stopped deadlocks, but introduced bugs. For
example:

- If we failed to take the lock when trying to report allocation
  profile events, we'd raise an exception when it was in fact not
  reasonable for that to fail. See #12075.
- memalloc_heap_untrack, which removes tracked allocations, was guarded
  with a try-lock. If we couldn't acquire the lock, we would fail to
  remove a record for an allocation and effectively leak memory.
  See #13317
- We attempted to make our locking fork-safe. The first attempt was
  inefficient; we made it less inefficient but the fix only "worked"
  because of try-locks. See #11848

Try-locks hide concurrency problems and we shouldn't use them. Using our
own locks requires releasing the GIL before acquisition, and then
re-acquiring the GIL. That adds unnecessary overhead. We don't
inherently need to do any off-GIL work. So, we should try to just use
the GIL as long as it is available.

The basic refactor is actually pretty simple. In a nutshell, we
rearrange the memalloc_add_event and memalloc_heap_track functions so
that they make the sampling decision, then take a traceback, then insert
the traceback into the appropriate data structure. Collecting a
traceback can release the GIL, so we make sure that modifying the data
structure happens completely after the traceback is collected. We also
safeguard against the possibility that the profiler was stopped during
sampling, if the GIL was released. This requires a small rearrangement
of memalloc_stop to make sure that the sampling functions don't see
partially-freed profiler data structures.
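
A hedged sketch of that ordering (function and type names below are illustrative, not the exact memalloc symbols): anything that can release the GIL happens before shared state is touched, and the profiler's liveness is re-checked afterwards.

#include <stdbool.h>
#include <stddef.h>

typedef struct traceback_t traceback_t;
typedef struct profiler_state_t profiler_state_t;

/* Illustrative declarations standing in for the real memalloc internals. */
bool should_sample(profiler_state_t* state, size_t size);
traceback_t* collect_traceback(void);               /* may release and re-acquire the GIL */
bool profiler_is_running(profiler_state_t* state);
void heap_map_insert(profiler_state_t* state, void* ptr, traceback_t* tb);
void traceback_free(traceback_t* tb);

void track_allocation(profiler_state_t* state, void* ptr, size_t size) {
    if (!should_sample(state, size))
        return;
    traceback_t* tb = collect_traceback();          /* GIL may be released in here */
    if (tb == NULL)
        return;
    if (!profiler_is_running(state)) {              /* profiler may have stopped while the GIL was released */
        traceback_free(tb);
        return;
    }
    heap_map_insert(state, ptr, tb);                /* shared state modified only with the GIL held */
}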

For testing, I have mainly used the code from test_memealloc_data_race_regression.
I also added a debug mode, enabled by compiling with
MEMALLOC_TESTING_GIL_RELEASE, which releases the GIL at places where it
would be expected. For performance I examined the overhead of profiling
on a basic flask application.
nsrip-dd added a commit that referenced this pull request May 15, 2025
nsrip-dd added a commit that referenced this pull request May 20, 2025
nsrip-dd added a commit that referenced this pull request May 20, 2025
nsrip-dd added a commit that referenced this pull request May 20, 2025
taegyunkim added a commit that referenced this pull request May 23, 2025
This test is now runnable in CI after #13317


nsrip-dd added a commit that referenced this pull request Jun 2, 2025
Successfully merging this pull request may close these issues.

[BUG]: DD_PROFILING_MEMORY_ENABLED=true slows down application significantly