Commit f9357cd

runtime: check for updated arena_end overflow
Currently, if an allocation is large enough that arena_end + size
overflows (which is not hard to do on 32-bit), we go ahead and call
sysReserve with the impossible base and length and depend on this to
either directly fail because the kernel can't possibly fulfill the
requested mapping (causing mheap.sysAlloc to return nil) or to succeed
with a mapping at some other address, which will then be rejected as
outside the arena. To make this less subtle, less dependent on the
kernel getting all of this right, and to eliminate the hopeless system
call, add an explicit overflow check.

Updates #13143. The real issue has been fixed by 0de59c2, but this is
a belt-and-suspenders improvement on top of that. It was uncovered by
my symbolic modeling of that bug.

Change-Id: I85fa868a33286fdcc23cdd7cdf86b19abf1cb2d1
Reviewed-on: https://go-review.googlesource.com/16961
Run-TryBot: Austin Clements <[email protected]>
TryBot-Result: Gobot Gobot <[email protected]>
Reviewed-by: Ian Lance Taylor <[email protected]>
1 parent fbe855b commit f9357cd

File tree

1 file changed: +2 -2 lines changed


src/runtime/malloc.go

Lines changed: 2 additions & 2 deletions
@@ -392,8 +392,8 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 		// We are in 32-bit mode, maybe we didn't use all possible address space yet.
 		// Reserve some more space.
 		p_size := round(n+_PageSize, 256<<20)
-		new_end := h.arena_end + p_size
-		if new_end <= h.arena_start+_MaxArena32 {
+		new_end := h.arena_end + p_size // Careful: can overflow
+		if h.arena_end <= new_end && new_end <= h.arena_start+_MaxArena32 {
 			// TODO: It would be bad if part of the arena
 			// is reserved and part is not.
 			var reserved bool
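The added `h.arena_end <= new_end` guard relies on the standard unsigned-overflow idiom: unsigned addition wraps modulo 2^N, so the sum is smaller than either operand exactly when the addition overflowed. A minimal sketch of that idiom, using `uint32` to stand in for a 32-bit `uintptr` (the helper name `addOverflows` is hypothetical, not from the runtime):

```go
package main

import "fmt"

// addOverflows reports whether base+size wraps around a 32-bit address
// space. After an unsigned addition, the sum is smaller than either
// operand if and only if the addition overflowed -- the same check the
// patch applies with "h.arena_end <= new_end".
func addOverflows(base, size uint32) (sum uint32, overflowed bool) {
	sum = base + size // wraps modulo 2^32 on overflow
	return sum, sum < base
}

func main() {
	// Comfortably in range: reserving 256 MiB above a low base is fine.
	s, ov := addOverflows(1<<20, 256<<20)
	fmt.Println(s, ov) // 269484032 false

	// Near the top of the 32-bit range, the same reservation wraps,
	// so the guard rejects it before any system call is made.
	_, ov = addOverflows(^uint32(0)-10, 256<<20)
	fmt.Println(ov) // true
}
```

Without the guard, the wrapped sum would look like a small, plausible address and be handed to `sysReserve`, which is exactly the subtle dependence on kernel behavior the commit removes.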
