Cypherpunks repositories - gostls13.git/commit
runtime: allocate physical-page-aligned memory differently
author Michael Anthony Knyszek <mknyszek@google.com>
Fri, 20 May 2022 16:30:11 +0000 (16:30 +0000)
committer Michael Knyszek <mknyszek@google.com>
Fri, 20 May 2022 21:54:20 +0000 (21:54 +0000)
commit b58067013eaa2f2bf0dc24f4d848e10bb758b6bd
tree 6870a458dedd52c3126c5047dec0bbd11fbb5fcd
parent 7ec6ef432a85a390365f2daed788f0d14c830c73
runtime: allocate physical-page-aligned memory differently

Currently, physical-page-aligned allocations for stacks (where the
physical page size is greater than the runtime page size) first
over-allocate some memory, then free the unaligned portions back to the
heap.
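
For illustration, the arithmetic behind that scheme looks like the
following self-contained sketch. The constants and names here are
hypothetical stand-ins, not the runtime's own:

	package main

	import "fmt"

	const (
		pageSize     = 8 << 10  // hypothetical runtime page size (8 KiB)
		physPageSize = 64 << 10 // hypothetical physical page size (64 KiB)
	)

	// alignUp rounds p up to the nearest multiple of align (a power of two).
	func alignUp(p, align uintptr) uintptr {
		return (p + align - 1) &^ (align - 1)
	}

	func main() {
		// Over-allocate by one physical page so an aligned sub-range is
		// guaranteed to exist somewhere inside the block.
		const npages = 4
		var base uintptr = 0x1000a000 // an unaligned base the heap handed back
		var total uintptr = npages*pageSize + physPageSize

		aligned := alignUp(base, physPageSize)
		head := aligned - base                               // freed back to the heap
		tail := (base + total) - (aligned + npages*pageSize) // freed back to the heap

		fmt.Printf("aligned base %#x, head %d bytes, tail %d bytes\n",
			aligned, head, tail)
	}

The head and tail always sum to one physical page; it is exactly those
freed-back edges that cause the accounting problem described next.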

However, because allocating via h.pages.alloc causes scavenged bits to
get cleared, the memory freed back this way must be accounted for
correctly in heapFree and heapReleased. Currently it is not, leading to
throws at runtime.

Trying to get that accounting right is complicated, because information
about exactly which pages were scavenged needs to get plumbed up.
Instead, find the oversized region first, and then only allocate the
aligned part. This avoids any accounting issues.
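
The following self-contained sketch illustrates that shape on a toy
page allocator. The toyPages type and its find and allocRange methods
are hypothetical stand-ins that only mirror the roles of the runtime
page-allocator methods mentioned above:

	package main

	import "fmt"

	const (
		pageSize     = 8 << 10  // hypothetical runtime page size
		physPageSize = 64 << 10 // hypothetical physical page size
		heapPages    = 64       // toy heap size, in runtime pages
	)

	// toyPages is a stand-in for the page allocator: one bool per page.
	type toyPages struct {
		used [heapPages]bool
	}

	// find locates the first run of npages free pages and returns its base
	// address, without marking anything allocated.
	func (p *toyPages) find(npages int) (base uintptr, ok bool) {
		run := 0
		for i := 0; i < heapPages; i++ {
			if p.used[i] {
				run = 0
				continue
			}
			run++
			if run == npages {
				return uintptr(i-npages+1) * pageSize, true
			}
		}
		return 0, false
	}

	// allocRange marks exactly [base, base+npages*pageSize) allocated.
	func (p *toyPages) allocRange(base uintptr, npages int) {
		start := int(base / pageSize)
		for i := start; i < start+npages; i++ {
			p.used[i] = true
		}
	}

	func alignUp(p, align uintptr) uintptr { return (p + align - 1) &^ (align - 1) }

	func main() {
		var p toyPages
		p.used[0] = true // leave the free space unaligned, like real fragmentation

		// New scheme: find a region big enough to contain an aligned block,
		// then allocate only the aligned part. Nothing is allocated and then
		// freed back, so no scavenge accounting is disturbed.
		const npages = 4
		extraPages := physPageSize / pageSize
		base, ok := p.find(npages + extraPages)
		if !ok {
			panic("toy heap too small")
		}
		base = alignUp(base, physPageSize)
		p.allocRange(base, npages)
		fmt.Printf("allocated %d pages at aligned base %#x\n", npages, base)
	}

Because a free run of npages+extraPages pages is one physical page
larger than needed, an aligned npages block is guaranteed to fit inside
it, so allocRange never touches pages outside the run that find located.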

However, this does come with some performance cost: we don't update
searchAddr (which is safe; it just means the next allocation may have
to search harder), and for simplicity we skip the fast path that
h.pages.alloc has.

Fixes #52682.

Change-Id: Iefa68317584d73b187634979d730eb30db770bb6
Reviewed-on: https://go-review.googlesource.com/c/go/+/407502
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
src/runtime/mheap.go