In CL 131450043, which raised the limit to 160,
I said I'd raise it to 192 if necessary.
Apparently it is necessary on windows/amd64.
One note for those concerned about the growth:
in the old segmented stack world, we wasted this much
space at the bottom of every stack segment.
In the new contiguous stack world, each goroutine has
only one stack segment, so we only waste this much space
once per goroutine. So even raising the limit further might
still be a net savings.
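The savings argument above can be made concrete with a back-of-the-envelope sketch (the guard value matches this CL; the segment count is illustrative):

```go
package main

import "fmt"

// guardWaste returns the total guard space reserved by a stack made of
// `segments` segments, each of which reserves `guard` bytes at its bottom.
func guardWaste(segments, guard int) int {
	return segments * guard
}

func main() {
	const stackGuard = 256 // value from this CL; illustrative

	// Old segmented stacks: every segment reserved the guard space,
	// so a goroutine whose stack grew through 10 segments wasted 10x.
	fmt.Println(guardWaste(10, stackGuard)) // 2560

	// New contiguous stacks: one segment per goroutine, so the guard
	// space is reserved exactly once.
	fmt.Println(guardWaste(1, stackGuard)) // 256
}
```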
Fixes windows/amd64 build.
TBR=r
CC=golang-codereviews
https://golang.org/cl/132480043
// After a stack split check the SP is allowed to be this
// many bytes below the stack guard. This saves an instruction
// in the checking sequence for tiny frames.
- stackSmall = 96
+ stackSmall = 64
// The maximum number of bytes that a chain of NOSPLIT
// functions can use.
a call to morestack. This sequence needs to fit in the bottom
section of the stack. On amd64, morestack's frame is 40 bytes, and
deferproc's frame is 56 bytes. That fits well within the
-StackGuard - StackSmall = 128 bytes at the bottom.
+StackGuard - StackSmall bytes at the bottom.
The linkers explore all possible call traces involving non-splitting
functions to make sure that this limit cannot be violated.
*/
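The "non-splitting functions" mentioned above are the ones marked with the //go:nosplit pragma; a minimal sketch of such a chain (function names are illustrative, not from the runtime):

```go
package main

import "fmt"

// Functions marked //go:nosplit omit the stack-split check, so the
// linker must prove that every possible chain of them fits in the
// StackGuard - StackSmall bytes below the stack guard.

//go:nosplit
func leaf() int {
	var buf [16]byte // small frame; a large array here would trip the linker's check
	buf[0] = 1
	return int(buf[0])
}

//go:nosplit
func caller() int {
	// Calling another nosplit function extends the chain the linker verifies.
	return leaf() + 1
}

func main() {
	fmt.Println(caller()) // 2
}
```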
// After a stack split check the SP is allowed to be this
// many bytes below the stack guard. This saves an instruction
// in the checking sequence for tiny frames.
- StackSmall = 96,
+ StackSmall = 64,
// The maximum number of bytes that a chain of NOSPLIT
// functions can use.
// See stack.h.
const (
StackGuard = 256
- StackSmall = 96
+ StackSmall = 64
StackLimit = StackGuard - StackSmall
)
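The new values imply a NOSPLIT budget of 192 bytes, comfortably above the 96 bytes of morestack plus deferproc frames quoted in the stack.h comment. A quick check, with the constants copied from the hunk above:

```go
package main

import "fmt"

// Values copied from the const block above.
const (
	stackGuard = 256
	stackSmall = 64
	stackLimit = stackGuard - stackSmall
)

// Frame sizes quoted in the stack.h comment for amd64.
const morestackFrame, deferprocFrame = 40, 56

func main() {
	fmt.Println(stackLimit)                                  // 192
	fmt.Println(morestackFrame+deferprocFrame <= stackLimit) // true
}
```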
name := m[1]
size, _ := strconv.Atoi(m[2])
- // CL 131450043 raised the limit from 128 to 160.
+ // The limit was originally 128 but is now 192.
// Instead of rewriting the test cases above, adjust
// the first stack frame to use up the extra 32 bytes.
if i == 0 {
- size += 32
+ size += 192 - 128
}
if goarch == "amd64" && size%8 == 4 {