Sysmon can actually get the RW lock execLock while holding the sysmon
lock (if no M is available), so there is an edge from lockRankSysmon to
lockRankRwmutexR. The stack trace is: sysmon() [gets sched.sysmonlock] ->
startm() -> newm() -> newm1() -> execLock.rlock() [gets
execLock.rLock].
Change-Id: I9658659ba3899afb5219114d66b989abd50540db
Reviewed-on: https://go-review.googlesource.com/c/go/+/265721
Trust: Dan Scales <danscales@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com>
Cherry Zhang [Sat, 24 Oct 2020 17:14:36 +0000 (13:14 -0400)]
runtime: set up TLS without cgo on darwin/arm64
Currently, on darwin/arm64 we set up TLS using cgo. TLS is not
set for pure Go programs. As we use libc for syscalls on darwin,
we need to save the G register before the libc call. Otherwise it
is not signal-safe, as a signal may land during the execution of
a libc function, where the G register may be clobbered.
This CL initializes TLS in Go, by calling the pthread functions
directly without cgo. This makes it possible to save the G
register to TLS in pure Go programs (done in a later CL).
Inspired by Elias's CL 209197, but the logic is written in Go instead of
assembly.
Updates #38485, #35853.
Change-Id: I257ba2a411ad387b2f4d50d10129d37fec7a226e
Reviewed-on: https://go-review.googlesource.com/c/go/+/265118
Trust: Cherry Zhang <cherryyz@google.com>
Trust: Elias Naur <mail@eliasnaur.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Use a standard "not-equal" label that we can jump to when we
detect that the arguments are not equal. This prevents the
recombination that was noticed in #39428.
Fixes #39428
Change-Id: Ib7c6b3539f4f6046043fd7148f940fcdcab70427
Reviewed-on: https://go-review.googlesource.com/c/go/+/255317
Trust: Keith Randall <khr@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Ian Lance Taylor [Wed, 28 Oct 2020 04:05:13 +0000 (21:05 -0700)]
runtime: move TestNeedmDeadlock to crash_cgo_test.go
It requires cgo. Also, skip the test on windows and plan9.
For #42207
Change-Id: I8522773f93bc3f9826506a41a08b86a083262e31
Reviewed-on: https://go-review.googlesource.com/c/go/+/265778
Trust: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Meng Zhuo [Tue, 27 Oct 2020 06:18:13 +0000 (14:18 +0800)]
cmd/link: remove all constants of elf
Use debug/elf instead.
Related:
CL 252478
CL 265317
Change-Id: If63b0458d9a6e825b40616bfb7a5a2c2e32402b4
Reviewed-on: https://go-review.googlesource.com/c/go/+/265318
Trust: Meng Zhuo <mzh@golangcn.org>
Run-TryBot: Meng Zhuo <mzh@golangcn.org>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Joel Sing <joel@sing.id.au> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Ian Lance Taylor [Tue, 27 Oct 2020 23:09:40 +0000 (16:09 -0700)]
runtime: block signals in needm before allocating M
Otherwise, if a signal occurs just after we allocated the M,
we can deadlock if the signal handler needs to allocate an M
itself.
Fixes #42207
Change-Id: I76f44547f419e8b1c14cbf49bf602c6e645d8c14
Reviewed-on: https://go-review.googlesource.com/c/go/+/265759
Trust: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Bryan C. Mills <bcmills@google.com>
George Tsilias [Thu, 4 Jun 2020 20:11:56 +0000 (23:11 +0300)]
runtime: handle signal 34 for musl setgid
It has been observed that setgid hangs when using cgo with musl.
This fix ensures that signal 34 gets handled in an appropriate way,
like signal 33 when using glibc.
Fixes #39343
Change-Id: I89565663e2c361f62cbccfe80aaedf290bd58d57
Reviewed-on: https://go-review.googlesource.com/c/go/+/236518
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Tobias Klauser <tobias.klauser@gmail.com> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Christopher Hlubek [Mon, 26 Oct 2020 12:44:44 +0000 (13:44 +0100)]
time: fix LoadLocationFromTZData with slim tzdata
For a time zone file whose last transition is before the current time,
the extend information could result in a wrong cached zone, because the
zone of the last transition was used instead.
This could lead to wrong zones on systems with slim zoneinfo.
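For illustration, the user-visible effect can be checked with a date beyond
the last recorded transition (the zone and date below are arbitrary examples,
not taken from the CL):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // With slim tzdata, zones for dates beyond the last recorded
        // transition are derived from the TZ "extend" string rather than
        // from explicit transition records.
        loc, err := time.LoadLocation("America/New_York")
        if err != nil {
            panic(err)
        }
        // A far-future summer date should report the DST zone (EDT, UTC-4),
        // not the zone of the last recorded transition.
        name, offset := time.Date(2030, time.July, 1, 12, 0, 0, 0, loc).Zone()
        fmt.Println(name, offset)
    }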
Fixes #42216
Change-Id: I7c57c35b5cfa58482ac7925b5d86618c52f5444d
Reviewed-on: https://go-review.googlesource.com/c/go/+/264939
Trust: Tobias Klauser <tobias.klauser@gmail.com>
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Ian Lance Taylor [Wed, 30 Sep 2020 00:01:33 +0000 (17:01 -0700)]
runtime: don't always adjust timers
Some programs have a lot of timers that they adjust both forward and
backward in time. This can cause a large number of timerModifiedEarlier
timers. In practice these timers are used for I/O deadlines and are
rarely reached. The effect is that the runtime spends a lot of time
in adjusttimers making sure that there are no timerModifiedEarlier
timers, but the effort is wasted because none of the adjusted timers
are near the top of the timer heap anyhow.
Avoid much of this extra work by keeping track of the earliest known
timerModifiedEarlier timer. This lets us skip adjusttimers if we know
that none of the timers will be ready to run anyhow. We will still
eventually run it, when we reach the deadline of the earliest known
timerModifiedEarlier, although in practice that timer has likely
been removed. When we do run adjusttimers, we will reset all of the
timerModifiedEarlier timers, and clear our notion of when we need
to run adjusttimers again.
The effect should be to significantly reduce the number of times we
walk through the timer list in adjusttimers.
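The bookkeeping described above can be sketched roughly as follows
(illustrative names and simplified logic, not the runtime's timer code):

    package main

    import (
        "sync/atomic"
        "time"
    )

    // earliestModified tracks the earliest known deadline of a timer that was
    // modified to an earlier time (0 means "none known").
    var earliestModified int64

    // noteModifiedEarlier records a timer whose new deadline may be earlier
    // than any previously recorded one.
    func noteModifiedEarlier(when int64) {
        for {
            old := atomic.LoadInt64(&earliestModified)
            if old != 0 && old <= when {
                return
            }
            if atomic.CompareAndSwapInt64(&earliestModified, old, when) {
                return
            }
        }
    }

    // maybeAdjust skips the expensive adjustment pass entirely unless the
    // earliest modified timer could already be due.
    func maybeAdjust(now int64, adjust func()) {
        first := atomic.LoadInt64(&earliestModified)
        if first == 0 || first > now {
            return // nothing can be ready yet
        }
        atomic.StoreInt64(&earliestModified, 0)
        adjust()
    }

    func main() {
        noteModifiedEarlier(time.Now().Add(time.Second).UnixNano())
        maybeAdjust(time.Now().UnixNano(), func() {})
    }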
Fixes #41699
Change-Id: I38eb2be611fb34e3017bb33d0a9ed40d75fb414f
Reviewed-on: https://go-review.googlesource.com/c/go/+/258303
Trust: Ian Lance Taylor <iant@golang.org>
Trust: Emmanuel Odeke <emmanuel@orijtech.com>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Knyszek <mknyszek@google.com>
Keith Randall [Tue, 27 Oct 2020 21:15:00 +0000 (14:15 -0700)]
cmd/compile: print pointers to go:notinheap types without converting to unsafe.Pointer
Pretty minor concern, but after auditing the compiler/runtime for
conversions from pointers to go:notinheap types to unsafe.Pointer,
this is the only remaining one I found.
Update #42076
Change-Id: I81d5b893c9ada2fc19a51c2559262f2e9ff71c35
Reviewed-on: https://go-review.googlesource.com/c/go/+/265757
Trust: Keith Randall <khr@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Keith Randall [Thu, 22 Oct 2020 23:37:19 +0000 (16:37 -0700)]
cmd/compile, runtime: store pointers to go:notinheap types indirectly
Pointers to go:notinheap types should be treated as scalars. That
means they shouldn't be stored directly in interfaces, or directly
in reflect.Value.ptr.
Also be sure to use uintptr to compare such pointers in reflect.DeepEqual.
Fixes #42076
Change-Id: I53735f6d434e9c3108d4940bd1bae14c61ef2a74
Reviewed-on: https://go-review.googlesource.com/c/go/+/264480
Trust: Keith Randall <khr@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Keith Randall [Thu, 22 Oct 2020 20:11:16 +0000 (13:11 -0700)]
cmd/compile: fix storeType to handle pointers to go:notinheap types
storeType splits compound stores up into a scalar part and a pointer part.
The scalar part happens unconditionally, and the pointer part happens
under the guard of a write barrier check.
Types which are declared as pointers, but are represented as scalars because
they might have "bad" values, were not handled correctly here. They ended
up not getting stored in either set.
Fixes #42032
Change-Id: I46f6600075c0c370e640b807066247237f93c7ac
Reviewed-on: https://go-review.googlesource.com/c/go/+/264300
Trust: Keith Randall <khr@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Ian Lance Taylor [Tue, 27 Oct 2020 19:59:54 +0000 (12:59 -0700)]
go/internal/gccgoimporter: support notinheap annotation
The gofrontend has started emitting a notinheap annotation for types
marked go:notinheap.
For #41761
Change-Id: Ic8f7ffc32dbfe98ec09b3d835957f1be8e6c1208
Reviewed-on: https://go-review.googlesource.com/c/go/+/265702
Trust: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Than McIntosh <thanm@google.com>
Alberto Donizetti [Tue, 27 Oct 2020 10:03:21 +0000 (11:03 +0100)]
cmd/compile: delete isPowerOfTwo, switch to isPowerOfTwo64
rewrite.go has two identical functions isPowerOfTwo and
isPowerOfTwo64; the former has been there for a while, while the
latter was added together with isPowerOfTwo{8,16,32} for use in typed
rules.
This change deletes isPowerOfTwo and switches to using isPowerOfTwo64
everywhere.
Change-Id: If26c94565d2393fac6f0ba117ee7ee2fc915f7cd
Reviewed-on: https://go-review.googlesource.com/c/go/+/265417
Trust: Alberto Donizetti <alb.donizetti@gmail.com>
Run-TryBot: Alberto Donizetti <alb.donizetti@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Keith Randall <khr@golang.org>
Alberto Donizetti [Tue, 27 Oct 2020 08:38:52 +0000 (09:38 +0100)]
cmd/compile: clean up ValAndOff funcs after untyped aux removal
Changes:
- makeValAndOff is deleted in favour of MakeValAndOff{32,64}
- canAdd is renamed to canAdd64 for uniformity with the existing canAdd32
- addOffset{32,64} is simplified by directly using MakeValAndOff{32,64}
- ValAndOff.Int64 is removed
Change-Id: Ic01db7fa31ddfe0aaaf1d1d77af823d48a7bee84
Reviewed-on: https://go-review.googlesource.com/c/go/+/265357
Run-TryBot: Alberto Donizetti <alb.donizetti@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Keith Randall <khr@golang.org>
Trust: Alberto Donizetti <alb.donizetti@gmail.com>
Alberto Donizetti [Sat, 24 Oct 2020 15:27:52 +0000 (17:27 +0200)]
cmd/compile: remove support for untyped ssa rules
This change removes support in rulegen for untyped -> ssa rules.
Change-Id: I202018e191fc74f027243351bc8cf96145f2482c
Reviewed-on: https://go-review.googlesource.com/c/go/+/264679
Run-TryBot: Alberto Donizetti <alb.donizetti@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Keith Randall <khr@golang.org>
Trust: Alberto Donizetti <alb.donizetti@gmail.com>
Dan Peterson [Thu, 22 Oct 2020 22:25:56 +0000 (19:25 -0300)]
net/http: use exponential backoff for polling in Server.Shutdown
Instead of always polling 500ms, start with an interval of 1ms and
exponentially back off to at most 500ms. 10% jitter is added to each
interval.
This makes Shutdown more responsive when connections and listeners
close quickly.
This also removes the need for the polling interval to be changed in tests:
if a test's connections and listeners close quickly, Shutdown will
also return quickly.
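A rough sketch of such a poll loop with exponential backoff and 10% jitter
(illustrative only, not the actual net/http implementation):

    package main

    import (
        "context"
        "math/rand"
        "time"
    )

    // pollWithBackoff calls done until it reports true or ctx is cancelled,
    // starting at a 1ms interval and backing off exponentially to at most
    // 500ms, with roughly 10% jitter added to each interval.
    func pollWithBackoff(ctx context.Context, done func() bool) error {
        interval := time.Millisecond
        const maxInterval = 500 * time.Millisecond
        timer := time.NewTimer(interval)
        defer timer.Stop()
        for {
            if done() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-timer.C:
            }
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
            // Add up to 10% jitter so many pollers don't wake in lockstep.
            jitter := time.Duration(rand.Int63n(int64(interval / 10)))
            timer.Reset(interval + jitter)
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        _ = pollWithBackoff(ctx, func() bool { return false })
    }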
Paul E. Murphy [Fri, 23 Oct 2020 17:12:34 +0000 (12:12 -0500)]
cmd/compile: combine more 32 bit shift and mask operations on ppc64
Combine (AND m (SRWconst x)) or (SRWconst (AND m x)) when mask m and
the shift value produce a constant which can be encoded into an
RLWINM instruction.
Combine (CLRLSLDI (SRWconst x)) if combining the underlying rotate
masks produces a constant which can be encoded into RLWINM.
Likewise for (SLDconst (SRWconst x)) and (CLRLSLDI (RLWINM x)).
Combine rotate word + and operations which can be encoded as a single
RLWINM/RLWNM instruction.
The most notable performance improvements arise from the crypto
benchmarks below (GOARCH=power8 on a ppc64le/linux):
Chris Hines [Fri, 1 May 2020 21:04:36 +0000 (17:04 -0400)]
runtime: reduce timer latency
Change the scheduler to treat expired timers with the same approach it
uses to steal runnable G's.
Previously the scheduler ignored timers on P's not marked for
preemption. That had the downside that any G's waiting on those expired
timers starved until the G running on their P completed or was
preempted. That could take as long as 20ms if sysmon was in a 10ms
wake up cycle.
In addition, a spinning P that ignored an expired timer and found no
other work would stop despite there being available work, missing the
opportunity for greater parallelism.
With this change the scheduler no longer ignores timers on
non-preemptable P's or relies on sysmon as a backstop to start threads
when timers expire. Instead it wakes an idle P, if needed, when
creating a new timer because it cannot predict if the current P will
have a scheduling opportunity before the new timer expires. The P it
wakes will determine how long to sleep and block on the netpoller for
the required time, potentially stealing the new timer when it wakes.
This change also eliminates a race between a spinning P transitioning
to idle concurrently with timer creation using the same pattern used
for submission of new goroutines in the same window.
Benchmark analysis:
CL 232199, which was included in Go 1.15, improved timer latency over Go
1.14 by allowing P's to steal timers from P's not marked for preemption.
The benchmarks added in this CL measure that improvement in the
ParallelTimerLatency benchmark seen below. However, Go 1.15 still relies
on sysmon to notice expired timers in some situations and sysmon can
sleep for up to 10ms before waking to check timers. This CL fixes that
shortcoming with modest regression on other benchmarks.
xd [Wed, 14 Oct 2020 18:02:49 +0000 (11:02 -0700)]
cmd/dist: fix build failure of misc/cgo/test on arm64
misc/cgo/test fails in 'dist test' on arm64 if the C compiler is GCC 9.4 or
above and its 'outline atomics' feature is enabled, since internal linking
doesn't yet support "__attribute__((constructor))" and also mishandles hidden
visibility.
This change addresses the problem by skipping the internal linking cases of
misc/cgo/test on linux/arm64. It fixes the 'dist test' failure only; users are
expected to pass the GCC option '-mno-outline-atomics' via CGO_CFLAGS if they
run into the same problem when building cgo programs with internal linking.
Ayzat Sadykov [Sat, 24 Oct 2020 22:11:51 +0000 (22:11 +0000)]
database/sql: fix comment on DB.stop()
Previously, two goroutines were created in OpenDB, and the comment on DB.stop indicated that they were canceled. Later, the connection resetter goroutine was removed, but the comment remained the same. This commit just fixes that comment.
Joel Sing [Sat, 24 Oct 2020 14:34:17 +0000 (01:34 +1100)]
cmd/compile: eliminate unnecessary sign/zero extension for riscv64
Add additional rules to eliminate unnecessary sign/zero extension for riscv64.
Also, where possible, replace an extension following a load with a differently typed
load. This removes almost another 8,000 instructions from the go binary.
Of particular note, change Eq16/Eq8/Neq16/Neq8 to zero extend each value before
subtraction, rather than zero extending after subtraction. While this appears to
double the number of zero extensions, it often lets us completely eliminate them
as the load can already be performed in a properly typed manner.
As an example, prior to this change runtime.memequal16 was:
Change-Id: I16321feb18381241cab121c0097a126104c56c2c
Reviewed-on: https://go-review.googlesource.com/c/go/+/264659
Trust: Joel Sing <joel@sing.id.au>
Run-TryBot: Joel Sing <joel@sing.id.au>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
Jason A. Donenfeld [Thu, 5 Dec 2019 17:48:21 +0000 (18:48 +0100)]
crypto/rand: generate random numbers using RtlGenRandom on Windows
CryptGenRandom appears to be disfavored these days, whereas the classic
RtlGenRandom is still going strong.
This commit also moves the warnBlocked function into rand_unix, rather
than rand, because it's now only used on unix.
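For reference, RtlGenRandom is exported from advapi32.dll as
SystemFunction036 and can be called along these lines (a sketch of the
approach, not the crypto/rand source):

    //go:build windows

    package main

    import (
        "fmt"
        "syscall"
        "unsafe"
    )

    // RtlGenRandom is exported from advapi32.dll under the name
    // SystemFunction036; it returns non-zero on success.
    var procRtlGenRandom = syscall.NewLazyDLL("advapi32.dll").NewProc("SystemFunction036")

    // rtlGenRandom fills buf with cryptographically random bytes.
    func rtlGenRandom(buf []byte) error {
        if len(buf) == 0 {
            return nil
        }
        r, _, err := procRtlGenRandom.Call(
            uintptr(unsafe.Pointer(&buf[0])),
            uintptr(len(buf)),
        )
        if r == 0 {
            return err
        }
        return nil
    }

    func main() {
        b := make([]byte, 16)
        if err := rtlGenRandom(b); err != nil {
            panic(err)
        }
        fmt.Printf("%x\n", b)
    }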
Fixes #33542
Change-Id: I5c02a5917572f54079d627972401efb6e1ce4057
Reviewed-on: https://go-review.googlesource.com/c/go/+/210057
Run-TryBot: Jason A. Donenfeld <Jason@zx2c4.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Filippo Valsorda <filippo@golang.org>
Trust: Jason A. Donenfeld <Jason@zx2c4.com>
Joel Sing [Sat, 24 Oct 2020 13:32:23 +0000 (00:32 +1100)]
cmd/compile: use MOV pseudo-instructions for sign/zero extension
Rather than handling sign and zero extension via rules, defer to the assembler
and use MOV pseudo-instructions. The instruction can also be omitted where the
type and size are already correct. This change results in more than 6,000
instructions being removed from the go binary (in part due to omitted
instructions, in part due to MOVBU having a more efficient implementation in
the assembler than what is used in the current ZeroExt8to{16,32,64} rules).
This will also allow for further rewriting to remove redundant sign/zero
extension.
Change-Id: I05e42fd9f09f40a69948be7de772cce8946c8744
Reviewed-on: https://go-review.googlesource.com/c/go/+/264658
Trust: Joel Sing <joel@sing.id.au> Reviewed-by: Cherry Zhang <cherryyz@google.com>
Bryan C. Mills [Tue, 13 Oct 2020 14:58:13 +0000 (10:58 -0400)]
cmd/go/internal/modload: embed PackageOpts in loaderParams
Instead of duplicating PackageOpts fields in the loaderParams struct,
embed the PackageOpts directly. Many of the fields are duplicated, and
further fields that would also be duplicated will be added in
subsequent changes.
For #36460
Change-Id: I3b0770d162e901d23ec1643183eb07c413d51e0a
Reviewed-on: https://go-review.googlesource.com/c/go/+/263138
Trust: Bryan C. Mills <bcmills@google.com>
Run-TryBot: Bryan C. Mills <bcmills@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Jay Conrod <jayconrod@google.com> Reviewed-by: Michael Matloob <matloob@golang.org>
Joel Sing [Tue, 19 May 2020 08:55:10 +0000 (18:55 +1000)]
cmd/link,cmd/internal/obj/riscv: add TLS support for linux/riscv64
Add support for Thread Local Storage (TLS) for linux/riscv64 with external
linking, using the initial-exec model.
Update #36641
Change-Id: I3106ef9a29cde73215830b00deff43dbec1c76e0
Reviewed-on: https://go-review.googlesource.com/c/go/+/263478
Trust: Joel Sing <joel@sing.id.au>
Run-TryBot: Joel Sing <joel@sing.id.au>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
Michael Anthony Knyszek [Tue, 14 Jul 2020 21:45:16 +0000 (21:45 +0000)]
runtime: implement addrRanges.findSucc with a binary search
This change modifies addrRanges.findSucc to more efficiently find the
successor range in an addrRanges by using a binary search to narrow down
large addrRanges and iterate over no more than 8 addrRanges.
This change makes the runtime more robust against systems that may
aggressively randomize the address space mappings it gives the runtime
(e.g. Fuchsia).
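The successor search can be sketched with sort.Search over simplified ranges
(illustrative types; the runtime narrows with its own binary search and
finishes with a short linear scan over at most a few entries):

    package main

    import (
        "fmt"
        "sort"
    )

    // addrRange is a simplified stand-in for the runtime's address ranges:
    // [base, limit) intervals sorted by base.
    type addrRange struct{ base, limit uintptr }

    // findSucc returns the index of the first range whose base is strictly
    // greater than addr, i.e. the successor position for insertion.
    func findSucc(ranges []addrRange, addr uintptr) int {
        return sort.Search(len(ranges), func(i int) bool {
            return ranges[i].base > addr
        })
    }

    func main() {
        rs := []addrRange{{0x1000, 0x2000}, {0x8000, 0x9000}, {0x20000, 0x30000}}
        fmt.Println(findSucc(rs, 0x8500)) // 2: the successor of 0x8500 is the third range
    }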
For #40191.
Change-Id: If529df2abd2edb1b1496d8690ddd284ecd7138c2
Reviewed-on: https://go-review.googlesource.com/c/go/+/242679
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Austin Clements <austin@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 17 Sep 2020 21:19:28 +0000 (21:19 +0000)]
runtime: implement dumpmemstats in terms of readmemstats_m
Since MemStats is now populated directly and some values are derived,
avoid duplicating the logic by populating the heap dump directly
from MemStats (the external version) rather than memstats (the runtime-internal
version).
Change-Id: I0bec96bfa02d2ffd1b56475779c124a760e64238
Reviewed-on: https://go-review.googlesource.com/c/go/+/255817
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Fri, 7 Aug 2020 16:37:29 +0000 (16:37 +0000)]
runtime,runtime/metrics: export goroutine count as a metric
For #37112.
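Reading the new metric from user code looks roughly like this (the metric
name is the one documented by runtime/metrics; metrics.All() gives the
authoritative list):

    package main

    import (
        "fmt"
        "runtime/metrics"
    )

    func main() {
        // Read the goroutine count through the runtime/metrics interface.
        s := []metrics.Sample{{Name: "/sched/goroutines:goroutines"}}
        metrics.Read(s)
        if s[0].Value.Kind() == metrics.KindUint64 {
            fmt.Println("goroutines:", s[0].Value.Uint64())
        }
    }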
Change-Id: I994dfe848605b95ef6aec24f53869e929247e987
Reviewed-on: https://go-review.googlesource.com/c/go/+/247049
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 6 Aug 2020 21:59:13 +0000 (21:59 +0000)]
runtime,runtime/metrics: add metric for distribution of GC pauses
For #37112.
Change-Id: Ibb0425c9c582ae3da3b2662d5bbe830d7df9079c
Reviewed-on: https://go-review.googlesource.com/c/go/+/247047
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 6 Aug 2020 20:36:49 +0000 (20:36 +0000)]
runtime: add timeHistogram type
This change adds a concurrent HDR time histogram to the runtime with
tests. It also adds a function to generate boundaries for use by the
metrics package.
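The bucketing idea behind an HDR-style time histogram can be sketched as
follows (constants and layout are illustrative, not the runtime's internal
timeHistogram):

    package main

    import (
        "fmt"
        "math/bits"
    )

    // One "super bucket" per power of two of the duration, each split into
    // 16 linear sub-buckets, which bounds the relative error while keeping
    // the total bucket count small.
    const (
        subBucketBits = 4
        subBuckets    = 1 << subBucketBits // 16 sub-buckets per super bucket
        superBuckets  = 64 - subBucketBits + 1
    )

    type timeHist struct {
        counts [superBuckets * subBuckets]uint64
    }

    // bucketIndex maps a duration value to a bucket index.
    func bucketIndex(v uint64) int {
        if v < subBuckets {
            return int(v) // super bucket 0: exact, one value per sub-bucket
        }
        p := bits.Len64(v) - 1         // position of the leading bit
        super := p - subBucketBits + 1 // one super bucket per power of two
        sub := int(v>>(p-subBucketBits)) & (subBuckets - 1)
        return super*subBuckets + sub
    }

    func (h *timeHist) record(nanos int64) {
        if nanos < 0 {
            nanos = 0
        }
        h.counts[bucketIndex(uint64(nanos))]++
    }

    func main() {
        var h timeHist
        for _, d := range []int64{3, 17, 48, 1_000_000} {
            h.record(d)
        }
        fmt.Println(bucketIndex(3), bucketIndex(17), bucketIndex(48)) // 3 17 40
    }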
For #37112.
Change-Id: Ifbef8ddce8e3a965a0dcd58ccd4915c282ae2098
Reviewed-on: https://go-review.googlesource.com/c/go/+/247046
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 6 Aug 2020 19:04:46 +0000 (19:04 +0000)]
runtime,runtime/metrics: add object size distribution metrics
This change adds metrics for the distribution of objects allocated and
freed by size, mirroring MemStats' BySize field.
For #37112.
Change-Id: Ibaf1812da93598b37265ec97abc6669c1a5efcbf
Reviewed-on: https://go-review.googlesource.com/c/go/+/247045
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Austin Clements [Mon, 26 Oct 2020 18:32:13 +0000 (14:32 -0400)]
cmd/go,cmd/compile,sync: remove special import case in cmd/go
CL 253748 introduced a special case in cmd/go to allow sync to import
runtime/internal/atomic. Besides introducing unnecessary complexity
into cmd/go, this breaks other packages (like gopls) that understand
how imports work, but don't understand this special case.
Fix this by using the more standard linkname-based approach to pull
the necessary functions from runtime/internal/atomic into sync. Since
these are compiler intrinsics, we also have to tell the compiler that
the linknamed symbols are intrinsics to get this optimization in sync.
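The linkname pull pattern looks roughly like the sketch below. The names are
illustrative of the approach, not an exact copy of the sync sources, and a
package with a body-less declaration like this also needs an (empty) .s file
so the compiler accepts the missing body:

    package sync // sketch only, not the real file

    import (
        _ "unsafe" // required for go:linkname
    )

    // Provided by runtime/internal/atomic; the compiler treats the target
    // as an intrinsic, so the call lowers to a single atomic instruction.
    //go:linkname runtime_LoadAcquintptr runtime/internal/atomic.LoadAcquintptr
    func runtime_LoadAcquintptr(ptr *uintptr) uintptr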
Fixes #42196.
Change-Id: I1f91498c255c91583950886a89c3c9adc39a32f0
Reviewed-on: https://go-review.googlesource.com/c/go/+/265124
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Bryan C. Mills <bcmills@google.com> Reviewed-by: Keith Randall <khr@golang.org> Reviewed-by: Paul Murphy <murp@ibm.com>
TryBot-Result: Go Bot <gobot@golang.org>
Michael Anthony Knyszek [Thu, 6 Aug 2020 16:47:58 +0000 (16:47 +0000)]
runtime,runtime/metrics: add heap goal and GC cycle metrics
This change adds three new metrics: the heap goal, GC cycle count, and
forced GC count. These metrics are identical to their MemStats
counterparts.
For #37112.
Change-Id: I5a5e8dd550c0d646e5dcdbdf38274895e27cdd88
Reviewed-on: https://go-review.googlesource.com/c/go/+/247044
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Change-Id: Idd3dd5c84215ddd1ab05c2e76e848aa0a4d40fb0
Reviewed-on: https://go-review.googlesource.com/c/go/+/247043
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Wed, 5 Aug 2020 23:10:46 +0000 (23:10 +0000)]
runtime: add readMetrics latency benchmark
This change adds a new benchmark to the runtime tests for measuring the
latency of the new metrics implementation, based on the
ReadMemStats latency benchmark. readMetrics will have more metrics added
to it in the future, and this benchmark will serve as a way to measure
the cost of adding additional metrics.
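A user-level approximation of such a benchmark, using the public
runtime/metrics API (not the runtime's internal benchmark itself):

    package metrics_test

    import (
        "runtime/metrics"
        "testing"
    )

    // BenchmarkReadMetrics measures the latency of a single metrics.Read
    // call over all supported metrics.
    func BenchmarkReadMetrics(b *testing.B) {
        descs := metrics.All()
        samples := make([]metrics.Sample, len(descs))
        for i, d := range descs {
            samples[i].Name = d.Name
        }
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            metrics.Read(samples)
        }
    }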
Change-Id: Ib05e3ed4afa49a70863fc0c418eab35b72263e24
Reviewed-on: https://go-review.googlesource.com/c/go/+/247042
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Emmanuel Odeke <emmanuel@orijtech.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Wed, 1 Jul 2020 16:02:42 +0000 (16:02 +0000)]
runtime,runtime/metrics: add memory metrics
This change adds support for a variety of runtime memory metrics and
contains the base implementation of Read for the runtime/metrics
package, which lives in the runtime.
It also adds testing infrastructure for the metrics package, and a bunch
of format and documentation tests.
For #37112.
Change-Id: I16a2c4781eeeb2de0abcb045c15105f1210e2d8a
Reviewed-on: https://go-review.googlesource.com/c/go/+/247041
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
Trust: Michael Knyszek <mknyszek@google.com>
Michael Anthony Knyszek [Tue, 4 Aug 2020 17:29:03 +0000 (17:29 +0000)]
runtime: move malloc stats into consistentHeapStats
This change moves the mcache-local malloc stats into the
consistentHeapStats structure so the malloc stats can be managed
consistently with the memory stats. The one exception here is
tinyAllocs, for which moving into the global stats would incur
several atomic writes on the fast path; microbenchmarks for just one CPU
core have shown a 50% loss in throughput. Since the tiny allocation count
isn't exposed anyway and is always blindly added to both allocs and
frees, let that stay inconsistent and flush the tiny allocation count
every so often.
Change-Id: I2a4b75f209c0e659b9c0db081a3287bf227c10ca
Reviewed-on: https://go-review.googlesource.com/c/go/+/247039
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Mon, 3 Aug 2020 20:35:40 +0000 (20:35 +0000)]
runtime: replace some memstats with consistent stats
This change replaces stacks_inuse, gcWorkBufInUse and
gcProgPtrScalarBitsInUse with their corresponding consistent stats. It
also adds checks to make sure the rest of the sharded stats line up with
existing stats in updatememstats.
Change-Id: I17d0bd181aedb5c55e09c8dff18cef5b2a3a14e3
Reviewed-on: https://go-review.googlesource.com/c/go/+/247038
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Mon, 3 Aug 2020 20:11:04 +0000 (20:11 +0000)]
runtime: add consistent heap statistics
This change adds a global set of heap statistics which are similar
to existing memory statistics. The purpose of these new statistics
is to be able to read them and get a consistent result without stopping
the world. The goal is to eventually replace as many of the existing
memstats statistics with the sharded ones as possible.
The consistent memory statistics use a tailor-made synchronization
mechanism to allow writers (allocators) to proceed with minimal
synchronization by using a sequence counter and a global generation
counter to determine which set of statistics to update. Readers
increment the global generation counter to effectively grab a snapshot
of the statistics, and then iterate over all Ps using the sequence
counter to ensure that they may safely read the snapshotted statistics.
To keep statistics fresh, the reader also has a responsibility to merge
sets of statistics.
These consistent statistics are computed, but otherwise unused for now.
Upcoming changes will integrate them with the rest of the codebase and
will begin to phase out existing statistics.
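The reader/writer protocol can be illustrated with a seqcount-style sketch
(a single writer per shard is assumed, as with per-P stats; the runtime's
real implementation additionally rotates among three sets of statistics
using a global generation counter):

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    // shardStats is updated by exactly one writer (think: one P).
    type shardStats struct {
        seq    uint32 // odd while an update is in progress
        allocs uint64
        frees  uint64
    }

    func (s *shardStats) update(allocs, frees uint64) {
        atomic.AddUint32(&s.seq, 1) // seq becomes odd: update in progress
        atomic.StoreUint64(&s.allocs, atomic.LoadUint64(&s.allocs)+allocs)
        atomic.StoreUint64(&s.frees, atomic.LoadUint64(&s.frees)+frees)
        atomic.AddUint32(&s.seq, 1) // seq becomes even: update complete
    }

    // snapshot retries until it reads both counters from the same quiescent
    // window, so the pair is mutually consistent without stopping the writer.
    func (s *shardStats) snapshot() (allocs, frees uint64) {
        for {
            seq1 := atomic.LoadUint32(&s.seq)
            if seq1%2 != 0 {
                continue // writer active, retry
            }
            allocs = atomic.LoadUint64(&s.allocs)
            frees = atomic.LoadUint64(&s.frees)
            if atomic.LoadUint32(&s.seq) == seq1 {
                return allocs, frees
            }
        }
    }

    func main() {
        var st shardStats
        var wg sync.WaitGroup
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := 0; i < 1000; i++ {
                st.update(1, 1)
            }
        }()
        for i := 0; i < 100; i++ {
            st.snapshot()
        }
        wg.Wait()
        fmt.Println(st.snapshot())
    }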
Change-Id: I637a11f2439e2049d7dccb8650c5d82500733ca5
Reviewed-on: https://go-review.googlesource.com/c/go/+/247037
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Tue, 14 Apr 2020 21:06:26 +0000 (21:06 +0000)]
runtime/metrics: add package interface
This change creates the runtime/metrics package and adds the initial
interface as laid out in the design document.
For #37112.
Change-Id: I202dcee08ab008dd63bf96f7a4162f5b5f813637
Reviewed-on: https://go-review.googlesource.com/c/go/+/247040
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Mon, 3 Aug 2020 20:08:25 +0000 (20:08 +0000)]
runtime: add helper for getting an mcache in allocation contexts
This change adds a function getMCache which returns the current P's
mcache if it's available, and otherwise tries to get mcache0 if we're
bootstrapping. This function will come in handy as we need to replicate
this behavior in multiple places in future changes.
Change-Id: I536073d6f6dc6c6390269e613ead9f8bcb6e7f98
Reviewed-on: https://go-review.googlesource.com/c/go/+/246976
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Mon, 3 Aug 2020 19:31:23 +0000 (19:31 +0000)]
runtime: remove memstats.heap_alloc
memstats.heap_alloc is 100% a duplicate and unnecessary copy of
memstats.alloc which exists because MemStats used to be populated from
memstats via a memmove.
Change-Id: I995489f61be39786e573b8494a8ab6d4ea8bed9c
Reviewed-on: https://go-review.googlesource.com/c/go/+/246975
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Mon, 3 Aug 2020 19:27:59 +0000 (19:27 +0000)]
runtime: remove memstats.heap_idle
This statistic is updated in many places but for MemStats may be
computed from existing statistics. Specifically, by definition,
heap_idle = heap_sys - heap_inuse, since heap_sys is all memory allocated
from the OS for use in the heap minus memory used for non-heap purposes.
heap_idle is almost the same (since it explicitly includes memory that
*could* be used for non-heap purposes) but also doesn't include memory
that's actually used to hold heap objects.
Although it has some utility as a sanity check, it complicates
accounting and we want fewer, orthogonal statistics for upcoming metrics
changes, so just drop it.
Change-Id: I40af54a38e335f43249f6e218f35088bfd4380d1
Reviewed-on: https://go-review.googlesource.com/c/go/+/246974
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Mon, 3 Aug 2020 19:23:30 +0000 (19:23 +0000)]
runtime: break down memstats.gc_sys
This change breaks apart gc_sys into three distinct pieces. Two of those
pieces come from heap_sys, since they're allocated from
the page heap. The rest comes from memory mapped from e.g.
persistentalloc which better fits the purpose of a sysMemStat. Also,
rename gc_sys to gcMiscSys.
Change-Id: I098789170052511e7b31edbcdc9a53e5c24573f7
Reviewed-on: https://go-review.googlesource.com/c/go/+/246973
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Fri, 31 Jul 2020 21:32:26 +0000 (21:32 +0000)]
runtime: copy in MemStats fields explicitly
Currently MemStats is populated via an unsafe memmove from memstats, but
this places unnecessary structural restrictions on memstats, is annoying
to reason about, and tightly couples the two. Instead, just populate the
fields of MemStats explicitly.
Change-Id: I96f6a64326b1a91d4084e7b30169a4bbe6a331f9
Reviewed-on: https://go-review.googlesource.com/c/go/+/246972
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Wed, 29 Jul 2020 20:25:05 +0000 (20:25 +0000)]
runtime: delineate which memstats are system stats with a type
This change modifies the type of several mstats fields to be a new type:
sysMemStat. This type has the same structure as the fields used to have.
The purpose of this change is to make it very clear which stats may be
used in various functions for accounting (usually the platform-specific
sys* functions, but there are others). Currently there's an implicit
understanding that the *uint64 value passed to these functions is some
kind of statistic whose value is atomically managed. This understanding
isn't inherently problematic, but we're about to change how some stats
(which currently use mSysStatInc and mSysStatDec) work, so we want to
make it very clear what the various requirements are around "sysStat".
This change also removes mSysStatInc and mSysStatDec in favor of a
method on sysMemStat. Note that those two functions were originally
written the way they were because atomic 64-bit adds required a valid G
on ARM, but this hasn't been the case for a very long time (since
golang.org/cl/14204, but even before then it wasn't clear if mutexes
required a valid G anymore). Today we implement 64-bit adds on ARM with
a spinlock table.
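The shape of such a type can be sketched as follows (names follow the
commit's description; the real type lives in the runtime and uses its
internal atomics):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // sysMemStat is a distinct type for statistics that the platform-specific
    // sys* allocation functions are allowed to update, with atomic access
    // expressed as methods rather than free functions.
    type sysMemStat uint64

    func (s *sysMemStat) load() uint64 {
        return atomic.LoadUint64((*uint64)(s))
    }

    // add accepts a signed delta; converting a negative int64 to uint64 wraps
    // around, so a single atomic add handles both increments and decrements.
    func (s *sysMemStat) add(n int64) {
        atomic.AddUint64((*uint64)(s), uint64(n))
    }

    func main() {
        var stackSys sysMemStat
        stackSys.add(8192)
        stackSys.add(-4096)
        fmt.Println(stackSys.load()) // 4096
    }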
Change-Id: I4e9b37cf14afc2ae20cf736e874eb0064af086d7
Reviewed-on: https://go-review.googlesource.com/c/go/+/246971
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Wed, 29 Jul 2020 19:00:37 +0000 (19:00 +0000)]
runtime: make the span allocation purpose more explicit
This change modifies mheap's span allocation API to have each caller
declare a purpose, defined as a new enum called spanAllocType.
The purpose behind this change is two-fold:
1. Tight control over who gets to allocate heap memory is, generally
speaking, a good thing. Every codepath that allocates heap memory
places additional implicit restrictions on the allocator. A notable
example of a restriction is work bufs coming from heap memory: write
barriers are not allowed in allocation paths because then we could
have a situation where the allocator calls into the allocator.
2. Memory statistic updating is explicit. Instead of passing an opaque
pointer for statistic updating, which places restrictions on how that
statistic may be updated, we use the spanAllocType to determine which
statistic to update and how.
We also take this opportunity to group all the statistic updating code
together, which should make the accounting code a little easier to
follow.
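A sketch of the enum and the kind of helper it enables (constant names
follow the commit's intent and may not match the runtime's exact
identifiers):

    package main

    import "fmt"

    type spanAllocType uint8

    const (
        spanAllocHeap          spanAllocType = iota // heap objects
        spanAllocStack                              // goroutine stacks
        spanAllocWorkBuf                            // GC work buffers
        spanAllocPtrScalarBits                      // GC prog bitmaps
    )

    // manual reports whether spans of this type are manually managed, i.e.
    // not part of the garbage-collected heap, and so accounted separately.
    func (t spanAllocType) manual() bool {
        return t != spanAllocHeap
    }

    func main() {
        fmt.Println(spanAllocStack.manual(), spanAllocHeap.manual()) // true false
    }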
Change-Id: Ic0b0898959ba2a776f67122f0e36c9d7d60e3085
Reviewed-on: https://go-review.googlesource.com/c/go/+/246970
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Fri, 24 Jul 2020 19:58:31 +0000 (19:58 +0000)]
runtime: rename mcache fields to match Go style
This change renames a bunch of malloc statistics stored in the mcache
that are all named with the "local_" prefix. It also renames largeAlloc
to allocLarge to prevent a naming conflict, and renames next_sample because it
would otherwise be the last mcache field with the old C naming style.
Change-Id: I29695cb83b397a435ede7e9ad5c3c9be72767ea3
Reviewed-on: https://go-review.googlesource.com/c/go/+/246969
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 23 Jul 2020 22:36:58 +0000 (22:36 +0000)]
runtime: flush local_scan directly and more often
Now that local_scan is the last mcache-based statistic that is flushed
by purgecachedstats, and heap_scan and gcController.revise may be
interacted with concurrently, we don't need to flush heap_scan at
arbitrary locations where the heap is locked, and we don't need
purgecachedstats and cachestats anymore. Instead, we can flush
local_scan at the same time we update heap_live in refill, so the two
updates may share the same revise call.
Clean up unused functions, remove code that would cause the heap to get
locked in the allocSpan when it didn't need to (other than to flush
local_scan), and flush local_scan explicitly in a few important places.
Notably we need to flush local_scan whenever we flush the other stats,
but it doesn't need to be donated anywhere, so have releaseAll do the
flushing. Also, we need to flush local_scan before we set heap_scan at
the end of a GC, which was previously handled by cachestats. Just do so
explicitly -- it's not much code and it becomes a lot more clear why we
need to do so.
Change-Id: I35ac081784df7744d515479896a41d530653692d
Reviewed-on: https://go-review.googlesource.com/c/go/+/246968
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Trust: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 23 Jul 2020 22:16:46 +0000 (22:16 +0000)]
runtime: don't flush local_tinyallocs
This change makes local_tinyallocs work like the rest of the malloc
stats and doesn't flush local_tinyallocs, instead making that the
source-of-truth.
Change-Id: I3e6cb5f1b3d086e432ce7d456895511a48e3617a
Reviewed-on: https://go-review.googlesource.com/c/go/+/246967
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 23 Jul 2020 22:07:44 +0000 (22:07 +0000)]
runtime: remove mcentral.nmalloc and add mcache.local_nsmallalloc
This change removes mcentral.nmalloc and adds mcache.local_nsmallalloc
which fulfills the same role but may be accessed non-atomically. It also
moves responsibility for updating heap_live and local_nsmallalloc into
mcache functions.
As a result of this change, mcache is now the sole source-of-truth for
malloc stats. It is also solely responsible for updating heap_live and
performing the various operations required as a result of updating
heap_live. The overall improvement here is in code organization:
previously malloc stats were fairly scattered, and now they have one
single home, and nearly all the required manipulations exist in a single
file.
Change-Id: I7e93fa297c1debf17e3f2a0d68aeed28a9c6af00
Reviewed-on: https://go-review.googlesource.com/c/go/+/246966
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 23 Jul 2020 21:10:29 +0000 (21:10 +0000)]
runtime: make nlargealloc and largealloc mcache fields
This change makes nlargealloc and largealloc into mcache fields just
like nlargefree and largefree. These local fields become the new
source-of-truth. This change also moves the accounting for these fields
out of allocSpan (which is an inappropriate place for it -- this
accounting generally happens much closer to the point of allocation) and
into largeAlloc. This move is partially possible now that we can call
gcController.revise at that point.
Furthermore, this change moves largeAlloc into mcache.go and makes it a
method of mcache. While there's a little bit of a mismatch here because
largeAlloc barely interacts with the mcache, it helps solidify the
mcache as the first allocation layer and provides a clear place to
aggregate and manage statistics.
Change-Id: I37b5e648710733bb4c04430b71e96700e438587a
Reviewed-on: https://go-review.googlesource.com/c/go/+/246965
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 23 Jul 2020 21:02:05 +0000 (21:02 +0000)]
runtime: make distributed/local malloc stats the source-of-truth
This change makes it so that various local malloc stats (excluding
heap_scan and local_tinyallocs) are no longer written first to mheap
fields but are instead accessed directly from each mcache.
This change is part of a move toward having stats be distributed, and
cleaning up some old code related to the stats.
Note that because there's no central source-of-truth, when an mcache
dies, it must donate its stats to another mcache. It's always safe to
donate to the mcache for the 0th P, so do that.
Change-Id: I2556093dbc27357cb9621c9b97671f3c00aa1173
Reviewed-on: https://go-review.googlesource.com/c/go/+/246964
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 23 Jul 2020 20:48:06 +0000 (20:48 +0000)]
runtime: access the assist ratio atomically
This change makes it so that the GC assist ratio (the pair of
gcControllerState fields assistBytesPerWork and assistWorkPerByte) is
updated atomically. Note that the pair of fields are not updated
together atomically, but that's OK. The code here was already racy for
some time and in practice the assist ratio moves very slowly.
The purpose of this change is so that we can document
gcController.revise to be safe for concurrent use, which will be useful
in further changes.
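Publishing a float64 ratio atomically is typically done by storing its bit
pattern in a uint64, roughly as in this sketch (illustrative names, not the
runtime's code):

    package main

    import (
        "fmt"
        "math"
        "sync/atomic"
    )

    // atomicFloat64 holds a float64 as its IEEE 754 bit pattern so it can be
    // loaded and stored with uint64 atomics.
    type atomicFloat64 struct{ bits uint64 }

    func (f *atomicFloat64) store(v float64) { atomic.StoreUint64(&f.bits, math.Float64bits(v)) }
    func (f *atomicFloat64) load() float64   { return math.Float64frombits(atomic.LoadUint64(&f.bits)) }

    func main() {
        var assistWorkPerByte atomicFloat64
        assistWorkPerByte.store(0.25)
        fmt.Println(assistWorkPerByte.load())
    }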
Change-Id: Ie25d630207c88e4f85f2b8953f6a0051ebf1b4ea
Reviewed-on: https://go-review.googlesource.com/c/go/+/246963
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 23 Jul 2020 20:24:56 +0000 (20:24 +0000)]
runtime: make next_gc atomically accessed
next_gc is mostly updated only during a STW, but may occasionally be
updated by calls to e.g. debug.SetGCPercent. In this case the update is
supposed to be protected by the heap lock, but in reality it's accessed
by gcController.revise which may be called without the heap lock held
(despite its documentation, which will be updated in a later change).
Change the synchronization policy on next_gc so that it's atomically
accessed when the world is not stopped to aid in making revise safe for
concurrent use.
Change-Id: I79657a72f91563f3241aaeda66e8a7757d399529
Reviewed-on: https://go-review.googlesource.com/c/go/+/246962
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 23 Jul 2020 20:17:40 +0000 (20:17 +0000)]
runtime: load gcControllerState.scanWork atomically in revise
gcControllerState.scanWork's docs state that it must be accessed
atomically during a GC cycle, but gcControllerState.revise does not do
this (even when called with the heap lock held).
This change makes it so that gcControllerState.revise accesses scanWork
atomically and explicitly.
Note that we don't update gcControllerState.revise's erroneous doc
comment here because this change isn't about revise's guarantees, just
about scanWork. The comment is updated in a later change.
Change-Id: Iafc3ad214e517190bfd8a219896d23da19f7659d
Reviewed-on: https://go-review.googlesource.com/c/go/+/246961
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
Michael Anthony Knyszek [Thu, 23 Jul 2020 20:13:49 +0000 (20:13 +0000)]
runtime: define and enforce synchronization on heap_scan
Currently heap_scan is mostly protected by the heap lock, but
gcControllerState.revise sometimes accesses it without a lock. In an
effort to make gcControllerState.revise callable from more contexts (and
have its synchronization guarantees actually respected), make heap_scan
atomically read from and written to, unless the world is stopped.
Note that we don't update gcControllerState.revise's erroneous doc
comment here because this change isn't about revise's guarantees, just
about heap_scan. The comment is updated in a later change.
Change-Id: Iddbbeb954767c704c2bd1d221f36e6c4fc9948a6
Reviewed-on: https://go-review.googlesource.com/c/go/+/246960
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Emmanuel Odeke <emmanuel@orijtech.com> Reviewed-by: Michael Pratt <mpratt@google.com>
Austin Clements [Sat, 17 Oct 2020 22:42:03 +0000 (18:42 -0400)]
runtime: fix sub-uintptr-sized Windows callback arguments
The Windows callback support accepts Go functions with arguments that
are uintptr-sized or smaller. However, it doesn't implement smaller
arguments correctly. It assumes the Windows arguments layout is
equivalent to the Go argument layout. This is often true, but because
Windows C ABIs pad arguments to word size, while Go packs arguments,
the layout is different if there are multiple sub-word-size arguments
in a row. For example, a function with two uint16 arguments will have
a two-word C argument frame, but only a 4 byte Go argument frame.
There are also subtleties surrounding floating-point register
arguments that it doesn't handle correctly.
To fix this, when constructing a callback, we examine the Go
function's signature to construct a mapping between the C argument
frame and the Go argument frame. When the callback is invoked, we use
this mapping to build the Go argument frame and copy the result back.
This adds several test cases to TestStdcallAndCDeclCallbacks that
exercise more complex function signatures. These all fail with the
current code, but work with this CL.
In addition to fixing these callback types, this is also a step toward
the Go register ABI (#40724), which is going to make the ABI
translation more complex.
Change-Id: I19fb1681b659d9fd528ffd5e88912bebb95da052
Reviewed-on: https://go-review.googlesource.com/c/go/+/263271
Trust: Austin Clements <austin@google.com>
Trust: Alex Brainman <alex.brainman@gmail.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Knyszek <mknyszek@google.com> Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Austin Clements [Sat, 17 Oct 2020 02:22:20 +0000 (22:22 -0400)]
runtime: tidy Windows callback test
This simplifies the systematic test of Windows callbacks with
different signatures and prepares it for expanded coverage of function
signatures.
It now returns a result from the Go function and threads it back
through C. This simplifies things, but also previously the code could
have succeeded by simply not calling the callbacks at all (though
other tests would have caught that).
It bundles together the C function description and the Go function
it's intended to call. Now the test source generation and the test
running both loop over a single slice of test functions.
Since the C function and Go function are now bundled, it generates the
C function by reflectively inspecting the signature of the Go
function. For the moment, we keep the same test suite, which is
entirely functions with "uintptr" arguments, but we'll expand this
shortly.
It now uses sub-tests. This way tests automatically get useful
diagnostic labels in failures and the tests don't have to catch panics
on their own.
It eliminates the DLL function argument. I honestly couldn't figure
out what the point of this was, and it added what appeared to be an
unnecessary loop level to the tests.
Change-Id: I120dfd4785057cc2c392bd2c821302f276bd128e
Reviewed-on: https://go-review.googlesource.com/c/go/+/263270
Trust: Austin Clements <austin@google.com>
Trust: Alex Brainman <alex.brainman@gmail.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Austin Clements [Thu, 8 Oct 2020 02:53:52 +0000 (22:53 -0400)]
runtime: tidy cgocallback
On amd64 and 386, we have a very roundabout way of remembering that we
need to dropm on return that currently involves saving a zero to
needm's argument slot and later bringing it back. Just store the zero.
This also makes amd64 and 386 more consistent with cgocallback on all
other platforms: rather than saving the old M to the G stack, they now
save it to a named slot on the G0 stack.
The needm function no longer needs a dummy argument to get the SP, so
we drop that.
Austin Clements [Thu, 1 Oct 2020 21:22:38 +0000 (17:22 -0400)]
runtime,cmd/cgo: simplify C -> Go call path
This redesigns the way calls work from C to exported Go functions. It
removes several steps from the call path, makes cmd/cgo no longer
sensitive to the Go calling convention, and eliminates the use of
reflectcall from cgo.
In order to avoid generating a large amount of FFI glue between the C
and Go ABIs, the cgo tool has long depended on generating a C function
that marshals the arguments into a struct, and then the actual ABI
switch happens in functions with fixed signatures that simply take a
pointer to this struct. In a way, this CL simply pushes this idea
further.
Currently, the cgo tool generates this argument struct in the exact
layout of the Go stack frame and depends on reflectcall to unpack it
into the appropriate Go call (even though it's actually
reflectcall'ing a function generated by cgo).
In this CL, we decouple this struct from the Go stack layout. Instead,
cgo generates a Go function that takes the struct, unpacks it, and
calls the exported function. Since this generated function has a
generic signature (like the rest of the call path), we don't need
reflectcall and can instead depend on the Go compiler itself to
implement the call to the exported Go function.
One complication is that syscall.NewCallback on Windows, which
converts a Go function into a C function pointer, depends on
cgocallback's current dynamic calling approach since the signatures of
the callbacks aren't known statically. For this specific case, we
continue to depend on reflectcall. Really, the current approach makes
some overly simplistic assumptions about translating the C ABI to the
Go ABI. Now we're at least in a much better position to do a proper
ABI translation.
For comparison, the current cgo call path looks like:
GoF (generated C function) ->
crosscall2 (in cgo/asm_*.s) ->
_cgoexp_GoF (generated Go function) ->
cgocallback (in asm_*.s) ->
cgocallback_gofunc (in asm_*.s) ->
cgocallbackg (in cgocall.go) ->
cgocallbackg1 (in cgocall.go) ->
reflectcall (in asm_*.s) ->
_cgoexpwrap_GoF (generated Go function) ->
p.GoF
Now the call path looks like:
GoF (generated C function) ->
crosscall2 (in cgo/asm_*.s) ->
cgocallback (in asm_*.s) ->
cgocallbackg (in cgocall.go) ->
cgocallbackg1 (in cgocall.go) ->
_cgoexp_GoF (generated Go function) ->
p.GoF
Notably:
1. We combine _cgoexp_GoF and _cgoexpwrap_GoF and move the combined
operation to the end of the sequence. This combined function also
handles reflectcall's previous role.
2. We combined cgocallback and cgocallback_gofunc since the only
purpose of having both was to convert a raw PC into a Go function
value. We instead construct the Go function value in cgocallbackg1.
3. cgocallbackg1 no longer reaches backwards through the stack to get
the arguments to cgocallback_gofunc. Instead, we just pass the
arguments down.
4. Currently, we need an explicit msanwrite to mark the results struct
as written because reflectcall doesn't do this. Now, the results are
written by regular Go assignments, so the Go compiler generates the
necessary MSAN annotations. This also means we no longer need to track
the size of the arguments frame.
Updates #40724, since now we don't need to teach cgo about the
register ABI or change how it uses reflectcall.
Cherry Zhang [Sat, 24 Oct 2020 17:05:31 +0000 (13:05 -0400)]
cmd/link: preserve alignment for stackmap symbols
Stackmap symbols are content-addressable, so one may be dedup'd
with another symbol with the same content. We want stackmap symbols
4-byte aligned. But if it dedup's with another symbol with larger
alignment, preserve that alignment.
Fixes #42071.
Change-Id: I1616dd2b0c175b2aac8f68782a5c7a62053c0b57
Reviewed-on: https://go-review.googlesource.com/c/go/+/264897
Trust: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Joel Sing <joel@sing.id.au> Reviewed-by: Than McIntosh <thanm@google.com>
Natanael Copa [Fri, 16 Oct 2020 16:23:54 +0000 (16:23 +0000)]
net: prefer /etc/hosts over DNS when no /etc/nsswitch.conf is present
Do not mimic glibc behavior if /etc/nsswitch.conf is missing. The file will
likely be missing on musl libc systems, while glibc systems will likely
always have it, so mimicking glibc resulted in localhost lookups being done
over DNS rather than from /etc/hosts.
Do what makes most sense rather than making any assumption about the
libc.
Fixes #35305
Change-Id: I20bd7e24131bba8eaa39a20c8950fe552364784d
GitHub-Last-Rev: 119409839d37c8c7268f5f6db19c1789d9d96074
GitHub-Pull-Request: golang/go#39685
Reviewed-on: https://go-review.googlesource.com/c/go/+/238629
Run-TryBot: Dan Peterson <dpiddy@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Dan Peterson <dpiddy@gmail.com> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Trust: Emmanuel Odeke <emmanuel@orijtech.com>
Kevin Parsons [Tue, 20 Oct 2020 15:15:23 +0000 (15:15 +0000)]
path/filepath: allow EvalSymlinks to work on UNC share roots on Windows
Fixes #42079
Previously, EvalSymlinks returned an error when called with the root of
a UNC share (e.g. \\server\share). This was due to Windows's
FindFirstFile function not supporting a share root path.
To resolve this, now return early from toNorm in the case where the path
after the volume name is empty. Skipping the later path component
resolution shouldn't have any negative impact in this case, as if the
path is empty, there aren't any path components to resolve anyways.
The test case uses the localhost admin share (c$), as it should be
present in most situations. This allows testing without setting up an
external file share. However, this fix applies to all UNC share root
paths.
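Illustrative usage on Windows (the localhost admin share is just an example
path):

    //go:build windows

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // Resolving a UNC share root now succeeds instead of returning an error.
        p, err := filepath.EvalSymlinks(`\\localhost\c$`)
        fmt.Println(p, err)
    }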
Change-Id: I05035bd86be93662d7bea34fab4b75fc8e918206
GitHub-Last-Rev: bd3db2cda65aae1cdf8d94b03bc7197dff68dc44
GitHub-Pull-Request: golang/go#42096
Reviewed-on: https://go-review.googlesource.com/c/go/+/263917
Trust: Alex Brainman <alex.brainman@gmail.com>
Trust: Giovanni Bajo <rasky@develer.com>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com> Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Tobias Klauser [Sat, 24 Oct 2020 18:21:48 +0000 (20:21 +0200)]
cmd/dist: document why test fails on incomplete ports
It might not be obvious from reading the code why we consider the test
as failed on incomplete ports even though it passed. Add a comment
documenting this behavior, as suggested by Dmitri in CL 155839.
Tobias Klauser [Sat, 24 Oct 2020 18:14:33 +0000 (20:14 +0200)]
lib/time, time/tzdata: update tz data to 2020d
See http://mm.icann.org/pipermail/tz-announce/2020-October/000060.html
and http://mm.icann.org/pipermail/tz-announce/2020-October/000062.html
for a description of the changes.
Updates #22487
Change-Id: I4b2717d2d642284889345a0125eb6c614575c0ea
Reviewed-on: https://go-review.googlesource.com/c/go/+/264681
Trust: Tobias Klauser <tobias.klauser@gmail.com>
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Constantin Konstantinidis [Thu, 24 Sep 2020 16:54:58 +0000 (18:54 +0200)]
cmd/compile: enforce strongly typed rules for ARM (read)
Add type casting to offset.
L246-L247
L1473-L1475
toolstash-check successful.
Change-Id: I816c7556609ca6dd67bff8007c2d006cab89ee2b
Reviewed-on: https://go-review.googlesource.com/c/go/+/257639
Run-TryBot: Alberto Donizetti <alb.donizetti@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Keith Randall <khr@golang.org>
Trust: Alberto Donizetti <alb.donizetti@gmail.com>
Joel Sing [Fri, 23 Oct 2020 16:53:53 +0000 (03:53 +1100)]
cmd/internal/obj/riscv: support additional register to register moves
Add support for signed and unsigned register to register moves of various
sizes. This makes it easier to handle zero and sign extension and will allow
for further changes that improve the compiler optimisations for riscv64.
While here, change the existing register to register moves from obj.Prog
rewriting to instruction generation.
Change-Id: Id21911019b76922367a134da13c3449a84a1fb08
Reviewed-on: https://go-review.googlesource.com/c/go/+/264657
Trust: Joel Sing <joel@sing.id.au> Reviewed-by: Cherry Zhang <cherryyz@google.com>
Alessandro Arzilli [Thu, 4 Jun 2020 14:59:06 +0000 (16:59 +0200)]
debug/dwarf: add support for DWARFv5 to (*Data).Ranges
Updates the (*Data).Ranges method to work with DWARFv5 which uses the
new debug_rnglists section instead of debug_ranges.
This does not include supporting DW_FORM_rnglistx.
General support for DWARFv5 was added by CL 175138.
Change-Id: I01f919a865616a3ff12f5bf649c2c9abf89fcf52
Reviewed-on: https://go-review.googlesource.com/c/go/+/236657
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Trust: Emmanuel Odeke <emm.odeke@gmail.com>