The current implementation uses EPIPE as the error for Seek on pipes.
According to http://pubs.opengroup.org/onlinepubs/009695399/functions/lseek.html,
it should use ESPIPE instead.
Fixes #20066
Change-Id: I24c3b95be946bc19a287d6b10f447b034a9a1283
Reviewed-on: https://go-review.googlesource.com/41311
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Change-Id: I9d399db8ac26ad44adeace3bf1e5b11cbfe3e0d3
Reviewed-on: https://go-review.googlesource.com/41313
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
cmd/go/internal/work: more TestRespectSetgidDir fixes
Hopefully the last refactoring of TestRespectGroupSticky:
* Properly tested (+simplified) FreeBSD fix
* Tested on Darwin (10.12.4)
* Rename to TestRespectSetgidDir (I believe this is the accepted
terminology)
Fixes golang/go#19596.
Change-Id: I8d689ac3e245846cb3f1338ea13e35be512ccb9c
Reviewed-on: https://go-review.googlesource.com/41430
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
- Add new BranchStmt.Target field: it's the destination for break,
continue, or goto statements (see the sketch following this list).
- When parsing with CheckBranches enabled, set the BranchStmt.Target
field. We get the information practically for free from the branch
checker, so keep it for further use.
- Fix a couple of comments.
- This could use a test, but the new Target field is currently not
used, and writing a test is tedious without a general tree visitor.
Do it later. For now, visually verified output from syntax dump.
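For illustration, the new field's shape is roughly as follows (a
sketch as it would appear inside cmd/compile/internal/syntax; details
beyond the Target field described above are assumed):

    type BranchStmt struct {
        Tok   token // Break, Continue, Fallthrough, or Goto
        Label *Name // nil if the branch has no label
        // Target is the destination for break, continue, or goto
        // statements; set by the branch checker when the CheckBranches
        // mode is enabled, nil otherwise.
        Target Stmt
    }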
Change-Id: Id691d89efab514ad885e19ac9759506106579520
Reviewed-on: https://go-review.googlesource.com/40988
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
ARM's udiv function is nosplit and must not be preempted (its
arguments are passed in registers). It is in some sense like DUFFCOPY,
which we also don't mark as a safepoint.
cmd/compile: provide a way to auto-discover -d debug keys
Currently one needs to refer to the sources to get a list of the
accepted debug keys. We can copy what 'ssa/help' does and introspect
the list of debug keys to print more detailed help:
$ go tool compile -d help
usage: -d arg[,arg]* and arg is <key>[=<value>]
<key> is one of:
append      print information about append compilation
closure     print information about closure compilation
disablenil  disable nil checks
dclstack    run internal dclstack check
gcprog      print dump of GC programs
nil         print information about nil checks
panic       do not hide any compiler panic
slice       print information about slice compilation
typeassert  print information about type assertion inlining
wb          print information about write barriers
export      print export data
pctab       print named pc-value table
ssa/help    print help about SSA debugging
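A table-driven sketch of how such introspection can work (illustrative
only; the names and fields below are assumed, not the compiler's
actual code):

    package main

    import "fmt"

    var debugAppend, debugNil int // example debug counters

    // debugtab drives both -d parsing and the "-d help" output.
    var debugtab = []struct {
        name string
        help string
        val  *int
    }{
        {"append", "print information about append compilation", &debugAppend},
        {"nil", "print information about nil checks", &debugNil},
    }

    func main() {
        fmt.Println("usage: -d arg[,arg]* and arg is <key>[=<value>]")
        fmt.Println("<key> is one of:")
        for _, t := range debugtab {
            fmt.Printf("\t%-10s %s\n", t.name, t.help)
        }
    }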
Keith Randall [Thu, 9 Jun 2016 05:02:08 +0000 (22:02 -0700)]
cmd/compile: experiment which clobbers all dead pointer fields
The experiment "clobberdead" clobbers all pointer fields that the
compiler thinks are dead, just before and after every safepoint.
Useful for debugging the generation of live pointer bitmaps.
Helped find the following issues:
Update #15936
Update #16026
Update #16095
Update #18860
Change-Id: Id1d12f86845e3d93bae903d968b1eac61fc461f9
Reviewed-on: https://go-review.googlesource.com/23924
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
runtime/debug: increase threshold on TestSetGCPercent
Currently TestSetGCPercent checks that NextGC is within 10 MB of the
expected value. For some reason it's much noisier on some of the
builders. To get these passing again, raise the threshold to 20 MB.
Robert Griesemer [Fri, 21 Apr 2017 18:11:15 +0000 (11:11 -0700)]
cmd/compile/internal/types: remove unused lineno arguments for Pushdcl/Markdcl
More steps towards simpler symbol handling:
- Pushdcl's incoming pos argument, saved in a newly pushed *Sym, was always
immediately overwritten by the Lastlineno value of the saved *Sym.
- Markdcl's incoming pos argument, saved in the stack mark *Sym, was not
restored when the stack mark was popped.
- Popdcl always maintained the most recent Lastlineno for a *Sym given
by package and name, making it unnecessary to save Lastlineno in the
first place. Removed Lastlineno from the set of fields that need saving,
and simplified Popdcl.
Change-Id: Ie93da1fbd780dcafc2703044e781c0c6298df569
Reviewed-on: https://go-review.googlesource.com/41390
Run-TryBot: Robert Griesemer <gri@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Currently SetGCPercent forces a GC in order to recompute GC pacing.
Since we can now recompute pacing on the fly using gcSetTriggerRatio,
change SetGCPercent (really runtime.setGCPercent) to go through
gcSetTriggerRatio and not trigger a GC.
This changes gcSetTriggerRatio so it can be called even during
concurrent mark or sweep. In this case, it will adjust the pacing of
the current phase, accounting for progress that has already been made.
To make this work for concurrent sweep, this introduces a "basis" for
the pagesSwept count, much like the basis we just introduced for
heap_live. This lets gcSetTriggerRatio shift the basis to the current
heap_live and pagesSwept and compute a slope from there to completion.
This avoids creating a discontinuity where, if the ratio has
increased, there has to be a flurry of sweep activity to catch up.
Instead, this creates a continuous, piece-wise linear function as
adjustments are made.
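A simplified sketch of the basis-and-slope idea (stand-in names; not
the runtime's actual code):

    // setSweepPacing shifts the sweep basis to the current heap_live
    // and pagesSwept values and computes a slope from there to
    // completion, so pacing adjusts continuously instead of jumping.
    func setSweepPacing(trigger, heapLive, pagesInUse, pagesSwept uint64) (basisLive, basisSwept uint64, pagesPerByte float64) {
        heapDistance := int64(trigger) - int64(heapLive)
        if heapDistance < 1 {
            heapDistance = 1 // at/past the trigger: sweep aggressively
        }
        sweepDistance := int64(pagesInUse) - int64(pagesSwept)
        if sweepDistance <= 0 {
            return heapLive, pagesSwept, 0 // sweeping already complete
        }
        // Piecewise-linear: from (heapLive, pagesSwept) toward
        // (trigger, pagesInUse).
        return heapLive, pagesSwept, float64(sweepDistance) / float64(heapDistance)
    }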
runtime: drive proportional sweep directly off heap_live
Currently, proportional sweep maintains its own count of how many
bytes have been allocated since the beginning of the sweep cycle so it
can compute how many pages need to be swept for a given allocation.
However, this requires a somewhat complex reimbursement scheme since
proportional sweep must be done before a span is allocated, but we
don't know how many bytes to charge until we've allocated a span. This
means that the allocated byte count used by proportional sweep can go
up and down, which has led to underflow bugs in the past (#18043) and
is going to interfere with adjusting sweep pacing on-the-fly (for #19076).
This approach also means we're maintaining a statistic that is very
closely related to heap_live, but has a different 0 value. This is
particularly confusing because the sweep ratio is computed based on
heap_live, so you have to understand that these two statistics are
very closely related.
Replace all of this and compute the sweep debt directly from the
current value of heap_live. To make this work, we simply save the
value of heap_live when the sweep ratio is computed to use as a
"basis" for later computing the sweep debt.
This eliminates the need for reimbursement as well as the code for
maintaining the sweeper's version of the live heap size.
For #19076.
Coincidentally fixes #18043, since this eliminates sweep reimbursement
entirely.
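The consumer side then becomes a direct computation; a sketch, using
the same stand-in names as the sketch above:

    // sweepDebtPages reports how many pages still must be swept before
    // allocating spanBytes, given the basis recorded when the sweep
    // ratio was last computed. A result <= 0 means no sweeping is owed.
    func sweepDebtPages(heapLive, basisLive, pagesSwept, basisSwept uint64, pagesPerByte float64, spanBytes uint64) int64 {
        allocatedSinceBasis := heapLive - basisLive + spanBytes
        target := int64(pagesPerByte * float64(allocatedSinceBasis))
        return target - int64(pagesSwept-basisSwept)
    }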
runtime: consolidate all trigger-derived computations
Currently, the computations that derive controls from the GC trigger
are spread across several parts of the mark termination code.
Consolidate computing the absolute trigger, the heap goal, and sweep
pacing into a single function called at the end of mark termination.
Unlike the code being consolidated, this has to be more careful about
negative gcpercent. Many of the consolidated code paths simply didn't
execute if GC was off.
This is a step toward being able to change the GC trigger ratio in the
middle of concurrent sweeping and marking. For this commit, we try to
stick close to the original structure of the code that's being
consolidated, so it doesn't yet support mid-cycle adjustments.
Austin Clements [Fri, 31 Mar 2017 21:09:41 +0000 (17:09 -0400)]
runtime: rationalize triggerRatio
gcController.triggerRatio is the only field in gcController that
persists across cycles. As global mutable state, the places where it
is written and read are spread out, making it difficult to see that
updates and downstream calculations are done correctly.
Improve this situation by doing two things:
1) Move triggerRatio to memstats so it lives with the other
trigger-related fields and makes gcController entirely transient
state.
2) Commit the new trigger ratio during mark termination when we
compute other next-cycle controls, including the absolute trigger.
This forces us to explicitly thread the new trigger ratio from
gcController.endCycle to mark termination, so we're not just pulling
it out of global state.
runtime: consistently use atomic loads for heap_live
heap_live is updated atomically without locking, so we should also use
atomic loads to read it. Fix the reads of heap_live that happen
outside of STW to be atomic.
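The pattern, sketched with sync/atomic as a stand-in for the runtime's
internal atomic package:

    import "sync/atomic"

    var heapLive uint64 // stand-in for memstats.heap_live

    // Writers already update the field atomically...
    func noteAlloc(size uint64) { atomic.AddUint64(&heapLive, size) }

    // ...so reads outside STW must use an atomic load to pair with
    // them; a plain read would be a data race and could tear on 32-bit
    // systems.
    func readHeapLive() uint64 { return atomic.LoadUint64(&heapLive) }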
Change-Id: I0f8c695146b39cff72ca2374f861f3e9f72b0f77
Reviewed-on: https://go-review.googlesource.com/41314
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The inliner's ishairy passes a budget and a reason down through the
walk. Lift these into a visitor object and turn ishairy and its
helpers into methods.
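The resulting shape is roughly this (a sketch; only the budget and
reason come from the description above, the rest is assumed):

    type node struct { // minimal stand-in for the compiler's *Node
        left, right *node
    }

    // hairyVisitor carries the inlining budget and refusal reason
    // through the walk instead of threading them as parameters.
    type hairyVisitor struct {
        budget int32
        reason string
    }

    // tooHairy reports whether the tree exceeds the budget.
    func (v *hairyVisitor) tooHairy(n *node) bool {
        if n == nil {
            return false
        }
        v.budget--
        if v.budget < 0 {
            v.reason = "function too complex"
            return true
        }
        return v.tooHairy(n.left) || v.tooHairy(n.right)
    }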
David Lazar [Thu, 20 Apr 2017 20:19:55 +0000 (16:19 -0400)]
testing: use function names to identify helpers
Previously, helpers were identified by entry PC, but this breaks if the
helper is inlined (as in notHelperCallingHelper). Instead, identify
helpers by function name (with package path). Now TestTBHelper and
TestTBHelperParallel pass with -l=4.
To keep the code unified, this change makes it so that the runner
is also identified by function name instead of entry PC.
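A sketch of the identification, using the public runtime API (the
helper bookkeeping names are assumed):

    import "runtime"

    // callerName returns the package-qualified function name of the
    // caller's caller. It is stable under inlining because
    // runtime.CallersFrames reconstructs logical (inlined) frames.
    func callerName(skip int) string {
        var pc [1]uintptr
        n := runtime.Callers(skip+2, pc[:]) // +2 skips Callers and callerName
        if n == 0 {
            return ""
        }
        frames := runtime.CallersFrames(pc[:n])
        frame, _ := frames.Next()
        return frame.Function // e.g. "mypkg.TestFoo"
    }

Marking a helper then records callerName(1) in a set, and stack
trimming skips frames whose Function is in that set.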
David Lazar [Wed, 19 Apr 2017 16:57:52 +0000 (12:57 -0400)]
cmd/compile: don't inline functions that call runtime.getcaller{pc,sp}
runtime.getcaller{pc,sp} expect their argument to be a pointer to the
caller's first function argument. This assumption breaks when the caller
is inlined. For example, with -l=4, calls to runtime.entersyscall (which
calls getcallerpc) are inlined and that breaks multiple cgo tests.
This change modifies the compiler to refuse to inline functions that
call runtime.getcaller{pc,sp}. Alternatively, we could mark these
functions //go:noinline but that limits optimization opportunities if
the calls to getcaller{pc,sp} are eliminated as dead code.
Previously TestCgoPprofPIE, TestCgoPprof, and TestCgoCallbackGC failed
with -l=4. Now all of the runtime tests pass with -l=4.
Change-Id: I258bca9025e20fc451e673a18f862b5da1e07ae7
Reviewed-on: https://go-review.googlesource.com/40998
Run-TryBot: David Lazar <lazard@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Austin Clements <austin@google.com>
On 32-bit architectures (or if we fail to map a 64-bit-style arena),
we try to map the heap arena just above the end of the process image.
While we can accept any address, using lower addresses is preferable
because lower addresses cause us to map less of the heap bitmap.
However, if a program is linked against C code that has global
constructors, those constructors may call brk/sbrk to allocate memory
(e.g., many C malloc implementations do this for small allocations).
The brk also starts just above the process image, so this may adjust
the brk past the beginning of where we want to put the heap arena. In
this case, the kernel will pick a different address for the arena and
it will usually be very high (at least, as these things go in a 32-bit
address space).
Fix this by consulting the current value of the brk and using this in
addition to the end of the process image to compute the initial arena
placement.
This is implemented only on Linux currently, since we have no evidence
that it's an issue on any other OSes.
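On Linux the current break can be read without moving it, because brk
with an invalid (too-low) argument returns the unchanged break. A
hedged sketch:

    import "syscall"

    // currentBrk returns the current program break on Linux.
    // brk(0) cannot succeed, so the kernel returns the current break.
    func currentBrk() uintptr {
        brk, _, _ := syscall.Syscall(syscall.SYS_BRK, 0, 0, 0)
        return brk
    }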
Fixes #19831.
Change-Id: Id64b45d08d8c91e4f50d92d0339146250b04f2f8
Reviewed-on: https://go-review.googlesource.com/39810
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
cmd/go/internal/work: fix TestRespectGroupSticky on FreeBSD
FreeBSD doesn't allow non-root users to enable the SetGID bit on
files or directories in /tmp, however it does allow this in
subdirectories, so create the test directory one level deeper.
Followup to golang/go#19596.
Change-Id: I30e71c6d6a156badc863e8068df10ef6ed817e26
Reviewed-on: https://go-review.googlesource.com/41216
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Keith Randall [Wed, 19 Apr 2017 18:19:53 +0000 (11:19 -0700)]
cmd/compile: zero ambiguously live variables at VARKILLs
At VARKILLs, zero a variable if it is ambiguously live.
After the VARKILL anything this variable references
might be collected. If it were to become live again later,
the GC would see references to already-collected objects.
We don't know a variable is ambiguously live until very
late in compilation (after lowering, register allocation, ...),
so it is hard to generate the code in an arch-independent way.
We also have to be careful not to clobber any registers.
Fortunately, this almost never happens so performance is ~irrelevant.
There are only 2 instances where this triggers in the stdlib.
Fixes #20029
Change-Id: Ia9585a91d7b823fad4a9d141d954464cc7af31f4
Reviewed-on: https://go-review.googlesource.com/41076
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Make the poolLocal size a multiple of 128 bytes, so it aligns to a CPU
cache line on the most common architectures.
This also has the following benefits:
- It may help the compiler replace the integer multiplication
inside indexLocal with a bit shift.
- It shrinks the poolLocal size from 176 bytes to 128 bytes on amd64,
so it now fits in two cache lines (or a single cache line on certain
Intel CPUs; see https://software.intel.com/en-us/articles/optimizing-application-performance-on-intel-coret-microarchitecture-using-hardware-implemented-prefetchers).
No measurable performance changes on linux/amd64 and linux/386.
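The padding idiom involved is this (a sketch consistent with the
description; the internal fields are assumed):

    import (
        "sync"
        "unsafe"
    )

    type poolLocalInternal struct {
        private interface{}   // usable only by the owning P
        shared  []interface{} // protected by the embedded mutex
        sync.Mutex
    }

    type poolLocal struct {
        poolLocalInternal
        // pad rounds the size up to a multiple of 128 bytes so that
        // poolLocals of different Ps never share a cache line and
        // indexLocal can use a shift instead of a multiply.
        pad [128 - unsafe.Sizeof(poolLocalInternal{})%128]byte
    }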
Robert Griesemer [Thu, 20 Apr 2017 19:50:41 +0000 (12:50 -0700)]
go/gcimporter: fix importing of anonymous interfaces
Imported interfaces must be completed, whether they are named or not.
The original code was collecting all types (including anonymous ones)
in the importer's typList. That list was used in the end to complete
interface types. When we introduced tracking of named types only, we
lost anonymous interfaces. Use an independent list of interface types
so the completion code is independent of which types are tracked.
Added a test and factored some of the existing tests.
Fixes #20046.
Change-Id: Icd1329032aec33f96890380dd5042de3bef8cdc7
Reviewed-on: https://go-review.googlesource.com/41198
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
David Lazar [Wed, 19 Apr 2017 16:04:18 +0000 (12:04 -0400)]
runtime: make test independent of inlining
TestBreakpoint expects to see "runtime.Breakpoint()" in the stack trace.
If runtime.Breakpoint() is inlined, then the stack trace prints
"runtime.Breakpoint(...)" since the runtime does not have information
about arguments (or lack thereof) to inlined functions. This change
makes the test independent of inlining by looking for the string
"runtime.Breakpoint(". Now TestBreakpoint passes with -l=4.
TestBlockProfile matches samples against a regexp that accepts "," in
profile PCs. I suspect this was just a syntax mistake. Remove "," from
the character class.
TestBlockProfile currently requires exactly five PCs in each sample.
With more aggressive inlining there may be fewer, so change this test
to use the same pattern as TestMutexProfile, which accepts one or more
PCs. With this change, this test passes when compiled with -l=4.
CL 40876 changed ExampleFrames so that the output
was stable with and without mid-stack inlining.
However, that change lost some of the
pedagogical and copy/paste value of the example.
It was unclear why both more and i were being tracked,
and whether the 5 in i < 5 is related to len(pc),
and if so, why and how.
This CL rewrites the example with lots more comments,
and such that the core structure more closely matches
normal usage, and such that it is obvious
which lines of code should be deleted when copying.
As a bonus, it also now illustrates Frame.File.
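The loop the rewritten example centers on looks like this (a sketch
close to the documented runtime.Frames pattern):

    import (
        "fmt"
        "runtime"
    )

    func printStack() {
        // Gather up to 10 PCs, skipping runtime.Callers and printStack.
        pc := make([]uintptr, 10)
        n := runtime.Callers(2, pc)
        if n == 0 {
            return // avoid calling Frames on an empty slice
        }
        frames := runtime.CallersFrames(pc[:n]) // pass only valid PCs
        for {
            frame, more := frames.Next()
            fmt.Printf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
            if !more {
                break
            }
        }
    }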
The period recorded in CPU profiles is in nanoseconds, but was being
computed incorrectly as hz * 1000 instead of 1e9 / hz. As a result,
many absolute times displayed by pprof were incorrect.
Samuel Tan [Thu, 13 Apr 2017 17:57:04 +0000 (10:57 -0700)]
html/template: ignore case when handling type attribute in script element
Convert the parsed attribute name to lowercase before checking its
value in the HTML parser state machine. This ensures that the type
attribute in the script element is handled in a case-insensitive
manner, just like all other attribute names.
Instead of populating the aux symbol
of CALLudiv during rewrite rules,
populate it during genssa.
This simplifies the rewrite rules.
It also removes all remaining calls
to ctxt.Lookup from any rewrite rules.
This is a first step towards removing
ctxt from ssa.Cache entirely,
and also a first step towards converting
the obj.LSym.Version field into a boolean.
It should also speed up compilation.
Also, move func udiv into package runtime.
That's where it is anyway,
and it lets udiv look and act like the rest of
the runtime support functions.
cmd/internal/obj: split Link.hash into version 0 and 1
Though LSym.Version is an int, it can only have the value 0 or 1.
Using that, split Link.hash into two maps, one for version 0
(which is far more common) and one for version 1.
This lets us use just the name for lookups,
which is both faster and more compact.
This matters because Link.hash map lookups are frequent,
and will be contended once the backend is concurrent.
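The split is roughly (a sketch; field and method names assumed):

    type LSym struct {
        Name string
        // ...
    }

    type Link struct {
        hash  map[string]*LSym // version 0 symbols: the common case
        vhash map[string]*LSym // version 1 (static) symbols
    }

    // lookup returns the symbol with the given name and version,
    // creating it if necessary. The key is just the name.
    func (ctxt *Link) lookup(name string, v int) *LSym {
        m := ctxt.hash
        if v == 1 {
            m = ctxt.vhash
        }
        s := m[name]
        if s == nil {
            s = &LSym{Name: name}
            m[name] = s
        }
        return s
    }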
Dave Cheney [Thu, 20 Apr 2017 00:45:01 +0000 (10:45 +1000)]
cmd/link/internal/ld: remove C style gotos from ldelf
ld.ldelf contained a mixture of normal and C-style (goto bad) error
handling. The use of goto requires many variables to be declared well
before their use, which inhibited further refactoring of this method.
This CL removes the gotos in this function. Future CLs will address
the remainder of the C-style function-scoped declarations in this
function.
Change-Id: Ib9def495209a2f8deb11dcf30ee954bca95390c6
Reviewed-on: https://go-review.googlesource.com/41172
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
- PkgMap was only needed to test import/export in a "cleanroom"
environment, with debugFormat set. Provided a helper function
instead.
- PkgList was only used to identify directly imported packages.
Instead, compute that list explicitly from the package map.
It happens only once, the list is small, and it's more robust
than keeping two data structures in sync.
Change-Id: I82dce3c0b5cb816faae58708e877799359c20fcb
Reviewed-on: https://go-review.googlesource.com/41078
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
runtime: record swept and reclaimed bytes in sweep trace
This extends the GCSweepDone event with counts of swept and reclaimed
bytes. These are useful for understanding the duration and
effectiveness of sweep events.
Change-Id: I3c97a4f0f3aad3adbd188adb264859775f54e2df
Reviewed-on: https://go-review.googlesource.com/40811
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
runtime: make sweep trace events encompass entire sweep loop
Currently, each individual span sweep emits an event to the trace. But
sweeps are generally done in loops until some condition is satisfied,
so this tracing is lower-level than anyone really wants and hides the
fact that no other work is being accomplished between adjacent sweep
events. This is also high overhead: enabling tracing significantly
impacts sweep latency.
Replace this by instead tracing around the sweep loops used for
allocation. This is slightly tricky because sweep loops don't
generally know if any sweeping will happen in them. Hence, we make the
tracing lazy by recording in the P that we would like to start tracing
the sweep *if* one happens, and then only closing the sweep event if
we started it.
This does mean we don't get tracing on every sweep path, which are
legion. However, we get much more informative tracing on the paths
that block allocation, which are the paths that matter.
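A sketch of the lazy arming pattern (names illustrative, not the
runtime's):

    // sweepTrace is per-P state for lazily tracing a sweep loop.
    type sweepTrace struct {
        armed   bool // inside a sweep loop: trace if sweeping happens
        started bool // a sweep-start event was actually emitted
    }

    func (t *sweepTrace) beginLoop() { t.armed, t.started = true, false }

    // noteSpanSwept runs in the per-span sweep and emits the start
    // event only for the first span actually swept in this loop.
    func (t *sweepTrace) noteSpanSwept(emitStart func()) {
        if t.armed && !t.started {
            t.started = true
            emitStart()
        }
    }

    // endLoop closes the sweep event only if one was opened.
    func (t *sweepTrace) endLoop(emitDone func()) {
        if t.started {
            emitDone()
        }
        t.armed, t.started = false, false
    }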
Change-Id: I73e14fbb250acb0c9d92e3648bddaa5e7d7e271c
Reviewed-on: https://go-review.googlesource.com/40810
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Michael Munday [Wed, 19 Apr 2017 17:48:33 +0000 (13:48 -0400)]
runtime: avoid restricting GOARCH values in documentation
Changes the text to match GOOS, which appends 'and so on' at the
end to avoid restricting the set of possible values.
Change-Id: I54bcde71334202cf701662cdc2582c974ba8bf53
Reviewed-on: https://go-review.googlesource.com/41074
Reviewed-by: Ian Lance Taylor <iant@golang.org>
When allocating a non-small array of buckets for a map,
also preallocate some overflow buckets.
The estimate of the number of overflow buckets
is based on a simulation of putting mid=(low+high)/2 elements
into a map, where low is the minimum number of elements
needed to reach this value of b (according to overLoadFactor),
and high is the maximum number of elements possible
to put in this value of b (according to overLoadFactor).
This estimate is surprisingly reliable and accurate.
The number of overflow buckets needed is quadratic,
for a fixed value of b.
Using this mid estimate means that we will allocate slightly too many
overflow buckets when the actual number of elements is near low, and
significantly too few overflow buckets when the actual number of
elements is near high.
The mechanism introduced in this CL can be re-used for
other overflow bucket optimizations.
For example, given an initial size hint,
we could estimate quite precisely the number of overflow buckets.
This is #19931.
We could also change from "non-nil means end-of-list"
to "pointer-to-hmap.buckets means end-of-list",
and then create a linked list of reusable overflow buckets
when they are freed by map growth.
That is #19992.
We could also use a similar mechanism to do bulk allocation
of overflow buckets.
All these uses can co-exist with only the one additional pointer
in mapextra, given a little care.
Any change to how we allocate overflow buckets
will require some extra hmap storage,
but we don't want hmap to grow,
particularly as small maps usually don't need overflow buckets.
This CL converts the existing hmap overflow field,
which is usually used for pointer-free maps,
into a generic extra field.
This extra field can be used to hold data that is optional.
If it is valuable enough to have special
handling of overflow buckets, which are medium-sized,
it is valuable enough to pay an extra alloc and two extra words for.
Adding fields to extra would entail adding overhead to pointer-free maps;
any mapextra fields added would need to be weighed against that.
This CL is just rearrangement, though.
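The rearranged layout is approximately (a sketch matching the
description above; details assumed):

    type bmap struct{ /* bucket layout elided */ }

    type hmap struct {
        // count, flags, B, buckets, oldbuckets, ... elided

        // extra holds optional fields so hmap itself stays small;
        // it costs one extra alloc and two extra words when used.
        extra *mapextra
    }

    // mapextra holds fields that are not present on all maps.
    type mapextra struct {
        // For maps whose keys and values contain no pointers, overflow
        // buckets are kept alive here so the bucket array itself can
        // be marked as pointer-free.
        overflow [2]*[]*bmap

        // nextOverflow leaves room for the optimizations discussed
        // above, e.g. a preallocated overflow bucket or a free list.
        nextOverflow *bmap
    }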
cmd/internal: remove duplicate pathToPrefix function
goobj.importPathToPrefix is 3x faster than gc.pathToPrefix, so rename
it and move it to cmd/internal/objabi, which is already imported by
both goobj and gc.