cmd/compile: fix uninitialized memory during type switch assertE2I2
Fixes arm64 builder crash.
The bug is possible on all architectures; you just have to get lucky
and hit a preemption or a stack growth on entry to assertE2I2.
The test stacks the deck.
Change-Id: I8419da909b06249b1ad15830cbb64e386b6aa5f6
Reviewed-on: https://go-review.googlesource.com/12890
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
runtime/trace: record event sequence numbers explicitly
Nearly all the flaky failures we've seen in trace tests have been
due to the use of time stamps to determine relative event ordering.
This is tricky for many reasons, including:
- different cores might not have exactly synchronized clocks
- VMs are worse than real hardware
- non-x86 chips have different timer resolution than x86 chips
- on fast systems two events can end up with the same time stamp
Stop trying to make time reliable. It's clearly not going to be for Go 1.5.
Instead, record an explicit event sequence number for ordering.
Using our own counter solves all of the above problems.
The trace still contains time stamps, of course. The sequence number
is just used for ordering.
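As a sketch of the idea (this is not the runtime's actual event
encoding; the names here are invented), ordering by a shared atomic
counter instead of by clock reads looks like:

    package main

    import (
        "fmt"
        "sync/atomic"
        "time"
    )

    // seq is a global event sequence counter. Unlike a time stamp, it
    // yields a total order even across cores, VMs, and chips with
    // coarse timers.
    var seq uint64

    type event struct {
        seq  uint64 // used for ordering
        ts   int64  // time stamp, kept for display only
        what string
    }

    func record(what string) event {
        return event{
            seq:  atomic.AddUint64(&seq, 1),
            ts:   time.Now().UnixNano(),
            what: what,
        }
    }

    func main() {
        a := record("GoCreate")
        b := record("GoStart")
        // a.seq < b.seq always holds, even if a.ts == b.ts on a fast system.
        fmt.Println(a.seq < b.seq)
    }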
Should alleviate #10554 somewhat. Then tickDiv can be chosen to
be a useful time unit instead of having to be exact for ordering.
Separating ordering and time stamps lets the trace parser diagnose
systems where the time stamp order and actual order do not match
for one reason or another. This CL adds that check to the end of
trace.Parse, after all other sequence order-based checking.
If that error is found, we skip the test instead of failing it.
Putting the check in trace.Parse means that cmd/trace will pick
up the same check, refusing to display a trace where the time stamps
do not match actual ordering.
The skips added in CL 12579, based on incorrect time stamps,
should be sufficient to identify and exclude all the time-related
flakiness on these systems.
Using net/http's BenchmarkClientServerParallel4 on various CPU counts,
not tracing vs tracing:
The layout code has to date insisted on stack frames that are 16-aligned
including the saved LR, and it ensured this by growing the frame itself.
This breaks code that refers to values near the top of the frame by positive
offset from SP, and in general it's too magical: if you see TEXT xxx, $N,
you expect that the frame size is actually N, not sometimes N and sometimes N+8.
This led to a serious bug in the compiler where ambiguously live values
were not being zeroed correctly, which in turn triggered an assertion
in the GC about finding only valid pointers. The compiler has been
fixed to always emit aligned frames, and the hand-written assembly
has also been fixed.
Now that everything is aligned, make unaligned an error instead of
something to "fix" silently.
The nosplit stack overflow checks were confused about morestack.
The comment about not having correct SP information at the call
to morestack was true, but that was a real bug, not something to
work around. I fixed that problem in CL 12144. With that fixed,
no need to special-case morestack in the way done here.
This cleanup and simplification of the code was the first step
to fixing a bug that happened when I started working on the
arm64 frame size adjustments, but the cleanup was sufficient
to make the bug go away.
runtime, reflect: use correctly aligned stack frame sizes on arm64
arm64 requires either no stack frame or a frame with a size that is 8 mod 16
(adding the saved LR will make it 16-aligned).
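The rule amounts to a small computation. A sketch, with a hypothetical
helper name, of rounding a requested frame size up to satisfy it:

    // alignFrame rounds size up to the nearest value that is 8 mod 16,
    // so that adding the 8-byte saved LR yields a 16-byte-aligned frame.
    // A zero-size (frameless) function stays at zero.
    func alignFrame(size int64) int64 {
        if size == 0 {
            return 0
        }
        if r := size % 16; r != 8 {
            size += (8 - r + 16) % 16
        }
        return size
    }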
The cmd/internal/obj/arm64 package has been silently aligning frames, but this led to
a terrible bug when the compiler and obj disagreed on the frame size,
and it's just generally confusing, so we're going to make misaligned frames
an error instead of something that is silently changed.
This CL prepares by updating assembly files.
Note that the changes in this CL are already being done silently by
cmd/internal/obj/arm64, so there is no semantic effect here,
just a clarity effect.
If the compiler doesn't do it, cmd/internal/obj/arm64 will,
and that will break the zeroing of ambiguously live values
done in zerorange, which in turn produces uninitialized
pointer cells that the GC trips over.
This adds a GCCPUFraction field to MemStats that reports the
cumulative fraction of the program's execution time spent in the
garbage collector. This is equivalent to the utilization percent shown
in the gctrace output and makes that figure available programmatically.
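For example, a program can read the new field like this (using only
the public API):

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        var ms runtime.MemStats
        runtime.ReadMemStats(&ms)
        // GCCPUFraction is in [0, 1): the fraction of this program's
        // available CPU time consumed by the garbage collector so far.
        fmt.Printf("GC CPU fraction: %.4f\n", ms.GCCPUFraction)
    }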
This does have one small effect on the gctrace output: we now report
the duration of mark termination up to just before the final
start-the-world, rather than up to just after. However, unlike
stop-the-world, I don't believe there's any way that start-the-world
can block, so it should take negligible time.
While there are many statistics one might want to expose via MemStats,
this is one of the few that will undoubtedly remain meaningful
regardless of future changes to the memory system.
The diff for this change is larger than the actual change. Mostly it
lifts the code for computing the GC CPU utilization out of the
debug.gctrace path.
Currently we only capture GC phase transition times if
debug.gctrace>0, but we're about to compute GC CPU utilization
whether or not debug.gctrace is set, so we now need these
transition times unconditionally.
runtime: avoid race between SIGPROF traceback and stack barriers
The following sequence of events can lead to the runtime attempting an
out-of-bounds access on a stack barrier slice:
1. A SIGPROF comes in on a thread while the G on that thread is in
_Gsyscall. The sigprof handler calls gentraceback, which saves a
local copy of the G's stkbar slice. Currently the G has no stack
barriers, so this slice is empty.
2. On another thread, the GC concurrently scans the stack of the
goroutine being profiled (it considers it stopped because it's in
_Gsyscall) and installs stack barriers.
3. Back on the sigprof thread, gentraceback comes across a stack
barrier in the stack and attempts to look it up in its (zero
length) copy of G's old stkbar slice, which causes an out-of-bounds
access.
This commit fixes this by adding a simple CAS spin to synchronize the
SIGPROF handler with stack barrier insertion.
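The shape of such a spin, sketched with package sync/atomic (the
runtime uses its own atomic operations, and these helper names are
invented):

    import "sync/atomic"

    // lockStackBarriers spins until it owns the G's barrier lock; it
    // must spin rather than block because it runs in a signal handler.
    func lockStackBarriers(l *uint32) {
        for !atomic.CompareAndSwapUint32(l, 0, 1) {
            // spin: stack barrier insertion holds the lock only briefly
        }
    }

    func unlockStackBarriers(l *uint32) {
        atomic.StoreUint32(l, 0)
    }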
In general I would prefer that this synchronization be done through
the G status, since that's how stack scans are otherwise synchronized,
but adding a new lock is a much smaller change and G statuses are full
of subtlety.
Rick Hudson [Wed, 29 Jul 2015 16:03:54 +0000 (12:03 -0400)]
runtime: force mutator to give work buffer to GC
The scheduler, a work buffer's dispose, and write barriers
can conspire to hide a pointer from the GC's concurrent
mark phase. If this pointer is the only path to a large
amount of marking, the STW mark termination phase may take
a lot of time.
Consider the following:
1) dispose places a work buffer on the partial queue
2) the GC is busy so it does not immediately remove and
process the work buffer
3) the scheduler runs a mutator whose write barrier dequeues the
work buffer from the partial queue so the GC won't see it
This repeats until the GC reaches the mark termination
phase, where the GC finally discovers the pointer along
with a lot of work to do.
This CL fixes the problem by having the mutator
dispose of the buffer to the full queue instead of
the partial queue. Since the write barrier never asks for full
buffers, the conspiracy described above is not possible.
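A sketch of the two-queue arrangement, with invented types (the
runtime's queues are lock-free lists, not mutex-guarded slices):

    import "sync"

    type workbuf struct{ obj []uintptr }

    type workQueue struct {
        mu   sync.Mutex
        bufs []*workbuf
    }

    func (q *workQueue) push(b *workbuf) {
        q.mu.Lock()
        q.bufs = append(q.bufs, b)
        q.mu.Unlock()
    }

    type gcQueues struct {
        partial workQueue // write barriers may take buffers back from here
        full    workQueue // only the GC drains this queue
    }

    // dispose hands a mutator's buffer to the GC. Pushing it on the full
    // queue guarantees the GC eventually processes it; on the partial
    // queue a write barrier could dequeue it again and hide the work.
    func (q *gcQueues) dispose(b *workbuf) {
        q.full.push(b)
    }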
Ian Lance Taylor [Tue, 28 Jul 2015 14:42:09 +0000 (07:42 -0700)]
test: don't run issue10607.go on ppc64
This is a reprise of https://golang.org/cl/12623. In that CL I made
a suggestion which forgot that the +build constraints in the test
directory are not the same as those supported by the go tool: in the
test directory, if a single +build line fails, the test is skipped.
(In my defense, the code I was commenting on was also wrong.)
Change-Id: I8f29392a80b1983027f9a33043c803578409d678
Reviewed-on: https://go-review.googlesource.com/12776
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Mikio Hara [Tue, 9 Jun 2015 08:30:00 +0000 (17:30 +0900)]
net: don't return DNS query results including the second best records unconditionally
This change prevents DNS query results that use the domain search
list from unconditionally overtaking results that do not use the
list, which only happens when using the builtin DNS stub resolver.
The previous internal lookup function is split into lookup and
goLookupIPOrder to iterate over a set of names (the FQDN or absolute
FQDN, the name with domain label suffixes from the search list, and
the name without those suffixes) and to issue concurrent A and AAAA
record queries.
Ian Lance Taylor [Mon, 27 Jul 2015 17:41:41 +0000 (10:41 -0700)]
runtime: correct implementation of raiseproc on Solaris
I forgot that the libc raise function only sends the signal to the
current thread. We need to actually use kill and getpid here, as we
do on other systems.
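In terms of the portable syscall package (the runtime itself issues
raw system calls), the fix amounts to:

    import "syscall"

    // raiseproc delivers sig to the whole process rather than to the
    // current thread by using kill(getpid(), sig) instead of raise(sig).
    func raiseproc(sig syscall.Signal) error {
        return syscall.Kill(syscall.Getpid(), sig)
    }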
cmd/go: prefer <meta> tags on launchpad.net to the hard-coded logic
Fixes #11436.
Change-Id: I5c4455e9b13b478838f23ac31e6343672dfc60af
Reviewed-on: https://go-review.googlesource.com/12143
Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
David Crawshaw [Mon, 27 Jul 2015 22:02:45 +0000 (18:02 -0400)]
cmd/go: import runtime/cgo into darwin/arm64 tests
Until cl/12721 and cl/12574, all standard library tests included
runtime/cgo on darwin/arm64 by virtue of package os including it. Now
that is no longer true, runtime/cgo needs to be added by the go tool
just as it is for darwin/arm. (This installs the Mach exception
handler used to properly handle EXC_BAD_ACCESS.)
encoding/json: take new decoder code off Decode path completely
The new Token API is meant to sit on the side of the Decoder,
so that you only get the new code (and any latent bugs in it)
if you are actively using the Token API.
The unconditional use of dec.peek in dec.tokenPrepareForDecode
violates that intention.
Change tokenPrepareForDecode not to call dec.peek unless needed
(because the Token API has advanced the state).
This restores the old code path behavior: no peeking allowed.
I checked by patching in the new tests from CL 12726 that
this change suffices to "fix" the error handling bug in dec.peek.
Obviously that bug should be fixed too, but the point is that
with this CL, bugs in dec.peek do not affect plain use of Decode
or Unmarshal.
I also checked by putting a panic in dec.peek that the only
tests that now invoke peek are:
net/http: pause briefly after closing Server connection when body remains
From https://github.com/golang/go/issues/11745#issuecomment-123555313
this implements option (b), having the server pause slightly after
sending the final response on a TCP connection when we're about to close
it when we know there's a request body outstanding. This biases the
client (which might not be Go) to prefer our response header over the
request body write error.
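A sketch of the idea; the helper name and delay are illustrative, not
the values chosen by the CL:

    import (
        "net"
        "time"
    )

    // closeWithGrace gives the final response, already written, a head
    // start over the RST that closing a connection with unread request
    // body data can trigger.
    func closeWithGrace(c net.Conn) {
        time.Sleep(250 * time.Millisecond) // illustrative delay
        c.Close()
    }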
David Crawshaw [Mon, 27 Jul 2015 20:40:40 +0000 (16:40 -0400)]
runtime/cgo: remove TMPDIR logic for iOS
Seems like the simplest solution for 1.5. All the parts of the test
suite I can run on my current device (for which my exception handler
fix no longer works, apparently) pass without this code. I'll move it
into x/mobile/app.
Fixes #11884
Change-Id: I2da40c8c7b48a4c6970c4d709dd7c148a22e8727
Reviewed-on: https://go-review.googlesource.com/12721
Reviewed-by: Ian Lance Taylor <iant@golang.org>
runtime: close window that hides GC work from concurrent mark
Currently we enter mark 2 by first flushing all existing gcWork caches
and then setting gcBlackenPromptly, which disables further gcWork
caching. However, if a worker or assist pulls a work buffer into its
gcWork cache after that cache has been flushed but before caching is
disabled, that work may remain in that cache until mark termination.
If that work represents a heap bottleneck (e.g., a single pointer that
is the only way to reach a large amount of the heap), this can force
mark termination to do a large amount of work, resulting in a long
STW.
Fix this by reversing the order of these steps: first disable caching,
then flush all existing caches.
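Schematically, with invented helper names:

    var gcBlackenPromptly bool // stand-in for the runtime's flag

    func flushAllCaches() {
        // flush every P's gcWork cache to the global work queues
    }

    func enterMark2() {
        // Disable caching first: any worker that pulls a buffer into its
        // cache after this point will dispose of it promptly, so no work
        // can hide in a cache until mark termination.
        gcBlackenPromptly = true
        flushAllCaches()
    }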
Rick Hudson <rlh> did the hard work of tracking this down. This CL,
combined with CL 12672 and CL 12646, distills the critical parts of his
fix from CL 12539.
Fixes #11694.
Change-Id: Ib10d0a21e3f6170a80727d0286f9990df049fed2
Reviewed-on: https://go-review.googlesource.com/12688
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently the GC coordinator enables GC assists at the same time it
enables background mark workers, after the concurrent scan phase is
done. However, this means a rapidly allocating mutator has the entire
scan phase during which to allocate beyond the heap trigger and
potentially beyond the heap goal with no back-pressure from assists.
This prevents the feedback system that's supposed to keep the heap
size under the heap goal from doing its job.
Fix this by enabling mutator assists during the scan phase. This is
safe because the write barrier is already enabled and globally
acknowledged at this point.
There's still a very small window between when the heap size reaches
the heap trigger and when the GC coordinator is able to stop the world
during which the mutator can allocate unabated. This allows *very*
rapidly allocating mutators like TestTraceStress to still occasionally
exceed the heap goal by a small amount (~20 MB at most for
TestTraceStress). However, this seems like a corner case.
runtime: allow GC drain whenever write barrier is enabled
Currently we hand-code a set of phases when draining is allowed.
However, this set of phases is conservative. The critical invariant is
simply that the write barrier must be enabled if we're draining.
Shortly we're going to enable mutator assists during the scan phase,
which means we may drain during the scan phase. In preparation, this
commit generalizes these assertions to check the fundamental condition
that the write barrier is enabled, rather than checking that we're in
any particular phase.
Currently we clear both the mark 1 and mark 2 signals at the beginning
of concurrent mark. If either of these is clear, it acts as a signal
to the scheduler that it should start background workers. However,
this means that in the interim *between* mark 1 and mark 2, the
scheduler basically loops starting up new workers only to have them
return with nothing to do. In addition to harming performance and
delaying mutator work, this approach has a race where workers started
for mark 1 can mistakenly signal mark 2, causing it to complete
prematurely. This approach also interferes with starting assists
earlier to fix #11677.
Fix this by initially setting both mark 1 and mark 2 to "signaled".
The scheduler will not start background mark workers, though assists
can still run. When we're ready to enter mark 1, we clear the mark 1
signal and wait for it. Then, when we're ready to enter mark 2, we
clear the mark 2 signal and wait for it.
This structure also lets us deal cleanly with the situation where all
work is drained *prior* to the mark 2 wait, meaning that there may be
no workers to signal completion. Currently we deal with this using a
racy (and possibly incorrect) check for work in the coordinator itself
to skip the mark 2 wait if there's no work. This change makes the
coordinator unconditionally wait for mark completion and makes the
scheduler itself signal completion by slightly extending the logic it
already has to determine that there's no work and hence no use in
starting a new worker.
This is a prerequisite to fixing the remaining component of #11677,
which will require enabling assists during the scan phase. However, we
don't want to enable background workers until the mark phase because
they will compete with the scan. This change lets us use bgMark1 and
bgMark2 to indicate when it's okay to start background workers
independent of assists.
This is also a prerequisite to fixing #11694. It significantly reduces
the occurrence of long mark termination pauses in #11694 (from 64 out
of 1000 to 2 out of 1000 in one experiment).
Coincidentally, this also reduces the final heap size (and hence run
time) of TestTraceStress from ~100 MB and ~1.9 seconds to ~14 MB and
~0.4 seconds because it significantly shortens concurrent mark
duration.
Rick Hudson <rlh> did the hard work of tracking this down.
Currently, there are three ways to satisfy a GC assist: 1) the mutator
steals credit from background GC, 2) the mutator actually does GC
work, and 3) there is no more work available. Option 3 was never really
intended as a way to satisfy an assist, and it causes problems: there
are periods when it's expected that the GC won't have any work, such
as when transitioning from mark 1 to mark 2 and from mark 2 to mark
termination. During these periods, there's no back-pressure on rapidly
allocating mutators, which lets them race ahead of the heap goal.
For example, test/init1.go and the runtime/trace test both have small
reachable heaps and contain loops that rapidly allocate large garbage
byte slices. This bug lets these tests exceed the heap goal by several
orders of magnitude.
Fix this by forcing the assist (and hence the allocation) to block
until it can satisfy its debt via either 1 or 2, or the GC cycle
terminates.
This fixes one of the causes of #11677. It's still possible to overshoot
the GC heap goal, but with this change the overshoot is almost exactly
by the amount of allocation that happens during the concurrent scan
phase, between when the heap passes the GC trigger and when the GC
enables assists.
runtime: yield to GC coordinator after assist completion
Currently it's possible for the GC assist to signal completion of the
mark phase, which puts the GC coordinator goroutine on the current P's
run queue, and then return to mutator code that delays until the next
forced preemption before actually yielding control to the GC
coordinator, dragging out completion of the mark phase. This delay can
be further exacerbated if the mutator makes other goroutines runnable
before yielding control, since this will push the GC coordinator on
the back of the P's run queue.
To fix this, this CL adds a Gosched to the assist if it completed the
mark phase. This immediately and directly yields control to the GC
coordinator. This already happens implicitly in the background mark
workers because they park immediately after completing the mark.
This delay is one of the reasons completion of the mark phase drags
out, allowing the mutator to allocate without assisting and leading
to the large heap goal overshoot in issue #11677. This is also
a prerequisite to making the assist block when it can't pay off its
debt.
runtime: disallow GC assists in non-preemptible contexts
Currently it's possible to perform GC work on a system stack or when
locks are held if there's an allocation that triggers an assist. This
is generally a bad idea because of the fragility of these contexts,
and it's incompatible with two changes we're about to make: one is to
yield after signaling mark completion (which we can't do from a
non-preemptible context) and the other is to make assists block if
there's no other way for them to pay off the assist debt.
This commit simply skips the assist if it's called from a
non-preemptible context. The allocation will still count toward the
assist debt, so it will be paid off by a later assist. There should be
little allocation from non-preemptible contexts, so this shouldn't
harm the overall assist mechanism.
runtime: fix mark 2 completion in fractional/idle workers
Currently fractional and idle mark workers dispose of their gcWork
cache during mark 2 after incrementing work.nwait and after checking
whether there are any workers or any work available. This creates a
window for two races:
1) If the only remaining work is in this worker's gcWork cache, it
will see that there are no more workers and no more work on the
global lists (since it has not yet flushed its own cache) and
prematurely signal mark 2 completion.
2) After this worker has incremented work.nwait but before it has
flushed its cache, another worker may observe that there are no
more workers and no more work and prematurely signal mark 2
completion.
We can fix both of these by simply moving the cache flush above the
increment of nwait and the test of the completion condition.
This is probably contributing to #11694, though this alone is not
enough to fix it.
runtime: steal the correct amount of GC assist credit
GC assists are supposed to steal at most the amount of background GC
credit available so that background GC credit doesn't go negative.
However, they are instead stealing the *total* amount of their debt
but only claiming up to the amount of credit that was available. This
results in draining the background GC credit pool too quickly, which
results in unnecessary assist work.
The fix is trivial: steal the amount of work we meant to steal (which
is already computed).
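The accounting, sketched with an invented stand-in for the background
credit pool:

    import "sync/atomic"

    var bgScanCredit int64 // stand-in for the background credit pool

    // stealCredit takes at most debt from the background credit pool and
    // returns how much was actually stolen. The bug was debiting the
    // full debt while claiming only the available amount.
    func stealCredit(debt int64) int64 {
        stolen := debt
        if avail := atomic.LoadInt64(&bgScanCredit); stolen > avail {
            stolen = avail
        }
        if stolen < 0 {
            stolen = 0
        }
        atomic.AddInt64(&bgScanCredit, -stolen)
        return stolen
    }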
Ian Lance Taylor [Mon, 27 Jul 2015 17:30:26 +0000 (10:30 -0700)]
cmd/dist: run misc/cgo/testsovar on darwin and netbsd
CL https://golang.org/cl/12470 has reportedly fixed the problems that
the misc/cgo/testsovar test encountered on darwin and netbsd. Let's
actually run the test.
encoding/xml: fix race using finfo.parents in s.trim
This race was identified in #9796, but a sequence of fixes
proposed in golang.org/cl/4152 were rolled into
golang.org/cl/5910 which both fixed the race and
modified the name space behavior.
We rolled back the name space changes and lost the race fix.
Fix the race separate from the name space changes,
following the suggestion made by Roger Peppe in
https://go-review.googlesource.com/#/c/4152/7/src/encoding/xml/marshal.go@897
Fixes #9796.
Fixes #11885.
Change-Id: Ib2b68982da83dee9e04db8b8465a8295259bba46
Reviewed-on: https://go-review.googlesource.com/12687
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
runtime: always report starting heap size in gctrace
Currently the gctrace output reports the trigger heap size, rather
than the actual heap size at the beginning of GC. Often these are the
same, or at least very close. However, it's possible for the heap to
already have exceeded this trigger when we first check the trigger and
start GC; in this case, this output is very misleading. We've
encountered this confusion a few times when debugging and this
behavior is difficult to document succinctly.
Change the gctrace output to report the actual heap size when GC
starts, rather than the trigger.
Whenever someone pastes gctrace output into GitHub, it helpfully turns
the GC cycle number into a link to some unrelated issue. Prevent this
by removing the pound before the cycle number. The fact that this is a
cycle number is probably more obvious at a glance than most of the
other numbers.
Change-Id: Ie092a3a512a2d35967364b41081a066ab3a6aab4
Reviewed-on: https://go-review.googlesource.com/12571
Reviewed-by: Ian Lance Taylor <iant@golang.org>
cmd/go: use hg repo for code.google.com shutdown check
svn dies due to not being able to validate the googlecode.com certificate.
hg does not even attempt to validate it.
Fixes #11806.
Change-Id: I84ced5aa84bb1e4a4cdb2254f2d08a64a1ef23f6
Reviewed-on: https://go-review.googlesource.com/12558
Reviewed-by: Ian Lance Taylor <iant@golang.org>
cmd/go: fix custom import path wildcards (go get rsc.io/pdf/...)
Fixes TestGoGetWorksWithVanityWildcards,
but that test uses the network and is not run
on the builders.
For #11806.
Change-Id: I35c6677deaf84e2fa9bdb98b62d80d388b5248ae
Reviewed-on: https://go-review.googlesource.com/12557
Reviewed-by: Ian Lance Taylor <iant@golang.org>
net/http: on Transport body write error, wait briefly for a response
From https://github.com/golang/go/issues/11745#issuecomment-123555313:
The http.RoundTripper interface says you get either a *Response, or an
error.
But in the case of a client writing a large request and the server
replying prematurely (e.g. 403 Forbidden) and closing the connection
without reading the request body, what does the client want? The 403
response, or the error that the body couldn't be copied?
This CL implements the aforementioned comment's option c), making the
Transport give an N millisecond advantage to responses over body write
errors.
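Schematically, with invented names and an illustrative window (the
CL's actual duration may differ):

    import (
        "net/http"
        "time"
    )

    // waitForResponse: after writeErr from sending the request body,
    // prefer a response that arrives within the grace window.
    func waitForResponse(resc <-chan *http.Response, writeErr error) (*http.Response, error) {
        select {
        case res := <-resc:
            return res, nil // the response won the race
        case <-time.After(50 * time.Millisecond): // illustrative window
            return nil, writeErr
        }
    }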
go/build: make TestDependencies check all systems at once
We used to use build.Import to get the dependencies, but that meant
we had to repeat the check for every possible GOOS/GOARCH/cgo
combination, which took too long. So we made the test in short mode
only check the current GOOS/GOARCH/cgo combination.
But some combinations can't run the test at all. For example, darwin/arm64
does not run tests with a full source file system, so it cannot test itself,
and nothing was testing darwin/arm64. This led to bugs like #10455
not being caught.
Rewrite the test to read the imports out of the source files ourselves,
so that we can look at all source files in a directory in one pass,
regardless of which GOOS/GOARCH/cgo/etc they require.
This one complete pass runs in the same amount of time as the
old single combination check ran, so we can now test all systems,
even in short mode.
Change-Id: Ie216303c2515bbf1b6fb717d530a0636e271cb6d
Reviewed-on: https://go-review.googlesource.com/12576
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Peter Waldschmidt [Sat, 18 Apr 2015 07:23:32 +0000 (03:23 -0400)]
encoding/json: add JSON streaming parse API
This change adds new methods to Decoder.
* Decoder.Token steps through a JSON document, returning a value for each token.
* Decoder.Decode unmarshals the entire value at the token stream's current
position (in addition to its existing function in a stream of JSON values)
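For example, a minimal walk over a document with the new Token method:

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "log"
        "strings"
    )

    func main() {
        dec := json.NewDecoder(strings.NewReader(`{"name":"go","tags":["fast","fun"]}`))
        for {
            tok, err := dec.Token()
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%T: %v\n", tok, tok)
        }
    }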
A while back we discovered that the dependencies test allowed
arbitrary dependencies for packages we forgot to list.
To stop the damage we added a grandfathered list and fixed
the code to expect unlisted packages to have no dependencies.
This CL replaces the grandfathered list with some more
careful placement of dependency rules.
Thankfully, there were no terrible inversions.
Fixes #10487.
Change-Id: I5a6f92435bd2c66c47ec8ab629edbd88b189f028
Reviewed-on: https://go-review.googlesource.com/12575
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
It was not running because of invalid use of ArchChar.
I didn't catch this when I scrubbed ArchChar from the tree
because this code wasn't in the tree yet.
The test seems to pass, which is nice.
Change-Id: I59761a7a04a73681e147e25c1e7f010068276aa8
Reviewed-on: https://go-review.googlesource.com/12573
Reviewed-by: Robert Griesemer <gri@golang.org>
There is clearly work to do here with respect to xml name spaces,
but I don't believe the changes in this cycle are clearly correct.
The changes in this cycle have visible impact on the generated xml,
possibly breaking existing programs, and yet it's not clear that they
are the end of the story: there is still significant confusion about how
name spaces work or should work (see #9519, #9775, #8167, #7113).
I would like to wait to make breaking changes until we completely
understand what the behavior should be and can evaluate the benefit
of those breaking changes. My main concern here is that we will break
programs in Go 1.5 for the sake of name space adjustments and then
while trying to fix those other bugs we'll break programs in Go 1.6 too.
Let's wait until we know all the changes we want to make before we
decide whether or how to break existing programs.
This CL reverts:
5ae822b encoding/xml: minor changes
bb7e665 encoding/xml: fix xmlns= behavior
9f9d66d encoding/xml: fix default namespace of tags
b69ea01 encoding/xml: fix namespaces in a>b tags
3be158d encoding/xml: encoding name spaces correctly
I have confirmed that the name space parts of the test suite
as of this CL passes against the Go 1.4 encoding/xml package,
indicating that this CL successfully restores the Go 1.4 behavior.
(Other tests do not, but that's because there were some real
bug fixes in this cycle that are being kept. Specifically, the
tests that don't pass in Go 1.4 are TestMarshal's NestedAndComment
case, TestEncodeToken's encoding of newlines, and
TestSimpleUseOfEncodeToken returning an error for invalid
token types.)
I also checked that the Go 1.4 tests pass when run against
this copy of the sources.
Fixes #11841.
Change-Id: I97de06761038b40388ef6e3a55547ff43edee7cb
Reviewed-on: https://go-review.googlesource.com/12570
Reviewed-by: Nigel Tao <nigeltao@golang.org>
Rob Pike [Fri, 24 Jul 2015 04:37:58 +0000 (14:37 +1000)]
cmd/go: mention go tool compile etc. in the help text for build
Not everyone is aware that go build is a wrapper for other
tools. Mention this in the text for go help build so people using
other build systems won't just wrap go build, which is usually a
mistake (it doesn't do incremental builds by default, for instance).
Update #11854.
Change-Id: I759f91f23ccd3671204c39feea12a3bfaf9f0114
Reviewed-on: https://go-review.googlesource.com/12625
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Michael Hudson-Doyle [Fri, 24 Jul 2015 02:26:29 +0000 (14:26 +1200)]
runtime: pass a smaller buffer to sched_getaffinity on ARM
The system stack is only around 8kb on ARM so one can't put an 8kb buffer on
the stack. More than 1024 ARM cores seems sufficiently unlikely for the
foreseeable future.
Currently TestDoDupSuppress can fail if the goroutines created by its
loop run sequentially. This is rare, but it has caused failures on the
dashboard and in stress testing.
While I think there's no way to eliminate all possible thread
schedules that could make this test fail because it depends on waiting
until a Group.Do blocks, it is possible to make it much more robust.
This commit deflakes this test by forcing at least one invocation of
fn to start and all goroutines to reach the line just before the Do
call before allowing fn to proceed. fn then waits 10 milliseconds
before returning to allow the goroutines to pass through the Do.
With this change, in 50,000 runs of the stress testing configuration,
the number of calls to fn never even exceeded 1.
Ian Lance Taylor [Fri, 24 Jul 2015 05:40:30 +0000 (22:40 -0700)]
runtime: require gdb version 7.9 for gdb test
Issue 11214 reports problems with older versions of gdb. It does work
with gdb 7.9 on my Ubuntu Trusty system, so take that as the minimum
required version.
In https://go-review.googlesource.com/#/c/8611/, these tests were
supposed to be skipped only for linux and darwin, as the comment says.
This patch fixes the logic in the if condition.
Change-Id: Iff0a32186267457a414912c4c3ee4495650891a2
Reviewed-on: https://go-review.googlesource.com/12517
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: Robert Griesemer <gri@golang.org>
Ian Lance Taylor [Wed, 15 Jul 2015 16:05:33 +0000 (09:05 -0700)]
cmd/link: don't generate .exe extension for external Windows link
On Windows, gcc -o foo will generate foo.exe. Prevent that from
happening by adding a final '.' if necessary so that GCC thinks that
the file already has an extension.
Also remove the initial output file when doing an external link, and
use mayberemoveoutfile, not os.Remove, when building an archive
(otherwise we will do the wrong thing for -buildmode=c-archive -o
/dev/null).
I didn't add a test, as it requires using cgo and -o on Windows.
encoding/xml: EncodeToken silently eats tokens with invalid type
EncodeToken takes a Token (i.e. an interface{}) as a parameter,
and expects a value of type StartElement, EndElement, CharData,
Comment, ProcInst, or Directive.
If a pointer is passed instead, or any type which does not match
this list, the token is silently ignored.
Added a default case in the type switch to issue a proper error
when the type is invalid.
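The shape of the fix, sketched here as a standalone wrapper (the real
change is inside the encoder's type switch):

    import (
        "encoding/xml"
        "fmt"
    )

    func encodeToken(enc *xml.Encoder, t xml.Token) error {
        switch t.(type) {
        case xml.StartElement, xml.EndElement, xml.CharData,
            xml.Comment, xml.ProcInst, xml.Directive:
            return enc.EncodeToken(t)
        default:
            // previously this case fell through and the token vanished
            return fmt.Errorf("xml: EncodeToken of invalid token type %T", t)
        }
    }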
The behavior could later be improved by also accepting pointers to
tokens, but not for go1.5.