Tim Cooper [Fri, 3 Nov 2017 23:17:08 +0000 (20:17 -0300)]
hash: add marshaling, unmarshaling example
Example usage of functionality implemented in CL 66710.
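A sketch of the usage pattern the new example demonstrates (my illustration; the exact code in the CL may differ): the intermediate state of a hash is saved with encoding.BinaryMarshaler and restored into a fresh hash, so the earlier data does not have to be re-written.

    package main

    import (
        "crypto/sha256"
        "encoding"
        "fmt"
        "log"
    )

    func main() {
        h1 := sha256.New()
        h1.Write([]byte("part one, "))

        // Save the intermediate state of h1 (sha256's digest implements
        // encoding.BinaryMarshaler as of this change).
        state, err := h1.(encoding.BinaryMarshaler).MarshalBinary()
        if err != nil {
            log.Fatal(err)
        }

        // Restore that state into a fresh hash.
        h2 := sha256.New()
        if err := h2.(encoding.BinaryUnmarshaler).UnmarshalBinary(state); err != nil {
            log.Fatal(err)
        }

        // Both hashes now produce the same digest for the remaining data,
        // without re-writing "part one, " to h2.
        h1.Write([]byte("part two"))
        h2.Write([]byte("part two"))
        fmt.Printf("%x\n%x\n", h1.Sum(nil), h2.Sum(nil))
    }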
Change-Id: I87d6e4d2fb7a60e4ba1e6ef02715480eb7e8f8bd
Reviewed-on: https://go-review.googlesource.com/76011
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Russ Cox [Fri, 3 Nov 2017 05:24:19 +0000 (01:24 -0400)]
cmd/go: do not install dependencies during "go install"
This CL makes "go install" behave the way many users expect:
install only the things named on the command line.
Future builds still run as fast, thanks to the new build cache (CL 75473).
To install dependencies as well (the old behavior), use "go install -i".
Actual definitions aside, what most users know and expect of "go install"
is that (1) it installs what you asked, and (2) it's fast, unlike "go build".
It was fast because it installed dependencies, but installing dependencies
confused users repeatedly (see for example #5065, #6424, #10998, #12329),
and "-i" had to be added to "go build" and "go test" so that they could
be "fast" too, but that only
created new opportunities for confusion. We also had to add -installsuffix
and then -pkgdir, to allow "fast" even when dependencies could not be
installed in the usual place.
The recent introduction of precise content-based staleness logic means that
the go command detects the need for rebuilding packages more often than it
used to, with the consequence that "go install" rebuilds and reinstalls
dependencies more than it used to. This will create more new opportunities
for confusion and will certainly lead to more issues filed like the ones
listed above.
CL 75743 introduced a build cache, separate from the install locations.
That cache makes all operations equally incremental and fast, whether or
not the operation is "install" or "build", and whether or not "-i" is used.
Installing dependencies is no longer necessary for speed; it has confused
users in the past, and the more accurate rebuilds mean that it will confuse
users even more often in the future. This CL aims to end all that confusion
by not installing dependencies by default.
By analogy with "go build -i" and "go test -i", which still install
dependencies, this CL introduces "go install -i", which installs
dependencies in addition to the things named on the command line.
Russ Cox [Tue, 31 Oct 2017 17:00:51 +0000 (13:00 -0400)]
cmd/go: run vet automatically during go test
This CL adds an automatic, limited "go vet" to "go test".
If the building of a test package fails, vet is not run.
If vet fails, the test is not run.
The goal is that users don't notice vet as part of the "go test"
process at all, until vet speaks up and says something important.
This should help users find real problems in their code faster
(vet can just point to them instead of needing to debug a
test failure) and expands the scope of what kinds of things
vet can help with.
The "go vet" runs in parallel with the linking of the test binary,
so for incremental builds it typically does not slow the overall
"go test" at all: there's spare machine capacity during the link.
all.bash has less spare machine capacity. This CL increases
the time for all.bash on my laptop from 4m41s to 4m48s (+2.5%).
To opt out for a given run, use "go test -vet=off".
The vet checks used during "go test" are a subset of the full set,
restricted to ones that are 100% correct and therefore acceptable
to make mandatory. In this CL, that set is atomic, bool, buildtags,
nilfunc, and printf. Including printf is debatable, but I want to
include it for now and find out what needs to be scaled back.
(It already found one real problem in package os's tests that
previous go vet os had not turned up.)
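For reference, the printf check flags format/argument mismatches like the one below (an illustrative case, not the actual problem found in package os):

    package p

    import "testing"

    func TestCount(t *testing.T) {
        n := 3
        // vet's printf check reports that the %s verb does not match the
        // int argument n; until it is fixed, "go test" will not run the test.
        t.Errorf("expected %s items, got none", n)
    }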
Now that we can rely on type information it may be that printf
should make its function-name-based heuristic less aggressive
and have a whitelist of known print/printf functions.
Determining the exact set for Go 1.10 is #18085.
Running vet also means that programs now have to type-check
with both cmd/compile and go/types in order to pass "go test".
We don't start vet until cmd/compile has built the test package,
so normally the added go/types check doesn't find anything.
However, there is at least one instance where go/types is more
precise than cmd/compile: declared and not used errors involving
variables captured into closures.
This CL includes a printf fix to os/os_test.go and many declared
and not used fixes in the race detector tests.
Fixes #18084.
Change-Id: I353e00b9d1f9fec540c7557db5653e7501f5e1c9
Reviewed-on: https://go-review.googlesource.com/74356
Run-TryBot: Russ Cox <rsc@golang.org> Reviewed-by: Rob Pike <r@golang.org> Reviewed-by: David Crawshaw <crawshaw@golang.org>
Russ Cox [Wed, 1 Nov 2017 02:13:04 +0000 (22:13 -0400)]
cmd/go: cache successful test results
This CL adds caching of successful test results, keyed by the
action ID of the test binary and its command line arguments.
Suppose you run:
go test -short std
<edit a typo in a comment in math/big/float.go>
go test -short std
Before this CL, the second go test would re-run all the tests
for the std packages. Now, the second go test will use the cached
result immediately (without any compile or link steps) for any
packages that do not transitively import math/big, and then
it will, after compiling math/big and seeing that the .a file didn't
change, reuse the cached test results for the remaining packages
without any additional compile or link steps.
Suppose that instead of editing a typo you made a substantive
change to one function, but you left the others (including their
line numbers) unchanged. Then the second go test will re-link
any of the tests that transitively depend on math/big, but it still
will not re-run the tests, because the link will result in the same
test binary as the first run.
The only cacheable test arguments are:
-cpu
-list
-parallel
-run
-short
-v
Using any other test flag disables the cache for that run.
The suggested argument to mean "turn off the cache" is -count=1
(asking "please run this 1 time, not 0").
There's an open question about re-running tests when inputs
like environment variables and input files change. For now we
will assume that users will bypass the test cache when they
need to do so, using -count=1 or "go test" with no arguments.
This CL documents the new cache but also documents the
previously-undocumented distinction between "go test" with
no arguments (now called "local directory mode") and with
arguments (now called "package list mode"). It also cleans up
a minor detail of package list mode buffering that used to change
whether test binary stderr was sent to go command stderr based
on details like exactly how many packages were listed or
how many CPUs the host system had. Clearly the file descriptor
receiving output should not depend on those, so package list mode
now consistently merges all output to stdout, where before it
mostly did that but not always.
Hugues Bruant [Fri, 3 Nov 2017 02:54:46 +0000 (19:54 -0700)]
cmd/compile: fix reassignment check
CL 65071 enabled inlining for local closures with no captures.
To determine the safety of inlining a call site, we check whether the
variable holding the closure has any assignments after its original
definition.
Unfortunately, that check did not catch OAS2MAPR and OAS2DOTTYPE,
leading to incorrect inlining when a variable holding a closure was
subsequently reassigned through a type conversion or a 2-valued map
access.
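A sketch of the problematic shape (illustrative, not the exact test case from the CL): the closure variable f is reassigned through a 2-valued map access, so inlining the call as the original closure body would be incorrect.

    package p

    func g() int {
        f := func() int { return 1 } // local closure with no captures

        m := map[int]func() int{0: func() int { return 2 }}

        // OAS2MAPR: f is reassigned through a comma-ok map access.
        // Before the fix, the reassignment check missed this form, so the
        // call below could be wrongly inlined as "return 1".
        var ok bool
        f, ok = m[0]
        if !ok {
            return 0
        }
        return f() // must use the value loaded from the map, i.e. 2
    }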
There was another, more subtle issue wherein the reassignment check would
always return a false positive for closure calls inside other
closures. This was caused by the Name.Curfn field of local variables
pointing to the OCLOSURE node instead of the corresponding ODCLFUNC,
which resulted in reassigned() walking an empty Nbody and thus never
seeing any reassignments.
This CL fixes these oversights and adds many more tests for closure
inlining which ensure not only that inlining triggers but also the
correctness of the resulting code.
Keith Randall [Fri, 3 Nov 2017 17:30:14 +0000 (10:30 -0700)]
bytes: add more page boundary tests
Make sure Index and IndexByte don't read past the queried byte slice.
Hopefully this will be helpful for CL 33597.
Also remove the code which maps/unmaps the Go heap.
Much safer to play with protection bits off-heap.
Change-Id: I50d73e879b2d83285e1bc7c3e810efe4c245fe75
Reviewed-on: https://go-review.googlesource.com/75890 Reviewed-by: Ian Lance Taylor <iant@golang.org>
Lynn Boger [Mon, 30 Oct 2017 16:30:45 +0000 (12:30 -0400)]
cmd/compile: add rules to improve consecutive byte loads and stores on ppc64le
This adds new rules to recognize consecutive byte loads and
stores and lower them to wider loads and stores such as lhz, lwz, ld,
sth, stw, and std. This change only covers the little-endian cases
on little-endian machines, such as those found in encoding/binary's
UintXX and PutUintXX for little endian. Big endian will be done
later.
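For reference, this is the kind of pattern the new rules target (a sketch of what encoding/binary's little-endian helpers do, not code from the CL):

    package p

    // A Uint32-style load assembled from four byte loads; on ppc64le this
    // can now be lowered to a single lwz instead of four byte loads plus
    // shifts and ors.
    func load32le(b []byte) uint32 {
        return uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24
    }

    // A PutUint32-style store assembled from four byte stores; this can now
    // be lowered to a single stw.
    func store32le(b []byte, v uint32) {
        b[0] = byte(v)
        b[1] = byte(v >> 8)
        b[2] = byte(v >> 16)
        b[3] = byte(v >> 24)
    }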
Updates were also made to binary_test.go so that the benchmarks
for Uint and PutUint actually use those functions; as originally
written, those functions were being optimized out.
Testcases were also added to cmd/compile/internal/gc/asm_test.go.
Updates #22496
The following improvement can be found in golang.org/x/crypto
Hana (Hyang-Ah) Kim [Mon, 18 Sep 2017 17:54:21 +0000 (13:54 -0400)]
runtime/pprof: use new profile format for block/mutex profiles
Unlike the legacy text format that outputs the count and the number of
cycles, the pprof tool expects contention profiles to include the count
and the delay time measured in nanoseconds. printCountCycleProfile
performs the conversion from cycles to nanoseconds.
(See parseContention function in
cmd/vendor/github.com/google/pprof/profile/legacy_profile.go)
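For context, a block/mutex profile in the new format is typically produced like this (a sketch; not code from the CL):

    package main

    import (
        "os"
        "runtime"
        "runtime/pprof"
        "sync"
        "time"
    )

    func main() {
        runtime.SetMutexProfileFraction(1) // sample every mutex contention event
        runtime.SetBlockProfileRate(1)     // sample every blocking event

        var mu sync.Mutex
        mu.Lock()
        go func() {
            time.Sleep(10 * time.Millisecond)
            mu.Unlock()
        }()
        mu.Lock() // contended: recorded in the block and mutex profiles
        mu.Unlock()

        // With this change the profile is written in the binary proto format,
        // with delays converted from cycles to nanoseconds.
        f, err := os.Create("mutex.pb.gz")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        pprof.Lookup("mutex").WriteTo(f, 0)
    }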
Fixes #21474
Change-Id: I8e8fb6ea803822d7eaaf9ecf1df3e236ad225a7b
Reviewed-on: https://go-review.googlesource.com/64410
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Russ Cox <rsc@golang.org>
Russ Cox [Wed, 1 Nov 2017 01:50:48 +0000 (21:50 -0400)]
cmd/go: cache built packages
This CL adds caching of built package files in $GOCACHE, so that
a second build with a particular configuration will be able to reuse
the work done in the first build of that configuration, even if the
first build was only "go build" and not "go install", or even if there
was an intervening "go install" that wiped out the installed copy of
the first build.
The benchjuju benchmark runs go build on a specific revision of jujud 10 times.
Before this CL:
102.72u 15.29s 21.98r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
105.99u 15.55s 22.71r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
106.49u 15.70s 22.82r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
107.09u 15.72s 22.94r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
108.19u 15.85s 22.78r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
108.92u 16.00s 23.02r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
109.25u 15.82s 23.05r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
109.57u 15.96s 23.11r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
109.86u 15.97s 23.17r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
110.50u 16.05s 23.37r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
After this CL:
113.66u 17.00s 24.17r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
3.85u 0.68s 3.49r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
3.98u 0.72s 3.63r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
4.07u 0.72s 3.57r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
3.98u 0.70s 3.43r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
4.58u 0.70s 3.58r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
3.90u 0.70s 3.46r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
3.85u 0.71s 3.52r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
3.70u 0.69s 3.64r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
3.79u 0.68s 3.41r go build -o /tmp/jujud github.com/juju/juju/cmd/jujud ...
This CL reduces the overall all.bash time from 4m22s to 4m17s on my laptop.
Not much faster, but also not slower.
Russ Cox [Thu, 2 Nov 2017 03:18:34 +0000 (23:18 -0400)]
cmd/go: add README and access log to cache directory
The README is there to help people who stumble across the directory.
The access log is there to help us evaluate potential algorithms for
managing and pruning cache directories. For now the management
is manual: users have to run "go clean -cache" if they want the cache
to get smaller.
As a low-resolution version of the access log, we also update the
mtime on each cache file as it is used by the go command.
A simple refinement of go clean -cache would be to delete
(perhaps automatically) cache files that have not been used in more
than one day, or some suitable time period.
Gabriel Aszalos [Fri, 3 Nov 2017 08:40:52 +0000 (09:40 +0100)]
runtime: clarify GOROOT return value in documentation
The current GOROOT documentation could indicate that changing the
environment variable at runtime would affect the return value of
GOROOT. This is false as the returned value is the one used for the
build. This CL aims to clarify the confusion.
isharipo [Thu, 2 Nov 2017 17:17:59 +0000 (20:17 +0300)]
cmd/internal/obj/x86: add most missing AVX1/2 insts
This change applies x86avxgen output.
https://go-review.googlesource.com/c/arch/+/66972
As an effect, many new AVX instructions are now available.
One of the side-effects of this patch is
sorted AXXX (A-enum) constants.
Some AVX1/2 instructions are still not added due to:
1. x86.csv V0.2 does not list them;
2. partially because of (1), test suite does not contain tests for
these instructions.
Zhengyu He [Thu, 2 Nov 2017 20:39:14 +0000 (13:39 -0700)]
runtime: fix GNU/Linux getproccount if sched_getaffinity does not return a multiple of 8
The current code can potentially return a smaller processor count on a
linux kernel when its cpumask_size (controlled by both kernel config and
boot parameter) is not a multiple of the pointer size, because
r/sys.PtrSize will be rounded down. Since sched_getaffinity returns the
size in bytes, we can just allocate the buf as a byte array to avoid the
extra calculation with the pointer size and roundups.
Change-Id: I0c21046012b88d8a56b5dd3dde1d158d94f8eea9
Reviewed-on: https://go-review.googlesource.com/75591
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
griesemer [Thu, 2 Nov 2017 00:38:07 +0000 (17:38 -0700)]
cmd/compile: avoid spurious errors for invalid map key types
Instead of trying to validate map key types eagerly in some
cases, delay their validation to the end of type-checking,
when all type information is present.
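An illustrative shape (my example, not taken from the issues): whether K is a valid map key is only known once the whole set of declarations has been type-checked, so eager validation can misfire.

    package p

    // M is declared before K, so the key type cannot be validated eagerly.
    type M map[K]bool

    // K contains a slice and is therefore not comparable, making it an
    // invalid map key type; the error should be reported exactly once,
    // without spurious follow-on errors.
    type K struct {
        data []byte
    }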
Passes go build -toolexec 'toolstash -cmp' -a std .
Fixes #21273.
Fixes #21657.
Change-Id: I532369dc91c6adca1502d6aa456bb06b57e6c7ff
Reviewed-on: https://go-review.googlesource.com/75310 Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Tom Bergan [Thu, 2 Nov 2017 22:41:10 +0000 (15:41 -0700)]
net/http: clarify when it is safe to reuse a request
The godoc for RoundTrip already specifies when it's ok to reuse a
request that contains a body: the caller must wait until RoundTrip
calls Close on Request.Body.
This CL adds a small clarification: If the request does not have a
body, it can be reused as long as the caller does not mutate the
Request until RoundTrip fails or the Response.Body is closed.
Joe Tsai [Thu, 2 Nov 2017 22:03:28 +0000 (15:03 -0700)]
io: fix Pipe regression with differing error types
Usage of atomic.Value has a subtle requirement that the
value be of the same concrete type. In prior usage, the intention
was to consistently store a value of the error type.
Since error is an interface, the underlying concrete type can differ.
Fix this by creating a type-safe abstraction over atomic.Value
that wraps errors in a struct{error} type to ensure consistent types.
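The pattern described above, roughly (a sketch of the wrapper, not necessarily identical to the code in the CL):

    package main

    import (
        "errors"
        "fmt"
        "sync/atomic"
    )

    // atomicError wraps atomic.Value so that every stored value has the same
    // concrete type, struct{ error }, regardless of the caller's error type.
    type atomicError struct{ v atomic.Value }

    func (a *atomicError) Store(err error) {
        a.v.Store(struct{ error }{err})
    }

    func (a *atomicError) Load() error {
        err, _ := a.v.Load().(struct{ error })
        return err.error
    }

    func main() {
        var a atomicError
        a.Store(errors.New("first"))
        a.Store(fmt.Errorf("second: %d", 2)) // different concrete error type; no panic
        fmt.Println(a.Load())
    }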
Change-Id: Ica74f2daba15e4cff48d2b4f830d2cb51c608fb6
Reviewed-on: https://go-review.googlesource.com/75594
Run-TryBot: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Daniel Martí [Tue, 24 Oct 2017 16:15:30 +0000 (17:15 +0100)]
cmd/compile: turn some pointer params into results
These are likely from the time when gc was written in C. There is no
need for any of these to be passed pointers, as the previous values are
not kept in any way, and the pointers are never nil. Others were left
untouched as they fell into one of these useful cases.
While at it, also turn some 0/1 integers into booleans.
Passes toolstash -cmp on std cmd.
Change-Id: Id3a9c9e84ef89536c4dc69a7fdbacd0fd7a76a9b
Reviewed-on: https://go-review.googlesource.com/72990
Run-TryBot: Daniel Martí <mvdan@mvdan.cc> Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Brad Fitzpatrick [Thu, 2 Nov 2017 19:17:13 +0000 (19:17 +0000)]
net/http: remove some log spam in test, add missing error detail
Updates #22540
Change-Id: I26e79c25652976fac6f2e5a7afb4fd1240996d74
Reviewed-on: https://go-review.googlesource.com/75531 Reviewed-by: Tom Bergan <tombergan@google.com>
Run-TryBot: Tom Bergan <tombergan@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Joe Tsai [Fri, 20 Oct 2017 00:31:31 +0000 (17:31 -0700)]
go/printer: forbid empty line before first comment in block
To improve readability when exported fields are removed,
forbid the printer from emitting an empty line before the first comment
in a const, var, or type block.
Also, when printing the "Has filtered or unexported fields." message,
add an empty line before it to separate the message from the struct
or interface contents.
Before the change:
<<<
type NamedArg struct {

    // Name is the name of the parameter placeholder.
    //
    // If empty, the ordinal position in the argument list will be
    // used.
    //
    // Name must omit any symbol prefix.
    Name string

    // Value is the value of the parameter.
    // It may be assigned the same value types as the query
    // arguments.
    Value interface{}
    // contains filtered or unexported fields
}
>>>
After the change:
<<<
type NamedArg struct {
    // Name is the name of the parameter placeholder.
    //
    // If empty, the ordinal position in the argument list will be
    // used.
    //
    // Name must omit any symbol prefix.
    Name string

    // Value is the value of the parameter.
    // It may be assigned the same value types as the query
    // arguments.
    Value interface{}

    // contains filtered or unexported fields
}
>>>
Fixes #18264
Change-Id: I9fe17ca39cf92fcdfea55064bd2eaa784ce48c88
Reviewed-on: https://go-review.googlesource.com/71990
Run-TryBot: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Robert Griesemer <gri@golang.org>
Martin Möhrmann [Thu, 7 Sep 2017 03:57:44 +0000 (05:57 +0200)]
runtime: refactor insertion slot tracking for fast hashmap functions
* Avoid calculating insertk until needed.
* Avoid a pointer into b.tophash and just track the insertion index.
This avoids b.tophash being marked as escaping to heap.
* Calculate val only once at the end of the mapassign functions.
Function sizes decrease slightly, e.g. for mapassign_faststr:
before "".mapassign_faststr STEXT size=1166 args=0x28 locals=0x78
after "".mapassign_faststr STEXT size=1080 args=0x28 locals=0x68
Martin Möhrmann [Sat, 2 Sep 2017 16:46:59 +0000 (18:46 +0200)]
cmd/compile: specialize map creation for small hint sizes
Handle make(map[any]any) and make(map[any]any, hint) where
hint <= BUCKETSIZE special to allow for faster map initialization
and to improve binary size by using runtime calls with fewer arguments.
When the hint is smaller than or equal to BUCKETSIZE, overLoadFactor(hint, 0)
is false and no buckets would be allocated by makemap:
* If hmap needs to be allocated on the stack then only hmap's hash0
field needs to be initialized and no call to makemap is needed.
* If hmap needs to be allocated on the heap then a new special
makehmap function will allocate hmap and initialize hmap's
hash0 field.
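Concretely, the forms that hit the fast path look like this (BUCKETSIZE is 8 in the current runtime; a sketch, not code from the CL):

    package p

    func f() {
        m1 := make(map[string]int)     // no hint
        m2 := make(map[string]int, 8)  // hint <= BUCKETSIZE: no buckets allocated up front
        m3 := make(map[string]int, 64) // larger hint: general makemap path
        _, _, _ = m1, m2, m3
    }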
Reduces size of the godoc by ~36kb.
AMD64
name old time/op new time/op delta
NewEmptyMap 16.6ns ± 2% 5.5ns ± 2% -66.72% (p=0.000 n=10+10)
NewSmallMap 64.8ns ± 1% 56.5ns ± 1% -12.75% (p=0.000 n=9+10)
Updates #6853
Change-Id: I624e90da6775afaa061178e95db8aca674f44e9b
Reviewed-on: https://go-review.googlesource.com/61190
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Keith Randall <khr@golang.org>
Blixt [Wed, 1 Nov 2017 20:25:24 +0000 (16:25 -0400)]
time: fix incorrect "zero pad" comment in example
The comment currently implies that a zero will be added, but the
underscore is used to add a space for single-digit dates.
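For reference (my illustration, not the example being fixed):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t := time.Date(2017, time.November, 3, 0, 0, 0, 0, time.UTC)
        fmt.Println(t.Format("Jan _2")) // "Nov  3": underscore pads single-digit days with a space
        fmt.Println(t.Format("Jan 02")) // "Nov 03": zero-padded
        fmt.Println(t.Format("Jan 2"))  // "Nov 3":  no padding
    }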
Change-Id: Ib3bac8a16bc2d1fcb26ab3bb7ad172b89e1a4a24
Reviewed-on: https://go-review.googlesource.com/75230 Reviewed-by: Ian Lance Taylor <iant@golang.org>
Tobias Klauser [Wed, 1 Nov 2017 16:06:52 +0000 (17:06 +0100)]
runtime/pprof: use switch for GOOS check in testCPUProfile
Since CL 33071, testCPUProfile is the only user of the badOS map.
Replace it by the corresponding switch, with the "plan9" case removed
because it is already checked earlier in the same function.
Change-Id: Id647b8ee1fd37516bb702b35b3c9296a4f56b61b
Reviewed-on: https://go-review.googlesource.com/75110
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Russ Cox [Wed, 1 Nov 2017 23:57:50 +0000 (19:57 -0400)]
cmd/go: add cache verification mode
If GODEBUG=gocacheverify=1, then instead of using the cache to
avoid computations, the go command will do the computations and
double-check that they match any existing cache entries.
This is handled entirely in the cache implementation; there's no
complexity added to any of the cache usage sites.
(As of this CL there aren't any cache usage sites, but soon there will be.)
Also change GOCMDDEBUGHASH to the more usual GODEBUG=gocachehash=1.
Russ Cox [Wed, 1 Nov 2017 23:22:56 +0000 (19:22 -0400)]
cmd/dist: set GOCACHE during make.bash/run.bash
Use a build cache separate from the default user cache,
one that will be wiped out during startup, so that make.bash
continues to start from a clean slate.
The cache is stored in $GOCACHE, which is printed by go env and
defaults to a subdirectory named "go-build" in the standard user cache
directory for the host operating system.
This CL only implements the cache. Future CLs will store data in it.
griesemer [Wed, 1 Nov 2017 21:24:06 +0000 (14:24 -0700)]
go/types: avoid repeated "declared but not used" errors for closure variables
At the end of type-checking a function or closure, unused local variables
are reported by looking at all variables in the function scope and its
nested children scopes. If a nested scope belonged to a nested function
(closure), that scope would be searched twice, leading to multiple error
messages for unused variables.
This CL introduces an internal-only marker to identify function scopes
so that they can be ignored where needed.
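An illustrative case (my example, not the one from the issue): the unused variable x lives in the closure's scope, which was previously searched twice and reported twice. The program intentionally fails type-checking.

    package p

    func f() {
        g := func() {
            x := 1 // go/types: "x declared but not used" - reported twice before this fix
        }
        g()
    }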
Fixes #22524.
Change-Id: If58cc17b2f0615a16f33ea262f50dffd0e86d0f0
Reviewed-on: https://go-review.googlesource.com/75251 Reviewed-by: Alan Donovan <adonovan@google.com>
Evan Jones [Wed, 1 Nov 2017 16:44:14 +0000 (12:44 -0400)]
syscall: use setattrlist for UtimesNano on Darwin for ns resolution
Mac OS X 10.13 introduced APFS which stores nanosecond resolution
timestamps. The implementation of os.Stat already returns full
resolution timestamps, but os.Chtimes only sets timestamps with
microsecond resolution.
Fix this by using setattrlist on Darwin, which takes a struct timeval
with nanosecond resolution. This is what Mac OS X 10.13 appears to use
to implement utimensat, according to dtruss.
Fixes #22528
Change-Id: I397dabef6b2b73a081382999aa4c4405ab8c6015
Reviewed-on: https://go-review.googlesource.com/74952
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Tim Cooper [Wed, 27 Sep 2017 22:46:58 +0000 (19:46 -0300)]
crypto, hash: implement BinaryMarshaler, BinaryUnmarshaler in hash implementations
The marshal method allows the hash's internal state to be serialized and
unmarshaled at a later time, without having to re-write the entire stream
of data that was already written to the hash.
Fixes #20573
Change-Id: I40bbb84702ac4b7c5662f99bf943cdf4081203e5
Reviewed-on: https://go-review.googlesource.com/66710 Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Leigh McCulloch [Fri, 27 Oct 2017 07:04:57 +0000 (07:04 +0000)]
encoding/xml: ignore whitespace in values and attrs
Whitespace is ignored in bool values and attrs. It is convenient and
relatively safe since whitespace around a bool value is often
unimportant. The same logic can be applied to numeric values of types
int, uint, and float.
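A sketch of the behavior after this change (type and element names are illustrative):

    package main

    import (
        "encoding/xml"
        "fmt"
    )

    type Config struct {
        Enabled bool `xml:"enabled"`
        Count   int  `xml:"count"`
    }

    func main() {
        data := []byte(`<Config><enabled> true </enabled><count> 42 </count></Config>`)
        var c Config
        if err := xml.Unmarshal(data, &c); err != nil {
            fmt.Println("error:", err)
            return
        }
        // Whitespace around "true" and "42" is now ignored when parsing
        // bool and numeric values.
        fmt.Println(c.Enabled, c.Count) // true 42
    }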
Fixes #22146
Change-Id: Ie0462def90304af144b8e2e72d85b644857c27cc
Reviewed-on: https://go-review.googlesource.com/73891 Reviewed-by: Sam Whited <sam@samwhited.com> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Sam Whited <sam@samwhited.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Mikio Hara [Fri, 27 Oct 2017 09:13:17 +0000 (18:13 +0900)]
net/smtp: don't call testing.T.Fatal{,f} from goroutines not running Test function
Also replace the verbs used in error messages from %s to %v. In general,
low-level IO APIs return error values containing non-string types, and
there's no guarantee that all of those types implement the fmt.Stringer
interface.
Change-Id: I8a6e2a80d5c721c772a83b9556bac16556eaa771
Reviewed-on: https://go-review.googlesource.com/73931
Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Dmitri Shuralyov [Thu, 21 Sep 2017 05:53:13 +0000 (01:53 -0400)]
net/http: set Content-Type header for HEAD as well
In CL 50510, the Content-Type header started to be set in Redirect when
the request method is GET. (Prior to that, it wasn't set at all, which is
what said CL was fixing.) However, according to HTTP specification,
the expected response for a HEAD request is identical to that of a
GET request, but without the response body.
This CL updates the behavior to set the Content-Type header for HEAD
method in addition to GET.
This actually allows a simpler implementation than before. This change
largely reverts CL 50510, and applies the simpler implementation.
Add a test for Content-Type header and body for GET, HEAD requests.
Updates CL 50510.
Change-Id: If33ea3f4bbc5246bb5dc751458004828cfe681b9
Reviewed-on: https://go-review.googlesource.com/65190
Run-TryBot: Dmitri Shuralyov <shurcool@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com> Reviewed-by: Tom Bergan <tombergan@google.com>
Radek Sohlich [Tue, 31 Oct 2017 16:59:29 +0000 (17:59 +0100)]
os/signal: improve documentation for the Notify function
It is easy to miss in the documentation that passing no signal arguments
to Notify means that Notify will relay all possible signals to the channel.
So an example was added with an explicit comment above the Notify usage.
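The gist of the added example (a sketch; the exact code in the CL may differ):

    package main

    import (
        "fmt"
        "os"
        "os/signal"
    )

    func main() {
        c := make(chan os.Signal, 1)

        // Passing no signals after the channel means Notify relays
        // *all* incoming signals to c.
        signal.Notify(c)

        s := <-c
        fmt.Println("got signal:", s)
    }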
Fixes #22257
Change-Id: Ia6a16dd4a419f7c77d89020ca5db85979b5b474e
Reviewed-on: https://go-review.googlesource.com/74730
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Cherry Zhang [Fri, 6 Oct 2017 19:00:02 +0000 (15:00 -0400)]
cmd/internal/goobj: accept int64 in readInt
The counterpart, writeInt in cmd/internal/obj, writes int64s.
So the reader side should also read int64s. This may cause a
larger range of values to be accepted, some of which should
not be that large. This is probably ok: for example, for
size/index/length, a very large value (due to corruption)
may be well past the end and cause other errors. And we did
not do much bounds checking anyway.
One example where this matters is ARM32's object files. For one
type of relocation, the instruction is encoded into the Reloc.Add
field (which itself may be problematic and worth fixing), and the
instruction encoding overflows int32, which previously caused ARM32
object files to be rejected by goobj (and so objdump and nm).
Unskip ARM32 object file tests in goobj, nm, and objdump.
Alberto Donizetti [Wed, 1 Nov 2017 11:36:56 +0000 (12:36 +0100)]
math/big: avoid unnecessary Newton iteration in Float.Sqrt
An initial draft of the Newton code for Float.Sqrt was structured like
this:
    for condition
        // do Newton iteration..
        prec *= 2
since prec, at the end of the loop, was double the precision used in
the last Newton iteration, the termination condition was set to
2*limit. The code was later rewritten in the form
    for condition
        prec *= 2
        // do Newton iteration..
but the condition was not updated; it is still 2*limit, about double
what we actually need, which triggers an additional, unnecessary
Newton iteration.
This change adjusts the Newton termination condition to the (correct)
value of z.prec, plus 32 guard bits as a safety margin.
Joe Kyo [Tue, 31 Oct 2017 09:24:44 +0000 (09:24 +0000)]
net/http: fix typo in doc string
Change-Id: I4542f6c095a35a4dec03c67c45a75a155197eb56
Reviewed-on: https://go-review.googlesource.com/74650 Reviewed-by: Tom Bergan <tombergan@google.com>
Alessandro Arzilli [Sat, 21 Oct 2017 10:45:23 +0000 (12:45 +0200)]
compile, link: remove base address selector from DWARF range lists
Dsymutil, a utility used on macOS when externally linking executables,
does not support base address selector entries in debug_ranges.
To work around this deficiency, this commit removes base address
selectors from debug_ranges and instead emits a list composed only of
compile-unit-relative addresses.
A new type of relocation is introduced, R_ADDRCUOFF, similar to
R_ADDROFF, that relocates an address to its offset from the low_pc of
the symbol's compile unit.
Russ Cox [Tue, 31 Oct 2017 16:59:47 +0000 (12:59 -0400)]
cmd/go: pass package config to vet during "go vet"
After this CL, "go vet" can be guaranteed to have complete type information
about the packages being checked, even if cgo or swig is in use,
which will in turn make it reasonable for vet checks to insist on type
information. It also fixes vet's understanding of unusual import paths
like relative paths and vendored packages.
For now "go tool vet" will continue to cope without type information,
but the eventual plan is for "go tool vet" to query the go command for
what it needs, and also to be able to query alternate build systems
like bazel. But that's future work.
Change-Id: I932626ee7da649b302cd269b82eb6fe5d7b9f0f2
Reviewed-on: https://go-review.googlesource.com/74750 Reviewed-by: Ian Lance Taylor <iant@golang.org>
Russ Cox [Mon, 30 Oct 2017 15:16:09 +0000 (11:16 -0400)]
cmd/vet: accept package config from go command
This CL adds support for accepting package config from
the go command. Paired with CL 74356 this lets us make
sure vet has complete information about package sources.
This fixes many issues (see CL 74356 for the list), including
mishandling of cgo and vendoring.
Change-Id: Ia4a1dce6f9b1b0a8ef5fdf9005a20a8b294969f1
Reviewed-on: https://go-review.googlesource.com/74355
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org> Reviewed-by: Rob Pike <r@golang.org>
Martin Möhrmann [Sat, 12 Aug 2017 15:37:13 +0000 (17:37 +0200)]
runtime: protect growslice against newcap*et.size overflow
The check of uintptr(newcap) > maxSliceCap(et.size) in addition
to capmem > _MaxMem is needed to prevent a reproducible overflow
on 32bit architectures.
On 64-bit platforms this problem is less likely to occur, as allocating
an array or slice large enough to be appended to is likely to
already exhaust available memory before the call to append can be made.
Example program that, without the fix in this CL, segfaults on 386:
Russ Cox [Tue, 31 Oct 2017 19:42:17 +0000 (15:42 -0400)]
cmd/link: do not store compilation directory in DWARF info
Storing the compilation directory makes 'go install cmd/compile' in one
directory produce a different binary than running it in another directory,
which is problematic for reproducible builds.
Michael Fraenkel [Sun, 29 Oct 2017 00:50:57 +0000 (20:50 -0400)]
encoding/json: Include the offset of a SyntaxError
When a SyntaxError occurs, report the current offset within the stream.
The code already accounted for the offset within the current buffer
being scanned. By including how much data was already scanned, the
current offset can be computed.
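A sketch of how callers can use the offset (illustrative):

    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    func main() {
        dec := json.NewDecoder(strings.NewReader(`{"a": 1} {"b": }`))
        for {
            var v interface{}
            if err := dec.Decode(&v); err != nil {
                if serr, ok := err.(*json.SyntaxError); ok {
                    // Offset is now the position within the whole stream,
                    // not just within the Decoder's current buffer.
                    fmt.Printf("syntax error at offset %d: %v\n", serr.Offset, serr)
                }
                return
            }
        }
    }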
Fixes #22478
Change-Id: I91ecd4cad0b85a5c1556bc597f3ee914e769af01
Reviewed-on: https://go-review.googlesource.com/74251 Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
Run-TryBot: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Ivan Bertona [Tue, 31 Oct 2017 20:16:38 +0000 (13:16 -0700)]
encoding/json: disallow unknown fields in Decoder
Add a DisallowUnknownFields flag to Decoder.
DisallowUnknownFields causes the Decoder to return an error when
the decoding destination is a struct and the input contains
object keys which do not match any non-ignored, public field in the
destination, including keys whose value is set to null.
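Usage looks roughly like this (a sketch):

    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    type T struct {
        A int
    }

    func main() {
        dec := json.NewDecoder(strings.NewReader(`{"A": 1, "B": 2}`))
        dec.DisallowUnknownFields()

        var t T
        // Fails because "B" does not match any field of T.
        if err := dec.Decode(&t); err != nil {
            fmt.Println("error:", err)
        }
    }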
Note: this fix has already been worked on in 27231, which seems
to be abandoned. This version is a slightly simpler implementation
and is up to date with the master branch.
Fixes #15314
Change-Id: I987a5857c52018df334f4d1a2360649c44a7175d
Reviewed-on: https://go-review.googlesource.com/74830 Reviewed-by: Joe Tsai <joetsai@google.com>
Run-TryBot: Joe Tsai <joetsai@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Hana (Hyang-Ah) Kim [Mon, 24 Jul 2017 15:51:03 +0000 (11:51 -0400)]
runtime/trace: fix corrupted trace during StartTrace
Since Go1.8, different types of GC mark workers were annotated and the
annotation strings were recorded during StartTrace. This change fixes
two issues around the use of traceString from StartTrace here.
1) "failed to parse trace: no consistent ordering of events possible"
This issue is the result of a missing 'batch' event entry. For efficient
tracing, the tracer maintains system-allocated buffers, and once a buffer
is full it is flushed out for writing. Moreover, tracing assumes all
the records in the same buffer (batch) are already ordered, applies
further optimizations in the encoding, and defers the complete order
reconstruction until trace parsing time. Thus, when a flush happens
and a new buffer is used, the new buffer should contain an event to
indicate the start of a new batch. Before this CL, the batch entry was
written only by traceEvent, and only when the buffer position was 0;
it wasn't written when a flush occurred during traceString.
This CL fixes this by moving the batch entry write to traceFlush.
2) crash during tracing due to invalid memory access, or during parsing
due to duplicate string entries
This issue is the result of memory allocation during traceString calls.
The execution tracer traces some memory allocation activities. Before this
CL, traceString took the buffer address (*traceBuf) and mutated the buffer.
If memory tracing occurred in the meantime from the same P, the allocation
tracing (traceEvent) would take the same buffer address through the pointer
to the buffer address (**traceBuf) and mutate the buffer.
As a result, one of the following could happen:
- the allocation record is overwritten by the following trace string
record (data loss)
- if buffer flush occurs during the allocation tracing, traceString
will attempt to write the string record to the old buffer and
eventually causes invalid memory access crash.
- or flush on the same buffer can occur twice (once from the memory
allocation, and once from the string record write), and in this case
the trace can contain the same data twice and the parse will complain
about duplicate string record entries.
This CL fixes the second issue by making traceString take
**traceBuf (*traceBufPtr).
runtime: allow 5% mutator assist over 25% background mark
Currently, both the background mark worker and the goal GC CPU are
both fixed at 25%. The trigger controller's goal is to achieve the
goal CPU usage, and with the previous commit it can actually achieve
this. But this means there are *no* assists, which sounds ideal but
actually causes problems for the trigger controller. Since the
controller can't lower CPU usage below the background mark worker CPU,
it saturates at the CPU goal and no longer gets feedback, which
translates into higher variability in heap growth.
This commit fixes this by allowing assists 5% CPU beyond the 25% fixed
background mark. This avoids saturating the trigger controller, since
it can now get feedback from both sides of the CPU goal. This leads to
low variability in both CPU usage and heap growth, at the cost of
reintroducing a low rate of mark assists.
We also experimented with 20% background plus 5% assist, but 25%+5%
clearly performed better in benchmarks.
Updates #14951.
Updates #14812.
Updates #18534.
Combined with the previous CL, this significantly improves tail
mutator utilization in the x/benchmarks garbage benchmark. On a sample
trace, it increased the 99.9%ile mutator utilization at 10ms from 26%
to 59%, and at 5ms from 17% to 52%. It reduced the 99.9%ile zero
utilization window from 2ms to 700µs. It also helps the mean mutator
utilization: it increased the 10s mutator utilization from 83% to 94%.
The minimum mutator utilization is also somewhat improved, though
there is still some unknown artifact that causes a minuscule fraction
of mutator assists to take 5--10ms (in fact, there was exactly one
10ms mutator assist in my sample trace).
This has no significant effect on the throughput of the
github.com/dr2chase/bent benchmarks-50.
This has little effect on the go1 benchmarks (and the slight overall
improvement makes up for the slight overall slowdown from the previous
commit):
Currently, GC pacing is based on a single hard heap limit computed
based on GOGC. In order to achieve this hard limit, assist pacing
makes the conservative assumption that the entire heap is live.
However, in the steady state (with GOGC=100), only half of the heap is
live. As a result, the garbage collector works twice as hard as
necessary and finishes half way between the trigger and the goal.
Since this is a stable state for the trigger controller, this repeats
from cycle to cycle. Matters are even worse if GOGC is higher. For
example, if GOGC=200, only a third of the heap is live in steady
state, so the GC will work three times harder than necessary and
finish only a third of the way between the trigger and the goal.
Since this causes the garbage collector to consume ~50% of the
available CPU during marking instead of the intended 25%, about 25% of
the CPU goes to mutator assists. This high mutator assist cost causes
high mutator latency variability.
This commit improves the situation by separating the heap goal into
two goals: a soft goal and a hard goal. The soft goal is set based on
GOGC, just like the current goal is, and the hard goal is set at a 10%
larger heap than the soft goal. Prior to the soft goal, assist pacing
assumes the heap is in steady state (e.g., only half of it is live).
Between the soft goal and the hard goal, assist pacing switches to the
current conservative assumption that the entire heap is live.
In benchmarks, this nearly eliminates mutator assists. However, since
background marking is fixed at 25% CPU, this causes the trigger
controller to saturate, which leads to somewhat higher variability in
heap size. The next commit will address this.
The lower CPU usage of course leads to longer mark cycles, though
really it means the mark cycles are as long as they should have been
in the first place. This does, however, lead to two potential
down-sides compared to the current pacing policy: 1. the total
overhead of the write barrier is higher because it's enabled more of
the time and 2. the heap size may be larger because there's more
floating garbage. We addressed 1 by significantly improving the
performance of the write barrier in the preceding commits. 2 can be
demonstrated in intense GC benchmarks, but doesn't seem to be a
problem in any real applications.
It does increase the heap size of the garbage benchmarks, but seems to
have relatively little impact on more realistic programs. Also, we'll
gain some of this back with the next commit.
Cherry Zhang [Tue, 31 Oct 2017 15:48:13 +0000 (11:48 -0400)]
cmd/compile: on ARM, make sure *const's AuxInt fit into int32
Previously some of the AuxInt values were uint32, which may not fit into
int32. This CL converts them to int32. This does not change the
generated code, but makes ssacheck happy.
Daniel Martí [Sat, 28 Oct 2017 16:35:27 +0000 (17:35 +0100)]
all: unindent some if bodies by exiting early
All of these had a return or break in the else body, so flipping the
condition means we can unindent and simplify.
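The shape of the change, schematically (my illustration, not one of the actual call sites):

    package p

    import "errors"

    var errBad = errors.New("bad")

    // Before: the else branch only returns, forcing the main body to be indented.
    func before(ok bool) error {
        if ok {
            // ... long body ...
            return nil
        } else {
            return errBad
        }
    }

    // After: flip the condition and exit early; the body is unindented.
    func after(ok bool) error {
        if !ok {
            return errBad
        }
        // ... long body ...
        return nil
    }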
Change-Id: If93e97504480d18a0dac3f2c8ffe57ab8bcb929c
Reviewed-on: https://go-review.googlesource.com/74190
Run-TryBot: Daniel Martí <mvdan@mvdan.cc> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Previously, anytime we exported a function or method declaration
(which includes methods for every type transitively exported), we
included the inline function bodies, if any. However, in many cases,
it's impossible (or at least very unlikely) for the importing package
to call the method.
For example:
package p
type T int
func (t T) M() { t.u() }
func (t T) u() {}
func (t T) v() {}
T.M and T.u are inlineable, and they're both reachable through calls
to T.M, which is exported. However, t.v is also inlineable, but cannot
be reached.
Exception: if p.T is embedded in another type q.U, p.T.v will be
promoted to q.U.v, and the generated wrapper function could have
inlined the call to p.T.v. However, in practice, this doesn't happen,
and a missed inlining opportunity doesn't affect correctness.
To implement this, this CL introduces an extra flood fill pass before
exporting to mark inline bodies that are actually reachable, so the
exporter can skip over methods like t.v.
This reduces Kubernetes build time (as measured by "time go build -a
k8s.io/kubernetes/cmd/...") on an HP Z620 measurably:
== before ==
real 0m44.658s
user 11m19.136s
sys 0m53.844s
== after ==
real 0m41.702s
user 10m29.732s
sys 0m50.908s
It also significantly cuts down the cost of enabling mid-stack
inlining (-l=4):
== before (-l=4) ==
real 1m19.236s
user 20m6.528s
sys 1m17.328s
== after (-l=4) ==
real 0m59.100s
user 13m12.808s
sys 0m58.776s
Updates #19348.
Change-Id: Iade58233ca42af823a1630517a53848b5d3c7a7e
Reviewed-on: https://go-review.googlesource.com/74110
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Robert Griesemer <gri@golang.org>
Cherry Zhang [Thu, 5 Oct 2017 20:14:14 +0000 (16:14 -0400)]
test: fix and re-enable nosplit.go
The test was skipped because it did not work on AMD64 with
frame pointer enabled, and was accidentally skipped on other
architectures. Now frame pointer is the default on AMD64.
Update the test to work with frame pointer. Now the test
is skipped only when frame pointer is NOT enabled on AMD64.
Joe Kyo [Mon, 16 Oct 2017 06:25:14 +0000 (07:25 +0100)]
crypto/tls: remove bookkeeping code from pHash function
Since the built-in copy function only copies as many bytes as the
shorter of the two slices holds, it is not necessary to check how many
bytes need to be copied each time before copying the data.
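For reference, the copy semantics being relied on (my illustration):

    package main

    import "fmt"

    func main() {
        dst := make([]byte, 3)
        src := []byte{1, 2, 3, 4, 5}

        // copy transfers min(len(dst), len(src)) bytes and reports how many
        // it copied, so no explicit length bookkeeping is needed.
        n := copy(dst, src)
        fmt.Println(n, dst) // 3 [1 2 3]
    }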
Cherry Zhang [Wed, 4 Oct 2017 18:33:10 +0000 (14:33 -0400)]
cmd/compile: don't fold address of global into load/store on PPC64
On PPC64 (and a few other architectures), accessing a global
requires multiple instructions and the use of a temp register.
The compiler emits a single MOV prog, and the assembler
expands it to multiple instructions. If globals are accessed
multiple times, each time it generates a reload of the temp
register. As this is done by the assembler, the compiler
cannot optimize it.
This CL makes the compiler not fold the address of a global into load
and store. If a global is accessed multiple times, or multiple
fields of a struct are accessed, the compiler can CSE the
address. Currently, this doesn't help the case where different
globals are accessed, even though they may be close to each
other in the address space (which we don't know at compile time).
It also helps in the case mentioned in issue #17110, main.main
in package math's test. Now it generates 4 loads of R31 instead
of 10, for the same piece of code.
This causes a slight increase of binary size: cmd/go increases
0.66%.
If this is a good idea, we should do it on other architectures
where accessing globals is expensive.