David Chase [Mon, 8 Feb 2016 17:07:39 +0000 (12:07 -0500)]
[dev.ssa] cmd/compile: fix for bug in cse speed improvements
The problem was caused by using Args[].Aux differences
in the early partitioning. This artificially separated
two equivalent expressions: because the sort ignores the
Aux field, equal values can end up separated by unequal
ones, and the equal values are then split into more than
one partition. For example:
SliceLen(a), SliceLen(b), SliceLen(a).
Fix: don't use Args[].Aux in initial partitioning.
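A minimal sketch of the failure mode (not the compiler's code): when the sort
comparator ignores a field that full equality uses, equal elements can end up
non-adjacent, and splitting at each adjacent difference then puts them into
different partitions.

    package main

    import "fmt"

    type key struct{ op, aux string }

    func main() {
        // Ordered as far as the comparator is concerned (it ignores aux):
        // SliceLen(a), SliceLen(b), SliceLen(a).
        vals := []key{{"SliceLen", "a"}, {"SliceLen", "b"}, {"SliceLen", "a"}}

        // Partition by splitting wherever two adjacent elements differ.
        parts := [][]key{{vals[0]}}
        for _, v := range vals[1:] {
            last := parts[len(parts)-1]
            if v != last[len(last)-1] {
                parts = append(parts, nil)
            }
            parts[len(parts)-1] = append(parts[len(parts)-1], v)
        }
        fmt.Println(len(parts)) // 3: the two equal SliceLen(a) values end up split
    }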
Left in a debugging flag and some debugging Fprintf's;
not sure if that is house style or not. We'll probably
want to be more systematic in our naming conventions,
e.g. ssa.cse, ssa.scc, etc.
Change-Id: Ib1412539cc30d91ea542c0ac7b2f9b504108ca7f
Reviewed-on: https://go-review.googlesource.com/19316 Reviewed-by: Keith Randall <khr@golang.org>
Todd Neal [Sun, 7 Feb 2016 02:56:50 +0000 (20:56 -0600)]
[dev.ssa] cmd/compile: speed up cse
Examine both Aux and AuxInt to form more precise initial partitions.
Restructure loop to avoid repeated type.Equal() call. Speeds up
compilation of testdata/gen/arithConst_ssa by 25%.
Alexandru Moșoi [Thu, 4 Feb 2016 18:52:10 +0000 (19:52 +0100)]
[dev.ssa] cmd/compile/internal/ssa/gen: enclose rules' code in a for loop
* Enclose each rule's code in a for with no condition
* The loop is run at most once because it's always terminated by a return.
* Use break when a matching condition fails
* Drop rule hashes
* Shaves about 3 lines of code per rule
The binary size is not affected.
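A heavily simplified sketch of the generated shape (real rules match SSA values
and mutate them in place; the rule names here are only placeholders):

    package main

    import "fmt"

    // rewriteValue mimics the generated structure: each rule's body sits in a
    // for with no condition, runs at most once, breaks out when a match
    // condition fails, and returns true when the rewrite applies.
    func rewriteValue(op string) bool {
        // rule 1 (illustrative): fires only for "NeqPtr"
        for {
            if op != "NeqPtr" {
                break
            }
            fmt.Println("rule 1 fired")
            return true
        }
        // rule 2 (illustrative): fires only for "EqPtr"
        for {
            if op != "EqPtr" {
                break
            }
            fmt.Println("rule 2 fired")
            return true
        }
        return false
    }

    func main() {
        fmt.Println(rewriteValue("EqPtr")) // rule 2 fired; true
    }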
Change-Id: I27c3e40dc8cae98dcd50739342dc38db2ef9c247
Reviewed-on: https://go-review.googlesource.com/19220 Reviewed-by: Keith Randall <khr@golang.org>
Keith Randall [Thu, 4 Feb 2016 19:21:31 +0000 (11:21 -0800)]
[dev.ssa] cmd/internal/obj/x86: don't clobber flags with dynlink rewrite
LEAQ symbol+100(SB), AX
under dynamic linking is rewritten to
MOVQ symbol@GOT(SB), AX
ADDQ $100, AX
but ADDQ clobbers flags, whereas the original LEAQ (when not dynamically
linking) does not.
Use LEAQ instead of ADDQ to add that constant in so we preserve flags.
Change-Id: Ibb055403d94a4c5163e1c7d2f45da633ffd0b6a3
Reviewed-on: https://go-review.googlesource.com/19230 Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com> Reviewed-by: Ian Lance Taylor <iant@golang.org>
Alexandru Moșoi [Wed, 3 Feb 2016 18:43:46 +0000 (19:43 +0100)]
[dev.ssa] cmd/compile/internal/ssa: simplify comparisons with constants
* Simplify comparisons of the form a + const1 == const2 or a + const1 != const2.
* Canonicalize Eq, Neq, Add, Sub to have a constant as the first argument.
Needed for the above new rules and helps constant folding.
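A worked example of the identity behind the new rules (ordinary Go code, not
the rule syntax); fixed-width integer arithmetic wraps consistently on both
sides, so the rewrite is valid for every a:

    package main

    import "fmt"

    func main() {
        var a int64 = 40
        const c1, c2 = int64(2), int64(42)
        // a + c1 == c2 can be rewritten to a == c2 - c1, with c2 - c1 folded
        // at compile time.
        fmt.Println(a+c1 == c2, a == c2-c1) // true true
        // Canonicalizing Add/Eq/Neq so that a constant comes first means rules
        // like the above only have to match one operand order.
    }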
Change-Id: I8078702a5daa706da57106073a3e9f640a67f486
Reviewed-on: https://go-review.googlesource.com/19192 Reviewed-by: Keith Randall <khr@golang.org>
Todd Neal [Tue, 2 Feb 2016 11:35:34 +0000 (06:35 -0500)]
[dev.ssa] cmd/compile: cache sparse sets in Config
Move the cached sparse sets to the Config. I tested make.bash with
pre-allocating sets of size 150 and not caching very small sets, but the
difference between this implementation (no min size, no preallocation)
and a min size with preallocation was fairly negligible:
Number of sparse sets allocated:
  Cached in Config, none preallocated, no min size:      3684  *this CL*
  Cached in Config, three preallocated, no min size:     3370
  Cached in Config, three preallocated, min size = 150:  3370
  Cached in Config, none preallocated, min size = 150:  15947
  Cached in Func, no min size:                          96996  *previous code*
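A compact sketch of the caching scheme, assuming a simple free-list design
(type and method names here are hypothetical): scratch sets live on the Config
and are reused by every Func compiled with it.

    package main

    type sparseSet struct {
        dense  []int32
        sparse []int32
    }

    type Config struct {
        scratch []*sparseSet // cached scratch sets, reused across Funcs
    }

    func (c *Config) newSparseSet(n int) *sparseSet {
        for i, s := range c.scratch {
            if s != nil && len(s.sparse) >= n {
                c.scratch[i] = nil
                s.dense = s.dense[:0] // cheap clear; the classic sparse-set
                return s              // trick means sparse needs no reset
            }
        }
        return &sparseSet{sparse: make([]int32, n)}
    }

    func (c *Config) retSparseSet(s *sparseSet) {
        for i := range c.scratch {
            if c.scratch[i] == nil {
                c.scratch[i] = s
                return
            }
        }
        c.scratch = append(c.scratch, s)
    }

    func main() {
        c := &Config{}
        s := c.newSparseSet(64)
        c.retSparseSet(s)
        _ = c.newSparseSet(32) // reuses the cached 64-entry set
    }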
Change-Id: I7f9de8a7cae192648a7413bfb18a6690fad34375
Reviewed-on: https://go-review.googlesource.com/19152 Reviewed-by: Keith Randall <khr@golang.org>
David Chase [Sat, 30 Jan 2016 22:37:38 +0000 (17:37 -0500)]
[dev.ssa] cmd/compile: reducing alloc footprint of dominator calc
Converted the working slices of pointers into slices of pointer
indices. They are half the size (on a 64-bit machine) and contain
no pointers to trace if a GC occurs while they're live.
TODO - could expose slice mapping ID->*Block; some dom
clients also construct these.
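A small sketch of the representation change (types heavily simplified):
working arrays hold 32-bit IDs rather than *Block pointers, plus one
ID->*Block slice to recover the block when needed.

    package main

    import "fmt"

    type ID int32

    type Block struct {
        ID ID
        // ... control-flow edges, values, etc.
    }

    func main() {
        blocks := []*Block{{ID: 0}, {ID: 1}, {ID: 2}}

        // One ID -> *Block lookup table (what the TODO above refers to).
        byID := make([]*Block, len(blocks))
        for _, b := range blocks {
            byID[b.ID] = b
        }

        // A working array held as IDs instead of pointers, e.g. a postorder.
        post := []ID{1, 2, 0}
        for _, id := range post {
            fmt.Println(byID[id].ID)
        }
    }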
Minor optimization in regalloc that cuts allocation count.
Minor optimization in compile.go that cuts calls to Sprintf.
Keith Randall [Tue, 26 Jan 2016 23:55:05 +0000 (15:55 -0800)]
[dev.ssa] cmd/compile: tweak init function prologue
We used to compare the init state with == against 0 and 2, which
requires two comparisons. Instead, compare against 1 and use
<, ==. That requires only one comparison.
This isn't a big deal performance-wise, as it is just init
code. But there is a fair amount of init code, so this
should help a bit with code size.
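An illustrative sketch only; the real check is emitted by the compiler at the
top of each init function, and the variable name here is hypothetical. With
states 0 = not started, 1 = in progress, 2 = done, both tests pivot on the
single constant 1.

    package main

    var initdone uint8 // 0 = not started, 1 = in progress, 2 = done

    func initPkg() {
        if initdone < 1 {
            // not started: fall through and run the initializers below
        } else if initdone == 1 {
            panic("recursive call during initialization")
        } else {
            return // already done (state 2)
        }
        initdone = 1
        // ... run the package-level initializers ...
        initdone = 2
    }

    func main() {
        initPkg()
        initPkg() // second call returns immediately
    }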
Change-Id: I4a2765f1005776f0edce28ac143f4b7596d95a68
Reviewed-on: https://go-review.googlesource.com/18948 Reviewed-by: David Chase <drchase@google.com>
Keith Randall [Tue, 26 Jan 2016 01:06:54 +0000 (17:06 -0800)]
[dev.ssa] cmd/compile: fix write barriers for SSA
The old write barriers used _nostore versions, which
don't work for Ian's cgo checker. Instead, we adopt the
same write barrier pattern as the default compiler.
It's a bit trickier to code up but should be more efficient.
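One common shape for such a barrier, shown as a hedged sketch (the flag and
helper are simplified stand-ins for runtime internals, not the compiler's
actual output): branch on the write-barrier flag and either call a helper that
stores and records the pointer, or store directly.

    package main

    var writeBarrierEnabled bool // stand-in for the runtime's write-barrier flag

    type node struct{ next *node }

    // writebarrierptr stands in for the runtime helper: record the pointer
    // for the garbage collector, then perform the store.
    func writebarrierptr(slot **node, ptr *node) {
        // (GC bookkeeping elided)
        *slot = ptr
    }

    func setNext(n, next *node) {
        if writeBarrierEnabled {
            writebarrierptr(&n.next, next)
        } else {
            n.next = next
        }
    }

    func main() {
        a, b := &node{}, &node{}
        setNext(a, b)
    }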
Change-Id: I6696c3656cf179e28f800b0e096b7259bd5f3bb7
Reviewed-on: https://go-review.googlesource.com/18941
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: David Chase <drchase@google.com>
For each value that needs to be in a fixed register at the end of the
block, try to pick that fixed register when the instruction
generating that value is scheduled (or restored from a spill).
Just used for end-of-block register requirements for now.
Fixed-register instruction requirements (e.g. shift in ecx) can be
added later. Also two-instruction constraints (input reg == output
reg) might be recorded in a similar manner.
Change-Id: I59916e2e7f73657bb4fc3e3b65389749d7a23fa8
Reviewed-on: https://go-review.googlesource.com/18774
Run-TryBot: Keith Randall <khr@golang.org> Reviewed-by: David Chase <drchase@google.com>
Keith Randall [Fri, 22 Jan 2016 21:44:58 +0000 (13:44 -0800)]
[dev.ssa] cmd/compile: disable xor clearing when flags must be preserved
The x86 backend automatically rewrites MOV $0, AX to
XOR AX, AX. That rewrite isn't ok when the flags register
is live across the MOV. Keep track of which moves care
about preserving flags, then disable this rewrite for them.
On x86, Prog.Mark was being used to hold the length of the
instruction. We already store that in Prog.Isize, so no
need to store it in Prog.Mark also. This frees up Prog.Mark
to hold a bitmask on x86 just like all the other architectures.
Keith Randall [Wed, 13 Jan 2016 19:14:57 +0000 (11:14 -0800)]
[dev.ssa] cmd/compile: report better line numbers for Unimplemented/Fatal
If a failure occurs in SSA processing, we always report the
last line of the function we're compiling. Modify the callbacks
from SSA to the GC compiler so we can pass a line number back
and use it in Fatalf.
Change-Id: Ifbfad50d5e167e997e0a96f0775bcc369f5c397e
Reviewed-on: https://go-review.googlesource.com/18599
Run-TryBot: Keith Randall <khr@golang.org> Reviewed-by: David Chase <drchase@google.com>
Brad Fitzpatrick [Mon, 18 Jan 2016 19:43:32 +0000 (11:43 -0800)]
net/http: panic on bogus use of CloseNotifier or Hijacker
Fixes #14001
Change-Id: I6f9bc3028345081758d8f537c3aaddb2e254e69e
Reviewed-on: https://go-review.googlesource.com/18708 Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Mikio Hara [Sun, 17 Jan 2016 00:04:58 +0000 (09:04 +0900)]
net: disable TestInterfaceAddrsWithNetsh on windows
Updates #13981.
Change-Id: Id8f3cd56a81a7a993cea5c757e619407da491fed
Reviewed-on: https://go-review.googlesource.com/18710 Reviewed-by: Ian Lance Taylor <iant@golang.org>
Austin Clements [Fri, 15 Jan 2016 18:28:41 +0000 (13:28 -0500)]
runtime: fix sleep/wakeup race for GC assists
GC assists check gcBlackenEnabled under the assist queue lock to avoid
going to sleep after gcWakeAllAssists has already woken all assists.
However, currently we clear gcBlackenEnabled shortly *after* waking
all assists, which opens a window where this exact race can happen.
Fix this by clearing gcBlackenEnabled before waking blocked assists.
However, it's unlikely this actually matters because the world is
stopped between waking assists and clearing gcBlackenEnabled and there
aren't any obvious allocations during this window, so I don't think an
assist could actually slip into this race window.
Mikio Hara [Fri, 15 Jan 2016 07:37:47 +0000 (16:37 +0900)]
runtime: readjust signal code for dragonfly-2.6 and above
Also adds missing nosplit to unminit.
Fixes #13964.
Change-Id: I07d93a8c872a255a89f91f808b66c889f0a6a69c
Reviewed-on: https://go-review.googlesource.com/18658 Reviewed-by: Ian Lance Taylor <iant@golang.org>
Brad Fitzpatrick [Fri, 15 Jan 2016 00:29:24 +0000 (16:29 -0800)]
net/http: update bundled http2
If a user starts two HTTP requests when no http2 connection is
available, both end up creating new TCP connections, since the
server's protocol (h1 or h2) isn't yet known. Once it turns out that
the server supports h2, one of the connections is useless. Previously
we kept upgrading both TLS connections to h2 (SETTINGS frame exchange,
etc). Now the unnecessary connections are closed instead, before the
h2 preface/SETTINGS.
Updates x/net/http2 to git rev a8e212f3d for https://golang.org/cl/18675
This CL contains the tests for https://golang.org/cl/18675
Semi-related change noticed while writing the tests: now that we have
TLSNextProto in Go 1.6, which consults the TLS
ConnectionState.NegotiatedProtocol, we have to guarantee that the TLS
handshake has been done before we look at the ConnectionState. So add
that check after the DialTLS hook. (we never documented that users
have to call Handshake, so do it for them, now that it matters)
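A sketch of that post-DialTLS check, assuming only the public crypto/tls API
(the real logic lives inside net/http.Transport):

    package main

    import (
        "crypto/tls"
        "net"
    )

    // afterDialTLS sketches the check: if the user's DialTLS returned a
    // *tls.Conn, complete the handshake before consulting NegotiatedProtocol.
    func afterDialTLS(conn net.Conn) (net.Conn, string, error) {
        tc, ok := conn.(*tls.Conn)
        if !ok {
            return conn, "", nil // not a *tls.Conn; nothing to check
        }
        if err := tc.Handshake(); err != nil { // user code may not have done this
            return nil, "", err
        }
        // Only valid after the handshake; "h2" selects the HTTP/2 transport.
        return tc, tc.ConnectionState().NegotiatedProtocol, nil
    }

    func main() {}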
Austin Clements [Thu, 14 Jan 2016 21:43:40 +0000 (16:43 -0500)]
runtime: use at least "system" traceback level for runtime tests
While the default behavior of eliding runtime frames from tracebacks
usually makes sense, this is not the case when you're trying to test
the runtime itself. Fix this by forcing the traceback level to at
least "system" in the runtime tests.
This will specifically help with debugging issue #13645, which has
proven remarkably resistant to reproduction outside of the build
dashboard itself.
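A minimal sketch of forcing the level from ordinary code, using the public
runtime/debug API (how the runtime's own tests hook this in is not shown here):

    package main

    import "runtime/debug"

    func main() {
        // SetTraceback raises the traceback level (it cannot lower it below
        // the GOTRACEBACK environment setting); "system" includes runtime frames.
        debug.SetTraceback("system")
        panic("runtime frames now appear in this traceback")
    }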
Ian Lance Taylor [Thu, 14 Jan 2016 17:10:57 +0000 (09:10 -0800)]
runtime: remove erroneous go:noescape declaration
Change-Id: I6b1dc789e54a385c958961e7ba16bfd9d0f3b313
Reviewed-on: https://go-review.googlesource.com/18629 Reviewed-by: Ian Lance Taylor <iant@golang.org>
Ian Lance Taylor [Tue, 12 Jan 2016 23:34:03 +0000 (15:34 -0800)]
runtime: minimize time between lockextra/unlockextra
This doesn't fix a bug, but may improve performance in programs that
have many concurrent calls from C to Go. The old code made several
system calls between lockextra and unlockextra. That could happen
while another thread is spinning, trying to acquire lockextra. This changes the
code to not make any system calls while holding the lock.
Austin Clements [Wed, 13 Jan 2016 20:14:26 +0000 (15:14 -0500)]
runtime: fix several issues in TestFutexsleep
TestFutexsleep is supposed to clean up before returning by waking up
the goroutines it started and left blocked in futex sleeps. However,
it currently fails at this in several ways:
1. Both the sleep and wakeup are done on the address of tt.mtx, but in
both cases tt is a *local copy* of the futexsleepTest created by a
loop, so the sleep and wakeup happen on completely different
addresses. Fix this by making them both use the address of the
global tt.mtx.
2. If the sleep happens after the wakeup (not likely, but not
impossible), it won't wake up. Fix this by using the futex protocol
properly: sleep if the mutex's value is 0, and set the mutex's
value to non-zero before doing the wakeup.
3. If TestFutexsleep runs more than once, channels and mutex values
left over from the first run will interfere with later runs. Fix
this by clearing the mutex value and creating a new channel for
each test and waiting for goroutines to finish before returning
(lest they send their completion to the channel for the next run).
As an added bonus, this test now actually tests that futex
sleep/wakeup work. Previously this test would have been satisfied if
futexsleep was an infinite loop and futexwakeup was a no-op.
Change-Id: I1cbc6871cc9dcb8f4601b3621913bec2b79b0fc3
Reviewed-on: https://go-review.googlesource.com/18617 Reviewed-by: Ian Lance Taylor <iant@golang.org> Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
Austin Clements [Fri, 8 Jan 2016 21:25:29 +0000 (16:25 -0500)]
debug/dwarf: fix nil pointer dereference in cyclic type structures
Currently readType simultaneously constructs a type graph and resolves
the sizes of the types. However, these two operations are
fundamentally at odds: the order we parse a cyclic structure in may be
different than the order we need to resolve type sizes in. As a
result, it's possible that when readType attempts to resolve the size
of a typedef, it may dereference a nil Type field of another typedef
retrieved from the type cache that's only partially constructed.
To fix this, we delay resolving typedef sizes until the end of the
readType recursion, when the full type graph is constructed.
Russ Cox [Tue, 5 Jan 2016 14:27:40 +0000 (09:27 -0500)]
cmd/internal/obj: separate code layout from object writing
This will allow the compiler to crunch Prog lists down to code as each
function is compiled, instead of waiting until the end, which should
reduce the working set of the compiler. But not until Go 1.7.
This also makes it easier to write some machine code output tests
for the assembler, which is why it's being done now.
For #13822.
Change-Id: I0811123bc6e5717cebb8948f9cea18e1b9baf6f7
Reviewed-on: https://go-review.googlesource.com/18311 Reviewed-by: Ian Lance Taylor <iant@golang.org>
Russ Cox [Wed, 13 Jan 2016 05:46:28 +0000 (00:46 -0500)]
cmd/compile: recognize Syscall-like functions for liveness analysis
Consider this code:
    func f(*int)

    func g() {
        p := new(int)
        f(p)
    }
where f is an assembly function.
In general liveness analysis assumes that during the call to f, p is dead
in this frame. If f has retained p, p will be found alive in f's frame and keep
the new(int) from being garbage collected. This is all correct and works.
We use the Go func declaration for f to give the assembly function
liveness information (the arguments are assumed live for the entire call).
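For illustration, a Syscall-based analogue of g, of the shape the message
refers to below as h (a sketch: the descriptor, length, and the Linux-specific
SYS_READ are placeholders):

    package main

    import (
        "syscall"
        "unsafe"
    )

    // h passes a heap pointer to a raw system call; only the uintptr value is
    // visible, so liveness analysis and the garbage collector do not see p as
    // referenced during the call.
    func h(fd int) {
        p := new(int64)
        syscall.Syscall(syscall.SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(p)), 8)
    }

    func main() {}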
Here syscall.Syscall is taking the place of f, but because its arguments
are uintptr, the liveness analysis and the garbage collector ignore them.
Since p is no longer live in h once the call starts, if the garbage collector
scans the stack while the system call is blocked, it will find no reference
to the new(int) and reclaim it. If the kernel is going to write to *p once
the call finishes, reclaiming the memory is a mistake.
We can't change the arguments or the liveness information for
syscall.Syscall itself, both for compatibility and because sometimes the
arguments really are integers, and the garbage collector will get quite upset
if it finds an integer where it expects a pointer. The problem is that
these arguments are fundamentally untyped.
The solution we have taken in the syscall package's wrappers in past
releases is to insert a call to a dummy function named "use", to make
it look like the argument is live during the call to syscall.Syscall:
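A sketch of such a wrapper (call it h1), with use stubbed out so the example
stands alone; in package syscall the helper is an unexported no-op:

    package main

    import (
        "syscall"
        "unsafe"
    )

    //go:noinline
    func use(p unsafe.Pointer) {} // stand-in for syscall's unexported no-op helper

    func h1(fd int) {
        p := new(int64)
        syscall.Syscall(syscall.SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(p)), 8)
        use(unsafe.Pointer(p)) // keeps p visibly live until the call has returned
    }

    func main() {}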
Keeping p alive during the call means that if the garbage collector
scans the stack during the system call now, it will find the reference to p.
Unfortunately, this approach is not available to users outside syscall,
because 'use' is unexported, and people also have to realize they need
to use it and do so. There is much existing code using syscall.Syscall
without a 'use'-like function. That code will fail very occasionally in
mysterious ways (see #13372).
This CL fixes all that existing code by making the compiler do the right
thing automatically, without any code modifications. That is, it takes h
above, which is incorrect code today, and makes it correct code.
Specifically, if the compiler sees a foreign func definition (one
without a body) that has uintptr arguments, it marks those arguments
as "unsafe uintptrs". If it later sees the function being called
with uintptr(unsafe.Pointer(x)) as an argument, it arranges to mark x
as having escaped, and it makes sure to hold x in a live temporary
variable until the call returns, so that the garbage collector cannot
reclaim whatever heap memory x points to.
For now I am leaving the explicit calls to use in package syscall,
but they can be removed early in a future cycle (likely Go 1.7).
The rule has no effect on escape analysis, only on liveness analysis.
Fixes #13372.
Change-Id: I2addb83f70d08db08c64d394f9d06ff0a063c500
Reviewed-on: https://go-review.googlesource.com/18584 Reviewed-by: Ian Lance Taylor <iant@golang.org>
Robert Griesemer [Wed, 13 Jan 2016 18:52:56 +0000 (10:52 -0800)]
go/importer: fix field/method package for binary importer
This is the equivalent of https://golang.org/cl/18549 for
the binary importer (which is usually not used because by
default the gc compiler produces the traditional textual
export format).
For #13898.
Change-Id: Idb6b515f2ee49e6d0362c71846994b0bd4dae8f7
Reviewed-on: https://go-review.googlesource.com/18598 Reviewed-by: Alan Donovan <adonovan@google.com>
Run-TryBot: Robert Griesemer <gri@golang.org>
Robert Griesemer [Wed, 13 Jan 2016 18:41:37 +0000 (10:41 -0800)]
go/importer: revert incorrect change that slipped in prior CL
The package of anonymous fields is the package in which they were
declared, not the package of the anonymous field's type. Was correct
before and incorrectly changed with https://golang.org/cl/18549.
Change-Id: I9fd5bfbe9d0498c8733b6ca7b134a85defe16113
Reviewed-on: https://go-review.googlesource.com/18596 Reviewed-by: Alan Donovan <adonovan@google.com>
Russ Cox [Wed, 13 Jan 2016 14:59:16 +0000 (09:59 -0500)]
cmd/link: add LC_VERSION_MIN_MACOSX to linkmode=internal OS X binaries
This makes lldb willing to debug them.
The minimum version is hard-coded at OS X 10.7,
because that is the minimum that Go requires.
For more control over the version, users can
use linkmode=external and pass the relevant flags to the host linker.
Fixes #12941.
Change-Id: I20027be8aa034d07dd2a3326828f75170afe905f
Reviewed-on: https://go-review.googlesource.com/18588 Reviewed-by: Ian Lance Taylor <iant@golang.org>
Russ Cox [Wed, 13 Jan 2016 17:23:44 +0000 (12:23 -0500)]
runtime: allow for C pointers between arena_start and arena_used in cgo check
Fixes #13928.
Change-Id: Ia04c6bdef5ae6924d03982682ee195048f8f387f
Reviewed-on: https://go-review.googlesource.com/18611 Reviewed-by: Ian Lance Taylor <iant@golang.org>
Keith Randall [Tue, 5 Jan 2016 22:56:26 +0000 (14:56 -0800)]
[dev.ssa] cmd/compile: clean up comparisons
Add new constant-flags opcodes. These can be generated from
comparisons that we know the result of, like x&31 < 32.
Constant-fold the constant-flags opcodes into all flag users.
Reorder some CMPxconst args so they read in the comparison direction.
Reorg deadcode removal a bit - it needs to remove the OpCopy ops it
generates when strength-reducing Phi ops. So it needs to splice out all
the dead blocks and do a copy elimination before it computes live
values.
Change-Id: Ie922602033592ad8212efe4345394973d3b94d9f
Reviewed-on: https://go-review.googlesource.com/18267
Run-TryBot: Keith Randall <khr@golang.org> Reviewed-by: David Chase <drchase@google.com>
Keith Randall [Mon, 4 Jan 2016 21:34:54 +0000 (13:34 -0800)]
[dev.ssa] cmd/compile: fix spill sizes
In code that does:
    var x, z int32
    var y int64
    z = phi(x, int32(y))
We silently drop the int32 cast because truncation is a no-op.
The phi operation needs to make sure it uses the size of the
phi, not the size of its arguments, when generating spills.
Change-Id: I1f7baf44f019256977a46fdd3dad1972be209042
Reviewed-on: https://go-review.googlesource.com/18390 Reviewed-by: David Chase <drchase@google.com>
Robert Griesemer [Wed, 13 Jan 2016 01:02:32 +0000 (17:02 -0800)]
go/importer: associate exported field and interface methods with correct package
In gc export data, exported struct field and interface method names appear
in unqualified form (i.e., w/o package name). The (gc)importer assumed that
unqualified exported names automatically belong to the package being imported.
This is not the case if the field or method belongs to a struct or interface
that was declared in another package and re-exported.
The issue becomes visible if a type T (say an interface with a method M)
is declared in a package A, indirectly re-exported by a package B (which
imports A), and then imported in C. If C imports both A and B, if A is
imported before B, T.M gets associated with the correct package A. If B
is imported before A, T.M appears to be exported by B (even though T itself
is correctly marked as coming from A). If T.M is imported again via the
import of A if gets dropped (as it should) because it was imported already.
The fix is to pass down the parent package when we parse imported types
so that the importer can use the correct package when creating fields
and methods.
Fixes #13898.
Change-Id: I7ec2ee2dda15859c582b65db221c3841899776e1
Reviewed-on: https://go-review.googlesource.com/18549 Reviewed-by: Alan Donovan <adonovan@google.com>
Brad Fitzpatrick [Wed, 13 Jan 2016 16:30:00 +0000 (16:30 +0000)]
net/http: fix Transport crash when abandoning dial which upgrades protos
When the Transport is creating a not-yet-bound HTTP connection (protocol
unknown initially) and then ends up deciding it doesn't need it, a
goroutine sits around to clean up whatever the result was. That
goroutine made the false assumption that the result was always an
HTTP/1 connection or an error. It may also be an alternate protocol
in which case the *persistConn.conn net.Conn field is nil, and the
alt field is non-nil.
Keith Randall [Tue, 12 Jan 2016 20:42:28 +0000 (12:42 -0800)]
cmd/compile: stop using fucomi* ops for 387 builds
The fucomi* opcodes were only introduced for the Pentium Pro.
They do not exist for an MMX Pentium. Use the fucom* instructions
instead and move the condition codes from the fp flags register to
the integer flags register explicitly.
The use of fucomi* opcodes in ggen.go was introduced in 1.5 (CL 8738).
The bad ops were generated for 64-bit floating-point comparisons.
The use of fucomi* opcodes in gsubr.go dates back to at least 1.1.
The bad ops were generated for float{32,64} to uint64 conversions.
Brad Fitzpatrick [Tue, 12 Jan 2016 21:15:51 +0000 (21:15 +0000)]
crypto/tls: don't block in Conn.Close if Writes are in-flight
Conn.Close sends an encrypted "close notify" to signal secure EOF.
But writing that involves acquiring mutexes (handshake mutex + the
c.out mutex) and writing to the network. But if the reason we're
calling Conn.Close is that the network is already being
problematic, then Close might block, waiting for one of those mutexes.
Instead of blocking, and instead of introducing new API (at least for
now), distinguish between a normal Close (one that sends a secure EOF)
and a resource-releasing destructor-style Close based on whether there
are existing Write calls in-flight.
Because io.Writer and io.Closer aren't defined with respect to
concurrent usage, a Close with active Writes is already undefined, and
should only be used during teardown after failures (e.g. deadlines or
cancelations by HTTP users). A normal user will do a Write then
serially do a Close, and things are unchanged for that case.
This should fix the leaked goroutines and hung net/http.Transport
requests when there are network errors while making TLS requests.
David Chase [Mon, 4 Jan 2016 21:44:20 +0000 (16:44 -0500)]
cmd/compile: better modeling of escape across loop levels
Brief background on "why heap allocate". Things can be
forced to the heap for the following reasons:
1) address published, hence lifetime unknown.
2) size unknown/too large, cannot be stack allocated
3) multiplicity unknown/too large, cannot be stack allocated
4) reachable from heap (not necessarily published)
The bug here is a case of failing to enforce 4) when an
object Y was reachable from a heap allocation X forced
because of 3). It was found in the case of a closure
allocated within a loop (X) and assigned to a variable
outside the loop (multiplicity unknown) where the closure
also captured a map (Y) declared outside the loop (reachable
from heap). Note the variable declared outside the loop (Y)
is not published, has known size, and known multiplicity
(one). The only reason for heap allocation is that it was
reached from a heap allocated item (X), but because that was
not forced by publication, it has to be tracked by loop
level, but escape-loop level was not tracked and thus a bug
results.
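A sketch of the shape that triggered the bug (names invented; this is not the
original test case): a closure allocated inside the loop (X), assigned to a
variable declared outside the loop, capturing a map (Y) also declared outside
the loop.

    package main

    import "fmt"

    func main() {
        m := map[int]int{1: 10} // Y: known size, not published, declared outside the loop
        var fn func() int       // assigned from inside the loop: multiplicity unknown
        for i := 0; i < 3; i++ {
            fn = func() int { return m[1] } // X: closure created inside the loop, captures m
        }
        fmt.Println(fn())
    }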
The fix is that when a heap allocation is newly discovered,
use its looplevel as the minimum loop level for downstream
escape flooding.
Every attempt to generalize this bug to X-in-loop-
references-Y-outside-loop succeeded, so the fix was aimed
to be general. Anywhere that loop level forces heap
allocation, the loop level is tracked. This is not yet
tested for all possible X and Y, but it is correctness-
conservative and because it caused only one trivial
regression in the escape tests, it is probably also
performance-conservative.
The new test checks the following:
1) in the map case, that if fn escapes, so does the map.
2) in the map case, if fn does not escape, neither does the map.
3) in the &x case, that if fn escapes, so does &x.
4) in the &x case, if fn does not escape, neither does &x.