h_spans can be accessed concurrently without synchronization from
other threads, which means it requires appropriate memory barriers on
weakly ordered machines. It already has the necessary barriers in
practice, because all accesses to h_spans are currently protected by
the heap lock and the unlocks happen exactly where release barriers
are needed, but this could easily change in the future. Document that
we depend on the release barrier implied by the unlock.
Related to issue #9984.
Change-Id: I1bc3c95cd73361b041c8c95cd4bb92daf8c1f94a
Reviewed-on: https://go-review.googlesource.com/11361
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
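
A minimal sketch of the publication pattern described above, outside
the runtime. The names (span, table, allocSpan, mu) are illustrative,
not the runtime's actual identifiers: writes to a spans table are made
under a mutex, and the unlock provides the release ordering needed
before a pointer derived from those writes is handed to other
goroutines. In the real runtime, the reader side additionally relies
on the data dependency between the published pointer and the h_spans
index; ordinary Go code should instead use locks or atomics on both
sides.

	package main

	import (
		"fmt"
		"sync"
	)

	// span is a stand-in for the runtime's mspan.
	type span struct{ start, npages uintptr }

	var (
		mu    sync.Mutex
		table = make([]*span, 16) // stand-in for h_spans
	)

	// allocSpan records s in table while holding mu. The Unlock acts as
	// the release barrier: the table write is ordered before any caller
	// can publish a pointer into s to another goroutine.
	func allocSpan(idx uintptr) *span {
		mu.Lock()
		s := &span{start: idx, npages: 1}
		table[idx] = s
		mu.Unlock() // release: table write visible before s is published
		return s
	}

	func main() {
		s := allocSpan(3)
		fmt.Println(s.start, table[3] == s)
	}
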
if trace.enabled {
traceHeapAlloc()
}
+
+ // h_spans is accessed concurrently without synchronization
+ // from other threads. Hence, there must be a store/store
+ // barrier here to ensure the writes to h_spans above happen
+ // before the caller can publish a pointer p to an object
+ // allocated from s. As soon as this happens, the garbage
+ // collector running on another processor could read p and
+ // look up s in h_spans. The unlock acts as the barrier to
+ // order these writes. On the read side, the data dependency
+ // between p and the index in h_spans orders the reads.
unlock(&h.lock)
return s
}
s.ref = 0
memstats.stacks_inuse += uint64(s.npages << _PageShift)
}
+
+ // This unlock acts as a release barrier. See mHeap_Alloc_m.
unlock(&h.lock)
return s
}