From 829f2ce3ba33a09a7975139a0a33d462bb3114db Mon Sep 17 00:00:00 2001
From: Michael Anthony Knyszek <mknyszek@google.com>
Date: Thu, 1 Feb 2024 05:10:32 +0000
Subject: [PATCH] runtime: clear trace map without write barriers

Currently the trace map is cleared with an assignment, but this ends up
invoking write barriers. Theoretically, write barriers could try to
write a trace event and eventually try to acquire the same lock. The
static lock ranking expresses this constraint.

This change replaces the assignment with a call to memclrNoHeapPointers
to clear the map, removing the write barriers.

Note that technically this problem is purely theoretical. The way the
trace maps are used today is such that reset is only ever called when
the tracer is no longer writing events that could emit data into a map.
Furthermore, reset is never called from an event-writing context.

Therefore another way to resolve this is to simply not hold the trace
map lock over the reset operation. However, that makes the trace map
implementation less robust because it then needs to be used in a very
specific way. Furthermore, the rest of the trace map code already
avoids write barriers, since its internal structures are all notinheap,
so it's more consistent to avoid write barriers in the reset method as
well.

Fixes #56554.

Change-Id: Icd86472e75e25161b2c10c1c8aaae2c2fed4f67f
Reviewed-on: https://go-review.googlesource.com/c/go/+/560216
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
---
 src/runtime/trace2map.go | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/src/runtime/trace2map.go b/src/runtime/trace2map.go
index 4a5a7ecba4..195ec0bbe7 100644
--- a/src/runtime/trace2map.go
+++ b/src/runtime/trace2map.go
@@ -141,5 +141,11 @@ func (tab *traceMap) reset() {
 	assertLockHeld(&tab.lock)
 	tab.mem.drop()
 	tab.seq.Store(0)
-	tab.tab = [1 << 13]atomic.UnsafePointer{}
+	// Clear table without write barriers. The table consists entirely
+	// of notinheap pointers, so this is fine.
+	//
+	// Write barriers may theoretically call into the tracer and acquire
+	// the lock again, and this lock ordering is expressed in the static
+	// lock ranking checker.
+	memclrNoHeapPointers(unsafe.Pointer(&tab.tab), unsafe.Sizeof(tab.tab))
 }
-- 
2.48.1
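
For context, the sketch below shows the before/after shape of the reset
path in ordinary Go, outside the runtime. It is a minimal illustration
only, not the runtime's code: memclrNoHeapPointers and the runtime's
atomic.UnsafePointer are runtime-internal, so the sketch substitutes a
hypothetical zeroNoHeapPointers helper, sync/atomic's Pointer type, and
a placeholder table type standing in for traceMap. A plain zeroing loop
in user code carries none of the guarantees that make the real call
safe; it only mirrors the shape of the call in the patch.

package main

import (
	"sync/atomic"
	"unsafe"
)

// table is a hypothetical stand-in for the runtime's traceMap: a fixed
// array of atomic pointers used as hash-table buckets. (The real type
// uses the runtime-internal atomic.UnsafePointer and notinheap memory.)
type table struct {
	tab [1 << 13]atomic.Pointer[byte]
}

// zeroNoHeapPointers is a hypothetical user-space analogue of the
// runtime's memclrNoHeapPointers: it zeroes n bytes starting at ptr,
// one word at a time, with no per-element stores for the compiler to
// instrument with write barriers. n must be a multiple of the word
// size. Outside the runtime this has none of the GC guarantees that
// make the real call safe.
func zeroNoHeapPointers(ptr unsafe.Pointer, n uintptr) {
	const wordSize = unsafe.Sizeof(uintptr(0))
	for off := uintptr(0); off < n; off += wordSize {
		*(*uintptr)(unsafe.Add(ptr, off)) = 0
	}
}

// reset mirrors traceMap.reset before and after the patch.
func (t *table) reset() {
	// Before the patch: a whole-array assignment. Because the element
	// type contains pointers, the compiler instruments this store with
	// write barriers:
	//
	//	t.tab = [1 << 13]atomic.Pointer[byte]{}
	//
	// After the patch: clear the backing memory directly instead, so
	// no write barriers are emitted.
	zeroNoHeapPointers(unsafe.Pointer(&t.tab), unsafe.Sizeof(t.tab))
}

func main() {
	var t table
	t.tab[0].Store(new(byte))
	t.reset()
	println(t.tab[0].Load() == nil) // true: the bucket was cleared
}

The property the real change relies on is stated in the patch comment:
tab.tab holds only notinheap pointers, which the GC never scans, so
clearing the memory without write-barrier bookkeeping cannot hide a
live heap object from the collector.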