From: Daniel Martí
Date: Sat, 16 Jan 2021 18:03:31 +0000 (+0000)
Subject: cmd/gofmt: format files in parallel
X-Git-Tag: go1.18beta1~1196
X-Git-Url: http://www.git.cypherpunks.su/?a=commitdiff_plain;h=1ce6fd03b8a72fd8346fb23a975124edf977d25e;p=gostls13.git
cmd/gofmt: format files in parallel
gofmt is pretty heavily CPU-bound, since parsing and formatting 1MiB
of Go code takes much longer than reading that many bytes from
disk. However, parsing and manipulating a single large Go source file is
difficult to parallelize, so each file is still handled entirely within one
goroutine; the parallelism comes from processing many files at once.
A Go module may contain a large number of Go source files, so we need
to bound the amount of work in flight. However, because the
distribution of sizes for Go source files varies widely — from tiny
doc.go files containing a single package comment all the way up to
massive API wrappers generated by automated tools — the amount of
time, work, and memory overhead needed to process each file also
varies. To account for this variability, we limit the in-flight work
by bytes of input rather than by number of files. That allows us to
make progress on many small files while we wait for work on a handful
of large files to complete.
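
As a minimal sketch of that idea, separate from the actual patch below
(the walking, error handling, and formatting step are elided or invented
here purely for illustration): each file's size is acquired as weight from
a shared golang.org/x/sync/semaphore.Weighted before the file is read, and
released when processing completes.

    // Sketch only: bound in-flight work by bytes of input, not by file count.
    package main

    import (
        "context"
        "log"
        "os"
        "runtime"

        "golang.org/x/sync/semaphore"
    )

    func main() {
        // Hypothetical budget for the sketch: a few MiB per thread.
        maxWeight := (2 << 20) * int64(runtime.GOMAXPROCS(0))
        sem := semaphore.NewWeighted(maxWeight)
        for _, path := range os.Args[1:] {
            info, err := os.Stat(path)
            if err != nil {
                log.Print(err)
                continue
            }
            weight := info.Size()
            if weight > maxWeight {
                weight = maxWeight // oversized files effectively run alone
            }
            // Block until enough of the byte budget is free.
            if err := sem.Acquire(context.Background(), weight); err != nil {
                log.Print(err)
                continue
            }
            go func(path string, weight int64) {
                defer sem.Release(weight)
                // ... read, parse, format, and buffer the output for path ...
            }(path, weight)
        }
        // Drain the whole budget to wait for all in-flight files.
        sem.Acquire(context.Background(), maxWeight)
    }
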
The gofmt tool has a well-defined output format on stdout, which was
previously deterministic. We keep it deterministic by printing the
results of each file in order, using a lazily-synchronized io.Writer
(loosely inspired by Haskell's IO monad). After a file has been
formatted in memory, we keep it in memory (again, limited by the
corresponding number of input bytes) until the output for all previous
files has been flushed. This adds a bit of latency compared to
emitting the output in nondeterministic order, but a little extra
latency seems a fair price to pay for stable, deterministic output.
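
The hand-off pattern behind that ordering can be sketched on its own (the
names here are invented for the sketch, not the identifiers used in the
patch): each task receives the output writer from its predecessor over a
1-buffered channel and passes it on when done, so tasks may finish in any
order but print in submission order.

    // Sketch only: unordered completion, ordered output.
    package main

    import (
        "fmt"
        "math/rand"
        "os"
        "time"
    )

    func main() {
        prev := make(chan *os.File, 1)
        prev <- os.Stdout // the first task may print immediately

        for i := 0; i < 5; i++ {
            next := make(chan *os.File, 1)
            go func(i int, prev <-chan *os.File, next chan<- *os.File) {
                // Simulate work that finishes out of order.
                time.Sleep(time.Duration(rand.Intn(50)) * time.Millisecond)
                w := <-prev // wait until every earlier task has printed
                fmt.Fprintln(w, "task", i)
                next <- w // release the next task
            }(i, prev, next)
            prev = next
        }
        <-prev // wait for the last task to hand the writer back
    }
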
This change is based on Daniel Martí's work in CL 284139, but using a
weighted semaphore and ephemeral goroutines instead of a worker pool
and batches. Benchmark results are similar, and I find the concurrency
in this approach a bit easier to reason about.
In the batching-based approach, the batch size allows us to "look
ahead" to find large files and start processing them early. To keep
the CPUs saturated and prevent stragglers, we would need to tune the
batch size to be about the same as the size of the largest input files. If the
batch size is set too high, a large batch of small files could turn
into a straggler, but if the batch size is set too low, the largest
files in the data set won't be started early enough and we'll end up
with a large-file straggler.
One possible alternative would be to sort by file size instead of
batching: identify all of the files to be processed, sort from largest
to smallest, and then process the largest files first so that the
"tail" of processing covers the smallest files. However, that approach
would still fail to saturate available CPU when disk latency is high,
would require buffering an arbitrary amount of metadata in order to
sort by size, and (perhaps most importantly!) would not allow the
`gofmt` binary to preserve the same (deterministic) output order that
it has today.
In contrast, with a semaphore we can produce the same deterministic
output as ever using only one tuning parameter: the memory footprint,
expressed as a rough lower bound on the amount of RAM available per
thread. While we're below the memory limit, we can run arbitrarily
many disk operations arbitrarily far ahead, and process the results of
those operations whenever they become available. Then it's up to the
kernel (not us) to schedule the disk operations for throughput and
latency, and it's up to the runtime (not us) to schedule the
goroutines so that they complete quickly.
In practice, even a modest assumption of a few megabytes per thread
seems to provide a nice speedup, and it should scale reasonably even
to machines with vastly different ratios of CPU to disk. (In practice,
I expect that most 'gofmt' invocations will work with files on at most
one physical disk, so the CPU:disk ratio should vary more-or-less
directly with the thread count, whereas the CPU:memory ratio is
more-or-less independent of thread count.)
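
In this change, that single parameter is set once in main (quoted from the
gofmt.go hunk below):

    // Arbitrarily limit in-flight work to 2MiB times the number of threads.
    maxWeight := (2 << 20) * int64(runtime.GOMAXPROCS(0))
    s := newSequencer(maxWeight, os.Stdout, os.Stderr)
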
name \ time/op         baseline.txt  284139.txt    simplified.txt
GofmtGorootCmd           11.9s ± 2%    2.7s ± 3%      2.8s ± 5%

name \ user-time/op    baseline.txt  284139.txt    simplified.txt
GofmtGorootCmd           13.5s ± 2%   14.4s ± 1%     14.7s ± 1%

name \ sys-time/op     baseline.txt  284139.txt    simplified.txt
GofmtGorootCmd           465ms ± 8%   229ms ±28%     232ms ±31%

name \ peak-RSS-bytes  baseline.txt  284139.txt    simplified.txt
GofmtGorootCmd          77.7MB ± 4%  162.2MB ±10%   192.9MB ±15%
For #43566
Change-Id: I4ba251eb4d188a3bd1901039086be57f0b341910
Reviewed-on: https://go-review.googlesource.com/c/go/+/317975
Trust: Bryan C. Mills
Trust: Daniel Martí
Run-TryBot: Bryan C. Mills
TryBot-Result: Go Bot
Reviewed-by: Daniel Martí
---
diff --git a/doc/go1.18.html b/doc/go1.18.html
index 911bb712f7..63715ef0d6 100644
--- a/doc/go1.18.html
+++ b/doc/go1.18.html
@@ -47,6 +47,15 @@ Do not send CLs removing the interior tags from such phrases.
TODO: complete this section, or delete if not needed
+<h3 id="gofmt">gofmt</h3>
+
+<p>
+  <code>gofmt</code> now reads and formats input files concurrently, with a
+  memory limit proportional to <code>GOMAXPROCS</code>. On a machine with
+  multiple CPUs, gofmt should now be significantly faster.
+</p>
+
Runtime
@@ -111,4 +120,4 @@ Do not send CLs removing the interior tags from such phrases.
  deprecated in favor of <code>SyscallN</code>.
-
\ No newline at end of file
+
diff --git a/src/cmd/go.mod b/src/cmd/go.mod
index 26be677254..c52a936b4e 100644
--- a/src/cmd/go.mod
+++ b/src/cmd/go.mod
@@ -6,6 +6,7 @@ require (
github.com/google/pprof v0.0.0-20210827144239-02619b876842
golang.org/x/arch v0.0.0-20210901143047-ebb09ed340f1
golang.org/x/mod v0.5.1-0.20210913215816-37dd6891021a
+ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b
golang.org/x/tools v0.1.6-0.20210904010709-360456621443
)
diff --git a/src/cmd/go.sum b/src/cmd/go.sum
index 19bb1ee213..9afba00791 100644
--- a/src/cmd/go.sum
+++ b/src/cmd/go.sum
@@ -11,6 +11,8 @@ golang.org/x/crypto v0.0.0-20210817164053-32db794688a5 h1:HWj/xjIHfjYU5nVXpTM0s3
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/mod v0.5.1-0.20210913215816-37dd6891021a h1:55PVa91KndtPGH2lus5l2gDZqoO/x+Oa5CV0lVf8Ij8=
golang.org/x/mod v0.5.1-0.20210913215816-37dd6891021a/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
+golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
+golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e h1:XMgFehsDnnLGtjvjOfqWSUzt0alpTR1RSEuznObga2c=
golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
diff --git a/src/cmd/gofmt/gofmt.go b/src/cmd/gofmt/gofmt.go
index b3c120daab..860d77aaf0 100644
--- a/src/cmd/gofmt/gofmt.go
+++ b/src/cmd/gofmt/gofmt.go
@@ -6,6 +6,7 @@ package main
import (
"bytes"
+ "context"
"flag"
"fmt"
"go/ast"
@@ -22,6 +23,8 @@ import (
"strings"
"cmd/internal/diff"
+
+ "golang.org/x/sync/semaphore"
)
var (
@@ -50,17 +53,10 @@ const (
)
var (
- fileSet = token.NewFileSet() // per process FileSet
- exitCode = 0
- rewrite func(*ast.File) *ast.File
+ rewrite func(*token.FileSet, *ast.File) *ast.File
parserMode parser.Mode
)
-func report(err error) {
- scanner.PrintError(os.Stderr, err)
- exitCode = 2
-}
-
func usage() {
fmt.Fprintf(os.Stderr, "usage: gofmt [flags] [path ...]\n")
flag.PrintDefaults()
@@ -76,41 +72,211 @@ func initParserMode() {
func isGoFile(f fs.DirEntry) bool {
// ignore non-Go files
name := f.Name()
- return !f.IsDir() && !strings.HasPrefix(name, ".") && strings.HasSuffix(name, ".go")
+ return !strings.HasPrefix(name, ".") && strings.HasSuffix(name, ".go") && !f.IsDir()
+}
+
+// A sequencer performs concurrent tasks that may write output, but emits that
+// output in a deterministic order.
+type sequencer struct {
+ maxWeight int64
+ sem *semaphore.Weighted // weighted by input bytes (an approximate proxy for memory overhead)
+ prev <-chan *reporterState // 1-buffered
+}
+
+// newSequencer returns a sequencer that allows concurrent tasks up to maxWeight
+// and writes tasks' output to out and err.
+func newSequencer(maxWeight int64, out, err io.Writer) *sequencer {
+ sem := semaphore.NewWeighted(maxWeight)
+ prev := make(chan *reporterState, 1)
+ prev <- &reporterState{out: out, err: err}
+ return &sequencer{
+ maxWeight: maxWeight,
+ sem: sem,
+ prev: prev,
+ }
+}
+
+// exclusive is a weight that can be passed to a sequencer to cause
+// a task to be executed without any other concurrent tasks.
+const exclusive = -1
+
+// Add blocks until the sequencer has enough weight to spare, then adds f as a
+// task to be executed concurrently.
+//
+// If the weight is either negative or larger than the sequencer's maximum
+// weight, Add blocks until all other tasks have completed, then the task
+// executes exclusively (blocking all other calls to Add until it completes).
+//
+// f may run concurrently in a goroutine, but its output to the passed-in
+// reporter will be sequential relative to the other tasks in the sequencer.
+//
+// If f invokes a method on the reporter, execution of that method may block
+// until the previous task has finished. (To maximize concurrency, f should
+// avoid invoking the reporter until it has finished any parallelizable work.)
+//
+// If f returns a non-nil error, that error will be reported after f's output
+// (if any) and will cause a nonzero final exit code.
+func (s *sequencer) Add(weight int64, f func(*reporter) error) {
+ if weight < 0 || weight > s.maxWeight {
+ weight = s.maxWeight
+ }
+ if err := s.sem.Acquire(context.TODO(), weight); err != nil {
+ // Change the task from "execute f" to "report err".
+ weight = 0
+ f = func(*reporter) error { return err }
+ }
+
+ r := &reporter{prev: s.prev}
+ next := make(chan *reporterState, 1)
+ s.prev = next
+
+ // Start f in parallel: it can run until it invokes a method on r, at which
+ // point it will block until the previous task releases the output state.
+ go func() {
+ if err := f(r); err != nil {
+ r.Report(err)
+ }
+ next <- r.getState() // Release the next task.
+ s.sem.Release(weight)
+ }()
+}
+
+// AddReport prints an error to s after the output of any previously-added
+// tasks, causing the final exit code to be nonzero.
+func (s *sequencer) AddReport(err error) {
+ s.Add(0, func(*reporter) error { return err })
+}
+
+// GetExitCode waits for all previously-added tasks to complete, then returns an
+// exit code for the sequence suitable for passing to os.Exit.
+func (s *sequencer) GetExitCode() int {
+ c := make(chan int, 1)
+ s.Add(0, func(r *reporter) error {
+ c <- r.ExitCode()
+ return nil
+ })
+ return <-c
+}
+
+// A reporter reports output, warnings, and errors.
+type reporter struct {
+ prev <-chan *reporterState
+ state *reporterState
+}
+
+// reporterState carries the state of a reporter instance.
+//
+// Only one reporter at a time may have access to a reporterState.
+type reporterState struct {
+ out, err io.Writer
+ exitCode int
+}
+
+// getState blocks until any prior reporters are finished with the reporter
+// state, then returns the state for manipulation.
+func (r *reporter) getState() *reporterState {
+ if r.state == nil {
+ r.state = <-r.prev
+ }
+ return r.state
}
+// Warnf emits a warning message to the reporter's error stream,
+// without changing its exit code.
+func (r *reporter) Warnf(format string, args ...interface{}) {
+ fmt.Fprintf(r.getState().err, format, args...)
+}
+
+// Write emits a slice to the reporter's output stream.
+//
+// Any error is returned to the caller, and does not otherwise affect the
+// reporter's exit code.
+func (r *reporter) Write(p []byte) (int, error) {
+ return r.getState().out.Write(p)
+}
+
+// Report emits a non-nil error to the reporter's error stream,
+// changing its exit code to a nonzero value.
+func (r *reporter) Report(err error) {
+ if err == nil {
+ panic("Report with nil error")
+ }
+ st := r.getState()
+ scanner.PrintError(st.err, err)
+ st.exitCode = 2
+}
+
+func (r *reporter) ExitCode() int {
+ return r.getState().exitCode
+}
+
+// If info == nil, we are formatting stdin instead of a file.
// If in == nil, the source is the contents of the file with the given filename.
-func processFile(filename string, in io.Reader, out io.Writer, stdin bool) error {
- var perm fs.FileMode = 0644
+func processFile(filename string, info fs.FileInfo, in io.Reader, r *reporter) error {
if in == nil {
- f, err := os.Open(filename)
+ var err error
+ in, err = os.Open(filename)
if err != nil {
return err
}
- defer f.Close()
- fi, err := f.Stat()
+ }
+
+ // Compute the file's size and read its contents with minimal allocations.
+ //
+ // If the size is unknown (or bogus, or overflows an int), fall back to
+ // a size-independent ReadAll.
+ var src []byte
+ size := -1
+ if info != nil && info.Mode().IsRegular() && int64(int(info.Size())) == info.Size() {
+ size = int(info.Size())
+ }
+ if size+1 > 0 {
+ // If we have the FileInfo from filepath.WalkDir, use it to make
+ // a buffer of the right size and avoid ReadAll's reallocations.
+ //
+ // We try to read size+1 bytes so that we can detect modifications: if we
+ // read more than size bytes, then the file was modified concurrently.
+ // (If that happens, we could, say, append to src to finish the read, or
+ proceed with a truncated buffer -- but the fact that it changed at all
+ // indicates a possible race with someone editing the file, so we prefer to
+ // stop to avoid corrupting it.)
+ src = make([]byte, size+1)
+ n, err := io.ReadFull(in, src)
+ if err != nil && err != io.ErrUnexpectedEOF {
+ return err
+ }
+ if n < size {
+ return fmt.Errorf("error: size of %s changed during reading (from %d to %d bytes)", filename, size, n)
+ } else if n > size {
+ return fmt.Errorf("error: size of %s changed during reading (from %d to >=%d bytes)", filename, size, len(src))
+ }
+ src = src[:n]
+ } else {
+ // The file is not known to be regular, so we don't have a reliable size for it.
+ var err error
+ src, err = io.ReadAll(in)
if err != nil {
return err
}
- in = f
- perm = fi.Mode().Perm()
}
- src, err := io.ReadAll(in)
- if err != nil {
- return err
+ fileSet := token.NewFileSet()
+ fragmentOk := false
+ if info == nil {
+ // If we are formatting stdin, we accept a program fragment in lieu of a
+ // complete source file.
+ fragmentOk = true
}
-
- file, sourceAdj, indentAdj, err := parse(fileSet, filename, src, stdin)
+ file, sourceAdj, indentAdj, err := parse(fileSet, filename, src, fragmentOk)
if err != nil {
return err
}
if rewrite != nil {
if sourceAdj == nil {
- file = rewrite(file)
+ file = rewrite(fileSet, file)
} else {
- fmt.Fprintf(os.Stderr, "warning: rewrite ignored for incomplete programs\n")
+ r.Warnf("warning: rewrite ignored for incomplete programs\n")
}
}
@@ -128,10 +294,14 @@ func processFile(filename string, in io.Reader, out io.Writer, stdin bool) error
if !bytes.Equal(src, res) {
// formatting has changed
if *list {
- fmt.Fprintln(out, filename)
+ fmt.Fprintln(r, filename)
}
if *write {
+ if info == nil {
+ panic("-w should not have been allowed with stdin")
+ }
// make a temporary backup before overwriting original
+ perm := info.Mode().Perm()
bakname, err := backupFile(filename+".", src, perm)
if err != nil {
return err
@@ -151,45 +321,42 @@ func processFile(filename string, in io.Reader, out io.Writer, stdin bool) error
if err != nil {
return fmt.Errorf("computing diff: %s", err)
}
- fmt.Fprintf(out, "diff -u %s %s\n", filepath.ToSlash(filename+".orig"), filepath.ToSlash(filename))
- out.Write(data)
+ fmt.Fprintf(r, "diff -u %s %s\n", filepath.ToSlash(filename+".orig"), filepath.ToSlash(filename))
+ r.Write(data)
}
}
if !*list && !*write && !*doDiff {
- _, err = out.Write(res)
+ _, err = r.Write(res)
}
return err
}
-func visitFile(path string, f fs.DirEntry, err error) error {
- if err != nil || !isGoFile(f) {
- return err
- }
- if err := processFile(path, nil, os.Stdout, false); err != nil {
- report(err)
- }
- return nil
-}
-
func main() {
+ // Arbitrarily limit in-flight work to 2MiB times the number of threads.
+ //
+ // The actual overhead for the parse tree and output will depend on the
+ // specifics of the file, but this at least keeps the footprint of the process
+ // roughly proportional to GOMAXPROCS.
+ maxWeight := (2 << 20) * int64(runtime.GOMAXPROCS(0))
+ s := newSequencer(maxWeight, os.Stdout, os.Stderr)
+
// call gofmtMain in a separate function
// so that it can use defer and have them
// run before the exit.
- gofmtMain()
- os.Exit(exitCode)
+ gofmtMain(s)
+ os.Exit(s.GetExitCode())
}
-func gofmtMain() {
+func gofmtMain(s *sequencer) {
flag.Usage = usage
flag.Parse()
if *cpuprofile != "" {
f, err := os.Create(*cpuprofile)
if err != nil {
- fmt.Fprintf(os.Stderr, "creating cpu profile: %s\n", err)
- exitCode = 2
+ s.AddReport(fmt.Errorf("creating cpu profile: %s", err))
return
}
defer f.Close()
@@ -203,34 +370,67 @@ func gofmtMain() {
args := flag.Args()
if len(args) == 0 {
if *write {
- fmt.Fprintln(os.Stderr, "error: cannot use -w with standard input")
- exitCode = 2
+ s.AddReport(fmt.Errorf("error: cannot use -w with standard input"))
return
}
- if err := processFile("", os.Stdin, os.Stdout, true); err != nil {
- report(err)
- }
+ s.Add(0, func(r *reporter) error {
+ return processFile("", nil, os.Stdin, r)
+ })
return
}
for _, arg := range args {
switch info, err := os.Stat(arg); {
case err != nil:
- report(err)
+ s.AddReport(err)
case !info.IsDir():
// Non-directory arguments are always formatted.
- if err := processFile(arg, nil, os.Stdout, false); err != nil {
- report(err)
- }
+ arg := arg
+ s.Add(fileWeight(arg, info), func(r *reporter) error {
+ return processFile(arg, info, nil, r)
+ })
default:
// Directories are walked, ignoring non-Go files.
- if err := filepath.WalkDir(arg, visitFile); err != nil {
- report(err)
+ err := filepath.WalkDir(arg, func(path string, f fs.DirEntry, err error) error {
+ if err != nil || !isGoFile(f) {
+ return err
+ }
+ info, err := f.Info()
+ if err != nil {
+ s.AddReport(err)
+ return nil
+ }
+ s.Add(fileWeight(path, info), func(r *reporter) error {
+ return processFile(path, info, nil, r)
+ })
+ return nil
+ })
+ if err != nil {
+ s.AddReport(err)
}
}
}
}
+func fileWeight(path string, info fs.FileInfo) int64 {
+ if info == nil {
+ return exclusive
+ }
+ if info.Mode().Type() == fs.ModeSymlink {
+ var err error
+ info, err = os.Stat(path)
+ if err != nil {
+ return exclusive
+ }
+ }
+ if !info.Mode().IsRegular() {
+ // For non-regular files, FileInfo.Size is system-dependent and thus not a
+ // reliable indicator of weight.
+ return exclusive
+ }
+ return info.Size()
+}
+
func diffWithReplaceTempFile(b1, b2 []byte, filename string) ([]byte, error) {
data, err := diff.Diff("gofmt", b1, b2)
if len(data) > 0 {
diff --git a/src/cmd/gofmt/gofmt_test.go b/src/cmd/gofmt/gofmt_test.go
index 9ef7676214..676c5b43ed 100644
--- a/src/cmd/gofmt/gofmt_test.go
+++ b/src/cmd/gofmt/gofmt_test.go
@@ -58,7 +58,11 @@ func runTest(t *testing.T, in, out string) {
// process flags
*simplifyAST = false
*rewriteRule = ""
- stdin := false
+ info, err := os.Lstat(in)
+ if err != nil {
+ t.Error(err)
+ return
+ }
for _, flag := range strings.Split(gofmtFlags(in, 20), " ") {
elts := strings.SplitN(flag, "=", 2)
name := elts[0]
@@ -75,7 +79,7 @@ func runTest(t *testing.T, in, out string) {
*simplifyAST = true
case "-stdin":
// fake flag - pretend input is from stdin
- stdin = true
+ info = nil
default:
t.Errorf("unrecognized flag name: %s", name)
}
@@ -84,11 +88,17 @@ func runTest(t *testing.T, in, out string) {
initParserMode()
initRewrite()
- var buf bytes.Buffer
- err := processFile(in, nil, &buf, stdin)
- if err != nil {
- t.Error(err)
- return
+ const maxWeight = 2 << 20
+ var buf, errBuf bytes.Buffer
+ s := newSequencer(maxWeight, &buf, &errBuf)
+ s.Add(fileWeight(in, info), func(r *reporter) error {
+ return processFile(in, info, nil, r)
+ })
+ if errBuf.Len() > 0 {
+ t.Logf("%q", errBuf.Bytes())
+ }
+ if s.GetExitCode() != 0 {
+ t.Fail()
}
expected, err := os.ReadFile(out)
diff --git a/src/cmd/gofmt/rewrite.go b/src/cmd/gofmt/rewrite.go
index bab22e04cd..0e736e6132 100644
--- a/src/cmd/gofmt/rewrite.go
+++ b/src/cmd/gofmt/rewrite.go
@@ -28,7 +28,9 @@ func initRewrite() {
}
pattern := parseExpr(f[0], "pattern")
replace := parseExpr(f[1], "replacement")
- rewrite = func(p *ast.File) *ast.File { return rewriteFile(pattern, replace, p) }
+ rewrite = func(fset *token.FileSet, p *ast.File) *ast.File {
+ return rewriteFile(fset, pattern, replace, p)
+ }
}
// parseExpr parses s as an expression.
@@ -54,7 +56,7 @@ func dump(msg string, val reflect.Value) {
*/
// rewriteFile applies the rewrite rule 'pattern -> replace' to an entire file.
-func rewriteFile(pattern, replace ast.Expr, p *ast.File) *ast.File {
+func rewriteFile(fileSet *token.FileSet, pattern, replace ast.Expr, p *ast.File) *ast.File {
cmap := ast.NewCommentMap(fileSet, p, p.Comments)
m := make(map[string]reflect.Value)
pat := reflect.ValueOf(pattern)
diff --git a/src/cmd/vendor/golang.org/x/sync/AUTHORS b/src/cmd/vendor/golang.org/x/sync/AUTHORS
new file mode 100644
index 0000000000..15167cd746
--- /dev/null
+++ b/src/cmd/vendor/golang.org/x/sync/AUTHORS
@@ -0,0 +1,3 @@
+# This source code refers to The Go Authors for copyright purposes.
+# The master list of authors is in the main Go distribution,
+# visible at http://tip.golang.org/AUTHORS.
diff --git a/src/cmd/vendor/golang.org/x/sync/CONTRIBUTORS b/src/cmd/vendor/golang.org/x/sync/CONTRIBUTORS
new file mode 100644
index 0000000000..1c4577e968
--- /dev/null
+++ b/src/cmd/vendor/golang.org/x/sync/CONTRIBUTORS
@@ -0,0 +1,3 @@
+# This source code was written by the Go contributors.
+# The master list of contributors is in the main Go distribution,
+# visible at http://tip.golang.org/CONTRIBUTORS.
diff --git a/src/cmd/vendor/golang.org/x/sync/LICENSE b/src/cmd/vendor/golang.org/x/sync/LICENSE
new file mode 100644
index 0000000000..6a66aea5ea
--- /dev/null
+++ b/src/cmd/vendor/golang.org/x/sync/LICENSE
@@ -0,0 +1,27 @@
+Copyright (c) 2009 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/src/cmd/vendor/golang.org/x/sync/PATENTS b/src/cmd/vendor/golang.org/x/sync/PATENTS
new file mode 100644
index 0000000000..733099041f
--- /dev/null
+++ b/src/cmd/vendor/golang.org/x/sync/PATENTS
@@ -0,0 +1,22 @@
+Additional IP Rights Grant (Patents)
+
+"This implementation" means the copyrightable works distributed by
+Google as part of the Go project.
+
+Google hereby grants to You a perpetual, worldwide, non-exclusive,
+no-charge, royalty-free, irrevocable (except as stated in this section)
+patent license to make, have made, use, offer to sell, sell, import,
+transfer and otherwise run, modify and propagate the contents of this
+implementation of Go, where such license applies only to those patent
+claims, both currently owned or controlled by Google and acquired in
+the future, licensable by Google that are necessarily infringed by this
+implementation of Go. This grant does not include claims that would be
+infringed only as a consequence of further modification of this
+implementation. If you or your agent or exclusive licensee institute or
+order or agree to the institution of patent litigation against any
+entity (including a cross-claim or counterclaim in a lawsuit) alleging
+that this implementation of Go or any code incorporated within this
+implementation of Go constitutes direct or contributory patent
+infringement, or inducement of patent infringement, then any patent
+rights granted to you under this License for this implementation of Go
+shall terminate as of the date such litigation is filed.
diff --git a/src/cmd/vendor/golang.org/x/sync/semaphore/semaphore.go b/src/cmd/vendor/golang.org/x/sync/semaphore/semaphore.go
new file mode 100644
index 0000000000..30f632c577
--- /dev/null
+++ b/src/cmd/vendor/golang.org/x/sync/semaphore/semaphore.go
@@ -0,0 +1,136 @@
+// Copyright 2017 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package semaphore provides a weighted semaphore implementation.
+package semaphore // import "golang.org/x/sync/semaphore"
+
+import (
+ "container/list"
+ "context"
+ "sync"
+)
+
+type waiter struct {
+ n int64
+ ready chan<- struct{} // Closed when semaphore acquired.
+}
+
+// NewWeighted creates a new weighted semaphore with the given
+// maximum combined weight for concurrent access.
+func NewWeighted(n int64) *Weighted {
+ w := &Weighted{size: n}
+ return w
+}
+
+// Weighted provides a way to bound concurrent access to a resource.
+// The callers can request access with a given weight.
+type Weighted struct {
+ size int64
+ cur int64
+ mu sync.Mutex
+ waiters list.List
+}
+
+// Acquire acquires the semaphore with a weight of n, blocking until resources
+// are available or ctx is done. On success, returns nil. On failure, returns
+// ctx.Err() and leaves the semaphore unchanged.
+//
+// If ctx is already done, Acquire may still succeed without blocking.
+func (s *Weighted) Acquire(ctx context.Context, n int64) error {
+ s.mu.Lock()
+ if s.size-s.cur >= n && s.waiters.Len() == 0 {
+ s.cur += n
+ s.mu.Unlock()
+ return nil
+ }
+
+ if n > s.size {
+ // Don't make other Acquire calls block on one that's doomed to fail.
+ s.mu.Unlock()
+ <-ctx.Done()
+ return ctx.Err()
+ }
+
+ ready := make(chan struct{})
+ w := waiter{n: n, ready: ready}
+ elem := s.waiters.PushBack(w)
+ s.mu.Unlock()
+
+ select {
+ case <-ctx.Done():
+ err := ctx.Err()
+ s.mu.Lock()
+ select {
+ case <-ready:
+ // Acquired the semaphore after we were canceled. Rather than trying to
+ // fix up the queue, just pretend we didn't notice the cancelation.
+ err = nil
+ default:
+ isFront := s.waiters.Front() == elem
+ s.waiters.Remove(elem)
+ // If we're at the front and there're extra tokens left, notify other waiters.
+ if isFront && s.size > s.cur {
+ s.notifyWaiters()
+ }
+ }
+ s.mu.Unlock()
+ return err
+
+ case <-ready:
+ return nil
+ }
+}
+
+// TryAcquire acquires the semaphore with a weight of n without blocking.
+// On success, returns true. On failure, returns false and leaves the semaphore unchanged.
+func (s *Weighted) TryAcquire(n int64) bool {
+ s.mu.Lock()
+ success := s.size-s.cur >= n && s.waiters.Len() == 0
+ if success {
+ s.cur += n
+ }
+ s.mu.Unlock()
+ return success
+}
+
+// Release releases the semaphore with a weight of n.
+func (s *Weighted) Release(n int64) {
+ s.mu.Lock()
+ s.cur -= n
+ if s.cur < 0 {
+ s.mu.Unlock()
+ panic("semaphore: released more than held")
+ }
+ s.notifyWaiters()
+ s.mu.Unlock()
+}
+
+func (s *Weighted) notifyWaiters() {
+ for {
+ next := s.waiters.Front()
+ if next == nil {
+ break // No more waiters blocked.
+ }
+
+ w := next.Value.(waiter)
+ if s.size-s.cur < w.n {
+ // Not enough tokens for the next waiter. We could keep going (to try to
+ // find a waiter with a smaller request), but under load that could cause
+ // starvation for large requests; instead, we leave all remaining waiters
+ // blocked.
+ //
+ // Consider a semaphore used as a read-write lock, with N tokens, N
+ // readers, and one writer. Each reader can Acquire(1) to obtain a read
+ // lock. The writer can Acquire(N) to obtain a write lock, excluding all
+ // of the readers. If we allow the readers to jump ahead in the queue,
+ the writer will starve -- there is always one token available for every
+ // reader.
+ break
+ }
+
+ s.cur += w.n
+ s.waiters.Remove(next)
+ close(w.ready)
+ }
+}
diff --git a/src/cmd/vendor/modules.txt b/src/cmd/vendor/modules.txt
index 4ff07ab015..49a79890bc 100644
--- a/src/cmd/vendor/modules.txt
+++ b/src/cmd/vendor/modules.txt
@@ -39,6 +39,9 @@ golang.org/x/mod/sumdb/dirhash
golang.org/x/mod/sumdb/note
golang.org/x/mod/sumdb/tlog
golang.org/x/mod/zip
+# golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
+## explicit
+golang.org/x/sync/semaphore
# golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e
## explicit; go 1.17
golang.org/x/sys/internal/unsafeheader