<p>
A buffered channel can be used like a semaphore, for instance to
limit throughput. In this example, incoming requests are passed
-to <code>handle</code>, which receives a value from the channel, processes
-the request, and then sends a value back to the channel
-to ready the "semaphore" for the next consumer.
+to <code>handle</code>, which sends a value into the channel, processes
+the request, and then receives a value from the channel
+to ready the “semaphore” for the next consumer.
The capacity of the channel buffer limits the number of
-simultaneous calls to <code>process</code>,
-so during initialization we prime the channel by filling it to capacity.
+simultaneous calls to <code>process</code>.
</p>
<pre>
var sem = make(chan int, MaxOutstanding)
func handle(r *Request) {
- <-sem // Wait for active queue to drain.
- process(r) // May take a long time.
- sem <- 1 // Done; enable next request to run.
-}
-
-func init() {
- for i := 0; i < MaxOutstanding; i++ {
- sem <- 1
- }
+ sem <- 1 // Wait for active queue to drain.
+ process(r) // May take a long time.
+ <-sem // Done; enable next request to run.
}

func Serve(queue chan *Request) {
    for req := range queue {
        go handle(req)  // Don't wait for handle to finish.
    }
}
</pre>
<p>
-Because data synchronization occurs on a receive from a channel
-(that is, the send "happens before" the receive; see
-<a href="/ref/mem">The Go Memory Model</a>),
-acquisition of the semaphore must be on a channel receive, not a send.
+Once <code>MaxOutstanding</code> handlers are executing <code>process</code>,
+any more will block trying to send into the filled channel buffer,
+until one of the existing handlers finishes and receives from the buffer.
</p>
<p>
<pre>
func Serve(queue chan *Request) {
for req := range queue {
- <-sem
+ sem <- 1
go func() {
process(req) // Buggy; see explanation below.
- sem <- 1
+ <-sem
}()
}
}</pre>
<pre>
func Serve(queue chan *Request) {
for req := range queue {
- <-sem
+ sem <- 1
go func(req *Request) {
process(req)
- sem <- 1
+ <-sem
}(req)
}
}</pre>
<pre>
func Serve(queue chan *Request) {
for req := range queue {
- <-sem
req := req // Create new instance of req for the goroutine.
+ sem <- 1
go func() {
process(req)
- sem <- 1
+ <-sem
}()
}
}</pre>
crash, or do something else.)
</p>
+<p class="rule">
+The <i>k</i>th receive on a channel with capacity <i>C</i> happens before the <i>k</i>+<i>C</i>th send from that channel completes.
+</p>
+
+<p>
+This rule generalizes the previous rule to buffered channels.
+It allows a counting semaphore to be modeled by a buffered channel:
+the number of items in the channel corresponds to the semaphore count,
+the capacity of the channel corresponds to the semaphore maximum,
+sending an item acquires the semaphore, and receiving an item releases
+the semaphore.
+This is a common idiom for rate-limiting work.
+</p>
+
+<p>
+This program starts a goroutine for every entry in the work list, but the
+goroutines coordinate using the <code>limit</code> channel to ensure
+that at most three are running work functions at a time.
+</p>
+
+<pre>
+var limit = make(chan int, 3)
+
+func main() {
+ for _, w := range work {
+ go func() {
+ limit <- 1
+ w()
+ <-limit
+ }()
+ }
+ select{}
+}
+</pre>
+
<h3>Locks</h3>
<p>