@fxn, to clarify the source of confusion: historically, in Crystal, an unbuffered Channel `ch` is a channel with capacity zero. This means that `ch.send` will block until a fiber receives on the same channel. That's the case in your example above: you're defining a channel `c` with capacity zero, and no fiber is receiving on `c`, so the application gets stuck at the very first `c.send(1)` call.
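For completeness, one way to unblock the unbuffered version is to make sure some fiber is receiving on the channel before the send happens. A minimal sketch (not your original code, just an illustration of the pairing):

```crystal
c = Channel(Int32).new # capacity zero: unbuffered

# Spawn a fiber that receives on c, so the send below has a partner.
spawn do
  puts c.receive
end

# This send now hands the value directly to the receiving fiber
# instead of blocking forever.
c.send(1)
```

The key point is that with capacity zero, every `send` must rendezvous with a matching `receive` in another fiber.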
Although there is no such thing as a Channel with infinite capacity in Crystal, you can define a channel with ~~arbitrary~~ large (within the limits of the heap) capacity, e.g. in your example:

```crystal
c = Channel(Int32).new(capacity: N)
```
In this case, `c.send(1)` will not block, as the value is written to the channel's buffer (hence we refer to this type of channel as *buffered*), and your code above will terminate.
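To make the buffered behavior concrete, here is a small sketch (the value of `N` is arbitrary, chosen just for illustration):

```crystal
N = 3
c = Channel(Int32).new(capacity: N)

# These sends return immediately: each value goes into the
# channel's buffer, no receiving fiber required yet.
N.times { |i| c.send(i) }

# An (N + 1)-th send at this point would block, because the
# buffer is full and nobody is receiving.

# Draining the buffer from the same fiber works fine:
N.times { puts c.receive }
```

So a buffered channel only defers the blocking: once the buffer fills up, `send` behaves exactly like the unbuffered case.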
Now, it turns out that for most use cases, relying on channels with large capacity is not only unnecessary but also error-prone. To some extent, when you set the capacity to a very large number, you're giving up on understanding the load your system can tolerate before breaking.
Looking at a programming language with a similar concurrency model: in the book Concurrency in Go, the author Katherine Cox-Buday writes that the only situations where a buffered channel can increase overall performance are the following:
- If batching requests in a stage saves time.
- If delays in a stage produce a feedback loop into the system.
I'll mention one more thing to support the above: one simple way of breaking an Erlang application is to send a process messages at a rate higher than its ability to process them. See “Flood the mailbox for a process” here.