The Crystal Programming Language Forum

Unbuffered channels?

I used channels today for the first time, for AoC. Is it true that Crystal does not have unbuffered channels so that send never blocks?

If you search the forum, there has been some discussion about that recently.
(I think?)

I did before posting, but did not find them. There were some references to Channel::Unbuffered, which nowadays no longer seems to be available (and was buffered with capacity 32 when it existed?).

Do you recall any particular thread?

Ah, by the way, by “unbuffered” I mean infinite capacity; I’m not sure that’s the right word. Similar to mailboxes in Erlang.

I’m pretty sure that if you don’t give a channel a size when newing one up, it’s essentially unbuffered.

See https://crystal-lang.org/reference/guides/concurrency.html#buffered-channels

The distinction between buffered and unbuffered channels is no longer represented by the classes Channel::Buffered and Channel::Unbuffered. The Channel class now handles the two types of channels by itself, acting buffered if given a size and unbuffered otherwise.

Also, an unbuffered channel doesn’t have infinite capacity. Quite the opposite: it has no buffer at all, so it blocks the moment data is sent to it, until another fiber receives.

Before 0.31.0 there were two kinds of channels: one buffered and one unbuffered. Now they are unified: if you don’t pass a capacity it’s unbuffered, if you pass a capacity it’s buffered. Unfortunately that’s still not documented…
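For reference, a minimal sketch of the unified API; the sizes and values are arbitrary:

```crystal
# Post-0.31.0 there is a single Channel class.

# Buffered: created with a capacity; sends complete while the buffer has room.
buffered = Channel(Int32).new(2)
buffered.send(1)
buffered.send(2) # no receiver needed: the buffer absorbs both values
puts buffered.receive # => 1

# Unbuffered: created without a capacity; a send blocks until a receive.
unbuffered = Channel(Int32).new
spawn { unbuffered.send(3) } # the fiber blocks here until we receive
puts unbuffered.receive # => 3
```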


I’m pretty sure that if you don’t give a channel a size when newing one up, it’s essentially unbuffered.

Not sure the word “unbuffered” in my question is right; it seems to mean capacity 0 in some places.

What I am after is infinite capacity, that is

% crystal eval 'N = 10_000; c = Channel(Int32).new; N.times { c.send(1) }'

terminates no matter how big N is.


Oh, there’s no channel with infinite capacity.


@fxn, to clarify the source of confusion, historically, in Crystal lang, an unbuffered Channel ch is a channel with capacity zero. This means that ch.send will block until a fiber receives on the same channel.
That’s the case in your example above. You’re defining a channel c with capacity zero, and no fiber is receiving on c, so the application gets stuck at the very first c.send(1) call.
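To make the example terminate with an unbuffered channel, a sketch with a consumer fiber; the `done` channel is just an illustrative way to wait for the consumer to finish:

```crystal
N = 10_000
c = Channel(Int32).new # capacity zero: every send blocks until a receive
done = Channel(Int32).new

# Consumer fiber: drains all N values, then reports the total.
spawn do
  sum = 0
  N.times { sum += c.receive }
  done.send(sum)
end

N.times { c.send(1) } # each send now hands its value to the consumer
total = done.receive
puts total # => 10000
```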

Although there is no such thing as a Channel with infinite capacity in Crystal, you can define a channel with an arbitrarily large (within the limits of the heap) capacity, e.g. in your example

c = Channel(Int32).new(capacity: N)

In this case, c.send(1) will not block, as the value will be written to the channel’s buffer - hence we refer to this type of channel as buffered - and your code above will terminate.
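Putting the pieces together, a complete runnable version of the fix (the capacity is assumed to fit comfortably on the heap):

```crystal
N = 10_000
c = Channel(Int32).new(capacity: N) # room in the buffer for all N values

N.times { c.send(1) } # never blocks: the buffer absorbs every value
puts "terminated" # reached even though no fiber ever receives on c
```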

Now, it turns out that for most use cases, relying on channels with large capacity is not only unnecessary, but also error prone. To some extent, when you set the capacity to a very large number, you’re giving up on understanding the load your system can tolerate before breaking.

Looking at a programming language with a similar concurrency model, in the book Concurrency in Go, the author Katherine Cox-Buday writes that the only applicable situations where a buffered channel can increase the overall performance are the following:

  • If batching requests in a stage saves time.
  • If delays in a stage produce a feedback loop into the system.

I’ll mention one more thing, to support the above: one simple way of breaking an Erlang application is to send a process messages at a rate that is higher than its ability to process them. See “Flood the mailbox for a process” here.


Thanks for your extensive response @lbarasti!

Yeah, I saw today how blocking works. I am also aware of potential issues with process mailboxes in Erlang, but that is how they work and they are useful nonetheless.

The motivation for this question is today’s AoC.

The gist of today’s puzzle is that you basically have a ring of fibers connected by channels, think pipes. Those fibers are like automata with input/output, to give an idea. Each fiber’s output is the next fiber’s input. The last fiber closes the loop by connecting its output to the input of the first fiber. Data loops around the ring until a certain halting condition is met.

You cannot block, because data has to flow in the ring.

Fortunately, the problem statement is such that capacity = 1 is enough. But I wondered about a generalization of this problem in which automata read an arbitrary number of values from their input and output an arbitrary number of values, which are not fixed but depend on their programs and may differ per loop.

In that generalization, I would not be able to pick a fixed buffer size, I believe. It would be the responsibility of the automata’s programs to make reasonable use of the pipes.
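For illustration, a minimal sketch of such a ring with capacity-1 channels; the increment-until-100 program and the `result` channel are made up for the example:

```crystal
# A ring of fibers joined by capacity-1 channels.
# Each fiber reads from its input, transforms the value, and writes it to
# its output; the last channel feeds back into the first fiber.
STAGES = 4
channels = Array.new(STAGES) { Channel(Int32).new(1) }
result = Channel(Int32).new(1)

STAGES.times do |i|
  input = channels[i]
  output = channels[(i + 1) % STAGES]
  spawn do
    loop do
      value = input.receive
      if value >= 100 # halting condition: stop once the value is big enough
        result.send(value)
        break
      end
      output.send(value + 1) # pass the transformed value along the ring
    end
  end
end

channels[0].send(0) # inject the initial value into the ring
final = result.receive
puts final # => 100
```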

In any case, my doubt is solved. Wanted to ensure I did not overlook a way to have infinite capacity.

Thanks folks!

Not that I encourage it, but an “infinite” buffer channel can be simulated by doing the ch.send inside a spawn. The new fiber will be blocked, but the original will continue. As long as the app does not exit, all the ch.send calls will be enqueued in the scheduler and processed when a ch.receive happens.

If the ch has a capacity > 0 some sends will be able to run immediately, freeing some fibers.
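A sketch of that trick; the counts are arbitrary:

```crystal
ch = Channel(Int32).new # unbuffered

# Each send runs in its own fiber: the fiber blocks, the caller doesn't.
100.times do |i|
  spawn { ch.send(i) }
end

# The pending sends sit in the scheduler until something receives.
sum = 0
100.times { sum += ch.receive }
puts sum # => 4950
```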

It is a waste of resources though and the remarks made by @lbarasti are to be taken into account.


An analogous pattern in other programming languages is having several threads coordinated by queues. Sometimes you just don’t care about the queue size because you know how the program behaves, and it is handy that the queue adapts to the workload at runtime.

So, of course, there are use cases for bounded buffers. But I believe unbounded(?) channels could have their use cases too.