I see them more as complementary: nurseries come in variants that only collect results (including exceptions), whereas supervisors would be better suited to complex scenarios, with support for limits on simultaneous execution, restarts and whatnot. So supervisors would be the choice for building the main loop of a web server, whereas a nursery would be the main choice during the actual execution of an incoming request (perhaps the program wants to fetch a bunch of resources concurrently?).
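To illustrate the request-handling case, here is a minimal sketch of a nursery-like scope in Crystal, using only `spawn`, `Channel`, and `WaitGroup` from the stdlib. The `resources` names are hypothetical stand-ins for real fetch calls:

```crystal
require "wait_group"

# Nursery-like scope: spawn one fiber per resource, collect every result,
# and don't leave the scope until all fibers have finished.
resources = ["a", "b", "c"]
results = Channel(String).new(resources.size)
wg = WaitGroup.new(resources.size)

resources.each do |name|
  spawn do
    # Pretend to fetch the resource concurrently.
    results.send("fetched #{name}")
  ensure
    wg.done
  end
end

wg.wait # The "nursery" closes only when every child fiber is done.

collected = resources.size.times.map { results.receive }.to_a.sort
collected.each { |r| puts r }
```

Sorting before printing just makes the output deterministic; fiber completion order is not guaranteed.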
What are your thoughts? It is very much possible that I'm missing some aspect of supervisors, as I haven't actually used Erlang/Elixir for anything.
@yxhuvud I see nurseries as abstract structures to group fibers while OTP is about starting/monitoring explicit actors, which is more concrete. I guess they may not be competing and supervisors might be buildable on top of nurseries, yes.
The main problem isn't the synchronization; it's that you don't have access to anything related to the event loop, which means that nothing in the stdlib related to IO or Channels is available.
I'm surprised Thread is documented, and I think that's unintentional. It's been explicitly kept out of the docs in the past, and it still has the :nodoc: magic comment to avoid documenting it, but it made its way into the docs somehow.
To be clear, I wouldn't use Thread directly. In addition to what others have said, I seem to remember garbage collection is also an issue on threads not allocated by the Crystal scheduler. I can't remember what it was specifically, but I remember a member of the core team mentioning it … somewhere.
Ruby has Thread.new and Ractor.new, but in Crystal you use spawn do ... end for concurrency. spawn creates a new Fiber. The Thread class represents a low-level OS thread, and there's no need for users to touch it directly.
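As a quick sketch of what that looks like in practice, here is a minimal fiber example using `spawn` and a `Channel` (the values are arbitrary, just to show the handoff):

```crystal
# A spawned fiber sends values over a channel; the main fiber receives
# them until the channel is closed.
channel = Channel(Int32).new

spawn do
  3.times { |i| channel.send(i) }
  channel.close
end

sum = 0
while value = channel.receive?
  sum += value
end
puts sum # 0 + 1 + 2
```

Note that the main fiber never touches a Thread: blocking on `receive?` simply lets the scheduler run the other fiber.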
Here's a typical producer-consumer pattern (written with Claude's coding assistance):
require "wait_group"

# Compile with:
# crystal run example.cr -Dpreview_mt -Dexecution_context

WORKERS = 32
MAX_PARALLELISM = 8
JOB_COUNT = 1_024
EXPECTED_SUM = 523_776 # (0 + 1023) * 1024 / 2

consumers = Fiber::ExecutionContext::Parallel.new("consumers", MAX_PARALLELISM)

jobs = Channel(Int32).new(64)
partials = Channel(Int32).new(WORKERS)
wg = WaitGroup.new(WORKERS)

# Parallel consumers: each worker accumulates a local sum.
WORKERS.times do
  consumers.spawn do
    local_sum = 0
    while value = jobs.receive?
      local_sum += value
    end
  ensure
    partials.send(local_sum || 0) # Send one partial result per worker.
    wg.done
  end
end

# Producer: send 0..1023 to jobs, then close to signal workers to stop.
JOB_COUNT.times { |i| jobs.send(i) }
jobs.close
wg.wait

# Final reduce: aggregate partial sums from all workers.
total = WORKERS.times.sum { partials.receive }

puts "total: #{total}"
puts "expected: #{EXPECTED_SUM}"
puts "ok?: #{total == EXPECTED_SUM}"