Crystal multithreading support

Great news! Threading and support for multiple levels of parallelism are critical to writing enterprise-grade backend applications that can make maximal use of the underlying computing hardware. There are numerous examples of programming languages that use system threads under the covers but provide the ergonomics to polish the rough edges away for developers. I would encourage Crystal to research the techniques used in Go, Rust, and Java 21 to develop a threading model that will enable Crystal to penetrate the enterprise, especially for processing large amounts of data very quickly.

Good Luck…

Crystal’s concurrency model is already modeled after Go’s.

I haven’t looked deeply into Rust (only dabbled) and I haven’t used Java for anything serious in over 10 years, but what I’ve seen in both is that applications use kernel threads directly. Is that not the case anymore?


Yes, Fibers are coming to Java:

There are even continuations.


Modeling after something and actually implementing the mechanisms of the thing being modeled can be two different things. Goroutines are mapped to actual system threads under the covers, much like Kotlin’s coroutines and, in Java 21+, virtual threads. I believe Crystal’s concurrency model does not map to system threads under the covers, hence this major effort to support multithreading is being undertaken to provide actual parallel efficiency.

Here’s a link that explains it somewhat (it’s really annoying to find deeper information on goroutines and how they are scheduled onto system threads under the covers): What are goroutines and how are they scheduled? - DEV Community

So, by using actual system (OS) level threads, Crystal’s concurrency model can be enhanced with an M:N mapping, increasing throughput in many applications.

It does when -Dpreview_mt is used. Then it becomes an M:N model where M fibers are scheduled over N threads by the runtime. The number of threads can then (currently) be controlled with the CRYSTAL_WORKERS environment variable.
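For anyone following along, here’s a minimal sketch of what that looks like in practice (the fiber and worker counts here are arbitrary choices for illustration):

```crystal
# Build with: crystal build -Dpreview_mt app.cr
# Run with:   CRYSTAL_WORKERS=4 ./app
channel = Channel(Int32).new

# M fibers (8 here) scheduled over N worker threads (CRYSTAL_WORKERS).
8.times do |i|
  spawn { channel.send(i * i) }
end

sum = 0
8.times { sum += channel.receive }
puts sum # => 140, regardless of which thread ran each fiber
```

Without `-Dpreview_mt` the same code runs on a single thread; the channel semantics are identical either way.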

Yeah, the article did mention the preview flag. But from reading the article, my understanding is that more work, and perhaps testing, is needed to make it production-grade. So, combining others’ thoughts so far in this thread: we can have, in the baseline, a true M:N thread mapping that optimally uses hardware resources while keeping development ergonomics high and intuitive.

This is how Crystal’s multithreaded scheduler works. That’s what I meant when I said Crystal’s concurrency was modeled after Go’s.

The scheduling of workloads is already production-grade and it’s faster than Go. The article mentions improving it, which is great, but the scheduling of work was never the reason it isn’t yet the default in Crystal.

This trivializes the efforts folks have put into implementing multithreading in Crystal. Those efforts long predate this thread. The work you “encourage Crystal lang to research” has already been done and continues to be refined. It was even linked in the blog post you’re replying to.

Keep in mind that Crystal is developed by a small team, most of whom aren’t getting paid to work on it. Just because something isn’t fully implemented in a release yet doesn’t mean several smart people haven’t already been thinking about the problem, from multiple perspectives, for a long time.

One thing I’ve learned in my own interactions with the Crystal core team is that, no matter what I’m about to suggest, someone on the team has probably already been rolling it around in their mind for years. Maybe they’ve got branches or entire repos dedicated to that idea already. Even when I’m arguing with them or I think they’re wrong, I do try to keep in mind that they’ve probably thought more about the problem space (and how a solution might fit into the broader Crystal ecosystem) than I have, because the ideas I’m bringing up usually aren’t specific to my use cases. And when you consider that multithreading in Crystal has probably been the most requested feature to be mainlined since I first got into the language back in 2016 or so, it’s important to keep that in mind in this discussion, as well.


@vinit812010 I’d suggest checking out RFC 0002: MT Execution Contexts by ysbaddaden · Pull Request #2 · crystal-lang/rfcs · GitHub. It’s an ongoing RFC related to MT that has a lot to do with what you’re suggesting.


So is this a specific case of multithreaded use in Crystal, or the general case? i.e., can we use threads in the general case, outside async scheduling?

I see where the comparison with Go brought out the “actually” with specificity as a counterpoint, but I was referring to the general case, similar to using Java’s or C#’s thread pools. Also, regarding “never the reason”: could you elaborate on the actual reason it’s not the default, perhaps sharing a link to previous discussions?

Hold your attitude at the door. Your only argument in this diatribe is that it’s a small team and takes time, and your implication that I am some dummy who should bow to the intellectual gods of Crystal, of whom you are an acolyte, just feels tiring to discuss. I am sure there are plenty of “smart” people out there, and I am in no way trivializing their work. These smart people would encourage continued growth, increasing use of Crystal, and engagement within this forum through more constructive discussion. Small teams are fine, and perhaps pointing out the obvious seems annoying to you, but it could simply be answered with: “Thanks for your thoughts; design and approaches are being actively thought about and discussed. Active discussion of ideas is encouraged by the community in an open and non-judgmental forum. The Crystal team’s announcement of additional support to bring the preview feature into the baseline is exciting.”

Perhaps share links to previous discussion threads as well. Taking the time to post while also providing more information, which you clearly have, just helps everyone.

I do appreciate your efforts to provide me with more information on the state of development, so thank you for your post.

Great, thanks for the link; I appreciate your insights.

The use of threads is only supported within the multithreaded fiber scheduler (enabled with `-D preview_mt`), but we don’t handle threads ourselves in application code. By the time the first line of your application code executes, all of the threads have already been configured. Using threads directly in application code is not documented and not supported.
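To illustrate the supported path (this is a sketch, not an official pattern): fibers communicating over channels, with the runtime deciding which worker thread runs each fiber:

```crystal
jobs    = Channel(Int32).new(capacity: 16)
results = Channel(Int32).new(capacity: 16)

# A small worker pool: 4 consumer fibers plus one producer fiber.
4.times do
  spawn do
    # receive? returns nil once the channel is closed and drained.
    while job = jobs.receive?
      results.send(job * 2)
    end
  end
end

spawn do
  (1..10).each { |n| jobs.send(n) }
  jobs.close
end

total = 0
10.times { total += results.receive }
puts total # => 110
```

The pool and channel sizes here are arbitrary; the point is that no application code ever touches a thread directly.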

I didn’t specify this because the post you were replying to is about the multithreaded fiber scheduler, which you also said up-thread that you knew about.

Thread safety of the runtime/standard library is the limiting factor.

I also did share a link to a previous thread that describes how the multithreaded fiber scheduler works.

I never implied this. I specifically called out your actions, not you as a person.

I’m not sure where you got this from. I’m no acolyte here. I’m probably older than most of the Crystal core team (if not all of them) and may very well have been in the software racket longer than they have. They do, however, have more experience designing programming languages than I do.

Maybe you didn’t intend to, but if so, intent and impact are not in alignment. “I would encourage Crystal lang to research techniques used in Go lang” implies this has not been done. When you consider that there is already a working implementation of Go’s coroutine scheduler inside the Crystal runtime, that does indeed trivialize their work.

Sounds like you’re expecting humility while displaying none yourself. I tried to let you know that this is already how multithreading works in Crystal. Rather than ask questions, you chose to respond as if you knew it all already. If you don’t know something, lead with curiosity.

Well, encouragement is not trivialization, nor does it imply they hadn’t already done it. Perhaps I should have started with a question; however, the tone of your response, which is really aggressive, was not warranted in this regard.

OK, fair, I should have started with a question. However, I would request the same on your part: instead of saying I trivialized someone else’s work, go with a more humble response rather than the aggression I found in your response.

In any event, this is getting sidetracked. Back to the point: nowhere in the OP did I mention fiber scheduling until you brought up your excellent points. So yes, I am sure there are people working on this. But again, it’s been a while and times continue to change, so it doesn’t hurt to mention other technologies as well.

So I see you are well versed in the history of this feature, and there is an RFC open for comments. While I haven’t looked at all the responses, I hope you had a chance to provide your opinion on the approach and the discussions being had, as your experience could help steer the issues.

Thanks for this insightful discussion.
On the topic itself, I think the current RFC probably already answers a good portion of the suggestions made here. I’d appreciate it if you could take a look at it and leave comments there about anything that you feel is not properly addressed yet, or any kind of issue you see with the proposal.

On the quarreling about tone and language, I’d suggest everyone take it a bit more relaxed. Try to assume positive intent on the other side. While I can follow some of the accusations, that some comments could maybe be interpreted as aggressive or harmful in some way, that wouldn’t be and certainly wasn’t my primary impression of your comments. Please bear in mind that textual communication loses a lot of context, and it is not always easy to express oneself fully clearly in written English. Details get lost along the way and leave room for misinterpretation.
I think the context also makes it quite clear that you all have good intent on the factual level, so I would expect that you can agree on the human side as well. :heart_hands:


Hear, hear. Thanks, sir, I will take a look at that. I appreciate your insight and @jgaskins’s responses, and I apologize from my end.

Good luck on such interesting work. We are a big user of RabbitMQ as a message bus (rather than Kafka or Pulsar), so anything that can speed up messaging while providing strong message safety is something we really need. The alternative, NATS, is an option we are taking a look at. We have installed Lavin; however, it has not been approved as of now.

Crystal is something I am personally looking at for writing microservices instead of Kotlin, as the containers for the JVM are massive. The advantages of the Java ecosystem, though, cannot be overstated. We are mostly enterprise Java devs who have recently transitioned to Kotlin, and we use coroutines extensively for gRPC processing.

I appreciate your comment and @jgaskins’s links to the discussion, which I will take a look at. Looking forward to your work.

we have installed lavin however it was not approved as of now.

Do you have more information on this? Is it lacking some feature, or?

The standard annoying corporate plans: mandated use of AWS services. We have been told to transition our services over to AWS equivalents in the upcoming years. Otherwise we use AMQP 0.9.1 exclusively.

Indirectly, we are finding that RabbitMQ performance is not where we want it to be, but that could be chalked up to the speed of fsync to our SSDs. So we are putting that responsibility on AWS SQS/SNS. I am personally looking into Lavin and other technologies, as I suspect that once we start getting the billing statements from AWS, management is going to have a stroke and want us to migrate off AWS; but at that point we will need performance to be on par, at around hundreds of thousands of messages per second across the cluster, possibly millions.

We cannot lose messages, unlike on our event bus, which is Kafka.


As mentioned in the thread linked by @jgaskins, here is a standard microbenchmark site, for whatever it’s worth. I am pretty sure it may or may not track with real-life workloads, but it might provide some value to folks:

Thanks for the feedback. We did study other languages. A lot!

The concurrency model has been heavily influenced by Go (CSP). We’ve since been leaning toward structured concurrency, though I prefer Erlang/OTP for structuring applications (I prefer supervisors over nurseries).

The parallelism implementation is solid, as in “it’s stable”: it’s safe and can be used in production. Yet it’s not as performant as it could be. It certainly won’t keep all those cores at 100%. This is what the proposal in RFC 0002 aims to fix: push the implementation further (i.e., even closer to Go’s MT model) and give back some thread-level control (as in Kotlin’s execution contexts).

Thanks for the language benchmarks. I see Crystal doesn’t have any MT implementations. I’ll fix that once I have the new schedulers running (I only have a rough implementation right now that’s far from even compiling).


Awesome… thanks for your hard work and best of luck!

Have you considered Concurrent ML primitives? It is similar to Go’s and Crystal’s concurrency but allows you to create Event abstractions.
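To sketch what I mean (Crystal has no first-class CML events; the closest built-in analogue of CML’s `choose` is waiting on several channels at once, e.g. with `Channel.receive_first`; the CML names `recvEvt`/`sync` below are from CML, not Crystal):

```crystal
a = Channel(String).new
b = Channel(String).new

spawn { a.send("from a") }
spawn { b.send("from b") }

# Roughly CML's `sync (choose [recvEvt a, recvEvt b])`:
# block until whichever channel is ready first.
puts Channel.receive_first(a, b)
```

CML goes further by making the event itself a first-class, composable value that can be wrapped and combined before synchronizing, which channels alone don’t give you.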