The Crystal Programming Language Forum

Why is Crystal language faster than Ruby language?

I would very much like to know what exactly makes Crystal faster than Ruby when the code looks so similar. The short answer could be that Crystal is compiled and Ruby is interpreted, yet I would like to understand more about the language design itself.
thank you… :smiley:

In Ruby, methods can be redefined at runtime, even for a single instance; classes can have a module included or extended into them at any point; constants are not guaranteed to stay constant; and objects can have instance variables added and removed at any time outside of #initialize. All of this requires checks and lookups at runtime that Crystal doesn’t have to do.
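To make that concrete, here is a small Ruby demonstration of the kind of runtime reshaping described above (a toy example, not anything from Ruby's or Crystal's internals):

```ruby
# Ruby lets you reshape objects and classes while the program runs,
# so every method call must go through a dynamic lookup.

class Greeter
  def hello
    "hello"
  end
end

a = Greeter.new
b = Greeter.new

# Redefine #hello for one single instance via its singleton class.
def a.hello
  "bonjour"
end

# Add an instance variable long after #initialize has run.
b.instance_variable_set(:@mood, "cheerful")

puts a.hello                          # => "bonjour" (singleton definition wins)
puts b.hello                          # => "hello"   (class definition still applies)
puts b.instance_variable_get(:@mood)  # => "cheerful"
```

In Crystal none of this compiles: the set of methods and instance variables of a type is fixed at compile time, so calls can be resolved without any runtime lookup.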

The fact that objects in Crystal have a predictable size allows better use of memory, which in turn means less work for the garbage collector.

In some circumstances Crystal can store values on the stack instead of the heap, which is faster and again puts less pressure on the garbage collector.

The Crystal standard library also handles IO in a less wasteful way than Ruby. Compare, for example, Ruby’s Array#pack and String#unpack with Crystal’s IO#write_bytes: the Ruby methods go through intermediate arrays and strings, while the Crystal method writes directly into the IO.
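For illustration, here is the Ruby side of that comparison; the intermediate String returned by Array#pack is exactly the allocation Crystal's IO#write_bytes avoids (a sketch, using StringIO as a stand-in for any IO):

```ruby
require "stringio"

io = StringIO.new

# Ruby: Array#pack first builds an intermediate String in memory...
packed = [0x12345678].pack("l<")   # 4-byte little-endian encoding
# ...which is then copied into the IO as a second step.
io.write(packed)

io.rewind
puts io.read.unpack1("l<")         # round-trips back to 0x12345678
```

In Crystal, `io.write_bytes(0x12345678, IO::ByteFormat::LittleEndian)` encodes the integer straight into the IO's buffer without building that intermediate String.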

I’m sure other people can come up with more reasons.


Another major factor is that we generate LLVM IR and then hand that over to LLVM for native code generation and, more importantly, code optimization.

Since in Crystal all types are known in advance, i.e. they’re present in the above-mentioned LLVM IR, optimizations can be done at compile time that are impossible or very hard to do in Ruby. A simple example is constant folding, executing basic math operations at compile time: when you write a = 2 + 3, the code effectively generated is a = 5.
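The constant folding described above can be sketched with a toy AST walker (a hypothetical illustration, not Crystal's or LLVM's actual code):

```ruby
# Toy constant folder. An expression is either an Integer literal,
# a Symbol standing for a variable, or [op, left, right].
def fold(node)
  return node unless node.is_a?(Array)

  op, l, r = node
  l = fold(l)
  r = fold(r)

  if l.is_a?(Integer) && r.is_a?(Integer)
    # Both operands are known at "compile time": evaluate now.
    l.public_send(op, r)
  else
    # A variable blocks folding; keep the (partially folded) node.
    [op, l, r]
  end
end

puts fold([:+, 2, 3]).inspect           # a = 2 + 3 becomes a = 5
puts fold([:*, [:+, 2, 3], :x]).inspect # inner sum folds, :x keeps the outer node
```

A runtime interpreter can only do this for expressions it can prove constant up front; with Crystal's fully typed IR, LLVM applies this and far more aggressive optimizations across the whole program.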

Additionally, Ruby’s internal bytecode has to be generic enough to run on any CPU and architecture, so it can make little use of architecture-specific optimizations. Whereas when you generate native code, you can target a specific architecture or even CPU, applying optimizations based on knowledge of its performance characteristics. For example, there may be multiple ways to express a certain operation using different CPU instructions, one working better on an Intel CPU and another on an AMD CPU.

Yet all of this is just a glimpse; native code generation and a runtime bytecode interpreter are simply very different approaches. You can get an impression of how different just by looking at the amount of code needed for the currently work-in-progress bytecode interpreter for Crystal: crystal i by asterite · Pull Request #10910 · crystal-lang/crystal · GitHub



Do you think it will be possible to run a project in bytecode mode (like Dart) to avoid long rebuild times after each modification during development? Or is it not made for that?

It’s not made for that. The semantic analysis has to be done over and over each time you run a project, even in bytecode mode. That said, I’m definitely seeing it boot much faster than compiled mode, because generating bytecode seems to be much faster than generating LLVM code.

Ok, thank you