Just like Python did: it merged PyInt and PyLong into a single PyLong to avoid confusion.
So it depends on Crystal's typical use cases.
For, say, a web application, the user really doesn't care about int types and sizes
and doesn't want to deal with possible overflow/underflow.
For a performance-critical application, say an embedded one,
the user wants explicit control over the int type and size,
and then it makes sense that Crystal defaults to Int32.
Now, which side does Crystal fall on?
PS: I think BigInt is the same as Python's PyLong?
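For what it's worth, the contrast can be seen directly in Crystal code: fixed-width ints check for overflow at runtime, while `BigInt` grows without bound like Python's int. A small sketch (the values here are from Crystal's documented semantics, not from this thread):

```crystal
require "big"

x = Int32::MAX              # 2147483647
# x + 1 would raise OverflowError at runtime;
# the wrapping operator &+ makes the wrap-around explicit:
puts x &+ 1                 # => -2147483648

big = BigInt.new(2) ** 100  # arbitrary precision, like Python's int
puts big                    # => 1267650600228229401496703205376
```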
One of the advantages of Crystal is that it is fast.
BigInt is much, much slower than a native int, because every number must be allocated on the heap and operations on BigInt can't be optimized by the compiler. This may not matter much for dynamically typed languages, because all of their variables are already on the heap and in most cases won't be optimized, but for a compiled, statically typed language the situation is different: languages like Go or Rust use fixed-size integers, and Crystal belongs to the same "league".
I feel like you might be underestimating the BigInt performance penalty. It's not just your application's math you have to consider. There's all kinds of math that happens under the hood: I/O buffer usage, expanding an array's capacity, getting the value at an array index, locating the bucket in a hash/dictionary for a given key, passing a struct instance by value, etc.
In Python, all the math that isn't exposed directly in Python code is implemented in C using primitive ints. In Crystal, all of those operations are implemented in Crystal. If Crystal integers were all BigInts, all of those operations would involve heap allocations. Iterating over an array would involve array.size * 3 heap allocations (one for incrementing the index plus two for the pointer arithmetic, for every single element in the array).
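To make the hidden arithmetic concrete, here's a hypothetical sketch of index-based iteration (not Crystal's actual stdlib code), with the integer operations that would each become a BigInt allocation marked in comments:

```crystal
# Hypothetical sketch of what iterating an array involves internally.
def each_sketch(arr : Array(Int32))
  i = 0
  while i < arr.size          # integer comparison against the size
    elem = arr.to_unsafe[i]   # pointer arithmetic: base + i * sizeof(Int32)
    yield elem
    i += 1                    # index increment
  end
end

each_sketch([10, 20, 30]) { |e| puts e }
```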
I just ran a benchmark on an Ubuntu server running an Intel Xeon (a pretty common production configuration) that just increments an Int64 vs a BigInt, 1000x per benchmark iteration to reduce the impact of benchmark overhead. The BigInt was over 300x slower.
Int64 3.13M (319.09ns) (± 1.32%) 0.0B/op fastest
BigInt 9.83k (101.73µs) (± 0.66%) 47.0kB/op 318.80× slower
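The post doesn't include the benchmark source, but a minimal sketch that would produce output in this shape, using Crystal's Benchmark.ips (the report names and loop bodies here are assumptions), looks like:

```crystal
require "benchmark"
require "big"

n = 0_i64
b = BigInt.new(0)

Benchmark.ips do |x|
  # 1000 increments per iteration, to amortize benchmark overhead
  x.report("Int64")  { 1000.times { n += 1 } }
  x.report("BigInt") { 1000.times { b += 1 } }
end
```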
When you consider how much of this arithmetic is happening under the hood (it’s a lot more than most people realize), that’s far too much performance to trade to avoid having to think about the width of an integer.