For 2.0: Make Default Integer Values i64 Instead of i32

It’s been about two decades since 64-bit systems became the dominant platform for general-purpose computing. Additionally, default system memory has risen from 1–2 GB to 16–32 GB, with many systems providing 128 GB or more of available memory.

Currently, if you write i = 0, its explicit equivalent is i = 0i32.
I propose making integer literals default to i64, so that i = 0 would be equivalent to i = 0i64.
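To illustrate the current behavior versus what is being proposed (a small sketch; variable names are arbitrary):

```crystal
i = 0
puts typeof(i)   # => Int32 (the current default for integer literals)

j = 0_i64
puts typeof(j)   # => Int64 (explicit today; under this proposal it would be the default)
```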

BENEFIT
I primarily write numerical applications. I sometimes run into runtime arithmetic overflows because somewhere in the code there is an implicit i32 value that I have to track down and change to an [i|u]64 in the middle of a series of arithmetic operations. This could also make [i|u]128 values easier to spot and use in code.
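As a concrete sketch of the kind of overflow I mean (the values here are arbitrary; Crystal raises OverflowError on checked arithmetic at runtime):

```crystal
a = 2_000_000_000        # an Int32; Int32::MAX is 2_147_483_647
begin
  a + a                  # 4_000_000_000 does not fit in an Int32
rescue ex : OverflowError
  puts "overflowed"
end

b = 2_000_000_000_i64    # the fix: track down the literal and widen it
puts b + b               # => 4000000000
```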

This should also apply to index values for Array, BitArray, and Enumerable, to provide better access to memory beyond 32-bit lengths/addresses.

Again, since most hardware is 64-bit, it’s actually more efficient/faster to use the native word size of the hardware as much as possible to optimize compiler efficiency and output.

Other languages have already (at least in part) made 64-bit values and operations their defaults to take better advantage of modern hardware.

IMPACT ON OLD CODE
Since 2.0 will, by definition, introduce breaking changes, this should not be too disruptive. Old code should work with little or no change, as values smaller than i64 would behave as before. Limiting values to sizes smaller than i64 would now require explicit type declarations, just as is currently necessary for sizes other than i32.


It’s been about 2 decades since 64-bit systems became the dominant platform

Yup. Then came WASM, which is 32-bit and only allows allocating 4 GB of RAM… so 32-bit targets aren’t dead after all :person_shrugging:

it’s actually more efficient/faster to use the native word size of the hardware as much as possible to optimize compiler efficiency and output.

This is completely wrong. It doesn’t matter as long as the integer size is natively supported by the target (so i64 is much slower on 32-bit targets, for example, but i32 and smaller are as fast as i64 on 64-bit targets). Also, smart/tricky benchmarks can show that i64 ends up slower (because of cache thrashing, etc.). See for example this discussion on Reddit.
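For anyone who wants to measure this themselves rather than argue from first principles, here is a minimal, non-authoritative sketch using the standard library’s Benchmark.ips helper; the array size and element values are arbitrary, and results will vary by CPU and cache:

```crystal
require "benchmark"

# Same data as Int32 vs Int64. On a 64-bit CPU both use native ALU ops;
# the Int64 array simply occupies twice the memory, so any difference
# tends to come from cache/memory traffic rather than the arithmetic.
n32 = Array.new(1_000_000) { |i| (i % 100).to_i32 }
n64 = Array.new(1_000_000) { |i| (i % 100).to_i64 }

Benchmark.ips do |x|
  x.report("sum Int32") { n32.sum }
  x.report("sum Int64") { n64.sum }
end
```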


Please bring reproducible benchmarks to back up performance claims.


Aside from the rest of the topic, I’d like to interject and point out that Crystal doesn’t support WASM, and due to the nature of the issue it’s very probable that it never will – which is sad, but it’s better to stay down-to-earth and focus on things that are within reach.

I’m fairly sure the WASM mention was in response to the claim that 64-bit is dominant and 32-bit is obsolete: WASM, a new format, doesn’t support 64-bit, which makes the original claim moot.


This should also apply to index values for Array, BitArray, and Enumerable, to provide better access to memory beyond 32-bit lengths/addresses.

AFAIK, the way memory for Arrays is currently managed, the data is contiguous in memory, meaning an array index beyond Int32 could present some challenges in lower-memory cases. The jgaskins/big_array shard on GitHub (an Array type that can hold more than 2**32 - 1 elements) already exists and works well from what I can tell, but a general-case implementation would probably need actual memory mapping, so it wouldn’t be as simple as just changing the array index to Int64.
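To make the current state concrete (a small sketch using only the standard library): Array sizes and indices are Int32 today, so the element-count ceiling is baked into the type itself, not just into available memory:

```crystal
arr = [1, 2, 3]
puts typeof(arr.size)   # => Int32 — Array#size (and indices) are Int32 today
puts Int32::MAX         # => 2147483647, the current hard cap on element count
```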
