Is this a bug?

I have this hash of UInt64s below.

    private WITNESS_RANGES = {
      341_531u64 => [9345883071009581737u64],
      1_050_535_501u64 => [336781006125u64, 9639812373923155u64],
      350_269_456_337u64 => [4230279247111683200, 14694767155120705706, 16641139526367750375] of UInt64,
      55_245_642_489_451u64 => [2, 141889084524735, 1199124725622454117, 11096072698276303650] of UInt64,
      7_999_252_175_582_851u64 => [2, 4130806001517, 149795463772692060, 186635894390467037, 3967304179347715805] of UInt64,
      585_226_005_592_931_977u64 => [2, 123635709730000, 9233062284813009, 43835965440333360, 761179012939631437, 1263739024124850375] of UInt64,
      18_446_744_073_709_551_615u64 => [2, 325, 9375, 28178, 450775, 9780504, 1795265022] of UInt64,
      318_665_857_834_031_151_167_461u128   => [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37] of UInt64,
      3_317_044_064_679_887_385_961_981u128 => [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41] of UInt64
    }

I get this warning for some 19-digit values in the 3rd and 4th hash arrays.

 76 | 350_269_456_337u64 => [4230279247111683200, 14694767155120705706, 16641139526367750375] of UInt64,
                                                  ^
Warning: 14694767155120705706 doesn't fit in an Int64, try using the suffix u64 or i128

In computeC2c.cr:76:73

 76 | 350_269_456_337u64 => [4230279247111683200, 14694767155120705706, 16641139526367750375] of UInt64,
                                                                        ^
Warning: 16641139526367750375 doesn't fit in an Int64, try using the suffix u64 or i128

In computeC2c.cr:77:74

 77 | 55_245_642_489_451u64 => [2, 141889084524735, 1199124725622454117, 11096072698276303650] of UInt64,
                                                                         ^
Warning: 11096072698276303650 doesn't fit in an Int64, try using the suffix u64 or i128

However, those values fit within 64 bits, whose maximum is
(2**64 - 1) => 18446744073709551615

When I explicitly define the array values with the u64 suffix there are no warnings.
Either way, the code appears to work despite the warnings.
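For reference, a minimal sketch of the suffixed form that produces no warnings (values taken from the third array above):

```crystal
# With explicit u64 suffixes each literal is unambiguous,
# so the compiler issues no warning.
ary = [4230279247111683200u64, 14694767155120705706u64, 16641139526367750375u64]
puts ary.class # => Array(UInt64)
```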

Since all those values are valid UInt64s, isn’t this an error in parsing them?
Using Crystal 1.11.2.

The default integer type is signed, and the compiler is supposed to interpret untyped integer literals as signed integers. If you want unsigned values, please be explicit.
The warning is completely correct. Those numbers don’t fit into a (signed) Int64.
This warning may become an error in the future.
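A short sketch of the rule being described here, assuming the default literal typing just outlined:

```crystal
# Suffixless integer literals get a signed type by default.
puts typeof(1234)                    # => Int32
puts typeof(9223372036854775807)     # => Int64 (Int64::MAX)

# Anything larger has no signed 64-bit representation, hence the warning;
# an explicit suffix states the intended type and silences it.
puts typeof(14694767155120705706u64) # => UInt64
</code>
```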

I understand what you’re saying, but I don’t feel that should be the preferred or default behavior.

If an array is declared as: ary = [x, y, z] of UInt64
the compiler knows (or should be made to know) those values are to be treated as unsigned 64-bit integers, not signed 64-bit integers.

It’s not intuitively logical (or pleasant) to have to manually set individual values to be the same type you’ve already told the compiler to treat them as.

Why then have `of [U]Int[y]` if it’s really not a universal shortcut for declaring arrays?

And this should be documented so people will not be caught off guard by this (IMHO) quirk.

Sorry, I completely overlooked the of UInt64 suffix on the array literal because it’s outside the viewport.

So a minimal reproduction would be this:

[14694767155120705706] of UInt64 # Warning: 14694767155120705706 doesn't fit in an Int64, try using the suffix u64 or i128

I agree that this warning should not be issued. There is an explicit type declaration and no ambiguity.

The warning is a syntax one, not a semantic one, so I think it is working as intended unless we somehow decide that _i128 and _u128 aren’t needed for 128-bit integer literals.

I don’t see how this is dependent on the suffix requirement for 128-bit integer literals.
To me this is inconsequential. The compiler knows it’s a literal involved in autocasting.
So the warning about ambiguity is unnecessary and the compiler should try to avoid it. But that’s not easy, of course.

That’s dependent on Array#[]= having a relevant type restriction in the value parameter, which is what’s inconsequential here. All integer literals within Int64::MAX.to_u64 + 1..UInt64::MAX have this warning, because they should have been an Int128 instead of UInt64 if the rules for smaller literals are followed, but that would be a breaking change. So the warning is here to stay until 2.0.
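The range mentioned above can be checked directly; the two `of UInt64` lines below illustrate the boundary (per the explanation, only the second one triggers the warning):

```crystal
# Bounds of the warned range: (Int64::MAX.to_u64 + 1)..UInt64::MAX
puts Int64::MAX.to_u64 + 1 # => 9223372036854775808
puts UInt64::MAX           # => 18446744073709551615

# Literals at or below Int64::MAX autocast into `of UInt64` silently;
# literals above Int64::MAX warn despite the explicit element type.
[9223372036854775807] of UInt64 # no warning
[9223372036854775808] of UInt64 # Warning: doesn't fit in an Int64
```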

Curious questions.

In the future, if you declare an array with of UInt128 and some elements are greater than Int128::MAX would that also then generate a warning?

I’m trying to understand why the explicit array declaration of its members types doesn’t force the compiler to test and treat all the values as the declared type.

Does this have something technically to do with the way LLVM works?
If not, can’t you just make the compiler do whatever is necessary to fulfill the declaration directive?

If the suffixless literals in that range become UInt128 expressions, and later we decide that 256-bit integers will be in the language, then those literals will naturally become candidates for Int256 literals instead, and that would generate a similar warning. This has nothing to do with the presence of an Array declaration or any other semantic property.

So it has to do with whether you write a literal as `1234` vs `1234_u64`?