I’ve noticed recently that the following example, where a is a local variable, compiles and runs fine:
struct Test
  def initialize(a : Array(String | Int32))
  end
end

test = Test.new([1, 2])
but this example, different only in that a is now an instance variable, does not:
struct Test
  def initialize(@a : Array(String | Int32)) # => Error: instance variable '@a' of Test must be Array(Int32 | String), not Array(Int32)
  end
end

test = Test.new([1, 2])
Am I missing something, or is this behavior unintended? I would understand if both cases had this error (the Int32 type is not the same as the union Int32 | String, which has a different size in memory), but I’m confused as to why Crystal can fix this in the first case but not the second.
Cheers!
The language is a bit inconsistent here and it should be fixed. What happens is that type restrictions on method arguments act like filters. Is it an array that holds ints or strings? Yes! It holds some of those things. If you read from it you will get one of those things (well, in this case you’ll only ever get an int, but an int is still an int or string). However, a type declaration on an instance var is stronger. You can’t assign an array of int to something declared to hold an array of int or string, because then you would be able to push a string into it, and you can’t push a string into an array of int.
Does that make sense?
In summary, type restrictions are for reading, type declarations are for writing.
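A minimal sketch of that read/write distinction (the `first_element` helper is just for illustration). Reading through the covariant restriction is safe, because every Int32 is also an Int32 | String; writing is where it would break:

```crystal
# Reading is safe: the restriction only promises what comes OUT.
def first_element(arr : Array(String | Int32)) : String | Int32
  arr.first
end

ints = [1, 2, 3]         # Array(Int32)
puts first_element(ints) # => 1

# Writing is the unsound direction. If Crystal also let `ints` be
# *stored* as Array(String | Int32) (as an instance var declaration
# would require), something like this would become possible:
#
#   mixed = ints    # imagine this typed `mixed` as Array(String | Int32)
#   mixed << "oops" # a String would land inside an Array(Int32)
```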
I think we should make all type restrictions be invariant by default for generic types, and introduce a different syntax for covariance (something like Array(T) where T < U). Unfortunately that makes the language a lot more complex.
Yes, the difference in behaviour is surprising. But each one is also the least surprising behaviour for the respective feature. That is, until there is a feature to express covariant semantics.
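In the meantime, the instance-variable version from the question can be made to compile by giving the array literal the union element type up front, so the declared type matches exactly. A sketch (the `getter` is only added here to read the value back):

```crystal
struct Test
  getter a : Array(String | Int32)

  def initialize(@a : Array(String | Int32))
  end
end

# `[1, 2] of String | Int32` builds an Array(String | Int32) directly,
# so no covariant conversion is needed for the instance variable:
test = Test.new([1, 2] of String | Int32)
puts test.a # => [1, 2]
```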