In contrast, the following code lets the compiler infer the type of value from the assignments inside the method body; at compile time the compiler knows that value can also be a String.
def hello(value = nil)
  if value.nil?
    value = 100
  end
  p! typeof(value) # => (Int32 | String)
  p! value
end

hello("hello world")
Comparing the LLVM IR of both versions shows that at runtime the variable here is passed as an (Int32 | String) union, whereas in the original code a pointer is used. That makes the difference between the two clear.
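For reference, the IR for either version can be dumped with crystal build --emit llvm-ir hello.cr (assuming the snippet is saved as hello.cr), which should produce a .ll file that can be inspected.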
Because local variables are scoped. They don’t escape.
Doing the same analysis for instance variables would require analyzing the whole source code to determine the type of an instance variable, because types can be reopened.
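As a rough illustration (class and method names made up), both reopenings of the class below would have to be seen before the ivar's type could be inferred the same way:

# foo_types.cr
class Foo
  def set_int
    @value = 100
  end
end

# somewhere_else.cr: the same class reopened in another file
class Foo
  def set_string
    @value = "hello"
  end
end

# Only after seeing the whole program could the compiler conclude
# @value : (Int32 | String | Nil); the Nil is there because neither
# assignment happens inside #initialize.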
I think it’s basically hitting this rule: Type inference - Crystal, hence the error when you try to give it a String. The Nil part comes from it being assigned in a method that isn’t #initialize.
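A small sketch (hypothetical class) of where the Nil part comes from: if the only assignment to the ivar happens outside #initialize, it may never have been set when it is read, so Nil is part of its type.

class Greeter
  def set
    @greeting = "hello"
  end

  def greeting
    @greeting
  end
end

g = Greeter.new
p typeof(g.greeting) # => (String | Nil), Nil because #initialize never assigns @greeting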
Why can’t my original code raise a compile-time error instead? Is there a consideration here?
def hello(value = 100)
  p! typeof(value) # => String
  p! value
end

hello("hello world") # Why doesn't this raise a compile-time error when the user passes a String?
If we raised an error as above, the behavior would be the same regardless of whether the local variable is assigned to an ivar. That would be good, right?
Hi, I may not have described my question clearly, so here is an example to clarify my intention.
E.g. the following is a method defined in foo.cr:
def foo(value = 100)
  # ... probably a long method, but that's not important
end
The following is another file, bar.cr, which invokes the foo method:
require "./foo"
hello("hello")
As a new user of the code base, when I open foo.cr and see the definition of the foo method, I would assume the type of value is Int32. Why? Because there is no compile error, and the method has a default value of 100, so value should be an Int32.
But I am wrong. In fact, I made exactly the same mistake when reading code I wrote myself two years ago. I think that, as a type-safe language, the compiler should not allow this; the benefits of type safety are one of the main reasons I left Ruby and started using Crystal.
Moreover, if I give an ivar a default value, assigning it a value of a different type later raises a compile-time error, so why not handle local variables the same way?
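For reference, a minimal sketch of the ivar case being referred to (names made up); here the argument's default value pins the inferred ivar type, so passing a String fails at compile time:

class Foo
  def hello(value = 100)
    @value = value # inferred from the default value as Int32 (plus Nil, since this isn't #initialize)
  end
end

Foo.new.hello("hello") # compile-time error: @value can't hold a String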
As it stands your #foo method doesn’t have a type restriction, so it makes sense that it’s allowed to accept any value, and it will be duck-typed as it’s used in the method. If you want to be stricter and prevent this kind of scenario, just make it value : Int32 = 100 and call it a day. Crystal is nice and doesn’t require type restrictions everywhere, but IMO use them when you can.
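For illustration, a small sketch of the suggested restriction applied to the foo method from above:

def foo(value : Int32 = 100)
  p! typeof(value) # => Int32
end

foo            # OK, uses the default of 100
foo(42)        # OK
# foo("hello") # compile-time error: String doesn't satisfy the Int32 restriction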
Hi, maybe you are right, but it’s a bit confusing. Even though I’m not a newbie Crystal user, I messed it up again, just because I remembered part of what asterite said but forgot he was talking about ivars.
I agree that if there is no default value then, as you said, it makes sense that it’s allowed to accept any value. But if a default value is given, why can’t we infer its type from that value? That seems like the best outcome I can think of, unless there’s a reason it can’t be done like this.
x = 1
p typeof(x)
x = "hello"
Why, in the above code, can we safely assume the type of x is Int32?
Because Crystal is making a union type out of all the values x COULD be. That is very important.

Word to the wise: unlearn your Ruby programming when approaching Crystal. As a fellow former Rubyist, things really don’t work the same and you really can’t try to program the same way. IMO you really shouldn’t union-type things unless you absolutely have to and are willing to fully deal with the implications of using a union type. When Ruby got a type-checking system it was basically built on top of what Ruby could already do; Crystal is entirely built around a type system, so you have to treat it that way from the get-go. Basically, don’t worry too much about this, because you probably SHOULDN’T do this too much, and we have ways to handle it in code, like .is_a?.
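For illustration, a minimal sketch of a union type and narrowing it with .is_a?, as mentioned above:

x = 1
x = "hello" if rand < 0.5
p typeof(x) # => (Int32 | String)

if x.is_a?(String)
  p x.upcase # the compiler knows x is a String in this branch
else
  p x + 1    # and an Int32 here
end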