Type inference from the default value of an argument not working?

Please check the following example.

def hello(value = 100)
  p! typeof(value) # => What is the output here?
  p! value
end

hello("hello world")
The code answers the question:

typeof(value) # => String
value         # => "hello world"

Why is the output String, and not Int32 | String?


Because the call hello("hello world") discards the default argument value, since it's known at compile time that it isn't needed.
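To illustrate (this sketch is my own, not from the thread): each call site instantiates the method with the concrete argument type it passes, and the default only matters for calls that omit the argument.

```crystal
# Each call instantiates `hello` with the type actually passed;
# the Int32 default is only consulted when the argument is omitted.
def hello(value = 100)
  typeof(value)
end

p hello                # Int32  (default used)
p hello("hello world") # String (default discarded)
p hello(1.5)           # Float64
```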

You might be even more surprised that the following also works:

def hello(value : String = 100)
  p! typeof(value) # => What is the output here?
  p! value
end
 
hello("hello world")

At least for now type restrictions don’t apply to default values: Type restriction against default value · Issue #12829 · crystal-lang/crystal · GitHub
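A hedged sketch of that behavior (the method name greet is mine): the String restriction is never checked against the Int32 default here, because this program never actually uses the default. What should happen when the default is used is the subject of the linked issue.

```crystal
# The `String` restriction is not checked against the `100` default,
# since no call in this program omits the argument.
def greet(value : String = 100)
  value
end

p greet("hi") # the restriction still holds for explicit arguments
```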


Thanks for the explanation.

But I still remember asterite saying that the compiler doesn't look at calls to infer the type of instance vars; I found the following link.

So why does this happen for a local variable? (value is treated as a local variable, right?)

As a contrast, in the following code the compiler infers the type of value from both the argument at the call site and the assignment inside the method, so at compile time it knows that value can be an Int32 or a String.

def hello(value = nil)
  if value.nil?
    value = 100
  end
  p! typeof(value) # => (Int32 | String)
  p! value
end

hello("hello world")

Comparing the LLVM IR of both versions, you can see that at runtime the variable here is passed as a union type (Int32 | String), whereas in the original code a plain pointer is used. This clarifies the difference between the two.

1 Like

Because local variables are scoped. They don’t escape.

Doing the same analysis for instance variables would require analyzing the whole source code to determine the type of an instance variable, because types can be reopened.
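A small sketch of why (class and method names are mine): any file in the program can reopen a class and assign a new type to the same ivar, so no single method or call site determines its type.

```crystal
class Foo
  def set_int
    @value = 100
  end

  def value
    @value
  end
end

# Elsewhere, possibly in another file, the class is reopened
# and the same ivar is assigned a different type:
class Foo
  def set_string
    @value = "hello"
  end
end

foo = Foo.new
p typeof(foo.value) # a union of Int32, String, and Nil
```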


Okay, could you please explain why the following code doesn't compile?

class Foo
  def hello(value = 100)
    @value = value
    p! typeof(value)
    p! value
  end
end

foo = Foo.new
foo.hello("hello world")

10 | foo.hello("hello world")
               ^
Error: expected argument #1 to 'Foo#hello' to be (Int32 | Nil), not String

Overloads are:

  • Foo#hello(value : ::Int32 | ::Nil = 100)

value is local, so it doesn't escape; when I pass "hello world", value should be String, but it is inferred as (Int32 | Nil) here.

In fact, I found an issue similar to your hello(value : String = 100) example; I just wonder why this one didn't raise an error to inform me.

I think it's basically hitting this rule: Type inference - Crystal, hence the error when you try to give it a String. The Nil part comes from the ivar being assigned in a method that isn't #initialize.
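A sketch of that rule (class names are mine): an ivar assigned a literal in #initialize gets the literal's type, while one assigned only in other methods is inferred as nilable, because it may be read before that method ever runs.

```crystal
class InInit
  def initialize
    @x = 100 # assigned in #initialize: @x : Int32
  end

  def x
    @x
  end
end

class OutsideInit
  def set
    @x = 100 # assigned outside #initialize: @x : (Int32 | Nil)
  end

  def x
    @x
  end
end

p typeof(InInit.new.x)      # Int32
p typeof(OutsideInit.new.x) # (Int32 | Nil)
```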


Thanks for pointing that out; I don't even remember seeing that doc before.

Why can't we make my original code raise a compile-time error instead? Is there a consideration here?

def hello(value = 100)
  p! typeof(value) # => String
  p! value
end

hello("hello world") # Why can't this raise a compile-time error, since the user gave a String?

If we raised an error like above, the behavior would be the same regardless of whether the local var is later assigned to an ivar. That would be good, right?

What’s wrong with allowing that code to compile and run?

If you forget about types, you would expect that to run, right? So with types it would be good if it compiles and runs.

Types are a means to distinguish programs that would work.


Hi, I probably didn't describe my question clearly; here is an example to clarify my intention.

E.g. the following is a method defined in foo.cr:

def foo(value = 100)
   # ... probably a long method, but that not important
end

The following is another file, bar.cr, which invokes the foo method.

require "./foo"

foo("hello")

As a new user of the code base, when I open foo.cr and see the definition of the foo method, I would assume the type of value is Int32. Why? Because there is no compile error, and the method has a default value of 100, so value should be an Int32.

But I am wrong. In fact, I made exactly the same mistake when reading code I wrote myself two years ago. For a type-safe language, I think the compiler should not allow this; that kind of safety is one of the main reasons I left Ruby and started using Crystal.

Moreover, if assigning such a default value to an ivar later raises a compile-time error, why not handle local vars the same way?

As it stands your #foo method doesn't have a type restriction, so it makes sense that it's allowed to accept any value, which will be duck-typed as it's used in the method. If you want to be stricter and prevent this kind of scenario, just make it value : Int32 = 100 and call it a day. Crystal is nice and doesn't require type restrictions everywhere, but IMO use them when you can.
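A minimal sketch of that suggestion: with a restriction on the parameter, a mismatched call becomes a compile-time error rather than a silently duck-typed instantiation.

```crystal
def hello(value : Int32 = 100)
  value
end

p hello    # => 100
p hello(5) # => 5
# hello("hello world") # compile-time error: expected argument #1 to be Int32, not String
```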

Hi, maybe you are right, but it's a bit chaotic here. Even though I'm not a newbie Crystal user, I messed it up again, just because I remembered part of what asterite said but forgot he was talking about ivars.

I understand that if there is no default value, then, as you said, it makes sense that any value is accepted. But if a default value is given, why can't we infer the type from that value? That seems like the best outcome I can figure out, unless there's a reason it can't work like this.

x = 1
p typeof(x)
x = "hello"

Why can we safely assume in the code above that the type of x is Int32?

Because Crystal is making a union type out of all the values x COULD be. That is very important.

Word to the wise: unlearn your Ruby programming when approaching Crystal. As a fellow former Rubyist, things really don't work the same and you really can't try to program the same way. IMO you really shouldn't union-type things unless you absolutely have to and are willing to fully deal with the implications of using a union type.

When Ruby got a type-checking system it was basically built on top of what Ruby already could do, whereas Crystal is entirely built around a type system, so you have to treat it that way from the get-go. Basically don't worry too much about this, because you probably SHOULDN'T do this too much, and we have ways to handle it in code like .is_a?.
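For example (my own sketch), when a union does appear, you narrow it with case or .is_a? before using type-specific methods:

```crystal
# x could be either branch, so its compile-time type is the union.
x = rand < 0.5 ? 100 : "hello"
p typeof(x) # (Int32 | String)

case x
when Int32
  puts x + 1    # x is narrowed to Int32 in this branch
when String
  puts x.upcase # x is narrowed to String in this branch
end
```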


At a quick glance, in such a case you would normally use overloads, defining two methods: one for Int32 and the other for String.

def hello(value : Int32 = 100)
  p value
end

def hello(value : String)
  hello(value.to_i)
end

hello(20)
hello
hello("30")

What is the reason you don't want to use overloads here?
Perhaps that is where the essence of the problem lies.