Related: Expand ameba's functionality with semantic information · Issue #513 · crystal-ameba/ameba · GitHub
The proof-of-concept crystal tool undefined command detected the undefined methods in the cases in question.
a.cr
# UNEXPECTED -- compiles even though bar is not defined
def foo(user : NamedTuple?)
bar(user.not_nil![:id]?)
end
# UNEXPECTED -- compiles even though bar is not defined
def foo(user : Tuple?)
bar(user.not_nil![0]?)
end
# UNEXPECTED -- compiles even though neither bar nor quux is defined
def foo(user : Hash(String,String)?)
bar(user.not_nil!.quux)
end
foo(nil)
crystal tool undefined a.cr
a.cr:3:3 bar 1 lines
a.cr:8:3 bar 1 lines
a.cr:13:3 bar 1 lines
a.cr:13:7 quux 1 lines
Most of Crystal’s build time is spent in LLVM.
The semantic phase does not take much time. The data passed to LLVM must be pruned, but checking for undefined methods during pruning may not be as costly as it seems.
At this point, I do not yet have the expertise to fully understand the code generated by AI, which prevents me from contributing a pull request to the Crystal repository. Understanding and utilizing AI output requires extensive knowledge and experience. As AI performance improves, there may be a paradox in which experts become more necessary, not less.
My apologies if that sounded rude, certainly wasn’t my intention to upset anyone. I’m just puzzled by this because everything else about Crystal feels so well thought out with nary a surprise (except perhaps macros but that’s for another day!)
Crystal has chosen to aggressively optimize away unreachable code
Not exactly. You look at the language expecting full, strict, exact typing, likely with type coercion, everything’s supposed to be compiled as-is, methods declared as-is, etc. This is the complete opposite of what Crystal was ever meant to be: types are optional first, and everything’s basically templated.
Types are always inferred from reality (with some caveats like ivars where we can’t always infer and do ask for the actual type): we only consider concrete types, not potential ones. If some code is unreachable, it’s discarded, because yeah, it’s dead code, but also because we don’t have any information to properly type that piece of code: we don’t have the concrete types to do the analysis and thus can’t do it.
Types are abstract constraints to the actual types. When an argument is typed, anything we give it must match a subset of the full constraint, same for the returned type. Crystal won’t generate one generic method that does everything but multiple specialized methods that will only do what is really called.
Note: the optimization is aggressive for runtime performance reasons, not for faster compile times: it would be simpler to generate one method rather than many.
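To make the specialization concrete, here is a minimal sketch (the method name double is made up for illustration): Crystal type-checks and generates one instantiation per concrete argument type that is actually used, and no instantiation at all for types that are never passed.

```crystal
def double(x)
  x + x
end

double(2)    # instantiates and type-checks double(Int32)
double("ab") # instantiates and type-checks double(String)

# No call site ever passes, say, an Array, so no Array
# specialization is ever typed or generated -- that path
# simply does not exist in the compiled program.
```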
This shifts the onus of correctness checking from the compiler back to the developer
Types are enforced: they must match every constraint, methods must exist, and you can do lots of things with types at compile time. But that only applies to concrete types in reachable code, for the reasons explained above.
without giving the developer any hint on the what or the why
We won’t complain about any unreachable bit of code. The compiler comes with some tools, though:
crystal tool unreachable for listing unreachable code
crystal tool hierarchy to see the whole tree of types
Then, as @nobodywasishere pointed out, a linter could help.
An advantage of this design is that the call graph won’t need any cleanup for unreachable cases. If a method is called with types A, B, and C, then all three are possible: there’s a concrete usage in the program. If the language compiled everything as-is, then we’d have to determine the actual concrete types to shunt impossible cases… but why bother with impossible cases in compiled code?
AFAIK, no one has explained clearly why the following code builds with no compile-time error:
def foo(user : NamedTuple?)
bar(user.not_nil![:id]?)
end
foo(nil)
Why is the method bar unreachable in this case?
Is there a general rule I can remember for similar situations? (aggressive optimization?)
EDIT:
I tried renaming the methods, as follows, and dumped the IR for foo1111111 (no IR for bar1111111 was found):
def foo1111111(user : NamedTuple?)
bar1111111(user.not_nil![:id]?)
end
foo1111111(nil)
call void @"*foo1111111<Nil>:NoReturn"(%Nil zeroinitializer), !dbg !95
----------------------
; Function Attrs: noreturn uwtable
define internal void @"*foo1111111<Nil>:NoReturn"(%Nil %user) #2 !dbg !19192 {
entry:
call void @"*Nil#not_nil!:NoReturn"(%Nil zeroinitializer), !dbg !19193
unreachable, !dbg !19193
}
-----------------------
!19192 = distinct !DISubprogram(name: "foo1111111", linkageName: "foo1111111", scope: !97, file: !97, line: 1, type: !5, scopeLine: 1, spFlags: DISPFlagLocalToUnit | DISPFlagDefinition | DISPFlagOptimized, unit: !0)
So, can anyone explain why the definition of foo1111111 returns NoReturn instead?
define internal void @"*foo1111111<Nil>:NoReturn"(%Nil %user) {
entry:
call void @"*Nil#not_nil!:NoReturn"(%Nil zeroinitializer)
}
I mentioned this in my previous post:
My understanding is that because the only value ever passed to foo is nil, the method is essentially treated as:
def foo(user : Nil)
bar(raise NilAssertionError.new)
end
I.e. the compiler just knows the .not_nil! call will fail and everything else is unreachable. It starts to correctly report the missing bar method if you do like foo({id: 10}) because then the compiler sees there’s another possible code path that requires it to type bar.
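A minimal sketch of that: adding a single non-nil call site makes the body reachable, and the missing bar is then reported (this snippet therefore intentionally does not compile):

```crystal
def foo(user : NamedTuple?)
  bar(user.not_nil![:id]?)
end

foo(nil)      # alone: compiles, the body is dead code
foo({id: 10}) # now the body must be typed:
              # Error: undefined method 'bar'
```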
Oops, I see now, thank you very much.
There was a misunderstanding here (on my part): I thought nil.not_nil! would be a compile-time error, but actually it is an exception that is only raised at runtime.
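A small sketch of the runtime nature of that failure, rescuing the NilAssertionError that not_nil! raises:

```crystal
x = nil.as(Int32?)

begin
  x.not_nil! + 1 # compiles fine; the nil check only happens at runtime
rescue NilAssertionError
  puts "caught at runtime"
end
```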
To take this explanation a step further, we could separate things out onto their own lines in addition to expanding not_nil!:
def foo(user : NamedTuple?)
if user.nil?
raise NilAssertionError.new
end
arg = user[:id]?
bar(arg)
end
foo(nil)
While that may be a silly amount of expansion, it shows the ordering constraints pretty well, I think.
And then we can apply the simplification for only ever calling with Nil:
def foo(user : Nil)
raise NilAssertionError.new
arg = user[:id]?
bar(arg)
end
foo(nil)
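Since raise is typed NoReturn, everything after it in that simplified body is dead code and never needs to be typed, which is why bar is never looked up. I believe the same effect can be sketched with any unconditional raise (the undefined call below is accepted only because it is unreachable):

```crystal
def always_raises : NoReturn
  raise "boom"
end

def demo
  always_raises
  some_method_that_does_not_exist(42) # unreachable: never type-checked
end

begin
  demo
rescue ex
  puts "raised at runtime: #{ex.message}"
end
```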
Appreciate the detailed response. While this helps me understand the design choices and why things are the way they are, it does not help with the fact that I now have a large codebase which is very likely riddled with errors but which compiles just fine, giving me a false sense of completion/correctness – until I commit more time and effort to write more test coverage to discover these errors myself instead of the compiler actively doing it for me.
The compiler can help a lot, but only so much. It won’t write functional tests for you either.
It sounds like you took over a project written by someone else, but the code base is too large and unreliable.
This experience seems to be common in Ruby and web applications. I often saw young programmers speaking badly about Ruby on social networking sites.
Right now most people use Crystal for private projects because they like it. As Crystal becomes more popular, there will be cases where code written by others in Crystal will be used at work.
As the community grows, more tools will be available. The fastest way is to build it yourself.
Sigh.
In reality, what they are dissatisfied with is not Ruby itself, but rather the poorly designed service that they suddenly had to maintain. (I live in Japan, so I am referring to Japanese workplaces and social media…)
@kojix2 Oh I see what you mean – I love Ruby and it has been my go-to so far but was really hoping Crystal would take its place for performance and maintainability reasons.
I think it’s very likely, although don’t know when.
For the issue in this discussion, I think you are a bit too pessimistic. I’ve written a lot of Crystal code, and this doesn’t happen very often; in fact, it’s hard to write code like this. Why would you accept a nilable parameter for foo, but then force-unwrap it with .not_nil! as an argument to a method that doesn’t exist?
Basically, I’ve tried to understand what the problem is, but I don’t think it’s likely that so many things would go wrong together by chance very often.
@zw963 I hear you but unfortunately the reason behind this post is because I ran into this issue real hard!
Just as a footnote: if you ran ameba it would tell you not to use not_nil! :-D
Not sure if the above was said in jest, but a quick search of my project directory reveals a number of occurrences of not_nil!, not just in the code I’ve inherited but in well-known shards including aws, redis, crinja and pg too! I believe at this point I’ve nothing more to add, and so will let the others with more experience and know-how decide what they want to make of it. I wish I could contribute a patch but I’m no compiler developer. Thanks!
not_nil! is in general a bad idea: you are telling the compiler to ignore the possibility of the variable being nil. But sometimes it’s practical.
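For completeness, a sketch of the usual alternatives to not_nil! (greet is a made-up example method):

```crystal
def greet(name : String?)
  # 1. Flow typing: inside the `if`, name is narrowed to String
  if name
    puts "Hello, #{name}"
  end

  # 2. Object#try runs the block only when the receiver is not nil
  name.try { |n| puts n.upcase }

  # 3. Fall back to a default instead of asserting
  puts(name || "stranger")
end

greet("Ada")
greet(nil)
```

Each of these makes the nil case explicit in the code instead of deferring it to a runtime NilAssertionError.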
