Yeah, I’ve always noticed that the codegen (bc+obj) phase takes the longest. That’s where the compiler hands the IR to LLVM to be turned into bitcode and then object code.
[alexa@lain benben]$ CRYSTAL_OPTS=--stats rake
=== Fully Optimized Build ===
shards build -p -Dpreview_mt --release --no-debug -Dyunosynth_wd40
Dependencies are satisfied
Building: benben
Parse: 00:00:00.000173638 ( 1.02MB)
Monkey patching the ZStandard bindings to fix a memory leak
Semantic (top level): 00:00:00.586158087 ( 171.49MB)
Semantic (new): 00:00:00.002499083 ( 171.49MB)
Semantic (type declarations): 00:00:00.042015682 ( 187.49MB)
Semantic (abstract def check): 00:00:00.076894181 ( 187.49MB)
Semantic (restrictions augmenter): 00:00:00.010981664 ( 187.49MB)
Semantic (ivars initializers): 00:00:00.105040268 ( 251.49MB)
Semantic (cvars initializers): 00:00:00.010272253 ( 251.49MB)
Semantic (main): 00:00:01.107068908 ( 507.67MB)
Semantic (cleanup): 00:00:00.000920854 ( 507.67MB)
Semantic (recursive struct check): 00:00:00.001233433 ( 507.67MB)
Codegen (crystal): 00:00:01.604538998 ( 571.92MB)
Codegen (bc+obj): 00:01:01.061826937 ( 571.92MB)
Codegen (linking): 00:00:00.856938455 ( 571.92MB)
Codegen (bc+obj):
- no previous .o files were reused
strip --strip-unneeded bin/benben
rst2man man/benben.1.rst > man/benben.1
gzip -f -9 -k man/benben.1
[alexa@lain benben]$
This is about 48k LOC, built on Slackware Linux with a Core i9-10850K and 64GB RAM, over an NFS mount.
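One thing worth noting about the "no previous .o files were reused" line: in `--release` mode Crystal compiles the whole program as a single LLVM module, so per-file object reuse largely doesn't apply there anyway. For non-release builds, though, the intermediate `.o` files live in Crystal's cache directory, and if that ends up on the NFS mount it could hurt. A minimal sketch, assuming the cache is the issue (`/tmp/crystal-cache` is just an example path; `CRYSTAL_CACHE_DIR` is the real environment variable Crystal honors):

```shell
# Point Crystal's compilation cache at fast local disk instead of NFS,
# so debug builds can reuse previously compiled .o files.
export CRYSTAL_CACHE_DIR=/tmp/crystal-cache
mkdir -p "$CRYSTAL_CACHE_DIR"

# Then rebuild and compare the Codegen (bc+obj) numbers:
# CRYSTAL_OPTS=--stats rake
```

This won't change much for the fully optimized single-module build above, but it's cheap to test for the day-to-day edit/compile loop.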
One thing I’m curious about is whether adding type annotations everywhere speeds up compile times. If the code doesn’t carry much type information, does the compiler take longer to infer everything?
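For concreteness, this is the kind of difference I mean (a toy sketch, not from benben). My assumption is that any win would show up in the Semantic phases, where inference happens, rather than in Codegen (bc+obj), which dominates the timings above:

```crystal
# Unannotated: the compiler infers argument and return types
# from every call site it sees.
def add(a, b)
  a + b
end

# Annotated: type restrictions are declared up front, so there is
# less for inference to explore (in theory).
def add_typed(a : Int32, b : Int32) : Int32
  a + b
end

puts add(1, 2)       # => 3
puts add_typed(1, 2) # => 3
```

Given that Semantic (main) is about a second here versus a minute for bc+obj, I'd guess annotations wouldn't move the needle much on this particular build, but I'd be happy to be corrected.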