I want to combine 2 things that I love more than anything.
Crystal
AI
Not necessarily in that order.
I vibe code with Crystal all the time. It’s been perfect: the type system catches mistakes, and the clean syntax makes it easy for Claude to mimic the patterns already in the files. The rate at which the models accidentally write Ruby has also dropped a lot. I’m very grateful for these advancements!
So I decided to do 2 things that are related only by Crystal and AI.
I forked shards and added tooling for writing agent-facing documentation inside a library. Now you can write docs for your preferred AI coding assistant and ship the skills, subagents, and MCPs for that library along with it. I called the project shards-alpha so the command is easy to remember but distinct from the real shards; that way you can try it out without disrupting your normal shards installation. This is just for testing the idea out as my workflow improves. I think others will enjoy the level of plug-and-play I’ve achieved with this.
# Via Homebrew
brew tap crimson-knight/shards
brew install shards-alpha
# -- OR --
# Install from source
git clone https://github.com/crimson-knight/shards.git
cd shards
crystal build src/shards.cr -o bin/shards-alpha --release
# Copy bin/shards-alpha somewhere on your PATH or symlink it into /usr/local/bin
Then in your project:
# Set up Claude Code with compliance skills + agents
shards-alpha assistant init
# Use it like normal shards — everything is compatible
shards-alpha install
I went really crazy and decided to see if I could create a ralph-loop and get Claude Opus 4.6 to figure out how to implement WASM 3.0 as a compile target with garbage collection.
And it works! There’s support for all kinds of features from the spec, but I’m still working through linking/compiling all of the C libraries we depend on into the WASM target. The loop ran for over six hours, and at one point it gave up because it couldn’t figure out the path it was on. I had it rethink its original architectural decisions from first principles; it realized it had made an assumption too hastily, found the fix that unlocked GC support, and chugged away until done from there.
My plan is to experiment with building full-stack apps with Crystal so that I can create a monorepo and serve both the web API and the entire browser app from Crystal+Amber.
It outlines a limitation in Crystal: while we can declare a global function symbol with fun name (where name isn’t mangled), we can only reference an external symbol with $name in a lib definition; we can’t declare one (AFAIK). Maybe we can through inline assembly?
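To make the asymmetry concrete, here is a minimal sketch. The `add_one` export is illustrative, and `LibGlobals`/`environ` is just one example of an external global we can reference but could not declare from the Crystal side:

```crystal
# Exporting a function: a top-level `fun` defines an unmangled global symbol
# that external code (e.g. a WASM host) can call by name.
fun add_one(x : Int32) : Int32
  x + 1
end

# Importing a global: inside a `lib`, `$name` can only *reference* an
# external global variable (here POSIX libc's `environ`). There is no
# counterpart syntax to *declare* such a global for export from Crystal.
lib LibGlobals
  $environ : Pointer(Pointer(UInt8))
end

puts add_one(41) # => 42
```

So functions have both an export (`fun`) and an import (`fun` inside `lib`) form, while globals only have the import (`$name`) form.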
We’re again missing an option to specify the exception model in TargetOptions (EH is disabled by default for WASM), and that’s probably the only option we’d need to set, via a new LLVMTargetMachineOptionsSetExceptionModel function (and enum) that should be proposed upstream.
I’ll see if I can take your feedback and provide it to another Claude session next week.
My real purpose for trying something so hard and abstract was to see how far I can push the $200/mo subscription, and thus far I’ve used about 50% of my weekly limit across all models.
My next insane adventure is to see if I can maximize parallelism during compilation and add incremental compilation. That’s what is currently chugging away. Maybe we’ll see if it’s successful in another 4-5 hours.
@jwoertink From the description of the linked PR, this boils down to:
$ crystal build -Drelease -O3
Compilation time will improve… but runtime performance will plummet, because Crystal is tailored for --single-module: nothing is inlined anymore save for a few very low-level intrinsics. Every method like def to_i; to_i32; end that is normally inlined to a fast convert instruction will pay a slow call overhead instead.
What would help is for @[AlwaysInline] to be inlined by a semantic pass before codegen, instead of merely annotating the generated LLVM function, and then to annotate lots of small methods and/or have the same pass apply its own heuristics to decide whether to inline low-complexity methods.
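As a concrete sketch of the kind of method in question (the `MyNumber` type is illustrative, mirroring the `def to_i; to_i32; end` pattern from the quote):

```crystal
struct MyNumber
  def initialize(@value : Int32)
  end

  # Tiny delegating method: under --single-module LLVM collapses this into
  # the caller; in a multi-module build each call would pay real call
  # overhead unless inlining happens before codegen.
  @[AlwaysInline]
  def to_i
    to_i32
  end

  def to_i32 : Int32
    @value
  end
end

puts MyNumber.new(7).to_i # => 7
```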
Of course, I’d love to have that.
Note: one difficulty is that inlining may affect structs: pass-by-value may become pass-by-reference and mutations affect the caller’s struct, instead of the copy (oops).
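A tiny illustration of that struct hazard, assuming nothing beyond Crystal’s standard value semantics (names are illustrative):

```crystal
struct Point
  property x = 0
end

# Structs are passed by value: this mutates the callee's *copy* only.
def bump(p : Point)
  p.x += 1
end

pt = Point.new
bump(pt)
puts pt.x # => 0 (the caller's struct is unchanged)

# If an inlining pass naively spliced `bump`'s body into the caller,
# the mutation could land on `pt` itself and the caller would observe 1.
```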
I’ll pass your feedback on to the agent to explore and research further.
Currently, I’ve mostly been focusing on polishing some of the other tooling that I specifically need for my immediate projects. However, I’m still pushing the boundaries of this compiler, because I want to be able to use Crystal and build to all targets. And I also just want the development experience to be better.
That being said, something that has become very clear is that the incremental compiler is really only meant for live, interactive development cycles. It is a slower way to compile the actual release binaries, and I think you’re right in your assessment: changing this would lose the more optimized output, since the current compiler in single-module mode performs better at runtime. So what I was trying to do is keep a highly optimized path that closely or exactly matches the current compiler’s behavior for release builds.
Basically, during the development cycle you want the incremental compiler: slower to start, but it stays running and gives you rapid feedback. Then, when you go to release your build, you want the original compiler, because its approach is much more performance-optimized.
At least that’s my understanding, but so far the benchmarks it’s been measuring tell me that the stock Crystal compiler and the incremental compiler I’ve built here produce the exact same output file size. So it may still end up including the performance optimizations you’re talking about.