I know I’m a bit late to the party but I just discovered this thread and was quite shocked by some of the proposals here.
To me the most important feature of Crystal is duck typing and type inference. The reason why I even started using dynamically typed languages like PHP, JavaScript and Ruby was to avoid the type declaration hell in Java & Co (slightly exaggerated example):
```java
public SomeClassWithAVeryLongName duplicate(SomeClassWithAVeryLongName someClassWithAVeryLongName) {
    SomeClassWithAVeryLongName newSomeClassWithAVeryLongName = new SomeClassWithAVeryLongName(someClassWithAVeryLongName.attr1, someClassWithAVeryLongName.attr2);
    return newSomeClassWithAVeryLongName;
}
```
All of these explicit type declarations are unnecessary in Crystal - and that is its appeal!
```crystal
def duplicate(some_class)
  SomeClass.new(some_class.attr1, some_class.attr2)
end
```
And if I really want to make sure nobody puts something unexpected into my method, I can still add the type to the parameter. But that’s up to me, and not being able to do that anymore would be a total showstopper for me. It is one of the key advantages of Ruby and imho anyone coming to Crystal from there would just stop using the language if this wasn’t possible anymore.
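To make the point concrete, here is a small sketch of that opt-in restriction. `SomeClass` is the placeholder name from the example above, given concrete attribute types purely for illustration:

```crystal
class SomeClass
  getter attr1 : Int32
  getter attr2 : String

  def initialize(@attr1 : Int32, @attr2 : String)
  end
end

# Duck-typed: anything responding to #attr1 and #attr2 works.
def duplicate(some_class)
  SomeClass.new(some_class.attr1, some_class.attr2)
end

# Opt-in restriction: only a SomeClass is accepted; anything else
# is rejected at compile time.
def duplicate_strict(some_class : SomeClass) : SomeClass
  SomeClass.new(some_class.attr1, some_class.attr2)
end
```

Both methods compile to the same code for a `SomeClass` argument; the restriction only narrows what callers may pass.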
I also never understood the hype for LSPs. I don’t want some IDE spamming me with error messages about code that isn’t even finished yet. And saving keystrokes? If keystrokes are the bottleneck of your development progress, something is clearly wrong. Sorry about the rant, I’m getting a bit emotional here. I understand that I’m an outlier in this regard when looking at the industry as a whole, but the fact that there was no proper LSP support until now tells me I’m not all alone. What I’m asking is to keep my case in mind before making breaking changes just for compilation speed, because someone needs to recompile every single change they make in their editor.
I find the ideas in the third part of @asterite’s blog series quite promising - has there been any progress on reducing the size of that dependency graph in the meantime? I would love to help, but I have no experience in writing compilers whatsoever.
While I do find the idea of flawless autocompletion everywhere appealing, I’m willing to give it up for a language that seems to make me able to do more with less (but still perfectly readable) code.
I think the lack of autocompletion will help keep Crystal from getting overly verbose; without it you’re less likely to define a_method_with_a_way_too_long_name. Which is not a bad thing in my book.
I was watching a colleague showing off GitHub Copilot and how it was able to nail 5 full lines of React templating, and I was just thinking “but why do you need to write so many lines of code that’s so close to something you’ve already written that a statistical model can predict it?”
Copilot is good at obvious stuff like writing getters and setters in PHP. If I had a dime for each setter/getter I’ve written…
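Tangentially, Crystal sidesteps that particular chore entirely: the stdlib’s `getter`/`property` macros generate accessors at compile time, so there is no boilerplate left for a model to predict. A small example (the `User` class is made up for illustration):

```crystal
class User
  # `property` expands at compile time into both a getter and a
  # setter for @name; `getter` generates only a reader.
  property name : String
  getter id : Int32

  def initialize(@id : Int32, @name : String)
  end
end

user = User.new(1, "Alice")
user.name = "Bob" # setter generated by `property`
puts user.name    # => Bob
```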
Curious, have you tried something with really good tooling? Like C#. The real benefit for me at least is discoverability. Without any autocomplete, you’re left constantly jumping between the docs and your code, trying to find which functions are available and how to use them. It’s constant context switching and slows me down massively.
With good autocomplete, both the documentation and function signatures and what not are available to you as you type, automatically filtered by type, characters you’ve already typed etc.
If you don’t like errors in your code you can always change your settings to not show them. I don’t really see a downside to good tooling.
Same. Back in the day when I wrote C# code I kinda liked having IntelliSense… but when I tried writing C# without any sort of LSP or IntelliSense, I found myself reasoning about my program more, and also just enjoying programming even more. Later I found myself fighting the LSP more than it helping me when I tried it with Go, so I ditched LSPs altogether and haven’t looked back.
The closest I come to anything like an LSP these days is just having my editor auto-complete words I’ve already typed in open buffers. That’s all I really want for any language.
As for documentation, I just alt-tab out, or I leave it up on a second screen (which currently is a PineBook Pro connected via Barrier). Or, if it has a REPL and something like (documentation foo 'function) in Common Lisp, I’m willing to do that, but I’ve only really done that within SLIME so far.
> Curious, have you tried something with really good tooling? Like C#. The real benefit for me at least is discoverability. Without any autocomplete, you’re left constantly jumping between the docs and your code, trying to find which functions are available and how to use them. It’s constant context switching and slows me down massively.
I have worked on a C# project in Visual Studio. My experience with the tooling was that I still found myself googling for the docs, because the short description of the function signature that autocomplete showed me didn’t describe the behavior well enough (I’m not sure whether the full docs would have been available in that little popup, but I wouldn’t have liked that anyway). That may have been because I had very limited experience with the language beforehand, and I can imagine a state where I’d only need a quick reminder of what the function I’m looking for was called. But I’m used to googling that as well, and I never felt any need to change that. It probably even helps you keep in mind how the stdlib tools you use actually work.
I don’t really understand how autocomplete prevents context switches. Either I know what I need and can type it right away (and again, I don’t think saving keystrokes is the right goal; I can type way faster than I can come up with good design ideas, and actually typing your thoughts out helps you reflect on them), or I will have to search for solutions anyway, and at that point the context switch has already happened, regardless of where I start looking for answers. In my experience, the autocomplete list of possible methods still leads to me looking up what they really do in detail, and I like to use my full screen for that.
> I don’t really see a downside to good tooling.
I don’t disagree with this statement, but I think our understandings of “good tooling” differ. From my view there can definitely be too much information packed into too little space, and most LSP functionality already crosses that line for me. Sometimes I accidentally trigger vim’s built-in autocomplete feature, and I always feel like it’s getting in my way. That thing has never actually helped me.
All of that being said, it’s not my goal to take the LSP away from anyone. Reading some of the answers in this thread just suddenly made me feel like I would soon be forced to use it, too, as the whole language and the tools around it would have been designed for exactly that. Of course I am open to faster compile times but not at any price. My personal vision for Crystal is about more type inference, not less.
Agreed. The Crystal stdlib has so many useful methods that you usually have to read the API page to use them, or use the docr tool, but that also breaks the flow.
I don’t need a perfect LSP, but at the least I need it to provide some suggestions about the methods that a variable can call.
For this, I think a database of all the methods, tied to namespaces (stdlib, shards, the current project, the main file for the current working directory), merged together to provide suggestions, would already be good enough.
I will try to read the LSP source code to see whether I can add a sqlite3 database on top. If we can keep all the possible methods of the current scope (even those that are unreachable and not parsed by the current compiler), then we can also solve the problem of needing to set a main file to make the current LSP work.
While I agree that LSP and tooling are important factors for this discussion, I feel the need to highlight that they do not have such a big impact on incremental compilation. Current tools like Ameba work well with Crystal today, so if incremental compilation never came to Crystal it wouldn’t really be an issue; the community would continue to evolve around what we have.
Decentralization has always been a big thing in Crystal. Outside of the compiler tools there isn’t much regard for things like LSP support and editor tooling—that is left to the community—so I don’t see something like this ever happening. Also keep in mind that current and future tooling would be limited by things like type inference which can’t really happen outside of using the compiler unless you want to sacrifice performance. Not having type inference, or at best a very strict subset of it, would greatly benefit these things.
This circles back to earlier posts in the forum (and I believe it’s addressed in Asterite’s blog series too). Type inference is a major stumbling block for faster compile times, and while there are other ways to improve them besides incremental compilation, there is only so far we can get before things start degrading rather than improving.
A lot of LLMs struggle to differentiate Crystal from Ruby, and even if they knew you were talking about Crystal, their general knowledge of Crystal seems sparse, so the responses may not be accurate or relevant to the current context.
What if the compiler identified compilation units based on whether a file or set of files has enough information to be an independent unit? This would let people opt in to incremental compilation by specifying the types. You trade duck typing for strictness and a defined compilation unit.
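A hedged sketch of what such an opt-in unit might look like; nothing here is an existing compiler feature, and `Point`/`translate` are made-up names:

```crystal
# Hypothetical self-contained unit: every parameter and return type
# is declared, so a compiler would not need type information from
# any other file to compile (and, in principle, cache) this one.
struct Point
  getter x : Int32
  getter y : Int32

  def initialize(@x : Int32, @y : Int32)
  end
end

def translate(point : Point, dx : Int32, dy : Int32) : Point
  Point.new(point.x + dx, point.y + dy)
end
```

The trade-off is exactly the one described above: these fully annotated methods give up duck typing, but their signatures are known without global type inference.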