I think one reason Ruby and Crystal never became common in data science and the sciences has to do with their grammar. This is probably a minority opinion within the community. The lack of libraries has, of course, been a major factor, but I don't believe it explains everything.
In science, we are always facing the unknown. The unknown is chaotic, and we cannot know its structure in advance. Through instruments we catch glimpses of nature, with limited precision, and we call these fragments “data.” Then we apply various “operations” to the data, trying to find meaning or patterns. We can’t know ahead of time which operation will work. In fact, trying out new operations is often the key to discovering meaning.
But as a tool for dealing with such chaos, Ruby’s and Crystal’s object orientation doesn’t feel like the best fit. In the world of object orientation, we spend a lot of effort making data and operations line up neatly. We refactor, polish libraries, and build orderly structures. The result is a world where each object seems to “know” what operations it should undergo.
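To make that contrast concrete, here is a minimal Ruby sketch. The `Measurement` class is hypothetical, invented purely for illustration: in idiomatic Ruby, operations live on the object as methods, so trying an operation the class never anticipated typically means reopening the class rather than simply applying a free function to the data.

```ruby
# A hypothetical class: the object "knows" its operations.
class Measurement
  attr_reader :values

  def initialize(values)
    @values = values
  end

  def mean
    values.sum / values.size.to_f
  end
end

m = Measurement.new([1.0, 2.0, 3.0])
m.mean

# An exploratory operation the class never anticipated:
# in Ruby we reopen the class (monkey-patching) to add it,
# rather than applying an arbitrary function to raw data.
class Measurement
  def zscores
    mu = mean
    sd = Math.sqrt(values.sum { |v| (v - mu)**2 } / values.size)
    values.map { |v| (v - mu) / sd }
  end
end

m.zscores
```

This is exactly the tidy, object-centered style the text describes: elegant once the operations are known, but requiring structural changes each time a new, unforeseen operation is tried.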
That kind of ordered world is a powerful advantage when building things, sometimes even surpassing Python. But when it comes to facing the unknown, I suspect that relying on highly structured, library-bound operations was never really suited to the task.
That said, AI systems are now maturing and spreading across industry. On top of this more settled framework, Ruby and Crystal may finally be finding a new stage.
(Translation from Japanese by ChatGPT)