I want to integrate AI with Crystal

We are making an application in which we have to fetch data from a database, and then the AI should provide actionable insights about that data. How would I do that using Crystal?

In my opinion, it would be best if someone in academia, rather than industry, implemented a robust subset of NumPy in Crystal. This should be done based on long-term academic interests rather than short-term profit.

One precedent for this is the NArray library in Ruby. Most of the AI-related libraries in Ruby depend on NArray. The original developer of NArray was a physicist, but NArray is no longer being actively developed.

One of the main reasons for this stagnation is the lack of stable financial support for the creation and maintenance of such a foundational library. NumFOCUS is an exception, but it is backed by the Python community and the power of the United States.

I think that a matrix computation library for a programming language that is not popular worldwide is almost certain to be a dead end from the start, because it is difficult to motivate developers and secure resources. Researchers need to achieve the best results. Companies need to focus on securing profits. There is no reason not to use the already established Python ecosystem here.

This is why, as I mentioned at the beginning, it is preferable for someone in academia rather than industry to build a robust matrix calculation library. Such a person would be motivated by academic interest rather than short-term profit.

And even if the original developer eventually leaves the project, it is preferable that the library be left in a state where someone else can continue its development.

(translated with DeepL)


I think we’re a ways off from being able to run inference inside Crystal code unless someone’s published something I haven’t found yet. We had a discussion about the topic of AI in this thread recently, too, which may be helpful for you.

There are a few shards you can use to send AI prompts to hosted LLMs.

You can also use Ollama on hardware you control, or OpenRouter to get access to a wider selection of models. Ollama and OpenRouter both implement the OpenAI API (for chat completions, anyway), so you can use the openai shard to talk to their APIs by setting the OPENAI_API_BASE env var. Just keep in mind that the openai shard doesn't use an HTTP connection pool (the Anthropic and Google ones do), so if you're using it from multiple fibers, such as when calling out from your own backend service, you need to maintain your own connection pool.
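
Here's a minimal sketch of one way to do that with only the standard library: a buffered Channel of HTTP::Client instances acts as the pool. The host, pool size, path, and env var are placeholders, not anything a particular shard requires.

```crystal
require "http/client"

# Sketch of a fiber-safe HTTP::Client pool backed by a buffered Channel.
# Each fiber checks a client out, uses it, and returns it, so no two
# fibers ever share a connection at the same time.
class ClientPool
  def initialize(host : String, size : Int32 = 4)
    @pool = Channel(HTTP::Client).new(size)
    size.times { @pool.send HTTP::Client.new(host, tls: true) }
  end

  def with_client
    client = @pool.receive
    begin
      yield client
    ensure
      @pool.send client
    end
  end
end

# Hypothetical usage:
pool = ClientPool.new("api.openai.com")
pool.with_client do |client|
  headers = HTTP::Headers{"Authorization" => "Bearer #{ENV["OPENAI_API_KEY"]}"}
  puts client.get("/v1/models", headers: headers).status_code
end
```

A buffered Channel gives you blocking checkout for free: when all clients are in use, receive suspends the calling fiber until another fiber returns one.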

If you want to use OpenAI, Anthropic, Google, or OpenRouter, you'll also need an API key from them. Ollama doesn't validate the key, so any placeholder value works.
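
To illustrate, here's a rough sketch of a chat completion request against Ollama's OpenAI-compatible endpoint using nothing but the standard library. The model name and prompt are placeholders ("llama3.2" stands in for whatever model you've pulled), and a hosted provider would need a real key in the Authorization header.

```crystal
require "http/client"
require "json"

# Rough sketch: a chat completion request against Ollama's
# OpenAI-compatible endpoint, using only the standard library.
url = "http://localhost:11434/v1/chat/completions"

body = {
  "model"    => "llama3.2",
  "messages" => [
    {"role" => "system", "content" => "You analyze application data and suggest actionable insights."},
    {"role" => "user", "content" => "Weekly signups were 120, 95, 140, 88. What stands out?"},
  ],
}.to_json

headers = HTTP::Headers{
  "Content-Type"  => "application/json",
  # Ollama ignores the key's value; hosted providers need a real one.
  "Authorization" => "Bearer ollama",
}

response = HTTP::Client.post(url, headers: headers, body: body)
puts JSON.parse(response.body).dig("choices", 0, "message", "content").as_s
```

For your use case, you'd build the user message from the rows you fetched out of your database and let the model summarize them.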

Sure, I will check this. Thanks for helping.

Well, yeah, you are right, there should be one for Crystal too. This is my first time using Crystal, so it is becoming more difficult for me to understand things.

It is giving me this error when using the Gemini one:

Showing last frame. Use --error-trace for full trace.

In lib/google/src/gemini.cr:7:1

 7 | require "./error"
     ^
Error: can't find file './error' relative to '/workspace/productivity-tool/productivity_tracker/lib/google/src/gemini.cr'

That was a mistake on my part. I just added the file to the repo. shards update google should fix it for you.

Thank you so much for helping, it worked for me.
