Cross Compiling automatically to OSX?

I don’t think you are wasting your time. Having a clear distribution process makes sense, especially for apps like CLIs.

In the case of microservices, people are satisfied with creating a Docker image and running it locally.
As @Blacksmoke16 mentioned, snap helps with Linux distribution.

I don’t foresee a built-in way to build binaries for every platform directly from the compiler itself, but some external tooling could help. It’s hard to make something that fits everybody’s needs.

In the past the compiler was built for Linux and Darwin using omnibus; today that is still used, but only for Darwin, in distribution-scripts, since the Linux build process now uses a more native approach. It does still require a real OSX machine, though.

  • Using brew with a tap will make your users build the project.
  • Using brew with core will allow them to download the binaries, but you will probably need to keep dependencies up to date.
  • If you build some other way, you might need to target the minimum OSX version you want to support. For example, the compiler uses MACOSX_DEPLOYMENT_TARGET=10.11.
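The points above can be sketched as a cross-compile-then-link flow. This is only an illustration: the app name and paths are hypothetical, and the exact linker invocation (including the library list) is printed by the compiler itself, so treat the one below as a placeholder rather than something to copy verbatim.

```shell
# On Linux: produce an object file targeting macOS.
# This does NOT produce a runnable binary; the compiler emits myapp.o
# and prints the `cc` linker command to run on an actual Mac.
crystal build src/myapp.cr --cross-compile --target "x86_64-apple-darwin" -o myapp

# On the Mac: pin the minimum supported macOS version before linking,
# then run the linker command printed by the step above (the library
# flags shown here are illustrative; use what the compiler printed).
export MACOSX_DEPLOYMENT_TARGET=10.11
cc myapp.o -o myapp -rdynamic -lpcre -lgc -lpthread -levent
```

This is why a real OSX box is still needed: the object file can be produced anywhere, but the final link has to happen against the target platform’s libraries.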

I actually want a chapter in the crystal-book about how to distribute applications, even if that story is: you need to build locally. Since some Crystal users come from non-compiled languages, there are definitely some missing pieces worth covering, IMO, at least pointing them in the direction of how linkers and libraries work.


FWIW it’s possible to set up bottles for your tap; it’s just not automated like in core.

This article helped me a lot with oq (mainly since I have access to a Mac to build/upload it on).

Wow, the distribution scripts seem super complex… it makes me realize that what I am trying to do will probably generate binaries that won’t work for many other users.

I would really like to avoid dealing with brew and snap. Brew and OSX are not accessible to me, and snap has always given me issues when trying to install packages through it. As a user, I prefer curling the binary from a GitHub repo rather than using snap.

That is discouraging…

Thanks for the detailed explanations though. Although I realize nobody on the extended Crystal team prioritizes a built-in distribution toolchain, I sure hope someone in the community might pick up the gauntlet and lead the creation of an external, possibly Docker-based tool. I am happy to help where I can if such an effort starts to take shape.

If snap is troublesome (which I find too), try AppImage. There are others too, but I don’t know much about them.

Static builds are, in my opinion, a crutch: basically trying to solve a complicated problem with a large mallet.
There is really no way to run one binary on different flavors of Unix, and I don’t see it as Crystal’s responsibility to solve a problem that is already solved in the shape of AppImage, Snap, etc.
And by the way, Docker is just the same as those; it just feels different to the user, but in the end it is also just packaging a binary with all the libraries it was built with.

Do give AppImage a try; with a bit of effort I’m sure you can manage to automatically build a whatever_app.appimage that runs on many if not all Linux flavours.


Interesting approach. Although not the holy grail I was looking for, it definitely looks like something I want to try. Thanks for sharing.

Thanks @DannyB for sharing your setup! It’s surely going to be a help for other shard authors.

If you’ve got 1. (build for Alpine) you actually won’t need 3. because that will already do it.
I suppose it’s probably not really useful to distribute both statically and dynamically linked binaries.
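For readers wondering what “build for Alpine” looks like in practice, here is a minimal sketch using the official Alpine-based Docker image. The image tag, source path, and binary name are assumptions; the relevant piece is the `--static` flag, which only produces fully static binaries on a musl-based system like Alpine.

```shell
# Build a statically linked release binary inside the Alpine-based
# Crystal image, mounting the current project directory into it.
docker run --rm -v "$PWD":/workspace -w /workspace \
  crystallang/crystal:latest-alpine \
  crystal build src/myapp.cr --release --static -o myapp

# The resulting binary should have no dynamic dependencies;
# `ldd myapp` on Linux should report "not a dynamic executable".
```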

I don’t follow. AppImage and Snap are explicitly Linux tools. How would they possibly help with running on different flavours of Unix?
The way I see it, statically compiled binaries solve the same problem as AppImage and Snap. The mallet analogy might be fitting, but static builds work without any runtime dependency (a category that includes AppImage and Snap themselves). So I’d rather prefer static binaries. As @DannyB noted above: it’s as easy as curling a binary and running it.

I was actually hoping for that, but I am confused… earlier you said:

I know that these two sentences can still live together, but so far my attempts at running binaries compiled on Alpine on my Ubuntu have had mixed results:

  • A simple “hello world” binary built on Alpine works on my Ubuntu (available for download on the same GitHub release page).

  • A slightly more useful Crystal binary built on Alpine runs on my Ubuntu, but generates this error:

    Invalid memory access (signal 11) at address 0x0
    Segmentation fault (core dumped).

When I saw this error, I figured “the guys on the Crystal forum were right, I cannot build a binary on Alpine and run it on Ubuntu”.

So… honorable Crystal council - yea or nay? :face_with_monocle:

You can cross-compile targeting OS X, but AFAIK you still have to do the final linking on an OS X box, right? That seemed to be the case when I tried it a while back… :)


Sorry for the “Unix”, I should have said Linux.
And as far as I know, AppImages also don’t need any runtime dependency (that I know of).
My point is that it is just as easy to download an AppImage, and it avoids the known problems with static compilation.

Should be yea. I don’t know what caused this error, but it should usually work.
Might be some incompatible library. Which ones are linked?
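One quick way to answer that on Linux is `ldd`, which lists the shared libraries a binary is dynamically linked against. The binary name below is a stand-in (`/bin/ls` is used only as a known-to-exist example); run it against the crashing binary instead.

```shell
# List the shared libraries a binary is dynamically linked against.
# Replace /bin/ls with the binary you built; a fully static binary
# prints "not a dynamic executable" instead of a library list.
ldd /bin/ls
```

If the Alpine-built binary turns out to be dynamically linked against musl libc, that would explain a crash on a glibc-based Ubuntu.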

It does. The operating system is not capable of executing the application from an AppImage directly. That dependency is embedded directly into the AppImage, but it’s still a dependency.

The answer got longer and off topic, so moved here:
Invalid memory access on Alpine

Yeah let’s make a new thread for the alpine issue, since it’s not OS X :)