Docker images for ARM? RPI?

Hi guys,

Just wanted to ask if anyone has a working ARM Docker image for the latest Crystal? I made one a while back, but I haven't been able to update it to the latest Crystal because it uses the deb installer from 2016. I'd love for the Crystal team to release an official Docker build for ARM. I am currently working on a Dockerfile for the new version using this article.

My old kemal image for ARM only.


I was wondering what people do with Docker images of the compiler.
If you need the compiler, you want to compile something, so why use a Docker container instead of installing the compiler directly?

Because I want to Dockerize it so I can run it in swarm mode, and to make it easier to reinstall the RPis I have running Crystal. Look up what Docker does and you'll understand why we need this. I'm not compiling Crystal from scratch every time I need a newer version.

I know what Docker does; I've used it before. But I was wondering why you would need the compiler itself on many instances, rather than having it on one system where you compile the binary you then deploy?

edit: I have Crystal on one Pi and basically only change to a new compiler version when I do a new major release of our software, so everything is thoroughly tested, and that does not happen very often.

OK, then you should understand why this would be useful. People build Docker images, so they will want to build Docker images based on Crystal. The literal draw of Docker is "containerized development tools". Naturally people will want the Crystal compiler in a Docker image. Some people don't want to rebuild Crystal for their platform from scratch when we can Dockerize the whole process and make it way simpler for ARM users to use Crystal.

I have this working locally on OSX.
First, build your Docker image: https://github.com/spider-gazelle/spider-gazelle/blob/master/Dockerfile <-- this is our base build for microservices.

You want the image to be as small as possible, as every arch you add multiplies the size of the image.
So x64 == 20MB, x64 + ARM7 == 40MB, etc.

You need to enable experimental Docker features - you can do this in Preferences.
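For the CLI side: on older Docker releases, buildx itself sat behind an experimental flag. A minimal sketch of the shell-side toggle (`DOCKER_CLI_EXPERIMENTAL` is Docker's documented switch; newer releases ship buildx enabled by default, so this is often unnecessary):

```shell
# Enable experimental CLI features for the current shell.
# Recent Docker releases include buildx out of the box, making this a no-op.
export DOCKER_CLI_EXPERIMENTAL=enabled
echo "$DOCKER_CLI_EXPERIMENTAL"  # prints "enabled"
```

The same setting can live in `~/.docker/config.json` if you prefer it persistent.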

Then you can build multi-arch images super easy:

docker buildx build --platform linux/amd64,linux/arm64 -t org/image-name .

I know this is an old topic, and I don't think my original solution actually cross-compiled.
However, spider-gazelle will now build multi-architecture images with the updated Dockerfile and the same buildx command.

You can enable buildx from the command line with:
docker buildx create --use

Build and push your multi-architecture image:

docker buildx build --platform linux/amd64,linux/arm64 -t org/image-name:tag . --push

As local Docker doesn't support multi-arch images, you need to push them and then download them again to use them. However, if you want to test your image before pushing, you can build it locally with a single platform specified:

docker buildx build --platform linux/amd64 --output type=docker -t org/image-name:1.0.0.dev-amd64 .

Then you can extract the static executable (if desired):

docker cp $(docker create --name tc org/image-name:1.0.0.dev-amd64):./app ./app && docker rm tc

Hi, this doesn't work for me on Linux.

 ╰─ $  docker buildx create --use
vibrant_sanderson
╰─ $ docker buildx build --platform linux/amd64,linux/arm64 -t org/image-name:tag .
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 87.1s (16/39)                                                                                                                                                                    
 => [internal] booting buildkit                                                                                                                                                         18.4s
 => => pulling image moby/buildkit:buildx-stable-1                                                                                                                                      17.5s
 => => creating container buildx_buildkit_vibrant_sanderson0                                                                                                                             0.9s
 => [internal] load .dockerignore                                                                                                                                                        0.1s
 => => transferring context: 2B                                                                                                                                                          0.0s
 => [internal] load build definition from Dockerfile                                                                                                                                     0.1s
 => => transferring dockerfile: 2.75kB                                                                                                                                                   0.0s
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:3.16                                                                                                               8.3s
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:3.16                                                                                                               8.1s
 => [auth] library/alpine:pull token for registry-1.docker.io                                                                                                                            0.0s
 => [internal] load build context                                                                                                                                                        0.1s
 => => transferring context: 530B                                                                                                                                                        0.0s
 => [linux/arm64 build  1/10] FROM docker.io/library/alpine:3.16@sha256:bc41182d7ef5ffc53a40b044e725193bc10142a1243f395ee852a8d9730fc2ad                                                29.8s
 => => resolve docker.io/library/alpine:3.16@sha256:bc41182d7ef5ffc53a40b044e725193bc10142a1243f395ee852a8d9730fc2ad                                                                     0.0s
 => => sha256:9b18e9b68314027565b90ff6189d65942c0f7986da80df008b8431276885218e 2.71MB / 2.71MB                                                                                          29.6s
 => => extracting sha256:9b18e9b68314027565b90ff6189d65942c0f7986da80df008b8431276885218e                                                                                                0.1s
 => [linux/amd64 build  1/10] FROM docker.io/library/alpine:3.16@sha256:bc41182d7ef5ffc53a40b044e725193bc10142a1243f395ee852a8d9730fc2ad                                                24.7s
 => => resolve docker.io/library/alpine:3.16@sha256:bc41182d7ef5ffc53a40b044e725193bc10142a1243f395ee852a8d9730fc2ad                                                                     0.0s
 => => sha256:213ec9aee27d8be045c6a92b7eac22c9a64b44558193775a1a7f626352392b49 2.81MB / 2.81MB                                                                                          24.5s
 => => extracting sha256:213ec9aee27d8be045c6a92b7eac22c9a64b44558193775a1a7f626352392b49                                                                                                0.1s
 => [linux/amd64 build  2/10] WORKDIR /app                                                                                                                                               0.1s
 => [linux/amd64 build  3/10] RUN adduser     --disabled-password     --gecos ""     --home "/nonexistent"     --shell "/sbin/nologin"     --no-create-home     --uid "10001"     "appu  0.1s
 => [linux/amd64 build  4/10] RUN apk add --no-cache         ca-certificates     &&     update-ca-certificates                                                                          35.2s
 => [linux/arm64 build  2/10] WORKDIR /app                                                                                                                                               0.0s
 => [linux/arm64 build  3/10] RUN adduser     --disabled-password     --gecos ""     --home "/nonexistent"     --shell "/sbin/nologin"     --no-create-home     --uid "10001"     "appu  0.1s
 => ERROR [linux/arm64 build  4/10] RUN apk add --no-cache         ca-certificates     &&     update-ca-certificates                                                                    30.2s
 => CANCELED [linux/amd64 build  5/10] RUN apk add   --update   --no-cache   --repository=http://dl-cdn.alpinelinux.org/alpine/edge/main   --repository=http://dl-cdn.alpinelinux.org/a  0.1s
------                                                                                                                                                                                        
 > [linux/arm64 build  4/10] RUN apk add --no-cache         ca-certificates     &&     update-ca-certificates:                                                                                
#0 0.126 fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/main/aarch64/APKINDEX.tar.gz                                                                                                       
#0 8.852 fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/community/aarch64/APKINDEX.tar.gz                                                                                                  
#0 27.74 (1/1) Installing ca-certificates (20220614-r0)                                                                                                                                       
#0 29.99 Executing busybox-1.35.0-r17.trigger                                                                                                                                                 
#0 30.03 Executing ca-certificates-20220614-r0.trigger                                                                                                                                        
#0 30.09 OK: 6 MiB in 15 packages                                                                                                                                                             
#0 30.22 Error while loading /usr/bin/run-parts: No such file or directory                                                                                                                    
------                                                                                                                                                                                        
Dockerfile:20
--------------------
  19 |     # Add trusted CAs for communicating with external services
  20 | >>> RUN apk add --no-cache \
  21 | >>>         ca-certificates \
  22 | >>>     && \
  23 | >>>     update-ca-certificates
  24 |     
--------------------
error: failed to solve: process "/dev/.buildkit_qemu_emulator /bin/sh -c apk add --no-cache         ca-certificates     &&     update-ca-certificates" did not complete successfully: exit code: 1

How do I fix it?

One more question: is it possible to build and link an ARM binary using docker buildx directly on a local laptop?

For now, I have to build it on my x86_64 laptop, then start a Raspberry Pi QEMU emulator and link it there. I really hope we can build/link in one step for ARM.

Thank you.

Failed to update certificates? No idea; probably not important, as the image is new anyway. See if Stack Overflow has the answer, or comment out those lines.

You don't need to use the emulator; buildx compiles ARM images on my x86_64 laptop without issue.

#0 30.22 Error while loading /usr/bin/run-parts: No such file or directory                                                                                                                    

This error?
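For what it's worth, an error raised from under `/dev/.buildkit_qemu_emulator` usually points at the host's QEMU user-mode emulation rather than at the certificates themselves. A commonly suggested host-side fix (a sketch, assuming the `tonistiigi/binfmt` helper image that the Docker multi-platform docs reference; it needs a running daemon and `--privileged`) is to re-register the binfmt handlers:

```shell
# Re-register QEMU binfmt handlers so buildx can emulate foreign architectures.
docker run --privileged --rm tonistiigi/binfmt --install all

# On Linux hosts, the registered emulators show up under binfmt_misc:
ls /proc/sys/fs/binfmt_misc/
```

This is host configuration, so it only needs to be run once per boot (or once, if the handlers are installed with the fix-binary flag, as the helper image does).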

Hi, @stakach , one more question, please.

I believe the build succeeded on my x86_64 Linux laptop for the linux/arm64 platform, using the following command.

 ╰─ $ docker buildx build --platform linux/arm64 --target build -t arm64 .
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 2.0s (16/16) FINISHED                                                                                                                                                            
 => [internal] load build definition from Dockerfile                                                                                                                                     0.0s
 => => transferring dockerfile: 2.74kB                                                                                                                                                   0.0s
 => [internal] load .dockerignore                                                                                                                                                        0.0s
 => => transferring context: 2B                                                                                                                                                          0.0s
 => [internal] load metadata for docker.io/library/alpine:3.16                                                                                                                           2.0s
 => [build  1/12] FROM docker.io/library/alpine:3.16@sha256:bc41182d7ef5ffc53a40b044e725193bc10142a1243f395ee852a8d9730fc2ad                                                             0.0s
 => => resolve docker.io/library/alpine:3.16@sha256:bc41182d7ef5ffc53a40b044e725193bc10142a1243f395ee852a8d9730fc2ad                                                                     0.0s
 => [internal] load build context                                                                                                                                                        0.0s
 => => transferring context: 148B                                                                                                                                                        0.0s
 => CACHED [build  2/12] WORKDIR /app                                                                                                                                                    0.0s
 => CACHED [build  3/12] RUN adduser     --disabled-password     --gecos ""     --home "/nonexistent"     --shell "/sbin/nologin"     --no-create-home     --uid "10001"     "appuser"   0.0s
 => CACHED [build  4/12] RUN apk add   --update   --no-cache     ca-certificates     yaml-dev     yaml-static     libxml2-dev     openssl-dev     openssl-libs-static     zlib-dev       0.0s
 => CACHED [build  5/12] RUN update-ca-certificates                                                                                                                                      0.0s
 => CACHED [build  6/12] RUN apk add   --update   --no-cache   --repository=http://dl-cdn.alpinelinux.org/alpine/edge/main   --repository=http://dl-cdn.alpinelinux.org/alpine/edge/com  0.0s
 => CACHED [build  7/12] COPY shard.yml shard.yml                                                                                                                                        0.0s
 => CACHED [build  8/12] COPY shard.lock shard.lock                                                                                                                                      0.0s
 => CACHED [build  9/12] RUN shards install --production --ignore-crystal-version                                                                                                        0.0s
 => CACHED [build 10/12] COPY ./src /app/src                                                                                                                                             0.0s
 => CACHED [build 11/12] RUN shards build --production --release --error-trace                                                                                                           0.0s
 => CACHED [build 12/12] RUN for binary in /app/bin/*; do         ldd "$binary" |         tr -s '[:blank:]' '\n' |         grep '^/' |         xargs -I % sh -c 'mkdir -p $(dirname dep  0.0s

What I want is to copy the built binary from the container onto my laptop, but when the build is done, I can't find the generated image I just built (tag name: arm64), so I can't copy anything out. Do you know how to achieve this?

Thank you.

I was playing around with this, as I am looking to achieve multi-stage builds with buildx (separating the image generation from the push).

You can do:

docker buildx build --platform linux/amd64 --output type=local,dest=folder .

and the generated files end up on your local machine in the folder specified.
That said, I'm having issues with libunwind and static builds: exceptions don't seem to be working for me; still trying to solve that.


The following command works!

mkdir 121
docker buildx build --platform linux/arm64 --target build -t arm64 --output type=local,dest=121  .
 ╰─ $ 121/app/bin/test 
hello
 ╰─ $ file 121/app/bin/test
121/app/bin/docker: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, with debug_info, not stripped

Thanks a lot for letting me know I can generate a linked ARM64 binary without even needing to start a QEMU emulator for it.


I think I solved my issue with exceptions too. I needed to add these libs:

libunwind-dev
libunwind-static
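In Dockerfile terms, that roughly means extending the apk install step; a sketch based on the package names mentioned in this thread (not the exact spider-gazelle Dockerfile):

```dockerfile
FROM alpine:3.16

# Static-link dependencies for a Crystal build; libunwind is what the
# standard library uses to unwind the stack when raising exceptions,
# so static binaries need the static variant available at link time.
RUN apk add --update --no-cache \
    ca-certificates \
    yaml-static \
    openssl-libs-static \
    zlib-dev \
    libunwind-dev \
    libunwind-static
```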

Previously I was seeing this with exceptions:

Failed to raise an exception: END_OF_STACK
[0x458236] ???
[0x42ce58] ???
[0x4395ae] ???
[0x433ec9] ???
[0xc75545] ???

Tried to raise:: Unknown DW_FORM_data16 (Exception)
  from usr/lib/crystal/core/crystal/dwarf/info.cr:83:29 in '??'
  from usr/lib/crystal/core/crystal/dwarf/info.cr:67:23 in '??'
  from usr/lib/crystal/core/exception/call_stack/elf.cr:10:7 in '??'
  from src/ldso/dl_iterate_phdr.c:45:1 in '??'

Hi, my own Dockerfile just uses the official Alpine-based Crystal image.

FROM crystallang/crystal:1.5.0-alpine-build AS base

Can I ask why you build your Crystal image from the alpine image? What is the difference between yours and the official image?


EDIT: I checked the official image. It seems like most of the packages in your Dockerfile also exist in the official image, although perhaps only partially for some:

The following packages don't exist there:

yaml-dev
libunwind-dev
libunwind-static

I guess yaml-dev is not necessary; yaml-static alone is enough.

So, what is libunwind-dev used for? Do some shards depend on it?

It's used for exceptions in the standard library.
I don't use the official alpine image as it does not support ARM64.

I had some issues with exceptions, so I added that lib in case it had an effect.

Oops, I just found out that what I expected to be an arm64 build is actually an x86_64 binary.

$: docker buildx build --platform linux/arm64 -t crystal_build_static_binary_linux/arm64 --output type=local,dest=linux/arm64 -f /tmp/build_static_binary_use_crystal.dockerfile .

Ignore the irrelevant parameters above; I really do build with --platform linux/arm64, but get an AMD64 binary.

 ╰─ $ file linux/arm64/app/bin/test 
linux/arm64/app/bin/docker: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), static-pie linked, with debug_info, not stripped

Can I ask why my Dockerfile doesn't work for arm64?

The following is my Dockerfile.

# -*- mode: dockerfile; -*-

FROM crystallang/crystal:1.5.0-alpine-build AS base

RUN sed -i "s/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g" /etc/apk/repositories

RUN addgroup -g 1000 docker && \
    adduser -u 1000 -G docker -h /home/docker -s /bin/sh -D docker

WORKDIR /app

RUN --mount=type=cache,target=/var/cache/apk \
    set -eux; \
    apk upgrade

RUN wget https://github.com/boxboat/fixuid/releases/download/v0.5.1/fixuid-0.5.1-linux-amd64.tar.gz -O - | tar zxvf - -C /usr/local/bin

RUN USER=docker && \
    GROUP=docker && \
    chown root:root /usr/local/bin/fixuid && \
    chmod 4755 /usr/local/bin/fixuid && \
    mkdir -p /etc/fixuid && \
    printf "user: $USER\ngroup: $GROUP\n" > /etc/fixuid/config.yml

RUN chown docker:docker /app -R

USER docker:docker

# Install shards for caching
COPY shard.yml shard.yml
COPY shard.lock shard.lock

RUN shards install --production --ignore-crystal-version

COPY src src

RUN shards build --production --release --no-debug --error-trace --static -Dstrict_multi_assign

It's because your base image crystallang/crystal:1.5.0-alpine-build only supports the amd64 architecture. You should use a base image which supports the architecture you are building for.

Take a look at Docker Hub and it will show linux/amd64 as the only architecture supported by this image.
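As a sketch, a base that is published for both platforms would look something like this (assuming Crystal is installed from Alpine's edge repositories, which is the approach the spider-gazelle Dockerfile in this thread takes):

```dockerfile
# alpine:3.16 is published for linux/amd64 and linux/arm64, unlike
# crystallang/crystal:1.5.0-alpine-build, which is amd64-only.
FROM alpine:3.16 AS base

# Install the compiler and shards from Alpine's edge repositories
# so the same Dockerfile builds on both architectures.
RUN apk add \
    --update \
    --no-cache \
    --repository=http://dl-cdn.alpinelinux.org/alpine/edge/main \
    --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community \
    crystal \
    shards
```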

HIH

Ali


Thank you, I can see now that the supported architecture is linux/amd64.