I am trying to dockerize my tests and run them on an M1 (ARM) machine.
My Dockerfile looks like this:
FROM cypress/browsers:latest
WORKDIR /projectName
# project manifest and configuration
COPY package.json .
COPY tsconfig.json .
COPY cypress.config.ts .
# test sources
COPY cypress ./cypress
COPY makefile .
RUN npm install -g yarn
RUN yarn install
# keep the container alive so tests can be run with docker exec
CMD tail -f /dev/null
I then use a Makefile with a few commands:
build_docker_arm:
	sudo docker build ${ARM_PLATFORM} -t projectName .

start_docker_arm:
	docker run ${ARM_PLATFORM} -d -t -i -v `pwd`/cypress:/projectName/cypress --name projectName projectName

run_docker_chrome:
	docker exec -t -i projectName npx cypress run --browser chrome --spec "cypress/e2e/features/*"
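For reference, ARM_PLATFORM is not defined in the snippet above; it is presumably a Make variable along the lines of:

ARM_PLATFORM = --platform linux/arm64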
When I run make run_docker_chrome I get an error. If I run without --browser chrome, the tests work.
See the open issue in the Cypress repo:
Add Arm browsers to linux/arm64 Docker images
https://github.com/cypress-io/cypress-docker-images/issues/695
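Until that issue is resolved, the linux/arm64 Cypress images ship without Chrome or Firefox, which is why the run succeeds once you drop --browser chrome (Cypress falls back to its bundled Electron). Two hedged workarounds, reusing the targets above:

# 1) stay on arm64 and run with the bundled Electron browser
docker exec -t -i projectName npx cypress run --spec "cypress/e2e/features/*"

# 2) build and run the image as linux/amd64 under emulation (slower, but Chrome is available)
sudo docker build --platform linux/amd64 -t projectName .
docker run --platform linux/amd64 -d -t -i -v `pwd`/cypress:/projectName/cypress --name projectName projectName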
I am trying to set up a Docker container that builds and runs a small application. This is my Dockerfile:
#####################
# build the jar
#####################
FROM gradle:jdk11 as builder
COPY --chown=gradle:gradle application /application
WORKDIR /application
RUN gradle --no-daemon build
#####################
# run the app
#####################
# Use this on a non-arm machine
# FROM openjdk:11
# Use this on an arm machine, such as a raspberry pi
FROM arm32v7/adoptopenjdk:11
EXPOSE 8080
COPY --from=builder /application/build/libs/myjar.jar .
WORKDIR /
CMD java -jar ./myjar.jar
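(As an aside, the hand-edited base-image switch could be driven by a build argument instead, so the same Dockerfile works on both machines; a sketch, where RUNTIME_IMAGE is a name introduced here for illustration:)

# pass --build-arg RUNTIME_IMAGE=arm32v7/adoptopenjdk:11 on the Pi
ARG RUNTIME_IMAGE=openjdk:11
FROM ${RUNTIME_IMAGE}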
docker build -t myimage . works without problems on my personal machine (a MacBook Pro). If I try to build the image on a Raspberry Pi 4B (which is ultimately the goal), it hangs at the RUN gradle build step and never completes.
This is my terminal output:
pi@raspberrypi:~/development/my_test $ docker build -t test .
Sending build context to Docker daemon 15.92MB
Step 1/9 : FROM gradle:jdk11 as builder
---> 0924090a3770
Step 2/9 : COPY --chown=gradle:gradle application /application
---> Using cache
---> b702fc76b9cb
Step 3/9 : WORKDIR /application
---> Using cache
---> dbc2aac75c7c
Step 4/9 : RUN gradle --no-daemon build
---> Running in faec45c6cf01
OpenJDK Server VM warning: No monotonic clock was available - timed services may be adversely affected if the time-of-day clock changes
And that's it. Nothing further happens.
At first, I had ignored the OpenJDK warning since I had seen it with other images and had no problems running them. After every other option failed, I started to suspect it might be the culprit. How can this be resolved?
I successfully tested the solution below on a Raspberry Pi 4 (Raspbian 10 Buster) with JDK 16 and JDK 17:
# Add the signing keys for the backports repository
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 04EE7237B7D453EC 648ACFD622F3D138
# Add the backports repository
echo 'deb http://httpredir.debian.org/debian buster-backports main contrib non-free' | sudo tee -a /etc/apt/sources.list.d/debian-backports.list
# Update and install libseccomp2 from backports
sudo apt update
sudo apt install -t buster-backports libseccomp2
Source: https://community.openhab.org/t/docker-openhab-3-2-0-snapshot-stuck-at-unhealthy-with-openjdk-client-vm-warning-no-monotonic-clock-was-available/128865/7
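To confirm the fix took, the installed version should now be 2.4.2 or higher:

dpkg -s libseccomp2 | grep '^Version'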
I found the solution: as it turns out, the missing monotonic clock can lead to unpredictable behaviour with Java. Some applications run, some throw errors, some don't start at all. The missing-clock issue can be fixed by installing libseccomp 2.4.2 or higher, which unfortunately is not served by apt on the Pi. So it seems the only way currently is a manual install as described here, from the source page here. I did this and the error went away; Gradle started running, built my app, and printed the proper output to the terminal.
I created an out-of-the-box template C# edge solution in VS Code using the IoT tools, but I get the following error when trying to build and push the solution:
Step 4/12 : RUN dotnet restore
---> Running in 22c6f8ceb80c
A fatal error occurred, the folder [/usr/share/dotnet/host/fxr] does not contain any version-numbered child folders
The command '/bin/sh -c dotnet restore' returned a non-zero code: 131
This is clearly happening when it's trying to run through the commands in Dockerfile.arm32v7, which contains the following:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster-arm32v7 AS build-env
WORKDIR /app

# restore NuGet packages as a separate, cacheable layer
COPY *.csproj ./
RUN dotnet restore

# copy the remaining sources and publish
COPY . ./
RUN dotnet publish -c Release -o out

FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim-arm32v7
WORKDIR /app
COPY --from=build-env /app/out ./

# run the module as a non-root user
RUN useradd -ms /bin/bash moduleuser
USER moduleuser
ENTRYPOINT ["dotnet", "SampleModule.dll"]
This only happens when building for arm32v7; it works for amd64. I'm building for use on a Raspberry Pi, so I need arm32.
It seems to me this might be a problem with my environment, but I'm not sure where to look.
I've also seen comments saying that you need to build on an ARM host machine if you want this to work, but I haven't seen that in the documentation, and it doesn't make sense from an ease-of-development point of view.
When submitting a build using az acr build, you can pass in the parameter --platform linux/arm/v7, which should give you an ARM build environment.
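For example (the registry and image names here are placeholders):

az acr build --registry myregistry --image samplemodule:0.0.1-arm32v7 --platform linux/arm/v7 --file Dockerfile.arm32v7 .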
If you prefer to build locally, you can try pulling the qemu static binaries into the build context and copying them into the first stage of your Dockerfile.
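A minimal sketch of that approach, assuming qemu-arm-static (from the qemu-user-static package) has been downloaded into the build context:

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster-arm32v7 AS build-env
# lets the amd64 host execute the arm32 binaries in this image
COPY qemu-arm-static /usr/bin/qemu-arm-static
# ... rest of the Dockerfile unchanged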
I am having a headache with cross-compilation from amd64 to armv7l.
I finally got it working with GitLab CI, so now I compile my binary in a Docker image. Here is the Dockerfile:
FROM golang
WORKDIR /go/src/gitlab.com/company/edge_to_bc
COPY . .
# add the armhf architecture and install the ARM cross-toolchain
RUN dpkg --add-architecture armhf && apt-get update && apt-get install -y gcc-arm-linux-gnueabihf libltdl-dev:armhf
I build this into a new cross-compilation image named ubuntu:cross-compil.
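(The build command itself isn't shown above; presumably something like:)

docker build -t ubuntu:cross-compil .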
Now, I can compile my binary with:
docker run -it -v ${EDGE_TO_BC_PATH}/release:/go/src/gitlab.com/company/edge_to_bc/release ubuntu:cross-compil bash -c 'CC=arm-linux-gnueabihf-gcc CXX=arm-linux-gnueabihf-g++ CGO_ENABLED=1 GOOS=linux GOARCH=arm GOARM=7 go build -v -o ./release/edge_to_bc '
I can see my executable generated in ./release/edge_to_bc
Then I build my docker image:
docker build -t registry.gitlab.com/company/edge_to_bc:armhf .
And I push it.
In the Dockerfile, I just copy the executable from the host:
FROM alpine:3.7
RUN apk --no-cache add ca-certificates libtool
WORKDIR /sunclient/
COPY ./release/edge_to_bc ./
EXPOSE 5555
CMD [ "./edge_to_bc" ]
But when I run it on my ARM board with:
docker run --rm registry.gitlab.com/company/edge_to_bc:armhf
I get:
standard_init_linux.go:207: exec user process caused "no such file or directory"
When debugging, if I try to get a list of the files with
docker run --rm registry.gitlab.com/company/edge_to_bc:armhf
I get:
standard_init_linux.go:207: exec user process caused "exec format error"
which indicates the binary still doesn't have the correct format...
What did I miss? I've spent a lot of time on this topic and don't have many more ideas.
When I check the architecture of the binary, this is what I get:
edge_to_bc git:(master) ✗ readelf -h ./release/edge_to_bc
ELF Header:
Magic: 7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00
Class: ELF32
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
Machine: ARM
Version: 0x1
Entry point address: 0x19209
Start of program headers: 52 (bytes into file)
Start of section headers: 23993360 (bytes into file)
Flags: 0x5000400, Version5 EABI, hard-float ABI
Size of this header: 52 (bytes)
Size of program headers: 32 (bytes)
Number of program headers: 10
Size of section headers: 40 (bytes)
Number of section headers: 49
Section header string table index: 48
On the target OS, this is what I get:
[root@gw-sol1 ~]# uname -a
Linux gw-sol-1 4.4.113-UNRELEASED #1 SMP PREEMPT Thu Mar 7 16:46:40 CET 2019 armv7l armv7l armv7l GNU/Linux
EDIT:
When I build the app directly on the ARM device, it works:
go build -o ./release/edge_to_bc -v -ldflags '-w -s -extldflags "-static"' ./...
The resulting ELF header:
ELF Header:
Magic: 7f 45 4c 46 01 01 01 03 00 00 00 00 00 00 00 00
Class: ELF32
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - GNU
ABI Version: 0
Type: EXEC (Executable file)
Machine: ARM
Version: 0x1
Entry point address: 0x125f1
Start of program headers: 52 (bytes into file)
Start of section headers: 16594072 (bytes into file)
Flags: 0x5000400, Version5 EABI, hard-float ABI
Size of this header: 52 (bytes)
Size of program headers: 32 (bytes)
Number of program headers: 7
Size of section headers: 40 (bytes)
Number of section headers: 38
Section header string table index: 37
It seems quite similar to the other one, at least in architecture.
Then I build the Docker image:
docker build . -t image/peer-test:armhf
When you run the build of the armhf image on the amd64 system with this command:
docker build -t registry.gitlab.com/company/edge_to_bc:armhf .
it will use the amd64 variants of the base images and run the commands in them for amd64. So even if your single edge_to_bc binary is cross-compiled, the rest of the image is not. Next, the binary compile command itself looks like it is linking to shared libraries, which quite likely are not in your final image. You can run ldd edge_to_bc to see these links, and if they are missing, you'll get the file-not-found error.
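For example, on the target board (ldd has to run on a machine that can execute the binary):

ldd ./edge_to_bc

On the build host, readelf can show the same dependency information for the cross-compiled binary:

readelf -d ./release/edge_to_bc | grep NEEDED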
In my own cross-compile test, I'm using BuildKit, Buildx, and some experimental features on 19.03.0-rc2, so there will be parts of this that are not backwards compatible, but hopefully you find it helpful. I'm using a multi-stage build to avoid compiling outside of Docker followed by a second build. I'm also specifying the platform of the build host and using the target arch and OS variables to configure Go to cross-compile. In this scenario, I went with a completely statically linked binary so there were no libraries to include, and with only copy commands in my final release image, I avoid issues running the build on a different platform.
# syntax=docker/dockerfile:experimental
# ^ above line must be at the beginning of the Dockerfile for BuildKit
# --platform pulls an image for the build host rather than target OS/Arch
FROM --platform=$BUILDPLATFORM golang:1.12-alpine as dev
RUN apk add --no-cache git ca-certificates
RUN adduser -D appuser
WORKDIR /src
COPY . /src/
CMD CGO_ENABLED=0 go build -o app . && ./app
# my dev stage is separate from build to allow mounting source and rebuilding
# on developer machines
FROM --platform=$BUILDPLATFORM dev as build
ARG TARGETPLATFORM
ARG TARGETOS
ARG TARGETARCH
# --mount is an experimental BuildKit feature
RUN --mount=type=cache,id=gomod,target=/go/pkg/mod/cache \
--mount=type=cache,id=goroot,target=/root/.cache/go-build \
CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
go build -ldflags '-w -extldflags -static' -o app .
USER appuser
CMD [ "./app" ]
# this stage will have the target OS/Arch and cannot have any RUN commands
FROM scratch as release
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /etc/passwd /etc/group /etc/
COPY --from=build /src/app /app
USER appuser
CMD [ "/app" ]
FROM scratch as artifact
COPY --from=build /src/app /app
FROM release
To build this, I can run one-off build commands with BuildKit:
DOCKER_BUILDKIT=1 docker build --platform=linux/amd64 -f Dockerfile -t golang-app:amd64 .
DOCKER_BUILDKIT=1 docker build --platform=linux/arm64 -f Dockerfile -t golang-app:arm64 .
But even better is the Buildx version that creates a multi-arch image that will run on multiple platforms. This does require that you push to a registry server:
docker buildx build -f Dockerfile --platform linux/amd64,linux/arm64 \
-t ${docker_hub_id}/golang-app:multi-arch --output type=registry .
For your scenario, you would swap out the references from arm64 to your own architectures. The --platform option was listed as experimental in many commands I ran, so you may need to configure the following in the /etc/docker/daemon.json file to enable it:
{ "experimental": true }
I believe this required a full restart of the docker daemon (systemctl restart docker), not just a reload, to take effect.
To extract the artifact from the build (the compiled binary for the specific architecture), I use:
docker build -f Dockerfile --target artifact -o type=local,dest=. .
That will output the contents of the artifact stage (a single binary) to the local directory.
The above is option 3 in Docker's list of ways to build multi-arch images.
Option 1 is to configure qemu with binfmt_misc to build and run images for different platforms. I have not yet been able to get this to work on Linux with Buildx, but Docker has done this for Mac and you may be able to find more details on what they did in the LinuxKit project. Using qemu for your situation may be ideal if you need to run commands and install other tools as part of your build, not just cross compile a single statically linked binary.
Option 2 is to run the build on the target platform, which as you've seen works well. With Buildx, you can add multiple build nodes, up to one per platform, allowing you to run the build from a single location (CI server).
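A sketch of that multi-node setup, assuming SSH access to an ARM device (the builder and host names are placeholders):

# register the local amd64 engine as the first node of a new builder
docker buildx create --name multibuilder --platform linux/amd64
# append the remote ARM device as a second node
docker buildx create --name multibuilder --append --platform linux/arm/v7 ssh://pi@raspberrypi
docker buildx use multibuilder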
I'm trying to build a Docker container in Visual Studio Code (I'm actually working with IoT Edge modules). The application is the OPC Publisher module for IoT Edge that you can find here. I'm working on Windows 10.
I have the following Dockerfile:
ARG runtime_base_tag=2.1-runtime-bionic-arm32v7
ARG build_base_tag=2.1-sdk-bionic-arm32v7
FROM microsoft/dotnet:${build_base_tag} AS build
WORKDIR /app
# copy csproj and restore as distinct layers
COPY ./src/*.csproj ./opcpublisher/
WORKDIR /app/opcpublisher
RUN dotnet restore
# copy and publish app
WORKDIR /app
COPY ./src/. ./opcpublisher/
WORKDIR /app/opcpublisher
RUN dotnet publish -c Release -o out
# start it up
FROM microsoft/dotnet:${runtime_base_tag} AS runtime
WORKDIR /app
COPY --from=build /app/opcpublisher/out ./
WORKDIR /appdata
ENTRYPOINT ["dotnet", "/app/opcpublisher.dll"]
When I run the docker build command, I get the following error:
Step 7/16 : RUN dotnet restore
---> Running in d6bd61466c67
qemu: Unsupported syscall: 389
Then it waits indefinitely; the process never exits with an error code or anything else.
It seems to be an error with the .NET command. I commented out step 7 in the Dockerfile and got the same error at the RUN dotnet publish -c Release -o out step.
I installed qemu and the .NET CLI tools (this?) but I still have the error.
Does anyone have an idea what the problem is?
Thanks
EDIT
When I run VS Code as administrator, I get this additional output:
Step 7/16 : RUN dotnet restore
---> Running in 293e6271cb65
qemu: Unsupported syscall: 389
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
Segmentation fault
The command '/bin/sh -c dotnet restore' returned a non-zero code: 139
And the command stops (no more indefinite wait).