I have set up a build pipeline on an ARM device that builds a .NET Core application. The last step of the pipeline is to store the compiled .NET Core app in a Docker image.
Is it possible to store the app in the .NET Core runtime image for x86?
My hope is that the .NET Core app does not care about the system architecture as long as the .NET runtime is present, and that Docker does not need to start the x86 image to generate the new image:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
COPY /my-application/build/ /app/
EXPOSE 80/tcp
WORKDIR /app
ENTRYPOINT ["dotnet", "app.dll"]
If I understand your question correctly, you have an ARM machine running the pipeline and you want it to compile both the ARM and the x86 image?
Buildx - for cross-platform image building
Sure you can. You can use buildx to manage the cross-compiling for you, so go ahead and install buildx.
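On a recent Docker version, a minimal setup could look like this (the builder name is an arbitrary choice, and the QEMU step is only needed if your host lacks the emulators):
# create and select a builder instance (the name "multiarch" is arbitrary)
docker buildx create --name multiarch --use
# start the builder and list the platforms it can target
docker buildx inspect --bootstrap
# register QEMU emulators for foreign architectures, if missing
docker run --privileged --rm tonistiigi/binfmt --install all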
After you have set up and configured buildx, you can just run:
docker buildx build \
--platform linux/amd64,linux/386,linux/arm/v7 \
--push \
-t docker_user/docker_image:latest \
.
Since the base image is available for multiple platforms, this will work for every platform listed; you can change the platforms you want to build for.
What buildx does is emulate the target platform and execute all the steps in your regular Dockerfile as if running on that platform. Buildx also tags the image (the -t parameter) and pushes it to the Docker registry of your choice if you specify --push.
Actually, it pushes one image per platform and a manifest file joining those images. If another Docker client wants to run the image, the manifest is loaded and the needed platform is selected.
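You can see the result with docker manifest inspect; a heavily abbreviated, purely illustrative output for the image above might look like this:
docker manifest inspect docker_user/docker_image:latest
# {
#   "manifests": [
#     { "platform": { "architecture": "amd64", "os": "linux" }, ... },
#     { "platform": { "architecture": "386",   "os": "linux" }, ... },
#     { "platform": { "architecture": "arm",   "os": "linux", "variant": "v7" }, ... }
#   ]
# }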
Compiling inside Docker
For this to work, you'll need to compile the application inside the Docker build. That is recommended anyway, because compiling it locally and then copying it into the container will result in different images depending on the software installed on the machine building the image.
Follow the instructions here to create the needed Dockerfile.
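As a rough sketch, a multi-stage Dockerfile for the application from the question could look like this (the project layout and the app.dll name are assumptions):
# build stage: compile inside the SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# runtime stage: copy only the published output
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build /app .
EXPOSE 80/tcp
ENTRYPOINT ["dotnet", "app.dll"]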
Requirements
For this to work, the base image also has to support multiple architectures. You can check this in the Docker registry; it is the case for the .NET Core images. If your base image doesn't support the platform, it probably won't work. However, recompiling the entire base image should work (as long as its own base image supports that platform).
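One way to check which platforms a base image supports (imagetools ships with buildx):
docker buildx imagetools inspect mcr.microsoft.com/dotnet/core/aspnet:3.1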
See it in action
There is also a GitHub Action for installing buildx in a GitHub runner. I use this for several of my libraries; see this workflow file or the result here.
Related
I want to build a Docker image for AMD64 and ARM Graviton2 processors. I already know about the multi-arch CLI command docker buildx build --platform linux/amd64,linux/arm64, about manifests, and about the fact that Docker will pull the image variant matching the architecture.
I wonder whether, for ARM, I have to use arm64v8/ubuntu:20.04 as the parent in my Dockerfile, or whether it's fine to use ubuntu:20.04 for both. Will it work the same way on both architectures? What's the purpose of the official arm64v8 Docker Hub repo?
There is a significant difference in build times: 5 min with FROM ubuntu:20.04 vs 30 min with FROM arm64v8/ubuntu:20.04.
OK, so I figured out that ubuntu:20.04 and arm64v8/ubuntu:20.04 have exactly the same SHA. So ubuntu:20.04 is only the parent (a manifest list) of all these per-arch images, and if you run docker manifest inspect ubuntu you will see it all.
So it's clear that the arm64v8/ubuntu:20.04 repo is only for the case where you want to build an ARM image on a different architecture (if you don't want to use the multi-arch buildx command). In that case you have to start your Dockerfile with FROM arm64v8/ubuntu:20.04.
I created an Angular 7 application using VS2017 by following this documentation. The application is working fine on my local machine, but I want to add Docker support for this Angular application and deploy it to either local Docker or local Kubernetes.
So, can anyone help with this?
I do not know the documentation that you referenced, but in general the steps would be:
- Try to run your application locally from the command line (I guess it can be started with dotnet run).
- Create a Dockerfile.
- Use an official Docker image that already includes .NET as the base image (e.g. FROM microsoft/dotnet:sdk, since the Dockerfile below builds and runs the app inside the container).
- In your Dockerfile you can add as much as you want (install dependencies, run unit tests, etc.), but to keep it simple the following should be enough:
Dockerfile:
# the SDK image is needed here, since the container runs dotnet restore/build/run
FROM microsoft/dotnet:sdk
WORKDIR /app
COPY . .
RUN dotnet restore
RUN dotnet build
ENTRYPOINT ["dotnet", "run"]
To optimize for performance you can use multi-stage Docker builds and split your Dockerfile into a build stage and a runtime stage.
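A minimal sketch of that split (the app.dll name is an assumption; for an ASP.NET Core app the aspnetcore-runtime image is the lighter runtime base):
# build stage: restore, build and publish with the SDK image
FROM microsoft/dotnet:sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /out

# runtime stage: only the published output, no SDK or sources
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "app.dll"]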
Note that I didn't read your tutorial, but this is how I would start preparing for Docker.
To work with Kubernetes you can simply push your Docker image (built with docker build -t <your-tag> .) to a Docker registry that your Kubernetes cluster has access to, and create a k8s Deployment that contains that image. Locally you don't need a Docker registry; you can simply kubectl run ...
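A hedged sketch of those steps (the registry host and app name are placeholders):
# build and push to a registry the cluster can reach
docker build -t registry.example.com/my-app:latest .
docker push registry.example.com/my-app:latest
# create a Deployment and expose it inside the cluster
kubectl create deployment my-app --image=registry.example.com/my-app:latest
kubectl expose deployment my-app --port=80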
See:
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
and https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/
On the Docker Hub website, is it possible to create an automated build for an ARM image?
I have modified my existing Dockerfile to use an ARM base image, but it fails on the next line when it tries to run the apk command, with an exec format error. So it looks like Docker Hub is trying to build for Intel despite the base image being built for ARM.
Is it possible to build an ARM image with the Docker Hub website or not?
If not, can anyone provide succinct instructions on how to build an ARM image from my Dockerfile, either by:
- using my Intel PC from the Docker command line
- using my Intel Mac from the Docker command line
- using QNAP TS131P Container Station (since this is natively ARM, maybe this is simpler)
It turned out to be relatively easy using the QNAP, a lot simpler than it seemed from the posts I had read. I think my confusion was that those posts were about building an ARM version on an Intel machine, which I didn't need to do, and all the ARM-specific instructions were for the Raspberry Pi, which had its own problems.
- Created a new empty repository on Docker Hub
- Uploaded my Dockerfile to my webserver
- ssh qnapserver
- docker build DockerFileUrl
- docker login -u DockerHubUsername -p DockerHubPassword
- docker images (to get the imageId of the built image)
- docker tag imageId DockerHubNameSpace/DockerHubRepository:latest
- docker push DockerHubNameSpace/DockerHubRepository:latest
The push worked, and I was then able to use Container Station to get the image from Docker Hub and run it in a container.
I have a Dockerfile for my application and I use Docker Hub to build it.
This works fine on a Synology DS218+ Disk Station, which is Intel-based.
QNAP supports Docker on both Intel and ARM devices with its Container Station software. I have purchased a TS131P to test this out, but it failed with an exec format error. Apparently I have to build an ARM version of the image, but how do I do this?
Can I build the image on the QNAP itself somehow?
Update
My base image was openjdk:8-jre-alpine, and I found an arm32 equivalent of it on Docker Hub, https://hub.docker.com/r/arm32v6/openjdk/, so now I have:
- Created a new BitBucket repo
- Copied over the Dockerfile
- Changed the first line of the Dockerfile to FROM arm32v6/openjdk:8-jre-alpine
- Created a new Automated Build on Docker Hub linked to this repo
But the build is now failing on the second line
RUN apk --no-cache add \
curl \
tini
with
standard_init_linux.go:190: exec user process caused "exec format error"
Since I am using an ARM image, I assume that apk should be compiled for ARM, or do I need to tell Docker Hub to build on ARM rather than Intel?
The simple answer is that you have to build an ARM image on an ARM server, so I built it on the ARM NAS itself, since it supports Docker. This is what I did:
- Ensure Container Station is running on the NAS server
- ssh to the NAS server (from the PC)
- docker build buildfile
- docker login (enter username and password when prompted)
- docker images (to get the imageId of the built image)
- docker tag imageId repoName/imageName:latest
- docker push repoName/imageName:latest
and this was enough to make the arm32 version available to be installed on an arm32 machine.
Currently I have two separate images, one for Intel and one for ARM. I understand that there is a way to combine multiple images under a single multi-arch manifest, but I have not attempted that yet.
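For reference, combining them could look roughly like this (the tags are placeholders, and docker manifest requires the experimental CLI features to be enabled):
# create a manifest list pointing at the per-arch images, then push it
docker manifest create repoName/imageName:latest \
    repoName/imageName:amd64 \
    repoName/imageName:arm32
docker manifest push repoName/imageName:latest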
I have an engineering background mostly in coding/development rather than deployment. We recently introduced microservices to our team and I am doing a POC on deploying these microservices to Docker. I made a simple application with Maven and Java 8 (not OpenJDK), and the JAR file is ready to be deployed, but I am stuck on the exact steps for how to deploy and run/test the application in a Docker container.
I've already downloaded Docker on my Mac and went over this documentation, but I feel like some steps are missing in the middle and I got confused.
I appreciate your help.
Thank you!
If you already have a built JAR file, the quickest way to try it out in Docker is to create a Dockerfile which uses the official OpenJDK base image, copies in your JAR, and configures Docker to run it when the container starts:
# openjdk:8 to match the Java 8 build (openjdk:7 could not run Java 8 classes)
FROM openjdk:8
COPY my.jar /my.jar
CMD ["java", "-jar", "/my.jar"]
With that Dockerfile in the same location as your JAR file run:
docker build -t my-app .
Which will create the image, and then to run the app in a container:
docker run my-app
If you want to integrate Docker into your build pipeline, so the output of each build is a new image, then you can either compile the app inside the image (as in Mark O'Connor's comment above) or build the JAR outside of the image and just use Docker to package it, as in the simple example above.
The advantage of the second approach is a smaller image which has just the app without the source code. The advantage of the first is that you can build your image on any machine with Docker; you don't need Java installed to build it.
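A minimal sketch of the first approach, compiling inside the image with a multi-stage build (it assumes a standard Maven layout, and the JAR name is illustrative):
# build stage: compile and package with Maven
FROM maven:3-jdk-8 AS build
WORKDIR /src
COPY . .
RUN mvn package

# runtime stage: copy only the packaged JAR
FROM openjdk:8
COPY --from=build /src/target/my.jar /my.jar
CMD ["java", "-jar", "/my.jar"]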