I'm running into a seemingly impossible situation. I am building a .NET application with the following Dockerfile:
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8-windowsservercore-ltsc2019 AS netbuild
WORKDIR /app
COPY . ./
RUN dotnet publish -c Debug -o out
FROM mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019
WORKDIR /app
COPY --from=netbuild /app/MyProject/out .
#Some other things and an entrypoint
And my build command is
docker build -t myapp:test -f Dockerfile-backend-only .
On my local machine this works perfectly fine. However, when I run it on a different machine, which serves as a build agent, dotnet publish puts the build output into C:\app\out instead of C:\app\MyProject\out.
I have already read that the behaviour of the -o option for relative paths changed in some releases of the build tools, but this build runs inside a Dockerfile, so that shouldn't have any effect. Also, both machines are running Windows 10 Pro and have up-to-date Docker installations.
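One way to make the output location independent of how a given SDK version resolves relative -o paths is to publish the project explicitly with an absolute output path. A minimal sketch of the build stage, assuming the project file is MyProject/MyProject.csproj (that path is an assumption, not from the question):

```
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8-windowsservercore-ltsc2019 AS netbuild
WORKDIR /app
COPY . ./
# Publish the project explicitly; an absolute -o path lands in the same
# place regardless of how the SDK resolves relative output paths
RUN dotnet publish MyProject/MyProject.csproj -c Debug -o C:\out
```

The runtime stage's COPY --from=netbuild would then reference C:\out on both machines.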
Related
I have a .NET solution with two projects, where each project is a microservice (a server). I have a Dockerfile that first installs all the dependencies used by both projects, then publishes the solution:
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.sln .
COPY Server/*.csproj ./Server/
COPY JobRunner/*.csproj ./JobRunner/
RUN dotnet restore ./MySolution.sln
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "Server.dll"]
When the solution is published, two executables are available: Server.dll and JobRunner.dll. However, I can start only one of them in the Dockerfile.
This seems wasteful, because restoring the solution is a common step for both the Server and JobRunner projects. In addition, the line RUN dotnet publish -c Release -o out produces executables for both Server and JobRunner. I could write a separate Dockerfile for each project, but that seems redundant, as 99% of the build steps are identical.
Is there a way to start the two executables from a single Dockerfile without using a script (I don't want both services running in a single container)? The closest I've found is the --target option of docker build, but it probably won't work because I'd need multiple entrypoints.
In your Dockerfile, change ENTRYPOINT to CMD in the very last line.
Once you do this, you can override the command by just providing an alternate command after the image name in the docker run command:
docker run ... my-image \
dotnet JobRunner.dll
(In principle you can do this without changing the Dockerfile, but the docker run construction is awkward, and there's no particular benefit to using ENTRYPOINT here. If you're using Docker Compose, you can override either entrypoint: or command: on a container-by-container basis.)
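The Compose variant can be sketched as follows: one shared image, a different command: per service (service and image names here are assumptions):

```
services:
  server:
    image: my-image
    # uses the Dockerfile's CMD ("dotnet Server.dll") as-is
  jobrunner:
    image: my-image
    command: ["dotnet", "JobRunner.dll"]
```

Both containers are built from the same image and run independently; only the startup command differs.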
I created an out-of-the-box template C# edge solution in VS Code using the IoT tools, but I get the following error when trying to build and push the solution:
Step 4/12 : RUN dotnet restore
---> Running in 22c6f8ceb80c
A fatal error occurred, the folder [/usr/share/dotnet/host/fxr] does not contain any version-numbered child folders
The command '/bin/sh -c dotnet restore' returned a non-zero code: 131
This clearly happens while Docker runs through the commands in Dockerfile.arm32v7, which contains the following:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster-arm32v7 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim-arm32v7
WORKDIR /app
COPY --from=build-env /app/out ./
RUN useradd -ms /bin/bash moduleuser
USER moduleuser
ENTRYPOINT ["dotnet", "SampleModule.dll"]
This only happens when building for arm32v7; it works for amd64. I'm building for use on a Raspberry Pi, so I need arm32.
It seems to me this might be a problem with my environment, but I'm not sure where to look.
I've also seen comments saying that you need to build on an ARM host machine if you want it to work, but I haven't seen that in the documentation, and it doesn't make sense from an ease-of-development point of view.
When submitting a build using az acr build, you can pass the parameter --platform linux/arm/v7, which should give you an ARM build environment.
If you prefer to build locally, you can try pulling the qemu package into the build context in the first stage of your Dockerfile.
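A sketch of both routes (registry, image, and tag names are assumptions; the local route needs a Docker daemon and a kernel with binfmt_misc support):

```
# Remote ARM build on Azure Container Registry:
az acr build --registry myregistry --image samplemodule:arm32v7 \
    --platform linux/arm/v7 --file Dockerfile.arm32v7 .

# Local build: register QEMU's binfmt handlers once, then build the
# arm32v7 Dockerfile under emulation on the amd64 host:
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
docker build -f Dockerfile.arm32v7 -t samplemodule:arm32v7 .
```

With the handlers registered, the RUN dotnet restore step executes inside the arm32v7 image under emulation instead of failing with the fxr error.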
I want to run multiple instances of a .NET Core API on Windows Server 2016 using Windows Docker containers. I can create the image and the container successfully, but on docker start the container does not stay up; instead it exits with code 2147516566.
Here is my Dockerfile content, which lives in the published API directory:
FROM mcr.microsoft.com/dotnet/core/runtime:2.2-nanoserver-sac2016
COPY / app/
ENTRYPOINT ["dotnet", "app/MyAPI.dll"]
I didn't spend long on it, but I didn't have good luck running binaries I built myself. The Docker add-in for Visual Studio always performs the build inside a container, and I have adapted to this. Here is an anonymized example Dockerfile; hopefully I didn't break anything:
# Base image for running final product
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-nanoserver-sac2016 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
# build asp.net application
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-nanoserver-sac2016 AS build
WORKDIR /src
COPY ["Test.Docker.Windows/Test.Docker.Windows.csproj", "Test.Docker.Windows/"]
RUN dotnet restore "Test.Docker.Windows/Test.Docker.Windows.csproj"
COPY . .
WORKDIR "/src/Test.Docker.Windows"
RUN dotnet build "Test.Docker.Windows.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Test.Docker.Windows.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
# startup.bat contains dotnet test.Docker.Windows.dll
CMD ./startup.bat
I have been updating our .NET Core application from 2.0 to 2.1, and I see that multi-stage Dockerfiles are now suggested. I read that it is more efficient to build your app inside a container with the .NET SDK and then copy it to a different container with just the runtime: https://github.com/dotnet/announcements/issues/18
I am wondering why this would be better than what we did with our 2.0 images: run dotnet publish MySolution.sln -c Release -o ./obj/Docker/publish on our build server (which has the .NET 2.1 SDK installed) and then do a single-stage build that copies the build output into an image.
Multi-stage builds are claimed to be simpler, but it seems more complicated to me to copy over everything you need for a build, and then copy the results to another container.
Here is what the multi-stage Dockerfile looks like:
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY myproject.csproj app/
RUN dotnet restore myproject.csproj
COPY . .
WORKDIR /src/Setting
RUN dotnet restore myproject.csproj
RUN dotnet build myproject.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish myproject.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "myproject.dll"]
versus a single-stage one:
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "Myproject.dll"]
Any thoughts?
I read that it is more efficient to build your app inside a container with the .NET SDK and then copy it to a different container with just the runtime.
I believe this recommendation already existed before .NET Core 2.1 in the official Docker documentation.
This is simply because you need the SDK only to build your application; if you have already built the application somewhere else, you can run it directly on a runtime container (not an SDK container). I did that a lot when no SDK was available for ARM processors: I had to build the application on my PC and then copy it to the target device. That was not easy, because I kept running into dependency problems that I had to solve manually.
Therefore, it is almost always recommended to build the application on the machine you want to deploy it on. This guarantees that your application is built with the proper settings for that exact software and hardware environment.
Because of that, I would always build the application on the target device (in an SDK container), and then copy the build output to a runtime container.
Now you might ask: why not simply run the application in the SDK container? Mainly because the SDK container is significantly larger and heavier than any runtime container (it has the runtime environment plus the build tools).
Ended up just using a single-stage build, because we can build the app on our CI server. The CI server has the SDK installed, but our container just uses the runtime image. Building from the SDK image could be interesting if you don't have the .NET SDK available, or if you want to build inside a container.
Here is what my Dockerfile looks like for 2.1:
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
EXPOSE 80
COPY Microservices/FooService/Foo/obj/Docker/publish .
ENTRYPOINT ["dotnet", "Foo.dll"]
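The CI side of this single-stage approach is just two steps; a sketch, where the project file path and image tag are assumptions (only the publish output path is taken from the Dockerfile above):

```
# Publish on the build server, where the SDK is installed...
dotnet publish Microservices/FooService/Foo/Foo.csproj \
    -c Release -o Microservices/FooService/Foo/obj/Docker/publish
# ...then bake the published output into the runtime-only image
# (context is the repo root so the COPY path in the Dockerfile resolves)
docker build -t foo:latest .
```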
I have a Docker Swarm with multiple Raspberry Pis (Raspbian Lite), and I am trying to run ASP.NET Core containers on them as a service / stack.
I have following Dockerfile:
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /source
COPY WebApi.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /publish -r linux-arm /p:PublishWithAspNetCoreTargetManifest="false"
FROM microsoft/dotnet:2.0-runtime-deps-stretch-arm32v7
ENV ASPNETCORE_URLS http://+:80
WORKDIR /app
COPY --from=build /publish .
ENTRYPOINT [ "./WebApi" ]
What works:
Building and pushing the container image on my Windows 10 laptop (since the SDK image is only available for x64) and running the container on a single Raspberry Pi node with docker run:
docker run --rm -it myrepo/myimage
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
(so the container can run, and it is not an ARM vs. x64 issue)
What doesn't work:
docker service create myrep/myimage
2n4ahyhgb3ju5rvo97tekh9vg
overall progress: 0 out of 1 tasks
1/1: no suitable node (unsupported platform on 3 nodes)
And of course docker stack deploy.
If I inspect the created image (or even the arm32v7 image from Microsoft), it just states amd64 instead of arm:
"Architecture": "amd64",
"Os": "linux",
So is this a case of wrong metadata that is only checked by Docker when you use Swarm? How can I change it / make it run? Do I need to build a base image of my own? Am I missing something?
Edit 1
I tried it with the .NET Core 2.1 images and had the same behavior.
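The mismatch Swarm complains about can be seen by comparing the image's reported platform against the nodes' (image name is an assumption):

```
# What platform does the image claim to be for?
docker image inspect --format '{{.Os}}/{{.Architecture}}' myrepo/myimage
# Swarm only schedules a task onto a node whose platform matches; an
# image stamped linux/amd64 will never be placed on a linux/arm node.
docker node ls    # then: docker node inspect <node> for its platform
```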
With the latest .NET Core 2.1 Preview 2 images, released yesterday, it finally works.
Update (add Dockerfile):
This simple one should do the trick:
FROM microsoft/dotnet:2.1-sdk-stretch-arm32v7 AS builder
ENV DOTNET_CLI_TELEMETRY_OPTOUT 1
WORKDIR /src
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /publish -r linux-arm
FROM microsoft/dotnet:2.1-runtime-deps-stretch-slim-arm32v7
WORKDIR /app
COPY --from=builder /publish .
ENTRYPOINT [ "./MyService" ]
You can even build your image on your Raspberry Pi now. But be aware that it's still a (nicely working) preview.