In my opinion, the autogenerated Dockerfile for a .NET Core web application is too large, but why? Why did Microsoft decide to structure it like this?
This is the Dockerfile that is autogenerated when we check the "Add Docker support" flag during app creation:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster AS build
WORKDIR /src
COPY ["app/app.csproj", "app/"]
RUN dotnet restore "app/app.csproj"
COPY . .
WORKDIR "/src/app"
RUN dotnet build "app.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "app.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "app.dll"]
And in my opinion it could look like this:
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster AS build
WORKDIR /src
COPY ["app/app.csproj", "app/"]
RUN dotnet restore "app/app.csproj"
COPY . .
WORKDIR "/src/app"
RUN dotnet build "app.csproj" -c Release -o /app/build
RUN dotnet publish "app.csproj" -c Release -o /app/publish
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "app.dll"]
Why did Microsoft decide to first pull aspnet:3.0-buster-slim just to expose ports, and only use it later as the final stage? It would be much shorter to pull this image as the last step, as in my example. Also, do we need the double FROM for sdk:3.0-buster (first named build, second named publish)? It's possible to add multiple RUN instructions one after another, as in my example.
Maybe there are some technical reasons why they decided to do that?
Thanks!
A Dockerfile is a series of steps used by the docker build . command. At a minimum, three steps are required:
FROM some-base-image
COPY some-code-or-local-content ./
CMD the-entrypoint-command
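To make that concrete, here is a minimal runnable sketch (assuming the app was already published locally to ./publish and the main assembly is app.dll):
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim
WORKDIR /app
COPY ./publish .
CMD ["dotnet", "app.dll"]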
As our application becomes more and more complex, additional steps are added, such as restoring packages and dependencies. Commands like the following are used for that:
RUN dotnet restore
-or-
RUN npm install
Or the like. As the application grows, the image build time increases and so does the size of the image itself.
Each Docker build step generates an intermediate image, and Docker caches them. Notice the output below:
$ docker build .
Sending build context to Docker daemon 310.7MB
Step 1/9 : FROM node:alpine
---> 4c6406de22fd
Step 2/9 : WORKDIR /app
---> Using cache
---> a6d9fba502f3
Step 3/9 : COPY ./package.json ./
---> dc39d95064cf
Step 4/9 : RUN npm install
---> Running in 7ccc864c268c
Notice how step 2 says Using cache: Docker realized that everything up to step 2 is identical to the previous build, so it is safe to reuse the cached layers from that earlier build.
One of the focuses of this template is building efficient images.
Efficiency could be achieved in two ways:
reducing the time taken to build the image
reducing the size of the final image
For #1, caching from previous builds is leveraged. Structuring the Dockerfile so that rarely-changing steps (copying the project file, restoring packages) come before frequently-changing steps (copying the rest of the source) lets Docker reuse more of the cache, which makes the build faster. Relying on the cache is only possible if the Dockerfile is written with this in mind.
By separating the build and publish stages, the docker build . command is better able to reuse cached layers from previous runs.
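A minimal sketch of that cache-friendly ordering (assuming a single project named app.csproj):
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster AS build
WORKDIR /src
# The project file changes rarely, so this layer and the restore layer
# below are usually served from cache
COPY app.csproj ./
RUN dotnet restore app.csproj
# Source code changes often; only the layers from here on get rebuilt
COPY . .
RUN dotnet publish app.csproj -c Release -o /app/publish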
For #2, avoid installing packages that are not required in the final image, for example.
Refer to the Docker documentation for more details.
By default, Visual Studio uses the Fast mode build, which actually builds your projects on the local machine and then shares the output folder with the container using a volume mount.
In Fast mode, Visual Studio calls docker build with an argument that tells Docker to build only the base stage. Visual Studio handles the rest of the process without regard to the contents of the Dockerfile. So, when you modify your Dockerfile, such as to customize the container environment or install additional dependencies, you should put your modifications in the first stage. Any custom steps placed in the Dockerfile's build, publish, or final stages will not be executed.
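For example, if your app needs an extra native tool at debug time, it would go into the base stage like this (a sketch; curl stands in for whatever dependency you actually need):
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim AS base
# Custom steps belong in the first (base) stage so Fast mode picks them up
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
WORKDIR /app
EXPOSE 80
EXPOSE 443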
Thus, the answer to your question
Why Microsoft decided to first - get the aspnet:3.0-buster-slim just to expose ports and use them later as final?
It's necessary in order to support the optimized Fast mode build and debugging in Visual Studio.
Related
I have a .NET solution which has 2 projects, and each project is a microservice (a server). I have a Dockerfile which first installs all the dependencies used by both projects. Then I publish the solution:
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.sln .
COPY Server/*.csproj ./Server/
COPY JobRunner/*.csproj ./JobRunner/
RUN dotnet restore ./MySolution.sln
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "Server.dll"]
When the solution is published, 2 executables are available: Server.dll and JobRunner.dll. However, I can only start one of them in the Dockerfile.
This seems wasteful, because restoring the solution is a common step for both the Server and JobRunner projects. In addition, the line RUN dotnet publish -c Release -o out produces executables for both Server and JobRunner. I could write a separate Dockerfile for each project, but that seems redundant, as 99% of the build steps for each project are identical.
Is there a way to somehow start the 2 executables from a single file without using a script (I don't want both services started in a single container)? The closest I've found is the --target option of docker build, but it probably won't work because I'd need multiple entrypoints.
In your Dockerfile, change ENTRYPOINT to CMD in the very last line.
Once you do this, you can override the command by just providing an alternate command after the image name in the docker run command:
docker run ... my-image \
dotnet JobRunner.dll
(In principle you can do this without changing the Dockerfile, but the docker run construction is awkward, and there's no particular benefit to using ENTRYPOINT here. If you're using Docker Compose, you can override either entrypoint: or command: on a container-by-container basis.)
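For example, a Docker Compose sketch that runs both services from the same image (my-image stands in for whatever tag you build):
version: "3.8"
services:
  server:
    image: my-image
    command: ["dotnet", "Server.dll"]
  jobrunner:
    image: my-image
    command: ["dotnet", "JobRunner.dll"]
Both containers share the image, so the common restore and publish work is done once.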
I created an out-of-the-box template C# edge solution in VS Code using the IoT tools, but I get the following error when trying to build and push the solution:
Step 4/12 : RUN dotnet restore
---> Running in 22c6f8ceb80c
A fatal error occurred, the folder [/usr/share/dotnet/host/fxr] does not contain any version-numbered child folders
The command '/bin/sh -c dotnet restore' returned a non-zero code: 131
This is clearly happening when it's trying to run through the commands in Dockerfile.arm32v7, which contains the following:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster-arm32v7 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim-arm32v7
WORKDIR /app
COPY --from=build-env /app/out ./
RUN useradd -ms /bin/bash moduleuser
USER moduleuser
ENTRYPOINT ["dotnet", "SampleModule.dll"]
This only happens when building for arm32v7; it works for amd64. I'm building for use on a Raspberry Pi, so I need arm32.
It seems to me this may be a problem with my environment, but I'm not sure where to look.
I've also seen some comments saying that you need to build on an ARM host machine if you want it to work, but I haven't seen that in the documentation before, and it doesn't make sense from an ease-of-development point of view.
When submitting a build using az acr build, you can pass in the parameter --platform linux/arm/v7 which should give you an ARM build environment.
If you prefer to build locally, you can try pulling the qemu package into the build context and copying it in during the first stage of your Dockerfile.
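Roughly, that local approach looks like this (a sketch; it assumes you have downloaded the qemu-arm-static binary into the build context, e.g. from a qemu-user-static release, and that your host has binfmt support registered):
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster-arm32v7 AS build-env
# The QEMU user-mode emulator lets the amd64 host execute the ARM binaries in this image
COPY qemu-arm-static /usr/bin/qemu-arm-static
WORKDIR /app
# ...rest of the Dockerfile unchanged...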
I am trying to put my ASP.NET Core application into a Docker container. As I use the Microsoft secret store for saving credentials, I need to run a dotnet user-secrets command within my container. The application needs to read these credentials when starting, so I have to run the command prior to starting my application. When trying to do that in my Dockerfile I get the error:
---> Running in 90f974a06d83
Could not find a MSBuild project file in '/app'. Specify which project to use with the --project option.
I tried building my application first and then building a container with the already-built dll, but that gave me the same error. I also tried connecting to the container with ENTRYPOINT ["/bin/bash"] and then looking around in the container. It seems that the /app folder that gets created does not include the .csproj files. I'm not sure if that could be the cause.
My Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["Joinme.Facade/Joinme.Facade.csproj", "Joinme.Facade/"]
COPY ["Joinme.Domain/Joinme.Domain.csproj", "Joinme.Domain/"]
COPY ["Joinme.Business/Joinme.Business.csproj", "Joinme.Business/"]
COPY ["Joinme.Persistence/Joinme.Persistence.csproj", "Joinme.Persistence/"]
COPY ["Joinme.Integration/Joinme.Integration.csproj", "Joinme.Integration/"]
RUN dotnet restore "Joinme.Facade/Joinme.Facade.csproj"
COPY . .
WORKDIR "/src/Joinme.Facade"
RUN dotnet build "Joinme.Facade.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Joinme.Facade.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
RUN dotnet user-secrets set "jwt:secret" "some_password"
ENTRYPOINT ["dotnet", "Joinme.Facade.dll"]
My expected result is that the secret gets set, so I can start the container without it crashing.
Plain and simple: the operation is failing because at this stage there are no *.csproj files, which the user-secrets command requires. However, this is not what you should be doing anyway, for a few reasons:
User secrets are not for production. You can just as easily, in fact more easily, set an environment variable here instead, which doesn't require dotnet or the SDK (see the sketch after this list):
# ASP.NET Core maps "__" in environment variable names to the ":" configuration
# separator, since colons are not usable in variable names on Linux
ENV jwt__secret=some_password
You should not actually be storing secrets in your Dockerfile, as that goes into your source control, and is exposed as plain text. Use Docker secrets, or an external provider like Azure Key Vault.
You don't want to build your final image based on the SDK, anyways. That's going to make your container image huge, which means both longer transfer times to/from the container registry and higher storage/bandwidth costs. Your final image should be based on the runtime, or even something like alpine, if you publish self-contained (i.e. keep it as small as possible).
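As a sketch of the environment-variable approach, the same configuration value can instead be supplied when the container starts, with no change to the image at all (my-image stands in for your built image):
docker run -e jwt__secret=some_password my-image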
I'm pretty new to Docker and I was wondering:
When you are working with many microservices (50+), is it still relevant to use the runtime image versus the SDK image?
In order to use the runtime image, I need to do a self-contained publish, which is around 100 MB bigger.
With 50 microservices, that's 5 GB of data for self-contained apps.
Is it worth taking the runtime image in this case?
The runtime image contains the runtime, so you don't need to publish self-contained. The SDK is only required if you need to build. The runtime has everything necessary to, you know, run. If you're publishing self-contained, you'd only need the OS, so your base image would just be alpine or something, not ASP.NET Core at all (because ASP.NET Core would be contained in the app).
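The practical difference between the two publish modes (both are standard dotnet CLI commands; the linux-x64 runtime identifier is just an example):
# Framework-dependent: small output, but needs a runtime image to run
dotnet publish -c Release -o out
# Self-contained: bundles the runtime, so only a bare OS base image is needed
dotnet publish -c Release -r linux-x64 --self-contained true -o out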
Then, Docker has multi-stage builds. As such, the typical way to do this is to build and publish all in the image, just in different stages. The final image is based on the last stage, so that is where you'd reference the runtime image. For example:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["MyProject/MyProject.csproj", "MyProject/"]
RUN dotnet restore "MyProject/MyProject.csproj"
COPY . .
WORKDIR "/src/MyProject"
RUN dotnet build "MyProject.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "MyProject.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyProject.dll"]
Each FROM line indicates a new stage. The only thing that survives is the final stage, where all that's being done is the published files are copied in and the app is run, giving you an optimally sized image. However, using the staged build, all the logic necessary to build and publish your app is contained in the image.
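As a side note, if you ever want to inspect an intermediate stage, docker build can stop at a named stage; for example, using the stage names from the Dockerfile above:
docker build --target build -t myproject:build-stage .
docker run -it myproject:build-stage /bin/bash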
I have been updating our .NET Core application from 2.0 to 2.1, and I see that multi-step Dockerfiles are now suggested. I read that it is more efficient to build your app inside a container with the dotnet SDK and then copy it to a different container with just the runtime. https://github.com/dotnet/announcements/issues/18
I am wondering why this would be better than how we did it with our 2.0 images, which was to run dotnet publish MySolution.sln -c Release -o ./obj/Docker/publish on our build server (the dotnet 2.1 SDK is installed on it) and then do a single-step build where we copy the build output into an image.
It is claimed to be simpler to do a multi-step build, but it seems more complicated to me to copy over everything you need for a build and then copy the results to another container.
Here is what the multi-step Dockerfile looks like:
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY myproject.csproj ./
RUN dotnet restore myproject.csproj
COPY . .
RUN dotnet build myproject.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish myproject.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "myproject.dll"]
vs. a single-step one:
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "Myproject.dll"]
Any thoughts?
I read that it is more efficient to build your app inside a container with the dotnet SDK and then copy to a different container with just the runtime.
I believe this guidance already existed before .NET Core 2.1 in the official Docker documentation.
This is simply because you need the SDK only to build your application; if you have already built the application somewhere else, you can run it directly on a runtime container (not an SDK container). I did that a lot when no SDK was available for ARM processors: I had to build the application on my PC and then copy it to the target device. That was not easy, because I always had dependency problems that I had to solve manually.
Therefore, it is almost always recommended to build the application in the same environment you want to deploy it in. This guarantees that your application is built with the proper settings for exactly those software and hardware requirements.
Because of that, I would always build the application in the target deployment environment (the SDK container), and then copy the build output to the runtime container.
Now you are asking: why not simply run the application in the SDK container? The reason is probably that it is significantly larger and heavier than any runtime container (as it has the runtime environment plus build tools).
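You can check the size difference yourself (sizes vary by tag and platform, so no numbers are quoted here):
docker pull microsoft/dotnet:2.1-sdk
docker pull microsoft/dotnet:2.1-aspnetcore-runtime
docker images microsoft/dotnet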
We ended up just using a single-stage build, because we can build the app on our CI server. The CI server has the SDK installed, but our container just uses the runtime image. Building from the SDK image could be interesting if you don't have the dotnet SDK available, or if you want to build inside a container.
Here is what my Dockerfile looks like for 2.1:
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
EXPOSE 80
COPY Microservices/FooService/Foo/obj/Docker/publish .
ENTRYPOINT ["dotnet", "Foo.dll"]