I have been updating our .NET Core application from 2.0 to 2.1, and I see that multi-stage Dockerfiles are now suggested. I read that it is more efficient to build your app inside a container with the .NET SDK and then copy the output to a different container with just the runtime. https://github.com/dotnet/announcements/issues/18
I am wondering why this would be better than how we did it with our 2.0 images, which was to run dotnet publish MySolution.sln -c Release -o ./obj/Docker/publish on our build server (which has the .NET Core 2.1 SDK installed) and then do a single-stage build that copies the build output into an image.
A multi-stage build is claimed to be simpler, but it seems more complicated to me to copy over everything you need for a build and then copy the results to another container.
Here is what the multi-stage Dockerfile looks like:
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY Setting/myproject.csproj Setting/
RUN dotnet restore Setting/myproject.csproj
COPY . .
WORKDIR /src/Setting
RUN dotnet build myproject.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish myproject.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "myproject.dll"]
versus a single-stage one:
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "Myproject.dll"]
Any thoughts?
I read that it is more efficient to build your app inside a container with the .NET SDK and then copy to a different container with just the runtime.
I believe this guidance already existed before .NET Core 2.1 in the official Docker documentation.
This is simply because you only need the SDK to build your application; if you have already built the application somewhere else, you can run it directly in a runtime container (not an SDK container). I did that a lot when no SDK was available for ARM processors: I had to build the application on my PC and then copy it to the target device. That was not easy, because I always ran into dependency problems that I had to solve manually.
Therefore, it is almost always recommended to build the application on the machine you want to deploy it on. This guarantees that your application is built against the exact software and hardware requirements of that environment.
Because of that, I would always build the application on the target deployment device (in an SDK container), and then copy the build output to a runtime container.
Now you may ask: why not simply run the application in the SDK container? The main reason is that the SDK container is significantly larger and heavier than any runtime container (it contains the runtime environment plus the build tools).
I ended up just using a single-stage build, because we can build the app on our CI server. The CI server has the SDK installed, but our container uses only the runtime image. Building from the SDK image could be useful if you don't have the .NET SDK available, or if you want to build inside a container.
Here is what my Dockerfile looks like for 2.1:
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
EXPOSE 80
COPY Microservices/FooService/Foo/obj/Docker/publish .
ENTRYPOINT ["dotnet", "Foo.dll"]
Related
In my opinion the autogenerated Dockerfile for a .NET Core web application is too large. Why did Microsoft decide to create it like this?
This is the Dockerfile autogenerated when we check "Add Docker support" during app creation:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster AS build
WORKDIR /src
COPY ["app/app.csproj", "app/"]
RUN dotnet restore "app/app.csproj"
COPY . .
WORKDIR "/src/app"
RUN dotnet build "app.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "app.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "app.dll"]
And in my opinion it could look like this:
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster as build
WORKDIR /src
COPY ["app/app.csproj", "app/"]
RUN dotnet restore "app/app.csproj"
COPY . .
WORKDIR "/src/app"
RUN dotnet build "app.csproj" -c Release -o /app/build
RUN dotnet publish "app.csproj" -c Release -o /app/publish
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "app.dll"]
Why did Microsoft decide to first pull aspnet:3.0-buster-slim just to expose ports, and only use it later as the final stage? It would be much shorter to pull this image as the last step, as in my example. Also, do we need two FROM lines for sdk:3.0-buster (the first named build, the second publish)? It's possible to chain multiple RUN commands one after another, as in my example.
Is there some technical reason why they decided to do it that way?
Thanks!
A Dockerfile is a series of steps used by the docker build . command. At a minimum, three steps are required:
FROM some-base-image
COPY some-code-or-local-content
CMD the-entrypoint-command
As our application becomes more complex, additional steps are added, such as restoring packages and dependencies. Commands like the following are used for that:
RUN dotnet restore
-or-
RUN npm install
Or the like. As the Dockerfile grows, the image build time increases, and so does the size of the image itself.
Docker build steps generate intermediate images and cache them. Notice the output below:
$ docker build .
Sending build context to Docker daemon 310.7MB
Step 1/9 : FROM node:alpine
---> 4c6406de22fd
Step 2/9 : WORKDIR /app
---> Using cache
---> a6d9fba502f3
Step 3/9 : COPY ./package.json ./
---> dc39d95064cf
Step 4/9 : RUN npm install
---> Running in 7ccc864c268c
Notice how step 2 says Using cache: Docker realized that everything up to step 2 is identical to the previous build, so it is safe to reuse the layer cached by the previous build.
One of the focuses of this template is building efficient images.
Efficiency could be achieved in two ways:
reducing the time taken to build the image
reducing the size of the final image
For #1, caching of layers from previous builds is leveraged. Structuring the Dockerfile so that as many steps as possible can reuse earlier cached layers makes the build process faster, but relying on the cache is only possible if the Dockerfile is written efficiently.
By separating the build and publish stages, the docker build . command is better able to reuse the cache from previous steps in the Dockerfile.
For #2, avoid installing packages that are not required, for example.
Refer to the Docker documentation for more details here.
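The csproj-first pattern in the template above is exactly how #1 is achieved: copying only the project file and restoring before copying the rest of the source means the expensive restore layer is reused whenever only source files change. A minimal sketch of the idea (image tag and project name follow the example above):

```dockerfile
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster AS build
WORKDIR /src

# This layer, and the restore layer below it, stay cached and are
# reused on every build in which app.csproj has not changed.
COPY ["app/app.csproj", "app/"]
RUN dotnet restore "app/app.csproj"

# Source edits only invalidate the cache from this point onward,
# so restore is skipped on most rebuilds.
COPY . .
RUN dotnet publish "app/app.csproj" -c Release -o /app/publish
```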
By default, Visual Studio uses the Fast mode build, which actually builds your projects on the local machine and then shares the output folder with the container via volume mounting.
In Fast mode, Visual Studio calls docker build with an argument that tells Docker to build only the base stage. Visual Studio handles the rest of the process without regard to the contents of the Dockerfile. So, when you modify your Dockerfile, such as to customize the container environment or install additional dependencies, you should put your modifications in the first stage. Any custom steps placed in the Dockerfile's build, publish, or final stages will not be executed.
Thus, the answer to your question
Why Microsoft decided to first - get the aspnet:3.0-buster-slim just to expose ports and use them later as final?
It's necessary in order to support the optimized Fast mode build and debugging in Visual Studio.
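In practice this means that any customization, such as installing an extra native dependency, has to live in the base stage for Fast mode to pick it up. A hedged sketch (the libgdiplus package is just an illustrative example):

```dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

# Custom steps go here, in the first (base) stage, so that Visual
# Studio's Fast mode build and debugging still see them.
RUN apt-get update \
    && apt-get install -y --no-install-recommends libgdiplus \
    && rm -rf /var/lib/apt/lists/*
```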
I am trying to containerize my ASP.NET Core application. As I use the Microsoft secret store for saving credentials, I need to run a dotnet user-secrets command within my container. The application needs to read these credentials when starting, so I have to run the command prior to starting my application. When trying to do that in my Dockerfile, I get the error:
---> Running in 90f974a06d83
Could not find a MSBuild project file in '/app'. Specify which project to use with the --project option.
I tried building my application first and then building a container with the already-built dll, but that gave me the same error. I also tried connecting to the container with ENTRYPOINT ["/bin/bash"] and looking around inside it. It seems that the /app folder that gets created does not include the .csproj files. I'm not sure whether that could be the cause.
My Dockerfile
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["Joinme.Facade/Joinme.Facade.csproj", "Joinme.Facade/"]
COPY ["Joinme.Domain/Joinme.Domain.csproj", "Joinme.Domain/"]
COPY ["Joinme.Business/Joinme.Business.csproj", "Joinme.Business/"]
COPY ["Joinme.Persistence/Joinme.Persistence.csproj", "Joinme.Persistence/"]
COPY ["Joinme.Integration/Joinme.Integration.csproj", "Joinme.Integration/"]
RUN dotnet restore "Joinme.Facade/Joinme.Facade.csproj"
COPY . .
WORKDIR "/src/Joinme.Facade"
RUN dotnet build "Joinme.Facade.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Joinme.Facade.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
RUN dotnet user-secrets set "jwt:secret" "some_password"
ENTRYPOINT ["dotnet", "Joinme.Facade.dll"]
My expected results are that the secret gets set, so I can start the container without it crashing.
Plain and simple: the operation is failing because, at this stage, there are no *.csproj files, which the user-secrets command requires. However, this is not what you should be doing anyway, for a few reasons:
User secrets are not for production. You can just as easily, or in fact more easily, set an environment variable here instead, which doesn't require dotnet or the SDK (note that in environment variable names, .NET maps a double underscore __ to the : configuration separator):
ENV jwt__secret=some_password
You should not actually be storing secrets in your Dockerfile, as that goes into your source control, and is exposed as plain text. Use Docker secrets, or an external provider like Azure Key Vault.
You don't want to build your final image based on the SDK anyway. That makes your container image huge, which means both longer transfer times to/from the container registry and higher storage/bandwidth costs. Your final image should be based on the runtime, or even something like alpine if you publish self-contained (i.e. keep it as small as possible).
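As a sketch of the environment-variable alternative, the secret can be injected when the container starts instead of being baked into the image; here via a hypothetical docker-compose file (the service and image names are assumptions):

```yaml
version: "3.7"
services:
  facade:
    image: joinme-facade        # hypothetical image name
    ports:
      - "80:80"
    environment:
      # Supplied from the host environment at run time; .NET maps the
      # double underscore to the ':' configuration separator.
      - jwt__secret=${JWT_SECRET}
```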
I'm pretty new to Docker, and I was wondering:
When you are working with many micro-services (50+), is it still relevant to use the runtime image versus the SDK image?
In order to use the runtime image, I need to do a self-contained publish, which is around 100 MB bigger.
With 50 micro-services, that's 5 GB of data for self-contained apps.
Is it worth using the runtime image in this case?
The runtime image contains the runtime, so you don't need to publish self-contained. The SDK is only required if you need to build. The runtime has everything necessary to, you know, run. If you're publishing self-contained, you'd only need the OS, so your base image would just be alpine or something, not ASP.NET Core at all (because ASP.NET Core would be contained in the app).
Then, Docker has staged builds. As such, the typical way to do this is to build and publish all in the image, just in different stages. The final image is based on the last stage, so that is where you'd reference the runtime image. For example:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["MyProject/MyProject.csproj", "MyProject/"]
RUN dotnet restore "MyProject/MyProject.csproj"
COPY . .
WORKDIR "/src/MyProject"
RUN dotnet build "MyProject.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "MyProject.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyProject.dll"]
Each FROM line indicates a new stage. The only thing that survives is the final stage, where all that's being done is the published files are copied in and the app is run, giving you an optimally sized image. However, using the staged build, all the logic necessary to build and publish your app is contained in the image.
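If you did go the self-contained route mentioned above, the final stage could be based on a bare distribution image instead of the ASP.NET Core runtime. This is a hedged sketch, not a drop-in file: the project name, runtime identifier, and the native packages required on Alpine are assumptions that would need verifying for your app:

```dockerfile
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY . .
# A self-contained publish bundles the runtime into the output,
# so no .NET base image is needed at run time.
RUN dotnet publish "MyProject/MyProject.csproj" -c Release \
    -r linux-musl-x64 --self-contained true -o /app/publish

FROM alpine:3.9
# Native libraries .NET Core typically needs on musl-based systems.
RUN apk add --no-cache libstdc++ libintl icu-libs
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["./MyProject"]
```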
I have a .NET Core 2.1 console app. I want to run this console app in a Docker image. I'm new to Docker and just trying to figure it out.
At this time, I have a Dockerfile, which was inspired by Microsoft's example. My file actually looks like this:
FROM microsoft/dotnet:2.1-runtime-nanoserver-1709 AS base
WORKDIR /app
FROM microsoft/dotnet:2.1-sdk-nanoserver-1709 AS build
WORKDIR /src
COPY MyConsoleApp/MyConsoleApp.csproj MyConsoleApp/
RUN dotnet restore MyConsoleApp/MyConsoleApp.csproj
COPY . .
WORKDIR /src/MyConsoleApp
RUN dotnet build MyConsoleApp.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish MyConsoleApp.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyConsoleApp.exe"]
My question is: what is the difference between microsoft/dotnet:2.1-runtime-nanoserver-1709, microsoft/dotnet:2.1-sdk, and microsoft/dotnet:2.1-runtime? Which one should I use for my .NET Core 2.1 console app? I'm confused as to whether I should a) build my console app and then deploy it to a Docker image, or b) build a Docker image, get the code, and build the console app on the image itself. Any help is appreciated.
For anyone interested in actual Dockerfile reference:
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /app
COPY <your app>.csproj .
RUN dotnet restore <your app>.csproj
COPY . .
RUN dotnet publish -c Release -o out
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=build /app/out ./
ENTRYPOINT ["dotnet", "<your app>.dll"]
This is a question less about Docker itself and more about the .NET Docker images.
You can go to https://hub.docker.com/r/microsoft/dotnet/, click on a particular image, and find its Dockerfile to understand exactly what software is installed in that image; comparing the Dockerfiles shows the differences between them.
Regarding the second question, it is better to "build a docker image, get the code, and build the console app on the image itself". So you build the image, expose a port, mount your code as a volume, and your code executes inside the container with the help of the software installed in the container.
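The mount-and-run workflow described here could be sketched with a docker-compose file (the paths and project name are assumptions):

```yaml
version: "3.7"
services:
  console:
    image: microsoft/dotnet:2.1-sdk   # SDK image, since it both builds and runs
    working_dir: /src
    volumes:
      - ./MyConsoleApp:/src           # mount local source into the container
    command: dotnet run
```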
microsoft/dotnet:2.1-runtime-nanoserver-1709 - This image is used to set up the .NET runtime, under which any .NET app runs.
microsoft/dotnet:2.1-sdk-nanoserver-1709 - This SDK image is used to compile your .NET Core app.
Refer to this blog for how to create and run a .NET Core console app in Docker:
My answer to the first question:
microsoft/dotnet:2.1-runtime-nanoserver-1709 is a Docker image on which you can run .NET Core apps.
microsoft/dotnet:2.1-sdk-nanoserver-1709 is a Docker image with which you build apps; you can also run apps with it.
And my answer to the second question:
If you just want to run the app in Docker, build your app with Visual Studio on your local machine (the easiest way to build it), then use microsoft/dotnet:2.1-runtime-nanoserver-1709 to build a Docker image.
I put an example on GitHub of how to do this for a .NET Core Web API application, so it should be pretty much the same for a console application.
My Dockerfile can be found here:
https://github.com/MikeyFriedChicken/MikeyFriedChicken.DockerWebAPI.Example
This is doing pretty much the same as yours.
Just to clarify the previous answers: what your Dockerfile is doing is creating 'temporary' images that have the ability to build your software. Once everything is built, those images simply get scrapped.
If you look at the last lines of your Dockerfile, you can see it copies the output from the build stages into your final image, which is based on microsoft/dotnet:2.1-runtime-nanoserver-1709 and has just enough libraries and dependencies to run your .NET Core application.
I'm a little confused about how to build the image for an ASP.NET Core project for a production environment, because it should use the aspnetcore image instead of aspnetcore-build. Can somebody explain the best way to build and push an image for a production environment, please?
I built the solution in Release mode, but I'm not sure whether VS created the image using the aspnetcore image, since when I published, Docker said it was mounting the image from the aspnetcore-build image.
I already figured this out. Visual Studio uses a Docker multi-stage build when it creates the Dockerfile, so when it builds the Docker image, in the end it's using the aspnetcore image: the last stage declared in the Dockerfile uses the base stage, which is actually the aspnetcore image. This is an example:
FROM microsoft/aspnetcore:2.0.5 AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/aspnetcore-build:2.0.5-2.1.4 AS build
WORKDIR /src
COPY . .
RUN dotnet restore -nowarn:msb3202,nu1503
WORKDIR /src/src/Services/Catalog/Catalog.API
RUN dotnet build --no-restore -c Release -o /app
FROM build AS publish
RUN dotnet publish --no-restore -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Catalog.API.dll"]
So, the recommended way to build and publish the Docker image is from our CI/CD process and not manually.