I am trying to create a Docker image where you can build/publish but also run a .NET Core website, and I am having a real issue with
the image mcr.microsoft.com/dotnet/sdk:5.0.
If you build inside a container with that image, it publishes with no problem, but when you run the app you get issues.
By contrast, if you run with
mcr.microsoft.com/dotnet/aspnet:5.0
then you get no issues; that runs the app fine. But of course you can't use that image to publish or build, as it is runtime only.
I have done an experiment: running a .NET Core web app using identical source code and Dockerfiles, except one uses the SDK image and the other uses the aspnet image.
Both Dockerfiles build an image using pre-built DLLs from a publish folder that was generated beforehand. They look like this:
Dockerfile 1
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
#copy dlls from publish into working directory
COPY /publish .
ENTRYPOINT ["dotnet", "/app/Whipover.dll"]
Dockerfile 2
FROM mcr.microsoft.com/dotnet/sdk:5.0
WORKDIR /app
#copy dlls from publish into working directory
COPY /publish .
ENTRYPOINT ["dotnet", "/app/Whipover.dll"]
The Dockerfile 1 image runs the website with no problem.
The Dockerfile 2 image gives the error:
"Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address'."
Do we know why that is? I thought the SDK image was supposed to be able to do everything the aspnet image can do.
As for the error, I could modify Dockerfile 2 to include
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80 5000
.. which runs the website, but then it can't retrieve the contents of wwwroot, so that would be another issue. Also, I don't see why I need to manually set the URLs and expose ports when Dockerfile 1 had no problem at all?
Thanks
The solution is to have a multi-stage Dockerfile where you build with the SDK image and then run with a separate runtime image.
I was under the impression the SDK image could do both. (The difference in behaviour comes from the fact that the aspnet:5.0 image sets ASPNETCORE_URLS=http://+:80 for you, so Dockerfile 1 binds without any extra configuration, while the SDK image leaves Kestrel on its default of localhost:5000, which cannot be bound inside the container the same way.)
I was looking at an out-of-date course on Pluralsight. The best reference to use is
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/docker/building-net-docker-images?view=aspnetcore-5.0
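For reference, a minimal multi-stage Dockerfile along those lines could look like the sketch below. The project name Whipover is taken from the question; the .csproj name and the assumption that it sits at the context root are mine:

```dockerfile
# Build stage: the SDK image restores and publishes the app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish "Whipover.csproj" -c Release -o /app/publish

# Run stage: only the published output lands in the lightweight runtime image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "Whipover.dll"]
```

The final image is based on aspnet:5.0, so it inherits that image's ASPNETCORE_URLS setting and binds without any manual ENV/EXPOSE workaround.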
Related
If I have a simple Dockerfile with only COPY commands and no RUN commands, how can I build it and push it to a repo without running any emulation?
I am trying to build on Bitbucket, which has these restrictions, but in general, in any environment where emulation can't be run for security reasons, how can you build multi-arch Dockerfiles that only need to copy files to create another layer on top of an existing multi-arch image? This shouldn't need to actually run a container in order to build the image, right? How can a simple build like this be run in a way that is guaranteed not to launch an emulated container?
Edit: Here is an example file. This uses dotnet, but the same idea should apply to any language where the deployed app is not platform specific. I think all of these Docker directives should just modify metadata or directly write an image layer, not actually require running a container.
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY app .
EXPOSE 80
ENTRYPOINT ["dotnet", "myApp.dll"]
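One way this can work in practice, sketched under the assumption that Docker BuildKit/buildx is available in the CI environment (the registry and tag names below are placeholders): because the Dockerfile contains no RUN instructions, buildx can assemble each platform's layers and the multi-arch manifest without ever starting a container, emulated or otherwise.

```shell
# Create and select a buildx builder (one-time setup).
docker buildx create --name multiarch --use

# Build both platform variants and push the manifest list to the registry.
# With only FROM/COPY/EXPOSE/ENTRYPOINT directives present, BuildKit just
# writes layers and metadata; no container is executed during the build.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .
```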
I was trying to get a minimal example Go app running inside a Docker container.
But I kept getting exec /app: no such file or directory when running the container.
I checked and double-checked all my paths in the image where I built and copied my application data, and even looked inside the container with an interactive shell to verify my app was there, but nothing worked.
My Dockerfile:
# syntax=docker/dockerfile:1
# BUILD-STAGE
FROM golang:1.17-alpine as build
WORKDIR /go/app
COPY . .
RUN go mod download
RUN go build -o /go/bin/app
# RUN-STAGE
FROM gcr.io/distroless/static-debian11
COPY --from=build /go/bin/app .
EXPOSE 8080
CMD ["./app"]
After several hours of trial and error I finally tried adding CGO_ENABLED=0 to my go build command... and it worked!
My question now is... why exactly does this happen?
There was no error when I built my image with CGO implicitly enabled, and I even verified that the binary was copied into my second stage!
Why does the runtime say there is no such file when the binary was built with CGO, but find it easily when it was built with CGO disabled?
The image that you are using does not contain libc, which your Go app is linked against when built with CGO_ENABLED=1.
As suggested, you can use gcr.io/distroless/base instead.
There was no error when building your app because golang:1.17-alpine contains musl libc (something like glibc but smaller).
You then tried running an app that requires libc in an environment that no longer has it, hence the no such file or directory error. (The confusing message refers to the missing dynamic linker the binary requests at startup, not to the binary itself, which is why the file appears to be there when you look inside the container.)
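Concretely, the fix described in the question amounts to a one-line change in the build stage; everything else in the Dockerfile stays the same:

```dockerfile
# Disable cgo so the binary is statically linked and does not depend
# on a libc/dynamic linker that distroless/static does not ship.
RUN CGO_ENABLED=0 go build -o /go/bin/app
```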
I am new to Docker. I have the following directory structure for my project
docker-compose.yml
|-services
|-orders
|-Dockerfile
I am using the standard ASP.NET Core Dockerfile, which has the following content:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["Services/Orders/Orders.csproj", "Services/Orders/"]
RUN dotnet restore "Services/Orders/Orders.csproj"
COPY . .
WORKDIR "/src/Services/Orders"
RUN dotnet build "Orders.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Orders.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Orders.dll"]
My docker-compose.yml file has
# Please refer https://aka.ms/HTTPSinContainer on how to setup an https developer certificate for your ASP .NET Core service.
version: "3.4"
services:
orders-api:
image: orders-api
build:
context: .
dockerfile: Services/Orders/Dockerfile
ports:
- "2222:80"
I have some confusion with these two files
Question 1: What is the use of WORKDIR /app on line number 2?
My understanding is that we are using a base image that we can extend. So when we import the base image on line 1 and then set the WORKDIR and port on lines 2 and 3, will they be used by the following commands that use this image?
Question 2: Why are we setting WORKDIR /src for the SDK image and not WORKDIR /app?
Question 3: Are paths in COPY commands relative to the Dockerfile or to the docker-compose.yml file?
In the line COPY ["Services/Orders/Orders.csproj", "Services/Orders/"], the path we are specifying seems to be relative to the docker-compose.yml file and not the Dockerfile, which is nested further down in the folders. Does this mean that paths in a Dockerfile need to be relative to docker-compose.yml? I am asking because I think that if I run the docker build command using this Dockerfile directly, I will get an error, since the path in the COPY command will not be valid.
For anyone coming to this thread later and facing a similar issue, I am going to share what I have learned so far, based on Leo's answer and other sources.
Docker has a feature called multi-stage builds. This allows us to define multiple build stages and use them.
An ASP.NET Core app can be built using different images. The SDK image is larger but gives us the additional tools to build and debug our code in a development environment.
In production, however, we do not need the SDK; we only want the lightweight runtime. We can use Docker's multi-stage builds feature to create our final container and keep it lightweight.
Coming to the Dockerfile…
Each section that starts with the FROM keyword defines a stage. In my original Dockerfile, you can see I am creating 4 stages: base, build, publish, and final. We can name a stage and then use that name to create a new stage based on the first one.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
The above code creates a stage called "base" from the aspnet:3.1 image, which contains the lightweight runtime and will be used to host our final application.
We then set the working directory to the /app folder so that when we use this stage later, commands such as COPY will run in this folder.
Next, we just expose port 80, which is the default port for the web server.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["Services/Orders/Orders.csproj", "Services/Orders/"]
RUN dotnet restore "Services/Orders/Orders.csproj"
COPY . .
WORKDIR "/src/Services/Orders"
RUN dotnet build "Orders.csproj" -c Release -o /app/build
Next, we pull the SDK image, which we will use to build the app, and call this stage "build".
We set the working directory that will be used by the following COPY commands.
The COPY command copies files from the build context into the specified location in the container. So essentially I am copying my Orders.csproj file into the container and then running dotnet restore to restore all the NuGet packages required for this project.
Copying only the .csproj or .sln file and restoring NuGet packages before copying the entire code is more efficient, as it takes advantage of layer caching, as mentioned here. It is a widely adopted practice that I didn't know about; I had been wondering why we can't simply copy everything and just run the dotnet restore command.
Once we have restored the NuGet packages, we can copy all the files and then run the dotnet build command to build our project.
FROM build AS publish
RUN dotnet publish "Orders.csproj" -c Release -o /app/publish
Next, we create a new stage called "publish" based on our previous stage "build", and simply publish our project with the Release configuration into the /app/publish directory.
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Orders.dll"]
The final stage uses "base" (our lightweight runtime image), copies files from the publish stage's /app/publish location into the working directory, and then sets the entry point of the application.
So we are using 2 separate images: the SDK image to build our app, and the aspnet image to host it in the container. The important thing to note is that the final image will only contain the aspnet runtime and will therefore be smaller in size.
All these things are documented and can be found by searching, but I struggled because the information was scattered. Hopefully this will save some time for anyone who is new to ASP.NET Core and Docker.
Question 1: What is the use of WORKDIR /app on line number 2?
This defines your current working directory. Following commands will by default be run from that location and relative paths will start there.
Example with the following file structure:
/app/
- README.md
- folder1/
- some.txt
WORKDIR /app
RUN ls
will print
folder1
README.md
and
WORKDIR /app/folder1
RUN ls
will print
some.txt
Question 2: Why are we setting WORKDIR /src for the SDK image and not WORKDIR /app?
See the answer to Question 1. The following COPY and dotnet restore command are executed in the /src source directory. (The build is done from the /src directory and the created artifact in /app/publish is later copied to the /app directory in the last stage, to be executed from that /app directory.)
Question 3: Are paths in copy commands relevant to Dockerfile or Docker-compose.yml file?
COPY takes two paths. The first references the source (a file or folder from the context the Docker image is built from), and the second references the destination (a file or folder in your resulting Docker image). Hence these paths are usually only specific to the Dockerfile and independent of your docker-compose project.
However, in your case the context of your Docker image build is defined in the docker-compose.yml here:
build:
context: .
dockerfile: Services/Orders/Dockerfile
and therefore the context of your Docker image build is the directory where your docker-compose.yml is located. You could build the same image by just running docker build -f Services/Orders/Dockerfile . in that folder. (It is not docker-compose specific.)
Therefore you should find Orders.csproj in ./Services/Orders/ starting from the directory your docker-compose.yml is located in. This file is then copied to /src/Services/Orders/Orders.csproj in your second build stage. (The /src can be omitted in the COPY statement, as it is a relative path starting from your current working directory, which is defined in the line above. See Question 1.)
The reason we use multi-stage builds, copying files between images instead of just carrying out all steps sequentially, is mainly to keep the final image size small. Even where disk space is adequate, the following factors are relevant:
Build/deploy times. Larger images mean longer continuous integration runs and more time spent waiting for these operations to complete.
Start-up time. When running your application in production, the longer the image download takes, the longer it is before the new container is up and running.
Therefore, using a multi-stage approach as below, we are able to omit the SDK, which is much bigger and is only needed for building, not running, the application.
Notice that we use ENTRYPOINT in the second image: dotnet run is only available in the SDK, so we make a minor adjustment to run the .dll directly and get the same outcome.
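A multi-stage sketch consistent with the images used in the question (the single COPY . . and the publish path are simplifications on my part) would be:

```dockerfile
# Build inside the full SDK image, which is only needed at build time
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish "Services/Orders/Orders.csproj" -c Release -o /app/publish

# Run from the much smaller runtime-only image; the SDK never ships
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build /app/publish .
# "dotnet run" is not available here, so execute the published .dll directly
ENTRYPOINT ["dotnet", "Orders.dll"]
```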
I am trying to containerize my ASP.NET Core application. As I use the Microsoft secret store for saving credentials, I need to run a dotnet user-secrets command within my container. The application needs to read these credentials when starting, so I have to run the command prior to starting the application. When trying to do that in my Dockerfile I get the error:
---> Running in 90f974a06d83
Could not find a MSBuild project file in '/app'. Specify which project to use with the --project option.
I tried building my application first and then building a container with the already-built dll, but that gave me the same error. I also tried connecting to the container with ENTRYPOINT ["/bin/bash"] and looking around inside it. It seems that the /app folder that gets created does not include the .csproj files. I'm not sure if that could be the cause.
My Dockerfile
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["Joinme.Facade/Joinme.Facade.csproj", "Joinme.Facade/"]
COPY ["Joinme.Domain/Joinme.Domain.csproj", "Joinme.Domain/"]
COPY ["Joinme.Business/Joinme.Business.csproj", "Joinme.Business/"]
COPY ["Joinme.Persistence/Joinme.Persistence.csproj", "Joinme.Persistence/"]
COPY ["Joinme.Integration/Joinme.Integration.csproj", "Joinme.Integration/"]
RUN dotnet restore "Joinme.Facade/Joinme.Facade.csproj"
COPY . .
WORKDIR "/src/Joinme.Facade"
RUN dotnet build "Joinme.Facade.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Joinme.Facade.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
RUN dotnet user-secrets set "jwt:secret" "some_password"
ENTRYPOINT ["dotnet", "Joinme.Facade.dll"]
My expected result is that the secret gets set, so I can start the container without it crashing.
Plain and simple: the operation is failing because at this stage there is no *.csproj file, which the user-secrets command requires. However, this is not what you should be doing anyway, for a few reasons:
User secrets are not for production. You can just as easily, in fact more easily, set an environment variable here instead, which doesn't require dotnet or the SDK. (In environment variable names, ASP.NET Core uses a double underscore as the hierarchy separator in place of the colon.)
ENV jwt__secret=some_password
You should not actually be storing secrets in your Dockerfile, as that goes into your source control, and is exposed as plain text. Use Docker secrets, or an external provider like Azure Key Vault.
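As an illustration of the Docker secrets route, a docker-compose fragment could mount the value from a local file instead of baking it into the image. The service, image, secret, and file names below are all placeholders:

```yaml
# docker-compose.yml fragment: the secret is mounted read-only at
# /run/secrets/jwt_secret inside the container, never stored in a layer.
services:
  web:
    image: joinme-facade
    secrets:
      - jwt_secret
secrets:
  jwt_secret:
    file: ./jwt_secret.txt   # keep this file out of source control
```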
You don't want to build your final image from the SDK anyway. That makes your container image huge, which means both longer transfer times to/from the container registry and higher storage/bandwidth costs. Your final image should be based on the runtime, or even something like alpine if you publish self-contained (i.e. keep it as small as possible).
I have a .NET Core 2.1 console app. I want to run this console app in a Docker image. I'm new to Docker and just trying to figure it out.
At this time, I have a Dockerfile, which was inspired from Microsoft's Example. My file actually looks like this:
FROM microsoft/dotnet:2.1-runtime-nanoserver-1709 AS base
WORKDIR /app
FROM microsoft/dotnet:2.1-sdk-nanoserver-1709 AS build
WORKDIR /src
COPY MyConsoleApp/MyConsoleApp.csproj MyConsoleApp/
RUN dotnet restore MyConsoleApp/MyConsoleApp.csproj
COPY . .
WORKDIR /src/MyConsoleApp
RUN dotnet build MyConsoleApp.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish MyConsoleApp.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyConsoleApp.exe"]
My question is: what is the difference between microsoft/dotnet:2.1-runtime-nanoserver-1709, microsoft/dotnet:2.1-sdk, and microsoft/dotnet:2.1-runtime? Which one should I use for my .NET Core 2.1 console app? I'm confused as to whether I should a) build my console app and then deploy it to a Docker image, or b) build a Docker image, get the code, and build the console app on the image itself. Any help is appreciated.
For anyone interested in actual Dockerfile reference:
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /app
COPY <your app>.csproj .
RUN dotnet restore <your app>.csproj
COPY . .
RUN dotnet publish -c Release -o out
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=build /app/out ./
ENTRYPOINT ["dotnet", "<your app>.dll"]
This question is less about Docker itself and more about the .NET Docker images.
You can go to https://hub.docker.com/r/microsoft/dotnet/, click on a particular image, find its Dockerfile, and see exactly what software is built into that image, and from that the differences between the various Dockerfiles.
Regarding the second question, it is better to "build a docker image, get the code, and build the console app on the image itself". So you build the image, expose a port, mount your code as a volume, and your code executes inside the container with the help of the software installed in the container.
microsoft/dotnet:2.1-runtime-nanoserver-1709 - This image is used to set up the .NET runtime, under which any .NET app runs.
microsoft/dotnet:2.1-sdk-nanoserver-1709 - This is the SDK image, used to compile your .NET Core app.
Refer to this blog for how to create and run a .NET Core console app in Docker:
My answer to the first question is:
microsoft/dotnet:2.1-runtime-nanoserver-1709 is a Docker image with which you can run .NET Core apps,
and microsoft/dotnet:2.1-sdk-nanoserver-1709 is a Docker image with which you build apps; you can also run apps with it.
And my answer to the second question is:
If you just want to run the app in Docker, build your app with Visual Studio on your local machine (this will be the easiest way to build it), then use microsoft/dotnet:2.1-runtime-nanoserver-1709 to build a Docker image.
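A runtime-only Dockerfile for that approach might look like the sketch below; the publish path and dll name are assumptions based on a default .NET Core 2.1 framework-dependent publish:

```dockerfile
FROM microsoft/dotnet:2.1-runtime-nanoserver-1709
WORKDIR /app
# Copy the output of a local "dotnet publish -c Release" into the image
COPY bin/Release/netcoreapp2.1/publish .
ENTRYPOINT ["dotnet", "MyConsoleApp.dll"]
```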
I put an example on GitHub of how to do this for a .NET Core Web API application, and it should be pretty much the same for a console application.
My docker file looks like:
https://github.com/MikeyFriedChicken/MikeyFriedChicken.DockerWebAPI.Example
This is doing pretty much the same as yours.
Just to clarify the previous answers: what your Dockerfile is doing is creating 'temporary' images which have the ability to build your software. Once everything is built, these intermediate images are simply discarded.
If you look at the last lines of your Dockerfile, you can see it copies the output from the build images into your final image, which is based on microsoft/dotnet:2.1-runtime-nanoserver-1709 and has just enough libraries and dependencies to run your .NET Core application.