How to create a Docker container using a project solution where lib projects are located one level higher than the build context - docker

I have a VS2017 (v5.18.0) solution which contains a .NET Core 2.0 console application "ReferenceGenerator" as the "startup" application. The solution also contains two .NET Core 2.0 library projects, FwCore and LibReferenceGenerator, which are homegrown libs. I have added Docker support (Linux), so all the files needed to create a Docker application were added. I can debug the application in "docker-compose" mode with Docker for Windows in Linux mode, and the application works fine. If I try to build a release version, I get an error that a COPY uses an illegal path. The Dockerfile looks like this:
FROM microsoft/dotnet:2.0-runtime AS base
WORKDIR /app
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /src
COPY ReferenceGenerator/ReferenceGenerator.csproj ReferenceGenerator/
COPY ../LibReferenceGenerator/LibReferenceGenerator.csproj ../LibReferenceGenerator/
COPY ../FwCore/FwCore/FwCore.csproj ../FwCore/FwCore/
RUN dotnet restore ReferenceGenerator/ReferenceGenerator.csproj
COPY . .
WORKDIR /src/ReferenceGenerator
RUN dotnet build ReferenceGenerator.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish ReferenceGenerator.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "ReferenceGenerator.dll"]
The following line:
COPY ../LibReferenceGenerator/LibReferenceGenerator.csproj ../LibReferenceGenerator/
is causing the error:
Step 6/17 : COPY ../LibReferenceGenerator/LibReferenceGenerator.csproj ../LibReferenceGenerator/
1>Service 'referencegenerator' failed to build: COPY failed: Forbidden path outside the build context: ../LibReferenceGenerator/LibReferenceGenerator.csproj ()
I have read that relative paths are not allowed, so be it. But the compiler output is already complete in the bin directory of the ReferenceGenerator project. I already tried removing the two COPY lines referencing the libs, but then the build complains about the missing lib project files at the dotnet build stage.
Having some "homebuild" lib projects being included in an solution seems to me a very common situation. I am a newbee on docker containers and I have no idea how to fix this, anyone?
Additional info: my file structure looks like this:
/Production/ReferenceGenerator/ReferenceGenerator.sln
/Production/ReferenceGenerator/ReferenceGenerator/ReferenceGenerator.csproj
/Production/LibReferenceGenerator/LibReferenceGenerator.csproj
/Production/FwCore/FwCore/FwCore.csproj
/Production/ReferenceGenerator/ReferenceGenerator/Dockerfile
Please, anyone. The people who tried to help me have not succeeded in doing so. I'm completely stuck in development...

The answer is: there is no solution...
If you need libraries, you must include them as (private) NuGet packages.
It is not a neat solution, because while debugging you do not have the sources of your libraries available, but including libs from outside the build context is a no-go, as I learned researching the internet...
Also, in a micro-service environment, sharing code should be minimized to avoid teams breaking other teams' code. Sorry for all developers who would have liked a solution to this problem; again, besides a workaround using NuGet packages, there is none!

As the error says, you can't copy files that exist outside of the build context. When you run a command like docker image build ., that last argument (.) specifies the build context. That context is copied to the Docker engine for building. Files outside of that (e.g., ../LibReferenceGenerator/LibReferenceGenerator.csproj) simply don't exist.
So, for your example to work, you need to move your build context up one level so that LibReferenceGenerator and FwCore are inside it. Then make the source paths of your COPY instructions relative to that one-level-up context.
Note that the default location of the Dockerfile is a file named Dockerfile at the root of your build context. You'll need to either move your Dockerfile, or specify a custom path using the -f, --file option.
docker image build documentation
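For example, a minimal sketch based on the file structure above (paths and image tag are assumptions): build with /Production as the context and point -f at the existing Dockerfile:
cd /Production
docker image build -f ReferenceGenerator/ReferenceGenerator/Dockerfile -t referencegenerator .
The COPY lines inside the Dockerfile then become relative to /Production, e.g.:
COPY ReferenceGenerator/ReferenceGenerator/ReferenceGenerator.csproj ReferenceGenerator/ReferenceGenerator/
COPY LibReferenceGenerator/LibReferenceGenerator.csproj LibReferenceGenerator/
COPY FwCore/FwCore/FwCore.csproj FwCore/FwCore/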

You are missing one level in the copy.
It should be:
COPY ../../LibReferenceGenerator/LibReferenceGenerator.csproj ../LibReferenceGenerator/

Related

How can I cache a nix derivation's dependencies when built via Docker?

FROM nixos/nix@sha256:af330838e838cedea2355e7ca267280fc9dd68615888f4e20972ec51beb101d8
# FROM nixos/nix:2.3
ADD . /build
WORKDIR /build
RUN nix-build
ENTRYPOINT /build/result/bin/app
I have the very simple Dockerfile above that can successfully build my application. However, each time I modify any of the files within the application directory (.), it has to rebuild from scratch and download all the nix store dependencies.
Can I somehow grab a "list" of the store dependencies that get downloaded and then add them in at the beginning of the Dockerfile, for the purpose of caching them independently (with the ultimate goal of saving time + bandwidth)?
I'm aware I could build this Docker image using nix natively, which has its own caching functionality (well, the nix store), but I'm trying to have this buildable in a non-nix environment (hence using Docker).
I suggest splitting the source into two parts. The idea is to create a separate Docker layer with the dependencies only, which changes rarely:
FROM nixos/nix:2.3
ADD ./default.nix /build/
# if you have any other Nix files, put them in the ./nix subdirectory
ADD ./nix /build/nix
WORKDIR /build
# now let's download all the dependencies
RUN nix-shell --run exit
# At this point, Docker has cached all the dependencies. We can perform the build
ADD . /build
WORKDIR /build
RUN nix-build
ENTRYPOINT /build/result/bin/app
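With this layering, a rebuild after a source-only change reuses the cached dependency layer; only the layers from ADD . /build onward are rebuilt. For example (the image tag is just an illustration):
docker build -t app .
# edit some source files, then rebuild: the nix-shell layer comes from cache
docker build -t app .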

ASP.NET 4.7.2 - Docker - COPY Failed: CreateFile + The system cannot find the file specified

I have created a brand new ASP.NET 4.7.2 MVC Web Application using Visual Studio 2019. When initializing the solution in Visual Studio 2019, I selected the Docker support checkbox. This added a file named Dockerfile to my project that looks like this:
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
I attempt to build the Docker image using the build command from a Docker task in Azure DevOps. When I do this, I get the following:
Step 1/13 : FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019
---> [id1]
Step 2/13 : ARG source
---> Running in [id2]
Removing intermediate container [id2]
---> [id3]
Step 3/13 : WORKDIR /inetpub/wwwroot
---> Running in [id4]
Removing intermediate container [id4]
---> [id6]
Step 4/13 : COPY ${source:-obj/Docker/publish} .
COPY failed: CreateFile \\?\C:\ProgramData\docker\tmp\docker-build-[id5]\obj\Docker\publish: The system cannot find the path specified.
What is wrong? Why did the copy fail? Clearly it's because it "cannot find the file". But, I'm not sure where it should be. I did not change the Dockerfile. I simply tried to build a Docker image from a "Hello World" ASP.NET 4.7.2 web app in Azure DevOps. I'm trying to learn how to use Docker with Azure DevOps.
Thank you
COPY ${source:-obj/Docker/publish} .
First, we need to know what this line means. What it expresses is:
Hi Docker, here's a request. Please COPY the files you find at the path $source into the current directory . of the image. If $source is empty or does not exist, just fall back to the default path obj/Docker/publish and use that.
In short, this line is trying to copy your compiled ASP.NET application bits from obj/Docker/publish.
For the script format, you can refer to this guide.
So, in Azure DevOps, you need to add a build task that builds the solution before you build the image, and then a Copy Files task to copy the build output to where the Dockerfile expects it. Otherwise, the obj/Docker/publish folder won't exist.
Since your project is an ASP.NET project, you can follow this doc and this blog to get your project built.
Also, you can refer to this thread.
The solutions provided in both answers are correct; they just go in different directions. The first modifies the Dockerfile so it works in Azure DevOps without building the solution first. The second uses the tasks I mentioned above in Azure DevOps, so that the project can be built with docker build using the default Dockerfile.
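As a rough sketch of that second approach (solution name, project name and MSBuild properties are assumptions; adapt them to your pipeline), publish the application to obj/Docker/publish before running docker build so that the COPY source exists:
nuget restore MySolution.sln
msbuild MyWebApp/MyWebApp.csproj /p:Configuration=Release /p:DeployOnBuild=true /p:WebPublishMethod=FileSystem /p:PublishUrl=obj/Docker/publish
docker build -t mywebapp ./MyWebApp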

Build multiple docker images without building binaries in each Dockerfile

I have a .NET Core solution with 6 runnable applications (APIs) and multiple netstandard projects. In a build pipeline on Azure DevOps I need to create 6 Docker images and push them to the Azure Registry.
Right now I build image by image, and each of these 6 Dockerfiles builds the solution from scratch (restores, builds, publishes). This takes a few minutes per image, and the whole pipeline runs for almost 30 minutes.
My goal is to optimize the build time. I figured out two possible, independent ways of doing that:
remove restore and build, run just publish (because it restores references and does the same thing as build)
publish the code once (for all runnable applications) and in Dockerfiles just copy binaries, without building again
Are both ways doable? I can't figure out how to make the second one work: should I just run dotnet publish for each runnable application and then gather all the Dockerfiles within the folder with the binaries and run docker build? My concern is that I will need to copy the required .dll files into the image, but how do I choose which ones without explicitly specifying them?
EDIT:
I'm using Linux containers. I don't write my Dockerfiles - they are autogenerated by Visual Studio. I'll show you one example:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["Application.WebAPI/Application.WebAPI.csproj", "Application.WebAPI/"]
COPY ["Processing.Dependency/Processing.Dependency.csproj", "Processing.Dependency/"]
COPY ["Processing.QueryHandling/Processing.QueryHandling.csproj", "Processing.QueryHandling/"]
COPY ["Model.ViewModels/Model.ViewModels.csproj", "Model.ViewModels/"]
COPY ["Core.Infrastructure/Core.Infrastructure.csproj", "Core.Infrastructure/"]
COPY ["Model.Values/Model.Values.csproj", "Model.Values/"]
COPY ["Sql.Business/Sql.Business.csproj", "Sql.Business/"]
COPY ["Model.Events/Model.Events.csproj", "Model.Events/"]
COPY ["Model.Messages/Model.Messages.csproj", "Model.Messages/"]
COPY ["Model.Commands/Model.Commands.csproj", "Model.Commands/"]
COPY ["Sql.Common/Sql.Common.csproj", "Sql.Common/"]
COPY ["Model.Business/Model.Business.csproj", "Model.Business/"]
COPY ["Processing.MessageBus/Processing.MessageBus.csproj", "Processing.MessageBus/"]
COPY ["Processing.CommandHandling/Processing.CommandHandling.csproj", "Processing.CommandHandling/"]
COPY ["Processing.EventHandling/Processing.EventHandling.csproj", "Processing.EventHandling/"]
COPY ["Sql.System/Sql.System.csproj", "Sql.System/"]
COPY ["Application.Common/Application.Common.csproj", "Application.Common/"]
RUN dotnet restore "Application.WebAPI/Application.WebAPI.csproj"
COPY . .
WORKDIR "/src/Application.WebAPI"
RUN dotnet build "Application.WebAPI.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Application.WebAPI.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Application.WebApi.dll"]
One more thing: the problem is that Azure DevOps has a job which builds an image, and I just copied this job 6 times, pointing each copy at a different Dockerfile. That's why they don't reuse anything; I would love to change that so they are based on the same binaries. Here are the steps in Azure DevOps:
Get sources
Build and push image no. 1
Build and push image no. 2
Build and push image no. 3
Build and push image no. 4
Build and push image no. 5
Build and push image no. 6
Every single 'Build and push image' does:
dotnet restore
dotnet build
dotnet publish
I want to get rid of this overhead - is it possible?
It's hard to say without seeing your Dockerfiles, but you probably are making some mistakes that are adding time to the image build. For example, each command in a Dockerfile results in a layer. Docker caches these layers and only rebuilds the layer if it or previous layers have changed.
A very common mistake people make is to copy their entire project with all the files within first, and then run dotnet restore. When you do that, any change to any file invalidates that copy layer and thus also the dotnet restore layer, meaning that you have to restore packages every single build. The only thing necessary for the dotnet restore is the project file(s), so if you copy just those, run dotnet restore, and then copy all the files, those layers will be cached, unless the project file itself changes. Since that normally only happens when you change packages (add, update, remove, etc.), most of the time, you will not have to repeat the restore step, and the build will go much quicker.
Another issue can occur when you're using npm and Linux images on Windows. This one bit me personally. In order to support Linux images, Docker uses a Linux VM (MobyLinux). At the start of a build, Docker first lifts the entire filesystem context (i.e. where you run the docker command) into the MobyLinux VM, as all the Dockerfile commands will actually be run in the VM, and thus the files need to reside there. If you have a node_modules directory, it can take a significant amount of time to move all that over. You can solve this by adding node_modules to your .dockerignore file.
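For instance, a minimal .dockerignore along these lines (an illustrative sketch; adjust the entries to your repository layout) keeps node_modules and build output out of the context:
node_modules
**/bin
**/obj
.git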
There are other, similar types of mistakes you might be making. We'd really need to see your Dockerfiles to help you further. Regardless, you should not go with either of your proposed approaches. Just running publish will suffer from the same issues described above, and gives you no recourse to alleviate the problem at that point. Publishing outside of the image can lead to platform inconsistencies and other problems unless you're very careful. It also adds a bunch of manual steps to the image building process, which defeats a lot of the benefit Docker provides. Your images will be larger as well, unless you just happen to publish on exactly the same architecture as what the image will use. If you're developing on Windows, but using Linux images, for example, you'll have to include the full ASP.NET Core runtime. If you build and publish within the image, you can include the SDK only in a stage to build and publish, and then target something like Alpine Linux, with a self-contained, architecture-specific publish.
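To illustrate that last point, a rough sketch (project name, image tags and runtime identifier are assumptions, not taken from your setup): build and publish inside an SDK stage, then copy a self-contained publish onto a small Alpine-based runtime-deps image:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY ["App/App.csproj", "App/"]
RUN dotnet restore "App/App.csproj" -r linux-musl-x64
COPY . .
RUN dotnet publish "App/App.csproj" -c Release -r linux-musl-x64 --self-contained true -o /app
FROM mcr.microsoft.com/dotnet/core/runtime-deps:2.2-alpine AS final
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["./App"]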

Safe way to include a NuGet private source in a Docker container

I'm setting up a Docker container for my ASP.NET Core server and need to find a safe way for restoring NuGet packages before building and running the project.
I've managed to mount a drive containing a new NuGet.config file solely created for this purpose, as my team doesn't include the config file as a part of the Git repository, but it feels wrong.
As the official Docker images for the .NET Core runtime/SDK don't include the nuget CLI, some have suggested downloading a Windows image just to run nuget sources add, but that seems terrible as well.
My Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS base
WORKDIR /app
EXPOSE 5050
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
#Config file needs to be in root of solution or in User/share
RUN dotnet restore "MyProject.csproj"
Adding a private NuGet source should be achievable without downloading a 2 GB Windows image or copying an existing config file that includes the password.
Have a nuget.config file that lists only the package sources, not the credentials, and that is committed to your repo with your source code.
Use cross platform authentication providers to allow devs and CI machines to authenticate to your private feeds.
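A minimal sketch of how that can look in the Dockerfile (project name and paths are assumptions), assuming a credential-free nuget.config is committed next to the project and authentication is handled by a credential provider on the dev machine or CI agent:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
# nuget.config lists only the package source URLs, no passwords
COPY nuget.config .
COPY MyProject.csproj .
RUN dotnet restore "MyProject.csproj" --configfile nuget.config
COPY . .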
Isn't setting the NuGet source path good enough?
RUN dotnet restore -s https://api.nuget.org/v3/index.json -s https://my-local-private-nuget-source/nuget/nuget

How to run tests in Dockerfile using xunit

So I have an ASP.NET project in a folder (src) and a test project in a folder right next to it (tests). What I am trying to achieve is to be able to run my tests and deploy the application using Docker; however, I am really stuck.
Right now there is a Dockerfile in the src folder, which builds the application and deploys it just fine. There is also a Dockerfile for the test project in the tests folder, which should just run my tests.
The tests/Dockerfile currently looks like this:
FROM microsoft/dotnet:2.2.103-sdk AS build
WORKDIR /tests
COPY ["tests.csproj", "Tests/"]
RUN dotnet restore "Tests/tests.csproj"
WORKDIR /tests/Tests
COPY . .
RUN dotnet test
But if I run docker build, the tests fail, I am guessing because the application code to be tested is missing.
I am getting a lot of:
The type or namespace name 'MyService' could not be found (are you missing a using directive or an assembly reference?)
I do have a ProjectReference in my .csproj file, so what could be the problem?
Your test code references some files (containing the type MyService) that have not been copied to the image.
This happens because your COPY . . instruction is executed after the WORKDIR /tests/Tests instruction; therefore, you are copying everything inside the /tests/Tests folder, and not the referenced code which, according to your description, resides in the src folder.
Your problem should be solved by performing COPY . . in your second line, right after the FROM instruction. That way, all the required files will be correctly copied to the image. If you proceed this way, you can simplify your Dockerfile to something like this (not tested):
FROM microsoft/dotnet:2.2.103-sdk AS build
# Copy all files
COPY . .
# Go to the tests directory
WORKDIR /tests/Tests
# Run the tests (this will perform a restore + build before launching the tests)
ENTRYPOINT ["dotnet", "test"]
