I have an ASP.NET Core project running on Docker Desktop on Windows, and I want to use <DockerfileRunArguments> in my .csproj to bind-mount a file from the host.
How can I mount a file with a relative path?
This is the structure of the project:
.
+-- source
| +-- project
| | +-- file.json
| | +-- Dockerfile
| +-- project2
| +-- project3
This is what I've tried but it throws an error:
<DockerfileContext>..\..</DockerfileContext>
<DockerfileRunArguments>-v "$(pwd)/source/project/file.json":"/whatever/file.json"</DockerfileRunArguments>
If I try an absolute path the file is mounted just fine.
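For example, hard-coding the absolute host path like this mounts the file correctly (the Windows path here is just an illustration, not my real one):
<DockerfileRunArguments>-v "C:\repos\myapp\source\project\file.json:/whatever/file.json"</DockerfileRunArguments>
I assume an MSBuild property such as $(MSBuildProjectDirectory) could be used to supply that absolute prefix instead of $(pwd), but I haven't verified that this is the intended approach.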
What I want to do
I want to set up a Docker dev environment for Go.
Code
// directory
project (absolute path: /Users/[username]/project)
|--- app
| |--- config
| | |___ config.go
| |--- main.go
| |___ config.ini
|--- docker-compose.yml
|___ Dockerfile
// main.go
package main

import (
    "app/config"
    "fmt"
)

func main() {
    fmt.Println("Hello World")
    fmt.Println(config.Config.ApiKey)
    fmt.Println(config.Config.ApiSecrete)
}
// docker-compose.yml
version: '3.8'
services:
  app:
    build: .
    tty: true
    volumes:
      - ./app:/go/src/app
// Dockerfile
FROM golang:latest
RUN mkdir /go/src/app
WORKDIR /go/src/app
ENV GO111MODULE=on
ENV GOPATH /go
ADD ./app /go/src/app/
Dev Environment
When I run docker-compose exec app go env, I get
GOPATH="/go"
GOROOT="/usr/local/go"
Problem
When I run docker-compose up -d --build,
I get
package app/config is not in GOROOT (/usr/local/go/src/app/config).
So I can't import "app/config" in the main.go file.
I want to know how to import a self-made package when setting up a Go dev environment with Docker.
You can follow the go-env series, which shows how to use Docker to define your Go development environment in code.
Example: chris-crone/containerized-go-dev
The second article does mention go mod init, which helps with import path:
Let’s make the current directory the root of a module by using go mod init:
$ go mod init example.com/hello
go: creating new go.mod: module example.com/hello
The go.mod file only appears in the root of the module.
Packages in subdirectories have import paths consisting of the module path plus the path to the subdirectory.
For example, if we created a subdirectory world, we would not need to (nor want to) run go mod init there.
The package would automatically be recognized as part of the example.com/hello module, with import path example.com/hello/world.
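For illustration, here is a minimal sketch of that layout in code; the world package and its Greeting function are made up for this example:
// world/world.go
package world

// Greeting is only here to give the package something to export.
func Greeting() string {
    return "Hello, world"
}

// hello.go, at the module root
package main

import (
    "fmt"

    "example.com/hello/world"
)

func main() {
    fmt.Println(world.Greeting())
}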
Thanks to you, I've solved this problem, so I'll share.
I ran go mod init app and go mod tidy.
Then I set GO111MODULE=on in docker-compose.yml:
version: '3.8'
services:
  app:
    build: .
    tty: true
    volumes:
      - ./app:/go/src/app
    environment:
      - "GO111MODULE=on"
// directory
project (absolute path: /Users/[username]/project)
|--- app
| |--- config
| | |___ config.go
| |--- main.go
| |--- config.ini
| |--- go.mod
| |___ go.sum
|--- docker-compose.yml
|___ Dockerfile
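For completeness, a hedged sketch of the resulting app/go.mod (the go directive will reflect whatever toolchain ran go mod init); with the module named app, the existing import "app/config" in main.go now resolves through the module rather than through GOPATH/GOROOT:
// app/go.mod
module app

go 1.16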
I've been trying to find out how to properly structure a Dockerfile in order to build the best possible image, but I haven't found a good approach, which is why I'm asking here.
This is my context:
I'm developing a .NET Core 3 web API
I'm using the template from VS2019
I'm using the original Dockerfile with some modifications
Here is my Dockerfile:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
RUN apt-get update;apt-get install libfontconfig1 -y
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Src/API/API.csproj", "Src/API/"]
RUN dotnet restore "Src/API/API.csproj"
COPY . .
WORKDIR "/src/Src/API"
RUN dotnet build "API.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "API.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "API.dll"]
Here is my solution structure:
.
| .dockerignore
| mySolution.sln
+---Docs
+---Src
| \---API
| | API.csproj
| | API.csproj.user
| | appsettings.Development.json
| | appsettings.json
| | appsettings.Staging.json
| | Dockerfile
| | Dockerfile.original
| | Program.cs
| | Startup.cs
| +---.config
| | dotnet-tools.json
| +---bin
| +---Controllers (source files)
| +---Data (source files)
| +---Database (source files)
| +---Dtos (source files)
| +---Helpers (source files)
| +---Mail (source files)
| +---Migrations (EF source files)
| +---Models (source files)
| +---obj
| +---PDF (source files)
| +---Properties
| | | launchSettings.json
| +---Services (source files)
| \---wwwroot
| +---Templates
| \---uploads
\---Tests
As you can see, if I want to build my image without VS2019, I have to put the Dockerfile in the root directory (where the .sln file is).
For now, if I use this Dockerfile, Docker will copy all files and directories from the Src directory, including the bin/obj directories and the wwwroot directory, which can contain files from my upload tests.
When I check the file structure inside my container from Visual Studio, I can see that I don't need all of these files; I only need my sources in order to build and deploy my app.
How can I improve my Dockerfile in order to build the cleanest possible image?
Some tips (a hedged sketch follows the list):
For security/portability, use Alpine instead of buster-slim in the final image.
In the final image, add USER nobody so the container runs as a non-root user.
That will require using ports above 1024.
For building, control the build context with '-f' so you can keep the Dockerfile next to the project but use the solution root as the context; this also works in CI/CD pipelines.
Run your unit tests inside the Dockerfile before the last stage, so the build stops if they fail.
Think about secrets; how you handle them depends on where you will run your container, because keeping them in app configs isn't recommended.
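To make the first few tips concrete, here is a hedged sketch rather than a drop-in replacement: the runtime image switched to Alpine with a non-root user listening on a port above 1024, a test stage before publish, and the build started from the solution root with -f. The test project path Tests/API.Tests/API.Tests.csproj is hypothetical, so point it at your real test project.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine AS base
# Alpine equivalent of the libfontconfig1 install from the original image.
RUN apk add --no-cache fontconfig
WORKDIR /app
# Non-root users cannot bind ports below 1024, so listen on 8080.
ENV ASPNETCORE_URLS=http://+:8080
EXPOSE 8080

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-alpine AS build
WORKDIR /src
COPY ["Src/API/API.csproj", "Src/API/"]
RUN dotnet restore "Src/API/API.csproj"
COPY . .
RUN dotnet build "Src/API/API.csproj" -c Release -o /app/build

# Run the unit tests; the image build stops here if any of them fail.
FROM build AS test
RUN dotnet test "Tests/API.Tests/API.Tests.csproj" -c Release

FROM build AS publish
RUN dotnet publish "Src/API/API.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
# Run as an unprivileged user in the final image.
USER nobody
ENTRYPOINT ["dotnet", "API.dll"]
With the Dockerfile kept in Src/API, the build is run from the solution root so the whole solution is in the context:
docker build -f Src/API/Dockerfile -t api .
Note that BuildKit only builds stages the target depends on, so you may need docker build --target test to force the test stage to run.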
Using the sam build command, I was expecting it not to include the aws-sdk package, as the Node.js Lambda runtime already includes it.
As I understand it, sam build for Node.js is a port of the claudia pack command from claudiajs, but I do not see any --no-optional-dependencies flag when I run sam build --help.
I tried installing aws-sdk as an optional dependency, but it was still included.
Is there a way to exclude a dependency from the node_modules directory using the sam build command?
From my experimentation I found a couple of options:
Install aws-sdk as a dev dependency
npm i -D aws-sdk
Install aws-sdk as an optional dependency and then use a .npmrc file to disable installing optional dependencies on npm install
npm i -O aws-sdk
# .npmrc
optional = false
My folder structure looks something like this:
-- project
|-- lambdas
| |-- lambda1
| | |-- node_modules
| | | `-- ...
| | |-- .npmrc
| | |-- index.js
| | |-- package-lock.json
| | `-- package.json
| `-- lambda2
| |-- node_modules
| | `-- ...
| |-- .npmrc
| |-- index.js
| |-- package-lock.json
| `-- package.json
|-- package-lock.json
|-- package.json
`-- template.yml
Running sam build in both these instances bundles the packages without the unwanted dependencies for me.
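For illustration, a minimal sketch of what lambdas/lambda1/package.json could look like after npm i -D aws-sdk (the name and version are placeholders); sam build installs with production-only settings, so anything under devDependencies should stay out of the bundled node_modules:
{
  "name": "lambda1",
  "version": "1.0.0",
  "main": "index.js",
  "devDependencies": {
    "aws-sdk": "^2.814.0"
  }
}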
McShaman's answer is valid for NPM 6.
NPM config changed in NPM 7: "optional" was removed.
You should use "omit" instead to ignore optional dependencies:
https://docs.npmjs.com/cli/v7/using-npm/config#omit
# .npmrc
omit=optional
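In npm 7+ the same setting can also be passed on the command line when you run npm install yourself (sam build will still need the .npmrc, since it runs the install for you):
npm install --omit=optional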
Given a file structure like this:
project root
|-- X.sln
|-- src
| |-- Foo
| | |-- Foo.fsproj
| | |-- Foo.fs
| |-- Bar
| |-- Bar.fsproj
| |-- Bar.fs
|-- test
|-- Baz
|-- Baz.fsproj
I'd like to first add all .fsproj files to my Docker image, then run a command, then add the rest of the files. I tried the following, but of course it didn't work:
COPY X.sln .
COPY **/*.fsproj .
RUN dotnet restore
COPY . .
RUN dotnet build
The idea is that after the first two COPY steps, the file tree on the image is like this:
working dir
|-- X.sln
|-- src
| |-- Foo
| | |-- Foo.fsproj
| |-- Bar
| |-- Bar.fsproj
|-- test
|-- Baz
|-- Baz.fsproj
and the rest of the tree is only added in after RUN dotnet restore.
Is there a way to emulate this behavior, preferably without resorting to scripts outside of the Dockerfile?
You can solve this with two RUN commands, using shell tools (find, sed, and xargs).
Follow these steps:
Find all .fsproj files, use a regex to strip the extension from each filename, and pipe the result to xargs to create the directories with mkdir;
Building on the previous step, change the regex to produce a "source destination" pair and use the mv command to move each file into its newly created folder.
Example:
COPY *.sln ./
COPY */*.fsproj ./
RUN find *.fsproj | sed -e 's/.fsproj//g' | xargs mkdir
RUN find *.fsproj | sed -r -e 's/((.+).fsproj)/.\/\1 .\/\2/g' | xargs -I % sh -c 'mv %'
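To see what those two pipelines actually do, here is a trace for a single hypothetical file named Foo.fsproj:
$ echo 'Foo.fsproj' | sed -e 's/.fsproj//g'
Foo
$ echo 'Foo.fsproj' | sed -r -e 's/((.+).fsproj)/.\/\1 .\/\2/g'
./Foo.fsproj ./Foo
So the first RUN ends up executing mkdir Foo, and the second executes mv ./Foo.fsproj ./Foo.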
References:
how to use xargs with sed in search pattern
If you use the dotnet CLI to manage your solution, you can use this piece of code:
Copy the solution and all project files to the WORKDIR
List projects in the solution with dotnet sln list
Iterate the list of projects and move the respective *proj files into newly created directories.
COPY *.sln ./
COPY */*/*.*proj ./
RUN dotnet sln list | \
tail -n +3 | \
xargs -I {} sh -c \
'target="{}"; dir="${target%/*}"; file="${target##*/}"; mkdir -p -- "$dir"; mv -- "$file" "$target"'
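For context, dotnet sln list prints a two-line header before the project paths, which is why the output is piped through tail -n +3. For the layout in the question it looks roughly like this (exact path separators can differ by platform):
Project(s)
----------
src/Foo/Foo.fsproj
src/Bar/Bar.fsproj
test/Baz/Baz.fsproj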
One pattern that can be used to achieve what you want without resorting to a script outside the Dockerfile is this:
COPY <project root> .
RUN <command to tar/zip the directory to save a copy inside the container> && \
    <command that removes all the files you don't want> && \
    dotnet restore && \
    <command to unpack the tar/zip and restore the files> && \
    <command to remove the tar/zip> && \
    dotnet build
This keeps all of your operations inside the container. I've bundled them all into one RUN command to keep all of that activity in a single layer of the build. You can break them out if you need to.
Here's just one example on Linux of how to recursively remove all files except the ones you want to keep: https://unix.stackexchange.com/a/15701/327086. My assumption, based on your example, is that this won't be a costly operation for you.
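For the layout in the question, a hedged sketch of that pattern could look like the following; the find expression is one way to delete everything except the .sln and .fsproj files, so adjust it to your tree:
COPY . .
RUN tar cf /tmp/source.tar . && \
    # Keep only the solution and project files for the restore step.
    find . -type f ! -name '*.sln' ! -name '*.fsproj' -delete && \
    dotnet restore && \
    # Bring the full source tree back and clean up the archive.
    tar xf /tmp/source.tar && \
    rm /tmp/source.tar && \
    dotnet build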
Great question and I believe I have found the solution.
Have .dockerignore like this
# Ignore everything except *.fsproj.
**/*.*
!**/*.fsproj
Have your Dockerfile-AA like this (adjust the ls lines to match your own project layout):
FROM your-image
USER root
RUN mkdir -p /home/aa
WORKDIR /home/aa
COPY . .
RUN ls
RUN ls src
RUN ls src/p1
RUN ls src/p2
RUN ls src/p3
RUN dotnet restore
Have your docker command like this
sudo docker build --rm -t my-new-img -f Dockerfile-AA .
Run it the first time and it will show only the .fsproj files being copied.
Run it again and you will not see the ls output, because the cached layers are being reused.
Obviously, you cannot finish the build in the same Dockerfile, so have another Dockerfile like Dockerfile-BB:
FROM my-new-img
COPY . .
RUN dotnet build
So, have your script like this
docker build --rm -t my-new-img -f Dockerfile-AA .
rm -f .dockerignore
docker build --rm -t my-new-img2 -f Dockerfile-BB .
This should work. When building my-new-img, it is going to be blazing fast. You need to try this at least twice, because in my experience the cache was not created right away. This is far better than COPYing each project file on its own line.
The docker run command creates an empty container. I mean that the Docker image builds successfully, but when I run a container, none of my changes are there. I am working on Ubuntu 16.04 with Vagrant. My Dockerfile:
FROM node:6.2.0-wheezy
MAINTAINER Lukasz Migut
RUN mkdir -p /app
ADD package.json /app/package.json
RUN cd /app/ && \
npm cache clear && \
npm install --no-bin-links --quiet
RUN mkdir -p /var/www/backend
WORKDIR /var/www/backend
COPY entrypoint.sh /entrypoint.sh
CMD ["/bin/bash", "/entrypoint.sh"]
EXPOSE 8080
This is the output after docker build .
Sending build context to Docker daemon 6.144 kB
Step 1 : FROM node:6.2.0-wheezy
6.2.0-wheezy: Pulling from library/node
47994b92ab73: Already exists
a3ed95caeb02: Already exists
9b7b75987c3c: Already exists
d66c4af59bfb: Already exists
26df7d6a7371: Already exists
b656b9b4e7eb: Already exists
e3753c84bc68: Already exists
Digest: sha256:9a04df0cd52487e2fb6d6316890380780209f9211f4767934c5f80f2da83a6bf
Status: Downloaded newer image for node:6.2.0-wheezy
---> ecbd08787958
Step 2 : MAINTAINER Lukasz Migut
---> Running in 2a1d31015aea
---> 6d6ff7769ec5
Removing intermediate container 2a1d31015aea
Step 3 : ADD package.json /app/package.json
---> 5a28cc87577c
Removing intermediate container 3df429908e6c
Step 4 : RUN cd /app/ && npm cache clear && npm install --no-bin-links --quiet
---> Running in 1fc442eb449a
npm info it worked if it ends with ok
npm info using npm@3.8.9
npm info using node@v6.2.0
npm info ok
blog-backend@0.0.1 /app
`-- hapi@13.4.1
+-- accept@2.1.1
+-- ammo@2.0.1
+-- boom@3.2.0
+-- call@3.0.1
+-- catbox@7.1.1
+-- catbox-memory@2.0.2
+-- cryptiles@3.0.1
+-- heavy@4.0.1
+-- hoek@4.0.0
+-- iron@4.0.1
+-- items@2.1.0
+-- joi@8.1.0
| +-- isemail@2.1.2
| `-- moment@2.13.0
+-- kilt@2.0.1
+-- mimos@3.0.1
| `-- mime-db@1.23.0
+-- peekaboo@2.0.1
+-- shot@3.0.1
+-- statehood@4.0.1
+-- subtext@4.0.3
| +-- content@3.0.1
| +-- pez@2.1.1
| | +-- b64@3.0.1
| | `-- nigel@2.0.1
| | `-- vise@2.0.1
| `-- wreck@7.2.1
`-- topo@2.0.1
---> ad5bf17db156
Removing intermediate container 1fc442eb449a
Step 5 : WORKDIR /var/www/backend
---> Running in 3f75e64f3880
---> 477162d999c0
Removing intermediate container 3f75e64f3880
Step 6 : COPY entrypoint.sh /entrypoint.sh
---> b0918e5611e2
Removing intermediate container b1c46f9175dd
Step 7 : CMD /bin/bash /entrypoint.sh
---> Running in fd2c72465c11
---> 275911ac22ca
Removing intermediate container fd2c72465c11
Step 8 : EXPOSE 8080
---> Running in d54c25afb6a1
---> f4ba799427cc
Removing intermediate container d54c25afb6a1
Successfully built f4ba799427cc
The docker images command shows:
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> f4ba799427cc 28 minutes ago 514.2 MB
node 6.2.0-wheezy ecbd08787958 8 days ago 503.9 MB
Next I try to run a container with docker run -it node:6.2.0-wheezy /bin/bash,
and when I log in to the container I can't see, for example, the folders created in the Dockerfile, and there is no node_modules.
What am I doing wrong? Why can't I see my changes?
You built an image... it is the one tagged "<none>" in your docker images output. However, when you tried to run a container, you used the old image that you based your new image on. Your changes are in the new image, not the old one.
To get it to work you have to tag your new image with a name and use that name...
docker build -t myimagename .
docker run -it myimagename /bin/bash
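Alternatively, you can run the untagged image directly by the ID shown in docker images, which for the build above would be:
docker run -it f4ba799427cc /bin/bash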