Visual Studio Dockerfile EntryPoint Override Explained? - docker

I am new to Docker and trying to understand it, but I have noticed that Visual Studio does a lot of 'magic' behind the scenes. I have managed to figure out all of my questions about the docker run command VS uses when you debug an ASP.NET Core app with Docker support, except one.
docker run
-dt
-v "C:\Users\jnhaf\vsdbg\vs2017u5:/remote_debugger:rw"
-v "D:\ProtoTypes\WebAppDockerOrNot\WebAppDockerOrNot:/app"
-v "C:\Users\jnhaf\AppData\Roaming\ASP.NET\Https:/root/.aspnet/https:ro"
-v "C:\Users\jnhaf\AppData\Roaming\Microsoft\UserSecrets:/root/.microsoft/usersecrets:ro"
-v "C:\Users\jnhaf\.nuget\packages\:/root/.nuget/fallbackpackages2"
-v "C:\Program Files\dotnet\sdk\NuGetFallbackFolder:/root/.nuget/fallbackpackages"
-e "DOTNET_USE_POLLING_FILE_WATCHER=1"
-e "ASPNETCORE_ENVIRONMENT=Development"
-e "ASPNETCORE_URLS=https://+:443;http://+:80"
-e "ASPNETCORE_HTTPS_PORT=44328"
-e "NUGET_PACKAGES=/root/.nuget/fallbackpackages2"
-e "NUGET_FALLBACK_PACKAGES=/root/.nuget/fallbackpackages;/root/.nuget/fallbackpackages2"
-p 4800:80
-p 44328:443
--entrypoint tail webappdockerornot:dev -f /dev/null
The final argument --entrypoint tail webappdockerornot:dev -f /dev/null is the one that confuses me. I get that VS is overriding the entry point set up in the Dockerfile, but what I do not understand, nor can find online, is what tail and -f /dev/null mean here. I figured out that webappdockerornot:dev is the Docker image, but can someone explain how this argument works, or provide a link to something that explains it?

We can break down that command line a little differently as
docker run \
... some other arguments ... \
--entrypoint tail \
webappdockerornot:dev \
-f /dev/null
and match this against a general form
docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
So the --entrypoint tail option sets the entry point to tail, and the "command" part is -f /dev/null. When Docker actually launches the container, it passes the command as additional arguments to the entrypoint. In the end, the net effect of this is
Ignore what the Dockerfile said to do; after setting up the container runtime environment, run tail -f /dev/null instead.
which in turn is a common way to launch a container that doesn't do anything but also stays running. Then you can use docker exec and similar debugging-oriented tools to do things inside the container.
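The behavior of tail -f /dev/null is easy to verify outside Docker too; a minimal local sketch (plain shell, no Docker required) showing that the process stays alive and produces no output:

```shell
# Run the same command a VS debug container uses as PID 1, in the background.
tail -f /dev/null > /tmp/tail.out &
pid=$!
sleep 1

# It is still alive a second later...
kill -0 "$pid" && echo "tail is still running"

# ...and it has written nothing to its output (prints 0).
wc -c < /tmp/tail.out

kill "$pid"
```

Because the process never exits and never prints, the container it runs in stays in the Up state indefinitely while doing no work, which is exactly what a debugger wants to exec into.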

Related

What does it mean when Docker is simultaneously run in interactive and detached modes?

I'm new to Docker and came across this confusing (to me) command in one of the Docker online manuals (https://docs.docker.com/storage/bind-mounts/):
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app,readonly \
nginx:latest
What I found confusing was the use of both the -it flag and the -d flag. I thought -d means to run the container in the background, but -it means to allow the user to interact with the container via the current shell. What does it mean that both flags are present? What am I not understanding here?
The -i and -t flags influence how stdin and stdout are connected, even in the presence of the -d flag. Furthermore, you can always attach to a container in the future using the docker attach command.
Consider: If I try to start an interactive shell without passing -i...
$ docker run -d --name demo alpine sh
...the container will exit immediately, because stdin is closed. If I want to run that detached, I need:
$ docker run -itd --name demo alpine sh
This allows me to attach to the container in the future and interact with the shell:
$ docker attach demo
/ #

why would we want to use both --detach switch with --interactive and --tty in docker?

I'm reading the docker documentations, and I've seen this command:
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app,readonly \
nginx:latest
As far as I know, using the -d or --detach switch runs the command outside of the current terminal emulator and returns control of the terminal to the user. Using --tty (-t) and --interactive (-i) is completely the opposite. Why would anyone want to use them together in one command?
For that specific command, it doesn't make sense, since nginx does not have an interactive component. But in general, it allows you to later attach to the container with docker attach. E.g.
$ docker run --name test-no-input -d busybox /bin/sh
92c0447e0c19de090847b7a36657d3713e3795b72e413576e25ab2ce4074d64b
$ docker attach test-no-input
You cannot attach to a stopped container, start it first
$ docker run --name test-input -dit busybox /bin/sh
57e4adcc14878261f64d10eb7839b35d5fa65c841bbcb3cd81b6bf5b8fe9d184
$ docker attach test-input
/ # echo hello from the container
hello from the container
/ # exit
The first container stopped since it was running a shell, and there was no input on stdin (no -i). A shell exits when it finishes reading input (e.g. the end of a shell script).
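That shell behavior can be observed without Docker at all. A small sketch: with stdin at end-of-file, sh exits immediately and cleanly, which is what happens inside a container started without -i:

```shell
# Redirecting stdin from /dev/null gives the shell an immediate EOF,
# so it exits right away -- just like `docker run -d busybox /bin/sh` does.
sh < /dev/null
echo "shell exited with status $?"
# prints: shell exited with status 0
```

Passing -i keeps the container's stdin open even when detached, so the shell blocks waiting for input instead of reading EOF and quitting.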

Store docker command result in variable in Makefile

We are trying to store the container names in a Makefile, but I see the below error when executing the build; can someone please advise? Thanks.
.PHONY: metadata
metadata: .env1
docker pull IMAGE_NAME
docker run $IMAGE_NAME;
ID:= $(shell docker ps --format '{{.Names}}')
#echo ${ID}
docker cp ${ID}:/app/.env .env2
The container names are not shown in the "ID" variable below when executing the Makefile from Jenkins:
ID:=
/bin/sh: ID:=: command not found
There are a couple of things you can do in terms of pure Docker mechanics to simplify this.
You can specify an alternate command when you docker run an image: anything after the image name is taken as the command to run. For instance, you can cat the file as the main container command, and replace everything you have above with:
.PHONY: getmetadata
getmetadata: .env2
.env2: .env1
docker run --rm \
-e "ARTIFACTORY_USER=${ARTIFACTORY_CREDENTIALS_USR}" \
-e "ARTIFACTORY_PASSWORD=${ARTIFACTORY_CREDENTIALS_PSW}" \
--env-file .env1 \
"${ARTIFACTDATA_IMAGE_NAME}" \
cat /app/.env \
> $@
(It is usually better to avoid docker cp, docker exec, and other imperative-type commands; it is fairly inexpensive and better practice to run a new container when you need to.)
If you can't do this, you can pass docker run --name with your choice of name, and then use that container name in the docker cp command.
.PHONY: getmetadata
getmetadata: .env2
.env2: .env1
docker run --name getmetadata ...
docker cp getmetadata:/app/.env $@
docker stop getmetadata
docker rm getmetadata
If you really can't avoid this at all, each line of the Makefile runs in a separate shell. On the one hand this means you need to join together lines if you want variables from one line to be visible in a later line; on the other, it means you have normal shell functionality available and don't need to use the GNU Make $(shell ...) extension (which evaluates when the Makefile is loaded and not when you're running the command).
.PHONY: getmetadata
getmetadata: .env2
.env2: .env1
# Note here:
# $$ escapes $ for the shell
# Multiple shell commands joined together with && \
# Beyond that, pure Bourne shell syntax
ID=$$(docker run -d ...) && \
echo "$$ID" && \
docker cp "$$ID:/app/.env" "$@"
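The separate-shell behavior itself can be sketched without Docker, using a throwaway Makefile (GNU make assumed; the file name and the ID variable are purely illustrative):

```shell
# Each recipe line runs in its own shell: ID set on the first line of
# `split` is gone by the second line, while `joined` chains its lines
# with `&& \` so they share a single shell.
printf 'split:\n\t@ID=hello ; true\n\t@echo "split: [$$ID]"\njoined:\n\t@ID=hello && \\\n\techo "joined: [$$ID]"\n' > /tmp/demo.mk
make -f /tmp/demo.mk split joined
```

This prints split: [] but joined: [hello], which is exactly why the docker run / docker cp lines above must be joined into one logical recipe line.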

What is tail command with docker run entrypoint in visual studio 2019?

I am running Windows 10 pro, docker installed and linux containers.
With Visual Studio 2019, I created a basic .NET Core web API app and enabled Docker support (Linux).
I built the solution, and in the output window (View -> Output or Ctrl + Alt + O) I selected "Container Tools" in the Show Output From drop-down. Scrolling to the end of the output,
you see the entry point option to the docker run command as follows.
--entrypoint tail webapp:dev -f /dev/null
The entire docker run command, for reference, is as follows.
docker run -dt -v "C:\Users\MyUserName\vsdbg\vs2017u5:/remote_debugger:rw" -v "D:\Trials\Docker\VsDocker\src\WebApp:/app" -v "D:\Trials\Docker\VsDocker\src:/src" -v "C:\Users\UserName\.nuget\packages\:/root/.nuget/fallbackpackages" -e "DOTNET_USE_POLLING_FILE_WATCHER=1" -e "ASPNETCORE_ENVIRONMENT=Development" -e "NUGET_PACKAGES=/root/.nuget/fallbackpackages" -e "NUGET_FALLBACK_PACKAGES=/root/.nuget/fallbackpackages" -P --name WebApp --entrypoint tail webapp:dev -f /dev/null
So my question is: what is this "tail"? I saw two SO questions (this and this) but could not get much from them. Also, from here, tail seems to be a Linux command (and I am running a Linux container), but what does it do here?
Please enlighten me.
The entrypoint is the binary that is executed when the container starts.
Example: --entrypoint=bash or --entrypoint=helm.
The tail Linux utility displays the last part of a file on standard output; with the -f flag it keeps the file open and follows it, printing new data as it is appended.
/dev/null is a special device that discards anything written to it and returns end-of-file immediately when read. So when you run tail -f /dev/null in a terminal, it prints nothing and never exits.
If you would like to keep your container running in detached mode, you need to run something in the foreground. An easy way to do this is to tail the /dev/null device as the CMD or ENTRYPOINT command of your Docker image.
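Both properties of /dev/null are easy to check locally, and they explain why the -f flag matters: without it, tail hits end-of-file and exits at once.

```shell
echo "discarded" > /dev/null   # writes vanish: nothing is stored or shown
wc -c < /dev/null              # reads see immediate EOF: prints 0
tail /dev/null                 # so a plain tail prints nothing and exits;
                               # only `tail -f` keeps waiting forever
```

With -f, tail waits forever for data that never arrives, so the container's main process never terminates.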

escaping double quote characters with Jenkins docker plugin

I want to spin up a container using the Jenkins docker plugin as follows:
docker.image('microsoft/mssql-server-linux').run("\"ACCEPT_EULA=Y\" -e \"SA_PASSWORD=P#ssword1\" --name SQLLinuxMaster -d -i -p 15565:1433")
My initial thought was that \" should work; however, when I run a build the command fails. Looking in the Jenkins log, it appears that the escaped double quotes (or what I think should be escaped double quotes) are not appearing.
Can someone please point me in the right direction as to how I should correctly escape the double quote characters in the run argument?
Using the conventional docker command line the following spins up the container as desired:
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P#ssword1" --name SQLLinuxChris -d -i -p 15565:1433 microsoft/mssql-server-linux
You can use
docker.image('microsoft/mssql-server-linux').run("-e ACCEPT_EULA=Y -e SA_PASSWORD=P#ssword1 --name SQLLinuxMaster -d -i -p 15565:1433")
You don't need the double quotes. Also, you were missing a -e before the first environment variable, which may have caused the issue.
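The reason the quotes are unnecessary: on the regular command line, the shell consumes them before docker ever sees its arguments, so they were never part of the values. A quick local check with a small illustrative helper (printargs is made up here; any program that prints its argv would do):

```shell
# Print each argument in brackets; note the double quotes are already gone
# by the time the program receives its arguments.
printargs() { for a in "$@"; do printf '[%s]\n' "$a"; done; }
printargs -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=P#ssword1"
```

The plugin's run() string does not go through the same shell quote processing, which is presumably why literal \" characters ended up causing trouble rather than being stripped.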