I am currently using Visual Studio to build a console application that has Docker support. The problem is that the application does not seem to start in an external command prompt; instead the
output appears in the internal console window of Visual Studio. How do I make it execute in a command prompt window?
It seems that the command Visual Studio uses forces the output into the dev console window:
docker exec -i -w "/app" b6375046a58cba92571a425d937a16bd222d87b537af1c1d64ca6b4c845616c9 sh -c ""dotnet" --additionalProbingPath /root/.nuget/fallbackpackages2 --additionalProbingPath /root/.nuget/fallbackpackages "bin/Debug/netcoreapp3.1/console.dll" | tee /dev/console"
How do I change the exec command line so that the output goes to a different window?
And is it somehow possible to deploy these containerized applications into a locally running Kubernetes cluster?
That way I could use Kubernetes services instead of specifying IP addresses and so on.
There is no such thing as a "different window" in Docker.
You can run your app in the foreground or in detached mode (-d).
To start a container in detached mode, use -d=true or just -d.
In foreground mode you should not specify the -d flag.
In foreground mode (the default when -d is not specified), docker run starts the process in the container and attaches the console to the process's standard input, output, and standard error.
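If you want to watch the container's output from a separate command prompt window, a minimal sketch (using the container ID from the exec command above) is to keep the container running and then, from any other terminal, follow its logs or open a shell in it:

# From any other command prompt window:
docker logs -f b6375046a58c      # follow the container's stdout/stderr
docker exec -it b6375046a58c sh  # or open an interactive shell in the same container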
And, of course, you can deploy your applications into a Kubernetes cluster without any trouble; try minikube to achieve everything you need.
And Kubernetes Services are another way to expose your app to the world or to other workloads in the cluster:
An abstract way to expose an application running on a set of Pods as a network service.
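As a minimal sketch (the image name my-console-app and port 80 are assumptions, not taken from the question), building a local image against minikube and exposing it as a Service could look like this:

# Build the image against minikube's Docker daemon so the cluster can see it.
eval $(minikube docker-env)
docker build -t my-console-app:dev .

# Create a Deployment and expose it as a Service, so other workloads
# reach it by name instead of by IP address.
kubectl create deployment my-console-app --image=my-console-app:dev
kubectl expose deployment my-console-app --port=80 --target-port=80
kubectl get service my-console-app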
Related
On my CentOS server I created a container with Docker,
and I opened two sessions connected to the container with the command:
docker attach container-name
but there is an issue: whichever window I execute a command in, the other window displays the same output,
so I cannot control the container while it is installing a package.
Is it possible to avoid this issue?
The docker attach command attaches to the currently running process as defined by CMD. You can attach as many times as you want, but they all connect to the same process.
If you want to access the container and have different sessions to it, use:
docker exec -it container-name bash
Or whatever shell is available. bash is common, but you may need to use sh or find out what's used, if any is there at all. Some containers are super stripped down.
The -i and -t flags enable an interactive session with a terminal attached; otherwise exec just runs the command and shows you its output.
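So, as a minimal sketch, each of your two terminal windows can open its own independent shell (container-name is the name from your question):

# Terminal 1: install packages in one shell
docker exec -it container-name bash

# Terminal 2: a second, independent shell in the same container
docker exec -it container-name bash

Each exec starts a new process, so the two sessions no longer mirror each other the way two attach sessions do.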
Running docker run with the -d option is described as running the container in the background. This is what most tutorials do when they don't want to interact with the container. In another tutorial I saw the bash-style & used to send the process to the background instead of adding the -d option.
Running docker run -d hello_world only outputs the container ID. On the other hand, docker run hello_world & still gives me the same output as if I had run docker run hello_world.
If I do both experiments with docker run nginx I get the same behavior in both cases (at least as far as I can see), and both show up if I run docker ps.
Is the process the same in both cases (apart from the printing of the ID and the output not being redirected with &)? If not, what is going on behind the scenes in each?
Docker is designed as a client-server architecture: the docker client and the docker daemon (which in fact can be refined further into containerd, shim, runc, etc.).
When you execute docker run, the docker client just sends the request to the docker daemon, and the daemon calls runc etc. to start the container.
So:
docker run -d: runc runs the container in the background; you can use docker logs $container_name to see all the logs later. The backgrounding happens on the server side.
docker run &: the Linux shell puts the docker run command itself in the background, so the backgrounding happens on the client side. You can still see stdout etc. in the terminal. Moreover, once you leave that terminal (even if your shell was started with nohup), you will not see the output in a terminal anymore; you still need docker logs to see it.
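A minimal sketch of the difference (nginx is just an example image; the container names are made up):

# Server-side background: only the container ID is printed; output goes to the daemon.
docker run -d --name web1 nginx
docker logs -f web1

# Client-side background: the shell backgrounds the docker client itself,
# so the container's output still lands in this terminal.
docker run --name web2 nginx &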
Like most docker users, I periodically need to connect to a running container and execute various arbitrary commands via bash.
I'm using 17.06-CE with an ubuntu 16.04 image, and as far as I understand, the only way to do this without installing ssh into the container is via docker exec -it <container_name> bash
However, as is well documented, for each bash shell process you generate, you leave a zombie process behind when your connection is interrupted. If you connect to your container often, you end up with thousands of idle shells, a most undesirable outcome!
How can I ensure these zombie shell processes are killed upon disconnection, as they would be over ssh?
One way is to make sure an init process runs in your container.
In recent versions of docker there is an --init option to docker run that should do this. It uses tini to run init, which can also be used directly in previous versions.
Another option is something like the phusion-baseimage project that provides a base docker image with this capability and many others (might be overkill).
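A minimal sketch with the --init flag (ubuntu:16.04 mirrors the image in the question; the sleep command is only a stand-in for your real workload):

# Run the container with tini as PID 1 so orphaned shells get reaped.
docker run -d --init --name myapp ubuntu:16.04 sleep infinity

# Shells opened this way are cleaned up by the init process when they exit or become zombies.
docker exec -it myapp bash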
I am new to docker and I tried to run the linuxconfig/lemp-php7 image. Everything worked fine and I could access the nginx web server installed in the container. To run this image I used this command:
sudo docker run linuxconfig/lemp-php7
When I tried to run the image with the following command, in order to gain shell access to the container, I couldn't connect to nginx and I got a connection refused error message. Command: sudo docker run -ti linuxconfig/lemp-php7 bash
I tried this several times so I'm pretty sure it's not any kind of coincidence.
Why does this happen? Is this a problem specific to this particular image, or is it a general problem? And how can I gain access to the shell of the container and access the web server at the same time?
I'd really like to understand this behavior to improve my general understanding of docker.
docker run runs the specified command instead of what that container would normally run. In your case, that appears to be supervisord, which presumably in turn runs the web server. So by passing bash you're preventing any of that from happening.
My preferred method (except when I'm trying to debug why a container won't even start properly) is to do the following after running the container normally:
docker exec -i -t $CONTAINER_ID /bin/bash
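Putting it together for this particular image, a minimal sketch (the container name web is an assumption):

# Start the image normally in the background, so supervisord and nginx run as usual.
sudo docker run -d --name web linuxconfig/lemp-php7

# Then open a shell alongside the running services instead of replacing them.
sudo docker exec -it web /bin/bash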
What is detached mode in the docker world? I read this article
Link, but it does not explain exactly what detached mode means.
You can start a docker container in detached mode with the -d option, so the container starts up and runs in the background. That means you start the container and can use the console after startup for other commands.
The opposite of detached mode is foreground mode. That is the default mode, when -d option is not used. In this mode, the console you are using to execute docker run will be attached to standard input, output and error. That means your console is attached to the container's process.
In detached mode, you can follow the standard output of your docker container with docker logs -f <container_ID>.
Just try both options. I always use detached mode to run my containers. I hope I could explain it a little more clearly.
The detach option on the docker command line indicates that the docker client (docker) will make a request to the server (dockerd), and then the client will exit while that request continues on the server. Part of the confusion may be that docker looks like a single process, where in reality it is a client/server application where the client is just a thin frontend on a REST API to send every command to the server.
With docker container run --detach, the container will be created, the server will respond with a container ID if successful, and the container will continue to run on the server while you are free to run other commands. This is often used for a server (e.g. nginx) you want to start in the background while you continue to run other commands. Note that you can still configure a container with the --interactive and --tty options (often abbreviated -it) and later run docker container attach to connect to an already running container. (Note that until you attach to a container running with -itd, any attempt by the container to read from stdin will hang, instead of seeing the end of input that often triggers an immediate exit if you just passed -d.)
If you run without the detach option, the client will immediately run an attach API call after the container is created so you can see the output and optionally provide input to the running process on the container. This is useful if your container is running something interactive (e.g. /bin/bash).
Several other commands allow the detach option, including docker-compose up -d, which will start an entire project and leave it running on the server in the background. There are also many docker service commands which will either detach after submitting the change to the server to create or update a service's target state, or, if you do not detach, wait until the service's current state matches the target state so you can see the progress of the deployment. Note that with docker service commands you may have to pass --detach=false to remain attached; the behavior has changed over the past year depending on your version.
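For example (assuming a docker-compose.yml already exists in the current directory and swarm mode is initialized; the service name web is made up):

# Start an entire compose project in the background.
docker-compose up -d

# Create a swarm service and return immediately instead of waiting for convergence.
docker service create --detach --name web nginx

# Update the service and stay attached so the rollout progress is shown.
docker service update --detach=false --image nginx:alpine web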
What is detached mode in the docker world?
Detached means that the container will run in the background, without being attached to any input or output stream.
Docker provides the --detach (or -d for short) option to start the program in the background.
This means that the program started but isn’t attached to your terminal.
Example:
docker run --detach --name web nginx:latest # Note the detach flag.
Why do we need --detach mode?
Server software is generally run in detached containers because it is rare that the software depends on an attached terminal.
Running detached containers is a perfect fit for programs that sit quietly in the background.
Note
Generally, this type of program is called a daemon, or a service. A daemon generally interacts with other programs (or with humans over a network) or some other communication channel. When you launch a daemon or other program in a container that you want to run in the background, remember to use either the --detach flag or its short form, -d.
To understand what -d means in the Docker world, let me explain it more clearly with a small simulation. Detached mode means you tell Docker to keep running the process (container) until you explicitly run docker stop container-id (or container-name). Without detached mode, Docker runs the process (container) only until you press Ctrl+C or close the terminal; in other words, you have no choice about when it stops if you run the process (container) without -d. With -d, Docker returns the ID of the container, because the process (container) runs in the background.
docker run -d -t ubuntu:14.04
docker run - Create a container from a docker image and start it.
(if the image is not available locally, it is pulled from Docker Hub)
ubuntu - Image name
14.04 - Tag
-d, --detach - Detach mode
-t, --tty - Allocate a pseudo-TTY
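Once the container is running detached, a minimal follow-up sketch (the <container-id> placeholder stands for the ID returned by the command above):

# List running containers, open a shell in the detached one, and stop it when done.
docker ps
docker exec -it <container-id> bash
docker stop <container-id>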