A command to tell whether we are inside docker

As a novice user of a complicated CI system, trying out scripts, I am unsure whether my scripts are executed directly by my system's bash or from inside a docker container running on the same system. Hence the question: what command (an environment-variable query or anything else) could tell me whether I am in docker or not?

I guess you are trying to find out whether your script is run from within the context of a docker container OR from within the host machine which runs docker.
Another way of looking at this is: you have a script which is running and this script is actually a process. And any given process has an associated PID.
You might want to find out if this process is running within a docker container or directly within the host machine.
Let's say your process runs within a docker container; then a docker process is the parent of your process.
Running the top command lists all the processes on the machine. Running ps -axfo pid,uname,cmd gives a full listing of processes with their PIDs, users, and commands.
Let's say you have identified the parent process id (e.g. 2871). Now you can run
docker ps | awk '{ print $1}' | xargs docker inspect -f '{{ .State.Pid }} {{ .Config.Hostname }}' | grep 2871
Using this you can identify the container containing the process.
If we run pstree, we can see the process tree all the way up to the boot process.
Courtesy:
Finding out to which docker container a process belongs to
how-do-i-get-the-parent-process-id-of-a-given-child-process
Hope this helps
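The lookup steps above can be combined into a small sketch. The docker commands themselves need a running daemon, so they appear as a comment; the matching step is plain text filtering on the `docker inspect` output and is shown as a function (the function name and example PID are placeholders, not part of the answer above):

```shell
# Match a PID against lines of "<pid> <hostname>", the format produced by
# `docker inspect -f '{{ .State.Pid }} {{ .Config.Hostname }}'`.
# Prints the hostname (short container id) owning that PID, if any.
container_for_pid() {
  awk -v pid="$1" '$1 == pid { print $2 }'
}

# Typical use on the host (requires a running docker daemon):
#   docker ps -q \
#     | xargs docker inspect -f '{{ .State.Pid }} {{ .Config.Hostname }}' \
#     | container_for_pid 2871
```

If the function prints a hostname, the PID belongs to that container; no output means the process runs directly on the host (or in a container not listed by docker ps).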

If you find yourself in a container, you must have executed a command to enter that container.
If you forgot where you are, type docker ps. If it fails, you are in a docker container.
Edit :
Obviously, this simple trick does not work when you run docker in docker.
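The trick above can be wrapped in a tiny helper. To keep it testable without docker, this sketch takes the command to probe as arguments (an assumption of the sketch; in practice you would pass `docker ps`):

```shell
# Heuristic from the answer above: if `docker ps` fails, you are
# probably inside a container, where the docker CLI/daemon is usually absent.
where_am_i() {
  if "$@" >/dev/null 2>&1; then
    echo "host"
  else
    echo "probably a container"
  fi
}

# Typical use:
#   where_am_i docker ps
```

As the edit notes, this is only a heuristic: it misfires for docker-in-docker, or on a host where docker simply isn't installed.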

Related

How to dump JVM memory in docker swarm mode?

My Spring Cloud applications are running in docker swarm mode, and recently I found that the memory usage is not at a normal level. I want to use a tool like jmap or something else to dump the heap so that I can tune the JVM parameters and solve the problem.
I have tried tools like arthas, but they failed because of the PID 1 problem: we cannot attach to a process whose PID is 1-5.
How can I know the heap usage (eden, survivor, etc.)?
The easiest way, I think, is to get into the container's shell and install the necessary tools right there. Use this on a node with the application container to start an interactive session:
docker exec -u root -it <container_name> sh
After you've finished, stop the container and recreate it, thus cleaning up.
If at some point you need to extract a file from container (e.g. dump) use docker cp from another console on the node:
docker cp <container_name>:<path_in_container> <local_path>
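Putting the answer together, a typical session might look like the following. The container name app is a placeholder, and a JDK being available in the image is an assumption (jmap ships with the JDK, not the JRE):

```shell
# on the node: open a root shell in the container
docker exec -u root -it app sh

# inside the container: dump the heap of the JVM (often PID 1 in a container)
jmap -dump:live,format=b,file=/tmp/heap.hprof 1

# back on the node, from another console: copy the dump out
docker cp app:/tmp/heap.hprof ./heap.hprof
```

If jmap hits the same PID 1 attach problem mentioned in the question, a common workaround is to keep the JVM from being PID 1, for example by running the container under an init process (docker run --init).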

Is there a point in Docker start?

So, is there a point in the command "start"? Like in "docker start -i albineContainer".
If I do this, I can't really do anything with the alpine inside the container; I would have to do a run and create another container with the "-it" flag and "sh" after (or "/bin/bash", I don't remember correctly right now).
Is that how it will go most of the time? Delete and rebuild containers, and use "-it" if you want to do stuff in them? Or does it depend more on the Dockerfile and how you define the CMD?
New to Docker in general and trying to understand the basics of how to use it. Thanks for the help.
Running docker run/exec with -it means you run the docker container and attach an interactive terminal to it.
Note that you can also run docker applications without attaching to them, and they will still run in the background.
Docker allows you to run a program (which can be bash, but does not have to be) in an isolated environment.
For example, try running the jenkins docker image: https://hub.docker.com/_/jenkins.
This will create a container without you having to attach to it, and you will still be able to use it.
You can also attach to an existing, running container by using docker exec -it [container_name] bash.
You can also use docker logs to peek at the stdout of a certain docker container, without actually attaching to its shell interactively.
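As a concrete sketch of that detached workflow (the container name web and the nginx image are example choices, not from the answer above):

```shell
docker run -d --name web nginx   # runs in the background, no terminal attached
docker logs web                  # peek at its stdout without attaching
docker exec -it web bash         # open an interactive shell in the running container
```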
You almost never use docker start. It's really only useful in two unusual circumstances:
If you've created a container with docker create, then docker start will run the process you named there. (But it's much more common to use docker run to do both things together.)
If you've stopped a container with docker stop, docker start will run its process again. (But typically you'll want to docker rm the container once you've stopped it.)
Your question and other comments hint at using an interactive shell in an unmodified Alpine container. Neither is a typical practice. Usually you'll take some complete application and its dependencies and package it into an image, and docker run will run that complete packaged application. Tutorials like Docker's Build and run your image go through this workflow in reasonable detail.
My general day-to-day workflow involves building and testing a program outside of Docker. Once I believe it works, then I run docker build and docker run, and docker rm the container once I'm done. I rarely run docker exec: it is a useful debugging tool but not the standard way to interact with a process. docker start isn't something I really ever run.
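The two unusual docker start cases above, sketched as a transcript (the image and container name are just examples):

```shell
# case 1: create now, start later (docker run does both in one step)
docker create --name job1 alpine echo hello
docker start -a job1     # -a attaches, so you see the output

# case 2: re-run a stopped container's process
docker start -a job1     # runs `echo hello` again

# more typical: remove the container once you're done with it
docker rm job1
```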

A Windows container cannot be stopped successfully

I use the dotnet3.5 image to run containers on Win10 with Docker Desktop 2.1.0.1 (37199). Sadly, I found that after I had created a container, done nothing to it, and left it alone for 4 days, the container automatically became unstoppable. The snapshot tells the story.
The container seems to exist when I run docker ps -a, but I cannot get into it by docker exec. And since I cannot stop it (docker stop hangs after I run docker stop container2), I cannot rm the container.
The only way to resolve this issue is to restore Docker Desktop's factory settings.
By the way, although in the snapshot the running image is aspnet:3.5-windowsservercore-10.0.14393.953, this issue also happens with the aspnet:3.5 image.
Does anyone have good ideas to the unstoppable container? Any suggestions are welcome.
The command used above is incorrect; there is a difference between these commands and their options. docker ps or docker container ls gives you the list of currently running (active) containers.
Adding -a lists all containers created to date, both active and stopped ones.
In your case, the container is not actually running, and you are trying to access a non-existent one, which is why the command is stuck.

Which process does `docker attach` attach to?

The doc says
docker attach: Attach local standard input, output, and error streams to a running container
From my understanding, a running container can have many running processes, including those started with docker exec. So when using docker attach, which process am I attaching to, exactly?
It attaches your terminal's standard input, output, and error to the ENTRYPOINT/CMD process, displaying its ongoing output or letting you control it interactively.
So you do not choose a specific process; it is always that one.
docker attach adds:
You can attach to the same contained process multiple times simultaneously, from different sessions on the Docker host.
Still the same process though.
Whatever process has pid 1 in the container. If the image declared an ENTRYPOINT in the Dockerfile (or if you docker run --entrypoint ...), it's that program; if not, it's the command passed on the docker run command line or the Dockerfile's CMD.
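A quick transcript illustrating the point (the names pinger and the commands are example choices):

```shell
docker run -d --name pinger alpine ping 127.0.0.1   # `ping` is PID 1 in the container
docker exec -d pinger sleep 300                     # extra process; attach ignores it
docker attach pinger                                # streams the output of `ping` (PID 1)
```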

Using docker swarm to execute singular containers rather than "services"

I really enjoy the concept of having a cluster of docker machines available to execute docker services. I also like the additional features not available to singular docker containers (such as docker secret).
But I really have no need for long-running services. My use case is simply to execute a bash script that uses the docker swarm to take in an arbitrary number of finite commands and execute each as a docker container running the same docker image, while using the secrets loaded up with docker swarm's secrets.
Can I do this?
I do not want to have this container be "long running". I want it to run, and then exit with the output when the bash script loaded into the container is finished.
You can apply the ideas presented in "One-shot containers on Docker Swarm" from Alex Ellis.
You still need to create a service, but with the right restart policy.
For instance, for a quick web server:
docker service create --restart-condition=none --name crawler1 -e url=http://blog.alexellis.io -d crawl_site alexellis2/href-counter
(--restart-condition, not --restart-policy, as commented by ethergeist)
So by setting the restart condition to none, the container will be scheduled somewhere in the swarm as a task. The container will execute, and then when done, it will exit.
If the container fails to start for a valid reason then the restart policy will mean the application code never executes. It would also be ideal if we could immediately return the exit code (if non-zero) and the accompanying log output, too.
For the last part, use his tool: alexellis/jaas.
Run your first one-shot container:
# jaas -rm -image alexellis2/cows:latest
The -rm flag removes the Swarm service that was used to run your container.
The exit code from your container will also be available, you can check it with echo $?.
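Without jaas, you can also inspect the finished task yourself. The service name matches the example above; the inspect format string is an assumption based on the swarm task structure, and <task-id> is a placeholder you'd take from the first command:

```shell
# list the service's tasks, including completed ones
docker service ps --no-trunc crawler1

# read a task's exit code
docker inspect --format '{{.Status.ContainerStatus.ExitCode}}' <task-id>
```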
