How to solve "Rather than invoking init scripts...." on Ubuntu 14 - docker

I'm using a docker container of Ubuntu 14.
$ cat /etc/lsb-release # this is in the container
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.5 LTS"
When I type service cron start in the container, I get the following error. The error doesn't make sense to me, because I would expect it only when invoking /etc/init.d/cron start directly.
$ service cron start
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service cron start
Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the start(8) utility, e.g. start cron
When I type /etc/init.d/cron start in the container, the same error is shown as for service cron start.
Could you tell me how to solve the error and how to start cron in the docker container?

To run the cron daemon you can simply invoke cron:
root@89bdd8666c95:# cron
root@89bdd8666c95:# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 14:56 ? 00:00:00 bash
root 88 1 0 15:02 ? 00:00:00 cron
root 89 1 0 15:02 ? 00:00:00 ps -ef
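If you want cron to be the container's main process instead, a minimal Dockerfile sketch (assuming an Ubuntu 14.04 base image; the apt-get install is only needed if cron isn't already present in the image) would be:
FROM ubuntu:14.04
# install cron in case the base image does not already ship it
RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*
# run cron in the foreground so it stays alive as the container's main process
CMD ["cron", "-f"]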

Related

Get stats for all processes in a container

I am interested in getting the cpu and mem info for each individual process in a container. I know docker stats gives me the info for the entire container and docker container top tells me the processes in a container. Is it possible to combine these two actions and get the stats for each process in a container?
One option would be to use the ps command inside the container. I looked into using htop, but I believe that's designed to be used interactively:
# start example ubuntu container
docker run -d --name ubuntu ubuntu:latest tail -f /dev/null
# execute ps aux inside container
docker exec -it ubuntu ps aux
Output:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 2548 516 ? Ss 15:41 0:00 tail -f /dev/nu
root 7 0.0 0.1 5892 2924 pts/0 Rs+ 15:42 0:00 ps aux
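Another option, sketched here, is to let docker top forward format options to the host's ps, so you get per-process CPU and memory columns without exec-ing into the container (the available column names depend on the host's ps implementation):
# per-process CPU and memory usage for the container named "ubuntu"
docker top ubuntu -eo pid,pcpu,pmem,args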

'docker stop' for crond times out

I'm trying to understand why my Docker container does not stop gracefully and just times out. The container is running crond:
FROM alpine:latest
ADD crontab /etc/crontabs/root
RUN chmod 0644 /etc/crontabs/root
CMD ["crond", "-f"]
And the crontab file is:
* * * * * echo 'Working'
# this empty line required by cron
Built with docker build . -t periodic:latest
And run with docker run --rm --name periodic periodic:latest
This is all good, but when I try to docker stop periodic from another terminal, it doesn't stop gracefully; the timeout kicks in and it is killed abruptly. It's like crond isn't responding to the SIGTERM.
crond is definitely PID 1
/ # ps
PID USER TIME COMMAND
1 root 0:00 crond -f
6 root 0:00 ash
11 root 0:00 ps
However, if I do this:
docker run -it --rm --name shell alpine:latest ash and
docker exec -it shell crond -f in another terminal, then I can kill crond from the first shell with SIGTERM, so I know it can be stopped with SIGTERM.
Thanks for any help.
Adding an init process to the container (init: true in docker-compose.yml) solved the problem.
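For reference, a minimal docker-compose.yml sketch of that fix; the service name and compose file version here are assumptions, not taken from the original setup:
version: "3.7"
services:
  periodic:
    build: .
    init: true   # runs tini as PID 1, which forwards SIGTERM to crond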
EDIT: I read this https://blog.thesparktree.com/cron-in-docker to understand the issues and solutions around running cron in Docker. From this article:
"Finally, as you’ve been playing around, you may have noticed that it’s difficult to kill the container running cron. You may have had to use docker kill or docker-compose kill to terminate the container, rather than using ctrl + C or docker stop.
Unfortunately, it seems like SIGINT is not always correctly handled by cron implementations when running in the foreground.
After researching a couple of alternatives, the only solution that seemed to work was using a process supervisor (like tini or s6-overlay). Since tini was merged into Docker 1.13, technically, you can use it transparently by passing --init to your docker run command. In practice you often can’t because your cluster manager doesn’t support it."
Since my original post and answer, I've migrated to Kubernetes, so init in docker-compose.yml won't work. My container is based on Debian Buster, so I've now installed tini in the Dockerfile, and changed the ENTRYPOINT to ["/usr/bin/tini", "--", "/usr/local/bin/entrypoint.sh"] (my entrypoint.sh finally does exec cron -f)
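A rough sketch of that Dockerfile (package names and paths are assumptions; on Debian Buster the tini package installs /usr/bin/tini):
FROM debian:buster-slim
RUN apt-get update && apt-get install -y --no-install-recommends cron tini \
    && rm -rf /var/lib/apt/lists/*
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# tini forwards SIGTERM to entrypoint.sh, which ends with `exec cron -f`
ENTRYPOINT ["/usr/bin/tini", "--", "/usr/local/bin/entrypoint.sh"]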
The key point is that you cannot stop a PID 1 process in Docker on its own: stopping it means the container itself stops (or is removed, if it was launched with --rm).
That's why, if you run -it ... ash, the shell has PID 1 and you can kill the other processes.
If you want cron to be killable without stopping/killing the container, launch another process as the entrypoint:
Launch cron after the docker entrypoint (for example, run tail -F /dev/null as the CMD, start the container with docker run -d yourdocker, and then launch cron with service cron start), as sketched below.
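A rough sketch of that pattern (the image and container names here are illustrative, and it assumes a Debian/Ubuntu based image where service cron start exists):
# the image's CMD keeps PID 1 as a harmless long-running process:
#   CMD ["tail", "-F", "/dev/null"]
docker run -d --name cron-demo yourimage
docker exec cron-demo service cron start   # cron runs as a child process, not as PID 1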

How to log all the processes running inside a Docker container?

After having logged into the container using the command -
docker exec -it <container_name>
How do I check for all the processes running inside the container? Is "ps aux" the correct way to do it? Are there any better alternatives/approaches?
You can use the dedicated command docker top to list the processes in a Docker container, regardless of the operating system inside the container.
docker top <container>
It is possible to show all the processes running inside a container without logging in to its terminal, by using the following command. It shows the same thing you would see with ps -eaf inside the container, so just add that to docker exec.
$ sudo docker exec -it test1 ps -eaf
PID USER TIME COMMAND
1 root 0:00 sh
7 root 0:00 sh
60 root 0:00 /bin/sh
67 root 0:00 /bin/sh
84 root 0:00 ps -eaf
Like it was mentioned, if you are already inside of a container, then just use ps -eaf command to see the running processes.
By the way, it is recommended to have one user application / process per container.
Extending from the answer of @Slawomir:
With ps options, the syntax is docker top [--help] CONTAINER [ps OPTIONS]:
docker top <container_id> -eo pid,cmd

Debugging a bash script in a container gives a process on the host?

I start a container with the name pg. I wanted to debug a bash script in the container, so I installed bashdb in the container and started it:
root@f8693085f270:/# /usr/share/bin/bashdb docker-entrypoint.sh postgres
I go back to the host, and do:
[eric@almond volume]$ docker exec -ti pg bash
root@f8693085f270:/# ps ajxw
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
0 1 1 1 ? 3746 Ss 0 0:00 bash
1 3746 3746 1 console 3746 S+ 0 0:00 /bin/bash
[eric@almond postgres]$ ps ajxw | grep docker
30613 3702 3702 30613 pts/36 3702 Sl+ 1000 0:01 docker run --name pg -v /home/eric/tmp/bashdb:/bashdb -it postgres bash
3760 8049 8049 3760 pts/19 8049 S+ 0 0:00 /bin/bash /usr/share/bin/bashdb docker-entrypoint.sh postgres
4166 8294 8294 4166 pts/9 8294 Sl+ 1000 0:00 docker exec -ti pg bash
So in the container I see a TTY entry console, which I have never seen before, and I see the debugging entry in ps on the host!
What is going on?
Docker isolates a container from the host, but it doesn't isolate the host from the container. That means the host can see the processes running inside containers, though from a different namespace, so the PIDs will be different.
Attaching to console appears to be something from bashdb. It has automatic detection for the tty to direct output to, and may be getting thrown off by the Docker isolation.
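For example (a sketch using the container name pg from the question), you can map the container's main process to the PID the host sees:
# host PID of the container's main process
docker inspect --format '{{.State.Pid}}' pg
# inspect that same process from the host's namespace
ps -fp "$(docker inspect --format '{{.State.Pid}}' pg)"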

Stop a running Docker container by sending SIGTERM

I have a very very simple Go app listening on port 8080
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "text/plain")
    w.WriteHeader(200)
    w.Write([]byte("Hello World!"))
})
log.Fatal(http.ListenAndServe(":8080", http.DefaultServeMux))
I install it in a Docker container and start it like so:
FROM golang:alpine
ADD . /go/src/github.com/myuser/myapp
RUN go install github.com/myuser/myapp
ENTRYPOINT ["/go/bin/myapp"]
EXPOSE 8080
I then run the container using docker run:
docker run --publish 8080:8080 first-app
I expect that, like most programs, I can send a SIGTERM to the process running docker run and this will cause the container to stop running. I observe that sending SIGTERM has no effect, and instead I need to use a command like docker kill or docker stop.
Is this intended behavior? I've asked in the forums and on IRC and gotten no answer.
Any process run with docker must handle signals itself.
Alternatively, use the --init flag to run tini as the init process with PID 1.
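For example, a sketch based on the run command from the question:
docker run --init --publish 8080:8080 first-app
With --init, tini runs as PID 1, receives the SIGTERM from docker stop, and forwards it to the app.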
The sh shell can become the PID 1 process depending on how you specify a command (CMD).
Detail
A SIGTERM is propagated by the docker run command to the Docker daemon by default, but it will not take effect unless the signal is specifically handled in the main process being run by Docker.
The first process you run in a container will have PID 1 in that container's context. This is treated as a special process by the Linux kernel. It will not be sent a signal unless the process has a handler installed for that signal. It is also PID 1's job to forward signals on to other child processes.
docker run and other commands are API clients for the Remote API hosted by the docker daemon. The docker daemon runs as a separate process and is the parent of the commands you run inside a container context. This means that there is no direct sending of signals between run and the daemon in the standard unix manner.
The docker run and docker attach commands have a --sig-proxy flag that defaults signal proxying to true. You can turn this off if you want.
docker exec does not proxy signals.
In a Dockerfile, be careful to use the "exec form" when specifying CMD and ENTRYPOINT defaults if you don't want sh to become the PID 1 process (Kevin Burke):
CMD [ "executable", "param1", "param2" ]
Signal Handling Go Example
Using the sample Go code here: https://gobyexample.com/signals
Run both a regular process that doesn't handle signals and the Go daemon that traps signals, and put them in the background. I'm using sleep as it's easy and doesn't handle "daemon" signals.
$ docker run busybox sleep 6000 &
$ docker run gosignal &
With a ps tool that has a "tree" view, you can see the two distinct process trees: one for the docker run processes under sshd, the other for the actual container processes under the docker daemon.
$ pstree -p
init(1)-+-VBoxService(1287)
|-docker(1356)---docker-containe(1369)-+-docker-containe(1511)---gitlab-ci-multi(1520)
| |-docker-containe(4069)---sleep(4078)
| `-docker-containe(4638)---main(4649)
`-sshd(1307)---sshd(1565)---sshd(1567)---sh(1568)-+-docker(4060)
|-docker(4632)
`-pstree(4671)
The details of the docker host's processes:
$ ps -ef | grep "docker r\|sleep\|main"
docker 4060 1568 0 02:57 pts/0 00:00:00 docker run busybox sleep 6000
root 4078 4069 0 02:58 ? 00:00:00 sleep 6000
docker 4632 1568 0 03:10 pts/0 00:00:00 docker run gosignal
root 4649 4638 0 03:10 ? 00:00:00 /main
Killing
I can't kill the docker run busybox sleep command:
$ kill 4060
$ ps -ef | grep 4060
docker 4060 1568 0 02:57 pts/0 00:00:00 docker run busybox sleep 6000
I can kill the docker run gosignal command that has the trap handler:
$ kill 4632
$
terminated
exiting
[2]+ Done docker run gosignal
Signals via docker exec
If I docker exec a new sleep process in the already running sleep container, I can send a ctrl-c and interrupt the docker exec itself, but that doesn't forward to the actual process:
$ docker exec 30b6652cfc04 sleep 600
^C
$ docker exec 30b6652cfc04 ps -ef
PID USER TIME COMMAND
1 root 0:00 sleep 6000 <- original
97 root 0:00 sleep 600 <- execed still running
102 root 0:00 ps -ef
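For completeness, a minimal sketch (not taken from the answer above) of the signal-trapping pattern from the gobyexample link, which is what produces the "terminated" / "exiting" output shown earlier:
package main

import (
    "fmt"
    "os"
    "os/signal"
    "syscall"
)

func main() {
    // buffered channel so the signal notifier never blocks
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

    fmt.Println("awaiting signal")
    sig := <-sigs    // blocks until SIGINT or SIGTERM arrives
    fmt.Println(sig) // prints "terminated" for SIGTERM
    fmt.Println("exiting")
}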
So there are two factors at play here:
1) If you specify a string for an entrypoint, like this:
ENTRYPOINT /go/bin/myapp
Docker runs the command with /bin/sh -c 'command'. This intermediate shell gets the SIGTERM, but doesn't send it on to the running server app.
To avoid the intermediate layer, specify your entrypoint as an array of strings.
ENTRYPOINT ["/go/bin/myapp"]
2) I built the app I was trying to run with the following string:
docker build -t first-app .
This tagged the image with the name first-app. Unfortunately, when I tried to rebuild/rerun the container, I ran:
docker build .
which didn't update that tag, so my changes weren't being applied.
Once I did both of those things, I was able to kill the process with ctrl+c, and bring down the running container.
A very comprehensive description of this problem and the solutions can be found here:
https://vsupalov.com/docker-compose-stop-slow
In my case, my app expects a SIGTERM signal for graceful shutdown but didn't receive it, because the process was started by a bash script that was called from the Dockerfile in this form: ENTRYPOINT ["/path/to/script.sh"],
so the script didn't propagate the SIGTERM to the app.
The solution was to use exec in the script to run the command that starts the app:
e.g. exec java -jar ...
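A minimal sketch of such an entrypoint script (the jar path is illustrative):
#!/bin/sh
# exec replaces the shell with the app, so the app becomes the container's
# main process and receives SIGTERM directly from docker stop
exec java -jar /app/app.jar "$@"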
