Map process id of application on docker container with process id on host - docker

I am running an application in a Docker container only, not on the host machine. The application has a process ID inside the container, and it also has a process ID on the host. The two process IDs are different. How can I see, from the host, the process ID of the application running in the container? How can I map the process ID of the application inside the container to its process ID on the host? I searched the internet but could not find the right set of commands.

Running the docker container top command should get you the host PID of the container's main process (PID 1 inside the container).
$ docker container top cf1b
UID PID PPID C STIME TTY TIME CMD
root 3289 3264 0 Aug24 pts/0 00:00:00 bash
root 9989 9963 99 Aug24 ? 6-07:24:43 java -javaagent:/apps/docker-custom/newrelic/newrelic.jar -Xmx4096m -Xms4096m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:-TieredCompilation -XX:+ParallelRefProcEnabled -jar /apps/service/app.jar
So in this case PID 1 in my container maps to PID 9989 on the host.
If a process is indeed ONLY in your container, that becomes more challenging. You can use tools like nsenter to peek into the namespaces, but if you have exec privileges on your container, that achieves the same thing: the docker container top command on the host, combined with the ps command inside the container, can give you an idea of what is happening.
If you can clarify what your end goal is, we might be able to provide more clear guidance.
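If all you need is the host PID of the container's main process, docker inspect can print it directly from Docker's own state data. A small sketch (the helper name is mine, and cf1b is just the container ID from the output above):

```shell
# pid_of_container prints the host PID of a container's main process
# (PID 1 inside the container), read from Docker's state data.
pid_of_container() {
  docker inspect --format '{{.State.Pid}}' "$1"
}

# Guarded example call, in case the docker CLI is not installed here:
if command -v docker >/dev/null 2>&1; then
  pid_of_container cf1b || true
fi
```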

In order to get the mapping between a container process ID and a host process ID, one can run ps -ef inside the container and docker top <container> on the host. The CMD column, present in both outputs, lets you match the entries. Below is sample output from my environment:
container1:/$ ps -ef
UID PID PPID C STIME TTY TIME CMD
2033 10 0 0 11:08 pts/0 00:00:00 postgres -c config_file=/etc/postgresql/postgresql_primary.conf
host1# docker top warehouse_db
UID PID PPID C STIME TTY TIME CMD
bbharati 11677 11660 0 11:08 pts/0 00:00:00 postgres -c config_file=/etc/postgresql/postgresql_primary.conf
As we can see, the container process with PID=10 maps to the host process with PID=11677.
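On kernels 4.1 and newer there is also a direct way to read this mapping from the host: the NSpid field in /proc/<host_pid>/status lists the process's PID in every PID namespace it belongs to, outermost (host) first. For the postgres example above, /proc/11677/status on the host would show both 11677 and 10 on one line. You can check the field's format against your current shell:

```shell
# NSpid shows one PID per nested PID namespace, outermost first.
# For a containerized process this reads "host_pid container_pid".
grep NSpid "/proc/$$/status"
```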


Get the PID on the host from inside the container?

Before Linux Kernel v4, I was able to obtain the host PID from inside the docker container from the process scheduling information.
For instance, if I run sleep command inside the container and my local PID is 37, then I can check the actual PID on the host via:
root@helloworld-595777cb8b-gjg4j:/# head /proc/37/sched
sleep (27062, #threads: 1)
I can verify on the host that the PID 27062 corresponds to the process within the container.
root 27062 0.0 0.0 4312 352 pts/0 S 16:29 0:00 sleep 3000
I have tried this on RHEL7 (Kernel: Linux 3.10) with Docker version 17.09.0-ce.
I am not able to reproduce the same result on RHEL8 (Kernel: Linux 4.18) with Docker version: 20.10. In fact, I always get the local PID from the scheduling information.
/ # head /proc/8/sched
sleep (8, #threads: 1)
I might be wrong, but my assumption is that something changed in the kernel that forbids obtaining the host PID this way?
So the question is how to obtain the host PID from within the container?
The bug (or "feature" if you prefer) that allowed the host PID to be discovered from /proc/PID/sched in the container was fixed (or "broken" if you prefer) in Linux kernel 4.14 by commit 74dc3384fc79 ("sched/debug: Use task_pid_nr_ns in /proc/$pid/sched").
As a result of the change, the container cannot get the host PID of a process (at least via /proc/PID/sched) if PID namespaces are in use.
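If you control how the container is launched and truly need matching PIDs, one workaround (with an obvious isolation trade-off, so for debugging only) is to not use a separate PID namespace at all, so there is no translation to undo:

```shell
# run_with_host_pids starts a container sharing the host's PID namespace:
# PIDs seen inside the container are then identical to PIDs on the host.
run_with_host_pids() {
  docker run --rm --pid=host "$@"
}

# Guarded demo, in case the docker CLI is not installed here:
if command -v docker >/dev/null 2>&1; then
  run_with_host_pids alpine ps | head -n 5 || true
fi
```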

Create a single container instead of 3 different containers

I am setting up a docker-compose file which creates 3 different containers, but I want to combine those 3 containers into a single container/image instead of setting them up as multiple containers at deployment.
My current list of containers is as follows:
my main container containing my code, built using a Dockerfile
the other 2 are Redis and Postgres containers, which I want to combine into 1.
Is there any way to do so?
First of all, running redis, postgres and your "main container" in one container is NOT best practice.
Typically you should have 3 separate containers (single app per container) communicating over the network. Sometimes we want to run two or more lightweight services inside the same container but redis and postgres aren't such services.
I recommend reading: best practices for building containers.
However, it's possible to have multiple services in the same docker container using the supervisord process management system.
I will run both redis and postgres services in one docker container (similar to your issue) to illustrate how it works. This is for demonstration purposes only.
This is a directory structure, we only need Dockerfile and supervisor.conf (supervisord config file):
$ tree example_container/
example_container/
├── Dockerfile
└── supervisor.conf
First, I created a supervisord configuration file with redis and postgres services defined:
$ cat example_container/supervisor.conf
[supervisord]
nodaemon=true
[program:redis]
command=redis-server # command to run redis service
autorestart=true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes = 0
[program:postgres]
command=/usr/lib/postgresql/12/bin/postgres -D /var/lib/postgresql/12/main/ -c config_file=/etc/postgresql/12/main/postgresql.conf # command to run postgres service
autostart=true
autorestart=true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes = 0
user=postgres
environment=HOME="/var/lib/postgresql",USER="postgres"
Next I created a simple Dockerfile:
$ cat example_container/Dockerfile
FROM ubuntu:latest
ARG DEBIAN_FRONTEND=noninteractive
# Installing redis and postgres
RUN apt-get update && apt-get install -y supervisor redis-server postgresql-12
# Copying supervisor configuration file to container
ADD supervisor.conf /etc/supervisor.conf
# Initializing redis and postgres services using supervisord
CMD ["supervisord","-c","/etc/supervisor.conf"]
And then I built the docker image:
$ docker build -t example_container:v1 .
Finally I ran and tested docker container using the image above:
$ docker run --name multi_services -dit example_container:v1
472c7b2eac7441360126f8fcd0cc80e0e63ac3039f8195715a3a400f6288a236
$ docker exec -it multi_services bash
root@472c7b2eac74:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.7 0.1 27828 23372 pts/0 Ss+ 10:04 0:00 /usr/bin/python3 /usr/bin/supervisord -c /etc/supervisor.conf
postgres 8 0.1 0.1 212968 28972 pts/0 S 10:04 0:00 /usr/lib/postgresql/12/bin/postgres -D /var/lib/postgresql/12/main/ -c config_file=/etc/postgresql/12/main/postgresql.conf
root 9 0.1 0.0 47224 6216 pts/0 Sl 10:04 0:00 redis-server *:6379
...
root@472c7b2eac74:/# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 9/redis-server *:6
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 8/postgres
tcp6 0 0 :::6379 :::* LISTEN 9/redis-server *:6
As you can see, it is possible to have multiple services in a single container, but this is NOT a recommended approach and should be used ONLY for testing.
Regarding Kubernetes, you can group your containers in a single pod, as a deployment unit.
A Pod is the smallest deployable unit of computing that you can create and manage in Kubernetes.
It is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.
A Pod's contents are always co-located and co-scheduled, and run in a shared context.
That would be more helpful than trying to merge the containers into one.
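For illustration, a minimal two-container Pod manifest might look like the sketch below (the pod name, container names, and image tags are placeholders, not taken from the question). It is written via a heredoc so it can be fed straight to kubectl apply -f:

```shell
# Sketch of a Pod grouping an app container with a redis sidecar.
# Containers in the same Pod share the network namespace, so the app
# can reach redis on localhost:6379.
cat <<'EOF' > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-redis
spec:
  containers:
  - name: main-app
    image: my-app:latest   # placeholder for your own image
  - name: redis
    image: redis:6
    ports:
    - containerPort: 6379
EOF
# Then: kubectl apply -f pod.yaml
```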

how to change configuration of freeradius-server in docker container?

I'm trying to build a FreeRADIUS server using Docker and pulled the image "freeradius/freeradius-server". The first time, I used the given command
docker run --name my-radius -t -d freeradius/freeradius-server -X
to create a container and successfully started debug mode. But I didn't know how to quit, so I used Ctrl+C to stop the container. Then I used the commands below to get into the container, wanting to start debug mode again so that I could change configuration or parameters.
docker start my-radius
docker exec -it my-radius /bin/bash
I got into the container and ran freeradius -X but it failed. It printed:
Failed binding to auth address 127.0.0.1 port 18120 bound to server inner-tunnel: Address already in use
/etc/freeradius/sites-enabled/inner-tunnel[33]: Error binding to port for 127.0.0.1 port 18120
I searched Google for solutions but found none. I guess the RADIUS server started automatically, so that address 127.0.0.1 and port 18120 were already in use. But I don't know how to stop it inside the container.
The official FreeRADIUS docker image will start FreeRADIUS when the container starts. This means that if you start the container and then exec a shell into it, FreeRADIUS will already be running.
The container will exit as soon as the FreeRADIUS process stops, meaning it is not possible to start the container in this way, stop FreeRADIUS running, and then continue to use the container.
In this situation, trying to run FreeRADIUS a second time in another shell will fail because the ports are already open, as you have discovered.
This can be seen thus:
$ docker run --name my-radius -d freeradius/freeradius-server
106cdbc81e8e5c0257f22bebad221ed1b4ba0a14f40ce1e4110ec388380c7e62
$ docker exec -it my-radius /bin/bash
root@106cdbc81e8e:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
freerad 1 0 1 23:10 ? 00:00:00 freeradius -f
root 12 0 1 23:10 pts/0 00:00:00 /bin/bash
root 22 12 0 23:10 pts/0 00:00:00 ps -ef
root@106cdbc81e8e:/# exit
exit
$ docker stop my-radius
my-radius
$ docker rm my-radius
my-radius
$
To be able to run FreeRADIUS yourself you can do two things. Firstly, don't start the container in the background, but start it in the foreground with FreeRADIUS in debug mode. The docker entrypoint will let you pass arguments directly to the daemon. This is the easiest way if you don't need to actually do anything inside the container, but just run FreeRADIUS in debug mode:
$ docker run --name my-radius -it freeradius/freeradius-server -X
FreeRADIUS Version 3.0.21
Copyright (C) 1999-2019 The FreeRADIUS server project and contributors
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE
...
Listening on auth address * port 1812 bound to server default
Listening on acct address * port 1813 bound to server default
Listening on auth address :: port 1812 bound to server default
Listening on acct address :: port 1813 bound to server default
Listening on auth address 127.0.0.1 port 18120 bound to server inner-tunnel
Listening on proxy address * port 38640
Listening on proxy address :: port 49445
Ready to process requests
^C$
(note: hit Ctrl-C to quit).
The alternative is to start it in the background, but instead of running FreeRADIUS run some other process. You can then exec into the container and run FreeRADIUS manually. This means you get a full shell inside the container without FreeRADIUS already running. For instance:
$ docker run --name my-radius -d freeradius/freeradius-server sleep 999999999999
23b5ddd4825a31a8fb417e1594028c6533267be4ff20a448d3844203b805dbd9
$ docker exec -it my-radius /bin/bash
root@23b5ddd4825a:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 23:16 ? 00:00:00 sleep 999999999999
root 7 0 0 23:17 pts/0 00:00:00 /bin/bash
root 17 7 0 23:17 pts/0 00:00:00 ps -ef
root@23b5ddd4825a:/# freeradius -X
FreeRADIUS Version 3.0.21
Copyright (C) 1999-2019 The FreeRADIUS server project and contributors
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE
...
Listening on proxy address * port 46662
Listening on proxy address :: port 40284
Ready to process requests
^Croot@23b5ddd4825a:/# exit
exit
$ docker container kill my-radius
my-radius
$ docker container rm my-radius
my-radius
The sleep command used here will obviously quit at some point, so use a number large enough that it keeps running as long as you need; when that process exits, the container will shut down.
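A convenient variant, assuming the image's sleep is the GNU coreutils one (the image is Debian/Ubuntu-based), is sleep infinity, which simply never returns. The behaviour can be demonstrated locally:

```shell
# "sleep infinity" blocks until killed; running it under timeout here just
# demonstrates that it is still alive when the timeout fires (timeout itself
# exits with code 124 in that case).
timeout 0.2 sleep infinity || echo "timeout exit code: $? (124 = sleep was still running)"
```

So the container could equally be started with docker run --name my-radius -d freeradius/freeradius-server sleep infinity, with no magic number to worry about.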

Where exactly do the logs of kubernetes pods come from (at the container level)?

I'm looking to redirect some logs from a command run with kubectl exec to that pod's logs, so that they can be read with kubectl logs <pod-name> (or really, /var/log/containers/<pod-name>.log). I can see the logs I need as output when running the command, and they're stored inside a separate log directory inside the running container.
Redirecting the output (i.e. >> logfile.log) to the file which I thought was mirroring what is in kubectl logs <pod-name> does not update that container's logs, and neither does redirecting to stdout.
When calling kubectl logs <pod-name>, my understanding is that the kubelet gets them from its internal /var/log/containers/ directory. But what determines which logs are stored there? Is it the same process as the way logs get stored in any other docker container?
Is there a way to examine/trace the logging process, or determine where these logs are coming from?
Logs from the STDOUT and STDERR of containers in the pod are captured and stored inside files in /var/log/containers. This is what is presented when kubectl logs is run.
In order to understand why output from commands run by kubectl exec is not shown by kubectl logs, let's look at how it all works with an example:
First, launch a pod running ubuntu that sleeps forever:
$> kubectl run test --image=ubuntu --restart=Never -- sleep infinity
Exec into it:
$> kubectl exec -it test bash
Seen from inside the container, it is the STDOUT and STDERR of PID 1 that are captured. When you kubectl exec into the container, a new process is created alongside PID 1:
root@test:/# ps -auxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 7 0.0 0.0 18504 3400 pts/0 Ss 20:04 0:00 bash
root 19 0.0 0.0 34396 2908 pts/0 R+ 20:07 0:00 \_ ps -auxf
root 1 0.0 0.0 4528 836 ? Ss 20:03 0:00 sleep infinity
Redirecting to STDOUT does not work because /dev/stdout is a symlink to the stdout of the process accessing it (/proc/self/fd/1 rather than /proc/1/fd/1).
root@test:/# ls -lrt /dev/stdout
lrwxrwxrwx 1 root root 15 Nov 5 20:03 /dev/stdout -> /proc/self/fd/1
In order to see the logs from commands run with kubectl exec, the output needs to be redirected to the streams that are captured by the kubelet (STDOUT and STDERR of PID 1). This can be done by redirecting output to /proc/1/fd/1.
root@test:/# echo "Hello" > /proc/1/fd/1
Exiting the interactive shell and checking the logs with kubectl logs should now show the output:
$> kubectl logs test
Hello

SCADA LTS - HTTP Status 404

After starting a SCADA LTS Docker container as suggested on https://github.com/SCADA-LTS/Scada-LTS with the following command:
docker run -it -e DOCKER_HOST_IP=$(docker-machine ip) -p 81:8080 scadalts/scadalts /root/start.sh
...the container works well for some time, and then suddenly an "HTTP Status 404" error is shown, like the following:
http://[IP]/ScadaBR/
HTTP Status 404 - /ScadaBR/
type Status report
message /ScadaBR/
description The requested resource is not available.
Apache Tomcat/7.0.85
Where [IP] is the default Docker IP address and port; most of the time it is localhost:81.
Any idea how to solve it?
Thank you in advance!
TL;DR
After some time running, the MySQL service dies. It is necessary to restart it manually with this:
docker exec scada service mysql restart
docker exec scada killall tail
DETAILED REPORT
When the error is shown, you can check whether all the services are running in the container (in this case named 'scada'):
>docker exec scada ps -A
PID TTY TIME CMD
1 ? 00:00:00 start.sh
790 ? 01:00:22 java
791 ? 00:01:27 tail
858 ? 00:00:00 ps
As can be seen, no MySQL service is running. This explains why Tomcat is running but SCADA-LTS isn't.
You can restart MySQL service inside the container with:
docker exec scada service mysql restart
After that, SCADA-LTS is still down and you have to restart Tomcat, which can be done this way:
docker exec scada killall tail
After a minute or less, all the services are running:
>docker exec scada ps -A
PID TTY TIME CMD
1 ? 00:00:00 start.sh
43 ? 00:00:00 mysqld_safe
398 ? 00:00:00 mysqld
481 ? 00:00:31 java
482 ? 00:00:00 sleep
618 ? 00:00:00 ps
Now SCADA-LTS is running!
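To avoid doing this by hand every time, the two commands can be wrapped in a small watchdog to run from cron on the host. A sketch only: the container name scada and the mysqld process name are taken from the output above, so verify them in your own setup:

```shell
#!/bin/sh
# Restart MySQL inside the "scada" container whenever mysqld is not running,
# then kill tail so the container's start script brings Tomcat back up.
restart_if_down() {
  if ! docker exec scada ps -A | grep -q mysqld; then
    docker exec scada service mysql restart
    docker exec scada killall tail
  fi
}

# Guarded invocation, in case docker is not present here:
if command -v docker >/dev/null 2>&1; then
  restart_if_down || true
fi
```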
