docker system df -v output as JSON

I am doing some work with Docker container management and need to get container status.
My current approach is to connect over an SSH client and execute shell commands to grab the data.
For example, I can get container stats by executing:
docker stats --no-stream --format '{"container":"{{ .Name }}","memory":{"raw":"{{ .MemUsage }}","percent":"{{ .MemPerc }}"},"cpu":"{{ .CPUPerc }}","networkIO":"{{.NetIO}}","BlockIO":"{{.BlockIO}}"}'
output:
{"container":"postgresql","memory":{"raw":"255.4MiB / 31.21GiB","percent":"0.80%"},"cpu":"0.00%","networkIO":"1.03GB / 476MB,"BlockIO":"545MB / 7.67GB"}
{"container":"pgadmin","memory":{"raw":"146.1MiB / 31.21GiB","percent":"0.46%"},"cpu":"0.03%","networkIO":"26.2kB / 0B,"BlockIO":"200MB / 8.19kB"}
{"container":"pis_middle_layer_flask","memory":{"raw":"849.9MiB / 31.21GiB","percent":"2.66%"},"cpu":"13.48%","networkIO":"26.4kB / 0B,"BlockIO":"65.9MB / 0B"}
So how can I get similar output from docker system df -v?
I want to get each container's size and its volume sizes.
I tried the same kind of command:
docker system df -v --format '{"container":"{{ .Name }}","memory":{"raw":"{{ .MemUsage }}","percent":"{{ .MemPerc }}"},"cpu":"{{ .CPUPerc }}","networkIO":"{{.NetIO}}","BlockIO":"{{.BlockIO}}"}'
but it failed with this error:
{"container":"template: :1:17: executing "" at <.Name>: can't evaluate field Name in type *formatter.diskUsageContext
I know I'm using the wrong Go template fields, but I really can't find any documentation about them.

All right... just got it:
docker system df -v --format "{{ json . }}"
This command returns JSON, so I can parse it with json.loads.
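For completeness, a minimal sketch of consuming that output from the shell (jq is an assumption here, not part of the original question; the field names vary by Docker version, so pretty-print first to see which ones carry the container and volume sizes):
# Pretty-print the JSON emitted by docker system df -v to inspect its fields.
docker system df -v --format '{{ json . }}' | jq .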

Related

In docker config.json, how to get the IPAddress of the containers in the custom psFormat?

According to the documentation at: https://docs.docker.com/engine/reference/commandline/cli/#customize-the-default-output-format-for-commands
I want to customize the docker ps output so that it shows the IP of the containers in the table results.
What I've tried so far is:
$ cat ~/.docker/config.json
{
"psFormat": "table {{.ID}}\\t{{.Image}}\\t{{.IPAddress}}\\t{{.Ports}}\\t{{.Names}}"
}
but then it raises this error:
$ docker ps
Template parsing error: template: :1:33: executing "" at <.IPAddress>:
can't evaluate field IPAddress in type *formatter.ContainerContext
I also know that docker inspect accepts a --format argument with roughly the same structure:
$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}} {{.Name}}' my-project_app_1
172.19.0.2 /my-project_app_1
So I also naively tried to copy/paste that format structure into the docker config.json file:
$ cat ~/.docker/config.json
{
"psFormat": "table {{.ID}}\\t{{.Image}}\\t{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}\\t{{.Ports}}\\t{{.Names}}"
}
but then this error shows up:
Template parsing error: template: :1:55: executing "" at <.NetworkSettings.Networks>:
can't evaluate field NetworkSettings in type *formatter.ContainerContext
Question
How would you get the IP of the container in the table formatted output of a custom docker ps command?
System info:
Ubuntu: 18.04.6 LTS
Kernel: 5.4.0-94-generic x86_64 GNU/Linux
Docker: Docker version 20.10.12, build e91ed57
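Not an answer to the psFormat question, but one possible workaround sketch (the template simply reuses the fields from the docker inspect example above, applied to every running container):
# Build a rough name-to-IP listing by piping the running container IDs
# through docker inspect, which does expose NetworkSettings.
docker ps -q | xargs docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'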

who and w commands in CentOS 8 Docker container

While playing with CentOS 8 in a Docker container I found out that the output of the who and w commands is always empty.
[root@9e24376316f1 ~]# who
[root@9e24376316f1 ~]# w
 01:01:50 up  7:38,  0 users,  load average: 0.00, 0.04, 0.00
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
Even when I'm logged in as a different user in a second terminal.
When I try to write to this user, it shows:
[root@9e24376316f1 ~]# write test
write: test is not logged in
Is this because of Docker? Maybe it works in some way that prevents sessions from seeing each other?
Or maybe it's some other issue. I would really appreciate an explanation.
These utilities obtain the information about current logins from the utmp file (/var/run/utmp). You can easily check that in ordinary circumstances (e.g. on the desktop system) this file contains something like the following string (here qazer is my login and tty7 is a TTY where my desktop environment runs):
$ cat /var/run/utmp
tty7:0qazer:0�o^�
while in the container this file is (usually) empty:
$ docker run -it centos
[root@5e91e9e1a28e /]# cat /var/run/utmp
[root@5e91e9e1a28e /]#
Why?
The utmp file is usually modified by programs which authenticate the user and start the session: login(1), sshd(8), lightdm(1). However, the container engine cannot rely on them, as they may be absent from the container file system, so "logging in" and "executing on behalf of" are implemented in the most primitive and straightforward manner, avoiding any reliance on anything inside the container.
When any container is started or any command is exec'd inside it, the container engine just spawns a new process, arranges some security settings, calls setgid(2)/setuid(2) to forcibly (without any authentication) alter the process's UID/GID, and then executes the required binary (the entry point, the command, and so on) within that process.
Say, I start the CentOS container running its main process on behalf of UID 42:
docker run -it --user 42 centos
and then try to execute sleep 1000 inside it:
docker exec -it $CONTAINER_ID sleep 1000
The container engine will perform something like this:
[pid 10170] setgid(0) = 0
[pid 10170] setuid(42) = 0
...
[pid 10170] execve("/usr/bin/sleep", ["sleep", "1000"], 0xc000159740 /* 4 vars */) = 0
There will be no writes to /var/run/utmp, thus it will remain empty, and who(1)/w(1) will not find any logins inside the container.
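A quick way to see this for yourself (the container name utmp-demo is just illustrative):
# Start a throwaway CentOS container and check that who prints nothing
# and that /var/run/utmp stays empty (0 bytes) even while it is running.
docker run -d --name utmp-demo centos sleep 1000
docker exec utmp-demo who
docker exec utmp-demo wc -c /var/run/utmp
docker rm -f utmp-demo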

Issue accessing vespa outside docker container

I installed Docker on a Mac and am trying to run Vespa on Docker following the steps in the link below:
https://docs.vespa.ai/documentation/vespa-quick-start.html
I didn't have any issues until step 4. I can see the vespa container running after step 2, and step 3 returned a 200 OK response.
But step 5 failed to return a 200 OK response. Below is the command I ran in my terminal:
curl -s --head http://localhost:8080/ApplicationStatus
I keep getting
curl: (52) Empty reply from server
whenever I run it without the -s option.
So I tried to look at the listening ports inside my vespa container and I don't see anything for 8080, but I can see 19071 (used in step 3):
➜ ~ docker exec vespa bash -c 'netstat -vatn| grep 8080'
➜ ~ docker exec vespa bash -c 'netstat -vatn| grep 19071'
tcp 0 0 0.0.0.0:19071 0.0.0.0:* LISTEN
The doc below has info related to Vespa ports:
https://docs.vespa.ai/documentation/reference/files-processes-and-ports.html
I'm assuming port 8080 should be active after docker run (step 2 of the quick-start link) and accessible outside the container since port mapping is done.
But I don't see port 8080 active inside the container in the first place.
Am I missing something? Do I need to perform any additional step beyond those mentioned in the quick start? FYI, I installed Jenkins inside Docker and was able to access it outside the container via port mapping, but I'm not sure why it's not working with Vespa. I have been trying for quite some time with no progress. Please advise if I'm missing something here.
You have too little memory for your Docker container: "Minimum 6GB memory dedicated to Docker (the default is 2GB on Macs)." See https://docs.vespa.ai/documentation/vespa-quick-start.html
The deadlock detector warnings and the failure to get configuration from the configuration server (which was likely OOM-killed) indicate that you are too low on memory.
My guess is that your jdisc container had not finished initializing or did not initialize properly. Did you try checking the log?
docker exec vespa bash -c '/opt/vespa/bin/vespa-logfmt /opt/vespa/logs/vespa/vespa.log'
This should tell you if there was something wrong. When it is ready to receive requests you would see something like this:
[2018-12-10 06:30:37.854] INFO : container Container.org.eclipse.jetty.server.AbstractConnector Started SearchServer#79afa369{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
[2018-12-10 06:30:37.857] INFO : container Container.org.eclipse.jetty.server.Server Started #10280ms
[2018-12-10 06:30:37.857] INFO : container Container.com.yahoo.container.jdisc.ConfiguredApplication Switching to the latest deployed set of configurations and components. Application switch number: 0
[2018-12-10 06:30:37.859] INFO : container Container.com.yahoo.container.jdisc.ConfiguredApplication Initializing new set of configurations and components. Application switch number: 1
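Once Docker has enough memory and the container has been re-created, a small poll loop (sketch only; the URL is the one from the quick-start guide) tells you when the jdisc container is ready:
# Wait until the application status endpoint answers with 200 OK.
until curl -s --head http://localhost:8080/ApplicationStatus | grep -q "200 OK"; do
  echo "waiting for the container to come up..."
  sleep 5
done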

docker - driver "devicemapper" failed to remove root filesystem after process in container killed

I am using Docker version 17.06.0-ce on Redhat with devicemapper storage. I am launching a container running a long-running service. The master process inside the container sometimes dies for whatever reason. I get the following error message.
/bin/bash: line 1: 40 Killed python -u scripts/server.py start go
I would like the container to exit and be restarted by Docker. However, the container never exits. If I try to remove it manually I get the following error:
Error response from daemon: driver "devicemapper" failed to remove root filesystem.
After googling, I tried a bunch of things:
docker rm -f <container>
rm -f <path to mount>
umount <path to mount>
All of them result in device is busy. The only remedy right now is to reboot the host system, which is obviously not a long-term solution.
Any ideas?
I had the same problem, and the solution was a real surprise.
So here is the error on docker rm:
$ docker rm 08d51aad0e74
Error response from daemon: driver "devicemapper" failed to remove root filesystem for 08d51aad0e74060f54bba36268386fe991eff74570e7ee29b7c4d74047d809aa: remove /var/lib/docker/devicemapper/mnt/670cdbd30a3627ae4801044d32a423284b540c5057002dd010186c69b6cc7eea: device or resource busy
Then I did the following (basically go through all processes and look for docker in mountinfo):
$ grep docker /proc/*/mountinfo | grep 958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac
/proc/20416/mountinfo:629 574 253:15 / /var/lib/docker/devicemapper/mnt/958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,relatime shared:288 - xfs /dev/mapper/docker-253:5-786536-958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota
This got me the PID of the offending process keeping it busy - 20416 (the item after /proc/).
So I ran ps -p and to my surprise found:
[devops@dp01app5030 SeGrid]$ ps -p 20416
PID TTY TIME CMD
20416 ? 00:00:19 ntpd
A true WTF moment. So I took the problem to Google and found this: https://github.com/docker/for-linux/issues/124
Turns out I had to restart the ntp daemon, and that fixed the issue!
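A sketch that rolls those diagnostic steps into one pass (the mount ID is the one taken from the docker rm error message):
MOUNT_ID=958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac
for f in $(grep -l "$MOUNT_ID" /proc/*/mountinfo 2>/dev/null); do
  pid=$(echo "$f" | cut -d/ -f3)   # /proc/<pid>/mountinfo -> <pid>
  ps -o pid,comm -p "$pid"         # show which process still holds the mount
done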

Where is a log file with logs from a container?

I am running several containers using docker-compose. I can see the application logs with the command docker-compose logs. However, I would like to access the raw log file, for example to send it somewhere. Where is it located? I guess there is a separate log per container (inside the container?), but where can I find it?
A container's logs can be found in:
/var/lib/docker/containers/<container id>/<container id>-json.log
(if you use the default log format which is json)
You can docker inspect each container to see where its logs are:
docker inspect --format='{{.LogPath}}' $INSTANCE_ID
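For example (the container name is illustrative), you can follow a container's log file directly; sudo is usually needed because /var/lib/docker is only readable by root:
sudo tail -f "$(docker inspect --format='{{.LogPath}}' my_container)"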
And, in case you were trying to figure out where the logs are in order to manage their collective size, or to adjust the logging parameters themselves, you will find the following relevant.
Fixing the amount of space reserved for the logs
This is taken from Request for the ability to clear log history (issue 1083):
Since Docker 1.8 and docker-compose 1.4 there already exists a method to limit log size, using the docker-compose log driver and the log-opt max-size option:
mycontainer:
  ...
  log_driver: "json-file"
  log_opt:
    # limit logs to 2MB (20 rotations of 100K each)
    max-size: "100k"
    max-file: "20"
In docker compose files of version '2', the syntax changed a bit:
version: '2'
...
mycontainer:
  ...
  logging:
    # limit logs to 200MB (4 rotations of 50M each)
    driver: "json-file"
    options:
      max-size: "50m"
      max-file: "4"
(note that in both syntaxes, the numbers are expressed as strings, in quotes)
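The same limits can also be applied to a single container started with docker run (these flags are from the Docker logging documentation; the image name is illustrative):
docker run -d --log-driver json-file --log-opt max-size=50m --log-opt max-file=4 myimage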
Possible issue with docker-compose logs not terminating
issue 1866: command logs doesn't exit if the container is already stopped
To see how much space each container's log is taking up, use this:
docker ps -qa | xargs docker inspect --format='{{.LogPath}}' | xargs ls -hl
(you might need a sudo before ls).
docker inspect <containername> | grep log
On Windows, the default location is: C:\ProgramData\Docker\containers\<container-id>-json.log.
Here is the location for
Windows 10 + WSL 2 (Ubuntu 20.04), Docker version 20.10.2, build 2291f61
Let's say
DOCKER_ARTIFACTS == \\wsl$\docker-desktop-data\version-pack-data\community\docker
Location of container logs can be found in
DOCKER_ARTIFACTS\containers\[Your_container_ID]\[Your_container_ID]-json.log
To directly view the logfile in less, I use:
docker inspect $1 | grep 'LogPath' | sed -n "s/^.*\(\/var.*\)\",$/\1/p" | xargs sudo less
Run it as ./viewLogs.sh CONTAINERNAME.
As of 8/22/2018, the logs can be found in:
/data/docker/containers/<container id>/<container id>-json.log
To see the size of logs per container, you can use this bash command:
for cont_id in $(docker ps -aq); do
  cont_name=$(docker ps | grep $cont_id | awk '{ print $NF }')
  cont_size=$(docker inspect --format='{{.LogPath}}' $cont_id | xargs sudo ls -hl | awk '{ print $5 }')
  echo "$cont_name ($cont_id): $cont_size"
done
Example output:
container_name (6eed984b29da): 13M
elegant_albattani (acd8f73aa31e): 2.3G
