How to get Docker metrics using the Docker remote API

I was able to get information about my Docker containers via the command:
echo -e "GET /containers/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock
I was also able to find information on a specific container:
echo -e "GET /containers/<containerId>/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock
But I'm unable to get information about the memory, CPU, and I/O usage of a specific container.
Is it possible to get this via the remote API, or must I go directly to the file system under /sys/fs/cgroup/...?

The /containers/<id>/stats endpoint will continuously report a live stream of the CPU, memory, I/O, and network metrics consumed by a specific container:
$ echo -ne "GET /containers/$CONTAINER_ID/stats HTTP/1.1\r\n\r\n" | sudo nc -U /var/run/docker.sock
Note: replace $CONTAINER_ID with the ID of the required container.
Hope this helps.
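If curl 7.40 or newer is available, the same endpoint can be queried over the Unix socket, and passing stream=false returns a single snapshot instead of an endless stream. A minimal sketch (the container name and the use of jq are assumptions, not from the question):
CONTAINER_ID=my_container   # hypothetical container name or ID
curl --silent --unix-socket /var/run/docker.sock \
  "http://localhost/containers/$CONTAINER_ID/stats?stream=false" \
  | jq '{memory_bytes: .memory_stats.usage, cpu_total_ns: .cpu_stats.cpu_usage.total_usage}'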

Related

Docker - failed to load listeners, cannot assign requested address

So I have recently changed my ISP and they've given me a new router. All was good and well until I plugged my network switch in. All of a sudden, Docker started crying and now I can't run any services.
All I know that has changed is the local IP address, from 192.168.1.195 to 192.168.4.31.
This was the first time trying to start the service again:
mhudson@Guardian:~$ sudo dockerd
INFO[2023-01-16T19:25:41.710343685Z] Starting up
failed to load listeners: listen tcp 192.168.1.195:2376: bind: cannot assign requested address
This is a command I saw on a thread:
mhudson@Guardian:~$ sudo echo `ifconfig enp1s0 | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2;exit }' | cut -f2 -d:`
192.168.4.31
If someone could help, as I rely on these services daily. Please and thank you.
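The "cannot assign requested address" error means dockerd is still configured to bind 192.168.1.195, which no longer exists on any interface. A hedged sketch of where that address is typically hard-coded (standard systemd paths are an assumption, not confirmed in the question):
# look for the stale -H/"hosts" bind address
grep -rn "192.168.1.195" /etc/docker/daemon.json /etc/systemd/system/docker.service.d/ 2>/dev/null
# update it to the new address (or 0.0.0.0), then reload and restart
sudo systemctl daemon-reload && sudo systemctl restart docker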

Pass Docker host IP as env var into devcontainer

I am trying to pass an environment variable into my devcontainer that is the output of a command run on my dev machine. I have tried the following in my devcontainer.json with no luck:
"initializeCommand": "export DOCKER_HOST_IP=\"$(ifconfig | grep -E '([0-9]{1,3}.){3}[0-9]{1,3}' | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)\"",
"containerEnv": {
"DOCKER_HOST_IP1": "${localEnv:DOCKER_HOST_IP}",
"DOCKER_HOST_IP2": "${containerEnv:DOCKER_HOST_IP}"
},
and
"runArgs": [
"-e DOCKER_HOST_IP=\"$(ifconfig | grep -E '([0-9]{1,3}.){3}[0-9]{1,3}' | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)\"
],
(the point of the ifconfig/grep piped command is to provide me with the IP of my docker host which is running via Docker for Desktop (Mac))
Some more context
Within my devcontainer I am running some kubectl deployments (to a cluster running on Docker for Desktop) where I would like to configure a hostAlias for a pod (docs) such that the pod directs requests for https://api.cancourier.local to the IP of the Docker host (which would then hit an ingress I have configured for that CNAME).
I could just pass in the output of the ifconfig command to my kubectl command when running from within the devcontainer. The problem is that I get two different results from this depending on whether I am running it on my host (10.0.0.89) or from within the devcontainer (10.1.0.1). 10.0.0.89 in this case is the "correct" IP as if I curl this from within my devcontainer, or my deployed pod, I get the response I'd expect from my ingress.
I'm also aware that I could just use the name of my k8s service (in this case api) to communicate between pods, but this isn't ideal. As for why, I'm running a Next.js application in a pod. The Next.js app on this pod has two "contexts":
my browser - the app serves up static HTML/JS to my browser where communicating with https://api.cancourier.local works fine
on the pod itself - running some things (i.e. _middleware) on the pod itself, where the pod does not currently know what https://api.cancourier.local resolves to
What I was doing to temporarily get around this was to have a separate config on the pod, one for the "browser context" and the other for things running on the pod itself. This is less than ideal as when I go to deploy this Next.js app (to Vercel) it won't be an issue (as my API will be deployed on some publicly accessible CNAME). If I can accomplish what I was trying to do above, I'd be able to avoid this.
So I didn't end up finding a way to pass the output of a command run on the host machine as an env var into my devcontainer. However I did find a way to get the "correct" docker host IP and pass this along to my pod.
In my devcontainer.json I have this:
"runArgs": [
// https://stackoverflow.com/a/43541732/3902555
"--add-host=api.cancourier.local:host-gateway",
"--add-host=cancourier.local:host-gateway"
],
which augments the devcontainer's /etc/hosts with:
192.168.65.2 api.cancourier.local
192.168.65.2 cancourier.local
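You can verify the mapping from inside the devcontainer (getent is present in the usual Debian/Ubuntu base images, an assumption here):
# inside the devcontainer
getent hosts api.cancourier.local
# should print: 192.168.65.2    api.cancourier.local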
then in my Makefile where I store my kubectl commands I am simply running:
deploy-the-things:
	$(eval DOCKER_HOST_IP := $(shell cat /etc/hosts | grep 'api.cancourier.local' | awk '{print $$1}'))
	helm upgrade $(helm_release_name) $(charts_location) \
		--install \
		--namespace=$(local_namespace) \
		--create-namespace \
		-f $(charts_location)/values.yaml \
		-f $(charts_location)/local.yaml \
		--set cwd=$(HOST_PROJECT_PATH) \
		--set dockerHostIp=$(DOCKER_HOST_IP) \
		--debug \
		--wait
then within my helm chart I can use the following for the pod running my Next.js app:
hostAliases:
  - ip: {{ .Values.dockerHostIp }}
    hostnames:
      - "api.cancourier.local"
I highly recommend following this tutorial: Container environment variables
In this tutorial, two methods are mentioned:
Adding individual variables
Using an env file
Choose whichever is more comfortable for you. Good luck!

Docker failed to load listeners, cannot assign requested address

I'm using this guide to try and run up Docker using WSL2. I've got everything starting; however, there is an issue when I actually try to run up Docker. Once I run the command sudo dockerd -H `ifconfig eth0 | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d:` I get:
WARN[2022-02-01T11:07:40.033323500-06:00] Binding to IP address without --tlsverify is insecure and gives root access on this machine to everyone who has access to your network. host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:40.033991800-06:00] Binding to an IP address, even on localhost, can also give access to scripts run in a browser. Be safe out there! host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.036303800-06:00] Binding to an IP address without --tlsverify is deprecated. Startup is intentionally being slowed down to show this message host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.043536700-06:00] Please consider generating tls certificates with client validation to prevent exposing unauthenticated root access to your network host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.044564400-06:00] You can override this by explicitly specifying '--tls=false' or '--tlsverify=false' host="tcp://169.254.77.26:2375"
WARN[2022-02-01T11:07:41.045654100-06:00] Support for listening on TCP without authentication or explicit intent to run without authentication will be removed in the next release host="tcp://169.254.77.26:2375"
failed to load listeners: listen tcp 169.254.77.26:2375: bind: cannot assign requested address
I'm not too familiar with Docker so not sure what I can adjust to make it launch properly. Any suggestions are appreciated, thanks!
I was doing exactly the same. What worked for me was this comment: https://dev.to/nelsonpena/comment/1jmkb, but it was not very explicit.
I opened Windows PowerShell and used the command
wsl --set-version Ubuntu 2
If you have another distro of Linux it would be
wsl --set-version <distroname> 2
I closed WSL, opened it again, and executed the command
echo `ifconfig eth0 | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2;exit }' | cut -f2 -d:`
Then, after starting dockerd with that address, I got "API listen on [the IP]".
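To confirm which version each distro is on before and after the conversion, PowerShell can list them:
# from Windows PowerShell: shows NAME, STATE and VERSION (1 or 2) per distro
wsl -l -v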

Run X application in a Docker container reliably on a server connected via SSH without "--net host"

Without a Docker container, it is straightforward to run an X11 program on a remote server using SSH X11 forwarding (ssh -X). I have tried to get the same thing working when the application runs inside a Docker container on the server. When SSH-ing into a server with the -X option, an X11 tunnel is set up and the environment variable $DISPLAY is automatically set to typically "localhost:10.0" or similar. If I simply try to run an X application in a Docker container, I get this error:
Error: GDK_BACKEND does not match available displays
My first idea was to actually pass the $DISPLAY into the container with the "-e" option like this:
docker run -ti -e DISPLAY=$DISPLAY name_of_docker_image
This helps, but it does not solve the issue. The error message changes to:
Unable to init server: Broadway display type not supported: localhost:10.0
Error: cannot open display: localhost:10.0
After searching the web, I figured out that I could do some xauth magic to fix the authentication. I added the following:
XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -
chmod 777 $XAUTH
docker run -ti -e DISPLAY=$DISPLAY -v $XSOCK:$XSOCK -v $XAUTH:$XAUTH \
-e XAUTHORITY=$XAUTH name_of_docker_image
However, this only works if also add "--net host" to the docker command:
docker run -ti -e DISPLAY=$DISPLAY -v $XSOCK:$XSOCK -v $XAUTH:$XAUTH \
-e XAUTHORITY=$XAUTH --net host name_of_docker_image
This is not desirable since it makes the whole host network visible to the container.
What is now missing in order to get it fully to run on a remote server in a docker without "--net host"?
I figured it out. When you are connecting to a computer with SSH and using X11 forwarding, /tmp/.X11-unix is not used for the X communication and the part related to $XSOCK is unnecessary.
An X application instead uses the hostname in $DISPLAY, typically "localhost", and connects using TCP. This connection is then tunneled back to the SSH client. When using "--net host" for the Docker container, "localhost" is the same for the container as for the Docker host, and therefore it works fine.
When not specifying "--net host", the container uses the default bridge network mode. This means that "localhost" refers to something different inside the container than on the host, and X applications inside the container will not be able to reach the X server by referring to "localhost". So in order to solve this, one has to replace "localhost" with the actual IP address of the host. This is usually "172.17.0.1" or similar; check "ip addr" for the "docker0" interface.
This can be done with a sed replacement:
DISPLAY=`echo $DISPLAY | sed 's/^[^:]*\(.*\)/172.17.0.1\1/'`
Additionally, the SSH server is commonly not configured to accept remote connections to this X11 tunnel. This must then be changed by editing /etc/ssh/sshd_config (at least in Debian) and setting:
X11UseLocalhost no
and then restart the SSH server, and re-login to the server with "ssh -X".
This is almost it, but there is one complication left. If any firewall is running on the Docker host, the TCP port associated with the X11-tunnel must be opened. The port number is the number between the : and the . in $DISPLAY added to 6000.
To get the TCP port number, you can run:
X11PORT=`echo $DISPLAY | sed 's/^[^:]*:\([^\.]\+\).*/\1/'`
TCPPORT=`expr 6000 + $X11PORT`
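For example, if $DISPLAY is localhost:10.0, this yields X11PORT=10 and TCPPORT=6010.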
Then (if using ufw as firewall), open up this port for the Docker containers in the 172.17.0.0 subnet:
ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
All the commands together can be put into a script:
XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | sudo xauth -f $XAUTH nmerge -
sudo chmod 777 $XAUTH
X11PORT=`echo $DISPLAY | sed 's/^[^:]*:\([^\.]\+\).*/\1/'`
TCPPORT=`expr 6000 + $X11PORT`
sudo ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
DISPLAY=`echo $DISPLAY | sed 's/^[^:]*\(.*\)/172.17.0.1\1/'`
sudo docker run -ti --rm -e DISPLAY=$DISPLAY -v $XAUTH:$XAUTH \
-e XAUTHORITY=$XAUTH name_of_docker_image
Assuming you are not root and therefore need to use sudo.
Instead of sudo chmod 777 $XAUTH, you could run:
sudo chown my_docker_container_user $XAUTH
sudo chmod 600 $XAUTH
to prevent other users on the server from also being able to access the X server through the /tmp/.docker.xauth file.
I hope this makes it work properly for most scenarios.
If you set X11UseLocalhost no, you're allowing even external traffic to reach the X11 socket. That is, traffic directed to an external IP of the machine can reach the SSHD X11 forwarding. There are still two security mechanisms which might apply (firewall, X11 auth). Still, I'd prefer leaving a system-global setting alone if you're fiddling with a user- or even application-specific issue like in this case.
Here's an alternative how to get X11 graphics out of a container and via X11 forwarding from the server to the client, without changing X11UseLocalhost in the sshd config.
                                      +  docker container net ns  +
                                      |                           |
        172.17.0.1                    |   172.17.0.2              |
   +- docker0 --------- veth123@if5 --|-- eth0@if6                |
   |  (bridge)          (veth pair)   |  (veth pair)              |
   |                                  |                           |
   |      127.0.0.1                   +---------------------------+
routing +- lo
   |      (loopback)
   |
   |      192.168.1.2
   +- ens33
          (physical host interface)
With the default X11UseLocalhost yes, sshd listens only on 127.0.0.1 on the root network namespace. We need to get the X11 traffic from inside the docker network namespace to the loopback interface in the root net ns. The veth pair is connected to the docker0 bridge and both ends can therefore talk to 172.17.0.1 without any routing. The three interfaces in the root net ns (docker0, lo and ens33) can communicate via routing.
We want to achieve the following:
                                      +  docker container net ns  +
                                      |                           |
        172.17.0.1                    |   172.17.0.2              |
   +- docker0 --------< veth123@if5 --|-< eth0@if6 -----< xeyes   |
   |  (bridge)          (veth pair)   |  (veth pair)              |
   v                                  |                           |
   |      127.0.0.1                   +---------------------------+
routing +- lo >--ssh x11 fwd-+
   |      (loopback)         |
   |                         v
   |      192.168.1.2        |
<-- ssh -- ens33 ------<-----+
          (physical host interface)
We can let the X11 application talk directly to 172.17.0.1 to "escape" the docker net ns. This is achieved by setting the DISPLAY appropriately: export DISPLAY=172.17.0.1:10:
                                      +  docker container net ns  +
                                      |                           |
        172.17.0.1                    |   172.17.0.2              |
      docker0 --------- veth123@if5 --|-- eth0@if6 -----< xeyes   |
      (bridge)          (veth pair)   |  (veth pair)              |
                                      |                           |
          127.0.0.1                   +---------------------------+
      lo
          (loopback)
        192.168.1.2
      ens33
          (physical host interface)
Now, we add an iptables rule on the host to route from 172.17.0.1 to 127.0.0.1 in the root net ns:
iptables \
  --table nat \
  --insert PREROUTING \
  --proto tcp \
  --destination 172.17.0.1 \
  --dport 6010 \
  --jump DNAT \
  --to-destination 127.0.0.1:6010
sysctl net.ipv4.conf.docker0.route_localnet=1
Note that we're using port 6010; that's the port on which SSHD performs the X11 forwarding here, since it's using display number 10, which is added to the port base 6000. You can check which display number to use after you've established the SSH connection by checking the DISPLAY environment variable in a shell started by SSH.
Maybe you can improve on the forwarding rule by only routing traffic from this container (veth end). Also, I'm not quite sure why the route_localnet is needed, to be honest. It appears that 127/8 is a strange source / destination for packets and therefore disabled for routing by default. You can probably also reroute traffic from the loopback interface inside the docker net ns to the veth pair, and from there to the loopback interface in the root net ns.
With the commands given above, we end up with:
                                      +  docker container net ns  +
                                      |                           |
        172.17.0.1                    |   172.17.0.2              |
   +- docker0 --------< veth123@if5 --|-< eth0@if6 -----< xeyes   |
   |  (bridge)          (veth pair)   |  (veth pair)              |
   v                                  |                           |
   |      127.0.0.1                   +---------------------------+
routing +- lo
          (loopback)
        192.168.1.2
        ens33
          (physical host interface)
The remaining connection is established by SSHD when you establish a connection with X11 forwarding. Please note that you have to establish the connection before attempting to start an X11 application inside the container, since the application will immediately try to reach the X11 server.
There is one piece missing: authentication. We're now trying to access the X11 server as 172.17.0.1:10 inside the container. The container however doesn't have any X11 authentication, or not a correct one if you're bind-mounting the home directory (outside the container it's usually something like <hostname>:10). Use Ruben's suggestion to add a new entry visible inside the docker container:
# inside container
xauth add 172.17.0.1:10 . <cookie>
where <cookie> is the cookie set up by the SSH X11 forwarding, e.g. via xauth list.
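A hedged sketch of the full cookie hand-off (display number 10 is an assumption; use whatever your SSH session put in $DISPLAY):
# on the docker host, inside the SSH session: print the forwarded cookie
xauth list "$DISPLAY"
# output has the form: <hostname>/unix:10  MIT-MAGIC-COOKIE-1  <cookie>
# inside the container: register that cookie under the new display name
xauth add 172.17.0.1:10 . <cookie>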
You might also have to allow traffic ingress to 172.17.0.1:6010 in your firewall.
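With ufw, for example, a rule along these lines should do (assuming the default 172.17.0.0/16 bridge subnet):
sudo ufw allow in on docker0 from 172.17.0.0/16 to 172.17.0.1 port 6010 proto tcp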
You can also start an application from the host inside the docker container network namespace:
sudo nsenter --target=<pid of process in container> --net su - $USER -c '<app>'
Without the su, you'll be running as root. Of course, you can also use another container and share the network namespace:
sudo docker run --network=container:<other container name/id> ...
The X11 forwarding mechanism shown above applies to the entire network namespace (actually, to everything connected to the docker0 bridge). Therefore, it will work for any applications inside the container network namespace.
In my case, I sit at "remote" and connect to a "docker_container" on "docker_host":
remote --> docker_host --> docker_container
To make debugging scripts easier with VSCode, I installed SSHD into the "docker_container", listening on port 22, mapped to another port (say 1234) on the "docker_host".
So I can connect directly with the running container via ssh (from "remote"):
ssh -Y -p 1234 appuser@docker_host.local
(where appuser is the username within the "docker_container". I am working on my local subnet now, so I can reference my server via the .local mapping. For external IPs, just make sure your router maps this port to this machine.)
This creates a connection directly from my "remote" to "docker_container" via ssh.
remote --> (ssh) --> docker_container
Inside the "docker_container", I installed sshd with
sudo apt-get install openssh-server (you can add this to your Dockerfile to install at build time).
To allow X11 forwarding to work, edit the /etc/ssh/sshd_config file as such:
X11Forwarding yes
X11UseLocalhost no
Then restart the ssh server within the container. You should do this from a shell opened into the container from the "docker_host" (docker exec -ti docker_container bash), not while you are connected to the "docker_container" via ssh.
Restart sshd:
sudo service ssh restart
When you connect via ssh to the "docker_container", check the $DISPLAY environment variable. It should say something like
appuser@3f75a98d67e6:~/data$ echo $DISPLAY
3f75a98d67e6:10.0
Test by executing your favorite X11 graphics program from within "docker_container" via ssh (like cv2.imshow())
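For a minimal test that doesn't require Python, xeyes from x11-apps works (installing it is an assumption about the image):
# inside "docker_container", connected via ssh -Y
sudo apt-get install -y x11-apps
xeyes   # a window should open on the "remote" display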
I use an automated approach which can be executed entirely from within the docker container.
All that is needed is to pass the DISPLAY variable to the container, and mounting .Xauthority.
Moreover, it only uses the port from the DISPLAY variable, so it will also work in cases where DISPLAY=localhost:XY.Z.
Create a file, source-me.sh, with the following content:
# Find the containers address in /etc/hosts
CONTAINER_IP=$(grep $(hostname) /etc/hosts | awk '{ print $1 }')
# Assume the docker-host IP only differs in the last byte
SUBNET=$(echo $CONTAINER_IP | sed 's/\.[^\.]*$//')
DOCKER_HOST_IP=${SUBNET}.1
# Get the port from the DISPLAY variable
DISPLAY_PORT=$(echo $DISPLAY | sed 's/.*://' | sed 's/\..*//')
# Create the correct display-name
export DISPLAY=$DOCKER_HOST_IP:$DISPLAY_PORT
# Find an existing xauth entry for the same port (DISPLAY_PORT),
# and copy everything except the display-name,
# filtering out entries containing /unix: which correspond to "same-machine" connections
ENTRY=$(xauth -n list | grep -v '/unix\:' | grep "\:${DISPLAY_PORT}" | head -n 1 | sed 's/^[^ ]* *//')
# Prepend our display-name
ENTRY="$DOCKER_HOST_IP:$DISPLAY_PORT $ENTRY"
# Add the new xauth entry.
# Because our .Xauthority file is mounted, a new file
# named ${HOME}/.Xauthority-n will be created, and a warning
# is printed on std-err
xauth add $ENTRY 2> /dev/null
# replace the content of ${HOME}/.Xauthority with that of ${HOME}/.Xauthority-n
# without creating a new i-node.
cat ${HOME}/.Xauthority-n > ${HOME}/.Xauthority
Create the following Dockerfile for testing:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y xauth
COPY source-me.sh /root/
RUN cat /root/source-me.sh >> /root/.bashrc
# xeyes for testing:
RUN apt-get install -y x11-apps
Build and run:
docker build -t test-x .
docker run -ti \
-v $HOME/.Xauthority:/root/.Xauthority:rw \
-e DISPLAY=$DISPLAY \
test-x \
bash
Inside the container, run:
xeyes
To run non-interactively, you must ensure source-me.sh is sourced:
docker run \
-v $HOME/.Xauthority:/root/.Xauthority:rw \
-e DISPLAY=$DISPLAY \
test-x \
bash -c "source source-me.sh ; xeyes"

Docker remote API doesn't restart after my computer restarts

Last week I struggled to get my Docker remote API working. As it is running on a VM, I had not restarted the VM since then. Today I finally restarted my VM and it is not working any more (docker and docker-compose are working normally, but not the Docker remote API). My Docker init file, /etc/init/docker.conf, looks like this:
description "Docker daemon"
start on filesystem and started lxc-net
stop on runlevel [!2345]
respawn
script
  /usr/bin/docker -H tcp://0.0.0.0:4243 -d
end script
# description "Docker daemon"
# start on (filesystem and net-device-up IFACE!=lo)
# stop on runlevel [!2345]
# limit nofile 524288 1048576
# limit nproc 524288 1048576
respawn
kill timeout 20
.....
.....
Last time, I made the settings indicated here: this.
I tried nmap to see if port 4243 is open:
ubuntu@ubuntu:~$ nmap 0.0.0.0 -p-
Starting Nmap 7.01 ( https://nmap.org ) at 2016-10-12 23:49 CEST
Nmap scan report for 0.0.0.0
Host is up (0.000046s latency).
Not shown: 65531 closed ports
PORT      STATE SERVICE
22/tcp    open  ssh
43978/tcp open  unknown
44672/tcp open  unknown
60366/tcp open  unknown
Nmap done: 1 IP address (1 host up) scanned in 1.11 seconds
As you can see, port 4243 is not open.
when I run:
ubuntu@ubuntu:~$ echo -e "GET /images/json HTTP/1.0\r\n" | nc -U
This is nc from the netcat-openbsd package. An alternative nc is available
in the netcat-traditional package.
usage: nc [-46bCDdhjklnrStUuvZz] [-I length] [-i interval] [-O length]
[-P proxy_username] [-p source_port] [-q seconds] [-s source]
[-T toskeyword] [-V rtable] [-w timeout] [-X proxy_protocol]
[-x proxy_address[:port]] [destination] [port]
I also ran this:
ubuntu@ubuntu:~$ sudo docker -H=tcp://0.0.0.0:4243 -d
flag provided but not defined: -d
See 'docker --help'.
I restarted my computer many times and tried a lot of things, with no success.
I already have a group named docker and my user is in it:
ubuntu@ubuntu:~$ groups $USER
ubuntu : ubuntu adm cdrom sudo dip plugdev lpadmin sambashare docker
Please tell me what is wrong.
Your startup script contains an invalid command:
/usr/bin/docker -H tcp://0.0.0.0:4243 -d
Instead you need something like:
/usr/bin/docker daemon -H tcp://0.0.0.0:4243
As of 1.12, this is now (but docker daemon will still work):
/usr/bin/dockerd -H tcp://0.0.0.0:4243
Please note that this is opening a port that gives remote root access, without any password, to your Docker host.
Anyone who wants to take over your machine can run docker run -v /:/target -H your.ip:4243 busybox /bin/sh to get a root shell with your filesystem mounted at /target. If you'd like to secure your host, follow this guide to set up TLS certificates.
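For reference, a hedged sketch of what the TLS-verified daemon invocation looks like (the certificate paths are assumptions; the guide above covers generating them):
/usr/bin/dockerd \
  -H tcp://0.0.0.0:2376 \
  --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem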
I finally found www.ivankrizsan.se and it is working fine now. Thanks to this guy (or girl) ;).
These settings worked for me on Ubuntu 16.04. Here is how:
Edit the file /lib/systemd/system/docker.service and replace the line ExecStart=/usr/bin/dockerd -H fd:// with
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:4243
Save the file.
Restart with: sudo service docker restart
Test with: curl http://localhost:4243/version
Result: you should see something like this:
{"Version":"1.11.0","ApiVersion":"1.23","GitCommit":"4dc5990","GoVersion" "go1.5.4","Os":"linux","Arch":"amd64","KernelVersion":"4.4.0-22-generic","BuildTime":"2016-04-13T18:38:59.968579007+00:00"}
Attention:
Remain aware that binding to 0.0.0.0 is not good for security; to be safer, you should use 127.0.0.1.
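As a side note, on systemd hosts a drop-in override survives package upgrades, unlike direct edits to /lib/systemd/system/docker.service. A minimal sketch:
sudo systemctl edit docker
# in the editor that opens, add:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:4243
sudo systemctl daemon-reload
sudo systemctl restart docker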
