"docker ps" does not show PORT details of a Kubernetes-controlled container - docker

I have two Redis containers running on a K8s worker node. One is controlled by a Deployment (redisdeploymet1) and the other is a standalone Docker container that I created locally on worker1, outside of Kubernetes' knowledge:
root@worker1:~# docker ps | head -1 ; docker ps | grep redis | grep -v pause
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bc7b6fd74187 redis "docker-entrypoint.s…" 4 minutes ago Up 4 minutes 6379/tcp nervous_vaughan
3c6fc536e265 redis "docker-entrypoint.s…" 42 minutes ago Up 42 minutes k8s_redis_redisdeploymet1-847d97
Shouldn't we see PORT details for both entries above? I have actually tested them; both are indeed listening on 6379.
My ultimate goal is to identify which ports a specific Pod is listening on. Let's say the Dockerfile is not available.
Thanks

You can use the docker port command.
docker port <container_id> lists the port mappings (or a specific mapping) for the container.
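Note that docker port only reports published host mappings; a container run without -p (and a Pod's containers, whose networking is handled by Kubernetes rather than by Docker port mappings) will show nothing there. One way to see what a container is actually listening on, without the Dockerfile, is to read /proc/net/tcp inside the container (e.g. docker exec <container> cat /proc/net/tcp) and decode the hex local ports. A minimal decoding sketch, where the sample line is illustrative rather than captured from a real container:

```python
# Decode listening TCP ports from /proc/net/tcp contents
# (e.g. captured via: docker exec <container> cat /proc/net/tcp).

def listening_ports(proc_net_tcp: str) -> list[int]:
    """Return local ports of sockets in LISTEN state (st == 0A)."""
    ports = []
    for line in proc_net_tcp.strip().splitlines()[1:]:  # skip header row
        fields = line.split()
        local_addr, state = fields[1], fields[3]
        if state == "0A":  # 0A is TCP_LISTEN
            ports.append(int(local_addr.split(":")[1], 16))  # hex port
    return sorted(set(ports))

# Hypothetical sample output for a Redis container:
sample = """\
  sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid
   0: 00000000:18EB 00000000:0000 0A 00000000:00000000 00:00000000 00000000  999
"""
print(listening_ports(sample))  # [6379]  (0x18EB == 6379, the Redis port)
```

This works even in minimal images that ship without netstat or ss.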

Related

Understanding docker port mapping output of docker ps

If I run my docker container with:
docker run -it -p 5432:5432 postgres-words
Output of docker ps
CONTAINER ID IMAGE COMMAND STATUS PORTS NAMES
512416e853e1 postgres-words "docker-entrypoint.s…" Up 5 seconds 80/tcp, 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp busy_chatelet
But with docker run -it -p 0.0.0.0:5432:5432 postgres-words,
docker ps reports:
CONTAINER ID IMAGE COMMAND STATUS PORTS NAMES
44131e2fa6ff postgres-words "docker-entrypoint.s…" Up 4 seconds 80/tcp, 0.0.0.0:5432->5432/tcp festive_chandrasekhar
My question is: what is the significance/meaning of the extra :::5432->5432/tcp in the first case?
:::5432->5432/tcp refers to IPv6. :: in IPv6 has the same meaning as 0.0.0.0 in IPv4: a run of zero groups in an IPv6 address can be compressed to ::, so it is shorthand for the all-zeros address. It is also called the unspecified address. For reference you can also look at this question.
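Python's standard ipaddress module can confirm the equivalence; both literals are flagged as the unspecified (wildcard) address:

```python
import ipaddress

# Both are the "unspecified" wildcard address: bind to all interfaces.
v4 = ipaddress.ip_address("0.0.0.0")
v6 = ipaddress.ip_address("::")

print(v4.is_unspecified)  # True
print(v6.is_unspecified)  # True

# "::" is shorthand for eight all-zero groups:
print(v6.exploded)  # 0000:0000:0000:0000:0000:0000:0000:0000
```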

In SWARM not able to access service from worker node

I am new to the Docker world. While learning, I have created the below setup:
1. Virtual machine - Ubuntu 20 running on VMware Workstation 15 Player, IP 192.168.0.106 (call it VM1). I can access the internet from this VM and ping it from my physical host OS (Windows 10).
2. Virtual machine - Ubuntu 20 running on VMware Workstation 15 Player, IP 192.168.0.105 (call it VM2). I can access the internet from this VM and ping it from my physical host OS (Windows 10).
Now I have created the swarm as follows from VM1:
sudo docker swarm init --advertise-addr 192.168.0.106:2377 --listen-addr 192.168.0.106:2377
Then I added the VM2 in this swarm as follows:
sudo docker swarm join --token SWMTKN-1-4i56y47l6o4aycrmg7un21oegmfmwnllcsxaf4zxd05ggqg0zh-9qp67bejerq9dhl3f0suaauvl 192.168.0.106:2377 --advertise-addr 192.168.0.105:2377 --listen-addr 192.168.0.105:2377
After that I checked the swarm details:
sudo docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
ogka7rdjohri9elcbjjcpdlbp * ubuntumaster Ready Active Leader 19.03.12
7qu9kiprcz7oowfk2ol31k1mx ubuntuslave Ready Active 19.03.13
Then deployed the nginx service as follows from VM1:
sudo docker service create -d --name myweb1 --mode global -p9090:80 nginx:1.19.3
Service status:
sudo docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
e1o9cbm3e0t myweb1 global 2/2 nginx:1.19.3 *:9090->80/tcp
Service details:
sudo docker service ps zf6kfw7aqhag
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
egd8oliwngf3 myweb1.ogka7rdjohri9elcbjjcpdlbp nginx:1.19.3 ubuntumaster Running Running 14 minutes ago
1o4q8dlt94jj myweb1.7qu9kiprcz7oowfk2ol31k1mx nginx:1.19.3 ubuntuslave Running Running 14 minutes ago
Now I am able to access nginx from VM1 using 192.168.0.106:9090 and localhost:9090. But I am not able to access nginx from VM2 using 192.168.0.105:9090 or localhost:9090. My understanding is that nginx is running on both VMs and should be accessible on both.
On both VMs I can see the nginx container running.
VM1 :
sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7a4e13e49dfd nginx:1.19.3 "/docker-entrypoint.…" 16 minutes ago Up 15 minutes 80/tcp myweb1.ogka7rdjohri9elcbjjcpdlbp.egd8oliwngf35wwpjcieew323
VM2:
sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
999062110f0 nginx:1.19.3 "/docker-entrypoint.…" 16 minutes ago Up 16 minutes 80/tcp myweb1.7qu9kiprcz7oowfk2ol31k1mx.1o4q8dlt94jj4uufysnhsbamd
Please guide me if I am making any mistakes.
TIA,
Deb
Problem solved! It was an IP clash. Restarting everything, including the VMs and the router, solved the issue.
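For anyone hitting the same symptom: swarm's routing mesh publishes the port on every node, but it depends on node-to-node connectivity (TCP 2377 for cluster management, TCP and UDP 7946 for node gossip, UDP 4789 for overlay traffic). A quick reachability check between the VMs can help rule firewall or routing issues in or out. A hypothetical helper sketch; the IPs from the question are shown commented out since they require the actual VMs:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example against the nodes from the question (needs the real VMs running):
# for port in (2377, 7946, 9090):
#     print(port, tcp_reachable("192.168.0.105", port))
```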

client access to docker swarm

I have a docker swarm cluster consisting of one manager and one worker node. Then I configured (tls and DOCKER_HOST) a client from my laptop to get access to this cluster.
When I run docker ps I see only containers from the worker node (and not even all of the worker node's containers!).
For example, from my client:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a129d9402aeb progrium/consul "/bin/start -rejoi..." 2 weeks ago Up 22 hours IP:8300-8302->8300-8302/tcp, IP:8400->8400/tcp, IP:8301-8302->8301-8302/udp, 53/tcp, 53/udp, IP:8500->8500/tcp, IP:8600->8600/udp hadoop1103/consul-agt2-hadoop
And when I run docker ps on the worker node:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4fec7fbf0b00 swarm "/swarm join --advert" 16 hours ago Up 16 hours 2375/tcp join
a129d9402aeb progrium/consul "/bin/start -rejoin -" 2 weeks ago Up 22 hours 0.0.0.0:8300-8302->8300-8302/tcp, 0.0.0.0:8400->8400/tcp, 0.0.0.0:8301-8302->8301-8302/udp, 53/tcp, 53/udp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8600->8600/udp consul-agt2-hadoop
So, two questions: why doesn't docker ps show containers from the manager machine, and why not all containers from the worker node?
Classic swarm (run as a container) by default hides the swarm management containers from docker ps output. You can show these containers with a docker ps -a command instead.
This behavior may be documented elsewhere, but the one location I've seen the behavior documented is in the api differences docs:
GET "/containers/json"
Containers started from the swarm official image are hidden by default, use all=1 to display them.
The all=1 API syntax is the equivalent of the docker ps -a CLI command.
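To make the documented behavior concrete, here is a toy simulation of that filter; this is illustrative only, not the Docker API's actual implementation:

```python
# Toy illustration of the documented classic-swarm behavior: containers
# started from the "swarm" official image are hidden unless all=1 is passed.

def visible_containers(containers, show_all=False):
    """Mimic GET /containers/json: hide swarm-image containers by default."""
    if show_all:
        return containers
    return [c for c in containers if c["Image"] != "swarm"]

# Sample list modeled on the question's worker node:
containers = [
    {"Names": ["join"], "Image": "swarm"},
    {"Names": ["consul-agt2-hadoop"], "Image": "progrium/consul"},
]

print([c["Names"][0] for c in visible_containers(containers)])
# ['consul-agt2-hadoop']
print([c["Names"][0] for c in visible_containers(containers, show_all=True)])
# ['join', 'consul-agt2-hadoop']
```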

Docker Swarm Linking

I want to create a Docker Swarm cluster running an Elasticsearch instance, a MongoDB instance and a Grails app, each on a separate machine. I'm using Docker Machine to set up my Docker Swarm cluster:
swarm-01:
mongodb
mongodb_ambassador
swarm-02:
elasticsearch
elasticsearch_ambassador
swarm-03:
mongodb_ambassador
elasticsearch_ambassador
grails
The last step of my setup, running the actual grails app, using the following command:
docker run -p 8080:8080 -d --name grails-master --volumes-from maven --link mongo:mongo-master --link es:es-master my-grails-image
fails with error:
Error response from daemon: Unable to find a node fulfilling all
dependencies: --volumes-from=maven --link=mongo:mongo-master
--link=es:es-master
The ambassador containers and the maven data container are all running on the same node.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74677dad09a7 svendowideit/ambassador "/bin/sh -c 'env | gr" 18 minutes ago Up 18 minutes 9200/tcp, 9300/tcp swarm-03/es
98b38c4fc575 svendowideit/ambassador "/bin/sh -c 'env | gr" 18 minutes ago Up 18 minutes 27107/tcp swarm-03/mongo
7d45fb82eacc debian:jessie "/bin/bash" 20 minutes ago swarm-03/maven
I'm not able to get the Grails app running on the Swarm cluster; any advice would be appreciated. Running all containers on a single machine works, so I guess I'm making a mistake linking the mongo and es instances to the grails app.
Btw I'm using latest Docker Toolbox installation on OS X.
"Linking" is deprecated in Docker. Don't use it; it's complicated and not flexible enough.
Just create an overlay network for swarm mode.
docker network create -d overlay mynetwork
In swarm mode (even with a single container), just attach every service that should communicate with another service to the same network.
docker service create --network mynetwork --name mymongodb ...
Other services in the same network can reach your mongodb service simply via the hostname mymongodb. That's all. Docker swarm mode has batteries included.

docker container not available at port 80 like it should

I'm using a Docker registry, and the registry frontend is listed as running when I invoke docker ps, but it is not available at localhost:80:
e2a54694e434 konradkleine/docker-registry-frontend "/bin/sh -c $START_S 26 seconds ago Up 2 seconds 443/tcp, 0.0.0.0:8080->80/tcp serene_tesla
Do you use boot2docker or docker-machine? If so, you should use the VM's IP address instead of localhost:
For boot2docker it is usually 192.168.59.103.
For docker-machine, get the IP address with docker-machine ip <yourmachine>.
Note also that your PORTS column shows 0.0.0.0:8080->80/tcp, so the frontend is published on host port 8080, not 80.
