How do I properly shut down these k8s containers?

I was doing some self-learning with Kubernetes and I have these containers that will not permanently shut down:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8e08ecdf12c2 fadcc5d2b066 "/usr/local/bin/kube…" About a minute ago Up About a minute k8s_kube-proxy_kube-proxy-mtksn_kube-system_08f1149a-4ac6-11e9-bea5-080027db2e61_0
744282ae4605 40a817357014 "kube-controller-man…" About a minute ago Up About a minute k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_17eea6fd9342634d7d40a04d577641fd_0
0473a3e3fedb f59dcacceff4 "/coredns -conf /etc…" About a minute ago Up About a minute k8s_coredns_coredns-86c58d9df4-l6mdf_kube-system_08f82a2f-4ac6-11e9-bea5-080027db2e61_0
6e9a0a03dff1 4689081edb10 "/storage-provisioner" About a minute ago Up About a minute k8s_storage-provisioner_storage-provisioner_kube-system_0a7e1c9d-4ac6-11e9-bea5-080027db2e61_0
4bb4356e57e7 dd862b749309 "kube-scheduler --ad…" About a minute ago Up About a minute k8s_kube-scheduler_kube-scheduler-minikube_kube-system_4b52d75cab61380f07c0c5a69fb371d4_0
973e42e849c8 f59dcacceff4 "/coredns -conf /etc…" About a minute ago Up About a minute k8s_coredns_coredns-86c58d9df4-l6hqj_kube-system_08fd4db1-4ac6-11e9-bea5-080027db2e61_1
338b58983301 9c16409588eb "/opt/kube-addons.sh" About a minute ago Up About a minute k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_5c72fb06dcdda608211b70d63c0ca488_4
3600083cbb01 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-addon-manager-minikube_kube-system_5c72fb06dcdda608211b70d63c0ca488_3
97dffefb7a4b ldco2016/multi-client "nginx -g 'daemon of…" About a minute ago Up About a minute k8s_client_client-deployment-6d89489556-mgznt_default_1f1f77f2-4c5d-11e9-bea5-080027db2e61_1
55224d847c72 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-proxy-mtksn_kube-system_08f1149a-4ac6-11e9-bea5-080027db2e61_3
9a66d39da906 3cab8e1b9802 "etcd --advertise-cl…" About a minute ago Up About a minute k8s_etcd_etcd-minikube_kube-system_8490cea1bf6294c73e0c454f26bdf714_6
e75a57524b41 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_etcd-minikube_kube-system_8490cea1bf6294c73e0c454f26bdf714_5
5a1c02eeea6a fc3801f0fc54 "kube-apiserver --au…" About a minute ago Up About a minute k8s_kube-apiserver_kube-apiserver-minikube_kube-system_d1fc269f154a136c6c9cb809b65b6899_3
2320ac2ab58d k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-apiserver-minikube_kube-system_d1fc269f154a136c6c9cb809b65b6899_3
0195bb0f048c k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-scheduler-minikube_kube-system_4b52d75cab61380f07c0c5a69fb371d4_3
0664e62bf425 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_coredns-86c58d9df4-l6mdf_kube-system_08f82a2f-4ac6-11e9-bea5-080027db2e61_4
546c4195391e k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-controller-manager-minikube_kube-system_17eea6fd9342634d7d40a04d577641fd_4
9211bc0ce3f8 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_client-deployment-6d89489556-mgznt_default_1f1f77f2-4c5d-11e9-bea5-080027db2e61_3
c22e7c931f46 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_coredns-86c58d9df4-l6hqj_kube-system_08fd4db1-4ac6-11e9-bea5-080027db2e61_3
e5b9a76b8d68 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_storage-provisioner_kube-system_0a7e1c9d
What is the most efficient way to shut them all down in one go and stop them from restarting?
I ran minikube stop and that took care of it, but I'm unclear whether that was the proper way to do it.

This looks like the output of docker ps. When using Kubernetes, you generally should not worry about things at the Docker level or about which containers Docker is running. Some of these containers are part of the Kubernetes system itself, so you should only shut them down if you plan to shut down Kubernetes itself; the right way to do that depends on how you started it (minikube, GKE, etc.). If you don't plan on shutting down Kubernetes itself, but want to shut down any extra containers that Kubernetes is running on your behalf (as opposed to containers that run as part of the Kubernetes system), run kubectl get pods --all-namespaces to see all "user-land" pods. The pod is the level of abstraction you primarily interact with in Kubernetes; the specific Docker processes backing it are not something you should need to worry about.
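For example, to stop the client container from the listing above and keep it from being recreated, you would delete the workload that owns its pod rather than the Docker container itself. A minimal sketch, assuming the Deployment is named client-deployment as the pod name client-deployment-6d89489556-mgznt suggests:
kubectl get pods --all-namespaces
kubectl delete deployment client-deployment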
EDIT: I see you updated your question to say that you ran minikube stop. Yes, that is the correct way to do it, nice!
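As a side note, minikube stop halts the cluster but keeps its state, so the same containers come back on the next minikube start; if you ever want to remove the local cluster entirely, there is also:
minikube delete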

Related

Airflow Docker unhealthy triggerer

I'm trying to set up Airflow on my machine using Docker and the docker-compose file provided by Airflow here: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#docker-compose-yaml
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d4d8de8f7782 apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 8080/tcp airflow_airflow-scheduler_1
3315f125949c apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 8080/tcp airflow_airflow-worker_1
2426795cb59f apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp airflow_airflow-webserver_1
cf649cd645bb apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (unhealthy) 8080/tcp airflow_airflow-triggerer_1
fa6b181113ae apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 0.0.0.0:5555->5555/tcp, :::5555->5555/tcp, 8080/tcp airflow_flower_1
b6e05f63aa2c postgres:13 "docker-entrypoint.s…" 2 minutes ago Up 2 minutes (healthy) 5432/tcp airflow_postgres_1
177475be25a3 redis:latest "docker-entrypoint.s…" 2 minutes ago Up 2 minutes (healthy) 6379/tcp airflow_redis_1
I followed all the steps described at that URL, and every Airflow component is working great, but the airflow-triggerer shows an unhealthy status.
I'm kind of new to Docker; I just know the basics and don't really know how to debug this. Can anyone help me out?
Try following all the steps on their website, including:
mkdir ./dags ./logs ./plugins
echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env
I don't know why, but it works now; the triggerer still shows as unhealthy, though.
airflow.apache.org/docs/apache-airflow/stable/start/docker.html
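If you want to dig into why Docker marks the triggerer unhealthy, you can read the container's health-check log and its recent output. A minimal sketch, assuming the container name airflow_airflow-triggerer_1 from the listing above:
docker inspect --format '{{json .State.Health}}' airflow_airflow-triggerer_1
docker logs --tail 50 airflow_airflow-triggerer_1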

Docker info shows containers but docker container ls doesn't

When I run docker info, it shows that I have 18 containers running.
% docker info
Client:
Debug Mode: false
Server:
Containers: 18
Running: 18
Paused: 0
Stopped: 0
Images: 9
...
I want to delete these containers, but when I run docker container ls -a, it shows an empty list. How can I find them?
These containers are also preventing me from deleting images.
% docker rmi -f 1e94481e8f30
Error response from daemon: conflict: unable to delete 1e94481e8f30 (cannot be forced) - image is being used by running container 7e9b08a0007b
You are probably running Kubernetes.
After stopping Kubernetes, first stop all running containers:
docker stop $(docker ps -a -q)
Then remove all containers:
docker rm $(docker ps -a -q)
Finally, remove all stopped containers and all images without at least one container associated with them:
docker system prune --all
You should be able to delete all images after that.
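To confirm the cleanup worked, both of these should print nothing afterwards (note that docker system prune asks for confirmation unless you pass -f):
docker ps -aq
docker images -q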
Those 18 containers belong to Kubernetes. You can confirm this in Docker Desktop by going to Preferences > Kubernetes and checking Show system containers (advanced).
After that, run docker container ls -a again and you will see those 18 containers.
These are the containers you are not seeing unless you check that option:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6c3c65d4bcf4 a8c3d87a58e7 "/compose-controller…" 3 minutes ago Up 3 minutes k8s_compose_compose-6c67d745f6-sr4zj_docker_34e7ef25-166e-11ea-857d-025000000001_2
663d6419ce76 eb516548c180 "/coredns -conf /etc…" 3 minutes ago Up 3 minutes k8s_coredns_coredns-6dcc67dcbc-2twts_kube-system_0c8d1f5f-166e-11ea-857d-025000000001_2
d04a4caf922d eb516548c180 "/coredns -conf /etc…" 3 minutes ago Up 3 minutes k8s_coredns_coredns-6dcc67dcbc-chk2l_kube-system_0c8df4b4-166e-11ea-857d-025000000001_2
324d5d216b07 f3591b2cb223 "/api-server --kubec…" 3 minutes ago Up 3 minutes k8s_compose_compose-api-57ff65b8c7-svv9t_docker_34e161f1-166e-11ea-857d-025000000001_2
f9f74acd5ab6 849af609e0c6 "/usr/local/bin/kube…" 3 minutes ago Up 3 minutes k8s_kube-proxy_kube-proxy-w4x7l_kube-system_0c95526c-166e-11ea-857d-025000000001_2
3cbfa75f1466 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_compose-6c67d745f6-sr4zj_docker_34e7ef25-166e-11ea-857d-025000000001_2
f11f1cc4bbba k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_coredns-6dcc67dcbc-chk2l_kube-system_0c8df4b4-166e-11ea-857d-025000000001_2
cbb52fdaf130 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_coredns-6dcc67dcbc-2twts_kube-system_0c8d1f5f-166e-11ea-857d-025000000001_2
5ad88766f27d k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_compose-api-57ff65b8c7-svv9t_docker_34e161f1-166e-11ea-857d-025000000001_2
9f326ea7db5d k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-proxy-w4x7l_kube-system_0c95526c-166e-11ea-857d-025000000001_2
ea36a7a0d248 f1e3e5f9f93e "kube-scheduler --bi…" 3 minutes ago Up 3 minutes k8s_kube-scheduler_kube-scheduler-docker-desktop_kube-system_ef4d089e81b94aa15841e51ed8c41712_2
f3dfa711ea0f 1e94481e8f30 "kube-apiserver --ad…" 3 minutes ago Up 3 minutes k8s_kube-apiserver_kube-apiserver-docker-desktop_kube-system_b1dff398070b11d23d8d2653b78d430e_2
1e5cf76eaf20 2c4adeb21b4f "etcd --advertise-cl…" 3 minutes ago Up 3 minutes k8s_etcd_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_2
d631fca0d4ac k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-scheduler-docker-desktop_kube-system_ef4d089e81b94aa15841e51ed8c41712_2
20242b387b05 36a8001a79fd "kube-controller-man…" 3 minutes ago Up 3 minutes k8s_kube-controller-manager_kube-controller-manager-docker-desktop_kube-system_86e291a2049db314a5eca69a05cf6ced_2
b32c2d63090f k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-apiserver-docker-desktop_kube-system_b1dff398070b11d23d8d2653b78d430e_2
4d6e49f60ead k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_2
9035ccdcae5d k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-controller-manager-docker-desktop_kube-system_86e291a2049db314a5eca69a05cf6ced_2

How can I stop the kafka container?

I set up my Kafka containers by following the tutorial here: https://success.docker.com/article/getting-started-with-kafka
Then I found that I can't remove the containers anymore, even though I tried docker container prune; the containers are still running.
What should I do?
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
caa0b94c1d98 qnib/plain-kafka:1.1.0 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) broker.2.3fij6pt90qt9sb9aco0i2dpys
b888cb6f783a qnib/plain-kafka:1.1.0 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) broker.3.xqmjnfnfg7ha46lf6drlrq4ki
dcdda2d778c2 qnib/plain-kafka:1.1.0 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) broker.1.gtgluxt6q58z2irzgfmu969ba
843def0b24fb qnib/plain-zkui "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) zkui.1.7zks618eae8sp4woc7araydix
d7ced19be88c qnib/plain-kafka-manager:2018-04-25 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) manager.1.jdu5gnprhr4d982vz50511rhg
a67ac962e682 qnib/plain-zookeeper:2018-04-25 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) 2181/tcp, 2888/tcp, 3888/tcp zookeeper.1.xar7cmdgozdj79orow0bmj3ev
880121f2fee5 qnib/golang-kafka-producer:2018-05-01.5 "kafka-producer" 3 minutes ago Up 3 minutes (healthy) producer.2.hety8za590v1twdgj2byvrmse
b6487d29812e qnib/golang-kafka-producer:2018-05-01.5 "kafka-producer" 3 minutes ago Up 3 minutes (healthy) producer.1.5oz02c8cw5oefc97xbarq5qoa
8b3a81905e90 qnib/golang-kafka-producer:2018-05-01.5 "kafka-producer" 3 minutes ago Up 3 minutes (healthy) producer.3.p8uh3hzr22fgm7u4gl1p3fiyw
I found I had to use docker service rm to remove the service, because the replicas setting makes Swarm recreate any containers that are removed directly.
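Since the containers are Swarm service replicas, Swarm recreates anything you kill directly; removing the services is what actually stops them. A minimal sketch, with the service names inferred from the container names in the listing above:
docker service ls
docker service rm broker zkui manager zookeeper producer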
The prune command doesn't work against running containers. You will need to either stop or kill them.
docker kill caa0b94c1d98

docker containers shutdown continuously

I followed the official tutorial on deploying Docker services (https://docs.docker.com/get-started/part5/#add-a-new-service-and-redeploy). The first time I tried this, the containers ran as expected, but after that the containers kept shutting down and restarting (I noticed this using the visualizer service provided by Docker).
When I execute the command:
docker stack ps getstartedlab
NAME                     DESIRED STATE   CURRENT STATE              ERROR
getstartedlab_web.1      Running         Preparing 2 seconds ago
 \_ getstartedlab_web.1  Shutdown        Failed 4 minutes ago       "task: non-zero exit (2)"
I read in this post (https://github.com/docker/machine/issues/3747) that the problem may come from a firewall blocking ICMP. I tried to ping docker.com and had 100% packet loss, but when I ping google.com there is no packet loss.
The result of docker ps --all is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f67c82da7c7 username/repo:tag "python app.py" 2 minutes ago Exited (2) 2 minutes ago getstartedlab_web.2.zcnr0ld9bioy0dffsxmn8zss5
f59e413b0780 username/repo:tag "python app.py" 4 minutes ago Exited (2) 4 minutes ago getstartedlab_web.5.ymxgnsf7n8306yr8963xyyljv
9ab631c4057c username/repo:tag "python app.py" 10 minutes ago Exited (2) 10 minutes ago getstartedlab_web.5.zr3gsvgbyxs8c51fiko5h9jxp
bee5816ce1f2 dockersamples/visualizer:stable "npm start" 15 minutes ago Up 15 minutes 8080/tcp getstartedlab_visualizer.1.oyiqwb5esq6zakcdtiw4txh8a
cadca594f8cd username/repo:tag "python app.py" 24 hours ago Exited (2) 24 hours ago getstartedlab_web.1.zehutsl9cefrccqrj86dz4ap7
576b1a6db0b0 username/repo:tag "python app.py" 24 hours ago Exited (2) 24 hours ago getstartedlab_web.5.za9xvxpo5yvl20kha9sjcimmz
2804ebc4fc0c username/repo:tag "python app.py" 24 hours ago Exited (2) 24 hours ago getstartedlab_web.1.zlk42chspvym3jxkzs2nc8k2d
03efb2b04489 dockersamples/visualizer:stable "npm start" 24 hours ago Exited (255) 16 minutes ago 8080/tcp getstartedlab_visualizer.1.qyp4egtu9vcd31kf2jxtzxko3
b85fd1600955 username/repo:tag "python app.py" 2 days ago Exited (2) 2 days ago getstartedlab_web.5.kzrj3m5c3jgkuox0ulpszizee
And docker logs 9f67c82da7c7 shows:
python: can't open file 'app.py': [Errno 2] No such file or directory
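That error matches the non-zero exit (2) in the task list: if the image the service runs doesn't contain app.py in its working directory, every replica exits immediately and Swarm reschedules it. One way to check what is actually inside the image (a sketch; username/repo:tag is the placeholder tag from the listing above):
docker run --rm username/repo:tag pwd
docker run --rm username/repo:tag ls -la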
Excuse the formatting of these command results; when I copy-paste them, the lines break. How can I copy-paste and preserve the original layout?
Does anyone have a fix for this problem? Thanks.

What are the pause containers?

In my IBM Cloud Private, I see several pause containers.
Can anyone explain the purpose of these? Normally I can get to a bash shell in a running container, but not in these pause containers.
# docker ps | grep pause
ee5f3f6b9fc0 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_catalog-catalog-apiserver-8qxrf_kube-system_3b4b107d-0b72-11e8-9f22-005056227136_0
d238dad0c5b8 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_auth-apikeys-bk28g_kube-system_3b731880-0b72-11e8-9f22-005056227136_0
0196efb043ca ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_icp-router-htnhz_kube-system_3b8d25d3-0b72-11e8-9f22-005056227136_0
b09dc1759d09 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_unified-router-bskb6_kube-system_3af9d44e-0b72-11e8-9f22-005056227136_0
8a392f174c24 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_auth-pap-gfh7q_kube-system_3aa5b311-0b72-11e8-9f22-005056227136_0
0ac776eb9ced ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_platform-api-zr5pz_kube-system_3b2ce527-0b72-11e8-9f22-005056227136_0
107896ebfcd6 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_icp-ds-0_kube-system_3a9200f8-0b72-11e8-9f22-005056227136_0
f95df5fbcc4a ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_image-manager-0_kube-system_3ae74f5c-0b72-11e8-9f22-005056227136_0
a9d30804f222 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_auth-idp-h4fmt_kube-system_3ad78a99-0b72-11e8-9f22-005056227136_0
eaae55900637 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_catalog-ui-jv9sq_kube-system_3af5cb32-0b72-11e8-9f22-005056227136_0
4ace18a84d8b ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_auth-pdp-9vhzx_kube-system_3ae0a074-0b72-11e8-9f22-005056227136_0
98b70f6074c7 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_calico-policy-controller-5997c6c956-cx774_kube-system_39bfecef-0b72-11e8-9f22-005056227136_0
63a0340e3de8 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_rescheduler-jqtd4_kube-system_3a6d4b05-0b72-11e8-9f22-005056227136_0
cace008e71b1 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-dns-9494dc977-7gwpx_kube-system_39a15b6b-0b72-11e8-9f22-005056227136_0
80a18b538ef3 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_platform-ui-k5g7h_kube-system_3a98aad3-0b72-11e8-9f22-005056227136_0
bea43bfc8d70 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_catalog-catalog-controller-manager-bd9f49c8c-4fqcp_kube-system_39653745-0b72-11e8-9f22-005056227136_0
f54f329e50ae ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_helm-api-5d8b6d6f9c-4rl2s_kube-system_396ade8d-0b72-11e8-9f22-005056227136_0
6812e3fee9cc ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_tiller-deploy-55fb4d8dcc-pcxbj_kube-system_396b0005-0b72-11e8-9f22-005056227136_0
69a840bc394b ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_helmrepo-5878d9d858-wlcrj_kube-system_396506a5-0b72-11e8-9f22-005056227136_0
03bc9ce0413d ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_heapster-5fd94775d5-28t6w_kube-system_396b0dd6-0b72-11e8-9f22-005056227136_0
8763167695b3 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_filebeat-ds-amd64-q54pc_kube-system_9b8515d6-0b52-11e8-99a8-005056227136_1
461e5de11ee1 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_calico-node-amd64-ql292_kube-system_53492619-0b51-11e8-99a8-005056227136_1
f73e2eb9dbaf ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_k8s-master-192.168.142.103_kube-system_c39080358687c72432da5f6de4b6fff9_1
c08f029af60e ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_k8s-mariadb-192.168.142.103_kube-system_6b640df7dae2cb064ebc450b273ce62a_1
0174b5c35963 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_k8s-proxy-192.168.142.103_kube-system_b9f441de4995384d245d71fdb65e2cc2_1
f6befd10c479 ibmcom/pause:3.0 "/pause" About an hour ago Up About an hour k8s_POD_k8s-etcd-192.168.142.103_kube-system_a5150d8f6ee1f8047b05f9b2d5cbcaba_1
The 'pause' container is a container which holds the network namespace for the pod. Kubernetes creates pause containers to acquire the respective pod's IP address and set up the network namespace for all other containers that join that pod.
You can access below links for details.
https://groups.google.com/forum/#!topic/kubernetes-users/jVjv0QK4b_o
https://www.ianlewis.org/en/almighty-pause-container
The pause container is a hidden container that runs in every pod in Kubernetes. Its primary job is to hold the pod's namespaces open in case all the other containers in the pod die.
Yes, the pause container is the part of each pod responsible for creating the shared network namespace and holding the pod's IP address for all of the application containers inside the pod; it also shares volumes across the entire pod. If the pause container dies, Kubernetes considers the pod dead, kills it, and reschedules a new one.
If you docker stop the pause container, you will find that the pod gets a new internal IP without any change in the pod's restart count. However, if you docker stop an application container of the pod, the pod status shows Completed; if you then docker start that container again, the pod is Running again, its restart count incremented by 1, with no change in IP.
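You can see the relationship directly from the Docker side: each application container joins the network namespace of its pod's pause container. A minimal sketch, with <app-container-id> standing in for one of the non-pause container IDs on the node:
docker inspect --format '{{.HostConfig.NetworkMode}}' <app-container-id>
The output is container:<pause-container-id>, showing that the application container shares the pause container's network namespace.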
