How do I delete all these kubernetes k8s_* containers - docker

I'm new to Kubernetes. I was following a Kubernetes tutorial the other day, but I forget exactly what I did. Running docker ps now shows many k8s_* containers.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ce38bbf370b f3591b2cb223 "/api-server --kubec…" 3 hours ago Up 3 hours k8s_compose_compose-api-57ff65b8c7-cc6qf_docker_460bc96e-dcfe-11e9-9213-025000000001_6
222239366ae5 eb516548c180 "/coredns -conf /etc…" 3 hours ago Up 3 hours k8s_coredns_coredns-fb8b8dccf-7vp79_kube-system_35ecd610-dcfe-11e9-9213-025000000001_6
0e4a5a5c23bd eb516548c180 "/coredns -conf /etc…" 3 hours ago Up 3 hours k8s_coredns_coredns-fb8b8dccf-h7tvr_kube-system_35edfd50-dcfe-11e9-9213-025000000001_6
332d3d26c082 9946f563237c "kube-apiserver --ad…" 3 hours ago Up 3 hours k8s_kube-apiserver_kube-apiserver-docker-desktop_kube-system_7c4f3d43558e9fadf2d2b323b2e78235_4
5778a63798ab k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-apiserver-docker-desktop_kube-system_7c4f3d43558e9fadf2d2b323b2e78235_3
a0a26d6a2d08 2c4adeb21b4f "etcd --advertise-cl…" 3 hours ago Up 3 hours k8s_etcd_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_3
e6038e717c64 ac2ce44462bc "kube-controller-man…" 3 hours ago Up 3 hours k8s_kube-controller-manager_kube-controller-manager-docker-desktop_kube-system_9c58c6d32bd3a2d42b8b10905b8e8f54_4
10e962e90703 004666307c5b "/usr/local/bin/kube…" 3 hours ago Up 3 hours k8s_kube-proxy_kube-proxy-pq4f7_kube-system_35ac91f0-dcfe-11e9-9213-025000000001_4
21b4a7aa37d0 953364a3ae7a "kube-scheduler --bi…" 3 hours ago Up 3 hours k8s_kube-scheduler_kube-scheduler-docker-desktop_kube-system_124f5bab49bf26c80b1c1be19641c3e8_4
d9447c41bc55 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-proxy-pq4f7_kube-system_35ac91f0-dcfe-11e9-9213-025000000001_4
65248416150d k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-controller-manager-docker-desktop_kube-system_9c58c6d32bd3a2d42b8b10905b8e8f54_3
4afff5745b79 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-scheduler-docker-desktop_kube-system_124f5bab49bf26c80b1c1be19641c3e8_3
d6db038ea9b3 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_3
9ca30180ab45 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_compose-api-57ff65b8c7-cc6qf_docker_460bc96e-dcfe-11e9-9213-025000000001_4
338d226f12d9 a8c3d87a58e7 "/compose-controller…" 3 hours ago Up 3 hours k8s_compose_compose-6c67d745f6-9v5k5_docker_461b37ab-dcfe-11e9-9213-025000000001_3
6e23ff5c4b86 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_coredns-fb8b8dccf-7vp79_kube-system_35ecd610-dcfe-11e9-9213-025000000001_5
258ced5c1498 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_coredns-fb8b8dccf-h7tvr_kube-system_35edfd50-dcfe-11e9-9213-025000000001_4
0ee3d792d79e k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_compose-6c67d745f6-9v5k5_docker_461b37ab-dcfe-11e9-9213-025000000001_4
I also ran kubectl with --namespace specified. When I execute a plain kubectl get pods, it says no resources found.
$ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
coredns-5644d7b6d9-frlhd 1/1 Running 1 9m30s
coredns-5644d7b6d9-xmdtg 1/1 Running 1 9m30s
etcd-minikube 1/1 Running 1 8m29s
kube-addon-manager-minikube 1/1 Running 1 8m23s
kube-apiserver-minikube 1/1 Running 1 8m42s
kube-controller-manager-minikube 1/1 Running 1 8m22s
kube-proxy-48kxn 1/1 Running 1 9m30s
kube-scheduler-minikube 1/1 Running 1 8m32s
storage-provisioner 1/1 Running 1 9m27s
I also tried stopping the containers with docker stop. They stopped, but a few seconds later they started again.
I also ran minikube delete, but it only deleted minikube; docker ps still showed the containers.
I'd like to start from the beginning again.

Don't try to delete the pause containers:
k8s.gcr.io/pause:3.1 "/pause"
A Kubernetes pod can run multiple containers, and they all share the same network namespace. The pause container exists precisely to hold that shared network namespace; that is how a Kubernetes pod is constructed.
For more info, please go through this.
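You can see this sharing directly in Docker. A quick sketch (the IDs below pair a coredns container with its matching k8s_POD_ pause container from the listing above; substitute IDs from your own docker ps output, and expect the full 64-character ID in the result):
$ docker inspect --format '{{.HostConfig.NetworkMode}}' 222239366ae5
container:6e23ff5c4b86...
The application container's network mode points at its pause container, meaning it joined the pause container's network namespace.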

If you want to reset your cluster, you can first list all namespaces using kubectl get namespaces, then delete them using kubectl delete namespaces namespace_name.
However, you can't delete the namespaces default, kube-system, and kube-public, as those are protected by the cluster. What you can do is remove all Pods from the default and kube-public namespaces using kubectl delete --all pods --namespace=default; kubectl delete --all pods --namespace=kube-public. You shouldn't touch the kube-system namespace, as it contains resources that are mandatory for the cluster to function.
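As a sketch, you can also clear out everything you created in the default namespace in one go; note that all here is kubectl's shorthand category covering pods, services, deployments, and a few other common workload kinds, not literally every resource type:
$ kubectl delete all --all --namespace=default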

You can delete resources by pointing kubectl at the same files you installed them from:
kubectl delete -f <file location>
that is, whatever file you originally applied with:
kubectl apply -f <file location>
If Istio is involved, you can remove the tags associated with it using:
istioctl tag remove <profile>
Note: you can refer to manifests/profiles for the available profiles.
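A minimal sketch of the apply/delete pairing (bookinfo.yaml is a placeholder for whatever manifest you actually applied):
$ kubectl apply -f bookinfo.yaml
$ kubectl delete -f bookinfo.yaml
Deleting with the same file removes exactly the set of resources the apply created.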

My situation was similar to yours: I forgot what I had done, then found many k8s_* containers running in Docker which, once deleted, started again automatically.
Unchecking Docker Desktop -> Settings -> Kubernetes -> Enable Kubernetes worked for me. Hope that helps.
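To confirm they are really gone afterwards, a quick sketch (the name filter matches the k8s_ prefix the kubelet gives its containers):
$ docker ps -a --filter "name=k8s_" -q
Empty output means no Kubernetes-managed containers are left.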

Related

Docker info shows containers but docker container ls doesn't

When I run docker info, it shows that I have 18 containers running.
% docker info
Client:
Debug Mode: false
Server:
Containers: 18
Running: 18
Paused: 0
Stopped: 0
Images: 9
...
I want to delete these containers, but when I run docker container ls -a, it shows an empty list. How can I find them?
These containers are also preventing me from deleting images:
% docker rmi -f 1e94481e8f30
Error response from daemon: conflict: unable to delete 1e94481e8f30 (cannot be forced) - image is being used by running container 7e9b08a0007b
You are probably running Kubernetes.
After stopping Kubernetes, to remove all stopped containers and all images that don't have at least one container associated with them:
docker system prune --all
To stop all running containers
docker stop $(docker ps -a -q)
To remove all containers
docker rm $(docker ps -a -q)
You should be able to delete all images after that
Those 18 containers belong to Kubernetes. You can check this by going to Preferences > Kubernetes and checking Show system containers (advanced).
After that, just run docker container ls -a again and you will see those 18 containers.
These are the containers you are not seeing unless you check that option:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6c3c65d4bcf4 a8c3d87a58e7 "/compose-controller…" 3 minutes ago Up 3 minutes k8s_compose_compose-6c67d745f6-sr4zj_docker_34e7ef25-166e-11ea-857d-025000000001_2
663d6419ce76 eb516548c180 "/coredns -conf /etc…" 3 minutes ago Up 3 minutes k8s_coredns_coredns-6dcc67dcbc-2twts_kube-system_0c8d1f5f-166e-11ea-857d-025000000001_2
d04a4caf922d eb516548c180 "/coredns -conf /etc…" 3 minutes ago Up 3 minutes k8s_coredns_coredns-6dcc67dcbc-chk2l_kube-system_0c8df4b4-166e-11ea-857d-025000000001_2
324d5d216b07 f3591b2cb223 "/api-server --kubec…" 3 minutes ago Up 3 minutes k8s_compose_compose-api-57ff65b8c7-svv9t_docker_34e161f1-166e-11ea-857d-025000000001_2
f9f74acd5ab6 849af609e0c6 "/usr/local/bin/kube…" 3 minutes ago Up 3 minutes k8s_kube-proxy_kube-proxy-w4x7l_kube-system_0c95526c-166e-11ea-857d-025000000001_2
3cbfa75f1466 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_compose-6c67d745f6-sr4zj_docker_34e7ef25-166e-11ea-857d-025000000001_2
f11f1cc4bbba k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_coredns-6dcc67dcbc-chk2l_kube-system_0c8df4b4-166e-11ea-857d-025000000001_2
cbb52fdaf130 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_coredns-6dcc67dcbc-2twts_kube-system_0c8d1f5f-166e-11ea-857d-025000000001_2
5ad88766f27d k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_compose-api-57ff65b8c7-svv9t_docker_34e161f1-166e-11ea-857d-025000000001_2
9f326ea7db5d k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-proxy-w4x7l_kube-system_0c95526c-166e-11ea-857d-025000000001_2
ea36a7a0d248 f1e3e5f9f93e "kube-scheduler --bi…" 3 minutes ago Up 3 minutes k8s_kube-scheduler_kube-scheduler-docker-desktop_kube-system_ef4d089e81b94aa15841e51ed8c41712_2
f3dfa711ea0f 1e94481e8f30 "kube-apiserver --ad…" 3 minutes ago Up 3 minutes k8s_kube-apiserver_kube-apiserver-docker-desktop_kube-system_b1dff398070b11d23d8d2653b78d430e_2
1e5cf76eaf20 2c4adeb21b4f "etcd --advertise-cl…" 3 minutes ago Up 3 minutes k8s_etcd_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_2
d631fca0d4ac k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-scheduler-docker-desktop_kube-system_ef4d089e81b94aa15841e51ed8c41712_2
20242b387b05 36a8001a79fd "kube-controller-man…" 3 minutes ago Up 3 minutes k8s_kube-controller-manager_kube-controller-manager-docker-desktop_kube-system_86e291a2049db314a5eca69a05cf6ced_2
b32c2d63090f k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-apiserver-docker-desktop_kube-system_b1dff398070b11d23d8d2653b78d430e_2
4d6e49f60ead k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_2
9035ccdcae5d k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-controller-manager-docker-desktop_kube-system_86e291a2049db314a5eca69a05cf6ced_2
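If you only want to list the Kubernetes-managed containers, a quick sketch using Docker's name filter (the k8s_ name prefix is added by the kubelet):
$ docker container ls -a --filter "name=k8s_" --format "{{.Names}}"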

How can I stop the kafka container?

I set up my Kafka containers by following the tutorial from here: https://success.docker.com/article/getting-started-with-kafka
Then I found that I can't remove the containers anymore, even though I tried docker container prune. The containers are still running.
What should I do?
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
caa0b94c1d98 qnib/plain-kafka:1.1.0 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) broker.2.3fij6pt90qt9sb9aco0i2dpys
b888cb6f783a qnib/plain-kafka:1.1.0 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) broker.3.xqmjnfnfg7ha46lf6drlrq4ki
dcdda2d778c2 qnib/plain-kafka:1.1.0 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) broker.1.gtgluxt6q58z2irzgfmu969ba
843def0b24fb qnib/plain-zkui "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) zkui.1.7zks618eae8sp4woc7araydix
d7ced19be88c qnib/plain-kafka-manager:2018-04-25 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) manager.1.jdu5gnprhr4d982vz50511rhg
a67ac962e682 qnib/plain-zookeeper:2018-04-25 "/usr/local/bin/entr…" 3 minutes ago Up 3 minutes (healthy) 2181/tcp, 2888/tcp, 3888/tcp zookeeper.1.xar7cmdgozdj79orow0bmj3ev
880121f2fee5 qnib/golang-kafka-producer:2018-05-01.5 "kafka-producer" 3 minutes ago Up 3 minutes (healthy) producer.2.hety8za590v1twdgj2byvrmse
b6487d29812e qnib/golang-kafka-producer:2018-05-01.5 "kafka-producer" 3 minutes ago Up 3 minutes (healthy) producer.1.5oz02c8cw5oefc97xbarq5qoa
8b3a81905e90 qnib/golang-kafka-producer:2018-05-01.5 "kafka-producer" 3 minutes ago Up 3 minutes (healthy) producer.3.p8uh3hzr22fgm7u4gl1p3fiyw
I found I had to use docker service rm to remove the services, because the replica settings kept recreating the containers.
The prune command doesn't work against running containers. You will need to either stop or kill them.
docker kill caa0b94c1d98
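Because that tutorial deploys Kafka as swarm services, removing things at the service or stack level is what actually stops the automatic restarts. A sketch (the stack name kafka is an assumption; check docker stack ls for the real name):
$ docker service ls
$ docker stack rm kafka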

Docker-compose up -d: image not created

I am trying to create a basic web page with docker-compose.
This is my yml file:
identidock:
  build: .
  ports:
    - "5000:5000"
  environment:
    ENV: DEV
  volumes:
    - ./app:/app
When I run
docker-compose up -d
it shows
Starting identidock_identidock_1 ... done
But if I check images
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
identidock_identidock latest b5003205377f 12 days ago 698MB
identidock latest 8eafce868d95 12 days ago 698MB
<none> <none> de77d0555129 13 days ago 698MB
<none> <none> 2f8bfc8f0a95 13 days ago 697MB
<none> <none> a42d37d82f28 2 weeks ago 535MB
<none> <none> 592d8c832533 2 weeks ago 695MB
python 3.4 41f9e544ec6c 2 weeks ago 684MB
It is obvious that a new image has not been created. If I go to http://localhost:5000/, I get:
Firefox can’t establish a connection to the server at localhost:5000.
This is docker ps -a output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0414117eadd8 identidock_identidock "/cmd.sh" 12 days ago Exited (255) 11 days ago 9090/tcp, 0.0.0.0:5000->5000/tcp, 9191/tcp blissful_easley
4146fd976547 identidock_identidock:latest "/cmd.sh" 12 days ago Exited (255) 11 days ago 9090/tcp, 9191/tcp agitated_leakey
15d49655b290 identidock_identidock "/cmd.sh" 12 days ago Exited (1) 23 minutes ago identidock_identidock_1
And
docker-compose ps
Name Command State Ports
--------------------------------------------------
identidock_identidock_1 /cmd.sh Exit 1
Why?
The container may not have started. Check docker-compose ps; if the containers listed are not in the Up state, use docker-compose logs identidock to view the logs, as sketched below.
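A sketch of that debugging loop (the service name identidock comes from the compose file above; --build forces the image to be rebuilt once the underlying problem is fixed):
$ docker-compose ps
$ docker-compose logs identidock
$ docker-compose up -d --build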

bookinfo example app crashes on istio

I am evaluating Istio and trying to deploy the bookinfo example app provided with the Istio installation. While doing that, I am facing the following issue.
Environment: Non production
1. Server Node - red hat enterprise linux 7 64 bit VM [3.10.0-693.11.6.el7.x86_64]
Server in customer secure vpn with no access to enterprise/public DNS.
2. docker client and server: 1.12.6
3. kubernetes client - 1.9.1, server - 1.8.4
4. kubernetes install method: kubeadm.
5. kubernetes deployment mode: single node with master and slave.
6. Istio install method:
- istio version: 0.5.0
- no SSL, no automatic side car injection, no Helm.
- Instructions followed: https://istio.io/docs/setup/kubernetes/quick-start.html
- Cloned the istio github project - https://github.com/istio/istio.
- Used the istio.yaml and bookinfo.yaml files for the installation and example implementation.
Issue:
The installation of istio client and control plane components went through fine.
The control plane also starts up fine.
However, when I launch the bookinfo app, the app's proxy init containers crash with a cryptic "iptables: Chain already exists" log message.
ISTIO CONTROL PLANE
--------------------
$ kubectl get deployments,pods,svc,ep -n istio-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/istio-ca 1 1 1 1 2d
deploy/istio-ingress 1 1 1 1 2d
deploy/istio-mixer 1 1 1 1 2d
deploy/istio-pilot 1 1 1 1 2d
NAME READY STATUS RESTARTS AGE
po/istio-ca-5796758d78-md7fl 1/1 Running 0 2d
po/istio-ingress-f7ff9dcfd-fl85s 1/1 Running 0 2d
po/istio-mixer-69f48ddb6c-d4ww2 3/3 Running 0 2d
po/istio-pilot-69cc4dd5cb-fglsg 2/2 Running 0 2d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/istio-ingress LoadBalancer 10.103.67.68 <pending> 80:31445/TCP,443:30412/TCP 2d
svc/istio-mixer ClusterIP 10.101.47.150 <none> 9091/TCP,15004/TCP,9093/TCP,9094/TCP,9102/TCP,9125/UDP,42422/TCP 2d
svc/istio-pilot ClusterIP 10.110.58.219 <none> 15003/TCP,8080/TCP,9093/TCP,443/TCP 2d
NAME ENDPOINTS AGE
ep/istio-ingress 10.244.0.22:443,10.244.0.22:80 2d
ep/istio-mixer 10.244.0.20:9125,10.244.0.20:9094,10.244.0.20:15004 + 4 more... 2d
ep/istio-pilot 10.244.0.21:443,10.244.0.21:15003,10.244.0.21:8080 + 1 more... 2d
BOOKINFO APP
-------------
$ kubectl get deployments,pods,svc,ep
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/details-v1 1 1 1 1 2d
deploy/productpage-v1 1 1 1 1 2d
deploy/ratings-v1 1 1 1 1 2d
deploy/reviews-v1 1 1 1 1 2d
deploy/reviews-v2 1 1 1 1 2d
deploy/reviews-v3 1 1 1 1 2d
NAME READY STATUS RESTARTS AGE
po/details-v1-df5d6ff55-92jrx 0/2 Init:CrashLoopBackOff 738 2d
po/productpage-v1-85f65888f5-xdkt6 0/2 Init:CrashLoopBackOff 738 2d
po/ratings-v1-668b7f9ddc-9nhcw 0/2 Init:CrashLoopBackOff 738 2d
po/reviews-v1-5845b57d57-2cjvn 0/2 Init:CrashLoopBackOff 738 2d
po/reviews-v2-678b446795-hkkvv 0/2 Init:CrashLoopBackOff 738 2d
po/reviews-v3-8b796f-64lm8 0/2 Init:CrashLoopBackOff 738 2d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/details ClusterIP 10.104.237.100 <none> 9080/TCP 2d
svc/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 70d
svc/productpage ClusterIP 10.100.136.14 <none> 9080/TCP 2d
svc/ratings ClusterIP 10.105.166.190 <none> 9080/TCP 2d
svc/reviews ClusterIP 10.110.221.19 <none> 9080/TCP 2d
NAME ENDPOINTS AGE
ep/details 10.244.0.24:9080 2d
ep/kubernetes NNN.NN.NN.NNN:6443 70d
ep/productpage 10.244.0.45:9080 2d
ep/ratings 10.244.0.25:9080 2d
ep/reviews 10.244.0.26:9080,10.244.0.28:9080,10.244.0.29:9080 2d
PROXY INIT CRASHED CONTAINERS
------------------------------
$ docker ps -a | grep -i istio | grep -i exit
9109bafcf9e7 docker.io/istio/proxy_init#sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local/bin/prepa" 11 seconds ago Exited (1) 10 seconds ago k8s_istio-init_details-v1-df5d6ff55-92jrx_default_b54d921c-0dcd-11e8-8de9-0050568e45b4_740
0ed3b188d7ba docker.io/istio/proxy_init#sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local/bin/prepa" 27 seconds ago Exited (1) 26 seconds ago k8s_istio-init_reviews-v2-678b446795-hkkvv_default_b557b5a5-0dcd-11e8-8de9-0050568e45b4_740
893fcec0b01e docker.io/istio/proxy_init#sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local/bin/prepa" About a minute ago Exited (1) About a minute ago k8s_istio-init_reviews-v1-5845b57d57-2cjvn_default_b555bb75-0dcd-11e8-8de9-0050568e45b4_740
a2a036273402 docker.io/istio/proxy_init#sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local/bin/prepa" About a minute ago Exited (1) About a minute ago k8s_istio-init_productpage-v1-85f65888f5-xdkt6_default_b579277b-0dcd-11e8-8de9-0050568e45b4_740
520beb6779e0 docker.io/istio/proxy_init#sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local/bin/prepa" About a minute ago Exited (1) About a minute ago k8s_istio-init_reviews-v3-8b796f-64lm8_default_b559d9ef-0dcd-11e8-8de9-0050568e45b4_740
91a0f41f5fde docker.io/istio/proxy_init#sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local/bin/prepa" 3 minutes ago Exited (1) 3 minutes ago k8s_istio-init_ratings-v1-668b7f9ddc-9nhcw_default_b55128a5-0dcd-11e8-8de9-0050568e45b4_740
PROXY PROCESSES FOR EACH ISTIO COMPONENT
-----------------------------------------
$ docker ps | grep -vi exit | grep proxy
4d9b37839e44 docker.io/istio/proxy#sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_reviews-v2-678b446795-hkkvv_default_b557b5a5-0dcd-11e8-8de9-0050568e45b4_0
1c72e3a990cb docker.io/istio/proxy#sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_productpage-v1-85f65888f5-xdkt6_default_b579277b-0dcd-11e8-8de9-0050568e45b4_0
f6ffcaf4b24b docker.io/istio/proxy#sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_reviews-v1-5845b57d57-2cjvn_default_b555bb75-0dcd-11e8-8de9-0050568e45b4_0
b66b7ab90a2d docker.io/istio/proxy#sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_ratings-v1-668b7f9ddc-9nhcw_default_b55128a5-0dcd-11e8-8de9-0050568e45b4_0
08bf2370b5be docker.io/istio/proxy#sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_reviews-v3-8b796f-64lm8_default_b559d9ef-0dcd-11e8-8de9-0050568e45b4_0
0c10d8d594bc docker.io/istio/proxy#sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_details-v1-df5d6ff55-92jrx_default_b54d921c-0dcd-11e8-8de9-0050568e45b4_0
6134fa756f35 docker.io/istio/proxy#sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_istio-pilot-69cc4dd5cb-fglsg_istio-system_5ecf54b6-0dcd-11e8-8de9-0050568e45b4_0
9a18ea74b6bf docker.io/istio/proxy#sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_istio-mixer-69f48ddb6c-d4ww2_istio-system_5e8801ab-0dcd-11e8-8de9-0050568e45b4_0
5db18d722bb1 docker.io/istio/proxy#sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-ingress_istio-ingress-f7ff9dcfd-fl85s_istio-system_5ed6333d-0dcd-11e8-8de9-0050568e45b4_0
$ docker ps | egrep -iv "proxy|pause|kube-|etcd|defaultbackend|ingress"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker Containers for the apps (These seem to have started up without issues)
------------------------------
61951f88b83c docker.io/istio/examples-bookinfo-reviews-v2#sha256:e390023aa6180827373293747f1bff8846ffdf19fdcd46ad91549d3277dfd4ea "/bin/sh -c '/opt/ibm" 2 days ago Up 2 days k8s_reviews_reviews-v2-678b446795-hkkvv_default_b557b5a5-0dcd-11e8-8de9-0050568e45b4_0
18d2137257c0 docker.io/istio/examples-bookinfo-productpage-v1#sha256:ce983ff8f7563e582a8ff1adaf4c08c66a44db331208e4cfe264ae9ada0c5a48 "/bin/sh -c 'python p" 2 days ago Up 2 days k8s_productpage_productpage-v1-85f65888f5-xdkt6_default_b579277b-0dcd-11e8-8de9-0050568e45b4_0
5ba97591e5c7 docker.io/istio/examples-bookinfo-reviews-v1#sha256:aac2cfc27fad662f7a4473ea549d8980eb00cd72e590749fe4186caf5abc6706 "/bin/sh -c '/opt/ibm" 2 days ago Up 2 days k8s_reviews_reviews-v1-5845b57d57-2cjvn_default_b555bb75-0dcd-11e8-8de9-0050568e45b4_0
ed11b00eff22 docker.io/istio/examples-bookinfo-reviews-v3#sha256:6829a5dfa14d10fa359708cf6c11ec9022a3d047a089e73dea3f3bfa41f7ed66 "/bin/sh -c '/opt/ibm" 2 days ago Up 2 days k8s_reviews_reviews-v3-8b796f-64lm8_default_b559d9ef-0dcd-11e8-8de9-0050568e45b4_0
be88278186c2 docker.io/istio/examples-bookinfo-ratings-v1#sha256:b14905701620fc7217c12330771cd426677bc5314661acd1b2c2aeedc5378206 "/bin/sh -c 'node rat" 2 days ago Up 2 days k8s_ratings_ratings-v1-668b7f9ddc-9nhcw_default_b55128a5-0dcd-11e8-8de9-0050568e45b4_0
e1c749eedf3c docker.io/istio/examples-bookinfo-details-v1#sha256:02c863b54d676489c7e006948e254439c63f299290d664e5c0eaf2209ee7865e "/bin/sh -c 'ruby det" 2 days ago Up 2 days k8s_details_details-v1-df5d6ff55-92jrx_default_b54d921c-0dcd-11e8-8de9-0050568e45b4_0
Docker Containers for Control Plane components
-----------------------------------------------
(CA: no ssl setup done)
5847934ca3c6 docker.io/istio/istio-ca#sha256:b3aaa5e5df2c16b13ea641d9f6b21f1fa3fb01b2f36a6df5928f17815aa63307 "/usr/local/bin/istio" 2 days ago Up 2 days k8s_istio-ca_istio-ca-5796758d78-md7fl_istio-system_5ed9a9e4-0dcd-11e8-8de9-0050568e45b4_0
(PILOT:
[1] W0209 19:13:58.364556 1 client_config.go:529] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.)
[2] warn AvailabilityZone couldn't find the given cluster node
[3] warn AvailabilityZone unexpected service-node: invalid node type (valid types: ingress, sidecar, router in the service node "mixer~~.~.svc.cluster.local"
[4] warn AvailabilityZone couldn't find the given cluster node
pattern 2, 3, 4 repeats)
f7a7816bd147 docker.io/istio/pilot#sha256:96c2174f30d084e0ed950ea4b9332853f6cd0ace904e731e7086822af726fa2b "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_discovery_istio-pilot-69cc4dd5cb-fglsg_istio-system_5ecf54b6-0dcd-11e8-8de9-0050568e45b4_0
(MIXER: W0209 19:13:57.948480 1 client_config.go:529] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.)
f4c85eb7f652 docker.io/istio/mixer#sha256:a2d5f14fd55198239817b6c1dac85651ac3e124c241feab795d72d2ffa004bda "/usr/local/bin/mixs " 2 days ago Up 2 days k8s_mixer_istio-mixer-69f48ddb6c-d4ww2_istio-system_5e8801ab-0dcd-11e8-8de9-0050568e45b4_0
(STATD EXPORTER: No issues/errors)
9fa2865b7e9b docker.io/prom/statsd-exporter#sha256:d08dd0db8eaaf716089d6914ed0236a794d140f4a0fe1fd165cda3e673d1ed4c "/bin/statsd_exporter" 2 days ago Up 2 days k8s_statsd-to-prometheus_istio-mixer-69f48ddb6c-d4ww2_istio-system_5e8801ab-0dcd-11e8-8de9-0050568e45b4_0
This question would make a fantastic report at https://github.com/istio/issues/issues/new - thanks for all the details!
Can you try adding
privileged: true
to the container that crashes?
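For illustration, a minimal sketch of where that flag would sit in the pod spec (the container name istio-init comes from the docker ps output above; the image tag is inferred from the Istio version in the question, and all surrounding fields are elided):
initContainers:
- name: istio-init
  image: docker.io/istio/proxy_init:0.5.0
  securityContext:
    privileged: true   # run the init container with full privileges so it can program iptables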
@laurent-demailly, thank you for your suggestion about the privileged flag.
I had posted the query on GitHub the day before yesterday and got a response yesterday with a few suggestions, which I tried, and it worked! :-)
Now none of the containers are crashing, and I am able to access the bookinfo app via the ingress gateway.
Here's the URL to the GitHub post:
github.com/istio/issues/issues/197

Prometheus query for monitoring docker containers filtered by name and image

I have several docker containers running:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
736caaa764f4 ubuntu "/bin/bash" 2 hours ago Up 2 hours quirky_morse
e2869c98ee1a ubuntu "/bin/bash" 2 hours ago Up 2 hours sleepy_wilson
e4149472a2da ubuntu "/bin/bash" 2 hours ago Up 2 hours cranky_booth
70bb44ac5d24 grafana/grafana "/run.sh" 2 hours ago Up 2 hours 0.0.0.0:3000->3000/tcp microservicemonitoring_grafana_1
e4b30881a83e prom/prometheus "/bin/prometheus -..." 2 hours ago Up 2 hours 0.0.0.0:9090->9090/tcp prometheus
281f792380f9 prom/node-exporter "/bin/node_exporte..." 2 hours ago Up 2 hours 9100/tcp node-exporter
17810c718b29 google/cadvisor "/usr/bin/cadvisor..." 2 hours ago Up 2 hours 8080/tcp microservicemonitoring_cadvisor_1
77711de421e2 prom/alertmanager "/bin/alertmanager..." 2 hours ago Up 2 hours 0.0.0.0:9093->9093/tcp microservicemonitoring_alertmanager_1
What I want to do is build graphs for containers filtered by name and image.
Example: the containers built from the ubuntu image (quirky_morse, sleepy_wilson, cranky_booth), plus the prometheus container.
I can filter containers by image with this type of query:
sum by (name) (rate(container_network_receive_bytes_total{image="ubuntu"} [1m] ) )
As you can see, I get graphs of three containers (flatlines, because they are doing nothing).
Now I want to add an additional filter parameter, name, and it does not work:
sum by (name) (rate(container_network_receive_bytes_total{image="ubuntu", name="prometheus"} [1m] ) )
What I want to get is three graphs for the containers derived from the image "ubuntu", plus the one named "prometheus", whatever its origin image.
You can't do this with one selector, because the label matchers inside a selector are ANDed together.
The proper solution here is to use Grafana, which supports graphing multiple expressions on one graph.
At the PromQL level, the best you can do is combine two selectors with the or operator:
rate(container_network_receive_bytes_total{image="ubuntu"}[1m]) or rate(container_network_receive_bytes_total{name="prometheus"}[1m])
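As a sketch, in Grafana you would put the two expressions on one panel as separate queries (A and B are just Grafana's default query labels):
A: sum by (name) (rate(container_network_receive_bytes_total{image="ubuntu"}[1m]))
B: sum by (name) (rate(container_network_receive_bytes_total{name="prometheus"}[1m]))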
