Bookinfo example app crashes on Istio - Docker

I am evaluating Istio and trying to deploy the Bookinfo example app that ships with the Istio installation. While doing that, I am facing the following issue.
Environment: non-production
1. Server node: Red Hat Enterprise Linux 7 64-bit VM [3.10.0-693.11.6.el7.x86_64]
Server is in a customer's secure VPN with no access to enterprise/public DNS.
2. Docker client and server: 1.12.6
3. Kubernetes client: 1.9.1, server: 1.8.4
4. Kubernetes install method: kubeadm
5. Kubernetes deployment mode: single node acting as both master and worker
6. Istio install method:
- Istio version: 0.5.0
- No SSL, no automatic sidecar injection, no Helm.
- Instructions followed: https://istio.io/docs/setup/kubernetes/quick-start.html
- Cloned the Istio GitHub project: https://github.com/istio/istio
- Used the istio.yaml and bookinfo.yaml files for the installation and the example app.
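For reference, the deployment commands were the standard quick-start ones, roughly as below (a sketch from memory: the paths match the 0.5.0 release layout and may differ in other versions; since automatic sidecar injection is off, the app is injected manually with istioctl kube-inject):
$ kubectl apply -f install/kubernetes/istio.yaml
$ kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)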
Issue:
The installation of the Istio client and control-plane components went through fine.
The control plane also starts up fine.
However, when I launch the Bookinfo app, the app's proxy-init containers crash with a cryptic "iptables: Chain already exists" log message.
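(Diagnostic note: the full log of a crashed init container can also be pulled through kubectl rather than docker, e.g. for the details pod listed below; --previous shows the last failed attempt:
$ kubectl logs details-v1-df5d6ff55-92jrx -c istio-init --previous
)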
ISTIO CONTROL PLANE
--------------------
$ kubectl get deployments,pods,svc,ep -n istio-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/istio-ca 1 1 1 1 2d
deploy/istio-ingress 1 1 1 1 2d
deploy/istio-mixer 1 1 1 1 2d
deploy/istio-pilot 1 1 1 1 2d
NAME READY STATUS RESTARTS AGE
po/istio-ca-5796758d78-md7fl 1/1 Running 0 2d
po/istio-ingress-f7ff9dcfd-fl85s 1/1 Running 0 2d
po/istio-mixer-69f48ddb6c-d4ww2 3/3 Running 0 2d
po/istio-pilot-69cc4dd5cb-fglsg 2/2 Running 0 2d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/istio-ingress LoadBalancer 10.103.67.68 <pending> 80:31445/TCP,443:30412/TCP 2d
svc/istio-mixer ClusterIP 10.101.47.150 <none> 9091/TCP,15004/TCP,9093/TCP,9094/TCP,9102/TCP,9125/UDP,42422/TCP 2d
svc/istio-pilot ClusterIP 10.110.58.219 <none> 15003/TCP,8080/TCP,9093/TCP,443/TCP 2d
NAME ENDPOINTS AGE
ep/istio-ingress 10.244.0.22:443,10.244.0.22:80 2d
ep/istio-mixer 10.244.0.20:9125,10.244.0.20:9094,10.244.0.20:15004 + 4 more... 2d
ep/istio-pilot 10.244.0.21:443,10.244.0.21:15003,10.244.0.21:8080 + 1 more... 2d
BOOKINFO APP
-------------
$ kubectl get deployments,pods,svc,ep
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/details-v1 1 1 1 1 2d
deploy/productpage-v1 1 1 1 1 2d
deploy/ratings-v1 1 1 1 1 2d
deploy/reviews-v1 1 1 1 1 2d
deploy/reviews-v2 1 1 1 1 2d
deploy/reviews-v3 1 1 1 1 2d
NAME READY STATUS RESTARTS AGE
po/details-v1-df5d6ff55-92jrx 0/2 Init:CrashLoopBackOff 738 2d
po/productpage-v1-85f65888f5-xdkt6 0/2 Init:CrashLoopBackOff 738 2d
po/ratings-v1-668b7f9ddc-9nhcw 0/2 Init:CrashLoopBackOff 738 2d
po/reviews-v1-5845b57d57-2cjvn 0/2 Init:CrashLoopBackOff 738 2d
po/reviews-v2-678b446795-hkkvv 0/2 Init:CrashLoopBackOff 738 2d
po/reviews-v3-8b796f-64lm8 0/2 Init:CrashLoopBackOff 738 2d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/details ClusterIP 10.104.237.100 <none> 9080/TCP 2d
svc/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 70d
svc/productpage ClusterIP 10.100.136.14 <none> 9080/TCP 2d
svc/ratings ClusterIP 10.105.166.190 <none> 9080/TCP 2d
svc/reviews ClusterIP 10.110.221.19 <none> 9080/TCP 2d
NAME ENDPOINTS AGE
ep/details 10.244.0.24:9080 2d
ep/kubernetes NNN.NN.NN.NNN:6443 70d
ep/productpage 10.244.0.45:9080 2d
ep/ratings 10.244.0.25:9080 2d
ep/reviews 10.244.0.26:9080,10.244.0.28:9080,10.244.0.29:9080 2d
PROXY INIT CRASHED CONTAINERS
------------------------------
$ docker ps -a | grep -i istio | grep -i exit
9109bafcf9e7 docker.io/istio/proxy_init@sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local/bin/prepa" 11 seconds ago Exited (1) 10 seconds ago k8s_istio-init_details-v1-df5d6ff55-92jrx_default_b54d921c-0dcd-11e8-8de9-0050568e45b4_740
0ed3b188d7ba docker.io/istio/proxy_init@sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local/bin/prepa" 27 seconds ago Exited (1) 26 seconds ago k8s_istio-init_reviews-v2-678b446795-hkkvv_default_b557b5a5-0dcd-11e8-8de9-0050568e45b4_740
893fcec0b01e docker.io/istio/proxy_init@sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local/bin/prepa" About a minute ago Exited (1) About a minute ago k8s_istio-init_reviews-v1-5845b57d57-2cjvn_default_b555bb75-0dcd-11e8-8de9-0050568e45b4_740
a2a036273402 docker.io/istio/proxy_init@sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local/bin/prepa" About a minute ago Exited (1) About a minute ago k8s_istio-init_productpage-v1-85f65888f5-xdkt6_default_b579277b-0dcd-11e8-8de9-0050568e45b4_740
520beb6779e0 docker.io/istio/proxy_init@sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local/bin/prepa" About a minute ago Exited (1) About a minute ago k8s_istio-init_reviews-v3-8b796f-64lm8_default_b559d9ef-0dcd-11e8-8de9-0050568e45b4_740
91a0f41f5fde docker.io/istio/proxy_init@sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local/bin/prepa" 3 minutes ago Exited (1) 3 minutes ago k8s_istio-init_ratings-v1-668b7f9ddc-9nhcw_default_b55128a5-0dcd-11e8-8de9-0050568e45b4_740
PROXY PROCESSES FOR EACH ISTIO COMPONENT
-----------------------------------------
$ docker ps | grep -vi exit | grep proxy
4d9b37839e44 docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_reviews-v2-678b446795-hkkvv_default_b557b5a5-0dcd-11e8-8de9-0050568e45b4_0
1c72e3a990cb docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_productpage-v1-85f65888f5-xdkt6_default_b579277b-0dcd-11e8-8de9-0050568e45b4_0
f6ffcaf4b24b docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_reviews-v1-5845b57d57-2cjvn_default_b555bb75-0dcd-11e8-8de9-0050568e45b4_0
b66b7ab90a2d docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_ratings-v1-668b7f9ddc-9nhcw_default_b55128a5-0dcd-11e8-8de9-0050568e45b4_0
08bf2370b5be docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_reviews-v3-8b796f-64lm8_default_b559d9ef-0dcd-11e8-8de9-0050568e45b4_0
0c10d8d594bc docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_details-v1-df5d6ff55-92jrx_default_b54d921c-0dcd-11e8-8de9-0050568e45b4_0
6134fa756f35 docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_istio-pilot-69cc4dd5cb-fglsg_istio-system_5ecf54b6-0dcd-11e8-8de9-0050568e45b4_0
9a18ea74b6bf docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_istio-mixer-69f48ddb6c-d4ww2_istio-system_5e8801ab-0dcd-11e8-8de9-0050568e45b4_0
5db18d722bb1 docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-ingress_istio-ingress-f7ff9dcfd-fl85s_istio-system_5ed6333d-0dcd-11e8-8de9-0050568e45b4_0
$ docker ps | egrep -iv "proxy|pause|kube-|etcd|defaultbackend|ingress"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker Containers for the apps (These seem to have started up without issues)
------------------------------
61951f88b83c docker.io/istio/examples-bookinfo-reviews-v2@sha256:e390023aa6180827373293747f1bff8846ffdf19fdcd46ad91549d3277dfd4ea "/bin/sh -c '/opt/ibm" 2 days ago Up 2 days k8s_reviews_reviews-v2-678b446795-hkkvv_default_b557b5a5-0dcd-11e8-8de9-0050568e45b4_0
18d2137257c0 docker.io/istio/examples-bookinfo-productpage-v1@sha256:ce983ff8f7563e582a8ff1adaf4c08c66a44db331208e4cfe264ae9ada0c5a48 "/bin/sh -c 'python p" 2 days ago Up 2 days k8s_productpage_productpage-v1-85f65888f5-xdkt6_default_b579277b-0dcd-11e8-8de9-0050568e45b4_0
5ba97591e5c7 docker.io/istio/examples-bookinfo-reviews-v1@sha256:aac2cfc27fad662f7a4473ea549d8980eb00cd72e590749fe4186caf5abc6706 "/bin/sh -c '/opt/ibm" 2 days ago Up 2 days k8s_reviews_reviews-v1-5845b57d57-2cjvn_default_b555bb75-0dcd-11e8-8de9-0050568e45b4_0
ed11b00eff22 docker.io/istio/examples-bookinfo-reviews-v3@sha256:6829a5dfa14d10fa359708cf6c11ec9022a3d047a089e73dea3f3bfa41f7ed66 "/bin/sh -c '/opt/ibm" 2 days ago Up 2 days k8s_reviews_reviews-v3-8b796f-64lm8_default_b559d9ef-0dcd-11e8-8de9-0050568e45b4_0
be88278186c2 docker.io/istio/examples-bookinfo-ratings-v1@sha256:b14905701620fc7217c12330771cd426677bc5314661acd1b2c2aeedc5378206 "/bin/sh -c 'node rat" 2 days ago Up 2 days k8s_ratings_ratings-v1-668b7f9ddc-9nhcw_default_b55128a5-0dcd-11e8-8de9-0050568e45b4_0
e1c749eedf3c docker.io/istio/examples-bookinfo-details-v1@sha256:02c863b54d676489c7e006948e254439c63f299290d664e5c0eaf2209ee7865e "/bin/sh -c 'ruby det" 2 days ago Up 2 days k8s_details_details-v1-df5d6ff55-92jrx_default_b54d921c-0dcd-11e8-8de9-0050568e45b4_0
Docker Containers for Control Plane components
-----------------------------------------------
(CA: no SSL setup done)
5847934ca3c6 docker.io/istio/istio-ca@sha256:b3aaa5e5df2c16b13ea641d9f6b21f1fa3fb01b2f36a6df5928f17815aa63307 "/usr/local/bin/istio" 2 days ago Up 2 days k8s_istio-ca_istio-ca-5796758d78-md7fl_istio-system_5ed9a9e4-0dcd-11e8-8de9-0050568e45b4_0
(PILOT:
[1] W0209 19:13:58.364556 1 client_config.go:529] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
[2] warn AvailabilityZone couldn't find the given cluster node
[3] warn AvailabilityZone unexpected service-node: invalid node type (valid types: ingress, sidecar, router) in the service node "mixer~~.~.svc.cluster.local"
[4] warn AvailabilityZone couldn't find the given cluster node
patterns 2, 3, 4 repeat)
f7a7816bd147 docker.io/istio/pilot@sha256:96c2174f30d084e0ed950ea4b9332853f6cd0ace904e731e7086822af726fa2b "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_discovery_istio-pilot-69cc4dd5cb-fglsg_istio-system_5ecf54b6-0dcd-11e8-8de9-0050568e45b4_0
(MIXER: W0209 19:13:57.948480 1 client_config.go:529] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.)
f4c85eb7f652 docker.io/istio/mixer@sha256:a2d5f14fd55198239817b6c1dac85651ac3e124c241feab795d72d2ffa004bda "/usr/local/bin/mixs " 2 days ago Up 2 days k8s_mixer_istio-mixer-69f48ddb6c-d4ww2_istio-system_5e8801ab-0dcd-11e8-8de9-0050568e45b4_0
(STATSD EXPORTER: no issues/errors)
9fa2865b7e9b docker.io/prom/statsd-exporter@sha256:d08dd0db8eaaf716089d6914ed0236a794d140f4a0fe1fd165cda3e673d1ed4c "/bin/statsd_exporter" 2 days ago Up 2 days k8s_statsd-to-prometheus_istio-mixer-69f48ddb6c-d4ww2_istio-system_5e8801ab-0dcd-11e8-8de9-0050568e45b4_0

This question would make a fantastic report at
https://github.com/istio/issues/issues/new
- thanks for all the details!
Can you try adding
privileged: true
to the container that crashes?
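For example, something along these lines in the pod spec of each Bookinfo deployment (a sketch only; the init-container name and image are inferred from your docker output above, so adjust to what the injected manifest actually contains):
initContainers:
- name: istio-init
  image: docker.io/istio/proxy_init:0.5.0
  securityContext:
    privileged: true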

@laurent-demailly, thank you for your suggestion on the privileged flag.
I had posted the query on GitHub the day before yesterday, and got a response yesterday with a few suggestions, which I tried, and it worked! :-)
Now none of the containers are crashing, and I am able to access the Bookinfo app via the ingress gateway.
Here's the URL to the GitHub post:
github.com/istio/issues/issues/197
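(For anyone else hitting this: with the ingress external IP still <pending>, the app can be reached through the ingress NodePort shown earlier, 80:31445. A sketch, assuming the node's first listed address is its reachable IP:
$ NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}')
$ curl -s -o /dev/null -w "%{http_code}\n" http://$NODE_IP:31445/productpage
)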

Related

Docker no space left on device on Mac M1

I want to run a container and receive an error:
docker run -ti --rm grafana/promtail:2.5.0 -config.file=/etc/promtail/config.yml
docker: Error response from daemon: mkdir /var/lib/docker/overlay2/0cad6a6645e2445a9985d5c9e9c6909fa74ee1a30425b407ddfac13684bd9d31-init: no space left on device.
At first, I thought I had a lot of volumes and images cached, so I cleaned Docker with:
docker system prune
docker builder prune
But after a while, the same error occurred. When I check my Docker Desktop configuration, I can see I am using all the available disk size for images:
Disk image size:
59.6 GB (59.5 GB used)
I have 13 images on my system, and together they are less than 5 GB:
REPOSITORY TAG IMAGE ID CREATED SIZE
logstashloki latest 157966144f3b 3 days ago 761MB
minio/minio <none> 717586e37f7f 4 days ago 232MB
grafana/grafana <none> 31a8875955e5 9 days ago 277MB
docker.elastic.co/beats/filebeat 8.3.2 e7b210caf528 3 weeks ago 295MB
k8s.gcr.io/kube-apiserver v1.24.0 b62a103951f4 2 months ago 126MB
k8s.gcr.io/kube-scheduler v1.24.0 b81513b3bfb4 2 months ago 50MB
k8s.gcr.io/kube-controller-manager v1.24.0 59fad34d4fe0 2 months ago 116MB
k8s.gcr.io/kube-proxy v1.24.0 66e1443684b0 2 months ago 106MB
k8s.gcr.io/etcd 3.5.3-0 a9a710bb96df 3 months ago 178MB
grafana/promtail 2.5.0 aa21fd577ae2 3 months ago 177MB
grafana/loki 2.5.0 369cbd28ef9b 3 months ago 60MB
k8s.gcr.io/pause 3.7 e5a475a03805 4 months ago 514kB
k8s.gcr.io/coredns/coredns v1.8.6 edaa71f2aee8 9 months ago 46.8MB
From the output of docker system df, there is no suspicious size for containers, images, or volumes:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 13 13 2.35GB 69.57MB (2%)
Containers 21 21 35.15kB 0B (0%)
Local Volumes 2 0 2.186MB 2.186MB (100%)
Build Cache 20 0 0B 0B
I am new to macOS and cannot determine what is taking all my space, how to clean it up, and where all that data is stored on the system.
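(A sketch of a more aggressive cleanup than the prune commands above; -a removes all images not referenced by a container and --volumes removes unused volumes, so make sure nothing needed is lost:
docker system prune -a --volumes
Also relevant on macOS: Docker Desktop keeps all images, containers, and volumes inside one virtual disk file, typically ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw, which is what the "Disk image size" figure measures.)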

How do I delete all these kubernetes k8s_* containers

New to Kubernetes. I was following a Kubernetes tutorial the other day, and I forgot what I was doing. Running docker ps shows many k8s_* containers.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ce38bbf370b f3591b2cb223 "/api-server --kubec…" 3 hours ago Up 3 hours k8s_compose_compose-api-57ff65b8c7-cc6qf_docker_460bc96e-dcfe-11e9-9213-025000000001_6
222239366ae5 eb516548c180 "/coredns -conf /etc…" 3 hours ago Up 3 hours k8s_coredns_coredns-fb8b8dccf-7vp79_kube-system_35ecd610-dcfe-11e9-9213-025000000001_6
0e4a5a5c23bd eb516548c180 "/coredns -conf /etc…" 3 hours ago Up 3 hours k8s_coredns_coredns-fb8b8dccf-h7tvr_kube-system_35edfd50-dcfe-11e9-9213-025000000001_6
332d3d26c082 9946f563237c "kube-apiserver --ad…" 3 hours ago Up 3 hours k8s_kube-apiserver_kube-apiserver-docker-desktop_kube-system_7c4f3d43558e9fadf2d2b323b2e78235_4
5778a63798ab k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-apiserver-docker-desktop_kube-system_7c4f3d43558e9fadf2d2b323b2e78235_3
a0a26d6a2d08 2c4adeb21b4f "etcd --advertise-cl…" 3 hours ago Up 3 hours k8s_etcd_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_3
e6038e717c64 ac2ce44462bc "kube-controller-man…" 3 hours ago Up 3 hours k8s_kube-controller-manager_kube-controller-manager-docker-desktop_kube-system_9c58c6d32bd3a2d42b8b10905b8e8f54_4
10e962e90703 004666307c5b "/usr/local/bin/kube…" 3 hours ago Up 3 hours k8s_kube-proxy_kube-proxy-pq4f7_kube-system_35ac91f0-dcfe-11e9-9213-025000000001_4
21b4a7aa37d0 953364a3ae7a "kube-scheduler --bi…" 3 hours ago Up 3 hours k8s_kube-scheduler_kube-scheduler-docker-desktop_kube-system_124f5bab49bf26c80b1c1be19641c3e8_4
d9447c41bc55 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-proxy-pq4f7_kube-system_35ac91f0-dcfe-11e9-9213-025000000001_4
65248416150d k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-controller-manager-docker-desktop_kube-system_9c58c6d32bd3a2d42b8b10905b8e8f54_3
4afff5745b79 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-scheduler-docker-desktop_kube-system_124f5bab49bf26c80b1c1be19641c3e8_3
d6db038ea9b3 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_3
9ca30180ab45 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_compose-api-57ff65b8c7-cc6qf_docker_460bc96e-dcfe-11e9-9213-025000000001_4
338d226f12d9 a8c3d87a58e7 "/compose-controller…" 3 hours ago Up 3 hours k8s_compose_compose-6c67d745f6-9v5k5_docker_461b37ab-dcfe-11e9-9213-025000000001_3
6e23ff5c4b86 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_coredns-fb8b8dccf-7vp79_kube-system_35ecd610-dcfe-11e9-9213-025000000001_5
258ced5c1498 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_coredns-fb8b8dccf-h7tvr_kube-system_35edfd50-dcfe-11e9-9213-025000000001_4
0ee3d792d79e k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_compose-6c67d745f6-9v5k5_docker_461b37ab-dcfe-11e9-9213-025000000001_4
I also ran kubectl with --namespace provided. When I execute plain kubectl get pods, it says no resources found.
$ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
coredns-5644d7b6d9-frlhd 1/1 Running 1 9m30s
coredns-5644d7b6d9-xmdtg 1/1 Running 1 9m30s
etcd-minikube 1/1 Running 1 8m29s
kube-addon-manager-minikube 1/1 Running 1 8m23s
kube-apiserver-minikube 1/1 Running 1 8m42s
kube-controller-manager-minikube 1/1 Running 1 8m22s
kube-proxy-48kxn 1/1 Running 1 9m30s
kube-scheduler-minikube 1/1 Running 1 8m32s
storage-provisioner 1/1 Running 1 9m27s
I also tried stopping the containers using docker stop. They stopped, but a few seconds later the containers started again.
I also ran minikube delete, but it only deleted minikube; docker ps still showed the containers.
I'd like to start from the beginning again.
Don't try to delete the pause containers (k8s.gcr.io/pause:3.1 "/pause").
You can run multiple containers in a k8s pod, and they share the same network namespace.
The pause containers are the mechanism for sharing that network namespace; that's how a k8s pod is constructed.
For more info, please go through this.
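To see the sharing concretely, a sketch (using one coredns container from the list above as an example); for an app container in a pod, NetworkMode points at that pod's pause container:
$ docker inspect --format '{{.HostConfig.NetworkMode}}' k8s_coredns_coredns-fb8b8dccf-7vp79_kube-system_35ecd610-dcfe-11e9-9213-025000000001_6
container:<id of the matching pause container>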
If you want to reset your cluster, you can first list all namespaces using kubectl get namespaces, then delete them using kubectl delete namespaces namespace_name.
However, you can't delete the namespaces default, kube-system, and kube-public as those are protected by the cluster. What you can do is remove all Pods from the default and kube-public namespace using kubectl delete --all pods --namespace=default; kubectl delete --all pods --namespace=kube-public. You shouldn't touch the kube-system namespace as it contains resources that are mandatory for the cluster to function.
You can try deleting the resources using:
kubectl delete -f <file location>
i.e., whatever file you originally installed with:
kubectl apply -f <file location>
You can remove any tags associated with it using:
istioctl tag remove <profile>
Note: you can refer to manifests/profiles for the available profiles.
My situation was similar to yours: I forgot what I had done, then found many k8s_* containers running in Docker, which, once deleted, were started again automatically.
Unchecking Docker Desktop -> Settings -> Kubernetes -> Enable Kubernetes worked for me. Hope that helps.

Docker-compose up -d:image not created

I am trying to create a basic web page with docker-compose.
This is my yml file:
identidock:
  build: .
  ports:
    - "5000:5000"
  environment:
    ENV: DEV
  volumes:
    - ./app:/app
When I run
docker-compose up -d
it shows
Starting identidock_identidock_1 ... done
But if I check images
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
identidock_identidock latest b5003205377f 12 days ago 698MB
identidock latest 8eafce868d95 12 days ago 698MB
<none> <none> de77d0555129 13 days ago 698MB
<none> <none> 2f8bfc8f0a95 13 days ago 697MB
<none> <none> a42d37d82f28 2 weeks ago 535MB
<none> <none> 592d8c832533 2 weeks ago 695MB
python 3.4 41f9e544ec6c 2 weeks ago 684MB
It is obvious that a new image has not been created. If I go to http://localhost:5000/,
I got
Firefox can’t establish a connection to the server at localhost:5000.
This is docker ps -a output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0414117eadd8 identidock_identidock "/cmd.sh" 12 days ago Exited (255) 11 days ago 9090/tcp, 0.0.0.0:5000->5000/tcp, 9191/tcp blissful_easley
4146fd976547 identidock_identidock:latest "/cmd.sh" 12 days ago Exited (255) 11 days ago 9090/tcp, 9191/tcp agitated_leakey
15d49655b290 identidock_identidock "/cmd.sh" 12 days ago Exited (1) 23 minutes ago identidock_identidock_1
And
docker-compose ps
Name Command State Ports
--------------------------------------------------
identidock_identidock_1 /cmd.sh Exit 1
Why?
The container may not have started. Check docker-compose ps. If the containers listed are not in Up state, then you can use docker-compose logs identidock to view the logs.
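Also worth knowing: docker-compose up reuses a previously built image unless asked to rebuild, which would explain why no new image appears. A sketch:
docker-compose logs identidock
docker-compose up -d --build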

Prometheus query for monitoring docker containers filtered by name and image

I have several docker containers running:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
736caaa764f4 ubuntu "/bin/bash" 2 hours ago Up 2 hours quirky_morse
e2869c98ee1a ubuntu "/bin/bash" 2 hours ago Up 2 hours sleepy_wilson
e4149472a2da ubuntu "/bin/bash" 2 hours ago Up 2 hours cranky_booth
70bb44ac5d24 grafana/grafana "/run.sh" 2 hours ago Up 2 hours 0.0.0.0:3000->3000/tcp microservicemonitoring_grafana_1
e4b30881a83e prom/prometheus "/bin/prometheus -..." 2 hours ago Up 2 hours 0.0.0.0:9090->9090/tcp prometheus
281f792380f9 prom/node-exporter "/bin/node_exporte..." 2 hours ago Up 2 hours 9100/tcp node-exporter
17810c718b29 google/cadvisor "/usr/bin/cadvisor..." 2 hours ago Up 2 hours 8080/tcp microservicemonitoring_cadvisor_1
77711de421e2 prom/alertmanager "/bin/alertmanager..." 2 hours ago Up 2 hours 0.0.0.0:9093->9093/tcp microservicemonitoring_alertmanager_1
What I want to do is build graphs for containers filtered by name and image.
Example: the containers built from the ubuntu image (quirky_morse, sleepy_wilson, cranky_booth) plus the prometheus container.
I can filter containers by image with this type of query:
sum by (name) (rate(container_network_receive_bytes_total{image="ubuntu"} [1m] ) )
As you can see, I get graphs of three containers (flat lines because they are doing nothing).
Now I want to add the additional filter parameter name, and it does not work:
sum by (name) (rate(container_network_receive_bytes_total{image="ubuntu", name="prometheus"} [1m] ) )
What I want to get is: three graphs for the containers derived from the image "ubuntu", plus the one with the name "prometheus", regardless of its origin image.
You can't do this with one selector, because the label matchers inside a single selector are ANDed together.
The proper solution here is to use Grafana, which supports graphing multiple expressions on one graph.
At the Prometheus level, the best you can do is:
rate(container_network_receive_bytes_total{image="ubuntu"}[1m]) or rate(container_network_receive_bytes_total{name="prometheus"}[1m])
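For completeness, a sketch of the same trick with the per-container sum kept; the or operator returns the union of the two sets of series:
sum by (name) (rate(container_network_receive_bytes_total{image="ubuntu"}[1m]))
or
sum by (name) (rate(container_network_receive_bytes_total{name="prometheus"}[1m]))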

Removing docker image errors "No such id" with a different image ID

parallels@ubuntu:~$ sudo docker images
[sudo] password for parallels:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu 14.10 525b6e4a4cc8 6 days ago 194.4 MB
<none> <none> 4faa69f72743 6 days ago 188.3 MB
<none> <none> 78949b1e1cfd 3 weeks ago 194.4 MB
<none> <none> 2d24f826cb16 3 weeks ago 188.3 MB
<none> <none> 1f80e9ca2ac3 3 weeks ago 131.5 MB
<none> <none> 5ba9dab47459 6 weeks ago 188.3 MB
<none> <none> c5881f11ded9 9 months ago 172.2 MB
<none> <none> 463ff6be4238 9 months ago 169.4 MB
<none> <none> 195eb90b5349 9 months ago 184.7 MB
<none> <none> 3db9c44f4520 10 months ago 183 MB
parallels@ubuntu:~$ sudo docker rmi 4faa69f72743
Error response from daemon: No such id: 2103b00b3fdf1d26a86aded36ae73c1c425def0f779a6e69073b3b77377df348
2015/03/16 20:32:38 Error: failed to remove one or more images
parallels@ubuntu:~$
Here, you can see that I've tried to remove 4faa69f72743. However, docker insists that I am trying to remove 2103b00..., and errors out because no such image exists.
What could possibly cause this?
I don't understand the cause of it, but I've found a solution.
Here, docker ps -a shows exited containers with the image ID matching the problem hash.
parallels@ubuntu:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
parallels@ubuntu:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1c538a69a522 2103b00b3fdf bash 3 hours ago Exited (-1) 3 hours ago sharp_mccarthy
d9b683ddec73 2103b00b3fdf bash 3 hours ago Exited (0) 3 hours ago nostalgic_davinci
fcf8f628066f 2103b00b3fdf bash 3 hours ago Exited (0) 3 hours ago drunk_rosalind
06591014c89a 2103b00b3fdf sleep 10 3 hours ago Exited (0) 3 hours ago sleepy_goldstine
cb54fe012231 2103b00b3fdf sleep 10 3 hours ago Exited (0) 3 hours ago compassionate_leakey
de9cc4cbefe5 2103b00b3fdf sleep 10 3 hours ago Exited (0) 3 hours ago agitated_brattain
0ac0e70451cd 2103b00b3fdf ps -a 3 hours ago Exited (1) 3 hours ago berserk_goldstine
a6cc821ab7a4 2103b00b3fdf whoami 3 hours ago Exited (0) 3 hours ago distracted_pare
89f0c413787a ubuntu:14.10 whoami 3 hours ago Exited (0) 3 hours ago silly_hawking
5388489a2df2 ubuntu:14.10 whoami 3 hours ago Exited (0) 3 hours ago pensive_wozniak
1a060874271f ubuntu:14.10 pwd 3 hours ago Exited (0) 3 hours ago determined_goldstine
5bf4d049e3d2 ubuntu:14.10 pwd 3 hours ago Exited (0) 3 hours ago angry_hypatia
2033e10cb026 ubuntu:14.10 ls 3 hours ago Exited (0) 3 hours ago desperate_poincare
54f6f631cf17 ubuntu:14.10 ls 3 hours ago Exited (0) 3 hours ago cranky_davinci
c44eb12aeedf ubuntu:14.10 bash 3 hours ago Exited (0) 3 hours ago high_darwin
64f14a9cf537 ubuntu:14.10 ps 3 hours ago Exited (0) 3 hours ago goofy_morse
4b8f2516ddbd ubuntu:14.10 whoami 3 hours ago Exited (0) 3 hours ago high_ardinghelli
0e3a3a6a8582 ubuntu:14.10 whoami 3 hours ago Exited (0) 3 hours ago dreamy_turing
49397f5bf47f ubuntu:14.10 ls 3 hours ago Exited (0) 3 hours ago grave_hoover
Trying to remove the image was unsuccessful.
parallels@ubuntu:~$ sudo docker rmi 2103b00b3fdf
Error response from daemon: No such image: 2103b00b3fdf
2015/03/16 22:35:45 Error: failed to remove one or more images
However, I was able to remove the containers that were associated with the image.
parallels@ubuntu:~$ sudo docker rm 1c538
1c538
parallels@ubuntu:~$ sudo docker rm d9b fcf 065 cb5 de9 0ac a6c
d9b
fcf
065
cb5
de9
0ac
a6c
parallels@ubuntu:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
parallels@ubuntu:~$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89f0c413787a ubuntu:14.10 whoami 3 hours ago Exited (0) 3 hours ago silly_hawking
5388489a2df2 ubuntu:14.10 whoami 3 hours ago Exited (0) 3 hours ago pensive_wozniak
1a060874271f ubuntu:14.10 pwd 3 hours ago Exited (0) 3 hours ago determined_goldstine
5bf4d049e3d2 ubuntu:14.10 pwd 3 hours ago Exited (0) 3 hours ago angry_hypatia
2033e10cb026 ubuntu:14.10 ls 3 hours ago Exited (0) 3 hours ago desperate_poincare
54f6f631cf17 ubuntu:14.10 ls 3 hours ago Exited (0) 3 hours ago cranky_davinci
c44eb12aeedf ubuntu:14.10 bash 3 hours ago Exited (0) 3 hours ago high_darwin
64f14a9cf537 ubuntu:14.10 ps 3 hours ago Exited (0) 3 hours ago goofy_morse
4b8f2516ddbd ubuntu:14.10 whoami 3 hours ago Exited (0) 3 hours ago high_ardinghelli
0e3a3a6a8582 ubuntu:14.10 whoami 3 hours ago Exited (0) 3 hours ago dreamy_turing
49397f5bf47f ubuntu:14.10 ls 3 hours ago Exited (0) 3 hours ago grave_hoover
Finally, with the containers removed, I was able to remove the unused images.
parallels@ubuntu:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu 14.10 525b6e4a4cc8 6 days ago 194.4 MB
<none> <none> 4faa69f72743 6 days ago 188.3 MB
<none> <none> 78949b1e1cfd 3 weeks ago 194.4 MB
<none> <none> 2d24f826cb16 3 weeks ago 188.3 MB
<none> <none> 1f80e9ca2ac3 3 weeks ago 131.5 MB
<none> <none> 5ba9dab47459 6 weeks ago 188.3 MB
<none> <none> c5881f11ded9 9 months ago 172.2 MB
<none> <none> 463ff6be4238 9 months ago 169.4 MB
<none> <none> 195eb90b5349 9 months ago 184.7 MB
<none> <none> 3db9c44f4520 10 months ago 183 MB
parallels@ubuntu:~$ sudo docker rmi 4faa
Deleted: 4faa69f72743ce3a18508e840ff84598952fc05bd1de5fd54c6bc0f8ca835884
Deleted: 76b658ecb5644a4aca23b35de695803ad2e223da087d4f8015016021bd970169
Deleted: f0dde87450ec8236a64aebd3e8b499fe2772fca5e837ecbfa97bd8ae380c605e
parallels@ubuntu:~$
Hooray, unexplainable problems and solutions!
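(On newer Docker versions the same cleanup can be done in one shot; a sketch:
sudo docker rm $(sudo docker ps -aq -f status=exited)
sudo docker image prune
)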
