Docker-compose up -d: image not created - docker

I am trying to create a basic web page with docker-compose.
This is my yml file:
identidock:
  build: .
  ports:
    - "5000:5000"
  environment:
    ENV: DEV
  volumes:
    - ./app:/app
When I run
docker-compose up -d
it shows
Starting identidock_identidock_1 ... done
But if I check images
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
identidock_identidock latest b5003205377f 12 days ago 698MB
identidock latest 8eafce868d95 12 days ago 698MB
<none> <none> de77d0555129 13 days ago 698MB
<none> <none> 2f8bfc8f0a95 13 days ago 697MB
<none> <none> a42d37d82f28 2 weeks ago 535MB
<none> <none> 592d8c832533 2 weeks ago 695MB
python 3.4 41f9e544ec6c 2 weeks ago 684MB
It is obvious that a new image has not been created. If I go to http://localhost:5000/, I get:
Firefox can’t establish a connection to the server at localhost:5000.
This is docker ps -a output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0414117eadd8 identidock_identidock "/cmd.sh" 12 days ago Exited (255) 11 days ago 9090/tcp, 0.0.0.0:5000->5000/tcp, 9191/tcp blissful_easley
4146fd976547 identidock_identidock:latest "/cmd.sh" 12 days ago Exited (255) 11 days ago 9090/tcp, 9191/tcp agitated_leakey
15d49655b290 identidock_identidock "/cmd.sh" 12 days ago Exited (1) 23 minutes ago identidock_identidock_1
And
docker-compose ps
Name Command State Ports
--------------------------------------------------
identidock_identidock_1 /cmd.sh Exit 1
Why?

The container may not have stayed up. Check docker-compose ps: if the containers listed are not in the Up state, use docker-compose logs identidock to view the logs and see why the service exits.
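As a hedged sketch of the usual diagnostic loop (the service name comes from the compose file above, and note that docker-compose up reuses an existing image unless told to rebuild):

# show the state Compose reports for each service
docker-compose ps

# read the logs of the exited service to find out why it crashed
docker-compose logs identidock

# rebuild the image and recreate the container
docker-compose up -d --build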

Related

Setting up Confluent Kafka Community Locally - broker container keeps exiting

I'm trying to set up Kafka locally and am facing an issue. Whenever I run docker compose up, all containers come up correctly. After some time, the broker container stops running for some reason. There is no error in the container logs.
Below is the status of all docker containers:
0c27a63bb0e7 confluentinc/ksqldb-examples:5.5.1 "bash -c 'echo Waiti…" 6 minutes ago Up 6 minutes ksql-datagen
4e4a30204ccc confluentinc/cp-ksqldb-cli:5.5.1 "/bin/sh" 6 minutes ago Up 6 minutes ksqldb-cli
61b86ff2a6d6 confluentinc/cp-ksqldb-server:5.5.1 "/etc/confluent/dock…" 6 minutes ago Up 6 minutes (health: starting) 0.0.0.0:8088->8088/tcp, :::8088->8088/tcp ksqldb-server
2e022b64a760 cnfldemos/kafka-connect-datagen:0.3.2-5.5.0 "/etc/confluent/dock…" 6 minutes ago Exited (137) 5 minutes ago connect
3c7d273683fb confluentinc/cp-kafka-rest:5.5.1 "/etc/confluent/dock…" 6 minutes ago Exited (137) 5 minutes ago rest-proxy
6b6d36fb9d88 confluentinc/cp-schema-registry:5.5.1 "/etc/confluent/dock…" 6 minutes ago Up 6 minutes 0.0.0.0:8081->8081/tcp, :::8081->8081/tcp schema-registry
3bb20335ecd1 confluentinc/cp-kafka:5.5.1 "/etc/confluent/dock…" 6 minutes ago Exited (137) 5 minutes ago broker
7b2f922ef8ef confluentinc/cp-zookeeper:5.5.1 "/etc/confluent/dock…" 6 minutes ago Up 6 minutes 2888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 3888/tcp zookeeper
All I want is a single node Kafka cluster with Schema Registry up and running locally. Any pointers are appreciated.
Exit code 137 means the container was killed with SIGKILL, which here is memory related: the containers are being OOM-killed.
If all you want is Kafka and the Schema Registry, remove the ksqlDB, REST Proxy, and DataGen containers.
I'd also suggest using a later image tag, such as 7.3.1.
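A minimal sketch of such a single-node stack, assuming the standard Confluent images and default listener settings rather than the asker's original file:

version: "3"
services:
  # single ZooKeeper node for cluster coordination
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  # one Kafka broker; the PLAINTEXT_HOST listener is what host clients use
  broker:
    image: confluentinc/cp-kafka:7.3.1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  # Schema Registry talks to the broker over the internal listener
  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.1
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:29092
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

With only these three services the memory footprint is much smaller, which is usually enough to avoid the 137 exits.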

Docker no space left on device on Mac M1

I want to run a container and receive an error:
docker run -ti --rm grafana/promtail:2.5.0 -config.file=/etc/promtail/config.yml
docker: Error response from daemon: mkdir /var/lib/docker/overlay2/0cad6a6645e2445a9985d5c9e9c6909fa74ee1a30425b407ddfac13684bd9d31-init: no space left on device.
At first, I thought I had a lot of volumes and images cached, so I cleaned Docker with:
docker prune
docker builder prune
But after a while, the same error occurs again. When I check my Docker Desktop configuration, I can see I am using all of the available disk image size:
Disk image size:
59.6 GB (59.5 GB used)
I have 13 images on my system, and together they are less than 5 GB:
REPOSITORY TAG IMAGE ID CREATED SIZE
logstashloki latest 157966144f3b 3 days ago 761MB
minio/minio <none> 717586e37f7f 4 days ago 232MB
grafana/grafana <none> 31a8875955e5 9 days ago 277MB
docker.elastic.co/beats/filebeat 8.3.2 e7b210caf528 3 weeks ago 295MB
k8s.gcr.io/kube-apiserver v1.24.0 b62a103951f4 2 months ago 126MB
k8s.gcr.io/kube-scheduler v1.24.0 b81513b3bfb4 2 months ago 50MB
k8s.gcr.io/kube-controller-manager v1.24.0 59fad34d4fe0 2 months ago 116MB
k8s.gcr.io/kube-proxy v1.24.0 66e1443684b0 2 months ago 106MB
k8s.gcr.io/etcd 3.5.3-0 a9a710bb96df 3 months ago 178MB
grafana/promtail 2.5.0 aa21fd577ae2 3 months ago 177MB
grafana/loki 2.5.0 369cbd28ef9b 3 months ago 60MB
k8s.gcr.io/pause 3.7 e5a475a03805 4 months ago 514kB
k8s.gcr.io/coredns/coredns v1.8.6 edaa71f2aee8 9 months ago 46.8MB
The output of docker system df shows no suspicious sizes for containers, images, or volumes:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 13 13 2.35GB 69.57MB (2%)
Containers 21 21 35.15kB 0B (0%)
Local Volumes 2 0 2.186MB 2.186MB (100%)
Build Cache 20 0 0B 0B
I am new to macOS and cannot determine what is taking all my space, how to reclaim it, and where that data is stored on the system.
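For reference, a hedged sketch of where that space usually goes on Docker Desktop for Mac and how it is commonly reclaimed (the Docker.raw path is the default install location and may differ; the virtual disk grows with use but is not shrunk automatically):

# remove stopped containers, unused images, networks and volumes
docker system prune -a --volumes

# Docker Desktop stores everything inside one virtual disk file;
# its size on disk is what the "Disk image size" setting reports
ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw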

What does "in use" mean for an image?

What does "in use" mean and how can I get that info from the CLI?
Reference: docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.State}}\t{{.Image}}"
CONTAINER ID NAMES STATE IMAGE
07bce6924796 laughing_wozniak exited vsc-volume-bootstrap
6d37d8744a77 angry_brahmagupta exited vsc-quickstarts-d91f349952ba5208420f1403c31b2955-uid
0bce117a827c dapr_placement running daprio/dapr:1.5.0
1232bf715593 dapr_zipkin running openzipkin/zipkin
c128e546a0b6 dapr_redis running redis
dc44e1006831 miked exited my-first-fsharp-web
ce3cf77a0eb9 minikube running gcr.io/k8s-minikube/kicbase:v0.0.28
Reference: docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
vsc-microsoftvscodeinsiders-572e00dcd0f79c5ee8d7d39c18e7c701-features latest 9f05ea6535d4 12 hours ago 6.36GB
vsc-volume-bootstrap latest 81646861762b 12 hours ago 180MB
vsc-quickstarts-d91f349952ba5208420f1403c31b2955-uid latest 453bd2943e10 40 hours ago 9.7GB
vsc-quickstarts-d91f349952ba5208420f1403c31b2955 latest 10de525681a7 40 hours ago 9.7GB
openzipkin/zipkin latest 6a9714eacfd9 2 days ago 153MB
miked.azurecr.io/my-first-fsharp-web 96e7948ee30c 5 days ago 211MB
my-first-fsharp-web latest e36fabe64a1c 5 days ago 211MB
miked.azurecr.io/my-first-fsharp-web 1 e36fabe64a1c 5 days ago 211MB
miked.azurecr.io/my-first-fsharp-web latest e36fabe64a1c 5 days ago 211MB
counter-image latest 22dfe0305c55 7 days ago 208MB
redis latest 40c68ed3a4d2 8 days ago 113MB
daprio/dapr 1.5.0 bff1855a0302 2 weeks ago 214MB
vsc-azure-container-apps-demo-41dcd784881293406771e08c255554b9 latest 1af591496e8a 4 weeks ago 337MB
gcr.io/k8s-minikube/kicbase v0.0.28 e2a6c047bedd 8 weeks ago 1.08GB
It indicates whether the image is used by a container (running or already stopped).
You cannot get this directly from docker images, but by listing the containers with docker ps -a you can see each container's associated image.
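A hedged sketch of how to approximate that from the CLI (these are standard docker flags, but the pipeline itself is just an illustration):

# images referenced by at least one container, running or stopped
docker ps -a --format '{{.Image}}' | sort -u

# images not referenced by anything can be listed and removed
docker image ls --filter dangling=true
docker image prune -a   # removes every image not used by any container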

How do I delete all these kubernetes k8s_* containers

New to Kubernetes. I was following a Kubernetes tutorial the other day and forgot what I was doing. Running docker ps shows many k8s_* containers.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ce38bbf370b f3591b2cb223 "/api-server --kubec…" 3 hours ago Up 3 hours k8s_compose_compose-api-57ff65b8c7-cc6qf_docker_460bc96e-dcfe-11e9-9213-025000000001_6
222239366ae5 eb516548c180 "/coredns -conf /etc…" 3 hours ago Up 3 hours k8s_coredns_coredns-fb8b8dccf-7vp79_kube-system_35ecd610-dcfe-11e9-9213-025000000001_6
0e4a5a5c23bd eb516548c180 "/coredns -conf /etc…" 3 hours ago Up 3 hours k8s_coredns_coredns-fb8b8dccf-h7tvr_kube-system_35edfd50-dcfe-11e9-9213-025000000001_6
332d3d26c082 9946f563237c "kube-apiserver --ad…" 3 hours ago Up 3 hours k8s_kube-apiserver_kube-apiserver-docker-desktop_kube-system_7c4f3d43558e9fadf2d2b323b2e78235_4
5778a63798ab k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-apiserver-docker-desktop_kube-system_7c4f3d43558e9fadf2d2b323b2e78235_3
a0a26d6a2d08 2c4adeb21b4f "etcd --advertise-cl…" 3 hours ago Up 3 hours k8s_etcd_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_3
e6038e717c64 ac2ce44462bc "kube-controller-man…" 3 hours ago Up 3 hours k8s_kube-controller-manager_kube-controller-manager-docker-desktop_kube-system_9c58c6d32bd3a2d42b8b10905b8e8f54_4
10e962e90703 004666307c5b "/usr/local/bin/kube…" 3 hours ago Up 3 hours k8s_kube-proxy_kube-proxy-pq4f7_kube-system_35ac91f0-dcfe-11e9-9213-025000000001_4
21b4a7aa37d0 953364a3ae7a "kube-scheduler --bi…" 3 hours ago Up 3 hours k8s_kube-scheduler_kube-scheduler-docker-desktop_kube-system_124f5bab49bf26c80b1c1be19641c3e8_4
d9447c41bc55 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-proxy-pq4f7_kube-system_35ac91f0-dcfe-11e9-9213-025000000001_4
65248416150d k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-controller-manager-docker-desktop_kube-system_9c58c6d32bd3a2d42b8b10905b8e8f54_3
4afff5745b79 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-scheduler-docker-desktop_kube-system_124f5bab49bf26c80b1c1be19641c3e8_3
d6db038ea9b3 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_3
9ca30180ab45 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_compose-api-57ff65b8c7-cc6qf_docker_460bc96e-dcfe-11e9-9213-025000000001_4
338d226f12d9 a8c3d87a58e7 "/compose-controller…" 3 hours ago Up 3 hours k8s_compose_compose-6c67d745f6-9v5k5_docker_461b37ab-dcfe-11e9-9213-025000000001_3
6e23ff5c4b86 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_coredns-fb8b8dccf-7vp79_kube-system_35ecd610-dcfe-11e9-9213-025000000001_5
258ced5c1498 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_coredns-fb8b8dccf-h7tvr_kube-system_35edfd50-dcfe-11e9-9213-025000000001_4
0ee3d792d79e k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_compose-6c67d745f6-9v5k5_docker_461b37ab-dcfe-11e9-9213-025000000001_4
I also ran kubectl with --namespace provided. When I execute just kubectl get pods, it says no resources were found.
$ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
coredns-5644d7b6d9-frlhd 1/1 Running 1 9m30s
coredns-5644d7b6d9-xmdtg 1/1 Running 1 9m30s
etcd-minikube 1/1 Running 1 8m29s
kube-addon-manager-minikube 1/1 Running 1 8m23s
kube-apiserver-minikube 1/1 Running 1 8m42s
kube-controller-manager-minikube 1/1 Running 1 8m22s
kube-proxy-48kxn 1/1 Running 1 9m30s
kube-scheduler-minikube 1/1 Running 1 8m32s
storage-provisioner 1/1 Running 1 9m27s
I also tried stopping the containers using docker stop. They stopped, but a few seconds later the containers started again.
I also ran minikube delete but it only deleted minikube. The command docker ps still showed the containers.
I'd like to start from beginning again.
Don't try to delete the pause containers.
k8s.gcr.io/pause:3.1 "/pause"
You can run multiple containers in a Kubernetes pod, and they all share the same network namespace. The pause container is the mechanism that holds that shared network namespace; that is how a pod is created.
For more info, please go through this.
If you want to reset your cluster, you can first list all namespaces using kubectl get namespaces, then delete them using kubectl delete namespaces namespace_name.
However, you can't delete the namespaces default, kube-system, and kube-public as those are protected by the cluster. What you can do is remove all Pods from the default and kube-public namespace using kubectl delete --all pods --namespace=default; kubectl delete --all pods --namespace=kube-public. You shouldn't touch the kube-system namespace as it contains resources that are mandatory for the cluster to function.
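The same steps as commands, for copy-paste (a sketch; substitute whatever namespaces kubectl get namespaces actually reports on your cluster):

# list namespaces, then clear workloads from the non-protected ones
kubectl get namespaces
kubectl delete --all pods --namespace=default
kubectl delete --all pods --namespace=kube-public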
You can try deleting the resources using the command below:
kubectl delete -f <file location>
where the file is the one you installed with:
kubectl apply -f <file location>
You can remove all the tags associated with it using:
istioctl tag remove <profile>
Note: you can refer to manifests/profiles for the available profiles.
My situation was similar to yours: I forgot what I had done, then found many k8s_* containers running in Docker, which restart automatically once deleted.
Unchecking Docker Desktop -> Settings -> Kubernetes -> Enable Kubernetes worked for me. Hope that helps.
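As a quick follow-up check (the name filter is standard Docker CLI; k8s_ is the prefix used for Kubernetes-managed containers):

# list any remaining Kubernetes-managed containers
docker ps -a --filter "name=k8s_"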

docker containers shutdown continuously

I followed the official tutorial on deploying Docker services https://docs.docker.com/get-started/part5/#add-a-new-service-and-redeploy. The first time I tried this, the containers were running as expected, but since then the containers keep shutting down and restarting (I noticed this using the visualizer service provided by Docker).
When I execute the command:
docker stack ps getstartedlab
NAME DESIRED STATE CURRENT STATE ERROR
getstartedlab_web.1 Running Preparing 2 seconds ago
\_ getstartedlab_web.1 Shutdown Failed 4 minutes ago "task: non-zero exit (2)"
I read in this post https://github.com/docker/machine/issues/3747 that the problem may come from a firewall blocking ICMP. I tried to ping docker.com and had 100% packet loss, but when I ping google.com it is fine with no lost packets.
The result of docker ps --all is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f67c82da7c7 username/repo:tag "python app.py" 2 minutes ago Exited (2) 2 minutes ago getstartedlab_web.2.zcnr0ld9bioy0dffsxmn8zss5
f59e413b0780 username/repo:tag "python app.py" 4 minutes ago Exited (2) 4 minutes ago getstartedlab_web.5.ymxgnsf7n8306yr8963xyyljv
9ab631c4057c username/repo:tag "python app.py" 10 minutes ago Exited (2) 10 minutes ago getstartedlab_web.5.zr3gsvgbyxs8c51fiko5h9jxp
bee5816ce1f2 dockersamples/visualizer:stable "npm start" 15 minutes ago Up 15 minutes 8080/tcp getstartedlab_visualizer.1.oyiqwb5esq6zakcdtiw4txh8a
cadca594f8cd username/repo:tag "python app.py" 24 hours ago Exited (2) 24 hours ago getstartedlab_web.1.zehutsl9cefrccqrj86dz4ap7
576b1a6db0b0 username/repo:tag "python app.py" 24 hours ago Exited (2) 24 hours ago getstartedlab_web.5.za9xvxpo5yvl20kha9sjcimmz
2804ebc4fc0c username/repo:tag "python app.py" 24 hours ago Exited (2) 24 hours ago getstartedlab_web.1.zlk42chspvym3jxkzs2nc8k2d
03efb2b04489 dockersamples/visualizer:stable "npm start" 24 hours ago Exited (255) 16 minutes ago 8080/tcp getstartedlab_visualizer.1.qyp4egtu9vcd31kf2jxtzxko3
b85fd1600955 username/repo:tag "python app.py" 2 days ago Exited (2) 2 days ago getstartedlab_web.5.kzrj3m5c3jgkuox0ulpszizee
and the
docker logs 9f67c82da7c7
python: can't open file 'app.py': [Errno 2] No such file or directory
Excuse me for the formatting of these command results: when I copy-paste the output, the lines break. How can I copy-paste and preserve the original layout?
Does anyone have a fix for this problem? Thanks.
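A hedged sketch of the kind of Dockerfile fix this usually needs: the log says python cannot find app.py inside the image, which typically means the file was never copied in or WORKDIR points elsewhere. The file names and base image below are assumptions modelled on the get-started tutorial layout, not the asker's actual project:

FROM python:3.7-slim
# copy the application into the image and run from that directory
WORKDIR /app
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 80
CMD ["python", "app.py"]

After rebuilding, push the fixed image and redeploy the stack so the swarm tasks pull the new tag.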
