I am new to Docker and I am following the 'Getting Started' documentation at the Docker site.
I am trying to run 3 containers on a VM.
OS: CentOS 7.3
Docker: 17.03.1-ce
I followed the first part and could get hello-world running on a container inside the VM.
Then I moved on to the Docker compose example.
I have the following directory structure:
home
├── docker-compose.yml
└── docker-test
    ├── app.py
    ├── Dockerfile
    └── requirements.txt
The files under docker-test are from the Python app example on the Docker website.
With docker-compose, I was attempting to run 3 replicas of the hello-world example.
My docker-compose.yml:
version: "3"
services:
  web:
    image: hello-world
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
Then I ran the following commands:
sudo docker swarm init
sudo docker stack deploy -c docker-compose.yml getstartedlab
sudo docker stack ps getstartedlab shows:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
iytr4ptz3m8l getstartedlab_web.1 hello-world:latest <node1> Shutdown Complete 16 minutes ago
s5t41txo05ex getstartedlab_web.2 hello-world:latest <node2> Shutdown Complete 16 minutes ago
91iitdnc49fk getstartedlab_web.3 hello-world:latest <node3> Shutdown Complete 16 minutes ago
However, sudo docker ps shows no containers, and when I curl http://localhost:80 it can't connect:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
What am I missing?
Your docker-compose.yml file says that the web service should use the hello-world image, which just prints a message and exits when run. Every replica therefore runs to completion, which is why all tasks show Shutdown / Complete. Presumably you meant to use the image created by building docker-test/; to do that, replace the image: hello-world line with build: docker-test.
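A sketch of the corrected service definition (assuming the Dockerfile in docker-test/ builds a long-running app, as in the tutorial's Python example):

```yaml
version: "3"
services:
  web:
    build: docker-test   # build from docker-test/Dockerfile instead of pulling hello-world
    deploy:
      replicas: 3
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
```

Note that docker stack deploy ignores the build: key and needs a prebuilt image, while docker-compose up honors it; for a swarm stack you would build the image first (or push it to a registry) and reference it with image:.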
I deployed my docker stack:
⇒ docker stack deploy -c docker-compose.yml my_stack
Creating network my_stack_network
Creating service my_stack_redis
Creating service my_stack_wsgi
Creating service my_stack_nodejs
Creating service my_stack_nginx
Creating service my_stack_haproxy
Creating service my_stack_postgres
But when I do docker container ls, it only shows three containers:
~|⇒ docker container ls | grep my_stack
212720bfafc3 postgres:11 "docker-entrypoint.s…" 4 minutes ago Up 3 minutes 5432/tcp my_stack_postgres.1.9nx7jb21whi61aboe9hmet6m2
3132dd980589 isiq/nginx-brotli:1.21.0 "/docker-entrypoint.…" 4 minutes ago Up 4 minutes 80/tcp my_stack_nginx.1.isl2c78z6w5ptizurm3a4cnte
62ef3c76fb9e redis:6.2.4 "docker-entrypoint.s…" 4 minutes ago Up 4 minutes 6379/tcp my_stack_redis.1.xnisrd1i6hod6jkm64623cpzj
But docker stack ps lists all of them as Running:
~|⇒ docker stack ps --no-trunc my_stack
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
1fqwlgblhi5q0cdl5cy75ucli my_stack_haproxy.1 haproxy:2.3.9@sha256:f63aabf39efcd277b04a503d38e59e80224a0c11f47b2568b13b0092698c5a3a Running New 2 minutes ago
isl2c78z6w5ptizurm3a4cnte my_stack_nginx.1 isiq/nginx-brotli:1.21.0@sha256:436cbc0d8cd051e7bdb197d7915fe90fa5a1bdadea6d02272ba117fccf30c936 tadoba Running Running 2 minutes ago
1myvtgl11qqw2xa9cv79uikcs my_stack_nodejs.1 nodejs:my_stack Running New 2 minutes ago
9nx7jb21whi61aboe9hmet6m2 my_stack_postgres.1 postgres:11@sha256:5d2aa4a7b5f9bdadeddcf87cf7f90a176737a02a30d917de4ab2e6a329bd2d45 tadoba Running Running 2 minutes ago
xnisrd1i6hod6jkm64623cpzj my_stack_redis.1 redis:6.2.4@sha256:6bc98f513258e0c17bd150a7a26f38a8ce3e7d584f0c451cf31df70d461a200a tadoba Running Running 2 minutes ago
mzmmb7a3bxjpfkfa3ea5o5w85 my_stack_wsgi.1 wsgi:my_stack Running New 2 minutes ago
Checking the logs of the containers not listed in docker container ls gives a No such container error:
~|⇒ docker logs -f 1myvtgl11qqw2xa9cv79uikcs
Error: No such container: 1myvtgl11qqw2xa9cv79uikcs
~|⇒ docker logs -f mzmmb7a3bxjpfkfa3ea5o5w85
Error: No such container: mzmmb7a3bxjpfkfa3ea5o5w85
~|⇒ docker logs -f 1fqwlgblhi5q0cdl5cy75ucli
Error: No such container: 1fqwlgblhi5q0cdl5cy75ucli
What could be the reason? How can I debug this?
Update
It seems that services without dependencies are able to join the network, but services with depends_on entries on other services are not, and I cannot figure out why. Here is a gist with the output of docker network inspect.
PS
I ran watch 'docker container ls | grep my_app' in another terminal before running docker stack deploy .... Those three containers never appear in the watch list; the other three do.
I am running all nodes on the same remote machine connected through ssh. This is the output of docker node ls:
~|⇒ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
z9hovq8ry6qont3m2rbn6upy4 * tadoba Ready Active Leader 20.10.11
Here is my docker compose file for reference:
version: "3.8"
services:
  postgres:
    image: postgres:11
    volumes:
      - my_app_postgres_volume:/var/lib/postgresql/data
      - type: tmpfs
        target: /dev/shm
        tmpfs:
          size: 536870912 # 512MB
    environment:
      POSTGRES_DB: my_app_db
      POSTGRES_USER: my_app
      POSTGRES_PASSWORD: my_app123
    networks:
      - my_app_network
  redis:
    image: redis:6.2.4
    volumes:
      - my_app_redis_volume:/data
    networks:
      - my_app_network
  wsgi:
    image: wsgi:my_app3_stats
    volumes:
      - /my_app/frontend/static/
      - ./wsgi/my_app:/my_app
      - /my_app/frontend/clientApp/node_modules
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - postgres
      - redis
    ports:
      - 9090
    environment:
      C_FORCE_ROOT: 'true'
      SERVICE_PORTS: 9090
    networks:
      - my_app_network
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s
  nodejs:
    image: nodejs:my_app3_stats
    volumes:
      - ./nodejs/frontend:/frontend
      - /frontend/node_modules
    depends_on:
      - wsgi
    ports:
      - 9998:9999
    environment:
      BACKEND_API_URL: http://aa.bb.cc.dd:9764/api/
    networks:
      - my_app_network
  nginx:
    image: isiq/nginx-brotli:1.21.0
    volumes:
      - ./nginx:/etc/nginx/conf.d:ro
      - ./wsgi/my_app:/my_app:ro
      - my_app_nginx_volume:/var/log/nginx/
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - my_app_network
  haproxy:
    image: haproxy:2.3.9
    volumes:
      - ./haproxy:/usr/local/etc/haproxy/:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - wsgi
      - nodejs
      - nginx
    ports:
      - 9764:80
    networks:
      - my_app_network
    deploy:
      placement:
        constraints: [node.role == manager]
volumes:
  my_app_postgres_volume:
  my_app_redis_volume:
  my_app_nginx_volume:
  my_app_pgadmin_volume:
networks:
  my_app_network:
    driver: overlay
Output of docker service ps <service-name> for services not listed under tadoba node:
~/my_app|master-py3⚡
⇒ docker service ps my_app_nodejs
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
i04jpykp9ign my_app_nodejs.1 nodejs:bodhitree3_stats Running New about a minute ago
~/my_app|master-py3⚡
⇒ docker service ps my_app_haproxy
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
of4fcsxuq24c my_app_haproxy.1 haproxy:2.3.9 Running New about a minute ago
~/my_app|master-py3⚡
⇒ docker service ps my_app_wsgi
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
yt9nuhule39z my_app_wsgi.1 wsgi:bodhitree3_stats Running New 2 minutes ago
First, all of the services are running, which is good. But docker container ls is not cluster-aware, i.e. it only shows the containers running on the node you are currently connected to. From the output of docker stack ps --no-trunc my_stack you can see that some tasks are scheduled on a node labelled tadoba; if you log in to that node you will see its running containers.
You can list the nodes in your cluster by running docker node ls.
If you want, you can set up Docker contexts, which let you switch between nodes without logging in and out. You can find more info here.
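A sketch of the context setup (the context name and the SSH target user@tadoba are placeholders; substitute your own):

```shell
# Create a context pointing at the other node's daemon over SSH.
docker context create tadoba-node --docker "host=ssh://user@tadoba"

# Run a one-off command against that node's daemon:
docker --context tadoba-node container ls

# Or switch the default daemon for all subsequent commands:
docker context use tadoba-node
docker context use default   # switch back to the local daemon
```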
I have a Docker image of Grafana 8.0.5. I created a volume using docker volume create grafana-storage.
I can stop the container and bring it back up with no data loss.
However, if I update my docker-compose.yml to point to the latest version, 8.0.6, and re-run docker-compose up -d, the volume goes back to a default install, losing my previously created dashboards, accounts, data sources, etc.
As far as I understand, I shouldn't lose any data, since it should all be in the volume. How do you update images without resetting the volume?
docker-compose.yml:
version: "3.3"
volumes:
  grafana-storage:
    external: true
services:
  grafana:
    image: "grafana/grafana:8.0.6"
    container_name: "grafana"
    volumes:
      - "grafana-storage:/usr/src/grafana"
Docker Version:
Docker version 20.10.7, build f0df350
Docker-Compose Version:
docker-compose version 1.29.2, build 5becea4c
docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3fb6da4a8de9 grafana/grafana:8.0.6 "/run.sh" 17 minutes ago Up 17 minutes 3000/tcp grafana
046892ab0a7b traefik:v2.0 "/entrypoint.sh --pr…" 46 minutes ago Up 23 minutes 80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp traefik
docker volume ls:
DRIVER VOLUME NAME
local grafana-storage
Grafana stores its data in /var/lib/grafana, not /usr/src/grafana. Consequently, the volume definition in your docker-compose.yml mounts the volume at the wrong path, and every time the container is recreated the data is lost.
Change the path to /var/lib/grafana and it should work:
services:
  grafana:
    [...]
    volumes:
      - "grafana-storage:/var/lib/grafana"
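To verify the fix, you can check where the named volume lives on the host and confirm that Grafana's data actually lands inside it (the container name grafana matches the compose file above):

```shell
# Show the volume's host Mountpoint; data that survives upgrades lives here.
docker volume inspect grafana-storage

# After recreating the container with the corrected mount path, Grafana's
# SQLite database (grafana.db) should appear inside the volume:
docker exec grafana ls /var/lib/grafana
```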
I am learning Docker, following the Docker tutorial, and am at step 4 here.
In this step, we create 2 VMs for the Docker swarm: one as swarm manager and one as swarm worker.
As I understand it, the swarm pulls the image pushed to Docker Hub onto the virtual machines to get the service running. The problem is that I am not pushing my built image to Docker Hub.
My question is: can I deploy a locally built image to the swarm VMs?
I tried to change the image line in the example docker-compose.yml to build, like so:
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    # image: friendlyhello
    build: .
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:
This of course does not work, which is why I am asking whether there is a way to do it.
You can create a local registry on the VM (or on your local machine) and push/pull images from that local repository:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Then name/tag your images as:
localhost:5000/Image_Name:Tag
Then push them with:
docker push localhost:5000/Image_Name:Tag
This lets you keep your images in a local registry that your swarm can use without pushing to Docker Hub.
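Putting the steps together, a sketch of the full workflow (the image name friendlyhello is a placeholder taken from the tutorial; substitute your own):

```shell
# 1. Run a local registry on port 5000.
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# 2. Build the image locally and tag it for the local registry.
docker build -t friendlyhello .
docker tag friendlyhello localhost:5000/friendlyhello:v1

# 3. Push it; the swarm can now pull it by that name.
docker push localhost:5000/friendlyhello:v1

# 4. Reference localhost:5000/friendlyhello:v1 under "image:" in
#    docker-compose.yml, then deploy the stack.
docker stack deploy -c docker-compose.yml getstartedlab
```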
I am running this command on Amazon Linux:
/usr/local/bin/docker-compose up
It does not bind the ports, so I cannot connect to the services running inside the container. The same configuration works on my local development server. I am not sure what I am missing.
[root@ip-10-0-1-42 ec2-user]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec6320747ef3 d8bd4345ca7f "/bin/sh -c 'gulp bu…" 30 seconds ago Up 30 seconds vigilant_jackson
Here is the docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: gulp serve
    env_file:
      - .env
    volumes:
      - .:/app/code
    ports:
      - "8050:8000"
      - "8005:8005"
      - "8888:8888"
npm -v 5.6.0
docker -v Docker version 18.06.1-ce, build e68fc7a215d7133c34aa18e3b72b4a21fd0c6136
Are you sure the ports are not published?
Check with docker inspect; I would guess that they are published. If so, then my guess is that, since you are on AWS, you are not connecting to the right opened port (8050, 8005 and 8888 are ports on the AWS Linux instance itself, if I understood your question correctly).
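One way to check (the container name vigilant_jackson is taken from the docker ps output above):

```shell
# Print only the port bindings for the container; a published port shows a
# HostIp/HostPort entry, an unpublished exposed port shows null.
docker inspect --format '{{json .NetworkSettings.Ports}}' vigilant_jackson
```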
I'm trying to execute the tutorial from the official documentation. It works fine except with Services.
When I start 5 instances of the container (with the docker stack command), the containers do not start and I get this error:
"failed to allocate gateway"
$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
imb6vgifjvq7 getstartedlab_web.1 seb/docker-whale:1.1 ns3553081.ip-XXX-YYY-ZZZ.eu Ready Rejected 4 seconds ago "failed to allocate gateway (1…"
ulm1tqdhzikd \_ getstartedlab_web.1 seb/docker-whale:1.1 ns3553081.ip-XXX-YYY-ZZZ.eu Shutdown Rejected 9 seconds ago "failed to allocate gateway (1…"
...
The docker-compose.yml contains
version: "3"
services:
  web:
    image: seb/docker-whale:1.1
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
To start the containers I use the command:
$ docker stack deploy -c docker-compose.yml getstartedlab
I can start a single instance of the container without any issue with the command:
$ docker run -p 80:80 seb/docker-whale:1.1
Any idea why it's not working? How can I get more details on the error?
Thanks for your help.
Answer from a beginner: I hit the same issue (version 1.13.1). The message vanished when I changed ports from "80:80" to "8080:80"; port 80 was already in use on the Docker machine's host.
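The change amounts to remapping the published host port in docker-compose.yml while leaving the container port at 80:

```yaml
services:
  web:
    ports:
      - "8080:80"   # host port 8080 -> container port 80
```

You would then reach the service at http://<host>:8080 instead of port 80; alternatively, free up port 80 on the host before deploying.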