I am trying to deploy an application across multiple instances in a Docker swarm cluster. I deploy from the master node, but after deployment the application runs only on the master node; the service is never scheduled on the other nodes in the cluster.
Here is my docker-compose file:
version: "3"
services:
  mydb:
    image: localhost:5000/mydb-1
    environment:
      TZ: "Asia/Colombo"
    ports:
      - 9042:9042
    volumes:
      - /root/data/cdb:/var/lib/cassandra
      - /root/logs/cdb:/var/log/cassandra
Then I scaled the service:
docker service scale mydb-1_mydb=5
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7fxxxxxxxx7 localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 5 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.2.q77i258vn2xynlgein9s7tdpb
34fcxxxx14bd localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 4 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.1.s2mzitj8yzb0zo7spd3dmpo1j
9axxxx1efb localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 8 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.3.zgyev3p4qdg7hf7h67oeedutr
f14xxxee59 localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 2 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.4.r0themodonzzr1izdbnppd5bi
e3xxx16d localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 6 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.5.bdebi4
All replicas run only on the master node. Does anyone know the issue?
Your image appears to be locally built, with a name that cannot be resolved on other nodes (localhost:5000/mydb-1). In swarm mode, images should be pushed to a registry, and that registry needs to be accessible to all nodes. You can run your own registry service on your own node (there is an official registry image for this), or you can push to Docker Hub. If the registry is private, you also need to perform a docker login on the node running the stack deploy and include registry credentials in that deploy, e.g.
docker stack deploy -c compose.yml --with-registry-auth stack-name
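For reference, a minimal sketch of the registry approach, following the pattern from Docker's swarm tutorial (the service name, port, and stack name here are assumptions). Publishing port 5000 as a swarm service makes localhost:5000 reachable on every node through the routing mesh:
# run a registry as a swarm service so all nodes can reach it on localhost:5000
docker service create --name registry --publish published=5000,target=5000 registry:2
# tag and push the locally built image so other nodes can pull it
docker tag mydb-1 localhost:5000/mydb-1
docker push localhost:5000/mydb-1
# redeploy the stack; every node can now resolve the image name
docker stack deploy -c docker-compose.yml mydb-1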
Thanks. I found the issue and fixed it. The problem was with these volumes:
volumes:
  - /root/data/cdb:/var/lib/cassandra
  - /root/logs/cdb:/var/log/cassandra
If you bind mount a host path into your service’s containers, the path must exist on every swarm node.
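A minimal fix sketch, assuming the paths from the compose file above: create the bind-mount source directories on every swarm node (manager and workers) before deploying. Named volumes would avoid this requirement altogether.
# run on each node so the bind-mount sources exist everywhere
mkdir -p /root/data/cdb /root/logs/cdb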
docker service scale zkr_zkr=2
After scaling up, the service now runs on my other node as well:
root@beta-node-1:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f9bxxx15861 localhost:5000/zookeeper:latest "/docker-entrypoint.…" 40 minutes ago Up 40 minutes 2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp zkr_zkr.3.qpr8qp5y
01dxxxx64bc localhost:5000/zookeeper:latest "/docker-entrypoint.…" 40 minutes ago Up 40 minutes 2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp zkr_zkr.1.g2uee5j
Related
The goal is to run two containers of publisher-app. One container should be mapped to port 8080 on the host machine, and the other to 8081. Here is the docker-compose:
publisher_app:
  ports:
    - "8080-8081:8080"
  environment:
    server.port: 8080
  deploy:
    mode: replicated
    replicas: 2
Two containers are created but, as I understand it, both host ports are assigned to the first one, and the second one produces this error: Ports are not available: listen tcp 0.0.0.0:8081: bind: address already in use.
Here is the output of docker ps -a:
6c7067b4ebee spring-boot-rest-kafka_publisher_app "java -jar /app.jar" 33 seconds ago Up 28 seconds 0.0.0.0:8080->8080/tcp, 0.0.0.0:8081->8080/tcp spring-boot-rest-kafka_publisher_app_2
70828ba8f370 spring-boot-rest-kafka_publisher_app "java -jar /app.jar" 33 seconds ago Created spring-boot-rest-kafka_publisher_app_1
Docker engine version: 20.10.11
Docker compose version: 2.2.1
How should I handle this case? Any help would be much appreciated.
Here is the source code: https://github.com/aleksei17/springboot-rest-kafka-mysql/blob/master/docker-compose.yml
I tried this locally on Windows 10 and it failed similarly, both with Compose v2 enabled and with v2 disabled, so it seems like a Compose issue. When I tried it on an amd64, Fedora-based Linux distro, with Docker installed from the package manager and docker-compose 1.29.2 installed manually (following the official guide for Linux), it worked.
Compose file:
version: "3"
services:
  web:
    image: "nginx:latest"
    ports:
      - "8000-8020:80"
Docker command:
docker-compose up --scale web=5
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b304d397b2cd nginx:latest "/docker-entrypoint.…" 14 seconds ago Up 7 seconds 0.0.0.0:8004->80/tcp, :::8004->80/tcp testdir_web_4
a8c6f177a6e6 nginx:latest "/docker-entrypoint.…" 14 seconds ago Up 7 seconds 0.0.0.0:8003->80/tcp, :::8003->80/tcp testdir_web_3
b1abe53e7d7d nginx:latest "/docker-entrypoint.…" 14 seconds ago Up 8 seconds 0.0.0.0:8002->80/tcp, :::8002->80/tcp testdir_web_2
ead91e9df671 nginx:latest "/docker-entrypoint.…" 14 seconds ago Up 9 seconds 0.0.0.0:8001->80/tcp, :::8001->80/tcp testdir_web_5
65ffd6a87715 nginx:latest "/docker-entrypoint.…" 24 seconds ago Up 21 seconds 0.0.0.0:8000->80/tcp, :::8000->80/tcp testdir_web_1
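Applying the same workaround to the question's publisher_app might look like the sketch below. The deploy: block is dropped because classic docker-compose (v1, used in this answer) ignores it outside swarm mode, and the build context path is a placeholder:
services:
  publisher_app:
    build: ./publisher-app   # placeholder build context
    ports:
      - "8080-8081:8080"   # each replica binds the next free host port in the range
Then scale from the CLI:
docker-compose up -d --scale publisher_app=2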
I have some Docker containers running, as shown below:
77beec19859a nginx:latest "nginx -g 'daemon of…" 9 seconds ago Up 4 seconds 0.0.0.0:8000->80/tcp dockerisedphp_web_1
d48461d800e0 php:fpm "docker-php-entrypoi…" 9 seconds ago Up 4 seconds 9000/tcp dockerisedphp_php_1
a6ed456a4cc2 phpmyadmin/phpmyadmin "/docker-entrypoint.…" 12 hours ago Up 12 hours 0.0.0.0:8080->80/tcp sc-phpmyadmin
9e0dda76c110 firewatchdocker_webserver "docker-php-entrypoi…" 12 hours ago Up 12 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp 7.4.x-webserver
beba7cb1ee14 firewatchdocker_mysql "docker-entrypoint.s…" 12 hours ago Up 12 hours 0.0.0.0:3306->3306/tcp, 33060/tcp mysql
000e0f21d46e redis:latest "docker-entrypoint.s…" 12 hours ago Up 12 hours 0.0.0.0:6379->6379/tcp sc-redis
The problem is that my PHP script needs to access the MySQL data inside the mysql container, from the dockerisedphp_web_1 container.
Is this kind of data exchange between containers possible?
I'm using docker-compose to bring everything up.
If you only need to do something with the data, and you don't need it on the host, you can use docker exec.
If you want to copy it to the host, or copy data from the host into a container, you can use docker cp.
You can use docker exec to run the mysql client inside the mysql container, write the results to a file, and then use docker cp to copy the output to the host.
Or you can just do something like docker exec mysql-container-name mysql mysql-args > output on the host.
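For example, a sketch using the mysql container from the docker ps output above (the database name and credentials are placeholders; the official mysql image exposes MYSQL_ROOT_PASSWORD as an environment variable inside the container):
# run a query inside the container and capture the output on the host
docker exec mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "SHOW DATABASES"' > databases.txt
# or dump a database inside the container, then copy the dump file to the host
docker exec mysql sh -c 'exec mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" somedb > /tmp/somedb.sql'
docker cp mysql:/tmp/somedb.sql ./somedb.sql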
I'm getting the error "This page isn't working". I ran the following command inside the Laradock directory, yet it's not connecting when I go to localhost:
docker-compose up -d nginx postgres
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19433b191832 laradock_nginx "/bin/bash /opt/star…" 5 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp laradock_nginx_1
e7f68a9d841d laradock_php-fpm "docker-php-entrypoi…" 5 minutes ago Up 5 minutes 9000/tcp laradock_php-fpm_1
3c73fedff4aa laradock_workspace "/sbin/my_init" 5 minutes ago Up 5 minutes 0.0.0.0:2222->22/tcp laradock_workspace_1
eefb58598ee5 laradock_postgres "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:5432->5432/tcp laradock_postgres_1
ea559a775854 docker:dind "dockerd-entrypoint.…" 5 minutes ago Up 5 minutes 2375/tcp laradock_docker-in-docker_1
docker-compose ps returns these results:
$ docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------------------------------------
laradock_docker-in-docker_1 dockerd-entrypoint.sh Up 2375/tcp
laradock_nginx_1 /bin/bash /opt/startup.sh Up 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
laradock_php-fpm_1 docker-php-entrypoint php-fpm Up 9000/tcp
laradock_postgres_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
laradock_workspace_1 /sbin/my_init Up 0.0.0.0:2222->22/tcp
Any help would be much appreciated.
I figured this out. I had edited my docker-compose volume to be /local/path/to/default.conf:/etc/nginx/sites-available
This is a problem because nginx looks for a default.conf file, but that volume mapping mounted the file over the sites-available path itself. I had assumed the bind mount would place the file inside the sites-available directory, not replace the directory with a file.
The correct volume syntax should be:
/local/path/to/default.conf:/etc/nginx/sites-available/default.conf
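In context, the nginx service's volume entry would look something like this (a sketch; the service name and host path are placeholders):
nginx:
  volumes:
    - /local/path/to/default.conf:/etc/nginx/sites-available/default.conf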
I have 2 containers running on ports 5001 and 5000 of my server. This server can be part of a Docker swarm. When I leave the swarm using docker swarm leave --force, the redirection of my physical ports is gone.
[98-swarm-hello-world *]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dfc9bbc8573b registry "/entrypoint.sh /etc…" 7 seconds ago Up 5 seconds 0.0.0.0:5001->5000/tcp docker-registry_registry-private_1
760cbf6e6b15 registry "/entrypoint.sh /etc…" 7 seconds ago Up 5 seconds 0.0.0.0:5000->5000/tcp docker-registry_registry-mirror_1
[98-swarm-hello-world *]$ docker swarm init
Swarm initialized: current node (srlmoh6a2nm28biifgv7vpjb1) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-1hws6so0lpikgc1e0ztlhpobj7ejvg0hg4lk0k22wsdss4ntri-7l6eoo7cimlhmpzputbjpo6qt 192.168.65.3:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[98-swarm-hello-world *]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dfc9bbc8573b registry "/entrypoint.sh /etc…" 21 seconds ago Up 19 seconds 0.0.0.0:5001->5000/tcp docker-registry_registry-private_1
760cbf6e6b15 registry "/entrypoint.sh /etc…" 21 seconds ago Up 19 seconds 0.0.0.0:5000->5000/tcp docker-registry_registry-mirror_1
For now the ports are still there, but then:
[98-swarm-hello-world *]$ docker swarm leave --force
Node left the swarm.
[98-swarm-hello-world *]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dfc9bbc8573b registry "/entrypoint.sh /etc…" 33 seconds ago Up 31 seconds docker-registry_registry-private_1
760cbf6e6b15 registry "/entrypoint.sh /etc…" 33 seconds ago Up 31 seconds docker-registry_registry-mirror_1
EDIT: My image might have a problem; with another image (a container created with docker container run --rm --name nginx -p 80:80 -d nginx) the ports are still exposed:
[root@n0300 docker-registry]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f40e2463eb38 registry "/entrypoint.sh /etc…" 6 seconds ago Up 2 seconds 0.0.0.0:5001->5000/tcp docker-registry_registry-private_1
fbb31476bddf registry "/entrypoint.sh /etc…" 6 seconds ago Up 2 seconds 0.0.0.0:5000->5000/tcp docker-registry_registry-mirror_1
b3086042d2f5 nginx "nginx -g 'daemon of…" 3 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp nginx
[root@n0300 docker-registry]# docker swarm init
Swarm initialized: current node (s5fpahqg1klnbi2w90pver5ao) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3b2gv5e1f3x4ez9s3itf5hxnilypvh0g4t4butdhggwqpjsx2n-c4l1o42p4fl9mwy8ktjhl3yzo 172.16.1.44:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@n0300 docker-registry]# docker swarm leave --fore
unknown flag: --fore
See 'docker swarm leave --help'.
[root@n0300 docker-registry]# docker swarm leave --force
Node left the swarm.
[root@n0300 docker-registry]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f40e2463eb38 registry "/entrypoint.sh /etc…" 22 seconds ago Up 18 seconds docker-registry_registry-private_1
fbb31476bddf registry "/entrypoint.sh /etc…" 22 seconds ago Up 18 seconds docker-registry_registry-mirror_1
b3086042d2f5 nginx "nginx -g 'daemon of…" 3 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp nginx
EDIT 2: My image isn't built; I'm just running a container.
Here's the compose file:
version: '3'
services:
  registry-mirror:
    image: registry
    environment:
      - REGISTRY_HTTP_ADDR=0.0.0.0:5000
    volumes:
      - ./config-mirror-registry.yml:/etc/docker/registry/config.yml
    ports:
      - "5000:5000"
  registry-private:
    image: registry
    environment:
      - REGISTRY_HTTP_ADDR=0.0.0.0:5000
    ports:
      - "5001:5000"
And here is the config file:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
proxy:
  remoteurl: https://registry-1.docker.io
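As a side note on what this config does: the proxy.remoteurl setting turns registry-mirror into a pull-through cache of Docker Hub. A client daemon would use it via a daemon.json entry like this sketch (assuming the mirror is reachable at localhost:5000):
{
  "registry-mirrors": ["http://localhost:5000"]
}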
I just committed a Docker container and am getting the following list:
[root@kentuckianatradenew log]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19d7479f4f66 newsslcyclos "catalina.sh run" 12 minutes ago Up 12 minutes 0.0.0.0:80->80/tcp, 8080/tcp n2n33
88175386c0da cyclos/db "docker-entrypoint.s…" 26 hours ago Up 21 minutes 5432/tcp cyclos-db
But when I browse it through the IP it isn't accessible, whereas the same setup was fine before the commit.
docker port 19d7479f4f66
80/tcp -> 0.0.0.0:80