Docker container not accessible after commit

I just committed a Docker container, and docker ps shows the following:
[root@kentuckianatradenew log]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19d7479f4f66 newsslcyclos "catalina.sh run" 12 minutes ago Up 12 minutes 0.0.0.0:80->80/tcp, 8080/tcp n2n33
88175386c0da cyclos/db "docker-entrypoint.s…" 26 hours ago Up 21 minutes 5432/tcp cyclos-db
But when I browse to it via the server's IP it isn't accessible, while the same setup worked fine before the commit. docker port shows the mapping:
docker port 19d7479f4f66
80/tcp -> 0.0.0.0:80
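One generic first check (a debugging sketch, not an answer from the thread): verify that something inside the committed container actually listens on container port 80; catalina.sh normally serves Tomcat on 8080, and the mapping above targets port 80.
docker logs 19d7479f4f66
docker exec 19d7479f4f66 sh -c 'netstat -tln || ss -tln'   # either tool may be missing in the image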

Related

Docker stack deploy cannot deploy a service to a different node in a swarm cluster

I am trying to deploy the application across multiple instances from the master node. After deploying, the application runs only on the master node; the service is never scheduled onto the other nodes in the Docker swarm cluster.
Here is my docker-compose file:
version: "3"
services:
  mydb:
    image: localhost:5000/mydb-1
    environment:
      TZ: "Asia/Colombo"
    ports:
      - 9042:9042
    volumes:
      - /root/data/cdb:/var/lib/cassandra
      - /root/logs/cdb:/var/log/cassandra
I then scale the service: docker service scale mydb-1_mydb=5
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7fxxxxxxxx7 localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 5 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.2.q77i258vn2xynlgein9s7tdpb
34fcxxxx14bd localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 4 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.1.s2mzitj8yzb0zo7spd3dmpo1j
9axxxx1efb localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 8 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.3.zgyev3p4qdg7hf7h67oeedutr
f14xxxee59 localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 2 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.4.r0themodonzzr1izdbnppd5bi
e3xxx16d localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 6 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.5.bdebi4
All five replicas run only on the master node. Does anyone know the issue?
Your image appears to be locally built, with a name that cannot be resolved on other nodes (localhost:5000/mydb-1). In swarm mode, images should be pushed to a registry, and that registry needs to be accessible from all nodes. You can run your own registry service on one of your nodes (there is an official registry image for that), or push to Docker Hub. If the registry is private, you also need to perform a docker login on the node running the stack deploy and include registry credentials in that deploy, e.g.
docker stack deploy -c compose.yml --with-registry-auth stack-name
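As a sketch of the self-hosted registry route (the address 192.168.1.10 and the image tag are placeholders; adjust them to your environment):
docker service create --name registry --publish 5000:5000 registry:2
docker tag mydb-1 192.168.1.10:5000/mydb-1
docker push 192.168.1.10:5000/mydb-1   # nodes may need this address under insecure-registries if it is plain HTTP
docker stack deploy -c compose.yml mydb-1   # compose file now references 192.168.1.10:5000/mydb-1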
Thanks. I found the issue and fixed it. The problem was these bind mounts:
volumes:
  - /root/data/cdb:/var/lib/cassandra
  - /root/logs/cdb:/var/log/cassandra
If you bind mount a host path into your service’s containers, the path must exist on every swarm node.
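In other words, the directories have to exist on each node before replicas can start there; for example (hostnames are placeholders):
for node in worker-1 worker-2; do
  ssh "$node" 'mkdir -p /root/data/cdb /root/logs/cdb'
done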
docker service scale zkr_zkr=2
After scaling up, the service is running on my node:
root@beta-node-1:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f9bxxx15861 localhost:5000/zookeeper:latest "/docker-entrypoint.…" 40 minutes ago Up 40 minutes 2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp zkr_zkr.3.qpr8qp5y
01dxxxx64bc localhost:5000/zookeeper:latest "/docker-entrypoint.…" 40 minutes ago Up 40 minutes 2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp zkr_zkr.1.g2uee5j
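A quick way to confirm where every replica landed (a generic check, not from the thread) is to ask the manager; the NODE column shows the placement:
docker service ps zkr_zkr   # run on a manager node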

Why can't I go to localhost using Laradock?

I'm getting the error “This page isn’t working”. I ran the following command inside the Laradock directory, yet I can't connect when I go to localhost:
docker-compose up -d nginx postgres
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19433b191832 laradock_nginx "/bin/bash /opt/star…" 5 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp laradock_nginx_1
e7f68a9d841d laradock_php-fpm "docker-php-entrypoi…" 5 minutes ago Up 5 minutes 9000/tcp laradock_php-fpm_1
3c73fedff4aa laradock_workspace "/sbin/my_init" 5 minutes ago Up 5 minutes 0.0.0.0:2222->22/tcp laradock_workspace_1
eefb58598ee5 laradock_postgres "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:5432->5432/tcp laradock_postgres_1
ea559a775854 docker:dind "dockerd-entrypoint.…" 5 minutes ago Up 5 minutes 2375/tcp laradock_docker-in-docker_1
docker-compose ps returns these results:
$ docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------------------------------------
laradock_docker-in-docker_1 dockerd-entrypoint.sh Up 2375/tcp
laradock_nginx_1 /bin/bash /opt/startup.sh Up 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
laradock_php-fpm_1 docker-php-entrypoint php-fpm Up 9000/tcp
laradock_postgres_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
laradock_workspace_1 /sbin/my_init Up 0.0.0.0:2222->22/tcp
Any help would be much appreciated.
I figured this out. I had edited my docker-compose volume to be /local/path/to/default.conf:/etc/nginx/sites-available
This is a problem because nginx looks for a default.conf file inside sites-available, but that mapping turned sites-available itself into a file. I had assumed the Docker volume would place the file inside the sites-available directory, not replace the directory with a file.
The correct volume syntax should be:
/local/path/to/default.conf:/etc/nginx/sites-available/default.conf
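To apply and verify the fix (a sketch; the service name nginx is assumed):
docker-compose up -d --force-recreate nginx
docker-compose exec nginx ls -l /etc/nginx/sites-available   # default.conf should appear as a file inside the directory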

Is there a way to stop docker swarm leave --force from removing the port redirections of currently running containers

I have 2 containers running on ports 5001 and 5000 of my server. This server can be part of a docker swarm. When I leave the swarm using docker swarm leave --force, the redirection of my physical ports is gone.
[98-swarm-hello-world *]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dfc9bbc8573b registry "/entrypoint.sh /etc…" 7 seconds ago Up 5 seconds 0.0.0.0:5001->5000/tcp docker-registry_registry-private_1
760cbf6e6b15 registry "/entrypoint.sh /etc…" 7 seconds ago Up 5 seconds 0.0.0.0:5000->5000/tcp docker-registry_registry-mirror_1
[98-swarm-hello-world *]$ docker swarm init
Swarm initialized: current node (srlmoh6a2nm28biifgv7vpjb1) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-1hws6so0lpikgc1e0ztlhpobj7ejvg0hg4lk0k22wsdss4ntri-7l6eoo7cimlhmpzputbjpo6qt 192.168.65.3:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[98-swarm-hello-world *]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dfc9bbc8573b registry "/entrypoint.sh /etc…" 21 seconds ago Up 19 seconds 0.0.0.0:5001->5000/tcp docker-registry_registry-private_1
760cbf6e6b15 registry "/entrypoint.sh /etc…" 21 seconds ago Up 19 seconds 0.0.0.0:5000->5000/tcp docker-registry_registry-mirror_1
For now the ports are still there, but then:
[98-swarm-hello-world *]$ docker swarm leave --force
Node left the swarm.
[98-swarm-hello-world *]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dfc9bbc8573b registry "/entrypoint.sh /etc…" 33 seconds ago Up 31 seconds docker-registry_registry-private_1
760cbf6e6b15 registry "/entrypoint.sh /etc…" 33 seconds ago Up 31 seconds docker-registry_registry-mirror_1
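One common recovery step (a workaround sketch, not an answer from the thread): recreating the containers re-publishes the ports:
docker-compose up -d --force-recreate
docker ps --format '{{.Names}}\t{{.Ports}}'   # the 0.0.0.0:5000/5001 mappings should be back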
EDIT: My image might have a problem; with another container (created with docker container run --rm --name nginx -p 80:80 -d nginx) the port is still published:
[root@n0300 docker-registry]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f40e2463eb38 registry "/entrypoint.sh /etc…" 6 seconds ago Up 2 seconds 0.0.0.0:5001->5000/tcp docker-registry_registry-private_1
fbb31476bddf registry "/entrypoint.sh /etc…" 6 seconds ago Up 2 seconds 0.0.0.0:5000->5000/tcp docker-registry_registry-mirror_1
b3086042d2f5 nginx "nginx -g 'daemon of…" 3 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp nginx
[root@n0300 docker-registry]# docker swarm init
Swarm initialized: current node (s5fpahqg1klnbi2w90pver5ao) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3b2gv5e1f3x4ez9s3itf5hxnilypvh0g4t4butdhggwqpjsx2n-c4l1o42p4fl9mwy8ktjhl3yzo 172.16.1.44:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@n0300 docker-registry]# docker swarm leave --fore
unknown flag: --fore
See 'docker swarm leave --help'.
[root@n0300 docker-registry]# docker swarm leave --force
Node left the swarm.
[root@n0300 docker-registry]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f40e2463eb38 registry "/entrypoint.sh /etc…" 22 seconds ago Up 18 seconds docker-registry_registry-private_1
fbb31476bddf registry "/entrypoint.sh /etc…" 22 seconds ago Up 18 seconds docker-registry_registry-mirror_1
b3086042d2f5 nginx "nginx -g 'daemon of…" 3 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp nginx
EDIT 2: My image isn't built; I'm just running a container.
Here's the compose file:
version: '3'
services:
  registry-mirror:
    image: registry
    environment:
      - REGISTRY_HTTP_ADDR=0.0.0.0:5000
    volumes:
      - ./config-mirror-registry.yml:/etc/docker/registry/config.yml
    ports:
      - "5000:5000"
  registry-private:
    image: registry
    environment:
      - REGISTRY_HTTP_ADDR=0.0.0.0:5000
    ports:
      - "5001:5000"
And here is the config file:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
proxy:
  remoteurl: https://registry-1.docker.io

I ran a docker container, but after a few minutes it was killed by itself

I ran a Docker container and used nsenter to enter it. But after a few minutes, the container was killed:
root@n14:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e69069513c8b sameersbn/mysql "/start" 25 minutes ago Up 12 seconds 3306/tcp mysqldb1
02f3f156b3e7 sameersbn/mysql "/start" 25 minutes ago Up 25 minutes 3306/tcp mysqldb
root@n14:~# nsenter --target 13823 --mount --uts --ipc --net --pid
root@e69069513c8b:/home/git/gitlab# Killed
root@n14:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e69069513c8b sameersbn/mysql "/start" 55 minutes ago Exited (0) 28 minutes ago mysqldb1
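For reference, the target PID passed to nsenter is normally the container's main process, obtained via docker inspect (a sketch using the container name from the listing above):
PID=$(docker inspect --format '{{.State.Pid}}' mysqldb1)
nsenter --target "$PID" --mount --uts --ipc --net --pid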

Docker port uncertainty

I am trying to access a running app on a port that I defined using "EXPOSE".
Here is what I get:
docker@boot2docker:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
653d8ee23260 nginx:latest "nginx -g 'daemon of 2 minutes ago Up 2 minutes 80/tcp, 443/tcp insane_thompson
007cfcd0f539 highlighter:latest "java -jar -Xmx1500m 8 minutes ago Up 8 minutes 7777/tcp elated_kirch
docker@boot2docker:~$ docker port 007cfcd0f539
docker@boot2docker:~$ docker port 653d8ee23260
docker@boot2docker:~$ docker port 653d8ee23260 80
FATA[0000] Error: No public port '80/tcp' published for 653d8ee23260
docker@boot2docker:~$ docker port 007cfcd0f539 7777
FATA[0000] Error: No public port '7777/tcp' published for 007cfcd0f539
Am I misunderstanding how the "port" command works?
EXPOSE in a Dockerfile is not enough.
You need to explicitly tell Docker to publish the port when running the container, using the -P or -p flags of docker run.
A much more detailed answer can be found here.
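For example (a sketch using the containers above; the host-side ports are arbitrary choices):
docker run -d -p 7777:7777 highlighter:latest
docker run -d -p 80:80 -p 443:443 nginx:latest
docker run -d -P nginx:latest   # or publish every EXPOSEd port to a random host port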
