I am new to the Docker world. While learning, I have created the setup below:
1. Virtual machine - Ubuntu 20 running on VMware Workstation 15 Player. IP - 192.168.0.106. I am able to access the internet from this VM (call it VM1) and able to ping it from my physical host OS (Windows 10).
2. Virtual machine - Ubuntu 20 running on VMware Workstation 15 Player. IP - 192.168.0.105. I am able to access the internet from this VM (call it VM2) and able to ping it from my physical host OS (Windows 10).
Then I created the swarm from VM1 as follows:
sudo docker swarm init --advertise-addr 192.168.0.106:2377 --listen-addr 192.168.0.106:2377
Then I added VM2 to this swarm as follows:
sudo docker swarm join --token SWMTKN-1-4i56y47l6o4aycrmg7un21oegmfmwnllcsxaf4zxd05ggqg0zh-9qp67bejerq9dhl3f0suaauvl 192.168.0.106:2377 --advertise-addr 192.168.0.105:2377 --listen-addr 192.168.0.105:2377
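(Aside: the worker join token above can be re-printed at any time by running the command below on the manager; I am adding it here just for completeness.)
sudo docker swarm join-token worker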
After that I checked the swarm details:
sudo docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
ogka7rdjohri9elcbjjcpdlbp * ubuntumaster Ready Active Leader 19.03.12
7qu9kiprcz7oowfk2ol31k1mx ubuntuslave Ready Active 19.03.13
Then I deployed the nginx service from VM1 as follows:
sudo docker service create -d --name myweb1 --mode global -p9090:80 nginx:1.19.3
Service status:
sudo docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
e1o9cbm3e0t myweb1 global 2/2 nginx:1.19.3 *:9090->80/tcp
Service details:
sudo docker service ps zf6kfw7aqhag
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
egd8oliwngf3 myweb1.ogka7rdjohri9elcbjjcpdlbp nginx:1.19.3 ubuntumaster Running Running 14 minutes ago
1o4q8dlt94jj myweb1.7qu9kiprcz7oowfk2ol31k1mx nginx:1.19.3 ubuntuslave Running Running 14 minutes ago
Now I am able to access nginx from VM1 using the URLs 192.168.0.106:9090 and localhost:9090. But I am not able to access nginx from VM2 using the URLs 192.168.0.105:9090 and localhost:9090. My understanding is that nginx is running on both VMs and should be accessible on both.
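(As a sanity check, and just one way to do it rather than something from my steps above, the publish mode and ports of the service can be confirmed with docker service inspect:)
sudo docker service inspect --format '{{json .Endpoint.Ports}}' myweb1
The output should show PublishMode "ingress", TargetPort 80 and PublishedPort 9090, meaning the routing mesh should answer on port 9090 on every node.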
On both VMs I am able to see that the nginx container is running.
VM1 :
sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7a4e13e49dfd nginx:1.19.3 "/docker-entrypoint.…" 16 minutes ago Up 15 minutes 80/tcp myweb1.ogka7rdjohri9elcbjjcpdlbp.egd8oliwngf35wwpjcieew323
VM2:
sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
999062110f0 nginx:1.19.3 "/docker-entrypoint.…" 16 minutes ago Up 16 minutes 80/tcp myweb1.7qu9kiprcz7oowfk2ol31k1mx.1o4q8dlt94jj4uufysnhsbamd
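(A further check, again just a sketch: verify that the ingress load balancer is listening on each node and that the standard swarm ports are open between the VMs.)
sudo ss -lntp | grep 9090
sudo ufw status
Swarm needs 2377/tcp (cluster management), 7946/tcp and udp (node gossip) and 4789/udp (overlay VXLAN traffic) open between the nodes, in addition to the published port itself.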
Please guide me if I am making any mistakes here.
TIA,
Deb
Problem solved! The issue was an IP address clash. Restarting everything, including the VMs and the router, solved it.
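(For anyone hitting the same symptom: a duplicate IP on the LAN can often be detected with arping's duplicate address detection mode; the interface name ens33 below is just a guess for a typical VMware Ubuntu guest.)
sudo arping -D -I ens33 -c 3 192.168.0.106
A reply from another machine's MAC address means the same IP is also in use elsewhere.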
Related
I am trying to connect to my docker container.
My docker-machine ip is:
docker-machine ip default
192.168.99.100
docker ps
b546f4666f01 richarvey/nginx-php-fpm:php5 "/start.sh" 2 days ago Up 2 days 443/tcp, 0.0.0.0:8080->80/tcp my_container
When I try to access 192.168.99.100:8080 in a browser, it doesn't resolve. But I can access the container at localhost:8080.
Here's the docker + machine versions:
Docker version 18.06.0-ce, build 0ffa825
docker-machine version 0.15.0, build b48dc28d
I used to be able to access the container directly, with the same network settings as above, on an older version.
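(A couple of checks that might help, assuming the machine is named default as shown: confirm the client is actually pointed at that VM and that the VM answers on the mapped port.)
docker-machine env default
eval "$(docker-machine env default)"
curl http://$(docker-machine ip default):8080
If curl works from the terminal but the browser still fails, the issue is likely on the browser/host networking side rather than in Docker itself.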
I am running an image and have created containers from the open-source BusyBox project.
Below are my container details.
:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1ea15ec5dac5 busybox "sh" 16 hours ago Up 16 hours 0.0.0.0:80->8080/tcp tt3
df236d039aa1 busybox "sh" 17 hours ago Up 17 hours 0.0.0.0:32769->8080/tcp bb1
I am trying to access the containers using the URLs http://GoogleExternalIP:32769 and http://GoogleExternalIP:80, but I get an error like: This site can't be reached. 35.231.34.38 refused to connect. ERR_CONNECTION_REFUSED.
Note: Jenkins is running on port 8080 and I can access it, but Jenkins is installed directly on the VM, while the apps I want to access are running in Docker containers.
I have set up the Google Cloud firewall to allow all incoming traffic on all ports.
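(One thing to note: the mapping 0.0.0.0:80->8080/tcp forwards host port 80 to port 8080 inside the container, and a BusyBox container running only sh has nothing listening on 8080, so a connection refused is expected. A minimal sketch of a container that actually serves on 8080, using BusyBox's built-in httpd; the name web1 is made up:)
docker run -d --name web1 -p 80:8080 busybox sh -c "mkdir -p /www && echo hello > /www/index.html && httpd -f -p 8080 -h /www"
curl http://localhost:80
From outside, http://GoogleExternalIP:80 should then work as long as the firewall rule really allows tcp:80.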
I have a Docker Swarm cluster consisting of one manager and one worker node. I then configured a client on my laptop (TLS and DOCKER_HOST) to get access to this cluster.
When I run docker ps I see only containers from the worker node (and not even all of the worker node's containers!).
For example, from my client:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a129d9402aeb progrium/consul "/bin/start -rejoi..." 2 weeks ago Up 22 hours IP:8300-8302->8300-8302/tcp, IP:8400->8400/tcp, IP:8301-8302->8301-8302/udp, 53/tcp, 53/udp, IP:8500->8500/tcp, IP:8600->8600/udp hadoop1103/consul-agt2-hadoop
As well as I run docker ps at worker node:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4fec7fbf0b00 swarm "/swarm join --advert" 16 hours ago Up 16 hours 2375/tcp join
a129d9402aeb progrium/consul "/bin/start -rejoin -" 2 weeks ago Up 22 hours 0.0.0.0:8300-8302->8300-8302/tcp, 0.0.0.0:8400->8400/tcp, 0.0.0.0:8301-8302->8301-8302/udp, 53/tcp, 53/udp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8600->8600/udp consul-agt2-hadoop
So two questions: why doesn't docker ps show containers from the manager machine, and why doesn't it show all containers from the worker node?
Classic swarm (run as a container) by default hides the swarm management containers from docker ps output. You can show these containers with a docker ps -a command instead.
This behavior may be documented elsewhere, but the one location I've seen it documented is in the API differences docs:
GET "/containers/json"
Containers started from the swarm official image are hidden by default, use all=1 to display them.
The all=1 API parameter is the equivalent of docker ps -a on the CLI.
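(As a rough illustration, assuming the classic swarm manager's API is reachable over TLS on port 2376 with the same certificates used for DOCKER_HOST; the placeholder <manager> and the certificate paths are made up:)
curl --cacert ca.pem --cert cert.pem --key key.pem "https://<manager>:2376/containers/json"
curl --cacert ca.pem --cert cert.pem --key key.pem "https://<manager>:2376/containers/json?all=1"
The first call mirrors docker ps and hides the swarm's own containers; the second mirrors docker ps -a and includes them.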
This may be a silly question, but I hope someone can help me.
I thought Docker containers could only run because docker-machine is running on my macOS system, as in this situation:
> docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Running tcp://192.168.99.100:2376 v1.12.2
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abb8beb2a0fd httpd:2.4 "httpd-foreground" 48 minutes ago Up 47 minutes 0.0.0.0:80->80/tcp romantic_kare
But the container can still run even in this situation:
> docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Stopped Unknown
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abb8beb2a0fd httpd:2.4 "httpd-foreground" 48 minutes ago Up 47 minutes 0.0.0.0:80->80/tcp romantic_kare
Is there no relationship between them?
Reference: https://docs.docker.com/machine/overview/
I installed Docker for Mac.
> docker --version
Docker version 1.12.1, build 6f9534c
This post is a duplicate of Default docker machine on Mac.
Docker 1.12 and onward no longer needs docker-machine to run containers. Instead it uses a native Docker engine for Mac/Windows.
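(A quick way to see which engine the client is talking to; the machine name default is assumed from the output above:)
echo "$DOCKER_HOST"
docker version
eval "$(docker-machine env default)"
docker ps
With Docker for Mac, DOCKER_HOST is normally empty and the client talks to the local daemon socket; after the eval, the client is pointed at the VirtualBox VM instead and docker ps lists that VM's containers.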
I'm using docker registry, and the docker-registry-frontend is listed as running when I invoke docker ps, but it is not available at localhost:80:
e2a54694e434 konradkleine/docker-registry-frontend "/bin/sh -c $START_S 26 seconds ago Up 2 seconds 443/tcp, 0.0.0.0:8080->80/tcp serene_tesla
Do you use boot2docker or docker-machine? If so, you should use the VM's IP address instead of localhost.
For boot2docker, this is usually 192.168.59.103.
For docker-machine, get the IP address with docker-machine ip <yourmachine>.
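(For example, assuming a machine named default; note that the container above is published on host port 8080, not 80:)
docker-machine ip default
curl http://$(docker-machine ip default):8080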