This may be a silly question, but I hope someone can help me.
I thought Docker containers could only run because docker-machine was running on my macOS system, as in this situation:
> docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Running tcp://192.168.99.100:2376 v1.12.2
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abb8beb2a0fd httpd:2.4 "httpd-foreground" 48 minutes ago Up 47 minutes 0.0.0.0:80->80/tcp romantic_kare
But the container is still running even in this situation:
> docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Stopped Unknown
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abb8beb2a0fd httpd:2.4 "httpd-foreground" 48 minutes ago Up 47 minutes 0.0.0.0:80->80/tcp romantic_kare
Is there no relationship between them?
Reference: https://docs.docker.com/machine/overview/
I installed Docker for Mac.
> docker --version
Docker version 1.12.1, build 6f9534c
This post is a duplicate of Default docker machine on Mac.
Docker 1.12 and onward no longer uses docker-machine to run containers; instead, Docker for Mac/Windows runs a native Docker engine.
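If you want to check which engine your CLI is talking to, here is a minimal sketch (assuming the default docker-machine VM is named default):

# empty output means the CLI talks to the native Docker for Mac engine
echo $DOCKER_HOST

# point the CLI at the VirtualBox VM instead (only if you still want docker-machine)
eval $(docker-machine env default)
docker ps                      # now lists containers running inside the VM

# switch back to the native engine
eval $(docker-machine env -u)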
My elasticsearch-head is not connecting even though my Elasticsearch container is running.
I can't understand the problem.
(env) C:\Users\shubh\Desktop\react-django\emt_api\emt_assets>docker ps -a
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED       STATUS       PORTS                              NAMES
117ba87ad874   mdillon/postgis:11                                     "docker-entrypoint.s…"   9 hours ago   Up 4 hours   0.0.0.0:5432->5432/tcp             postgres
70b6527e1046   docker.elastic.co/elasticsearch/elasticsearch:6.8.8   "/usr/local/bin/dock…"   9 hours ago   Up 5 hours   0.0.0.0:9200->9200/tcp, 9300/tcp   elasticsearch
But after running elasticsearch-head there is still no connection.
My system is Windows 10 Home edition.
I was working with Docker Toolbox, which includes Oracle VirtualBox. There you can see your docker-machine settings.
Just stop your docker machine with this command in the Docker Quickstart Terminal:
docker-machine stop <docker-machine name>
Then in Oracle VirtualBox, go to System > Advanced and increase the memory to 4 GB (4096 MB).
Then start your docker machine again with this command in the Docker Quickstart Terminal:
docker-machine start <docker-machine name>
and work with your container.
It's very common for the ES process to die with an out-of-memory error. Please refer to this SO answer of mine for more info.
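If the extra VM memory alone doesn't help, here is a sketch of capping the Elasticsearch JVM heap so it fits inside the docker-machine VM when you (re)create the container (image tag taken from the docker ps output above; 512m is just an example value):

docker run -d --name elasticsearch \
  -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
  docker.elastic.co/elasticsearch/elasticsearch:6.8.8

Alternatively, the VM memory can also be set when creating the machine, e.g. docker-machine create -d virtualbox --virtualbox-memory 4096 default.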
I am new to the Docker world. While learning, I have created the setup below:
1. Virtual machine - Ubuntu 20 running on VMware Workstation 15 Player, IP 192.168.0.106. I am able to access the internet from this VM (call it VM1) and to ping it from my physical Windows 10 host.
2. Virtual machine - Ubuntu 20 running on VMware Workstation 15 Player, IP 192.168.0.105. I am able to access the internet from this VM (call it VM2) and to ping it from my physical Windows 10 host.
Now I have created the swarm as follows from VM1:
sudo docker swarm init --advertise-addr 192.168.0.106:2377 --listen-addr 192.168.0.106:2377
Then I added VM2 to the swarm as follows:
sudo docker swarm join --token SWMTKN-1-4i56y47l6o4aycrmg7un21oegmfmwnllcsxaf4zxd05ggqg0zh-9qp67bejerq9dhl3f0suaauvl 192.168.0.106:2377 --advertise-addr 192.168.0.105:2377 --listen-addr 192.168.0.105:2377
After that I checked the swarm details:
sudo docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
ogka7rdjohri9elcbjjcpdlbp * ubuntumaster Ready Active Leader 19.03.12
7qu9kiprcz7oowfk2ol31k1mx ubuntuslave Ready Active 19.03.13
Then I deployed the nginx service from VM1 as follows:
sudo docker service create -d --name myweb1 --mode global -p9090:80 nginx:1.19.3
Service status:
sudo docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
e1o9cbm3e0t myweb1 global 2/2 nginx:1.19.3 *:9090->80/tcp
Service details:
sudo docker service ps zf6kfw7aqhag
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
egd8oliwngf3 myweb1.ogka7rdjohri9elcbjjcpdlbp nginx:1.19.3 ubuntumaster Running Running 14 minutes ago
1o4q8dlt94jj myweb1.7qu9kiprcz7oowfk2ol31k1mx nginx:1.19.3 ubuntuslave Running Running 14 minutes ago
Now I am able to access nginx from VM1 using 192.168.0.106:9090 and localhost:9090, but I am not able to access nginx from VM2 using 192.168.0.105:9090 or localhost:9090. My understanding is that nginx is running on both VMs and should be accessible on both.
On both VMs I can see that the nginx container is running.
VM1 :
sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7a4e13e49dfd nginx:1.19.3 "/docker-entrypoint.…" 16 minutes ago Up 15 minutes 80/tcp myweb1.ogka7rdjohri9elcbjjcpdlbp.egd8oliwngf35wwpjcieew323
VM2:
sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
999062110f0 nginx:1.19.3 "/docker-entrypoint.…" 16 minutes ago Up 16 minutes 80/tcp myweb1.7qu9kiprcz7oowfk2ol31k1mx.1o4q8dlt94jj4uufysnhsbamd
Please guide me if I am making any mistakes.
TIA,
Deb
Problem solved! It was an IP address clash. Restarting everything, including the VMs and the router, solved the issue.
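For anyone hitting similar symptoms where a published service port answers on only one node, a quick sanity check (a sketch; the port numbers are the Swarm defaults, and ufw is only an example firewall):

# the ingress routing mesh should answer on every node, even ones not running a task
curl -I http://localhost:9090

# Swarm needs these ports open between the nodes:
#   TCP 2377      cluster management
#   TCP/UDP 7946  node-to-node communication
#   UDP 4789      overlay (ingress) network traffic
sudo ufw allow 2377/tcp && sudo ufw allow 7946 && sudo ufw allow 4789/udp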
I am trying to connect to my Docker container.
My docker-machine ip is:
docker-machine ip default
192.168.99.100
docker ps
b546f4666f01 richarvey/nginx-php-fpm:php5 "/start.sh" 2 days ago Up 2 days 443/tcp, 0.0.0.0:8080->80/tcp my_container
When I try to access it in a browser, 192.168.99.100:8080 doesn't resolve, but I can access the container at localhost:8080.
Here's the docker + machine versions:
Docker version 18.06.0-ce, build 0ffa825
docker-machine version 0.15.0, build b48dc28d
I used to be able to access the container directly with the same network settings as above on an older version.
I have a Docker swarm cluster consisting of one manager and one worker node. I then configured a client on my laptop (TLS and DOCKER_HOST) to get access to this cluster.
When I run docker ps, I see only containers from the worker node (and not even all of the worker node's containers!).
For example, from my client:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a129d9402aeb progrium/consul "/bin/start -rejoi..." 2 weeks ago Up 22 hours IP:8300-8302->8300-8302/tcp, IP:8400->8400/tcp, IP:8301-8302->8301-8302/udp, 53/tcp, 53/udp, IP:8500->8500/tcp, IP:8600->8600/udp hadoop1103/consul-agt2-hadoop
As well as I run docker ps at worker node:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4fec7fbf0b00 swarm "/swarm join --advert" 16 hours ago Up 16 hours 2375/tcp join
a129d9402aeb progrium/consul "/bin/start -rejoin -" 2 weeks ago Up 22 hours 0.0.0.0:8300-8302->8300-8302/tcp, 0.0.0.0:8400->8400/tcp, 0.0.0.0:8301-8302->8301-8302/udp, 53/tcp, 53/udp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8600->8600/udp consul-agt2-hadoop
So, two questions: why doesn't docker ps show containers from the manager machine, and why doesn't it show all containers from the worker node?
Classic swarm (run as a container) by default hides the swarm management containers from docker ps output. You can show these containers with a docker ps -a command instead.
This behavior may be documented elsewhere, but the one place I've seen it documented is in the API differences docs:
GET "/containers/json"
Containers started from the swarm official image are hidden by default, use all=1 to display them.
The all=1 API parameter is the equivalent of the docker ps -a CLI flag.
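For reference, a sketch of the same query against the API directly (the cert paths and the 3376 manager port are assumptions from a typical classic-swarm TLS setup):

# docker ps     ->  GET /containers/json
# docker ps -a  ->  GET /containers/json?all=1
curl --cert cert.pem --key key.pem --cacert ca.pem \
  "https://swarm-manager:3376/containers/json?all=1"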
I want to create a Docker Swarm cluster running an Elasticsearch instance, a MongoDB instance, and a Grails app, each on a separate machine. I'm using Docker Machine to set up my Docker Swarm cluster:
swarm-01:
mongodb
mongodb_ambassador
swarm-02:
elasticsearch
elasticsearch_ambassador
swarm-03:
mongodb_ambassador
elasticsearch_ambassador
grails
The last step of my setup, running the actual Grails app using the following command:
docker run -p 8080:8080 -d --name grails-master --volumes-from maven --link mongo:mongo-master --link es:es-master my-grails-image
fails with error:
Error response from daemon: Unable to find a node fulfilling all
dependencies: --volumes-from=maven --link=mongo:mongo-master
--link=es:es-master
The ambassador containers and the maven data container are all running on the same node.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74677dad09a7 svendowideit/ambassador "/bin/sh -c 'env | gr" 18 minutes ago Up 18 minutes 9200/tcp, 9300/tcp swarm-03/es
98b38c4fc575 svendowideit/ambassador "/bin/sh -c 'env | gr" 18 minutes ago Up 18 minutes 27107/tcp swarm-03/mongo
7d45fb82eacc debian:jessie "/bin/bash" 20 minutes ago swarm-03/maven
I'm not able to get the Grails app running on the Swarm cluster; any advice would be appreciated. Running all containers on a single machine works, so I guess I'm making a mistake linking the mongo and es instances to the grails app.
By the way, I'm using the latest Docker Toolbox installation on OS X.
"linking" is deprecated in docker. Don't use it. It's complicated and not flexible enough.
Just create an overlay network for swarm mode.
docker network create -d overlay mynetwork
In swarm mode (even with single-container services), just attach every service that needs to communicate with another service to the same network:
docker service create --network mynetwork --name mymongodb ...
Other services on the same network can reach your MongoDB service via the hostname mymongodb. That's all; Docker swarm mode has batteries included.
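Applied to the setup in the question, a minimal sketch (the service names and my-grails-image come from the question; the mongo and elasticsearch images/tags are placeholders you would pin yourself):

docker network create -d overlay mynetwork

docker service create --network mynetwork --name mongo mongo
docker service create --network mynetwork --name es elasticsearch
docker service create --network mynetwork --name grails \
  --publish 8080:8080 my-grails-image

# Inside the grails service, MongoDB is reachable at mongo:27017 and Elasticsearch
# at es:9200, by service name, with no links or ambassadors needed.

Note that --volumes-from is not available for services; the maven data would need to be attached with a --mount instead.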