docker version: 17.05.0-ce
I have some containers running by hand with docker run ..., but recently, for a new project, I created a docker-compose.yml file based on this tutorial. However, when I run the following commands on my hosting server:
docker network create --driver bridge reverse-proxy
docker-compose up
and
docker run -d --name nginx-reverse-proxy --net reverse-proxy -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
The proxy does not work for the old containers and I am unable to use subdomains for those projects (they "stop working").
So what should I do?
I experimented with the --net parameter of docker run ... and with docker network inspect network_name. I got many different results (welcome to nginx, HTTP 404 Not Found, HTTP 503 Service Temporarily Unavailable) and came to the following conclusions:
if there is no --net option, the container runs in the bridge network
if --net xxx is given, the container runs only in the 'xxx' network (not in bridge!)
if --net xxx --net yyy is given, the container runs only in 'yyy' (not in 'xxx' at all!)
bridge is the default Docker network for inter-container communication.
So when we run the proxy with only --net reverse-proxy, the proxy container does not see bridge and cannot communicate with the other containers. If we try --net reverse-proxy --net bridge (the flag repeated two or more times in one line, like -p), the container is connected only to the last network.
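A quick way to see the "last --net wins" behaviour for yourself (a throwaway sketch; the network and container names are made up):
docker network create net-a
docker network create net-b
docker run -d --name net-test --net net-a --net net-b alpine sleep 300
docker inspect -f '{{range $name, $conf := .NetworkSettings.Networks}}{{$name}} {{end}}' net-test
The last command prints only net-b.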
So the solution is to run the proxy as follows:
docker run -d --name nginx-reverse-proxy -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker network connect reverse-proxy nginx-reverse-proxy
As you can see, we do not use the --net option at all. The docker network connect command allows a container to be attached to multiple networks. When you run docker network inspect reverse-proxy and docker network inspect bridge, you will see that nginx-reverse-proxy is in both networks :)
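To double-check, you can list which containers are attached to each network (a quick sketch; the --format filter is optional, plain docker network inspect works too):
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' reverse-proxy
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' bridge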
I am running the MongoDB image with the following command:
docker run -d -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=test -e MONGO_INITDB_ROOT_PASSWORD=password --name=testdb mongo
This created the container and I'm able to connect to it from Robo 3T.
Now I run the mongo-express image with the following command, trying to connect to the above DB:
docker run -d -p 8081:8081 -e ME_CONFIG_MONGODB_ADMINUSERNAME=test -e ME_CONFIG_MONGODB_ADMINPASSWORD=password -e ME_CONFIG_MONGODB_SERVER=testdb --name=mongo-ex mongo-express
But I'm getting the following error:
UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [testb:27017] on first connect [Error: getaddrinfo ENOTFOUND testb
If I create a custom bridge network and run these two images on that network, it works.
My question is: as the default network is the bridge network, and these containers are created in the default bridge network, why are they not able to communicate? Why does it work with a custom bridge network?
There are two kinds of "bridge network"; if you don't have a docker run --net option then you get the "default" bridge network which is pretty limited. You almost always want to docker network create a "user-defined" bridge network, which has the standard Docker networking features.
# Use modern Docker networking
docker network create myapp
docker run -d --net myapp ... --name testdb mongo
docker run -d --net myapp ... -e ME_CONFIG_MONGODB_SERVER=testdb mongo-express
# Because both containers are on the same --net, the first
# container's --name is usable as a host name from the second
The "default" bridge network that you get without --net by default forbids inter-container communication, and you need a special --link option to make the connection. This is considered obsolete, and the Docker documentation page describing links notes that links "may eventually be removed".
# Use obsolete Docker networking; may stop working at some point
docker run -d ... --name testdb mongo
docker run -d ... -e ME_CONFIG_MONGODB_SERVER=testdb --link testdb mongo-express
# Containers can only connect to each other by name if they use --link
On modern Docker setups you really shouldn't use --link or the equivalent Compose links: option. Prefer to use the more modern docker network create form. If you're using Compose, note that Compose creates a network named default but this is a "user-defined bridge"; in most cases you don't need any networks: options at all to get reasonable inter-container networking.
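For illustration, here is a minimal docker-compose.yml sketch for the same two containers (the service names, credentials, and published ports mirror the docker run commands above; treat it as a starting point, not a drop-in file). Both services land on the Compose-created default network, so mongo-express can reach the database under the service name testdb:
version: "3"
services:
  testdb:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: test
      MONGO_INITDB_ROOT_PASSWORD: password
    ports:
      - "27017:27017"
  mongo-ex:
    image: mongo-express
    depends_on:
      - testdb
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: test
      ME_CONFIG_MONGODB_ADMINPASSWORD: password
      ME_CONFIG_MONGODB_SERVER: testdb
    ports:
      - "8081:8081"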
When I start MySQL:
docker run --rm -d -e MYSQL_ROOT_PASSWORD=root -p 3306:3306 -v /Docker/data/matos/mysql:/var/lib/mysql mysql:5.7
And start PHPMyAdmin:
docker run --rm -d -e PMA_HOST=172.17.0.1 phpmyadmin/phpmyadmin:latest
PMA cannot connect to the DB server.
When I try with PMA_HOST=172.17.0.2 (which is the address assigned to the MySQL container), it works.
But:
as the MySQL container publishes its port 3306, I think it should be reachable on 172.17.0.1:3306;
I don't want to use the 172.17.0.2 address, because the MySQL container may be assigned another address whenever it restarts.
Am I wrong?
(I know I can handle this with docker-compose, but I prefer managing my containers one by one.)
(My MySQL container can be reached successfully from my laptop with telnet 172.17.0.1 3306.)
(My docker version : Docker version 20.10.3, build 48d30b5).
Thanks for your help.
Create a new Docker network and start both containers on that network:
docker network create my-network
docker run --rm -d --network my-network -e MYSQL_ROOT_PASSWORD=root -p 3306:3306 -v /Docker/data/matos/mysql:/var/lib/mysql --name mysql mysql:5.7
docker run --rm -d --network my-network -e PMA_HOST=mysql phpmyadmin/phpmyadmin:latest
Notice in the commands that I've given the MySQL container the name 'mysql' and used it as the address for phpMyAdmin.
Just found out the problem.
My ufw was active on my laptop and did not explicitly allow port 3306.
I managed to get the PMA container and MySQL communicating, using 172.17.0.1, either by disabling ufw or by adding a rule that explicitly accepts port 3306.
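For reference, the explicit rule is something like this (a sketch; you may want to scope it more tightly, e.g. to the Docker bridge subnet):
sudo ufw allow 3306/tcp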
Thanks #kidustiliksew for your quick reply, and the opportunity you gave me to test user-defined networks.
Maybe it's a good idea to use docker-compose.
Create a docker-compose.yml file and declare two services inside, one web and the other db; then you can reference them through their service names (web, db).
e.g. PMA_HOST=db
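A minimal sketch of what that file could look like (the credentials, volume path, and published port are placeholders carried over from the question, not a verified setup):
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - /Docker/data/matos/mysql:/var/lib/mysql
  web:
    image: phpmyadmin/phpmyadmin:latest
    environment:
      PMA_HOST: db
    ports:
      - "8080:80"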
I've got two Docker containers that need to have a websocket connection between the two.
I run one container like this:
docker run --name comm -p 8080:8080 comm_module:latest
to expose port 8080 to the host. Then I try to run the second container like this:
docker run --name test -p 8080:8080 datalogger:latest
However, I get the error below:
docker: Error response from daemon: driver failed programming external
connectivity on endpoint test
(f06588ee059e2c4be981e3676d7e05b374b42a8491f9f45be27da55248189556):
Bind for 0.0.0.0:8080 failed: port is already allocated. ERRO[0000]
error waiting for container: context canceled
I'm not sure what to do. Should I connect these to a network? How do I run these containers?
You can't bind the same host port twice at the same time. You can change the host port for one of the containers:
docker run --name comm -p 8080:8080 comm_module:latest
docker run --name test -p 8081:8080 datalogger:latest
You may want to check the configuration inside the containers to see how they communicate.
You can also create a link between them:
docker run --name test -p 8081:8080 --link comm datalogger:latest
I finally worked it out. These are the steps involved for a two-way websocket communication between two Docker containers:
Modify the source code in the containers to use the name of the other container as the destination host address + port number (e.g. comm:port_no inside test, and vice versa).
Expose the same port (8080) in the Dockerfiles of the two containers and build the images. There is no need to publish them, as they will be visible to other containers on the network.
Create a user-defined bridge network like this:
docker network create my-net
Create my first container and attach it to the network:
docker create --name comm --network my-net comm_module:latest
Create my second container and attach it to the network:
docker create --name test --network my-net datalogger:latest
Start both containers by issuing the docker start command.
And the two-way websocket communication works nicely!
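As a quick sanity check that the two containers can resolve each other by name on my-net, something like this should work (a sketch using the stock busybox image, which is not part of the original setup):
docker run --rm --network my-net busybox ping -c 1 comm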
My solution works fine.
docker network create mynet
docker run -p 443:443 --net=mynet --ip=172.18.0.3 --hostname=frontend.foobar.com foobarfrontend
docker run -p 9999:9999 --net=mynet --ip=172.18.0.2 --hostname=backend.foobar.com foobarbackend
route /P add 172.18.0.0 MASK 255.255.0.0 10.0.75.2
The foobarfrontend container calls a wss websocket on foobarbackend on port 9999.
PS: I work with Docker on Windows 10 using Linux containers.
Have fun!
I am running Docker for Mac. When I run
docker run -d --rm --name nginx -p 80:80 nginx:1.10.3
I can access Nginx on port 80 on my Mac. When I run
docker run -d --rm --name nginx --network host -p 80:80 nginx:1.10.3
I can not.
Is it possible to use both "--network host" and publish a port so that it is reachable from my Mac?
Alternatively, can I access Nginx from my Mac via the IP of the HyperKit VM?
Without the --network flag the container is added to the default bridge network, which creates a network stack on the Docker bridge (usually via a veth interface).
If you specify --network host, the container is added to the Docker host's network stack. Note that the container will share the networking namespace of the host, with all the security implications that brings.
That means you don't need to add -p 80:80; instead run...
docker run -d --rm --name nginx --network host nginx:1.10.3
and access the container on http://127.0.0.1
The following link will help answer the HyperKit question and the current limitations:
https://docs.docker.com/docker-for-mac/networking/
There is no docker0 bridge on macOS
Because of the way networking is implemented in Docker for Mac, you
cannot see a docker0 interface in macOS. This interface is actually
within HyperKit.
OK, I am pretty new to the Docker world. So this might be a very basic question.
I have a container running in Docker, which is running RabbitMQ. Let's say the name of this container is "Rabbit-container".
RabbitMQ container was started with this command:
docker run -d -t -i --name rmq -p 5672:5672 rabbitmq:3-management
Python script command with 2 args:
python ~/Documents/myscripts/migrate_data.py amqp://rabbit:5672/ ~/Documents/queue/
Now, I am running a Python script from my host machine, which is creating some messages. I want to send these messages to my "Rabbit-container". Hence I want to connect to this container from my host machine (Mac OSX).
Is this even possible? If yes, how?
Please let me know if more details are needed.
So, I solved it by simply mapping the RMQ listening port to the host OS:
docker run -d -t -i --name rmq -p 15672:15672 -p 5672:5672 rabbitmq:3-management
I previously had only -p 15672:15672 in my command, which maps the admin UI from the Docker container to my host OS. I added -p 5672:5672, which maps the RabbitMQ listening port from the Docker container to the host OS.
If you're running this container on your local OS X system, you should find your default docker-machine IP address by running:
docker-machine ip default
Then you can change your Python script to point to that address and the mapped port: <your_docker_machine_ip>:5672.
That happens because Docker runs in a virtualization engine on OS X and Windows, so when you map a port to the host, you're actually mapping it to the virtual machine.
You'd need to run the container with port 5672 exposed, perhaps 15672 as well if you want the web UI, and 5671 if you use SSL, or any other port for which you add a TCP listener in RabbitMQ.
It would also be easier if you had a specific IP and a hostname for the rabbitmq container. To do this, you'd need to create your own Docker network:
docker network create --subnet=172.18.0.0/16 mynet123
After that, start the container like so:
docker run -d --net mynet123 --ip 172.18.0.11 --hostname rmq1 --name rmq_container_name -p 15673:15672 rabbitmq:3-management
Note that with the rabbitmq:3-management image, port 5672 is (well, was when I used it) already exposed, so there is no need to do that. --name is for the container name, and --hostname obviously for the hostname.
So now, from your host, you can connect to the rmq1 RabbitMQ server.
You said that you have never used docker-machine before, so I assume you are using the Docker Beta for Mac (you should see the Docker icon in the menu bar at the top).
Your docker run command for rabbit is correct. If you now want to connect to rabbit, you have two options:
Wrap your Python script in a new container and link it to rabbit:
docker run -it --rm --name migration --link rmq:rabbit -v ~/Documents/myscripts:/app -w /app python:3 python migrate_data.py
Note that we have to use --link rmq:rabbit, because you named your container rmq but use rabbit in the script.
Or execute your Python script on your host machine and use localhost:5672:
python ~/Documents/myscripts/migrate_data.py amqp://localhost:5672/ ~/Documents/queue/