I'm trying to migrate from multiple VMs with static IPs to a container-based solution.
Right now I'm using VMs with static IPs:
I can ping and telnet my VMs: telnet 10.48.0.10 5432 and telnet 10.48.0.11 5432.
I want to create a single Docker host that allows me to do the same:
It would be great if I could telnet 172.17.0.2 5432 and telnet 172.17.0.3 5432.
I'm trying to do it via Docker because I want to manage the configuration.
What would be the proper way to do this?
Should I use a TCP proxy inside a container to manage this?
The solution is pretty simple.
Create a network and bind it to the host:
docker network create --subnet=10.0.0.0/24 -o "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" mynet
Then run a container on the mynet network:
docker run -ti --net=mynet --ip=10.0.0.30 busybox
Now, from another computer, if you add a route to your Docker host (192.168.2.156) for this subnet:
sudo route add -net 10.0.0.0 netmask 255.255.255.0 gw 192.168.2.156
you can ping your container (ping 10.0.0.30).
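To tie this back to the question above, a minimal sketch might look like the following (the postgres image, the container addresses and the password are assumptions chosen for illustration; the network is the mynet created above):
docker run -d --name pg1 --net=mynet --ip=10.0.0.10 -e POSTGRES_PASSWORD=example postgres
docker run -d --name pg2 --net=mynet --ip=10.0.0.11 -e POSTGRES_PASSWORD=example postgres
# from the machine where the route was added, each container answers on its own address
telnet 10.0.0.10 5432
telnet 10.0.0.11 5432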
If you want to access the containers from your host, or from any other server that can reach your host, you will need to map each container to a different port on the host:
docker run -d -p 54321:5432 my_app
docker run -d -p 54322:5432 my_app
Then you will be able to telnet 10.200.0.1 54321 and telnet 10.200.0.1 54322 (10.200.0.1 being the host's address).
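To double-check which host port sits in front of which container, something like this should work (replace NAME with the container name shown by docker ps):
# list containers together with their port mappings
docker ps --format '{{.Names}}\t{{.Ports}}'
# or ask for one container's mappings directly
docker port NAME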
Related
I have three instances of the same containerized app running on Docker, so their ports are the same for all of them. I can access one of them using port forwarding at localhost:8080, but when I try to do the same thing for the other ones I get an error. So I think I somehow need to access each instance at a different IP address to connect to them from my Docker host. How can I do that?
I figured out how to do it. In case someone else wants to achieve this behaviour, I'm writing the solution down here.
As an example, say you have three identical containers from the couchbase image and you want to connect them in the Couchbase UI as if each of them were a separate node.
1- First, add a loopback interface alias for each container you want to deploy. Run the commands below in a terminal:
sudo ifconfig lo0 alias 172.18.0.2 netmask 0xff000000
sudo ifconfig lo0 alias 172.18.0.3 netmask 0xff000000
sudo ifconfig lo0 alias 172.18.0.4 netmask 0xff000000
2- The IP addresses above will be the static container IP addresses for each instance. For that, you need to create a Docker network:
docker network create -d bridge my-network --gateway 172.18.0.1 --subnet 172.18.0.0/24
3- Create the containers from the couchbase image:
docker run -d --name cb1 --network my-network --ip 172.18.0.2 -p 172.18.0.2:8091-8096:8091-8096 -p 172.18.0.2:11210-11211:11210-11211 couchbase
docker run -d --name cb2 --network my-network --ip 172.18.0.3 -p 172.18.0.3:8091-8096:8091-8096 -p 172.18.0.3:11210-11211:11210-11211 couchbase
docker run -d --name cb3 --network my-network --ip 172.18.0.4 -p 172.18.0.4:8091-8096:8091-8096 -p 172.18.0.4:11210-11211:11210-11211 couchbase
4- Then you can open one of the Couchbase UIs in the browser and join the other two containers to the cluster. For example, open 172.18.0.2:8091 in the browser and add the 172.18.0.3 and 172.18.0.4 containers.
5- I needed this setup for the Go SDK, so for Go you can use the "couchbase://172.18.0.2" connection string to connect to your cluster.
Note: These IP addresses were chosen arbitrarily; you can assign whatever you want. A quick check of the setup is sketched below.
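As a quick sanity check (assuming the three containers above are running), each loopback alias should answer on the Couchbase web console port:
curl -I http://172.18.0.2:8091
curl -I http://172.18.0.3:8091
curl -I http://172.18.0.4:8091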
I have JupyterHub running in a container with network_mode: host due to a requirement.
However, after setting network_mode to host in my docker-compose file, I can't access JupyterHub from an external host using the host IP on port 8000.
My understanding from this is:
If you use the host network mode for a container, that container's network stack is not isolated from the Docker host (the container shares the host's networking namespace), and the container does not get its own IP address allocated. For instance, if you run a container which binds to port 80 and you use host networking, the container's application is available on port 80 on the host's IP address.
Is there anything I am missing?
EDIT:
To simplify, I followed the instructions here:
docker run --rm -d --network host --name my_nginx nginx
I can access the nginx welcome page by doing:
$ curl localhost:80
but if I try to curl from another host I get:
$ curl 10.230.0.123:80
curl: (7) Failed connect to 10.230.0.123:80; No route to host
This issue can happen when the firewall on your system is active and is blocking access to the port. You can allow access to the port as shown below:
# in centos7, by updating iptables rules
iptables -I INPUT 5 -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
# in ubuntu
sudo ufw allow 80/tcp
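To confirm the change took effect, it can help to verify that something is listening on the port and that the rule is in place (interface and port numbers here mirror the example above and may differ on your system):
# is anything listening on port 80 in the host network namespace?
sudo ss -tlnp | grep ':80 '
# CentOS: list INPUT rules with their positions
sudo iptables -L INPUT -n --line-numbers
# Ubuntu: review the ufw rules
sudo ufw status verbose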
I'm stuck on port mapping in Docker.
I want to map port 8090 on the outside of a container to port 80 on the inside of the container.
Here is the container running:
ea41c430105d tag-xx "/usr/local/openrest…" 4 minutes ago Up 4 minutes 8090/tcp, 0.0.0.0:8090->80/tcp web
Notice that it says port 8090 is mapped to port 80.
Now, inside another container, I do:
curl web
I get a 401 response, which means the container responds. So far so good.
But when I do curl web:8090 I get:
curl: (7) Failed to connect to web port 8090: Connection refused
Why is port mapping not working for me?
Thanks
P.S. I know it's specifically my container that responds to curl web with a 401 because when I stop it (docker stop web) and do curl web again, I get could not resolve host: web.
You cannot connect to a published port from inside another container, because published ports are only available on the host. In your case:
From the host:
curl localhost:8090 will connect to your container
curl localhost:80 won't connect to your container because the port isn't published
From another container on the same network:
curl web will work
curl web:8090 won't work because the only port the web service exposes and listens on is 80.
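A quick way to see both behaviours side by side, assuming a second container named other on the same network with curl installed (both assumptions for illustration):
# from the host: the published port works
curl -I http://localhost:8090
# from the other container: use the service port, not the published one
docker exec other curl -I http://web
docker exec other curl -I http://web:8090    # this is the call that fails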
Unless specified otherwise, Docker containers connect to the default bridge network, which does not support automatic DNS resolution between containers. It looks like you are most likely on the default bridge network. On the default bridge network, however, you can still connect using the container's IP address, which can be found with the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container name>
So, curl <IP Address of web container>:8090 should work.
It is always better to create a user-defined bridge network and attach the containers to it. On a user-defined bridge network, the connected containers have all their ports exposed to each other but not to the outside world. A user-defined bridge network also supports automatic DNS resolution, so you can refer to a container by name instead of by IP address. You could try the following commands to create a user-defined bridge network and attach your containers to it:
docker network create --driver bridge my-net
docker network connect my-net web
docker network connect my-net <other container name>
Now, from the other container you should be able to run curl on the 'web' container.
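To verify that both containers really are attached before testing, you can list the members of the network from the host:
# print the names of the containers connected to my-net
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' my-net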
You can create a network to connect the containers.
Or you can use --link:
docker run --name container1 -p 80:???? -d image    # published on port 80
docker run --name container2 --link container1:lcontainer1
and inside container2 you can use:
curl lcontainer1
Hope it helps
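For completeness, --link is a legacy feature (user-defined networks are preferred), but a runnable sketch of the idea could look like this (nginx and busybox are just stand-ins for the images):
docker run -d --name container1 nginx
# the alias lcontainer1 resolves inside the second container
docker run --rm --link container1:lcontainer1 busybox wget -qO- http://lcontainer1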
I've installed Docker in a VM which is publicly available on the internet, and I've installed MongoDB in a Docker container in that VM. MongoDB is listening on port 27017.
I installed it using the following command:
docker run -p 27017:27017 --name da-mongo -v ~/mongo-data:/data/db -d mongo
The port from the container is redirected to the host using the -p flag, but that means port 27017 is now exposed on the internet. I don't want that to happen.
Is there any way to fix it?
Well, if you want it available only to certain hosts, then you need a firewall. But if all you need is for it to work on localhost (your VM), then you don't need to expose/bind the port to the host at all. I suggest you run the container without the -p option and then run the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' your_container_id_or_name
It will display an IP address: the IP of the container you've just run (yes, Docker uses an internal virtual network connecting your containers and your host machine).
You can then connect to it using the IP and port combination, something like:
172.17.0.2:27017
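From the VM itself you can then point a client at that address, for example (assuming the mongo shell is installed on the VM):
mongo --host 172.17.0.2 --port 27017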
When you publish the port, you can select which host interface to publish on:
docker run -p 127.0.0.1:27017:27017 --name da-mongo \
-v ~/mongo-data:/data/db -d mongo
That will publish container port 27017 on port 27017 of the host's 127.0.0.1 interface. Note that you can only restrict the interface on the host side of the mapping; the container itself must still bind to 0.0.0.0.
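You can confirm the binding afterwards; the mapping should show only the loopback interface, roughly like this:
docker port da-mongo
# 27017/tcp -> 127.0.0.1:27017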
Steps to reproduce:
HOST: docker run -t -i -p 22:1200 myimage /bin/bash
GUEST: bash# service sshd start
HOST: docker ps -l
HOST: docker inspect -f '{{ .NetworkSettings.IPAddress }}' container_id
HOST: ssh -p 1200 root@container_ip_from_previous_command
RESULT: Can't access mapped port (can't connect to sshd running inside docker container)
My host computer is running Ubuntu 14.04 64bit. Docker version 1.1.2, build d84a070
From inside the Docker container I can connect to sshd on localhost port 22. I've also tried with ufw disabled (forwarding enabled); same results.
As mentioned in my comment, the -p switch on docker run only routes a port from a host interface to the Docker interface. For example, you have two interfaces, docker0 and eth0, and you can route port 1200 on eth0 to port 22 on docker0.
If you're connecting via the private interface docker0, there is no need to route external ports and you can just connect on port 22 (to the IP provided by docker inspect).
Lastly, you have the port order wrong (assuming SSH is on port 22 in the container). You should use -p external:internal, where external is the port you want to expose to the world and internal is the relevant port within the container.
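In other words, with sshd listening on port 22 inside the container, the run command from the question would become something like:
docker run -t -i -p 1200:22 myimage /bin/bash
# from another machine, connect to the host's address on the external port
ssh -p 1200 root@docker_host_ip
# or, from the host itself, go straight to the container IP on port 22
ssh root@container_ip_from_docker_inspect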