I've a simple web app running in a Docker container that makes a DB connection to Couchbase.
My Couchbase is currently running on the VM's localhost (not in another container).
I tried issuing this command:
docker run --net=host -p 8081:8081 {**image-name-one**} // This connects without issue
Now I need another instance of the same app on a different port. For that, I created a bridge network with gateway IP 192.168.0.1 and then modified the connection string to use the network IP:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 test
Then I tried running the 2nd container with the port mapping below:
docker run --net=test -p 8083:8081 {**2nd-image-name**} // This will never connect to the database
Any insight would be greatly appreciated.
I'm using Ubuntu 16.04.
I found a workaround: adding the subnet to my firewall rules to allow connections to any port.
Now my services can connect to Couchbase.
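For anyone else hitting this: assuming ufw on Ubuntu 16.04 and the 192.168.0.0/24 subnet created above, the workaround looks roughly like this (a sketch, adjust the subnet to your own):
sudo ufw allow from 192.168.0.0/24   # let the bridge subnet reach any port on the host, including Couchbase's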
I've got the following two containers:
mysql - docker run -d -e MYSQL_ROOT_PASSWORD=secretpassword -p 3306:3306 --restart=unless-stopped -v /var/lib/mysql:/var/lib/mysql --name mysql mysql:8.0.29
and springapp with a Spring Boot app that tries to connect to it:
spring.datasource.url=jdbc:mysql://<HOST_IP_ADDRESS>:3306/databaseschema
This ends up with an error message:
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
I am able to connect to MySQL using MySQL Workbench from my PC without any issues. I was able to fix this connection issue in two ways:
adding both containers to the same network and using mysql container name
adding spring app to the host network
My question is: why is the connection not possible? I thought that if I can connect to the MySQL instance from the "external" world, then it should also be possible from the container. Does Docker somehow differentiate when a port is managed by a Docker container, and restrict access to it from other containers?
By using -p 3306:3306 with your MySQL container you've exposed port 3306 to your host machine, which in turn (effectively) exposes it to other machines on the host's network. It doesn't expose it to other containers running on the same machine, because containers are meant to be isolated.
Of course you can effectively disable this behaviour by running your spring app container with --network=host.
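For example, roughly (springapp-image is a placeholder for your Spring Boot image):
docker run -d --network=host springapp-image   # note: -p mappings are ignored with host networking
With host networking the app shares the host's network stack, so jdbc:mysql://127.0.0.1:3306/databaseschema will reach the MySQL port you published on the host.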
You can try using the name of the container, for example:
spring.datasource.url=jdbc:mysql://mysql:3306/databaseschema
If it doesn't work, try using the same network for both containers.
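A minimal sketch of that, assuming a network called appnet and a placeholder image name springapp-image (keep your existing MySQL options):
docker network create appnet
docker run -d --net appnet --name mysql -e MYSQL_ROOT_PASSWORD=secretpassword -p 3306:3306 -v /var/lib/mysql:/var/lib/mysql mysql:8.0.29
docker run -d --net appnet --name springapp springapp-image
Then the datasource URL can use the container name: spring.datasource.url=jdbc:mysql://mysql:3306/databaseschema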
When I run docker run --rm -it redis, the container receives the IP 172.18.0.2. Then from the host I connect to the container with the following command: redis-cli -h 172.18.0.2, and it connects normally, everything works, keys get added. Why does this happen without port forwarding? The default Docker network is bridge.
docker run --rm -it redis will not publish the port. Try stopping the Redis container, then run redis-cli -h 172.18.0.2 to check whether another Redis instance exists.
It is only possible because you're on native Linux, and the way Docker networking is implemented, it happens to be possible to directly connect to the container-private IP addresses from outside Docker.
This doesn't work in a wide variety of common situations (on MacOS or Windows hosts; if Docker is actually running in a VM; if you're making the call from a different host) and the IP address you get can change if the container is recreated. As such it's not usually a best practice to look up the container-private IP address. Use docker run -p to publish a port, and connect to that published port and the host's IP address.
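For example, with Redis that would look like this (publishing on the default port; use the host's own address from other machines):
docker run -d --name some-redis -p 6379:6379 redis
redis-cli -h 127.0.0.1 -p 6379   # or redis-cli -h <host-lan-ip> -p 6379 from another machine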
It's because the Redis Dockerfile exposes the right port for the API, which is 6379.
TL;DR: I just want a way to forward traffic destined for localhost to the host, without using --net=host.
I'm running multiple containers on the same host, and need them to access an instance of Redis that's available at localhost:6379. Also, I need to use port forwarding, so using --net=host is not an option.
How can I start multiple containers and allow all of them to forward traffic destined for localhost to the host?
I have also tried docker run --add-host localhost:<private ip address> -p <somehostport>:<somecontainerport> my_image, with no success (I still get that the connection to 127.0.0.1:6379 is refused, as if localhost were not resolved to the host's private IP).
I'm running multiple containers on the same host, and need them to access an instance of Redis that's available at localhost:6379.
You can't.
If something is listening only on localhost, then you can't connect to it from another computer, from a virtual machine, or from a container. However, if your service is listening on any other address on your host, you can simply point your containers at that address.
One solution is to configure your Redis service to listen on the address of the docker0 bridge, and then point your containers at that address.
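A sketch of that, assuming the usual docker0 address of 172.17.0.1 (check yours with ip addr show docker0) and a host-installed Redis configured via /etc/redis/redis.conf:
bind 127.0.0.1 172.17.0.1   # keep loopback and add the docker0 bridge address
After restarting Redis, containers on the default bridge can reach it with redis-cli -h 172.17.0.1 -p 6379.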
This is better solved by a small redesign. Move redis into a container. Connect containers via container networking, and publish the redis port to localhost for anything that still isn't in a container. E.g.
docker network create redis
docker run -d --net redis -p 127.0.0.1:6379:6379 --name redis redis
docker run -d --net redis -e REDIS_URL=redis:6379 your_app
Containers need to communicate by container name over a user-created network, so your app will need to be configured with the new Redis URL (changing localhost to redis).
The only other solution I've seen for this involves hacking iptables rules, which isn't very stable when containers get redeployed.
I am trying to run a small test server with MS SQL Server running on a Mac in a Linux docker container. Maybe I have the terminology wrong so please correct me if necessary:
host - the macOS desktop with docker installed (ip 10.0.1.73)
container - the Linux instance running in the docker container with SQL Server running in it
remote desktop - another computer on the local area network trying to connect to SQL Server
I followed the MS installation instructions and everything seems to be running fine, except that I can't connect to SQL Server from the remote desktop.
I can connect to the docker host(10.0.1.73) and can ping the IP address
I can connect to SQL Server from the docker host and see the databases etc.
I used the following command to create the docker container
sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<XXXXXX>" -p 1433:1433 --name sqlserver1 -d microsoft/mssql-server-linux:2017-latest
I was thinking that -p 1433:1433 would map the Linux port to the macOS host port and allow the remote computer to access the docker container by connecting to that port on the macOS host from the local area network.
This is not working, and I assume it may have to do with the network routing on the macOS host.
Most solutions I have seen seem to indicate that one should use the VirtualBox UI to modify the network settings - but I don't have that installed
The others seem to have pages and pages of command line instructions that are required
Is there an easy solution somewhere I have missed?
EDIT:
After some more research I found this explanation of how Docker networking is set up for single-host networking by default. It's a good explanation for anyone else struggling with the Docker concepts.
It is also worth reading up about the differences between docker containers and virtual machines...
https://youtu.be/Js_140tDlVI
I'm still trying to find a good explanation of multi-host networking.
Try disabling the firewall on the host you want to connect to.
Port 1433 will be forwarded to the Docker container, but your host (the Mac) needs port 1433 open so that the remote machine can connect to it.
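A quick way to verify this from the remote desktop, assuming nc is available and 10.0.1.73 is the Mac's address as above:
nc -vz 10.0.1.73 1433   # succeeds only if the port is reachable through the Mac's firewall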
Using NAT:
Assign the target address to your host interface:
sudo ifconfig en1 alias 10.0.1.74/21 up
Create the docker container and map the port to the second IP address assigned to the host interface
sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<XXXXXXXXX>" -p 10.0.1.74:1433:1433 --name sqlserver1 -d microsoft/mssql-server-linux:2017-latest
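If that works, a client on the LAN should be able to reach the instance at the aliased address, for example with sqlcmd (assuming it's installed on the remote machine; the sa password is the one set above):
sqlcmd -S 10.0.1.74,1433 -U sa -P '<XXXXXXXXX>'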
I'm new to Docker and maybe this is something I don't fully understand yet, but what I'm trying to do is connect to an open port in a running Docker container. I've pulled and run the rabbitmq container from Docker Hub (https://hub.docker.com/_/rabbitmq/). The rabbitmq container uses port 5672 for clients to connect to.
After running the container (as instructed in the hub page):
$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
Now what I want to do is telnet into the open port (this is possible on a regular RabbitMQ installation and should be on a container as well).
I have (at least I think I have) gotten the container's IP address using the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
And the result I got was 172.17.0.2. When I try to access using telnet 172.17.0.2 5672 it's unsuccessful.
The address 172.17.0.2 seems strange to me because if I run ipconfig on my machine I don't see any interface using a 172.17.0.x address. I do see an Ethernet adapter vEthernet (DockerNAT) using the following IP: 10.0.75.1. Is this how it is supposed to be?
If I do port binding (adding -p 5672:5672), then I can telnet into this port using telnet localhost 5672 and connect immediately.
What am I missing here?
As you pointed out, you need port binding in order to achieve the result you need, because you are running the application over the default bridge network (on Windows, I guess).
From the official Docker docs:
Containers connected to the same user-defined bridge network automatically expose all ports to each other, and no ports to the outside world. [...]
If you run the same application stack on the default bridge network, you need to open both the web port and the database port, using the -p or --publish flag for each. This means the Docker host needs to block access to the database port by other means.
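Applied to your case, that means publishing the AMQP port when you start the container, roughly like this (reusing the names from your command):
docker run -d --hostname my-rabbit --name some-rabbit -p 5672:5672 rabbitmq:3
telnet localhost 5672   # should now connect, as you observed with the explicit port binding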
Later on the rabbitmq hub page there is a reference to the Management Plugin, which is run by executing the command:
docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 rabbitmq:3-management
This publishes port 8080 for the management UI, which I think may be what you need.
You should also notice that they talk about clusters and nodes there; maybe they meant the container to be run as a service in a swarm (hence using the overlay network and not the bridge one).
Hope I could help somehow :)