Cannot access docker container web app by IP - docker

I'm using Linux containers on Windows and have containerized a simple web app for testing.
First I create a network with:
docker network create --subnet 192.168.15.0/24 new_network
After that I run:
docker container run -d --name web1 --publish 8080:8080 --network new_network test:latest
Inspecting the container shows that its IP is 192.168.15.2, but I cannot access it via 192.168.15.2 or 192.168.15.2:8080. However, when I use localhost:8080, it works!
Could you please show me what the problem is and how to fix it?

I think this is normal behavior on Docker Desktop for Windows: the container networks live inside Docker Desktop's Linux VM, so container IPs are not routable from the Windows host and you have to go through the published port. Please refer to the docker-for-windows and windowscontainers documentation.
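As a quick check (just a sketch, assuming your app listens on port 8080 inside the container and curl is available on the host), you can see the difference between the published port and the container IP:
docker container inspect web1 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
curl http://localhost:8080
curl http://192.168.15.2:8080
The first curl goes through the port published with --publish 8080:8080 and works; the second targets the container network inside the VM and is not reachable from the Windows host.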

Related

How to connect nifi & nifi registry using docker network

I have a container running NiFi (--name nifi) exposing port 8080 and another container running NiFi Registry (--name nifireg) exposing port 18080. I can get to both UIs, and I am able to connect NiFi to the registry in the registry services by using the registry container's IP (172.17.0.5). These containers are also on a Docker network called nifi-net. My issue is that the registry client is unable to talk to the registry when using the container name.
From the nifi container I can ping by container IP as well as by name (ping nifireg), so there is some level of connectivity. But if I change the registry client to point to http://nifireg:18080 or even http://nifi-net.nifireg:18080, it spins for a while and then eventually returns this error:
Unable to obtain listing of buckets: java.net.ConnectException: Connection refused (Connection refused)
What needs to be done to allow nifi to connect to the nifi registry using the container name?
EDIT: Here is how I set everything up:
docker run -d --name nifi -p 8080:8080 apache/nifi
docker run -d --name nifireg -p 18080:18080 apache/nifi-registry
I added the networking after the fact, but that shouldn't be an issue.
docker network create nifi-net
docker network connect nifi-net nifi
docker network connect nifi-net nifireg
I don't understand why, but destroying the containers and recreating them with the --net nifi-net option at spin-up solved the problem.
docker run -d --name nifi --net nifi-net -p 8080:8080 apache/nifi
docker run -d --name nifireg --net nifi-net -p 18080:18080 apache/nifi-registry
The docs state that you can add them to a network after the fact, and I am able to ping from one container to the other using the name. I guess it's just a lesson that I need to use docker networking more.
I would suggest using docker-compose to manage the deployment, since you can define the network once in docker-compose.yaml and not have to worry about it again.
Plus it lets you learn about docker networking :P
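For example, a minimal setup could look roughly like this (a sketch only; the image tags and any NiFi-specific configuration are assumptions, but the service names nifi and nifireg become resolvable hostnames on the default network Compose creates):
cat > docker-compose.yaml <<'EOF'
version: "3"
services:
  nifi:
    image: apache/nifi
    ports:
      - "8080:8080"
  nifireg:
    image: apache/nifi-registry
    ports:
      - "18080:18080"
EOF
docker-compose up -d
The registry client URL inside NiFi would then be http://nifireg:18080.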

Why is my docker container not able to access another container?

I have 3 Docker applications (containers), of which one container communicates with the other 2 containers. If I run the containers using the commands below, container 3 is able to access container 1 and container 2.
docker run -d --network="host" --env-file container1.txt -p 8001:8080 img1:latest
docker run -d --network="host" --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --network="host" --env-file container3.txt -p 8000:8080 img3:latest
But this works only with the host network. If I remove the --network="host" option, then I am not able to access the application from outside (in a web browser). In order to access it from outside, I need to make the host port and container port the same, as below.
docker run -d --env-file container1.txt -p 8001:8001 img1:latest
docker run -d --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --env-file container3.txt -p 8000:8000 img3:latest
With the above commands I am able to access my application in a web browser, but container 3 is not able to communicate with container 1. Container 3 can access container 2 because there I am publishing host port 8080 to container port 8080, but I can't publish host port 8080 again for container 3.
How can I resolve this issue?
Ultimately, my goal is for the application to be accessible in a browser without using the host network; it should use a bridge network, and container 3 needs to communicate with containers 1 and 2.
On user-defined networks, containers can not only communicate by IP address but can also resolve a container name to an IP address. This capability is called automatic service discovery.
Read this for more details on Docker container networking.
You can perform the following steps to achieve the desired result.
Create a private bridge network.
docker network create --driver bridge private-net
Now start your application containers with --network private-net added to your docker run commands.
docker run -d --env-file container1.txt -p 8001:8001 --network private-net img1:latest
docker run -d --env-file container2.txt -p 8080:8080 --network private-net img2:latest
docker run -d --env-file container3.txt -p 8000:8000 --network private-net img3:latest
This way, all three containers will be able to communicate with each other and also with the internet.
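For example (a sketch only: the --name values and the curl call are assumptions, since the original commands don't name the containers, and curl must exist inside img3), container 3 can then reach container 1 by container name and container port instead of localhost:
docker run -d --name app1 --env-file container1.txt -p 8001:8001 --network private-net img1:latest
docker run -d --name app3 --env-file container3.txt -p 8000:8000 --network private-net img3:latest
docker exec app3 curl http://app1:8001
Note that container-to-container traffic uses the container port (8001 here), not the published host port.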
In this case, when you use --network=host, you are telling Docker not to isolate the network but to use the host's network stack. All the containers are then on the same network, hence they can communicate with each other without any issues. However, when you remove --network=host, Docker isolates the networks again, thereby preventing container 3 from communicating with container 1.
You will need some sort of orchestration service like Docker Compose, Docker Swarm, etc.

Dockerized app needs to interact with other containers over localhost

I have an app that launches a Docker container and automates a few of its routines.
Now I have dockerized this app, and it is not able to talk to other containers over localhost. I tried setting
--network host
when launching the container, and now I am not able to access the containerized web app over localhost:<port>.
Any pointers?
localhost won't work. Suppose you are running a VM and try to talk to your host or to other VMs running on your machine. If you call localhost from one of the VMs, it is localhost for that VM only, not for your host. So you won't be able to talk from one VM to another by calling localhost. Docker works the same way with regard to localhost. You have two options:
Use a network
If you go with a network, create one and add all the containers to it. This is the approach Docker now recommends.
docker network create <your-network-name>
docker run --network <your-network-name> --name <container-name1> <image>
docker run --network <your-network-name> --name <container-name2> <image>
Then use the container name (container-name1) to talk to that service from the other service (container-name2), and vice versa.
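As a quick sanity check (a sketch with the same placeholders as above, assuming ping is available in the image), you can verify that the name resolves from the other container:
docker exec <container-name2> ping -c 1 <container-name1>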
Use --link option
Or you could use the --link option, which is a legacy Docker feature. The Docker docs say that unless you have a specific reason to use it, don't use --link anymore.
docker run --name <container1> <image>
docker run --name <container2> --link <container1> <image>
You can then use container1 as a hostname from container2 (note that links are one-directional). You could use this container name in places like the DB host, etc.
Did you try creating a common bridge network and attaching your containers to the same network?
Create the network:
docker network create networkname
Then add the switch --network=networkname to your docker run commands.
I figured it out later after going over a lot of other documentation.
Step 1: install Docker inside the container. I added the following line to my Dockerfile:
RUN curl -sSL https://get.docker.com/ | sh
Step 2: provide this volume mapping in the docker run command:
-v /var/run/docker.sock:/var/run/docker.sock
Now the host's docker commands are accessible from within my container, and without changing the --network of the current container I'm able to access the other containers over localhost.
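Put together, the run command looks roughly like this (a sketch; myapp and myapp:latest are hypothetical names, and the image must already contain the Docker CLI installed in Step 1):
docker run -d --name myapp -v /var/run/docker.sock:/var/run/docker.sock myapp:latest
docker exec myapp docker ps
The docker ps inside the container talks to the host's daemon through the mounted socket, which is why containers launched this way behave as if they were started directly on the host.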

How to set docker back to default configuration

I have installed docker-compose and used it a little, then decided I did not need it. Now when I create containers by hand, they are assigned a network with an IP address, gateway and other settings. When I inspected older containers created before I installed docker-compose, they did not have these network settings.
I have tried uninstalling docker-compose and reinstalling Docker, which did not work. Is there anything I can do? The reason I am asking is that I can't link containers together, because every new container is assigned an IP address and other network settings.
Docker always does that; it has nothing to do with Compose. Compose doesn't modify your Docker installation in any way; it purely connects to the daemon to run commands under the hood.
By linking containers together I'm assuming you mean just so they can communicate with each other? --link has been deprecated for some time now in favor of docker network .... Try the following:
$ docker network create test-net
$ docker run -d --name c1 --net test-net alpine:3.3 sleep 20000
$ docker run -it --name c2 --net test-net alpine:3.3 ping c1

How do I create a Docker container with a hostname that another container can use?

So I've created a container for RabbitMQ with the following command.
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3-management
That works fine, but I then created another container with the following command:
docker run test .
This runs a container with a PHP file that tries to connect to the hostname my-rabbit, but it can't find the host, so the PHP script exits right away. I did, however, find the IP of my-rabbit (the first container), and when I replace the hostname (my-rabbit) in my PHP code with that IP, it connects with no problems.
So how do I create a hostname for the RabbitMQ container that all the other containers on the same network can see and use instead of an IP?
I found the answer after making this post: use the --link argument with docker run.
https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/#communication-across-links
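For the containers above, that looks roughly like this (a sketch only; test is the PHP image from the question, and the some-rabbit:my-rabbit alias keeps the my-rabbit hostname the PHP code already expects):
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3-management
docker run --link some-rabbit:my-rabbit test
Note that --link is legacy; putting both containers on a user-defined network (docker network create, as in the other answers here) gives the same name resolution and is the recommended approach.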
