Docker containers connection issue

I have two containers. One of them is my application and the other is Elasticsearch 5.5.3. My application needs to connect to the ES container, but I always get "Connection refused".
I run my application with a static port:
docker run -i -p 9000:9000 .....
I run ES with a static port:
docker run -i -p 9200:9200 .....
How can I connect them?

You need to link the two containers by using --link.
Start your ES container with the name es:
$ docker run --name es -d -p 9200:9200 .....
Start your application container using --link:
$ docker run --name app --link es:es -d -p 9000:9000 .....
That's all. You should be able to reach the ES container using the hostname es from the application container (i.e. app).
Try curl -I http://es:9200/ from inside the application container, and you should be able to reach the ES service running in the es container.
Ref - https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/#communication-across-links

I suggest one of the following:
1) Use Docker links to connect your containers.
2) Use docker-compose to run your containers.
Solution 1 is considered deprecated, but it is perhaps the easier one to get started with.
First, run your Elasticsearch container, giving it a name with the --name=<your chosen name> flag.
Then, run your application container, adding --link <your chosen name>:<your chosen name>.
You can then use <your chosen name> as the hostname to connect from the application to Elasticsearch, as in the sketch below.
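For example, a minimal sketch of solution 1 (the application image name myapp is an assumption; the ES image and ports are taken from the question):
docker run --name elasticsearch -d -p 9200:9200 elasticsearch:5.5.3
docker run --name app --link elasticsearch:elasticsearch -d -p 9000:9000 myapp
The application can then reach Elasticsearch at http://elasticsearch:9200.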

Do you have a --network set on your containers? If they are both on the same --network, they can talk to each other over that network. So in the example below, the myapplication container would reference http://elasticsearch:9200 in its connection string to post to Elasticsearch.
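If the network does not exist yet, create it first:
docker network create my_network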
docker run --name elasticsearch -p 9200:9200 --network=my_network -d elasticsearch:5.5.3
docker run --name myapplication --network=my_network -d myapplication
Learn more about Docker networks here: https://docs.docker.com/engine/userguide/networking/

Related

Docker container for portainer not exposing 9000 to host

I am trying to create the container by running:
docker run -d -p 8000:8000 -p 9000:9000 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:2.9.3
The container is created, but I am not able to access the Portainer UI page at localhost:9000.
As you can see, the container is restarting. When everything is fine, the status will be running instead of restarting. It's possible that the port is busy and so can't be exposed, so you should check the port first; if the port is available, you should then check the image to verify that it is working properly. It's better to use a Dockerfile to make sure that your configuration is correct.
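For example, a few commands to check both (a Linux host is assumed; on older systems netstat -tlnp can stand in for ss):
docker ps -a               # is the container running or restarting?
docker logs portainer      # why does it keep restarting?
ss -ltn | grep 9000        # is host port 9000 already in use?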

Why docker container is not able to access other container?

I have 3 Docker applications (containers), in which one container communicates with the other 2. If I run the containers using the commands below, container 3 is able to access containers 1 and 2.
docker run -d --network="host" --env-file container1.txt -p 8001:8080 img1:latest
docker run -d --network="host" --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --network="host" --env-file container3.txt -p 8000:8080 img3:latest
But this works only with the host network; if I remove the --network="host" option, then I am not able to access the application from outside (in a web browser). In order to access it from outside, I need to make the host and container ports the same, as below.
docker run -d --env-file container1.txt -p 8001:8001 img1:latest
docker run -d --env-file container2.txt -p 8080:8080 img2:latest
docker run -d --env-file container3.txt -p 8000:8000 img3:latest
With the above commands I am able to access my application in a web browser, but container 3 is not able to communicate with container 1. Container 3 can access container 2 because there I am exposing 8080 as both the host and container port, but I can't expose host port 8080 a second time for container 3.
How can I resolve this issue?
Ultimately, my goal is for the application to be accessible in a browser without using the host network; it should use the bridge network, and container 3 needs to communicate with containers 1 and 2.
On user-defined networks, containers can not only communicate by IP address but can also resolve a container name to an IP address. This capability is called automatic service discovery.
Read this for more details on Docker container networking.
You can perform the following steps to achieve the desired result.
Create a private bridge network.
docker network create --driver bridge private-net
Now start your application containers, adding --network private-net to your docker run commands, and give each one a name so the others can resolve it:
docker run -d --name container1 --env-file container1.txt -p 8001:8001 --network private-net img1:latest
docker run -d --name container2 --env-file container2.txt -p 8080:8080 --network private-net img2:latest
docker run -d --name container3 --env-file container3.txt -p 8000:8000 --network private-net img3:latest
This way, all three containers will be able to communicate with each other and also with the internet.
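For example, container 3 can then reach container 1 by name (assuming img1's service listens on port 8001 inside the container and curl is available in img3):
docker exec -it container3 curl http://container1:8001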
In this case, when you use --network=host, you are telling Docker not to isolate the network but to use the host's network instead. So all the containers share the host's network and can communicate with each other without any issues. However, when you remove --network=host, Docker isolates each container's network, thereby preventing container 3 from communicating with container 1 via localhost.
You will need some sort of orchestration tool such as Docker Compose or Docker Swarm; see the sketch below.
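For example, a minimal docker-compose.yml sketch (image names, env files, and ports taken from the question; Compose places all services on a shared default network where they can reach each other by service name):
version: "3"
services:
  container1:
    image: img1:latest
    env_file: container1.txt
    ports:
      - "8001:8001"
  container2:
    image: img2:latest
    env_file: container2.txt
    ports:
      - "8080:8080"
  container3:
    image: img3:latest
    env_file: container3.txt
    ports:
      - "8000:8000"
Then docker-compose up -d starts all three, and container 3 can reach container 1 at http://container1:8001.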

Connect application to database when they are in separate docker containers

Well, the setup is simple: there should be two containers, one for the MySQL database and the other for the web application.
What I do to run the containers, the first one for the database and the second for the app:
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d mysql
docker run -p 8081:8081 myrepo/myapp
The application tries to connect to the database using localhost:3306, but as I found out, the issue is that each container has its own localhost.
One of the solutions I found was to put the containers on the same network using --net, so the Docker commands became the following:
docker network create my-network
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d --net my-network mysql
docker run --net my-network -p 8081:8081 myrepo/myapp
Still, the web application is not able to connect to the database. What am I doing wrong, and what is the proper way to connect the application to the database when they are both inside containers?
You could use the name of the container (i.e. mysql-container) to connect to mysql. Example:
Run the mysql container:
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d --net my-network mysql
Connect from another container using the mysql client:
docker run --net my-network -it mysql mysql -u root -p db -h mysql-container
In your application's database URL, replace whatever IP address you have with mysql-container.
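For example, if the application happens to use JDBC (an assumption; the database name db comes from the MYSQL_DATABASE variable above), the URL would look like:
jdbc:mysql://mysql-container:3306/db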
Well, after additional research, I successfully managed to connect to the database.
The approach I used is the following:
On my host I grabbed the IP address of the docker0 bridge itself, not of a specific container:
sudo ip addr show | grep docker0
I added the IP address of docker0 to the database connection URL inside my application, and thus the application managed to connect to the database (note: with this flow I don't add the --net option when starting the containers).
What is definitely strange is that even adding a shared network like --net=my-network for both containers didn't work. Moreover, I tried to use --net=host to share the host's network with the containers, and still it was unsuccessful. If anyone can explain why it didn't work, please share your knowledge.

How to create websocket connection between two Docker containers

I've got two Docker containers that need to have a websocket connection between the two.
I run one container like this:
docker run --name comm -p 8080:8080 comm_module:latest
to expose port 8080 to the host. Then I try to run the second container like this:
docker run --name test -p 8080:8080 datalogger:latest
However, I get the error below:
docker: Error response from daemon: driver failed programming external
connectivity on endpoint test
(f06588ee059e2c4be981e3676d7e05b374b42a8491f9f45be27da55248189556):
Bind for 0.0.0.0:8080 failed: port is already allocated. ERRO[0000]
error waiting for container: context canceled
I'm not sure what to do. Should I connect these to a network? How do I run these containers?
You can't bind the same host port twice at the same time. You can change the host port on one of the containers:
docker run --name comm -p 8080:8080 comm_module:latest
docker run --name test -p 8081:8080 datalogger:latest
You may want to check the containers' configuration for how they communicate with each other.
You can also create a link between them:
docker run --name test -p 8081:8080 --link comm datalogger:latest
I finally worked it out. These are the steps involved for a two-way websocket communication between two Docker containers:
Modify the source code in the containers to use the name of the other container as the destination host address + port number (e.g. comm:port_no inside test, and vice versa).
Expose the same port (8080) in the Dockerfiles of the two containers and build the images. There is no need to publish them, as they will be visible to other containers on the network.
Create a user-defined bridge network like this:
docker network create my-net
Create my first container and attach it to the network:
docker create --name comm --network my-net comm_module:latest
Create my second container and attach it to the network:
docker create --name test --network my-net datalogger:latest
Start both containers by issuing the docker start command.
And the two-way websocket communication works nicely!
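Put together, the sequence looks like this (with EXPOSE 8080 in both Dockerfiles, as described above):
docker network create my-net
docker create --name comm --network my-net comm_module:latest
docker create --name test --network my-net datalogger:latest
docker start comm
docker start test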
My solution works fine.
docker network create mynet
docker run -p 443:443 --net=mynet --ip=172.18.0.3 --hostname=frontend.foobar.com foobarfrontend
docker run -p 9999:9999 --net=mynet --ip=172.18.0.2 --hostname=backend.foobar.com foobarbackend
route /P add 172.18.0.0 MASK 255.255.0.0 10.0.75.2
The foobarfrontend container calls a wss websocket on foobarbackend on port 9999.
PS: I work on Docker for Windows 10 with Linux containers.
Have fun!

How to forward all ports in docker container

Consider:
docker run -p 5000:5000 -v /host/:/host appimage
It forwards host port 5000 to container port 5000.
It even works with multiple ports:
docker run -p 5000:5000 -p 5001:5001 -v /host/:/host appimage
What I want to know is:
docker run -p allports:allports
Is there any command available that allows forwarding all the ports in a container? In my case I am running a Flask app. For testing purposes I want to run multiple Flask instances, so each instance should run on a different port. This automatic multi-port forwarding would help.
You can expose a range of ports using the -p option, for example:
docker run -p 2000-5000:2000-5000 -v /host/:/host appimage
See the docker run reference documentation for more details.
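Alternatively, if each Flask instance listens on the same port inside its own container, you can map a different host port to that same container port per instance (5000 is assumed as the Flask port, as in the question):
docker run -d -p 5000:5000 -v /host/:/host appimage
docker run -d -p 5001:5000 -v /host/:/host appimage
docker run -d -p 5002:5000 -v /host/:/host appimage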
You might have a working set-up by using docker run --net host ..., in which case the host's network is directly exposed to the container and all port bindings are "public". I haven't tested this with multiple containers simultaneously, but it might work just fine.
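For example (appimage taken from the question):
docker run --net host -v /host/:/host appimage
Note that with --net host any -p port mappings are ignored, since the container uses the host's network stack directly.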
