Docker container name cannot be resolved - docker

I just tried to create two containers for Elasticsearch and Kibana.
docker network create esnetwork
docker run --name myes --net esnetwork -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" elasticsearch:7.9.3
and Elasticsearch works when I use http://localhost:9200 or http://internal-ip:9200
But when I use http://myes:9200, it just can't resolve the container name.
Thus when I run
docker run --name mykib --net esnetwork -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://myes:9200" docker.elastic.co/kibana/kibana:7.9.3
Kibana can't start because it cannot resolve myes:9200.
I also tried replacing "ELASTICSEARCH_HOSTS=http://myes:9200" with localhost:9200 or the internal IP instead, but nothing works.
So I think my question really is: how do I make container name resolution (Docker's embedded DNS) work?

How are you resolving 'myes'?
Is it mapped in the hosts file and resolving to 127.0.0.1?
Also, use 127.0.0.1 wherever possible, as localhost could be pointing to something else and not get resolved.
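One quick way to check is from inside a container on the same user-defined network, since Docker's embedded DNS only answers containers attached to that network. A sketch using the names from the question (and assuming getent is available in the image):

```shell
# Test resolution of the "myes" name from another container on esnetwork;
# getent queries the same resolver the application would use.
docker exec mykib getent hosts myes

# List which containers are actually attached to the network --
# name resolution only works between containers on the same user-defined network.
docker network inspect esnetwork --format '{{range .Containers}}{{.Name}} {{end}}'
```

If the first command prints nothing, the two containers are not on the same user-defined network.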

It seems this problem doesn't arise from DNS. Both the Elasticsearch and Kibana containers should use the fixed name "elasticsearch", so the docker commands will be:
$docker network create esnetwork
$sudo vi /etc/sysctl.d/max_map_count.conf
vm.max_map_count=262144
$docker run --name elasticsearch --net esnetwork -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" elasticsearch:7.9.3
$docker run --name kib01-test --net esnetwork -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://elasticsearch:9200" docker.elastic.co/kibana/kibana:7.9.3
If the terminals that ran the installations terminate on their own, just close them and restart the containers from Docker Desktop. Then everything will go smoothly.
My environment is Fedora 36, docker 20.10.18

Related

Cannot access dockerized MySQL instance from another container

When I start MySQL :
docker run --rm -d -e MYSQL_ROOT_PASSWORD=root -p 3306:3306 -v /Docker/data/matos/mysql:/var/lib/mysql mysql:5.7
And start PHPMyAdmin :
docker run --rm -d -e PMA_HOST=172.17.0.1 phpmyadmin/phpmyadmin:latest
PMA cannot connect to the DB server.
When I try with PMA_HOST=172.17.0.2 (which is the address assigned to the MySQL container), it works.
But:
as the MySQL container publishes its 3306 port, I think it should be reachable on 172.17.0.1:3306.
I don't want to use the 172.17.0.2 address, because the MySQL container can be assigned another address whenever it restarts.
Am I wrong?
(I know I can handle this with docker-compose, but prefer managing my containers one by one).
(My MySQL container is successfully telnetable from my laptop with telnet 172.17.0.1 3306).
(My docker version : Docker version 20.10.3, build 48d30b5).
Thanks for your help.
Create a new docker network and start both containers on that network:
docker network create my-network
docker run --rm -d --network my-network -e MYSQL_ROOT_PASSWORD=root -p 3306:3306 -v /Docker/data/matos/mysql:/var/lib/mysql --name mysql mysql:5.7
docker run --rm -d --network my-network -e PMA_HOST=mysql phpmyadmin/phpmyadmin:latest
Notice in the command that I've given the mysql container a name, 'mysql', and used it as the address for phpmyadmin.
Just found out the problem.
My ufw was active on my laptop, and did not allow explicitly port 3306.
I managed to communicate between PMA container and MySQL, using 172.17.0.1, either by disabling ufw or adding a rule to explicitly accept port 3306.
Thanks @kidustiliksew for your quick reply, and for the opportunity you gave me to test user-defined networks.
Maybe it's a good idea to use docker-compose.
Create a docker-compose.yml file and inside declare two services, one web and the other db, then you can reference them through their service names (web, db)
ex: PMA_HOST=db
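A minimal compose file along those lines might look like this (a sketch; image versions, credentials, and the published port are assumptions, not from the thread):

```yaml
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root
  web:
    image: phpmyadmin/phpmyadmin:latest
    environment:
      PMA_HOST: db   # the service name resolves on the compose network
    ports:
      - "8080:80"
```

Running docker-compose up puts both services on one network, where each can reach the other by its service name.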

Running Toxiproxy in docker containers with MacOS

Very basic question about proxying inside docker containers in OSX: I am running a toxiproxy docker container with:
docker run --name proxy -p 8474:8474 -p 19200:19200 shopify/toxiproxy
and an Elasticsearch container:
docker run --name es -p 9200:9200 elasticsearch:6.8.6
I want toxiproxy to proxy the Elasticsearch container's port 9200 through localhost:19200. I configure toxiproxy with:
curl -XPOST localhost:8474/proxies -d "{ \"name\": \"proxy_es\", \"listen\": \"0.0.0.0:19200\", \"upstream\": \"localhost:9200\", \"enabled\": true}"
Now, I would expect that:
curl -XGET localhost:19200/_cat
would point me to the Elasticsearch endpoint. But get:
curl: (52) Empty reply from server
Any idea why this is wrong? How can I fix it?
From inside the toxiproxy container, localhost:9200 does not resolve to es container.
This is because, by default, these containers are attached to the default bridge network. On the default network, localhost refers to the container's own localhost; it does not resolve to the localhost of the host machine (where the Docker daemon is running).
You can use the host network by adding --net=host for this to work. A better approach would be to create a new network, and run all containers in that network.
docker run --name proxy -p 8474:8474 -p 19200:19200 --net host shopify/toxiproxy
docker run --name es -p 9200:9200 --net host elasticsearch:6.8.6
Your localhost should then be resolvable both ways.
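The user-defined network alternative could look like this sketch (the network name toxinet is an assumption): the proxy's upstream then points at the es container by name instead of localhost.

```shell
# Put both containers on one user-defined network so "es" resolves by name.
docker network create toxinet
docker run -d --name es --net toxinet elasticsearch:6.8.6
docker run -d --name proxy --net toxinet -p 8474:8474 -p 19200:19200 shopify/toxiproxy

# Create the proxy with the upstream pointing at the es container, not localhost.
curl -XPOST localhost:8474/proxies \
  -d '{"name": "proxy_es", "listen": "0.0.0.0:19200", "upstream": "es:9200", "enabled": true}'

# Traffic to the published 19200 port now flows through toxiproxy to Elasticsearch.
curl localhost:19200/_cat/health
```

Note that es no longer needs -p 9200:9200 at all; only the proxy's ports have to be published.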

Connect application to database when they are in separate docker containers

Well, the set up is simple, there should be two containers: one of them for the mysql database and the other one for web application.
What I do to run the containers,
the first one for database and the second for the app:
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d mysql
docker run -p 8081:8081 myrepo/myapp
The application tries to connect to the database using localhost:3306, but as I found out, the issue is that each container has its own localhost.
One of the solutions I found was to put the containers on the same network using --net, and the docker commands ended up like the following:
docker network create my-network
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d --net my-network mysql
docker run --net my-network -p 8081:8081 myrepo/myapp
Still, the web application is not able to connect to the database. What am I doing wrong, and what is the proper way to connect an application to a database when they are both inside containers?
You could use the name of the container (i.e. mysql-container) to connect to mysql. Example:
Run the mysql container:
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d --net my-network mysql
Connect from another container using the mysql client:
docker run --net my-network -it mysql mysql -u root -p db -h mysql-container
In your application's database URL, replace whatever IP you have with mysql-container.
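As a sketch of that change (the JDBC form here is an assumption; use whatever format your driver expects), only the host part of the URL changes:

```
# before: "localhost" resolves to the app container itself, not the database
jdbc:mysql://localhost:3306/db

# after: Docker's embedded DNS resolves the container name on my-network
jdbc:mysql://mysql-container:3306/db
```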
Well, after additional research, I successfully managed to connect to the database.
The approach I used is the following:
On my host I grabbed the IP address of the docker0 bridge itself, not of the specific container:
sudo ip addr show | grep docker0
The IP address of docker0 I added to the database connection URL inside my application, and thus the application managed to connect to the database (note: with this flow I don't add the --net option when starting the containers).
What's definitely strange is that even adding a shared network like --net=my-network for both containers didn't work. Moreover, I tried to use --net=host to share the host's network with the containers, and it was still unsuccessful. If anyone can explain why it didn't work, please share your knowledge.
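For reference, one way to get that docker0 gateway address without grepping ip addr output (a sketch; not the poster's exact command) is to ask Docker for the default bridge network's gateway:

```shell
# The gateway of the default "bridge" network is the docker0 address that
# containers on that network can use to reach services on the host.
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'
# typically 172.17.0.1
```

Bear in mind this routes through the host's published port, so host firewalls (such as the ufw issue mentioned in the earlier question) can still block it, which may explain why some of the other attempts failed.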

Running ELK stack docker on host machine

I want to install elasticsearch and kibana, on dockers, on my host machine:
$sudo docker run -dit --name elasticsearch -h elasticsearch --net host -p 9200:9200 -p 9300:9300 -v $(pwd)/elasticsearch/data/:/usr/share/elasticsearch/data/ -e "discovery.type=single-node" elasticsearch:6.6.1
WARNING: Published ports are discarded when using host network mode
$sudo docker run -dit --name kibana -h kibana --net host -p 5601:5601 kibana:6.6.1
WARNING: Published ports are discarded when using host network mode
and I get the following errors on kibana:
log [14:32:26.655] [warning][admin][elasticsearch] Unable to revive connection: http://elasticsearch:9200/
log [14:32:26.656] [warning][admin][elasticsearch] No living connections
But If I don't use host machine:
sudo docker network create mynetwork
sudo docker run -dit --name elasticsearch -h elasticsearch --net mynetwork -p 9200:9200 -p 9300:9300 -v $(pwd)/elasticsearch/data/:/usr/share/elasticsearch/data/ -e "discovery.type=single-node" elasticsearch:6.6.1
sudo docker run -dit --name kibana -h kibana --net mynetwork -p 5601:5601 kibana:6.6.1
all working fine. What is the problem?
--net host disables most of the Docker networking stack. Basic features like communicating between containers using their container name as a host name don’t work. Except in very unusual circumstances it’s never necessary.
Your second setup that uses standard Docker networking and publishes selected ports through the host is a best practice.
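If host networking really were required for some reason, Kibana would have to reach Elasticsearch over localhost rather than a container name. A hedged sketch (ELASTICSEARCH_URL as the Kibana 6.x setting is an assumption; verify against your image's docs):

```shell
# Both containers share the host's network stack, so the name "elasticsearch"
# no longer resolves; point Kibana at localhost instead.
sudo docker run -dit --name elasticsearch --net host -e "discovery.type=single-node" elasticsearch:6.6.1
sudo docker run -dit --name kibana --net host -e ELASTICSEARCH_URL=http://localhost:9200 kibana:6.6.1
```

But as noted above, the user-defined network setup is the better practice.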

Docker containers connection issue

I have two containers. One of them is my application and the other is Elasticsearch 5.5.3. My application needs to connect to the ES container, but I always get "Connection refused".
I run my application with static port:
docker run -i -p 9000:9000 .....
I run ES with static port:
docker run -i -p 9200:9200 .....
How can I connect them?
You need to link both containers by using --link.
Start your ES container with a name es -
$ docker run --name es -d -p 9200:9200 .....
Start your application container by using --links -
$ docker run --name app --link es:es -d -p 9000:9000 .....
That's all. You should be able to access the ES container with hostname es from the application container, i.e. app.
Try curl -I http://es:9200/ from inside the application container and you should be able to reach the ES service running in the es container.
Ref - https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/#communication-across-links
I suggest one of the following:
1) use docker links to link your containers together.
2) use docker-compose to run your containers.
Solution 1 is considered deprecated, but is maybe the easiest way to get started.
First, run your elasticsearch container giving it a name by using the --name=<your chosen name> flag.
Then, run your application container adding --link <your chosen name>:<your chosen name>.
Then, you can use <your chosen name> as the hostname to connect from the application to your elasticsearch.
Do you have a --network set on your containers? If they are both on the same --network, they can talk to each other over that network. So in the example below, the myapplication container would reference http://elasticsearch:9200 in its connection string to post to Elasticsearch.
docker run --name elasticsearch -p 9200:9200 --network=my_network -d elasticsearch:5.5.3
docker run --name myapplication --network=my_network -d myapplication
Learn more about Docker networks here: https://docs.docker.com/engine/userguide/networking/
