Well, the setup is simple: there should be two containers, one for the MySQL database and the other for the web application.
Here is how I run the containers, the first for the database and the second for the app:
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d mysql
docker run -p 8081:8081 myrepo/myapp
The application tries to connect to the database using localhost:3306, but as I found out, the issue is that each container has its own localhost.
One of the solutions I found was to put both containers on the same network using --net, so the docker commands became the following:
docker network create my-network
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d --net my-network mysql
docker run --net my-network -p 8081:8081 myrepo/myapp
However, the web application is still not able to connect to the database. What am I doing wrong, and what is the proper way to connect an application to a database when both are inside containers?
You could use the name of the container (i.e. mysql-container) to connect to MySQL. Example:
Run the mysql container:
docker run --name mysql-container -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=db -p 3306:3306 -d --net my-network mysql
Connect from another container using the mysql client:
docker run --net my-network -it mysql mysql -u root -p db -h mysql-container
In your application, replace whatever IP you have in the database URL with mysql-container.
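For example, if the app builds its connection URL from a host name, the only change needed is the host part. A minimal Python sketch of the idea (the URL scheme and names here are hypothetical; adapt them to your driver):

```python
def mysql_url(host, port=3306, database="db"):
    """Build a MySQL connection URL for the given host.

    Inside a container, 'localhost' refers to the container itself,
    so the host must be the peer container's name on the shared network.
    """
    return f"mysql://{host}:{port}/{database}"

print(mysql_url("localhost"))         # wrong: resolves to the app container itself
print(mysql_url("mysql-container"))   # right: resolved by Docker's embedded DNS
```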
Well, after additional research, I successfully managed to connect to the database.
The approach I used is the following:
On my host I grabbed the IP address of the Docker bridge itself, not of a specific container:
sudo ip addr show | grep docker0
I added the IP address of docker0 to the database connection URL inside my application, and the application then managed to connect to the database (note: with this flow I don't pass the --net option when starting the containers).
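If you want to script that lookup, the bridge address can be parsed out of the `ip addr` output. A small sketch (the sample output below is abridged and the interface details are illustrative):

```python
import re

# Abridged, illustrative output of `ip addr show docker0`
sample = """4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0"""

def docker0_ip(ip_addr_output):
    """Extract the IPv4 address of the docker0 bridge, or None if absent."""
    match = re.search(r"inet (\d+(?:\.\d+){3})/", ip_addr_output)
    return match.group(1) if match else None

print(docker0_ip(sample))  # 172.17.0.1
```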
What is definitely strange is that even adding a shared network like --net=my-network for both containers didn't work. Moreover, I tried using --net=host to share the host's network with the containers, and it was still unsuccessful. If anyone can explain why this didn't work, please share your knowledge.
I just tried to create two containers for Elasticsearch and Kibana.
docker network create esnetwork
docker run --name myes --net esnetwork -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" elasticsearch:7.9.3
and Elasticsearch works when I use http://localhost:9200 or http://internal-ip:9200.
But when I use http://myes:9200, it just can't resolve the container name.
Thus when I run
docker run --name mykib --net esnetwork -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://myes:9200" docker.elastic.co/kibana/kibana:7.9.3
It couldn't be created, because it cannot resolve myes:9200.
I also tried to replace "ELASTICSEARCH_HOSTS=http://myes:9200" with localhost:9200 or the internal IP instead, but nothing works.
So I think my question should be: how do I make the containers' DNS work?
How are you resolving 'myes'?
Is it mapped in hostname file and resolving to 127.0.0.1?
Also, use 127.0.0.1 wherever possible, as localhost could be pointing to something else and not getting resolved.
It seems this problem doesn't arise from DNS. The Elasticsearch container should use the fixed name "elasticsearch", so the docker commands will be:
$docker network create esnetwork
$sudo vi /etc/sysctl.d/max_map_count.conf
vm.max_map_count=262144
$docker run --name elasticsearch --net esnetwork -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" elasticsearch:7.9.3
$docker run --name kib01-test --net esnetwork -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://elasticsearch:9200" docker.elastic.co/kibana/kibana:7.9.3
Then, if the terminals that ran the installations end automatically, just close them and restart the containers from Docker Desktop. Everything should then go smoothly.
My environment is Fedora 36, docker 20.10.18
I am running the MongoDB image with the following command:
docker run -d -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=test -e MONGO_INITDB_ROOT_PASSWORD=password --name=testdb mongo
This created the container, and I'm able to connect to it from Robo 3T.
Now I ran the mongo-express image with the following command, trying to connect to the above DB:
docker run -d -p 8081:8081 -e ME_CONFIG_MONGODB_ADMINUSERNAME=test -e ME_CONFIG_MONGODB_ADMINPASSWORD=password -e ME_CONFIG_MONGODB_SERVER=testdb --name=mongo-ex mongo-express
But I'm getting following error:
UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [testb:27017] on first connect [Error: getaddrinfo ENOTFOUND testb
If I create a custom bridge network and run these two images on that network, it works.
My question is: since the default network is a bridge network, and these containers are created on the default bridge network, why are they not able to communicate? And why does it work with a custom bridge network?
There are two kinds of "bridge network"; if you don't have a docker run --net option then you get the "default" bridge network which is pretty limited. You almost always want to docker network create a "user-defined" bridge network, which has the standard Docker networking features.
# Use modern Docker networking
docker network create myapp
docker run -d --net myapp ... --name testdb mongo
docker run -d --net myapp ... -e ME_CONFIG_MONGODB_SERVER=testdb mongo-express
# Because both containers are on the same --net, the first
# container's --name is usable as a host name from the second
The "default" bridge network that you get without --net by default forbids inter-container communication, and you need a special --link option to make the connection. This is considered obsolete, and the Docker documentation page describing links notes that links "may eventually be removed".
# Use obsolete Docker networking; may stop working at some point
docker run -d ... --name testdb mongo
docker run -d ... -e ME_CONFIG_MONGODB_SERVER=testdb --link testdb mongo-express
# Containers can only connect to each other by name if they use --link
On modern Docker setups you really shouldn't use --link or the equivalent Compose links: option. Prefer to use the more modern docker network create form. If you're using Compose, note that Compose creates a network named default but this is a "user-defined bridge"; in most cases you don't need any networks: options at all to get reasonable inter-container networking.
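As a quick sanity check from inside the app container, you can probe whether the peer's name is reachable on its port. A small sketch (the host and port follow the example above; adjust to your setup):

```python
import socket

def can_reach(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From a container on the same user-defined network, the peer's
# --name works as a host name, e.g.:
#   can_reach("testdb", 27017)
```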
When I start MySQL:
docker run --rm -d -e MYSQL_ROOT_PASSWORD=root -p 3306:3306 -v /Docker/data/matos/mysql:/var/lib/mysql mysql:5.7
And start phpMyAdmin:
docker run --rm -d -e PMA_HOST=172.17.0.1 phpmyadmin/phpmyadmin:latest
PMA cannot connect to the DB server.
When I try with PMA_HOST=172.17.0.2 (which is the address assigned to the MySQL container), it works.
But:
since the MySQL container publishes its port 3306, I think it should be reachable on 172.17.0.1:3306;
I don't want to use the 172.17.0.2 address, because the MySQL container can be assigned a different address whenever it restarts.
Am I wrong?
(I know I can handle this with docker-compose, but prefer managing my containers one by one).
(My MySQL container is successfully reachable from my laptop with telnet 172.17.0.1 3306.)
(My docker version : Docker version 20.10.3, build 48d30b5).
Thanks for your help.
Create a new docker network and start both containers with the network
docker network create my-network
docker run --rm -d --network my-network -e MYSQL_ROOT_PASSWORD=root -p 3306:3306 -v /Docker/data/matos/mysql:/var/lib/mysql --name mysql mysql:5.7
docker run --rm -d --network my-network -e PMA_HOST=mysql phpmyadmin/phpmyadmin:latest
Notice in the commands that I've given the MySQL container the name 'mysql' and used it as the address for phpMyAdmin.
Just found out the problem.
ufw was active on my laptop and did not explicitly allow port 3306.
I managed to get the phpMyAdmin container communicating with MySQL via 172.17.0.1, either by disabling ufw or by adding a rule to explicitly accept port 3306 (e.g. sudo ufw allow 3306/tcp).
Thanks @kidustiliksew for your quick reply, and for the opportunity you gave me to test user-defined networks.
Maybe it's a good idea to use docker-compose.
Create a docker-compose.yml file and declare two services inside, one web and the other db; then you can reference them through their service names (web, db),
ex: PMA_HOST=db
I am trying to have my application connect to a SQL Server Express DB, both of which are containerized.
When I run my app container in a separate VM from the DB, it connects and all is good.
However, if the app container is running on the same VM as the DB container, it cannot connect.
I've tried setting the network mode to host, and still nothing.
I have a very simple setup as part of my hands-on learning.
Diagram of the setup (summarized):
Model A: VM to VM - connection works
Model B: same VM - cannot connect, so the app fails
I've been reading up on Docker a bit (running a simple Docker setup) to try and figure out the problem, but no luck so far.
I've also tried docker-compose, still no luck.
Edit 1:
Commands used.
SQL Server, as per the Docker Hub instructions:
docker run --restart always -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=RANDOMPASS01!' -e 'MSSQL_PID=Express' -p 1433:1433 -d mcr.microsoft.com/mssql/server:2017-latest-ubuntu
AppA
This by itself works fine in Model A:
docker run -p 5000:80 -d appa:0.1
I've also tried:
docker run -p 5000:80 --network host -d appa:0.1
From what I can see, what you're doing (starting your app container with --net=host) should work without issues, as long as you use localhost to connect to the db.
If this is just for testing on your local machine, I would suggest starting both containers within their own Docker network and accessing the db by container name; you can do it manually or use docker-compose.
Example with docker-compose:
version: '3'
services:
  db:
    image: mcr.microsoft.com/mssql/server:2017-latest-ubuntu
    ports:
      - 1433:1433
    environment:
      - MSSQL_PID=Express
      - SA_PASSWORD=RANDOMPASS01!
      - ACCEPT_EULA=Y
    restart: always
  app:
    # You can use this to tell docker-compose to build the image of your app
    # or use a prebuilt image like the db service is using
    image: appa:0.1
    ports:
      - 5000:80
Put this in a file called docker-compose.yml and start it with:
docker-compose up
This will create two containers on the same network. It will also provide a DNS record for each container, named after the service in the docker-compose file, so instead of using an IP or localhost you use "db" or "app".
More info for docker-compose: https://docs.docker.com/compose/overview/
The manual way:
docker network create mynetwork
Run the containers within the network:
docker run -p 5000:80 -d --net=mynetwork --name app appa:0.1
docker run --restart always -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=RANDOMPASS01!' -e 'MSSQL_PID=Express' -p 1433:1433 --net=mynetwork --name db -d mcr.microsoft.com/mssql/server:2017-latest-ubuntu
Just as with docker-compose, you can access the db by using the "db" DNS record that Docker creates based on the name of the container.
The aforementioned DNS records are only resolvable within the containers.
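You can verify that scope with a quick name-resolution check; running it on the host versus inside a container on the network shows the difference (the name "db" is the example container name from above):

```python
import socket

def resolves(name):
    """Return True if `name` can be resolved to an IP address here."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

print(resolves("localhost"))  # True everywhere
# resolves("db") is True only when run inside a container
# attached to the same user-defined network as the "db" container
```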
More info on user-defined networks: https://docs.docker.com/network/bridge/
I have two containers. One of them is my application and the other is Elasticsearch 5.5.3. My application needs to connect to the ES container, but I always get "Connection refused".
I run my application with a static port:
docker run -i -p 9000:9000 .....
I run ES with a static port:
docker run -i -p 9200:9200 .....
How can I connect them?
You need to link the two containers by using --link.
Start your ES container with the name es:
$ docker run --name es -d -p 9200:9200 .....
Start your application container using --link:
$ docker run --name app --link es:es -d -p 9000:9000 .....
That's all. You should be able to access the ES container with the hostname es from the application container, i.e. app.
Try curl -I http://es:9200/ from inside the application container, and you should be able to reach the ES service running in the es container.
Ref - https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/#communication-across-links
I suggest one of the following:
1) use Docker links to link your containers together, or
2) use docker-compose to run your containers.
Solution 1 is considered deprecated, but it is maybe the easier one to get started with.
First, run your elasticsearch container giving it a name by using the --name=<your chosen name> flag.
Then, run your application container adding --link <your chosen name>:<your chosen name>.
Then, you can use <your chosen name> as the hostname to connect from the application to your elasticsearch.
Do you have a --network set on your containers? If they are both on the same --network, they can talk to each other over that network. So in the example below, the myapplication container would reference http://elasticsearch:9200 in its connection string to post to Elasticsearch.
docker run --name elasticsearch -p 9200:9200 --network=my_network -d elasticsearch:5.5.3
docker run --name myapplication --network=my_network -d myapplication
Learn more about Docker networks here: https://docs.docker.com/engine/userguide/networking/