I have set up a Docker container with MySQL that exposes port 3306.
I've specified a database user and password, created a test database, and granted privileges to the new user.
From another container I want to access this database.
So I set up a new container with a simple PHP script that creates a new table in that database.
I know that the MySQL container's IP is 172.17.0.2, so:
$mysqli = new mysqli("172.17.0.2", "mattia", "prova", "prova");
Then, using mysqli, I create the new table and everything works fine.
But I think that connecting to a container by its IP address is not a good idea.
Is there another way to specify the DB host? I tried the hostname of the MySQL container, but it doesn't work.
The --link flag is considered a legacy feature; you should use user-defined networks instead.
You can create a network and run both containers on it:
docker network create my_network
docker run -d --name php_container --network my_network my_php_image
docker run -d --name mysql_container --network my_network my_mysql_image
Every container on that network can reach the others using the container name as the hostname.
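Applied to the script from the question, the host then becomes the container name instead of an IP (assuming the container names used above):
$mysqli = new mysqli("mysql_container", "mattia", "prova", "prova");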
You need to link your Docker containers together with the --link flag in the docker run command, or with the links feature in docker-compose. For instance:
docker run -d --name app-container-name --link mysql-container-name app-image-name
This way, Docker will add the IP address of the MySQL container to the /etc/hosts file of your application container.
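For instance, inside the application container, /etc/hosts would gain an entry along these lines (the IP shown is illustrative):
172.17.0.2    mysql-container-name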
For a complete guide, refer to:
MySQL Docker Containers: Understanding the basics
In your docker-compose.yml file, add a links property to your webserver service:
https://docs.docker.com/compose/networking/#links
Then in your connection parameters, the host value is your database service name:
$mysqli = new mysqli("database", "mattia", "prova", "prova");
If you are using docker-compose, the database will be accessible under its service name.
version: "3.9"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
Then the database is accessible using: postgres://db:5432.
Here the service name doubles as the hostname on the internal network.
Quote from docker docs:
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
Source:
https://docs.docker.com/compose/networking/
I have two docker containers:
database
app that consumes the database
I run my database container like this:
docker run --name my-db -p 127.0.0.1:3306:3306 my-db-image
And my app container like this:
docker run --name my-app --network host -it my-app-image
This works fine on Linux. I can access the DB from both the host system and the app container. Perfect.
However, --network host does not work on Mac and Windows:
The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.
(source: https://docs.docker.com/network/host/)
I can still access the database via 127.0.0.1:3306 from the main host, but I cannot access it from the app container.
How can I solve this issue? How can I let the app container connect to the database, while keeping the DB accessible from the main host via 127.0.0.1:3306?
I've tried using host.docker.internal and gateway.docker.internal, but they don't work.
I've also tried launching both containers with --network my-network after creating my-network with docker network create my-network, but that doesn't work either.
I can't figure out how to solve this issue.
For multiple services, it can often be easier to create a docker-compose.yml file that will launch all the services and any networks needed to connect them.
version: '3'
services:
  my-db:
    image: my-db-image
    ports:
      - "3306:3306"
    networks:
      - mynetwork
  my-app:
    image: my-app-image
    ports:
      - "8000:80"
    networks:
      - mynetwork
networks:
  mynetwork:
From the project folder, you run docker-compose up, or docker-compose up -d to make the services run in the background.
In this scenario, Compose provisions the network mynetwork and attaches both services to it, so each service can reach the other by its service name (e.g. my-db). If you want to remap the ports, the short syntax is host:container.
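As a quick connectivity check by service name (a sketch, assuming netcat is available in the my-app image):
docker-compose exec my-app sh -c 'nc -zv my-db 3306'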
I don't know that you need the ports config here, but I'm trying to map your config to the compose file. Also, I'm assuming you need to expose the app on some port; I'm using 8000 as it's a pretty common setup.
For what the parameters mean, see the Docker Compose reference.
Kubernetes has a concept of pods where containers can share ports between them. For example within the same pod, a container can access another container (listening on port 80) via localhost:80.
However, with docker-compose, localhost refers to the container itself.
Is there any way to implement the Kubernetes network config in Docker?
Essentially, I have a Kubernetes config that I would like to reuse in a docker-compose config, without having to modify the images.
I seem to have gotten it to work by adding network_mode: host to each of the container configs within my docker-compose config.
Yes, you can. You run a service and then use network_mode: service:&lt;nameofservice&gt;
version: '3'
services:
  mainnetwork:
    image: alpine
    command: tail -f /dev/null
  mysql:
    image: mysql
    network_mode: service:mainnetwork
    environment:
      - "MYSQL_ROOT_PASSWORD=root"
  mysqltest:
    image: mysql
    command: bash -c "sleep 10 && mysql -uroot -proot -h 127.0.0.1 -e 'CREATE DATABASE tarun;'"
    network_mode: service:mainnetwork
Edit-1
So network_mode can take the following values:
host
service:(service name in the same compose file)
container:(name or ID of an external container that is already running)
In this case I have used service:mainnetwork, so the mainnetwork service needs to be up.
Also, this has been tested on Docker 17.06 CE, so I assume you are using that version or newer.
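The container: variant works the same way outside Compose too. A quick illustration with throwaway names (these are placeholders, not from the answer):
docker run -d --name maincontainer alpine tail -f /dev/null
docker run --rm --network container:maincontainer alpine ip addr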
Using the Docker links mechanism you can wire containers together, and the declared ports will then be available through localhost.
I created two Docker containers based on two different images: one for a DB and another for a webserver. Both containers are running on my Mac OS X machine.
I can access the DB container from the host machine, and likewise the webserver.
However, how do I access the DB from the webserver container?
The way I started the DB container is:
docker run --name oracle-db -p 1521:1521 -p 5501:5500 oracle/database:12.1.0.2-ee
I started the WLS container as:
docker run --name oracle-wls -p 7001:7001 wls-image:latest
I can access the DB on the host by connecting to:
sqlplus scott/welcome1@//localhost:1521/ORCLCDB
I can access WLS on the host at:
http://localhost:7001/console
It's easy.
If you have two or more running containers, complete the following steps:
docker network create myNetwork
docker network connect myNetwork web1
docker network connect myNetwork web2
Now you can connect from the web1 container to web2, or the other way round.
Use the internal network IP addresses, which you can find by running:
docker network inspect myNetwork
Note that only internal IP addresses and ports are accessible to the containers connected by the network bridge.
So, for example, assuming that the web1 container was started with docker run -p 80:8888 web1 (meaning that its server runs on port 8888 internally), and inspecting myNetwork shows that web1's IP is 172.0.0.2, you can connect from web2 to web1 with curl 172.0.0.2:8888.
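If you only want the names and addresses, a Go template over the same inspect output keeps it short (a sketch; field names per current Docker versions):
docker network inspect -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}' myNetwork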
The easiest way is to use --link; however, newer versions of Docker are moving away from that, and in fact the switch will be removed soon.
The link below offers a nice how-to on connecting two containers. You can skip the attach portion, since that is just a useful guide on adding items to images.
https://web.archive.org/web/20160310072132/https://deis.com/blog/2016/connecting-docker-containers-1/
The part you are interested in is the communication between the two containers. The easiest way is to refer to the DB container by name from the webserver container.
Example:
Say you named the DB container db1 and the webserver container web0. The containers should both be on the bridge network, which means the web container should be able to connect to the DB container by referring to its name.
So if you have a web config file for your app, use the name db1 as the DB host.
If you are using an older version of Docker, then you should use --link.
Example:
Step 1: docker run --name db1 oracle/database:12.1.0.2-ee
Then, when you start the web app, use:
Step 2: docker run --name web0 --link db1 webapp/webapp:3.0
and the web app will be linked to the DB. However, as I said, the --link switch will be removed soon.
I'd use Docker Compose instead, which will build a network for you. However, you will need to download Docker Compose for your system: https://docs.docker.com/compose/install/#prerequisites
An example setup looks like this.
The file name is base.yml:
version: "2"
services:
webserver:
image: moodlehq/moodle-php-apache:7.1
depends_on:
- db
volumes:
- "/var/www/html:/var/www/html"
- "/home/some_user/web/apache2_faildumps.conf:/etc/apache2/conf-enabled/apache2_faildumps.conf"
environment:
MOODLE_DOCKER_DBTYPE: pgsql
MOODLE_DOCKER_DBNAME: moodle
MOODLE_DOCKER_DBUSER: moodle
MOODLE_DOCKER_DBPASS: "m#0dl3ing"
HTTP_PROXY: "${HTTP_PROXY}"
HTTPS_PROXY: "${HTTPS_PROXY}"
NO_PROXY: "${NO_PROXY}"
db:
image: postgres:9
environment:
POSTGRES_USER: moodle
POSTGRES_PASSWORD: "m#0dl3ing"
POSTGRES_DB: moodle
HTTP_PROXY: "${HTTP_PROXY}"
HTTPS_PROXY: "${HTTPS_PROXY}"
NO_PROXY: "${NO_PROXY}"
This will give the network a generic name (I can't remember off the top of my head what that name is) unless you use the -p (project name) switch,
e.g. docker-compose -p setup1 -f base.yml up
NOTE: if you use the -p switch, you will need to use it whenever you call docker-compose, e.g. docker-compose -p setup1 -f base.yml down. This is so docker-compose knows which instance you want to run commands against, and so you can have more than one instance of webserver and db running at once. Great for CI/CD, if you are running tests in parallel on the same server.
Docker Compose also mirrors docker's commands, so you can run docker-compose -p setup1 -f base.yml exec webserver do_some_command
The best part is, if you want to change DBs or something like that for unit tests, you can pass an additional .yml file to the up command and it will override any items with matching names; I think of it as a key => value replacement.
Example:
db.yml
version: "2"
services:
webserver:
environment:
MOODLE_DOCKER_DBTYPE: oci
MOODLE_DOCKER_DBNAME: XE
db:
image: moodlehq/moodle-db-oracle
Then call docker-compose -p setup1 -f base.yml -f db.yml up
This will override the db service with a different setup. When connecting to these services from each container, you use the name set under services:, in this case webserver and db.
I think this might actually be a more useful setup in your case, since you can set all the variables you need in the .yml files and just run the docker-compose command when you need the services started. So it's more of a start-it-and-forget-it setup.
NOTE: I did not use the ports option, since exposing ports is not needed for container-to-container communication. It is only needed if you want the host, or applications outside the host, to connect to the container. If you expose a port, then the port is open to all communication that the host allows. So exposing web on port 80 is the same as starting a webserver on the physical host, and will allow outside connections if the host allows them. Also, if you want to run more than one web app at once, for whatever reason, exposing port 80 will prevent you from running additional web apps on that port. So, for CI/CD it is best not to expose ports at all, and if you use docker-compose with the -p switch, all containers will be on their own network so they won't collide. You will pretty much have a container of containers.
UPDATE: After using these features further and seeing how others have done it for CI/CD tools like Jenkins, a network is also a viable solution.
Example:
docker network create test_network
The above command will create a test_network, to which you can attach other containers. This is made easy with the --network switch.
Example:
docker run \
--detach \
--name db1 \
--network test_network \
-e MYSQL_ROOT_PASSWORD="${DBPASS}" \
-e MYSQL_DATABASE="${DBNAME}" \
-e MYSQL_USER="${DBUSER}" \
-e MYSQL_PASSWORD="${DBPASS}" \
--tmpfs /var/lib/mysql:rw \
mysql:5
Of course, if you have proxy network settings, you should still pass those into the containers using the -e or --env-file switches so the containers can communicate with the internet. Docker says the proxy settings should be absorbed by the container in newer versions of Docker; however, I still pass them in out of habit. This is the replacement for the --link switch, which is going away. Once the containers are attached to the network you created, you can still refer to them from other containers by the container's name, which per the example above would be db1. You just have to make sure all containers are connected to the same network, and you are good to go.
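The application container then joins the same network and refers to the database simply as db1. A sketch (the web1 name and my-web-image are placeholders, not from the answer):
docker run \
  --detach \
  --name web1 \
  --network test_network \
  -e DB_HOST="db1" \
  my-web-image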
For a detailed example of using network in a cicd pipeline, you can refer to this link:
https://git.in.moodle.com/integration/nightlyscripts/blob/master/runner/master/run.sh
This is the script that is run in Jenkins for a huge set of integration tests for Moodle, but the idea/example can be used anywhere. I hope this helps others.
You will have to access the DB through the IP of the host machine, or, if you want to access it via localhost:1521, run the webserver like this:
docker run --net=host --name oracle-wls wls-image:latest
See here
Using docker-compose, services are exposed to each other by name by default. Docs.
You can also specify an alias, like:
version: '2.1'
services:
  mongo:
    image: mongo:3.2.11
  redis:
    image: redis:3.2.10
  api:
    image: some-image
    depends_on:
      - mongo
      - solr
    links:
      - "mongo:mongo.openconceptlab.org"
      - "solr:solr.openconceptlab.org"
      - "some-service:some-alias"
And then access the service using the specified alias as the host name, e.g. mongo.openconceptlab.org for mongo in this case.
Environment: Windows 10, Docker Desktop version 4.5.1.
Use hostname host.docker.internal to access services running on your host machine from inside a container.
See: https://docs.docker.com/desktop/windows/networking/#use-cases-and-workarounds
I run PostgreSQL in one container and my app in a separate container.
I configure the app database connection to use host.docker.internal as the hostname and it just works.
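For example, with PostgreSQL published on the host's port 5432, a connection like this works from inside the app container (user, password, and database name are placeholders):
psql "postgres://user:password@host.docker.internal:5432/mydb"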
Consider this example:
We create two containers here, a PostgreSQL server and pgAdmin (a tool for accessing servers, like phpMyAdmin, SQL Studio, or Workbench).
Exposed ports:
PostgreSQL ---> 5436
pgAdmin ---> 5050
If you add a server in pgAdmin with localhost as the hostname, it will show a connection error, because inside the pgAdmin container localhost refers to the container itself; instead, we need the PostgreSQL container's IP to solve the problem. Connect both containers to a shared network:
docker network create con
docker network connect con app1
docker network connect con app2
This command shows the connected containers' IP addresses and other details:
docker network inspect con
Now you can see the IP addresses in the network inspect output. Pick the Postgres container's IP; you can access the container's exposed ports through this IP (here only Postgres's 5432 is exposed). Set the hostname to that container IP and it will work.
You can use the default Docker network. If you don't want to set up any Docker networking yourself, you can do this:
Copy the IP address under Docker subnet in Resources > Network in Docker Preferences on Mac:
Docker preferences screenshot
As you can see from the screenshot, the IP address is
192.168.65.0
You just need to replace "localhost" in your container's config file with "192.168.65.1" (i.e. the IP address picked + 1).
You can start your containers and should be set for local development/testing.
For some more details, you can see my article:
Connect Docker containers the easy way
In my case, connecting from one container to another in the application by the IP provided by the bridge didn't work.
But it works with the name of the container (see my screenshot).
So you can replace the IP with the name of the container.
I have a very simple two-container application, one running Tomcat, the other Redis. I am attempting to use docker-compose, but the link between the containers is not working.
If I simply do a run with the commands below, all is happy:
docker run -d --name db redis
docker run -d -P --name web --link db web
When I attempt to use the docker-compose.yml file below, there is no link between the containers.
version: '2'
services:
  web:
    image: web
    links:
      - redis
    ports:
      - "8080:8080"
  redis:
    image: redis
No environment variables are created when using docker-compose, and the /etc/hosts file is not updated.
I'm puzzled, since this is literally a copy of several of the examples I've found on numerous sites, including docker.com. Any help would be greatly appreciated.
In version 2 of Compose you don't need links. You should be able to connect to the Redis container just by referring to the hostname redis. It doesn't appear in environment variables or /etc/hosts because networking is now a first-class feature in Docker, and it uses built-in DNS resolution instead.
Since the new networking model came in, each container is started on a network (called default, by, er, default), and containers can communicate with each other using their service or container name.
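A quick way to confirm the built-in DNS resolution (assuming ping is available in the web image):
docker-compose exec web ping -c 1 redis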