I am trying to make a portable solution to having my application container connect to a postgres container. By 'portable' I mean that I can give the user two docker run commands, one for each container, and they will always work together.
I have a postgres docker container running on my local PC, and I run it like this,
docker run -p 5432:5432 -v $(pwd)/datadir:/var/lib/postgresql/data -e POSTGRES_PASSWORD=qwerty -d postgres:11
and I am able to access it from a python flask app, using the address 127.0.0.1:5432.
I put the python app in a docker container as well, and I am having trouble connecting to the postgres container.
Address 127.0.0.1:5432 does not work.
Address 172.17.0.2:5432 DOES work (172.17.0.2 is the address of the docker container running postgres). However, I don't consider this portable, because I can't guarantee what the postgres container's IP will be.
I am aware of the --add-host flag, but it also asks for a host IP, which I want to be localhost (127.0.0.1). Despite reading several posts about --add-host, I wasn't able to get it working in a way that lets the final docker run commands be the same on any computer they are run on.
I also tried this: docker container port accessed from another container
My situation is that the postgres and myApp will be containers running on the same computer. I would prefer a non-Docker compose solution.
The comment from Truong had me try that approach (again) and I got it working. Here are my steps in case they help someone else. The crux of the problem was needing one container to address another in a way that was static (didn't change). Using a user-defined network was the answer, because you can name a container and thus reference it by that name.
My steps,
docker network create mynet
docker run --net mynet --name mydb -v $(pwd)/datadir:/var/lib/postgresql/data -e POSTGRES_PASSWORD=qwerty -d postgres:11
Now the postgres database is reachable at the hostname mydb, and all the ports of this container are exposed to any other container running in this network.
Now add the front-end app:
docker run --net mynet -ti -p 80:80 mydockerhubaccount/myapp
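To verify the two containers can talk, here is a minimal sketch that runs psql from a throwaway container on the same network (assuming the default postgres superuser and the password from the run command above):
docker run --rm --net mynet postgres:11 psql postgresql://postgres:qwerty@mydb:5432/postgres -c 'SELECT 1;'
If this prints a single row, the app container can use mydb:5432 as its database address.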
I'm looking at documentation here, and see the following line:
$ docker run -it --network some-network --rm redis redis-cli -h some-redis
What should go in the --network some-network field? My earlier docker run command used the default port mapping, i.e. docker run -d -p 6379:6379, etc.
I'm starting my redis server with the default docker network configuration, and I can see it is in use:
$ docker container ls
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                    NAMES
abcfa8a32de9   redis   "docker-entrypoint.s…"   19 minutes ago   Up 19 minutes   0.0.0.0:6379->6379/tcp   some-redis
However, using the default bridge network produces:
$ docker run -it --network bridge --rm redis redis-cli -h some-redis
Could not connect to Redis at some-redis:6379: Name or service not known
Ignore the --network bridge command and use:
docker exec -it some-redis redis-cli
Docker includes support for networking containers through the use of network drivers. By default, Docker provides two network drivers for you, the bridge and the overlay drivers. You can also write a network driver plugin so that you can create your own drivers but that is an advanced task.
Read more here.
https://docs.docker.com/engine/tutorials/networkingcontainers/
https://docs.docker.com/v17.09/engine/userguide/networking/
You need to run
docker network create some-network
It doesn't matter what name some-network is, just so long as the Redis server, your special CLI container, and any clients talking to the server all use the same name. (If you're using Docker Compose this happens for you automatically and the network will be named something like directoryname_default; use docker network ls to find it.)
If your Redis server is already running, you can use docker network connect to attach the existing container to the new network. This is one of the few settings you're able to change after you've created a container.
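Putting those two commands together with the redis-cli invocation from the question, a minimal sketch looks like this:
docker network create some-network
docker network connect some-network some-redis
docker run -it --network some-network --rm redis redis-cli -h some-redis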
If you're just trying to run a client to talk to this Redis, you don't need Docker for this at all. You can install the Redis client tools locally and run redis-cli, pointing at your host's IP address and the first port in the docker run -p option. The Redis wire protocol is simple enough that you can also use primitive tools like nc or telnet as well.
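For example, assuming the server was published with -p 6379:6379, either of these works from the host:
redis-cli -h 127.0.0.1 -p 6379 ping
printf 'PING\r\n' | nc 127.0.0.1 6379
The first should print PONG; the second prints +PONG, since nc is speaking the raw wire protocol.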
I have an app that launches a docker container and automates a few of the routines.
Now I have dockerized this app, and it is not able to talk to other containers over localhost. I tried setting
--network host
when launching the container, and now I am not able to access the containerized webapp over localhost:<port>.
Any pointers?
localhost won't work. Suppose you are running a VM and try to talk to your host or to other VMs running on your machine. If you call localhost from one of the VMs, it means that VM only, not your host, so you can't reach one VM from another by calling localhost. Docker behaves the same way with regard to localhost. You have two options:
Use a network
Create a network and add all the containers to that network. This is the approach Docker now recommends.
docker network create <your-network-name>
docker run --network <your-network-name> --name <container-name1> <image>
docker run --network <your-network-name> --name <container-name2> <image>
Then use the container name (container-name1) to talk to that service from the other service (container-name2), as in the check below.
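A quick way to verify the name resolution (whether getent is available depends on the image):
docker exec container-name2 getent hosts container-name1
This should print the internal IP that container-name1 resolves to on the shared network.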
Use --link option
Or you could use the --link option, which is a legacy Docker feature. The Docker docs say: unless you have a specific reason to use it, don't use --link anymore.
docker run --name <container1> <image>
docker run --name <container2> --link <container1> <image>
Container2 can then reach container1 by name (add a --link in the other direction if you need both ways). You can use these container names in places like the DB host setting, etc.
Did you try creating a common bridge network and attaching your containers to the same network?
Create the network:
docker network create networkname
and then add this switch to each docker run command: --network=networkname
I figured it out later, after going over a lot of other documents.
Step 1: install Docker inside the container. I added the following line to my Dockerfile:
RUN curl -sSL https://get.docker.com/ | sh
Step 2: provide a volume mapping in the docker run command:
-v /var/run/docker.sock:/var/run/docker.sock
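For example, a full run command might look like this (the image name myapp is a placeholder):
docker run -v /var/run/docker.sock:/var/run/docker.sock myapp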
Now the host's docker commands are accessible from within my container, and without changing --network for the current container, I'm able to access other containers over localhost.
I created two docker containers based on two different images: one for the DB and another for the webserver. Both containers are running on my Mac OS X machine.
I can access the db container from the host machine, and likewise the webserver.
However, how do I connect to the db from the webserver?
The way I started db container is
docker run --name oracle-db -p 1521:1521 -p 5501:5500 oracle/database:12.1.0.2-ee
I started wls container as
docker run --name oracle-wls -p 7001:7001 wls-image:latest
I can access db on host by connecting to
sqlplus scott/welcome1@//localhost:1521/ORCLCDB
I can access wls on host as
http://localhost:7001/console
It's easy.
If you have two or more running containers, complete the following steps:
docker network create myNetwork
docker network connect myNetwork web1
docker network connect myNetwork web2
Now you can connect from the web1 container to web2, or the other way round.
Use the internal network IP addresses which you can find by running:
docker network inspect myNetwork
Note that only internal IP addresses and ports are accessible to the containers connected by the network bridge.
So, for example, assuming that the web1 container was started with docker run -p 80:8888 web1 (meaning that its server is running on port 8888 internally), and inspecting myNetwork shows that web1's IP is 172.0.0.2, you can connect from web2 to web1 using curl 172.0.0.2:8888.
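If you only need one container's address, a Go template pulls it directly (a sketch, using the names above):
docker inspect -f '{{ (index .NetworkSettings.Networks "myNetwork").IPAddress }}' web1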
The easiest way is to use --link; however, newer versions of Docker are moving away from it, and in fact the switch will be removed soon.
The link below offers a nice how-to on connecting two containers. You can skip the attach portion, since that is just a useful guide to adding items to images.
https://web.archive.org/web/20160310072132/https://deis.com/blog/2016/connecting-docker-containers-1/
The part you are interested in is the communication between two containers. The easiest way, is to refer to the DB container by name from the webserver container.
Example:
You named the db container db1 and the webserver container web0. Put both containers on the same user-defined network; the web container can then connect to the DB container by referring to its name.
So if you have a web config file for your app, you will use the name db1 as the DB host.
If you are using an older version of Docker, then you should use --link.
Example:
Step 1: docker run --name db1 oracle/database:12.1.0.2-ee
then when you start the web app. use:
Step 2: docker run --name web0 --link db1 webapp/webapp:3.0
and the web app will be linked to the DB. However, as I said, the --link switch will be removed soon.
I'd use Docker Compose instead, which will build a network for you. However, you will need to install Docker Compose on your system first: https://docs.docker.com/compose/install/#prerequisites
An example setup looks like this:
File name: base.yml
version: "2"
services:
webserver:
image: moodlehq/moodle-php-apache:7.1
depends_on:
- db
volumes:
- "/var/www/html:/var/www/html"
- "/home/some_user/web/apache2_faildumps.conf:/etc/apache2/conf-enabled/apache2_faildumps.conf"
environment:
MOODLE_DOCKER_DBTYPE: pgsql
MOODLE_DOCKER_DBNAME: moodle
MOODLE_DOCKER_DBUSER: moodle
MOODLE_DOCKER_DBPASS: "m#0dl3ing"
HTTP_PROXY: "${HTTP_PROXY}"
HTTPS_PROXY: "${HTTPS_PROXY}"
NO_PROXY: "${NO_PROXY}"
db:
image: postgres:9
environment:
POSTGRES_USER: moodle
POSTGRES_PASSWORD: "m#0dl3ing"
POSTGRES_DB: moodle
HTTP_PROXY: "${HTTP_PROXY}"
HTTPS_PROXY: "${HTTPS_PROXY}"
NO_PROXY: "${NO_PROXY}"
This will give the network a generic name based on the project (by default, the directory name plus _default), unless you use the -p/--project-name switch, e.g.
docker-compose -p setup1 -f base.yml up
NOTE: if you use the -p switch, you will need to use it whenever you call docker-compose, e.g. docker-compose -p setup1 -f base.yml down. This is so you can have more than one instance of webserver and db, so docker-compose knows which instance you want to run commands against, and so you can have more than one running at once. Great for CI/CD if you are running tests in parallel on the same server.
Docker Compose also mirrors the docker commands, so: docker-compose -p setup1 -f base.yml exec webserver do_some_command
The best part is, if you want to change the DB or something like that for unit tests, you can pass an additional .yml file to the up command and it will override any items with matching names. I think of it as a key => value replacement.
Example:
db.yml
version: "2"
services:
webserver:
environment:
MOODLE_DOCKER_DBTYPE: oci
MOODLE_DOCKER_DBNAME: XE
db:
image: moodlehq/moodle-db-oracle
Then call docker-compose -p setup1 -f base.yml -f db.yml up
This will override the db service with a different setup. When connecting to these services from each container, you use the name set under services, in this case webserver and db.
I think this might actually be a more useful setup in your case, since you can set all the variables you need in the yml files and just run the docker-compose command when you need the containers started. So it's more of a start-it-and-forget-it setup.
NOTE: I did not use the -p/--publish switch of docker run, since exposing the ports is not needed for container-to-container communication. It is needed only if you want the host, or applications outside the host, to connect to the container. If you expose a port, that port is open to all communication the host allows. So exposing web on port 80 is the same as starting a webserver on the physical host: it will allow outside connections, if the host allows them.
Also, if you want to run more than one web app at once, for whatever reason, then exposing port 80 will prevent you from running additional web apps on that same port. So, for CI/CD it is best not to expose ports at all, and if you use docker-compose with the --project-name switch, all containers will be on their own network so they won't collide. You will pretty much have a container of containers.
UPDATE: After using these features further, and seeing how others handle it in CI/CD programs like Jenkins, I've found a plain Docker network is also a viable solution.
Example:
docker network create test_network
The above command creates a network named "test_network" to which you can attach other containers, which is made easy with the --network switch.
Example:
docker run \
--detach \
--name db1 \
--network test_network \
-e MYSQL_ROOT_PASSWORD="${DBPASS}" \
-e MYSQL_DATABASE="${DBNAME}" \
-e MYSQL_USER="${DBUSER}" \
-e MYSQL_PASSWORD="${DBPASS}" \
--tmpfs /var/lib/mysql:rw \
mysql:5
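To check that name resolution works, a throwaway client container on the same network can reach db1 by name; a sketch reusing the variables above (give the server a moment to finish starting first):
docker run --rm --network test_network mysql:5 mysql -h db1 -u"${DBUSER}" -p"${DBPASS}" -e 'SELECT 1;' "${DBNAME}"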
Of course, if you have proxy network settings, you should still pass those into the containers using the -e or --env-file switches so the containers can communicate with the internet. Docker says proxy settings should be absorbed by the container in newer versions of Docker; however, I still pass them in out of habit. This is the replacement for the --link switch, which is going away. Once the containers are attached to the network you created, you can refer to them from other containers by the container's name; per the example above, that would be db1. You just have to make sure all containers are connected to the same network, and you are good to go.
For a detailed example of using a network in a CI/CD pipeline, you can refer to this link:
https://git.in.moodle.com/integration/nightlyscripts/blob/master/runner/master/run.sh
This is the script that is run in Jenkins for Moodle's huge integration test suite, but the idea/example can be used anywhere. I hope this helps others.
You will have to access the DB through the IP of the host machine, or, if you want to access it via localhost:1521, run the webserver with --net=host, like this:
docker run --net=host --name oracle-wls wls-image:latest
See here
Using docker-compose, services are exposed to each other by name by default. Docs.
You can also specify an alias, like this:
version: '2.1'
services:
  mongo:
    image: mongo:3.2.11
  redis:
    image: redis:3.2.10
  api:
    image: some-image
    depends_on:
      - mongo
      - solr
    links:
      - "mongo:mongo.openconceptlab.org"
      - "solr:solr.openconceptlab.org"
      - "some-service:some-alias"
And then access the service using the specified alias as a host name, e.g. mongo.openconceptlab.org for mongo in this case.
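For example, a quick resolution check from inside the api container (whether getent is available depends on the image):
docker-compose exec api getent hosts mongo.openconceptlab.org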
Environment: Windows 10, Docker Desktop version 4.5.1.
Use hostname host.docker.internal to access services running on your host machine from inside a container.
See: https://docs.docker.com/desktop/windows/networking/#use-cases-and-workarounds
I run PostgreSQL in one container and my app in a separate container.
I configure the app's database connection to use host.docker.internal as the hostname, and it just works.
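A minimal sketch of the same check from a throwaway container, assuming Postgres is published on the host's port 5432 (the password value secret is a placeholder):
docker run --rm -e PGPASSWORD=secret postgres:11 psql -h host.docker.internal -p 5432 -U postgres -c 'SELECT 1;'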
Consider an example.
We create two containers here, a PostgreSQL server and pgAdmin (a tool for accessing database servers, like phpMyAdmin, SQL Studio, or Workbench).
Exposed ports:
PostgreSQL ---> 5436
pgAdmin ---> 5050
If you add a server in pgAdmin with the hostname localhost, it will show a connection error, because inside the pgAdmin container localhost refers to that container itself. We need the PostgreSQL container's address instead to solve the problem.
docker network create con
docker network connect con app1
docker network connect con app2
This command shows the connected containers' IP addresses and other details:
docker network inspect con
Now you can see the IP addresses shown in the network inspect output. Choose the Postgres container's IP; you can access the container's other exposed ports through this IP as well, but here only Postgres port 5432 is exposed. Set the hostname to the container IP and it will work.
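To print just the container names and IPs from that output, a format template helps (a sketch):
docker network inspect -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}' con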
You can use the default Docker network. If you don't want to go through any Docker networking, you can do this:
Copy the IP address shown under Docker subnet in Resources > Network in Docker Preferences on Mac:
Docker preferences screenshot
As you can see from the screenshot, the IP address is
192.168.65.0
You just need to replace "localhost" in your container's config file with "192.168.65.1" (i.e. the subnet address plus 1).
You can start your containers and should be set for local development/testing.
For some more details, you can see my article:
Connect Docker containers the easy way
In my case, connecting from one container to another using the IP provided by the bridge didn't work.
But it works with the name of the container (see my screenshot).
So you can replace the IP with the name of the container.
I have a MySQL server running on a CentOS host, and I want to deploy my WAR in Tomcat inside a Docker container on the same host. Any idea how I can connect to MySQL from inside the container?
Here is what you can do to connect to the DB (MySQL) from the app (Tomcat).
There are two ways you can do this:
By using links
Manually, by providing the IP address of the DB to the Tomcat app
Example 1 (links)
Get your MySQL container up:
docker run --name db -e MYSQL_ROOT_PASSWORD="xxxx" -e MYSQL_DATABASE="xxxx" -d mysql
You now need to build and run the application container, which must know where the DB instance is. You will have to figure out where you provide the link to the DB in your application; it can be in the code or in a property file. You can use a volume to mount this file into the application container, and you will also be able to modify it on the fly, because it will be accessible to you on the host system. Here is one example of doing this:
docker run -d -p 8080:8080 --name app --link db:db -v $PWD/webapp:/var/lib/tomcat7/webapps/ app
Here we are using the image app, and I am mounting the webapp folder from my Docker base folder. This folder contains my code, which gets pushed to Tomcat's webapps folder. I am also pushing the properties file to the appropriate location in the container, where the MySQL connection is specified like this: jdbc.url=jdbc:mysql://db:3306/dbname. We map port 8080 of the container to port 8080 of the host, and we link container db with container app. The link creates an entry in the /etc/hosts file ("db <ip>"), so both containers can communicate with each other even if we restart the db container (which changes its IP): links update the host entry with the new IP.
Example 2:
Up the db container.
Get the IP of the db container using docker inspect -f '{{ .NetworkSettings.IPAddress }}' container-name/ID
Up the app container with volumes so you can edit the JDBC URL on your host.
Change the IP to the db container's IP, and this should establish the connection; see the sketch after this list.
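A minimal sketch of those steps (the file name app.properties and its location are assumptions about your app):
DB_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' db)
sed -i "s|jdbc:mysql://[^:]*:3306|jdbc:mysql://${DB_IP}:3306|" "$PWD/webapp/app.properties"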
If you want to use the host's MySQL:
docker run -d --net=host --name app image command
Do let me know if you need any further explanation.
You can go through the topics on volumes and links in the Docker documentation: https://docs.docker.com/