I have created the following docker-compose file...
version: '3'
services:
  db-service:
    image: postgres:11
    volumes:
      - ./db:/var/lib/postgresql/data
    expose:
      - 5432
    environment:
      - POSTGRES_PASSWORD=mypgpassword
    networks:
      - net1
  pgadmin:
    image: dpage/pgadmin4
    volumes:
      - ./pgadmin:/var/lib/pgadmin
    ports:
      - 5000:80
    environment:
      - PGADMIN_DEFAULT_EMAIL=me@gmail.com
      - PGADMIN_DEFAULT_PASSWORD=mypass
    networks:
      - net1
networks:
  net1:
    external: false
From reading various docs on the docker site, my expectation was that the pgadmin container would be able to access the postgres container via port 5432 but that I should not be able to access postgres directly from the host. However, I am able to use psql to access the database from the host machine.
In fact, if I comment out the expose and ports lines I can still access both containers from the host.
What am I missing about this?
EDIT - I am accessing the container by first running docker container inspect... to get the IP address. For the postgres container I'm using
psql -h xxx.xxx.xxx.xxx -U postgres
It prompts me for the password and then allows me to do all the normal things you would expect.
In the case of the pgadmin container I point my browser to the IP address and get the pgadmin interface.
Note that both of those are being executed from a terminal on the host, not from within either container. I've also commented out the expose command and can still access the postgres db.
docker-compose creates a network for those two containers to be able to talk to each other when you run it, through a DNS service which contains pointers to each service, by name.
So from the perspective of the pgadmin container, the database server can be reached under the hostname db-service (because that is what you named your service in the docker-compose.yml file).
So, that traffic does not go through the host, as you were assuming, but through the aforementioned network.
For proof, run docker exec -it [name-of-pg-admin-container] /bin/sh and type ping db-service. You will see that Docker provides DNS resolution and that you can even open a connection to the normal Postgres port there.
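Concretely, the check might look like this (the container name my-pgadmin is hypothetical; use whatever docker ps shows for your pgadmin service):
# open a shell inside the pgadmin container
docker exec -it my-pgadmin /bin/sh
# inside the container, the service name resolves through Docker's DNS
ping -c 3 db-service
# the pgAdmin image ships Python, so you can also probe the Postgres port
python3 -c "import socket; socket.create_connection(('db-service', 5432), 3); print('ok')"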
The containers connect to one another over the bridge network net1.
When you publish a port with ports:, Docker creates forwarding rules in your iptables that connect the host network and net1; expose by itself is only documentation and opens nothing.
Remove the ports: mapping and you will see that you can no longer reach the service through a host port (although, on Linux, the container's bridge IP itself may still be directly routable from the host, which is what you are observing).
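For comparison, a sketch of the two directives on the db-service from the question - only ports: actually wires the host to the container:
  db-service:
    image: postgres:11
    expose:
      - 5432            # documentation only; opens nothing on the host
    # ports:
    #   - "5432:5432"   # this would publish 5432 on the host via iptables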
Docker assigns an internal IP address to each container. If you happen to have this address, and it happens to be reachable, then Docker doesn't do anything specific to firewall it off. On a Linux host in particular, if a specific Docker network is on 172.17.0.0/24, the host might have the address 172.17.0.1 and a specific container might be 172.17.0.2, and they can talk to each other this way.
Using the Docker-internal IP addresses is not a best practice. If you ever delete and recreate a container, its IP address will change; on some host platforms you can’t directly access the private IP addresses even from the same host; the private IP addresses are never reachable from other hosts. In routine use you should never need to docker inspect a container.
The important level of isolation you do get here is that the container isn’t accessible from other hosts unless you explicitly publish a port (docker run -p option, Docker Compose ports: option). The setup here is much more uniform than for standard applications: set up the application inside the container to listen on 0.0.0.0 (“all host interfaces”, usually the default), and then publish a port or not as your needs require. If the host has multiple interfaces you can publish a port on only one of them (even if the application doesn’t natively support that).
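For instance, to publish the database only on the host's loopback interface, the ports: entry might look like this (a sketch; 5432 is just the usual Postgres default):
  db-service:
    image: postgres:11
    ports:
      # reachable from this host only, not from other machines
      - "127.0.0.1:5432:5432"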
Related
I've read this post and I've tried adding ports: "7080:7080" in docker-compose.yml but still can't connect to the container using 172.18.0.2:7080 (btw I'm a docker newbie)
The container is one of several in a DockStation project on Windows 10. The image I'm using is for OpenLiteSpeed with WordPress.
The docker-compose.yml file contents is below:
version: '2'
services:
  gnome-3-28-1804:
    image: ubuntudesktop/gnome-3-28-1804
  firefox:
    image: jlesage/firefox
  browser-box:
    image: jim3ma/browser-box
  openlitespeed:
    image: litespeedtech/openlitespeed
    ports:
      - "7080:7080"
Any ideas please?
UPDATE: IP 172.17.0.1 appears to be the default bridge gateway IP, so I assume 172.18.0.2 for this container is in some way related to that. Docker and DockStation are both running locally on host 10.0.0.10. Not sure if the setup should even be using a bridge. http://localhost:7080/ says ERR_CONNECTION_REFUSED.
UPDATE 2: I'm using Docker for Windows (Docker Desktop). Tried turning off the Windows firewall but makes no difference. Still getting ERR_CONNECTION_REFUSED for http://localhost:7080/ and http://10.0.0.10:7080/. There are 3 other containers in the project but not running, only the LiteSpeed one is running.
UPDATE 3: I created a new project and installed tutum/hello-world, then ran the new container. The hello-world container is running and I've not found any errors in the logs, but neither localhost nor 10.0.0.10 will connect; the error in Chrome is ERR_CONNECTION_REFUSED. Same if I run docker run -d -p 80 tutum/hello-world in a Windows command prompt.
What is this IP (172.18.0.2) representing? Is it a remote machine where DockStation is running?
If so, check whether this port is publicly available on that machine. You did add a ports section to docker-compose.yml, which maps the container's port to the machine's port - but it is still possible that e.g. a firewall blocks outside access to that port.
I would first troubleshoot by trying to access localhost:7080 from the 172.18.0.2 machine - if that works, your Docker configuration is good and you need to look for the problem in that machine's configuration (e.g. firewall).
I tried your Compose file on my system and it works as expected - I can access port 7080 both using my host system's IP and hostname and the container's IP, and ports 80 and 443 using only the container's IP (since they're not mapped to any of the host's ports).
You did not specify whether you're using Docker for Windows or Docker Toolbox - DockStation works with both, but if you're using Docker Toolbox, then you'll have to use the virtual machine's IP or hostname to access port 7080, instead of localhost. If you're using Docker for Windows, then I do not understand what is going on - are you sure the containers are running?
As for where those IPs you mentioned come from - 172.17.0.1 is most likely your host's IP on Docker's default bridged network. docker-compose, by default, creates its own bridged network for every project. In your case, in your project's network, your host's IP would be 172.18.0.1. You can view Docker's networks with the command docker network ls and their details with docker network inspect <network-name>.
You should not use any of those IPs for any reason, since there's no guarantee they'll remain the same. If you need to connect from outside, map internal container ports to your Docker host's ports, like you did with port 7080. If you need containers to connect to each other: with docker-compose you can use service names as hostnames; without it, you have to connect them to the same non-default bridged Docker network and use their container names as hostnames.
This solution worked for me.
docker run -d -p 127.0.0.1:80:80 tutum/hello-world
Apparently you have to specify that you want the port exposed under localhost. Entering localhost in the browser address bar then loaded the Hello World page - hurrah!
Once I changed the ports in docker-compose.yml to '127.0.0.1:80:80' then it also worked when run from DockStation.
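In Compose syntax, that host-interface binding looks like the following sketch (the hello-world service name is just an example):
services:
  hello-world:
    image: tutum/hello-world
    ports:
      # bind the published port explicitly to the host's loopback interface
      - "127.0.0.1:80:80"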
I have two docker containers, nginx and php, from which I want to access a MySQL server running on the host machine and a SQL Server on a remote machine.
I have tried changing the network type from "bridge" to "host", but it returns errors.
version: '2'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - /var/www/:/code
      - ./site.conf:/etc/nginx/conf.d/default.conf
    networks:
      - mynetwork
  php:
    image: php:fpm
    volumes:
      - /var/www/:/code
    networks:
      - mynetwork
networks:
  mynetwork:
    driver: bridge
I'm expecting that PHP code running in my containers can connect to those two databases.
Note: I'm not using docker run to run the containers; I'm using docker-compose up -d instead, so I just want to edit the docker-compose.yml file.
Just make sure the container can reach the external database over the network; both the "bridge" and "host" network types can do this.
First, you need to make sure you have a correct MySQL grant rule, such as allowing connections from %.
1. You can use the IP of the host to access the MySQL on the host from inside the container.
2. Other MySQL instances that belong to the same LAN as the host can likewise be accessed from the container, using the LAN IP of the MySQL instance.
Make sure the ping between them works; otherwise your Docker installation may have problems, for example with iptables.
In your php service declaration you have to add something like:
  extra_hosts:
    - "local_db:host_ip"
Where local_db is the name you will configure in your database connection string and host_ip is the IP of your host on the local network.
You have to make sure that your php code does not try to connect to "localhost" because that will not work. You need to use the server name "local_db" (in my example).
You do the same thing for the remote database, just make sure the IP is reachable.
You can remove the network declaration because it is not needed.
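Put together, the php service might look like the sketch below; the IPs are hypothetical stand-ins for your host's LAN IP and the remote machine's IP:
services:
  php:
    image: php:fpm
    volumes:
      - /var/www/:/code
    extra_hosts:
      # local_db -> the host machine running MySQL (hypothetical IP)
      - "local_db:192.168.1.10"
      # remote_db -> the remote machine running SQL Server (hypothetical IP)
      - "remote_db:10.0.0.25"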
For Docker containers to have access to each other, you can link them: Docker's --link switch adds the name and IP of one container to the /etc/hosts file of the other.
For testing purposes I would like to run a cluster of three containers, each running the same service on port 7600. Those containers should reside in one network and could theoretically access each other as host1:7600, host2:7600 and host3:7600.
However, I want to 'emulate' an external port mapping, such that the service of each container is still bound to port 7600 but the services can access each other via mapped (different) ports like host1:8881, host2:8882 and host3:8883.
How can I do that as easily as possible - preferred within a Docker Compose setup.
The reasoning is that I want to test how the service will behave with a configuration of three physical hosts running that service and mapped its port to an arbitrary external port.
I have edited the question to clarify the task, since the first answers did not meet the requirements (thank you nonetheless for every answer).
I can't use VMs, as the Test is already running within VirtualBox with no ability to get nested VT-x running.
I don't want to bind the ports to the host, nor to the same IP address.
After further investigation I found a working solution for me.
The following Docker Compose file shows an example of the solution. It shows how to make two services accessible by an external IP and external port. The example works completely in Docker without the need to run the containers in two separate virtual machines.
The two services are by example two Nginx instances. Imagine both services should access each other by their external IP and port to form a cluster. The external IP and port are emulated by two separate busybox containers mapping the ports of the service containers to their own IP.
version: '3'
services:
  service1:
    image: nginx:latest
  service2:
    image: nginx:latest
  proxy1:
    image: busybox:latest
    command: nc -lk -p 8081 -e /bin/nc service1 80
    expose:
      - "8081"
  proxy2:
    image: busybox:latest
    command: nc -lk -p 8082 -e /bin/nc service2 80
    expose:
      - "8082"
The services service1:80 and service2:80 can access each other through their external representations proxy1:8081 and proxy2:8082.
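To verify the forwarding, you could run a throwaway container on the same Compose network; this sketch reuses the busybox image (which ships wget) and assumes the stack is already up via docker-compose up -d:
# fetch service2's nginx welcome page through the proxy2 forwarder
docker-compose run --rm proxy1 wget -qO- http://proxy2:8082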
In docker-compose you can specify ports like 1234 in order to publish it on an ephemeral port, and like 127.0.0.1:1234:1234 to publish it on a specific interface.
However, is there a way to use an ephemeral port on a specific interface?
There appears to be no --ip option to docker-compose up like there is for docker run.
Unless I am mistaken, you want to publish on a specific interface with an ephemeral (randomly assigned) port. You can use this in your docker-compose.yml:
ports:
  - "127.0.0.1::1234"
Or if you don't need to specify an interface and just want an ephemeral port you can use this:
ports:
  - "1234"
In both scenarios this maps a random host port to a specific port (e.g. 1234) inside the container, similar to what -P would do in docker run.
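To find out which host port was actually assigned, you can ask Compose afterwards (a sketch; web stands in for your service name):
docker-compose port web 1234
# prints something like 127.0.0.1:49153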
To set an IP for a container in docker-compose, you can use the following to make it work similarly to --ip in docker run, assuming you have a custom network called my_network:
    networks:
      my_network:
        ipv4_address: 172.20.1.5
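Note that a static ipv4_address only takes effect if the network defines a matching subnet. A fuller sketch, with an arbitrarily chosen subnet:
services:
  web:
    image: nginx:latest
    networks:
      my_network:
        ipv4_address: 172.20.1.5
networks:
  my_network:
    ipam:
      config:
        - subnet: 172.20.0.0/16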
I created two docker containers based on two different images, one for a DB and another for a webserver. Both containers are running on my Mac OS X.
I can access the db container from the host machine, and in the same way I can access the webserver from the host machine.
However, how do I access the db connection from the webserver?
The way I started db container is
docker run --name oracle-db -p 1521:1521 -p 5501:5500 oracle/database:12.1.0.2-ee
I started wls container as
docker run --name oracle-wls -p 7001:7001 wls-image:latest
I can access db on host by connecting to
sqlplus scott/welcome1@//localhost:1521/ORCLCDB
I can access wls on host as
http://localhost:7001/console
It's easy.
If you have two or more running containers, complete the following steps:
docker network create myNetwork
docker network connect myNetwork web1
docker network connect myNetwork web2
Now you can connect from web1 to the web2 container, or the other way round.
Use the internal network IP addresses which you can find by running:
docker network inspect myNetwork
Note that only internal IP addresses and ports are accessible to the containers connected by the network bridge.
So, for example, assuming that the web1 container was started with docker run -p 80:8888 web1 (meaning that its server is running on port 8888 internally), and inspecting myNetwork shows that web1's IP is 172.0.0.2, you can connect from web2 to web1 using curl 172.0.0.2:8888.
The easiest way is to use --link; however, newer versions of Docker are moving away from that, and in fact that switch will be removed soon.
The link below offers a nice how-to on connecting two containers. You can skip the attach portion, since that is just a useful how-to on adding items to images.
https://web.archive.org/web/20160310072132/https://deis.com/blog/2016/connecting-docker-containers-1/
The part you are interested in is the communication between two containers. The easiest way, is to refer to the DB container by name from the webserver container.
Example:
You named the db container db1 and the webserver container web0. The containers should both be on the same (user-defined) bridge network, which means the web container should be able to connect to the DB container by referring to its name.
So if you have a web config file for your app, then for DB host you will use the name db1.
If you are using an older version of Docker, then you should use --link.
Example:
Step 1: docker run --name db1 oracle/database:12.1.0.2-ee
then when you start the web app. use:
Step 2: docker run --name web0 --link db1 webapp/webapp:3.0
and the web app will be linked to the DB. However, as I said the --link switch will be removed soon.
I'd use Docker Compose instead, which will build a network for you. However, you will need to install Docker Compose for your system: https://docs.docker.com/compose/install/#prerequisites
An example setup looks like this:
File name: base.yml
version: "2"
services:
webserver:
image: moodlehq/moodle-php-apache:7.1
depends_on:
- db
volumes:
- "/var/www/html:/var/www/html"
- "/home/some_user/web/apache2_faildumps.conf:/etc/apache2/conf-enabled/apache2_faildumps.conf"
environment:
MOODLE_DOCKER_DBTYPE: pgsql
MOODLE_DOCKER_DBNAME: moodle
MOODLE_DOCKER_DBUSER: moodle
MOODLE_DOCKER_DBPASS: "m#0dl3ing"
HTTP_PROXY: "${HTTP_PROXY}"
HTTPS_PROXY: "${HTTPS_PROXY}"
NO_PROXY: "${NO_PROXY}"
db:
image: postgres:9
environment:
POSTGRES_USER: moodle
POSTGRES_PASSWORD: "m#0dl3ing"
POSTGRES_DB: moodle
HTTP_PROXY: "${HTTP_PROXY}"
HTTPS_PROXY: "${HTTPS_PROXY}"
NO_PROXY: "${NO_PROXY}"
This will give the network a generic name derived from the directory, unless you use the -p (project name) switch.
E.g. docker-compose -p setup1 -f base.yml up
NOTE: if you use the -p switch, you will need to use it whenever calling docker-compose, e.g. docker-compose -p setup1 -f base.yml down. This is so you can have more than one instance of webserver and db, and so docker-compose knows which instance you want to run commands against; it also lets you have more than one running at once. Great for CI/CD, if you are running tests in parallel on the same server.
docker-compose also mirrors docker's commands, so docker-compose -p setup1 -f base.yml exec webserver do_some_command
Best part is, if you want to change DBs or something like that for unit tests, you can include an additional .yml file in the up command and it will overwrite any items with similar names - I think of it as a key => value replacement.
Example:
db.yml
version: "2"
services:
webserver:
environment:
MOODLE_DOCKER_DBTYPE: oci
MOODLE_DOCKER_DBNAME: XE
db:
image: moodlehq/moodle-db-oracle
Then call docker-compose -p setup1 -f base.yml -f db.yml up
This will override the db service with a different setup. When connecting to these services from another container, you use the name set under services: - in this case, webserver and db.
I think this might actually be a more useful setup in your case, since you can set all the variables you need in the yml files and just run the docker-compose command when you need them started - a more "start it and forget it" setup.
NOTE: I did not use the -p (publish) option, since exposing ports is not needed for container->container communication. It is needed only if you want the host, or applications outside the host, to connect to a container. If you publish a port, then the port is open to all communication that the host allows; publishing web on port 80 is the same as starting a webserver on the physical host and will allow outside connections, if the host allows it. Also, if you want to run more than one web app at once, for whatever reason, publishing port 80 will prevent you from running additional webapps on that port. So, for CI/CD it is best not to publish ports at all; and if you use docker-compose with the -p switch, all containers will be on their own network so they won't collide. You will pretty much have a container of containers.
UPDATE: After using these features further and seeing how others have done it for CI/CD programs like Jenkins, a network is also a viable solution.
Example:
docker network create test_network
The above command will create a network named test_network, which you can attach other containers to. This is made easy with the --network switch.
Example:
docker run \
--detach \
--name db1 \
--network test_network \
-e MYSQL_ROOT_PASSWORD="${DBPASS}" \
-e MYSQL_DATABASE="${DBNAME}" \
-e MYSQL_USER="${DBUSER}" \
-e MYSQL_PASSWORD="${DBPASS}" \
--tmpfs /var/lib/mysql:rw \
mysql:5
Of course, if you have proxy network settings, you should still pass those into the containers using the -e or --env-file switches, so the container can communicate with the internet. Docker says the proxy settings should be absorbed by the container in newer versions of Docker; however, I still pass them in out of habit. This is the replacement for the --link switch, which is going away. Once the containers are attached to the network you created, you can refer to them from other containers using the container's name. Per the example above, that would be db1. You just have to make sure all containers are connected to the same network, and you are good to go.
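For instance, with db1 running on test_network as above, a second container can reach it by name; here is a quick hypothetical check using the mysql client that ships in the mysql:5 image:
docker run --rm --network test_network mysql:5 \
    mysql -h db1 -u "${DBUSER}" -p"${DBPASS}" -e "SELECT 1"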
For a detailed example of using network in a cicd pipeline, you can refer to this link:
https://git.in.moodle.com/integration/nightlyscripts/blob/master/runner/master/run.sh
This is the script that is run in Jenkins for a huge set of integration tests for Moodle, but the idea/example can be used anywhere. I hope this helps others.
You will have to access the db through the IP of the host machine; or, if you want to access it via localhost:1521, then run the webserver like this -
docker run --net=host --name oracle-wls wls-image:latest
See here
Using docker-compose, services are exposed to each other by name by default. Docs.
You could also specify an alias like;
version: '2.1'
services:
  mongo:
    image: mongo:3.2.11
  redis:
    image: redis:3.2.10
  api:
    image: some-image
    depends_on:
      - mongo
      - solr
    links:
      - "mongo:mongo.openconceptlab.org"
      - "solr:solr.openconceptlab.org"
      - "some-service:some-alias"
And then access the service using the specified alias as a host name, e.g. mongo.openconceptlab.org for mongo in this case.
Environment: Windows 10, Docker Desktop version 4.5.1.
Use hostname host.docker.internal to access services running on your host machine from inside a container.
See: https://docs.docker.com/desktop/windows/networking/#use-cases-and-workarounds
I run PostgreSQL in one container and my app in a separate container.
I configure the app database connection to use host.docker.internal as the hostname and it just works.
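A minimal sketch of that kind of configuration, assuming a hypothetical app image that reads its database host from environment variables:
services:
  app:
    image: my-app:latest     # hypothetical application image
    environment:
      # host.docker.internal resolves to the host machine under Docker Desktop
      - DB_HOST=host.docker.internal
      - DB_PORT=5432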
Consider an example.
We create two containers here: a PostgreSQL server and pgAdmin (a tool for accessing servers, like phpMyAdmin, SQL Studio, or Workbench).
Exposed ports:
PostgreSQL -> 5436
pgAdmin -> 5050
After adding a server in pgAdmin with hostname localhost, it will show a connection error, because inside the pgAdmin container localhost refers to the container itself; we need the PostgreSQL container's IP to solve the problem.
docker network create con
docker network connect con app1
docker network connect con app2
This command shows the connected containers' IP addresses and other details:
docker network inspect con
Now you can see the IP addresses shown in the network inspect output. Choose the Postgres container's IP. You can access other exposed ports through this IP; here only Postgres's 5432 is exposed. Now set the hostname to the container IP and it will work.
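To grab just the IP of a single container, a format string works too (a sketch; app1 is the container name connected above):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app1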
You can use the default Docker network. If you don't want to go through any Docker networking setup, you can do this:
Copy the IP address shown under Docker subnet in Resources > Network in Docker Preferences on Mac:
[screenshot: Docker Preferences > Resources > Network]
As you can see from the screenshot, the IP address is 192.168.65.0.
You just need to replace "localhost" in your containers' config file with "192.168.65.1" (i.e. the IP address picked, plus 1).
You can start your containers and should be set for local development/testing.
For some more details, you can see my article:
Connect Docker containers the easy way
In my case, connecting to a container from another container using the IP provided by the bridge didn't work.
But it works with the name of the container (see my screenshot).
So you can replace the IP with the name of the container.