I have two docker containers, nginx and php, from which I want to access a MySQL server running on the host machine and a SQL Server on a remote machine.
I have tried changing the network type from "bridge" to "host", but that returns errors.
version: '2'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - /var/www/:/code
      - ./site.conf:/etc/nginx/conf.d/default.conf
    networks:
      - mynetwork
  php:
    image: php:fpm
    volumes:
      - /var/www/:/code
    networks:
      - mynetwork
networks:
  mynetwork:
    driver: bridge
I expect the PHP code running in my containers to be able to connect to those two databases.
Note: I am not using docker run to start the containers; I am using docker-compose up -d, so I just want to edit the docker-compose.yml file.
Just make sure the container can reach the external database over the network; both the "bridge" and the "host" network types can do this.
First, make sure you have a correct MySQL grant rule, such as allowing connections from %.
1. You can use the host's IP to access the MySQL instance on the host from inside the container.
2. Other MySQL instances on the same LAN as the host can also be reached from the container using their LAN IPs.
Make sure the ping works; otherwise your Docker installation may have problems, for example with iptables.
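For illustration, here is one way to hand the host's LAN IP to PHP through the compose file alone; the IP address and the DB_HOST variable name below are placeholders, not values from the question:
services:
  php:
    image: php:fpm
    environment:
      - DB_HOST=192.168.1.10   # replace with your host's LAN IP (or the remote server's IP)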
In your php service declaration you have to add something like:
extra_hosts:
  - "local_db:host_ip"
Where local_db is the name you will configure in your database connection string and host_ip is the IP of your host on the local network.
You have to make sure that your php code does not try to connect to "localhost" because that will not work. You need to use the server name "local_db" (in my example).
You do the same thing for the remote database, just make sure the IP is reachable.
You can remove the network declaration because it is not needed.
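Putting it together, here is a hedged sketch of the php service with both databases mapped via extra_hosts; the IP addresses and the remote_db name are placeholders you would replace with your own values:
php:
  image: php:fpm
  volumes:
    - /var/www/:/code
  extra_hosts:
    - "local_db:192.168.1.10"   # IP of the host machine on the local network
    - "remote_db:10.0.0.25"     # IP of the remote SQL Server machine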
In order for Docker containers to have access to each other, you can link them. Docker's link option adds the ID and IP of one container to the /etc/hosts file of another.
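As a minimal sketch, assuming the same service names as the question's file, linking in a compose file looks like this (note that links is a legacy option; containers on the same user-defined network can already reach each other by service name):
services:
  web:
    image: nginx:latest
    links:
      - php
  php:
    image: php:fpm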
I have created the following docker-compose file...
version: '3'
services:
  db-service:
    image: postgres:11
    volumes:
      - ./db:/var/lib/postgresql/data
    expose:
      - 5432
    environment:
      - POSTGRES_PASSWORD=mypgpassword
    networks:
      - net1
  pgadmin:
    image: dpage/pgadmin4
    volumes:
      - ./pgadmin:/var/lib/pgadmin
    ports:
      - 5000:80
    environment:
      - PGADMIN_DEFAULT_EMAIL=me@gmail.com
      - PGADMIN_DEFAULT_PASSWORD=mypass
    networks:
      - net1
networks:
  net1:
    external: false
From reading various docs on the docker site, my expectation was that the pgadmin container would be able to access the postgres container via port 5432 but that I should not be able to access postgres directly from the host. However, I am able to use psql to access the database from the host machine.
In fact, if I comment out the expose and ports lines I can still access both containers from the host.
What am I missing about this?
EDIT - I am accessing the container by first running docker container inspect... to get the IP address. For the postgres container I'm using
psql -h xxx.xxx.xxx.xxx -U postgres
It prompts me for the password and then allows me to do all the normal things you would expect.
In the case of the pgadmin container I point my browser to the IP address and get the pgadmin interface.
Note that both of those are being executed from a terminal on the host, not from within either container. I've also commented out the expose command and can still access the postgres db.
docker-compose creates a network for those two containers to be able to talk to each other when you run it, through a DNS service which contains pointers to each service, by name.
So from the perspective of the pgadmin container, the dbserver can be reached under hostname db-service (because that is what you named your service in the docker-compose.yml file).
So, that traffic does not go through the host, as you were assuming, but through the aforementioned network.
For proof, run docker exec -it [name-of-pg-admin-container] /bin/sh and type ping db-service. You will see that Docker provides DNS resolution and that you can even open a connection to the normal postgres port there.
The containers connect to one another over the bridge network net1.
When you publish a port, Docker creates port-forwarding rules in iptables that connect the host network and net1.
If you stop publishing port 5432 on db-service, you will see that you can no longer connect from your host to db-service through a published port (although on Linux the container may still be reachable directly via its bridge IP, as described in the next answer).
Docker assigns an internal IP address to each container. If you happen to have this address, and it happens to be reachable, then Docker doesn’t do anything specific to firewall it off. On a Linux host in particular, if a specific Docker network is on 172.17.0.0/24, the host might have the 172.17.0.1 address and a specific container might be 172.17.0.2, and they can talk to each other this way.
Using the Docker-internal IP addresses is not a best practice. If you ever delete and recreate a container, its IP address will change; on some host platforms you can’t directly access the private IP addresses even from the same host; the private IP addresses are never reachable from other hosts. In routine use you should never need to docker inspect a container.
The important level of isolation you do get here is that the container isn’t accessible from other hosts unless you explicitly publish a port (docker run -p option, Docker Compose ports: option). The setup here is much more uniform than for standard applications: set up the application inside the container to listen on 0.0.0.0 (“all host interfaces”, usually the default), and then publish a port or not as your needs require. If the host has multiple interfaces you can publish a port on only one of them (even if the application doesn’t natively support that).
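For example, here is a compose sketch that publishes PostgreSQL only on the loopback interface; the service name and port are just an illustration:
services:
  db-service:
    image: postgres:11
    ports:
      - "127.0.0.1:5432:5432"   # reachable from the host only via 127.0.0.1, not from other machines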
In docker-compose you can specify ports like 1234 in order to publish it on an ephemeral port, and like 127.0.0.1:1234:1234 to publish it on a specific interface.
However, is there a way to use an ephemeral port on a specific interface?
There appears to be no --ip option to docker-compose up like there is for docker run.
Unless I am mistaken, you want to publish on a specific interface with an ephemeral (randomly assigned) port. You can use this in your docker-compose.yml:
ports:
  - "127.0.0.1::1234"
Or if you don't need to specify an interface and just want an ephemeral port you can use this:
ports:
  - "1234"
In both scenarios the container's port (e.g. 1234) is published on a random host port, similar to what -P does with docker run.
To set an IP for a container in docker-compose, working similarly to --ip in docker run, you can use the following under the service, assuming you have a custom network called my_network:
networks:
  my_network:
    ipv4_address: 172.20.1.5
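Note that a fixed ipv4_address only works if the network defines a subnet containing that address; here is a fuller sketch, where the subnet and the my_service name are assumptions for illustration:
services:
  my_service:
    image: nginx:latest
    networks:
      my_network:
        ipv4_address: 172.20.1.5
networks:
  my_network:
    ipam:
      config:
        - subnet: 172.20.0.0/16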
I am working on a micro-service architecture where we have many different projects and all of them connect to the same redis instance. I want to move this architecture to Docker for the development environment. Since all of the projects have separate repositories, I cannot simply use one docker-compose.yml file to connect them all. After doing some research I figured out that I can create a shared external network to connect all of the projects, so I started by creating a network:
docker network create common_network
I created a separate project for common services such as mongodb, redis, and rabbitmq (the services that are used by all projects). Here is the sample docker-compose file for this project:
version: '3'
services:
  redis:
    image: redis:latest
    container_name: test_project_redis
    ports:
      - "6379:6379"
    networks:
      - common_network
networks:
  common_network:
    external: true
Now when I run docker-compose build and docker-compose up -d it works like a charm, and I can connect to redis from my local machine using 127.0.0.1:6379. But there is a problem when I try to connect to this redis container from another container.
Here is another sample docker-compose.yml, for a different project which runs Node.js (I am not including the Dockerfile since it is irrelevant to this issue):
version: '3'
services:
  api:
    build: .
    container_name: sample_project_api
    networks:
      - common_network
networks:
  common_network:
    external: true
There is no problem when I build and run this docker-compose file either, but the Node.js project gets a CONNREFUSED 127.0.0.1:6379 error, i.e. it obviously cannot connect to the Redis server over 127.0.0.1.
So I opened a shell into the api container (docker exec -i -t sample_project_api /bin/bash) and installed redis-tools to run some tests.
When I run redis-cli ping it returns Could not connect to Redis at 127.0.0.1:6379: Connection refused.
I checked the external network to see if all of the containers are connected to it properly, using docker network inspect common_network. There was no problem: all of the containers were listed under Containers, and from there I noticed that the test_project_redis container had an IP address of 192.168.16.3.
As a final test I tried to use the internal IP address of the redis container:
From the sample_project_api container I ran redis-cli -h 192.168.16.3 ping and it returned PONG, so that worked.
So my problem is that I cannot connect to the redis server from other containers using the IP address 127.0.0.1 or 0.0.0.0, but I can connect using 192.168.16.3, which changes every time I restart the docker container. What is the reason behind this?
Containers have namespaced networking. Each container has its own loopback interface and gets one IP per network you attach it to. Therefore loopback, i.e. 127.0.0.1, in one container refers to that container itself, not to the redis container. To connect to redis, use the service name in your commands, which Docker will resolve to the IP of the container running redis:
redis:6379
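For example, in the Node.js project's compose file you could pass that address in via an environment variable; the REDIS_URL name is just an illustration, not something from the question:
services:
  api:
    build: .
    container_name: sample_project_api
    environment:
      - REDIS_URL=redis://redis:6379   # "redis" resolves through Docker's DNS on common_network
    networks:
      - common_network
networks:
  common_network:
    external: true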
I'm running a docker-compose stack that exposes Logstash on port 5044 (using docker-elk). I'm able to make requests to the service at localhost:5044 on my host, so the port is exposed correctly.
I'm then running another docker-compose stack (Filebeat), but from there I cannot connect to localhost:5044. Here is the docker-compose file:
version: '2'
services:
  filebeat:
    build: filebeat/
    networks:
      - elk
networks:
  elk:
    driver: bridge
Any clue why localhost:5044 is not accessible from this docker-compose stack?
First of all, the compose file you linked exposes port 5000, but you say you're trying to connect to port 5044.
Secondly, exposing port 5044 (or 5000) will make the port available to the host machine, not to other containers launched with other compose files.
The way I see it, you can either:
keep the first service as it is and, instead of localhost:port, use your_ip:port on the second service, where your_ip can be retrieved from ifconfig -a or something similar and should look like 192.168.x.x; or
connect both services to an externally created network, like so:
first create the network with docker network create test_network
link the services to the external network in the compose file:
networks:
  test_network:
    external: true
Then change the Logstash reference from localhost:port to logstash:port, as sketched below.
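A hedged sketch of the Filebeat side after those changes (the Logstash service in the docker-elk compose file must also be attached to test_network in the same way):
services:
  filebeat:
    build: filebeat/
    networks:
      - test_network
networks:
  test_network:
    external: true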
Good luck
This is my docker-compose.yml:
version: '2'
services:
  postgres:
    image: "postgres:9.4"
    ports:
      - "5432:5432"
I use it to bring up the postgres db like this:
docker-compose up
Now I can connect to the db locally:
> psql postgres://postgres:postgres@docker-machine:5432
psql (9.6.4, server 9.4.13)
Type "help" for help.
postgres=#
My app is a Ruby/Sinatra app which connects to the postgres db.
When I start the app directly, using a bin/server script, the app boots up and connects to the db just fine.
This is how I build and run the dockerized app:
docker build -t my_app .
docker run \
  -p 3001:3001 \
  -e DATABASE_URI="postgres://postgres:postgres@docker-machine:5432" \
  my_app bin/server
When I run the app via Docker like this, it fails to connect to the db, and prints this error:
Sequel::DatabaseConnectionError: PG::ConnectionBad: could not connect to server: Connection refused
Is the server running on host "docker-machine" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
How can I get my dockerized app to connect to the postgres db at the provided URI?
You could do one of the following:
Bring up both containers at the same time using the same docker-compose file.
Use a name server, which will allow you to use a name for your DB container so that it is visible from inside your App container.
Use the DB container's IP address directly; however, if the DB container crashes and restarts it will have a different IP address, which will break your App container.
Your App container tries to connect to localhost (inside the container itself), and of course it can't, because there is no Postgres running there.
As the error states, "docker-machine" resolves to 127.0.0.1 inside the container. While that hostname works on the host, it won't in the container. You could pass the host's name in as an environment variable, like:
docker run .. -e HOST_HOSTNAME=hostname ..
Then use that variable in your connection string.
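If you later move the app into a compose file, the same idea can be expressed with compose's variable substitution; this is only a sketch and assumes HOST_HOSTNAME is exported in the shell that runs docker-compose:
services:
  my_app:
    image: my/app
    ports:
      - "3001:3001"
    environment:
      - DATABASE_URI=postgres://postgres:postgres@${HOST_HOSTNAME}:5432
    command: bin/server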
Docker Compose creates a user defined network by default for version 2+ compose files.
User defined networks come with an embedded DNS server that can respond for the local container names. In this case, the service name postgres will resolve to the IP of your database on the user defined network compose creates.
To connect your app to the same network, modify your existing compose file, or create a new compose file in the same directory with your apps details.
version: '2'
services:
  postgres:
    image: "postgres:9.4"
    ports:
      - "5432:5432"
  my_app:
    build: .
    image: my/app
    ports:
      - '3001:3001'
    environment:
      DATABASE_URI: 'postgres://postgres:postgres@postgres:5432'
    command: 'bin/server'
Then each container's IP will be available by its service name.