version: "3.5"
services:
database:
container_name: proj-database
env_file: ../orm/.env.${PROJ_ENV}
image: postgres
restart: always
expose:
- 5433
ports:
- 5432:5432
- 5433:5432
networks:
- proj
I can connect from other containers on port 5432, but I can't connect from other containers on port 5433. What's the problem?
Connections between containers always use the "normal" port number for the destination service. They don't consider any remappings in ports:; in fact, they don't need ports: at all. ports: are only used for connections from outside Docker into a container. Conversely, since internally each container has its own private IP address, there aren't conflicts even if you are running multiple copies of the same image.
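For illustration, here is a minimal sketch (the app service, its image name, and the DATABASE_URL variable are made up) showing that a second container still reaches Postgres as database:5432, regardless of what ports: publishes to the host:

version: "3.5"
services:
  database:
    image: postgres
    ports:
      - 15432:5432              # only affects clients on the host, e.g. psql -h localhost -p 15432
  app:
    image: example/app          # hypothetical application image
    environment:
      # container-to-container traffic always targets the container port, 5432
      DATABASE_URL: postgres://user:pass@database:5432/mydb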
Related
I built a website using Strapi and Gatsby. Everything works well when I connect to a remote database, but I'm trying to create a DB inside a container and so far no luck.
Essentially, what I did was create the following docker-compose file:
version: '3'
services:
  backend:
    container_name: myapp_backend
    build: ./backend/
    ports:
      - '3002:3002'
    volumes:
      - ./backend:/usr/src/myapp/backend
      - /usr/src/myapp/backend/node_modules
    environment:
      - APP_NAME=myapp_backend
      - DATABASE_CLIENT=mysql
      - DATABASE_HOST=db
      - DATABASE_PORT=3307
      - DATABASE_NAME=myapp_db
      - DATABASE_USERNAME=johnny
      - DATABASE_PASSWORD=stecchino
      - DATABASE_SSL=false
      - DATABASE_AUTHENTICATION_DATABASE=myapp_db
      - HOST=localhost
    depends_on:
      - db
    restart: always
  db:
    container_name: myapp_mysql
    image: mysql:5.7
    volumes:
      - ./db.sql:/docker-entrypoint-initdb.d/db.sql
    restart: always
    ports:
      - 3307:3307
    environment:
      MYSQL_ROOT_PASSWORD: 5!JF6!FgAkvt
      MYSQL_DATABASE: myapp_db
      MYSQL_USER: johnny
      MYSQL_PASSWORD: stecchino
    command: mysqld --character-set-server=utf8 --collation-server=utf8_general_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: 'myapp_phpmyadmin'
    links:
      - db
    environment:
      PMA_HOST: db
      PMA_PORT: 3307
    ports:
      - '8081:80'
    volumes:
      - /sessions
    depends_on:
      - db
  frontend:
    container_name: myapp_frontend
    build: ./frontend/
    ports:
      - '3001:3001'
    depends_on:
      - backend
    volumes:
      - ./frontend:/usr/src/myapp/frontend
The backend service contains the Strapi application; the db service contains the MySQL instance, which runs on port 3307 because 3306 is already in use.
Then I also installed phpMyAdmin and, last but not least, the Gatsby site. When I run docker-compose up --build and try to access phpMyAdmin at:
http://localhost:8081/index.php
with the following credentials:
user: johnny
pwd: stecchino
I get:
MySQL mysqli::real_connect():(HY000/2002): Connection refused
Now, what I did to fix that situation was to pass port 3306 instead of 3307 to the backend and phpmyadmin services. And magically, everything works. But why? I mapped both container and host to 3307...
There are 2 things happening here.
MySQL is running on port 3306.
This is because you never told the mysql container to run on port 3307. The default configuration runs on 3306.
phpMyAdmin can connect to MySQL on port 3306.
Of course it can. This is because when you define multiple services within the same docker-compose file, they start on the same network. This means that they can see and connect to each other's internal ports without the need for an external port binding like 3306:3306.
I would suggest keeping port bindings only for services that you want to access from outside the Docker environment (like the UI); for internal components, just expose the port like this:
expose:
  - 3306
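Applied to the compose file above, that would look roughly like this (only the relevant keys shown; MySQL keeps listening on its default port 3306 inside the container):

db:
  image: mysql:5.7
  expose:
    - 3306                 # reachable as db:3306 from other services, not published on the host
backend:
  environment:
    - DATABASE_HOST=db
    - DATABASE_PORT=3306
phpmyadmin:
  environment:
    PMA_HOST: db
    PMA_PORT: 3306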
Both answers are useful; I am particularly fond of Manish's answer.
I wanted to add some additional wording:
There are internal Docker networks which nothing from the outside can gain access to. From inside any given service (or container), you can reach every other service (or container) via:
<service-name>:<port>/path/of/resources
<container-name>:<port>/path/of/resources
In order to access resources inside the docker network from outside of docker, whether that is from your host environment, or farther upstream on the internet, the docker daemon needs to bind to host ports, and then forward information received on those ports to a docker service (and ultimately a docker container).
In your docker-compose.yml, when you write 3307:3307 you are telling the Docker daemon to listen on host port 3307 and forward to your db service internally on its port 3307.
However, from what we can all see, MySQL is still listening internally (that is, inside the container) on port 3306. Any containers or services on the same Docker networks as your db service (the running MySQL container(s)) would be able to access MySQL via something like:
<driver>:mysql://db:3306/<dbname>
If you wanted all host traffic and docker network traffic to access mysql on port 3307, you would also need to configure mysql to listen on port 3307 instead of 3306. That tidbit of information does not appear to be in your question at the time of writing.
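If you really did want 3307 everywhere, a rough sketch of that (untested; it extends the existing command line with mysqld's --port flag) might be:

db:
  image: mysql:5.7
  command: mysqld --port=3307 --character-set-server=utf8 --collation-server=utf8_general_ci
  ports:
    - 3307:3307            # host 3307 -> container 3307, where mysqld now listens

The backend and phpmyadmin services would then point at db:3307.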
I hope the additional information helps! It's a topic I chat often about when talking docker with folks.
Because 3306 is the port exposed by the official Dockerfile.
What you can do is map the port MySQL is running on to another port on your host: 3307:3306, for instance (always host:container).
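Sketched against the compose file above, that mapping would look like this; other containers keep using db:3306, while tools on the host connect to localhost:3307:

db:
  image: mysql:5.7
  ports:
    - 3307:3306            # host port 3307 -> container port 3306 (MySQL's default)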
I have 3 containers: my bot, a server, and a db. After docker-compose up, the server and db are working, but the Telegram bot makes a GET request and gets this error:
Get "http://localhost:8080/user/": dial tcp 127.0.0.1:8080: connect: connection refused
docker-compose.yml
version: "3"
services:
db:
image: postgres
container_name: todo_postgres
restart: always
ports:
- "5432:5432"
environment:
# TODO: Change it to environment variables
POSTGRES_USER: user
POSTGRES_DB: somedb
POSTGRES_PASSWORD: pass
server:
depends_on:
- db
build: .
restart: always
ports:
- 8080:8080
environment:
DB_NAME: somedb
DB_USERNAME: user
DB_PASSWORD: pass
bot:
depends_on:
- server
build:
./src/telegram_bot
environment:
BOT_TOKEN: TOKEN
restart: always
links:
- server
When using Compose, try using the container's hostname. In this case your bot should try to connect to
server:8080
Compose will handle the name resolution to the IP you need.
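One way to wire that up is to hand the bot the URL through an environment variable (SERVER_URL is a made-up name here; the bot code would have to read it instead of hard-coding localhost):

bot:
  build: ./src/telegram_bot
  depends_on:
    - server
  environment:
    BOT_TOKEN: TOKEN
    SERVER_URL: http://server:8080    # "server" resolves to the server service on the Compose network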
What you are trying to do is access localhost from within your bot container (service).
Maybe this answer will help you solve the problem. It sounds similar to your problem.
But I want to offer you another solution to your problem:
In case you don't need to access the containers from outside (from your host), one approach would be to make use of the expose functionality and a Docker network.
See docs.docker.com: network.
The expose functionality allows you to access your other containers within your network.
See docs.docker.com: expose
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
Example
What is this example doing?
A couple of steps are not mandatory:
Set a static IP for each container within your Docker network.
These steps are not needed and can be omitted. However, I like to do this, since it gives you better control over the network. You can access the containers by their hostname (which is the container name or service name) as well.
The steps that are needed are the following:
This exposes port 8080, but does not publish it:
expose:
  - 8080
The network which allows static IP configuration:
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
A complete file could look similar to this:
version: "3.8"
services:
first-service:
image: <your-image>
networks:
vpcbr:
ipv4_address: 10.5.0.2
expose:
- 8080
second-service:
image: <your-image>
networks:
vpcbr:
ipv4_address: 10.5.0.3
depends_on:
- first-service
networks:
vpcbr:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
Your bot container comes up before your server & db containers are ready.
When you use depends_on, it does not actually wait for them to finish setting themselves up.
You should use some mechanism to wait for the other container to finish its setup.
I remember that when I used an Nginx proxy I used something called wait-for-it.sh.
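As a rough sketch of that approach (assuming wait-for-it.sh has been copied into the bot image, and with ./bot standing in for the bot's real start command):

bot:
  build: ./src/telegram_bot
  depends_on:
    - server
  # wait until server:8080 accepts TCP connections before starting the bot
  command: ["./wait-for-it.sh", "server:8080", "--", "./bot"]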
How can I make my Spring Boot application, running inside Docker containers, connect to a PostgreSQL database that is running on a remote server (non-Docker environment)? Here is my docker-compose.yml file:
version: "3.3"
services:
app1:
image: repo/app1:latest
ports:
- 8000:8000
restart: always
network_mode: "host"
extra_hosts:
- 'postgresdb:192.168.2.50'
app2:
image: repo/app2:latest
ports:
- 8001:8001
restart: always
network_mode: "host"
extra_hosts:
- 'postgresdb:192.168.2.50'
The IP of the remote PostgreSQL database machine is 192.168.2.50 (hostname: postgresdb).
I am using the network_mode: "host" option and it works without any problem, but I believe this defeats the purpose of using Docker networking. What other options are available to make this work without using network_mode? The IP address and necessary ports on both the Docker machine and the remote database server are whitelisted and allowed through the firewalls.
Such an implementation obviously will not work.
Since your database is deployed remotely, a straightforward working solution is to provide its address through environment variables.
version: "3.3"
services:
app1:
image: repo/app1:latest
ports:
- 8000:8000
restart: always
network_mode: "host"
environment:
- DBHOST: "192.168.2.50"
All you need in your application is to use this variable.
Python example:
import os
dbhost = os.getenv("DBHOST")
I'm fairly new to docker and docker compose, so forgive me if this is a stupid question...
I have a compose file with 2 containers: a Home Assistant container with port 8123 exposed and a database with 5432. Home Assistant can access the database using the URL postgresql://user:password@db:5432/homeassistant_db. I think that this is because Docker has created a db binding on the host and that's why I can connect to db.
However I need to bind Home Assistant to the host, which I can do with network_mode: "host", which you can see commented out in my config. When I do this I can indeed bind to the host and Home Assistant can do its discovery of network devices etc...
Unfortunately this breaks the connection with the database, so I can't use the postgresql://user:password@db:5432/homeassistant_db URL any longer.
How do I attach Home Assistant to the host AND keep the database connection working? I guess I could change the database host from db to the Pi's IP or network name (e.g. postgresql://user:password@192.168.0.100:5432/homeassistant_db or postgresql://user:password@homeassistant.local:5432/homeassistant_db), but this doesn't feel as clean or as robust as it could be.
I don't really understand the network bindings, so I want to try and learn so I can fix this myself going forward.
compose file below:
version: '3'
services:
  db:
    restart: always
    container_name: "homeassistant_db_container"
    # image: postgres:latest
    image: tobi312/rpi-postgresql
    ports:
      - "5432:5432"
    volumes:
      - ./data/postgres/data:/var/lib/postgresql/data/pgdata
    env_file:
      - ./envs/database.env
  home_assistant:
    container_name: "homeassistant_container"
    restart: always
    image: homeassistant/raspberrypi3-homeassistant
    ports:
      - "8123:8123"
    # network_mode: "host"
    env_file:
      - ./envs/homeassistant.env
    volumes:
      - ./configs/homeassistant:/config
    depends_on:
      - db
volumes:
  data:
    driver_opts:
      type: none
      o: bind
      device: "${PWD}/data/postgres"
You can add both containers to the same network, as shown below. Then you can connect the way you want to. Just add the snippet below to your compose file; it puts both of these containers on that network. This also gives you a security layer, so that no other containers can talk to your db container.
Second, remove container_name. You are confusing yourself: services get hostnames equal to their service names by default.
networks:
  default:
    external:
      name: "tools"
This seems to be a common question, with different contexts, but I'm having a problem connecting to my localhost DB when using Docker.
If I inspect the mysql container using docker inspect, find the IP address, and use this as the DB host in the CMS, it runs fine... the only issue is that the mysql container's IP address changes (upon each docker-compose up, and if I change Wi-Fi networks), so ideally I'd like to use 'localhost' or '127.0.0.1'. But for some reason this results in a SQLSTATE[HY000] [2002] Connection refused error.
How can I use 'localhost' or '127.0.0.1' as the DB hostname in CMS applications so I don't have to keep changing it as the container IP address changes?
This is my docker-compose.yml file:
version: "3"
services:
webserver:
build:
context: ./bin/webserver
restart: 'always'
ports:
- "80:80"
- "443:443"
links:
- mysql
volumes:
- ${DOCUMENT_ROOT-./www}:/var/www/html
- ${PHP_INI-./config/php/php.ini}:/usr/local/etc/php/php.ini
- ${VHOSTS_DIR-./config/vhosts}:/etc/apache2/sites-enabled
- ${LOG_DIR-./logs/apache2}:/var/log/apache2
networks:
mynet:
aliases:
- john.dev
mysql:
image: 'mysql:5.7'
restart: 'always'
ports:
- "3306:3306"
volumes:
- ${MYSQL_DATA_DIR-./data/mysql}:/var/lib/mysql
- ${MYSQL_LOG_DIR-./logs/mysql}:/var/log/mysql
environment:
MYSQL_ROOT_PASSWORD: example
networks:
- mynet
phpmyadmin:
image: phpmyadmin/phpmyadmin
links:
- mysql
environment:
PMA_HOST: mysql
PMA_PORT: 3306
ports:
- '8080:80'
volumes:
- /sessions
networks:
- mynet
networks:
mynet:
Try using mysql instead of localhost.
You are defining a link between the webserver container and the mysql container, so the webserver container is able to resolve the mysql IP.
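If the CMS reads its database settings from the environment, a sketch of that (DB_HOST and DB_PORT are hypothetical variable names; use whatever configuration keys your CMS actually expects) would be:

webserver:
  environment:
    DB_HOST: mysql       # the service name resolves from linked/networked containers
    DB_PORT: 3306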
According to Docker documentation:
Docker Cloud gives your containers two ways to find other services:
Using service and container names directly as hostnames
Using service links, which are based on Docker Compose links
Service and Container Hostnames update automatically when a service
scales up or down or redeploys. As a user, you can configure service
names, and Docker Cloud uses these names to find the IP of the
services and containers for you. You can use hostnames in your code to
provide abstraction that allows you to easily swap service containers
or components.
Service links create environment variables which allow containers to
communicate with each other within a stack, or with other services
outside of a stack. You can specify service links explicitly when you
create a new service or edit an existing one, or specify them in the
stackfile for a service stack.
From the Docker Compose documentation:
Containers for the linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified.
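For example, a link can carry an alias (a small, hypothetical fragment based on the compose file above):

webserver:
  links:
    - mysql:database     # reachable from webserver as both "mysql" and "database"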