Connect to another container using Docker Compose

I need to use two containers together: one with Tomcat and another with a database.
I have created the following YAML file, which describes the services:
postgredb:
  image: postgres
  expose:
    - 5432
  ports:
    - 5432:5432
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=password
tomcat:
  image: tomcat
  links:
    - postgredb:db
  ports:
    - 8080:8080
After starting docker-compose I can see that I'm not able to reach the database from Tomcat unless I retrieve the database's IP address (via docker inspect) and use it when configuring the Tomcat connection pool.
From my understanding, the two containers should be linked and I'd expect to find the database on localhost at port 5432; otherwise I see little benefit in linking the containers.
Is my understanding correct?

Use the alias "db" that you have defined in file to refer to the database host name.
Containers for the linked service will be reachable at a hostname
identical to the alias, or the service name if no alias was specified.
Source: https://docs.docker.com/compose/compose-file/compose-file-v2/#links
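
For example, the Tomcat connection pool can point at the alias instead of a hard-coded IP. A minimal sketch of the JDBC URL, assuming the default database created by the postgres image (when POSTGRES_DB is not set, it is named after POSTGRES_USER):

jdbc:postgresql://db:5432/user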

Related

Why is it that I am able to access a container outside the bridge network?

I started mysqldb from a docker container. I was surprised that I could connect to it via localhost using the command below:
mysql -uroot -proot -P3306 -h localhost
I thought docker containers start on the bridge network and won't be available outside that network. How is the mysql CLI able to connect to this instance?
Below is the docker compose file that runs the mysqldb-docker instance:
version: '3.8'
services:
  mysqldb-docker:
    image: 'mysql:8.0.27'
    restart: 'unless-stopped'
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-dev
    volumes:
      - mysqldb:/var/lib/mysql
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083"
    environment:
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=root
      - SPRING_DATASOURCE_URL="jdbc:mysql://mysqldb-docker:3306/reco-tracker-dev"
    depends_on: [mysqldb-docker]
    env_file:
      - ./.env
volumes:
  mysqldb:
You have published the port(s). That means you can reach them on the host system on the published port.
By default, when you create or run a container using docker create or docker run, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host to the outside world.
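
As a plain docker run sketch of the same mechanism (image and root password borrowed from the compose file above):

# publish container port 3306 on host port 3306
docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root mysql:8.0.27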
The critical section in your config is below. You have added a ports key to your service. This is Compose's way of publishing ports. The left-hand part is the host port you publish to; the right-hand part is the port the container actually listens on.
ports:
  - "3306:3306"
Also keep in mind that when you start Compose, a default network is created that joins all containers in the compose stack. That's why these containers can find each other, with the service name and/or container name as hostname.
You don't need to publish the port(s) like you did in order for the containers to communicate; I guess that's why you did it. You can, and probably should, remove any port mapping from internal services if possible. This adds extra security to your setup, because then it behaves the way you describe: only containers on the same network can find each other.
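
A minimal sketch of that hardening, assuming nothing on the host itself needs to reach MySQL (only the mysqldb-docker service changes; reco-tracker-docker keeps talking to mysqldb-docker:3306 over the compose network):

services:
  mysqldb-docker:
    image: 'mysql:8.0.27'
    # no ports: entry -- MySQL stays reachable only from containers
    # on the same compose network, not from the host
    environment:
      - MYSQL_ROOT_PASSWORD=root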

Cannot connect to MySQL using Docker

I built a website using Strapi and Gatsby. Everything works well when I connect to a remote database, but I'm trying to create a db inside a container and so far no luck.
Essentially, what I did is create the following docker-compose:
version: '3'
services:
  backend:
    container_name: myapp_backend
    build: ./backend/
    ports:
      - '3002:3002'
    volumes:
      - ./backend:/usr/src/myapp/backend
      - /usr/src/myapp/backend/node_modules
    environment:
      - APP_NAME=myapp_backend
      - DATABASE_CLIENT=mysql
      - DATABASE_HOST=db
      - DATABASE_PORT=3307
      - DATABASE_NAME=myapp_db
      - DATABASE_USERNAME=johnny
      - DATABASE_PASSWORD=stecchino
      - DATABASE_SSL=false
      - DATABASE_AUTHENTICATION_DATABASE=myapp_db
      - HOST=localhost
    depends_on:
      - db
    restart: always
  db:
    container_name: myapp_mysql
    image: mysql:5.7
    volumes:
      - ./db.sql:/docker-entrypoint-initdb.d/db.sql
    restart: always
    ports:
      - 3307:3307
    environment:
      MYSQL_ROOT_PASSWORD: 5!JF6!FgAkvt
      MYSQL_DATABASE: myapp_db
      MYSQL_USER: johnny
      MYSQL_PASSWORD: stecchino
    command: mysqld --character-set-server=utf8 --collation-server=utf8_general_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: 'myapp_phpmyadmin'
    links:
      - db
    environment:
      PMA_HOST: db
      PMA_PORT: 3307
    ports:
      - '8081:80'
    volumes:
      - /sessions
    depends_on:
      - db
  frontend:
    container_name: myapp_frontend
    build: ./frontend/
    ports:
      - '3001:3001'
    depends_on:
      - backend
    volumes:
      - ./frontend:/usr/src/myapp/frontend
The backend service contains the Strapi application; the db service contains the mysql instance, which runs on port 3307 because 3306 is already in use.
Then I also installed phpMyAdmin, and last but not least the Gatsby site. When I run docker-compose up --build and try to access phpMyAdmin at:
http://localhost:8081/index.php
with the following credentials:
user: johnny
pwd: stecchino
I get:
MySQL mysqli::real_connect():(HY000/2002): Connection refused
Now, what I did to fix the situation was pass port 3306 instead of 3307 to the backend and phpmyadmin services. And magically, everything works. But why? I mapped container and host to 3307...
There are 2 things happening here.
MySQL is running on port 3306.
This is because you never told the mysql container to run on port 3307; the default configuration runs on 3306.
phpMyAdmin can connect to mysql on port 3306.
Of course it can. This is because when you define multiple services within the same docker-compose file, they start on the same network. This means they can see and connect to each other's internal ports without the need for an external port binding like 3306:3306.
I would suggest keeping port bindings only for services that you want to access from outside the docker environment (like the UI), and for internal components just exposing the port like this:
expose:
  - 3306
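
Applied to this compose file, that could look like the sketch below (only the changed keys are shown; both clients then use MySQL's default internal port over the compose network):

db:
  expose:
    - 3306
backend:
  environment:
    - DATABASE_PORT=3306
phpmyadmin:
  environment:
    PMA_PORT: 3306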
Both answers are useful; I am particularly fond of Manish's answer.
I wanted to add some additional wording:
There are internal docker networks which nothing from the outside can access. From inside any given service (or container), you can reach every other service (or container) via:
<service-name>:<port>/path/of/resources
<container-name>:<port>/path/of/resources
In order to access resources inside the docker network from outside of docker, whether that is from your host environment, or farther upstream on the internet, the docker daemon needs to bind to host ports, and then forward information received on those ports to a docker service (and ultimately a docker container).
In your docker-compose.yml, when you write 3307:3307 you are telling the docker daemon to listen on port 3307 and forward to your db service internally on its port 3307.
However, from what we can all see, mysql is still listening internally (that is, inside the container) on port 3306. Any containers or services on the same docker networks as your db service (the running mysql container(s)) would be able to access mysql via something like:
<driver>:mysql://db:3306/<dbname>
If you wanted all host traffic and docker network traffic to access mysql on port 3307, you would also need to configure mysql to listen on port 3307 instead of 3306. That tidbit of information does not appear to be in your question at the time of writing.
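
A sketch of what that could look like, using mysqld's standard --port option appended to the command already in the compose file:

db:
  command: mysqld --port=3307 --character-set-server=utf8 --collation-server=utf8_general_ci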
I hope the additional information helps! It's a topic I chat often about when talking docker with folks.
Because 3306 is the port exposed by the official Dockerfile.
What you can do is map the port where MySQL is running to another port on your host: 3307:3306 for instance (always host:container).
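
In compose terms, that mapping is:

ports:
  - '3307:3306'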

Hello world with gitlab ce docker container running on local ubuntu

I would like to run the docker image for gitlab community edition locally on my ubuntu laptop.
I am following this tutorial.
Currently there is already another app running on localhost, so I changed the ports in docker-compose.
What I currently have: I'm in a directory I created called 'gitlab_test'. I have set a global variable per the instructions (echo $GITLAB_HOME prints /srv/gitlab).
I pulled the gitlab ce image: docker pull store/gitlab/gitlab-ce:11.10.4-ce.0
Then, in the gitlab_test directory I added a docker-compose file:
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'localhost'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.example.com'
  ports:
    - '8080:8080'
    - '443:443'
    - '22:22'
  volumes:
    - '/srv/gitlab/config:/etc/gitlab'
    - '/srv/gitlab/logs:/var/log/gitlab'
    - '/srv/gitlab/data:/var/opt/gitlab'
I am unsure if I need to put 'localhost' in place of the hostname and external_url parameters. I tried both that and leaving it as is, and in each case nothing happened. I was expecting a web interface for gitlab at localhost:8080.
I tried docker-compose up and the terminal ran for a while with a bunch of output. There's no 'done' message (perhaps because I did not use -d?), but when I visit localhost:8080 I see no gitlab interface.
How can I run the gitlab ce container?
If you want to use a different port you should not change your "container port", only the host port you are exposing your container port to. So instead of:
ports:
  - '8080:8080'
  - '443:443'
  - '22:22'
You should have done:
ports:
  - '8080:80'
  - '443:443'
  - '22:22'
Which means you expose the internal container port 80 (which you cannot change) to your host port 8080.
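
Once the stack is up, a quick sketch of how to check the mapping from the host (GitLab can take a few minutes to become ready, so early requests may fail):

docker-compose up                 # in one terminal
curl -I http://localhost:8080     # in another, once the startup output settles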
UPD: I started this service locally and I think there are a few things besides ports to consider.
You should create the $GITLAB_HOME folders (by this I mean there is no need to register an environment variable, but rather to create a set of dedicated folders). You took '/srv/gitlab/config:/etc/gitlab' from the example, but this basically means "take the content of /srv/gitlab/config and mount it to the path /etc/gitlab" inside the container. I believe paths like /srv/gitlab/config do not exist on your host.
Taking the above into account, I would suggest creating a separate folder (say my-gitlab) and creating the folders config, logs and data inside it, as the shell sketch after these steps shows. They start empty but will be filled when Gitlab starts.
Put your docker-compose.yaml into my-gitlab and switch to that folder.
Run docker-compose up from that folder. Do not use the -d flag, so that you're not detaching and can see any errors that happen.
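
A shell sketch of that setup, assuming the my-gitlab folder name from above:

mkdir -p my-gitlab/config my-gitlab/logs my-gitlab/data
cd my-gitlab             # docker-compose.yaml goes here
docker-compose up        # no -d, so errors stay visible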
Below is my docker-compose.yaml with some explanation:
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'localhost'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'http://localhost'
  ports:
    - '54321:80'
    - '54443:443'
    - '5422:22'
  volumes:
    - './config:/etc/gitlab'
    - './logs:/var/log/gitlab'
    - './data:/var/opt/gitlab'
Explanation:
I have my local services running on 80, 8080, 22 and 443, so I exposed all the ports to ones I have free at the moment.
In the http://localhost part, the http:// is important. If you set https://, Gitlab attempts to request an SSL certificate for your domain at Letsencrypt. To make this work you have to have a public domain and some sort of port configuration.
Volumes are mounted through . (the current directory), so it is important to have a consistent structure and to call docker-compose up from the proper place.
So in my case I could successfully connect to http://localhost:54321.

Connecting to 'localhost' or '127.0.0.1' with a Docker/docker-compose setup

This seems to be a common question, albeit with different contexts: I'm having a problem connecting to my localhost DB when using Docker.
If I inspect the mysql container using docker inspect, find the IP address and use this as the DB host in the CMS, it runs fine... the only issue is that the mysql container's IP address changes (upon each docker-compose up and if I change wifi networks), so ideally I'd like to use 'localhost' or '127.0.0.1', but for some reason this results in a SQLSTATE[HY000] [2002] Connection refused error.
How can I use 'localhost' or '127.0.0.1' as the DB hostname in CMS applications so I don't have to keep changing it as the container IP address changes?
This is my docker-compose.yml file:
version: "3"
services:
webserver:
build:
context: ./bin/webserver
restart: 'always'
ports:
- "80:80"
- "443:443"
links:
- mysql
volumes:
- ${DOCUMENT_ROOT-./www}:/var/www/html
- ${PHP_INI-./config/php/php.ini}:/usr/local/etc/php/php.ini
- ${VHOSTS_DIR-./config/vhosts}:/etc/apache2/sites-enabled
- ${LOG_DIR-./logs/apache2}:/var/log/apache2
networks:
mynet:
aliases:
- john.dev
mysql:
image: 'mysql:5.7'
restart: 'always'
ports:
- "3306:3306"
volumes:
- ${MYSQL_DATA_DIR-./data/mysql}:/var/lib/mysql
- ${MYSQL_LOG_DIR-./logs/mysql}:/var/log/mysql
environment:
MYSQL_ROOT_PASSWORD: example
networks:
- mynet
phpmyadmin:
image: phpmyadmin/phpmyadmin
links:
- mysql
environment:
PMA_HOST: mysql
PMA_PORT: 3306
ports:
- '8080:80'
volumes:
- /sessions
networks:
- mynet
networks:
mynet:
Try using mysql instead of localhost.
You are defining a link between the webserver container and the mysql container, so the webserver container is able to resolve the mysql IP.
According to Docker documentation:
Docker Cloud gives your containers two ways to find other services:
Using service and container names directly as hostnames
Using service links, which are based on Docker Compose links
Service and Container Hostnames update automatically when a service
scales up or down or redeploys. As a user, you can configure service
names, and Docker Cloud uses these names to find the IP of the
services and containers for you. You can use hostnames in your code to
provide abstraction that allows you to easily swap service containers
or components.
Service links create environment variables which allow containers to
communicate with each other within a stack, or with other services
outside of a stack. You can specify service links explicitly when you
create a new service or edit an existing one, or specify them in the
stackfile for a service stack.
From Docker compose documentation:
Containers for the linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified.
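
For example, from inside the webserver container the database should answer on the service name. A sketch, assuming a mysql client is installed in the webserver image (root password taken from the compose file above):

docker-compose exec webserver mysql -h mysql -P 3306 -u root -pexample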

Link between two docker containers in a network

I have a docker network between geoserver and postgres. When I run docker inspect on the container names I can see the two are linked. When I exec into the geoserver container I can ping the postgres container, but when I try to connect to a postgres db from within the geoserver container I get an error:
psql: could not translate host name postgres to address: Name or service not known
Here is an example of my docker-compose:
version: '2'
services:
  postgres:
    image: kartoza/postgis:9.5-2.2
  geoserver:
    image: geonode/geoserver
    hostname: geonode-geoserver
    links:
      - postgres:postgres
    ports:
      - "8181:8080"
I know that with docker networks the /etc/hosts file is not populated. How can I enable access to the database from the geoserver container?
The geoserver service is probably starting before the postgres service is available.
See https://docs.docker.com/compose/startup-order/
You should use a defined network to resolve names without links, and use depends_on so postgres starts before geoserver.
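
A sketch of that suggestion (the network name mynetwork is an assumption; note that depends_on only orders startup, it does not wait for postgres to be ready):

version: '2'
services:
  postgres:
    image: kartoza/postgis:9.5-2.2
    networks:
      - mynetwork
  geoserver:
    image: geonode/geoserver
    depends_on:
      - postgres      # start postgres first (does not wait for readiness)
    networks:
      - mynetwork
networks:
  mynetwork: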
In the geoserver service definition, change:
postgres:postgres
to:
postgres:kartoza/postgis:9.5-2.2
You need to match the service name to the image name