Docker Compose cannot connect to database

I'm using NestJS for my backend and TypeORM as the ORM.
I tried to define my database and my application in a docker-compose file.
If I run my database as a container and my application from my local machine, it works well: my program connects and creates the tables, etc.
But if I try to connect to the database from within my container, or start everything with docker-compose up, it fails.
I always get an ECONNREFUSED error.
Where is my mistake?
docker-compose.yml
version: '3.1'
volumes:
  dbdata:
services:
  db:
    image: postgres:10
    volumes:
      - ./dbData/:/var/lib/postgresql/data
    restart: always
    environment:
      - POSTGRES_PASSWORD=${TYPEORM_PASSWORD}
      - POSTGRES_USER=${TYPEORM_USERNAME}
      - POSTGRES_DB=${TYPEORM_DATABASE}
    ports:
      - ${TYPEORM_PORT}:5432
  backend:
    build: .
    ports:
      - "3001:3000"
    command: npm run start
    volumes:
      - .:/src
Dockerfile
FROM node:10.5
WORKDIR /home
# Bundle app source
COPY . /home
# Install app dependencies
#RUN npm install -g nodemon
# If you are building your code for production
# RUN npm install --only=production
RUN npm i -g @nestjs/cli
RUN npm install
EXPOSE 3000
.env
# .env
HOST=localhost
PORT=3000
NODE_ENV=development
LOG_LEVEL=debug
TYPEORM_CONNECTION=postgres
TYPEORM_HOST=localhost
TYPEORM_USERNAME=postgres
TYPEORM_PASSWORD=postgres
TYPEORM_DATABASE=mariokart
TYPEORM_PORT=5432
TYPEORM_SYNCHRONIZE=true
TYPEORM_DROP_SCHEMA=true
TYPEORM_LOGGING=all
TYPEORM_ENTITIES=src/database/entity/*.ts
TYPEORM_MIGRATIONS=src/database/migrations/**/*.ts
TYPEORM_SUBSCRIBERS=src/database/subscribers/**/*.ts
I tried to use links, but it doesn't work in the container.

Take a look at your /etc/hosts inside the backend container. You will see
192.0.18.1 dir_db_1
or something like that. The IP will be different, and dir represents the directory you're in. Therefore, you must change TYPEORM_HOST=localhost to TYPEORM_HOST=dir_db_1.
However, I suggest you set static names for your containers:
services:
  db:
    container_name: project_db
    ...
  backend:
    container_name: project_backend
In this case you can always be sure that your container will have a static name, and you can set TYPEORM_HOST=project_db and never worry about the name again.

You can create a network and share it between the two services.
Create a network for the db and backend services:
networks:
  common-net: {}
and add the network to these two services. So your .yml file would look like this after the edit:
version: '3.1'
volumes:
  dbdata:
services:
  db:
    image: postgres:10
    volumes:
      - ./dbData/:/var/lib/postgresql/data
    restart: always
    environment:
      - POSTGRES_PASSWORD=${TYPEORM_PASSWORD}
      - POSTGRES_USER=${TYPEORM_USERNAME}
      - POSTGRES_DB=${TYPEORM_DATABASE}
    ports:
      - ${TYPEORM_PORT}:5432
    networks:
      - common-net
  backend:
    build: .
    ports:
      - "3001:3000"
    command: npm run start
    volumes:
      - .:/src
    networks:
      - common-net
networks:
  common-net: {}
Note 1: After this change, there is no need to expose the Postgres port externally unless you have a reason to. You can remove that section.
Note 2: TYPEORM_HOST should be changed from localhost to db; Docker resolves the IP address of the db service by its service name.
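For example, one way to keep localhost in .env for local runs while still letting the container resolve the database is to override the host only in the compose file. This is a minimal sketch, assuming the app reads TYPEORM_HOST from the process environment and that the .env loader does not overwrite variables that are already set:
  backend:
    build: .
    command: npm run start
    environment:
      - TYPEORM_HOST=db   # overrides the .env value inside the container only
    networks:
      - common-net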

Related

Dockerimage working on pull but not on pull image directive in yml file?

I have a Docker image on a GitLab registry.
When I run (after logging in on the target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is up, running and reachable. Things like php artisan config:clear work, and when I enter the container everything looks fine.
But I don't have any services running. So I had the idea to create a YAML file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then it fails, exiting with code 0 and no further message.
If I add commands to my YAML like php artisan config:clear, the error gets even less clear for me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper, executed via PHP.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem was that I had left a volume directive in place, which overwrites my entire application with an empty directory.
You can just leave that out.
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the networking of the containers by listing the networks with docker network ls,
then, once the list is shown, inspecting the compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach the other service.
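As a sketch of that last point: if the application is a Laravel app that reads its database host from the environment (DB_HOST and DB_PORT are the usual Laravel variables, not something shown in the question), the compose file could point it at the MySQL container by name instead of localhost:
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    environment:
      - DB_HOST=mysql   # the service name resolves via the default compose network
      - DB_PORT=3306    # container-to-container traffic uses 3306, not the published 3307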

Host to Docker Container to Docker Container communication

I'm building 2 docker containers, "app" and "db", via a docker-compose file.
The app server just installs java/tomcat via a Dockerfile which is what docker-compose uses to build.
The db server uses an MS SQL image.
When I run:
docker-compose up
I follow that with a build process for the software I need to load, which deploys a WAR to the Tomcat directory in the app server and builds the database in the database server.
My problem is: the build process can reference localhost:8080 to install/patch the software on the app server and localhost:1433 to install/patch the database portion on the database server. However, when I start Tomcat the system doesn't come online, because the app server can't connect to the database server via "localhost:1433"; I have to jump in and update the properties file after the build to the Docker-internal IP address, and THEN it works.
My question is: how can I get my localhost and my app container to reference the DB in the same manner in a database URL?
Dockerfile for app server:
FROM centos:centos7
COPY apache-tomcat-9.0.20.tar.gz /tmp/
WORKDIR /tmp/
RUN yum -y update
RUN yum -y install java-11-openjdk-devel
RUN tar -xf apache-tomcat-9.0.20.tar.gz
RUN mv apache-tomcat-9.0.20 /opt/tomcat/
RUN export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/
RUN export PATH=$PATH:$JAVA_HOME/jre/bin
RUN export CATALINA_HOME=/opt/tomcat/
RUN export PATH=$PATH:$CATALINA_HOME/bin
WORKDIR /opt/tomcat/webapps
RUN mkdir testapp
Docker-Compose File:
version: '3.3'
services:
  db:
    image: "mcr.microsoft.com/mssql/server:2017-latest"
    restart: always
    volumes:
      - db_data:/var/lib/mssql
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Test123
    network_mode: bridge
    hostname: db
    ports:
      - "1433:1433"
  app:
    build: './testapp'
    volumes:
      - './system/build:/opt/tomcat/webapps/testapp/'
    ports:
      - "8080:8080"
      - "8009:8009"
    network_mode: bridge
    tty: true
    depends_on:
      - db
volumes:
  db_data:
Bring your services into the same network and target each service by its service name. For that you need to define a Docker network like below. With the following example I can access the DB at http://mongo:27017.
  mongo:
    image: mongo:latest
    ports:
      - "27017:27017"
    volumes:
      - ./data/db:/data/db
    networks:
      - my-net
  spring:
    depends_on:
      - mongo
    image: docker-spring-http-alpine
    ports:
      - "8080:8080"
    networks:
      - my-net
networks:
  my-net:
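Applied to the compose file from the question, a minimal sketch could look like the following. It assumes the properties file is updated to use db:1433 and that network_mode: bridge is dropped so both services join a user-defined network; the host-side build can keep using localhost:1433 through the published port (environment and volume sections omitted for brevity):
version: '3.3'
services:
  db:
    image: "mcr.microsoft.com/mssql/server:2017-latest"
    hostname: db
    ports:
      - "1433:1433"   # still published for the build/patch process running on the host
    networks:
      - app-net
  app:
    build: './testapp'
    ports:
      - "8080:8080"
    depends_on:
      - db
    networks:
      - app-net       # inside this network the app reaches the database at db:1433
networks:
  app-net: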

Running docker-compose up, stuck on an "infinite" "creating... [container/image]" with php and mysql images

I'm new to Docker, so I don't know if it's a programming mistake or something. One thing I found strange is that on a Mac it worked fine, but running on Windows it doesn't.
docker-compose.yml
version: '2.1'
services:
  db:
    build: ./backend
    restart: always
    ports:
      - "3306:3306"
    volumes:
      - /var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=123
      - MYSQL_DATABASE=demo
      - MYSQL_USER=user
      - MYSQL_PASSWORD=123
  php:
    build: ./frontend
    ports:
      - "80:80"
    volumes:
      - ./frontend:/var/www/html
    links:
      - db
Dockerfile inside ./frontend
FROM php:7.2-apache
# Enable mysqli to connect to database
RUN docker-php-ext-install mysqli
# Document root
WORKDIR /var/www/html
COPY . /var/www/html/
Dockerfile inside ./backend
FROM mysql:5.7
COPY ./demo.sql /docker-entrypoint-initdb.d
Console:
$ docker-compose up
Creating phpsampleapp_db_1 ... done
Creating phpsampleapp_db_1 ...
Creating phpsampleapp_php_1 ...
It stays like that forever; I tried a bunch of things.
I'm using Docker version 17.12.0-ce and have Linux container mode enabled.
I think I don't need the "version" and "services" keys, but anyway.
Thanks.
In my case, the fix was simply to restart Docker Desktop. After that, everything went smoothly.

Access redis database in docker compose

I have a Django app that I want to move to docker. A redis dump.rdb file is in the root directory of the project, and contains data needed for the app to work. I normally start that by running redis-server while in the same directory. How can I move this configuration to docker? I know I can use volumes and suspect I need to mount my code folder as one, but will that cause other issues? Here is my current docker setup:
Dockerfile
FROM python:2.7.14
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD . /code/
ADD requirements /requirements
RUN pip install -r /requirements/local.txt
docker-compose.yml
version: '3'
services:
  db:
    image: postgres:9.6.3
    expose:
      - "5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:3.2.6
    expose:
      - "6379"
    volumes:
      - ./code
  redis_cache:
    image: redis:3.2.6
    expose:
      - "6379"
  elasticsearch:
    image: elasticsearch:5.6.6
    expose:
      - "9200"
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://postgres@db/postgres
      - ENVIRONMENT=development
      - REDIS_URL=redis://redis:6379
      - REDIS_CACHE_URL=redis://redis_cache:6379
      - ELASTIC_ENDPOINT=elasticsearch:9200
    env_file: docker.env
    depends_on:
      - db
      - redis
      - elasticsearch
    volumes:
      - .:/code
volumes:
  pgdata: {}
There are several ways; which one to prefer depends on your project and on what kind of information is stored in the dump.rdb file.
You can create your own custom Redis image with the dump.rdb file inside. Then you need to push it to your registry.
You can, as you mention above, mount a volume from the source code. But I would prefer to mount not the whole code directory, only a directory that holds the data intended for Redis (see the sketch below).
Also, you can create a migration script in the web container. It could create data in the redis container as well as in the db container.
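A minimal sketch of the second option, assuming dump.rdb is moved into a redis-data/ directory next to docker-compose.yml (the official Redis image loads dump.rdb from /data when it starts):
  redis:
    image: redis:3.2.6
    expose:
      - "6379"
    volumes:
      - ./redis-data:/data   # directory containing dump.rdb, loaded automatically on startup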

Exposing localhost ports in several local services

I'm currently attempting to use Docker to make our local dev experience involving two services easier, but I'm struggling to use host and container ports in the right way. Here's the situation:
One repo containing a Rails API, running on 127.0.0.1:3000 (let's call this backend)
One repo containing an isomorphic React/Redux frontend app, running on 127.0.0.1:8080 (let's call this frontend)
Both have their own Dockerfile and docker-compose.yml files, as they are in separate repos, and both start fine with docker-compose up.
We're currently not using Docker at all for CI or deployment, but plan to in the future.
The issue I'm having is that in local development the frontend app looks for the API backend on 127.0.0.1:3000 from within the frontend container, which isn't there: it's only available to the host and to the backend container actually running the Rails app.
Is it possible to forward the backend container's port 3000 to the frontend container? Or at the very least the host's port 3000, since I can see the Rails app on localhost on my computer. I've tried 127.0.0.1:3000:3000 in the frontend docker-compose, but I can't do that while the Rails app is running, as the port is in use and it fails to connect. I'm thinking maybe I've misunderstood the point or am missing something obvious?
Files:
frontend Dockerfile
FROM node:8.7.0
RUN npm install --global --silent webpack yarn
RUN mkdir /app
WORKDIR /app
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
RUN yarn install
COPY . /app
frontend docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: yarn start:dev
    volumes:
      - .:/app
    ports:
      - '8080:8080'
      - '127.0.0.1:3000:3000' # rails backend exposed to localhost within container
backend Dockerfile
FROM ruby:2.4.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
RUN bundle install
COPY . /app
backend docker-compose.yml
version: '3'
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    image: postgres:9.6
    volumes:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres
You have to put the containers on one network. Do it in your docker-compose.yml files.
Check the docs to learn about networks in Docker.
frontend docker-compose.yml
version: '3'
services:
  gui:
    build: .
    command: yarn start:dev
    volumes:
      - .:/app
    ports:
      - '8080:8080'
      - '127.0.0.1:3000:3000'
    networks:
      - webnet
networks:
  webnet:
backend docker-compose.yml
version: '3'
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    image: postgres:9.6
    volumes:
      - postgres-data:/var/lib/postgresql/data
  back:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres
    networks:
      - webnet
networks:
  webnet:
Docker has its own DNS resolution, so after you do this you will be able to connect to your backend by setting the address to http://back:3000.

I managed to solve this by using external links in the frontend app to link to the default network of the backend app, like so:
version: '3'
services:
  web:
    build: .
    command: yarn start:dev
    environment:
      - API_HOST=http://backend_web_1:3000
    external_links:
      - backend_default
    networks:
      - default
      - backend_default
    ports:
      - '8080:8080'
    volumes:
      - .:/app
networks:
  backend_default: # share with backend app
    external: true
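An equivalent, slightly more explicit variant is to declare a shared external network by name in both compose files (dev-shared is a hypothetical name; it would have to be created once with docker network create dev-shared before either project is started):
# added to both the frontend and the backend docker-compose.yml
services:
  web:
    # ...existing service definition...
    networks:
      - dev-shared
networks:
  dev-shared:
    external: true
As in the solution above, the frontend would then reach the Rails container by its container name, for example http://backend_web_1:3000.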
