HTTPS for docker containers

I am developing a workflow service as a training project. Abstracting from the details, everything you need to know for this question is in the setup below. For deployment, I rented a server and ran docker-compose on it. Everything works well, but what worries me is that ports 8000 and 5432 are open.
The first question: is this worth worrying about? And if so, how do I get rid of it?
The docker-compose file content is below:
version: "3"
services:
db:
container_name: 'emkk-db'
image: postgres
volumes:
- ./backend/data:/var/lib/postgresql/data
env_file:
- ./backend/db.env
ports:
- "5432:5432"
backend:
container_name: 'emkk-backend'
image: emkk_backend
build: ./backend
volumes:
- ./backend:/emkk/backend
env_file:
- ./backend/.env
ports:
- "8000:8000"
depends_on:
- db
frontend:
container_name: 'emkk-frontend'
image: emkk_frontend
build: ./frontend
command: npm run start
env_file:
- ./frontend/.env
volumes:
- /emkk/frontend/node_modules
- ./frontend:/emkk/frontend
ports:
- "80:80"
depends_on:
- backend
I also want to configure HTTPS. I tried installing nginx on the host, putting a certificate on it with certbot, and then proxying requests to the containers. I sat with this for several hours and still did not manage to achieve anything better than HTTPS for the default nginx start page.
Maybe I'm doing completely the wrong things, but I'm new to this and haven't had to deal with deployments before. I would be grateful for answers that contain an idea or an example of how this can be done.

If nothing outside the server needs to connect to port 8000 (probably the application server) or 5432 (the database), you can change docker-compose.yml as shown below:
Expose only the ports that external clients actually need.
When you connect to the backend from the web (frontend) container, use the service name, e.g. backend:8000.
When you connect to the db from the backend, use the service name, e.g. db:5432.
A short illustration of these service-name connections is shown after the file.
version: "3"
services:
db:
container_name: 'emkk-db'
image: postgres
volumes:
- ./backend/data:/var/lib/postgresql/data
env_file:
- ./backend/db.env
backend:
container_name: 'emkk-backend'
image: emkk_backend
build: ./backend
volumes:
- ./backend:/emkk/backend
env_file:
- ./backend/.env
depends_on:
- db
frontend:
container_name: 'emkk-frontend'
image: emkk_frontend
build: ./frontend
command: npm run start
env_file:
- ./frontend/.env
volumes:
- /emkk/frontend/node_modules
- ./frontend:/emkk/frontend
ports:
- "80:80"
depends_on:
- backend
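For illustration, the in-network connections then use the service names. A minimal sketch with hypothetical variable names (the real ones live in your env files):

backend:
  environment:
    - DATABASE_HOST=db              # hypothetical variable; the db service is reachable at db:5432 on the compose network
frontend:
  environment:
    - API_URL=http://backend:8000   # hypothetical variable; the backend service, no published host port needed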
And you can use Nginx Proxy Manager to serve the site over HTTPS with a certificate from certbot / Let's Encrypt.
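For example, one way to do that is to add Nginx Proxy Manager as another service in the same compose file. A minimal sketch (the service name and host paths are illustrative; check the image's documentation for the current recommended setup):

  # added under services:, alongside db, backend and frontend
  proxy:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - "80:80"     # HTTP
      - "443:443"   # HTTPS
      - "81:81"     # admin UI
    volumes:
      - ./proxy/data:/data
      - ./proxy/letsencrypt:/etc/letsencrypt
    depends_on:
      - frontend

In the admin UI you create a proxy host that forwards your domain to http://frontend:80 and request a Let's Encrypt certificate there; the frontend service then no longer needs to publish port 80 on the host itself.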

Related

docker-compose gives error: no such service

I'm trying to deploy an app with docker compose; my yml is:
version: '3.9'
services:
  db:
    image: mongo
    restart: always
    volumes:
      - 'dbdata:/data/db'
    container_name: database
  server:
    build: .
    restart: always
    ports:
      - '2000:2000'
    depends_on:
      - db
    container_name: api
    links:
      - database
  frontend:
    build: ./client
    restart: always
    ports:
      - '3000:3000'
    depends_on:
      - server
    container_name: client
    links:
      - api
volumes:
  dbdata:
When I docker compose up -d this on my PC it works correctly, but when I do the same thing on an Oracle Linux 8.6 machine, it seems to work fine until it reaches the lines:
Network soccer_default Created
no such service: database
and then the process stops. Is this something related to the machine or the Docker version?

Docker Postgres database not running or accessible

Below is my Dockerfile:
FROM node:14
WORKDIR /workspace
COPY . .
COPY /prisma ./prisma/
RUN npm install
EXPOSE 3333
EXPOSE 9229
CMD [ "npm", "run", "start" ]
And my docker-compose.yml
version: '3.8'
services:
  todoapp-api:
    container_name: todoapp-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
  postgres:
    image: postgres:13.5
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
volumes:
  postgres:
networks:
  nestjs-crud:
And my .env:
DATABASE_URL="postgresql://myuser:mypassword@192.168.1.1/mydb?schema=public"
After struggling with making the database run and be accessible, I found out that one possible solution was to change the DATABASE_URL. As you can see, I am writing my IP Address there to get it to run and this works for me. However, when I replace 192.168.1.1 with the name of the service: postgres, it stops working and I get the error:
Can't reach database server at postgres:5432
Writing the IP address is not ideal of course. However, if I don't write the IP address then the database server just doesn't work.
I think you need to attach the networks in the container specs. You already defined the networks in the YAML, but they also need to be attached in each container's spec, like:
todoapp-api:
  container_name: todoapp-api
  networks:
    - nestjs-crud
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - 3333:3333

networks:
  nestjs-crud:
    internal: true
My recommendation is to create one network for the db and another for the API, then assign the db network to the db and both networks to the API; that way the API can access the db network. You can then reach the db from the API by its service name, postgres.
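A rough sketch of that layout, with hypothetical network names (api-net and db-net):

services:
  todoapp-api:
    networks:
      - api-net
      - db-net    # the API joins both networks, so it can reach postgres
  postgres:
    networks:
      - db-net    # the database is only reachable from the db network
networks:
  api-net:
  db-net: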
To build on the point from the comment above: the two services are not on the same network, which is why you have this problem. To solve it, put the services on the same network by adding
networks:
  - nestjs-crud
to both the todoapp-api and postgres services, and depends_on to todoapp-api. The file then becomes:
version: '3.8'
services:
  todoapp-api:
    container_name: todoapp-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
    networks:
      - nestjs-crud
    depends_on:
      - postgres
  postgres:
    image: postgres:13.5
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    networks:
      - nestjs-crud
volumes:
  postgres:
networks:
  nestjs-crud:
And in .env, use the database service name as the host.
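Based on the .env shown in the question, that would look something like:

DATABASE_URL="postgresql://myuser:mypassword@postgres:5432/mydb?schema=public"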

How do you set two docker containers from the same docker-compose to communicate with one another using HTTP-Get?

Okay, so we have a C# .NET Core app which has 3 parts. Each part communicates through HTTP requests, and the docker-compose starts all 3 parts.
Using Postman, we're able to hit dbconn directly and successfully connect to the db.
However, we can't go from the app to dbconn. If we make a GET request from deployUS to connDB, it throws an error saying:
---> System.Net.Http.HttpRequestException: Connection refused (dbconn:5002)
This is our docker-compose.yml:
version: '3.9'
services:
  job:
    container_name: "job"
    build:
      context: .
      dockerfile: Job/Dockerfile
    ports:
      - "5003:80"
  dbconn:
    container_name: "dbconn"
    build:
      context: .
      dockerfile: ConnDB/Dockerfile
    ports:
      - "5002:80"
    depends_on:
      - database
  deploy:
    container_name: "deploy"
    build:
      context: .
      dockerfile: Deploy/Dockerfile
    # command: docker run -p 5002:5002
    ports:
      - "5001:80"
    depends_on:
      - dbconn
  database:
    container_name: "database"
    image: postgres:latest
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=deploy
      - POSTGRES_DB=deploy_DB
    volumes:
      - ./DB/init-script.sql:/docker-entrypoint-initdb.d/init-script.sql
      - deploy-databse:/var/lib/postgresql/data/
    ports:
      - "5432:5432"
volumes:
  deploy-database:
Note: The dbconn container is called with http://dbconn:5002
Is there a way that I can tell my containers to talk to one another? Thanks a lot:)

How to create 2 different running apps with the same docker-compose.yml file?

I already have a docker-compose.yml file like this:
version: "3.1"
services:
memcached:
image: memcached:alpine
container_name: dl-memcached
redis:
image: redis:alpine
container_name: dl-redis
mysql:
image: mysql:5.7.21
container_name: dl-mysql
restart: unless-stopped
working_dir: /application
environment:
- MYSQL_DATABASE=dldl
- MYSQL_USER=docker
- MYSQL_PASSWORD=docker
- MYSQL_ROOT_PASSWORD=docker
volumes:
- ./../:/application
ports:
- "8007:3306"
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: dl-phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=dl-mysql
- PMA_PORT=3306
- MYSQL_USER=docker
- MYSQL_PASSWORD=docker
- MYSQL_ROOT_PASSWORD=docker
restart: always
ports:
- 8002:80
volumes:
- /application
links:
- mysql
elasticsearch:
build: phpdocker/elasticsearch
container_name: dl-es
volumes:
- ./phpdocker/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
ports:
- "8003:9200"
webserver:
image: nginx:alpine
container_name: dl-webserver
working_dir: /application
volumes:
- ./../:/application:delegated
- ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
- ./logs:/var/log/nginx:delegated
ports:
- "9003:80"
php-fpm:
build: phpdocker/php-fpm
container_name: dl-php-fpm
working_dir: /application
volumes:
- ./../:/application:delegated
- ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
- ./../docker/php-fpm/certs/store_stock/:/usr/local/share/ca-certificates/
- ./logs:/var/log:delegated # nginx logs
- /application/var/cache
environment:
XDEBUG_CONFIG: remote_host=host.docker.internal
PHP_IDE_CONFIG: "serverName=dl"
node:
build:
dockerfile: dl/phpdocker/node/Dockerfile
context: ./../
container_name: dl-node
working_dir: /application
ports:
- "8008:3000"
volumes:
- ./../:/application:cached
tty: true
My goal is to have 2 isolated environments working at the same time on the same server with the same docker-compose file. Is that possible?
I want to be able to stop and update one environment while the other one is still running and receiving traffic.
Maybe I need another approach in my case?
There are a couple of problems with what you're trying to do. Rather than starting multiple instances of your project, I think a better solution would be to use the scaling features available in docker-compose: if your goal is to put some services behind a load balancer, you probably don't want multiple instances of things like your database.
If you combine this with a dynamic front-end proxy like Traefik, you can make the configuration largely automatic.
Consider a very simple example consisting of a backend container running a simple webserver and a traefik frontend:
---
version: "3"
services:
  webserver:
    build:
      context: web
    labels:
      traefik.enable: true
      traefik.port: 80
      traefik.frontend.rule: "PathPrefix:/"
  frontend:
    image: traefik
    command:
      - --api
      - --docker
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
      - "80:80"
      - "127.0.0.1:8080:8080"
If I start it like this, I get a single backend and a single frontend:
docker-compose up
But I can also ask docker-compose to scale out the backend:
docker-compose up --scale webserver=3
In this case, I get a single frontend and three backend servers. Traefik will automatically discover the backends and will round-robin connections between them. You can download this example and try it out.
Caveats
There are a few aspects of your configuration that would need to change in order to make this work (and in fact, you would need to change them even if you were to create multiple instances of your project as you have proposed in your question).
Conflicting paths
Take for example the configuration of your webserver container:
volumes:
  - ./logs:/var/log/nginx:delegated
If you start two instances of this service, both containers will mount ./logs on /var/log/nginx. If they both attempt to write to /var/log/nginx/access.log, you're going to have problems.
The easiest solution here is to avoid bind mounts for things like log directories (and any other directories to which you will be writing), and instead use named docker volumes.
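A minimal sketch, assuming a named volume called nginx-logs (the name is illustrative):

services:
  webserver:
    image: nginx:alpine
    volumes:
      - nginx-logs:/var/log/nginx   # named volume instead of a host bind mount
volumes:
  nginx-logs:

Each compose project then gets its own copy of the volume, so two running instances no longer write to the same host directory.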
Hardcoding container names
In some places, you are hardcoding the container name, like this:
mysql:
  image: mysql:5.7.21
  container_name: dl-mysql
This will cause problems if you attempt to start multiple instances of this project or multiple instances of the mysql container. Don't statically set the container name.
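In other words, simply drop the container_name line and let compose generate a unique name from the project and service names:

mysql:
  image: mysql:5.7.21
  # no container_name here; compose names the container after the project and service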
Deprecated links syntax
Your configuration is using the deprecated links syntax:
links:
  - mysql
Don't do that. In modern docker, containers on the same network can simply refer to each other by name. In other words, if your compose configuration has:
mysql:
  image: mysql:5.7.21
  restart: unless-stopped
  working_dir: /application
  environment:
    - MYSQL_DATABASE=dldl
    - MYSQL_USER=docker
    - MYSQL_PASSWORD=docker
    - MYSQL_ROOT_PASSWORD=docker
  volumes:
    - ./../:/application
  ports:
    - "8007:3306"
Other containers in your compose stack can simply use the hostname mysql to refer to this service.
You won't be able to run the same compose file twice on one host without changing the port mappings, because the published ports will conflict. I'd recommend creating a base compose file and using extends to override port mappings for different environments.
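A minimal sketch of that idea, assuming the base docker-compose.yml defines everything except host port mappings (the override file names and ports below are illustrative; the extends keyword can be used in the same way to reuse the base service definitions):

# docker-compose.env1.yml
services:
  mysql:
    ports:
      - "8007:3306"

# docker-compose.env2.yml
services:
  mysql:
    ports:
      - "8107:3306"

Each environment is then started with its own project name and override file, e.g. docker compose -p env1 -f docker-compose.yml -f docker-compose.env1.yml up -d, so the published ports and named volumes don't collide (provided container names aren't hard-coded, as noted above).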

Failed to set up apache, php, mysql and adminer

I am trying to use Docker to set up Apache, PHP, MySQL and Adminer using this docker-compose.yml.
Apache, PHP and MySQL are running; I have tested them with PHP code. But Adminer can't log in.
version: "3.2"
services:
php:
image: php:latest
build: './php/'
networks:
- backend
volumes:
- ./public_html/:/var/www/html/
apache:
image: httpd:latest
build: './apache/'
depends_on:
- php
- mysql
networks:
- frontend
- backend
ports:
- "8000:80"
volumes:
- ./public_html/:/var/www/html/
mysql:
image: mysql:latest
networks:
- backend
environment:
- MYSQL_ROOT_PASSWORD=admin
adminer:
image: adminer
restart: always
links:
- mysql
ports:
- "8080:8080"
networks:
frontend:
backend:
You are already using port 8080 on the host, so you need to either proxy pass using Apache and not publish the port on Adminer, or use a different port:
adminer:
  image: adminer
  ports:
    - 8081:8080
Your database container is named mysql, not the db that Adminer uses by default, so you need to add an environment variable to your Adminer container like below.
adminer:
  image: adminer
  restart: always
  ports:
    - "8080:8080"
  environment:
    - ADMINER_DEFAULT_SERVER=mysql
Also, links are deprecated, so remove them. For any other issues, please read the Docker Hub description.
