Best practices to merge Docker volume contents? - docker

Since Docker does not seem to allow us to do something like this:
version: '3.5'
services:
  php-apache:
    image: php:7.2.1-apache
    ports:
      - 80:80
    volumes:
      - ./sources:/var/www/html
      - ./sources_bis:/var/www/html
with this directory layout:
.
├── sources
│   ├── images
│   │   ├── image.png
│   │   └── logo.png
│   └── index.php
└── sources_bis
    └── images
        └── image.png
What would be the best practice if I want my container to include the sources directory content and then merge in the content of the sources_bis directory?
The idea behind this question is to be able to share the same base code between different projects running in containers, while also being able to do project-specific development for each one. So if you have any other practices that could help me do this, I'll take them!
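One approach worth considering (a minimal sketch of my own, not from the thread): do the merge at image build time instead of at mount time. Successive COPY instructions layer on top of each other, so files from sources_bis overwrite same-named files from sources; the paths below come from the question, the rest is assumed.
# Dockerfile (hypothetical) - merge the two source trees at build time
FROM php:7.2.1-apache
# base code first ...
COPY sources/ /var/www/html/
# ... then the project-specific overlay; same-named files win here
COPY sources_bis/ /var/www/html/
The compose service would then use build: . instead of image: php:7.2.1-apache and drop the two conflicting volume mounts. The trade-off is that edits require a rebuild rather than showing up live through a bind mount.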

Related

KrakenD - ERROR parsing the configuration file: loading flexible-config settings

I'm trying to use KrakenD's flexible configuration, but I can't get it to start even in a simple setup:
ERROR parsing the configuration file: loading flexible-config settings:
2022-07-19T08:48:21.279006680Z - "config/settings/dev": open "config/settings/dev": no such file or directory
I'm just trying to load a configuration file with a single variable to test the gateway, but I'm not assigning that variable anywhere for now.
dev/env.json
{
  "port": 8080
}
Here is my docker-compose.yaml configuration:
shared-gateway:
  build:
    context: ${PWD}/.docker/krakend
  container_name: 'shared-gateway'
  restart: "unless-stopped"
  volumes:
    - ${PWD}/.docker/krakend/:/etc/krakend/
  ports:
    - "9191:8080"
  networks:
    - network-gateway
  environment:
    - FC_ENABLE=1
    - FC_SETTINGS="config/settings/dev"
  command: ['run', '-c', '/etc/krakend/krakend.json']
Dockerfile
FROM devopsfaith/krakend:2.0.5
COPY krakend.json /etc/krakend/krakend.json
Here is my directory tree:
.
├── Dockerfile
├── config
│   ├── partials
│   ├── settings
│   │   ├── dev
│   │   │   └── env.json
│   │   └── prod
│   └── templates
└── krakend.json
When I start the container, it tells me that it can't find the directory:
ERROR parsing the configuration file: loading flexible-config settings:
2022-07-19T09:25:12.390870759Z - "config/settings/dev": open "config/settings/dev": no such file or directory
Does anyone know where I'm going wrong or have an example of how to use krakend's flexible-configuration with docker?
It seems you either don't copy the config directory into /etc/krakend/ in your Docker image, or don't mount it from outside in your docker-compose file. I believe the image's working directory is /etc/krakend, so make sure your config folder is available under that directory before the run command starts.
The problem is that the config folder is not present in your Docker image. I would suggest using this Dockerfile example that uses Flexible Configuration, which does exactly what you want:
https://www.krakend.io/docs/deploying/docker/
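A minimal sketch of that fix, assuming the config directory sits next to the Dockerfile as in the tree above (the exact layout in the linked page may differ):
FROM devopsfaith/krakend:2.0.5
# make the flexible-config inputs available under the image's
# working directory /etc/krakend
COPY config /etc/krakend/config
COPY krakend.json /etc/krakend/krakend.json
Note that the compose file above also bind-mounts ${PWD}/.docker/krakend/ over /etc/krakend/, which hides whatever was copied into the image; with that mount in place, the config directory has to exist on the host side of the mount instead.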

Dockerize nestjs microservices application

I am trying to dockerize a microservice-based application. The API is built with NestJS and MySQL. The following is the directory structure:
.
├── docker-compose.yml
├── api
│   ├── src
│   ├── Dockerfile
│   ├── package.json
│   ├── package-lock.json
│   ├── ormconfig.js
│   └── .env
├── payment
│   ├── src
│   ├── Dockerfile
│   ├── package.json
│   └── package-lock.json
└── notifications
    ├── src
    ├── Dockerfile
    ├── package.json
    └── package-lock.json
The following is the Dockerfile inside the api directory
FROM node:12.22.3
WORKDIR /usr/src/app
COPY package*.json .
RUN npm install
CMD ["npm", "run", "start:dev"]
Below is the docker-compose.yml file. Please note that the details for payment & notifications have not been added to the docker-compose file yet.
version: '3.7'
networks:
  server-network:
    driver: bridge
services:
  api:
    image: api
    build:
      context: .
      dockerfile: api/Dockerfile
    command: npm run start:dev
    volumes:
      - ".:/usr/src/app"
      - "/usr/src/app/node_modules"
    networks:
      - server-network
    ports:
      - '4000:4000'
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    container_name: api_db
    restart: always
    environment:
      MYSQL_DATABASE: api
      MYSQL_ROOT_USER: root
      MYSQL_PASSWORD: 12345
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3307:3306"
    volumes:
      - api_db:/var/lib/mysql
    networks:
      - server-network
volumes:
  api_db:
Now, when I try to start the application using docker-compose up I'm getting the following error.
no such file or directory, open '/usr/src/app/package.json'
UPDATE
I tried removing the volumes and it didn't help either. I also tried to see what is in the api container by listing the directory contents with
docker-compose run api ls /usr/src/app
and it shows the following contents in the folder
node_modules package-lock.json
Any help is much appreciated.
Your build: { context: } directory is set wrong.
The image build mechanism uses a build context to send files to the Docker daemon. The dockerfile: location is relative to this directory; within the Dockerfile, the left-hand side of any COPY (or ADD) instruction is always interpreted as relative to this directory (even if it looks like an absolute path, and you can't step out of this directory with ..).
For the setup you show, where you have multiple self-contained applications, the easiest thing is to set context: to the directory containing the application.
build:
  context: api
  dockerfile: Dockerfile # the default value
Or, if you are using the default value for dockerfile, the equivalent shorthand:
build: api
You need to set the build context to a parent directory only if you need to share files between images (see How to include files outside of Docker's build context?). In that case, all of the COPY instructions need to be qualified with the subdirectory in the combined source tree:
# Dockerfile, when context: .
COPY api/package*.json ./
RUN npm ci
COPY api/ ./
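If you keep this shared context, the compose side stays exactly as the question already has it; only the Dockerfile's COPY paths change:
build:
  context: .
  dockerfile: api/Dockerfile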
You should not normally need the volumes: you show. These have the core effect of (1) replacing the application in the image with whatever's on the local system, which could be totally different, and then (2) replacing its node_modules directory with a Docker anonymous volume, which will never be updated to reflect changes in the package.json file. In this particular setup you also need to be very careful that the volume mappings match the filesystem layout. I would recommend removing the volumes: block here; use a local Node for day-to-day development, maybe configuring it to point at the Docker-hosted database.
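For completeness, if you do want a live-reload bind mount despite the caveats above, the mappings have to line up with the corrected context: api (a sketch of my own, not part of the recommendation):
api:
  build: api
  volumes:
    - ./api:/usr/src/app         # mount the api source tree, not the repo root
    - /usr/src/app/node_modules  # keep the image's installed dependencies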
If you also remove things that are set in the Dockerfile (command:) and things Compose can provide reasonable defaults for (image:, container_name:, networks:) you could reduce the docker-compose.yml file to:
version: '3.8'
services:
  api:             # without volumes:, networks:, image:, command:
    build: api     # the corrected directory-only shorthand
    ports:
      - '4000:4000'
    depends_on:
      - mysql
  mysql:           # without container_name:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: api
      MYSQL_ROOT_USER: root
      MYSQL_PASSWORD: 12345
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3307:3306"
    volumes:
      - api_db:/var/lib/mysql
volumes:
  api_db:

Confluence docker: Cannot locate specified Dockerfile

I am trying to set up Confluence by Atlassian via docker-compose (so I don't have it running in the background all the time).
I know next to nothing about Docker; all I know is how to fire up my docker-compose.yml to start a MySQL microservice.
I tried following this tutorial and am already failing at step 1. I created a docker-compose.yml with this content:
version: '3'
services:
  confluence:
    image: atlassian/confluence-server
    restart: always
    volumes:
      - /data/confluence:/var/atlassian/application-data/confluence
    ports:
      - 8090:8090
      - 8091:8091
  confl-mysql:
    build: ./mysql
    restart: always
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
      - MYSQL_DATABASE=confluence
      - MYSQL_USER=confluence
      - MYSQL_PASSWORD=your-password
and I am getting this in my terminal:
sudo docker-compose up
Building confl-mysql
ERROR: Cannot locate specified Dockerfile: Dockerfile
My directory tree looks like this:
├── atlassian-confluence-7.4.0-x64.bin
├── conf_dckr_cmp
│   ├── confl-mysql
│   │   └── Dockerfile
│   ├── data
│   │   └── confluence
│   │       ├── confl-mysql
│   │       └── Dockerfile
│   ├── docker-compose.yml
│   ├── Dockerfile
│   ├── mysql
│   │   └── Dockerfile
│   ├── mysql-connector-java-5.1.49-bin.jar
│   └── mysql-connector-java-5.1.49.jar
├── docker-compose.yml
└── mysql
How do I resolve the error shown in my terminal? What does it actually want me to do?
Can you try this:
confl-mysql:
  build:
    context: ./mysql
instead of this:
confl-mysql:
  build: ./mysql
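One more thing worth checking (my suggestion, not part of the answer above): the tree shows two docker-compose.yml files, one at the top level and one in conf_dckr_cmp, and only conf_dckr_cmp/mysql contains a Dockerfile. Make sure compose runs from the directory whose mysql/ folder actually has the Dockerfile:
cd conf_dckr_cmp
sudo docker-compose config       # prints the resolved configuration, or errors on bad paths
sudo docker-compose up --build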

Share folder and Python file between Docker images using Docker Compose

I would like the Docker projects for my closely related services server_1 and server_2 to live in one folder so that I can build and deploy them simultaneously using Docker Compose.
Example project directory:
.
├── common_files
│   ├── grpc_pb2_grpc.py
│   ├── grpc_pb2.py
│   └── grpc.proto
├── docker-compose.yml
├── flaskui
│   ├── Dockerfile
│   └── flaskui.py
├── server_1
│   ├── Dockerfile
│   └── server_1.py
├── server_2
│   ├── Dockerfile
│   └── server_2.py
└── server_base.py
Two questions I am hoping have one common solution:
How can I make it so I only have the common dependency common_files/ in only one place?
How can I use the common code server_base.py in both server projects?
I've tried importing using relative directories in my project Python scripts, like from ..common_files import grpc_pb2, but I get ValueError: attempted relative import beyond top-level package.
I've considered using read_only volume mounting in docker-compose.yml, but that doesn't explain how to reference the common_files from within a project file like flaskui/Dockerfile.
You need to mount your local directory that contains the grpc files and server_base.py as a volume in your server_1 and server_2 containers. That way, there is a single source of truth (your local directory) and you can use them from both your containers.
You can add the volumes definition in your docker-compose.yml file for your containers. Here's a bare-bones compose file I created for your use-case:
version: "3"
services:
server_1:
image: tutum/hello-world
ports:
- "8080:8080"
container_name: server_1
volumes:
- ./common_files:/common_files
server_2:
image: tutum/hello-world
ports:
- "8081:8081"
container_name: server_2
volumes:
- ./common_files:/common_files
common_files is the folder in your local directory that has server_base.py along with the grpc files you want to share; you mount it as a volume into the containers that need it. These are called host volumes (bind mounts), since you are mounting local files from your host into your containers.
With this setup, when you exec into server_1, you can see that there's a common_files folder sitting in the / directory. Similarly for server_2.
You can exec into server_1 using docker-compose exec server_1 /bin/sh
You can also read up more on the documentation for Docker volumes.
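To cover the import half of the question as well, one option (an assumption on my part, not from the answer above) is to put the mounted folder on Python's module search path with PYTHONPATH, so the scripts can use plain absolute imports instead of the failing relative ones:
server_1:
  build: server_1
  volumes:
    - ./common_files:/common_files
  environment:
    # assumption: makes `import grpc_pb2` and `import server_base`
    # resolve against the mounted folder
    - PYTHONPATH=/common_files
With this in place, from ..common_files import grpc_pb2 becomes simply import grpc_pb2.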

Dockerfile volume: local changes are not reflected in docker

I am using docker-compose to create a multi-container environment with one mongodb instance and two Python applications. The trouble is that when I change my files locally, docker-compose up doesn't reflect those changes. What am I doing wrong?
My project structure:
.
├── docker-compose.yml
├── form
│   ├── app.py
│   ├── Dockerfile
│   ├── requirements.txt
│   ├── static
│   └── templates
│       ├── form_action.html
│       └── form_sumbit.html
├── notify
│   ├── app.py
│   ├── Dockerfile
│   └── requirements.txt
└── README
The Dockerfiles for the two apps are pretty similar. One is given below:
FROM python:2.7
ADD . /notify
WORKDIR /notify
RUN pip install -r requirements.txt
Here is my docker-compose.yml file:
version: '3'
services:
  db:
    image: mongo:3.0.2
    container_name: mongo
    networks:
      db_net:
        ipv4_address: 172.16.1.1
  web:
    build: form
    command: python -u app.py
    ports:
      - "5000:5000"
    volumes:
      - form:/form
    environment:
      MONGODB_HOST: 172.16.1.1
    networks:
      db_net:
        ipv4_address: 172.16.1.2
  notification:
    build: notify
    command: python -u app.py
    volumes:
      - notify:/notify
    environment:
      MONGODB_HOST: 172.16.1.1
    networks:
      db_net:
        ipv4_address: 172.16.1.3
networks:
  db_net:
    external: true
volumes:
  form:
  notify:
Here is my output for docker volume ls:
local form
local healthcarereminder_form
local healthcarereminder_notify
local notify
[My understanding so far: you can see there are two instances each of form and notify, one of each with the project folder name prepended. So docker might be looking for changes in a different volume. I am not sure.]
If you're trying to mount a host directory, do not declare notify in the top-level volumes: section of the docker-compose file. Instead, treat it like a local folder:
notification:
  build: notify
  command: python -u app.py
  volumes:
    # this points to a relative ./notify directory.
    - ./notify:/notify
  environment:
    ....
volumes:
  form:
  # do not declare the volume here.
  # notify:
When you declare a volume at the bottom of the docker-compose file, Docker creates a special internal (named) volume meant to be shared between containers. Here are more details: https://docs.docker.com/engine/tutorials/dockervolumes/#add-a-data-volume
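As a follow-up (my addition, not part of the answer above): the stray named volumes that showed up in the docker volume ls output will keep serving stale content until they are removed, for example with:
docker-compose down -v           # removes this project's named volumes
docker volume rm form notify     # or remove the older duplicates directly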
