Changing the Dockerfile/docker-compose build context - docker

My current project's structure looks something like this:
/home/some/project
│ .credentials_dev
│ .credentials_prod
│ ca.cer
│ docker-compose.yml
│ Dockerfile
│ init.sql
│ nginx_dev.conf
│ nginx_prod.conf
│
├───src
└───users
Compose creates four containers; two of them are the dev and prod versions of the application, which use the appropriate dev and prod files. As you can see, the project root is a little overloaded, so I'd like to move all the deployment stuff into a separate directory, to get the following:
/home/some/project
├───deployment
│ .credentials_dev
│ .credentials_prod
│ ca.cer
│ docker-compose.yml
│ Dockerfile
│ init.sql
│ nginx_dev.conf
│ nginx_prod.conf
│
├───src
└───users
The idea is to end up with the following structure on the Docker host:
host
├───dev
│ ├───src
│ └───users
├───prod
│ ├───src
│ └───users
└───project
├───deployment
│ .credentials_dev
│ .credentials_prod
│ ca.cer
│ docker-compose.yml
│ Dockerfile
│ init.sql
│ nginx_dev.conf
│ nginx_prod.conf
│
├───src
└───users
and two containers, app_dev and app_prod, whose volumes are mounted to the folders /host/dev and /host/prod respectively.
I tried multiple solutions found here, but all of them, in various combinations, returned one of the following errors:
ERROR: Service 'app_dev' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder264200969/ca.cer: no such file or directory
ERROR: Service 'app_dev' failed to build: COPY failed: Forbidden path outside the build context: ../ca.cer ()
The error always appears while docker-compose is trying to build the image, on this line:
COPY deployment/ca.cer /code/
Please tell me how to achieve the desired result.

The deployment folder is outside of the build context. Docker passes all the files inside the deployment folder as the build context, but the deployment folder itself is not part of it.
Change your COPY statement to:
COPY ./ca.cer /code/
since, within the build context, you are already in that folder.
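For reference, a minimal sketch of how the compose file and Dockerfile could line up once everything lives in deployment/ and the build context is that folder (the base image and the volume mapping below are assumptions, not taken from the question):

# deployment/docker-compose.yml (sketch)
services:
  app_dev:
    build:
      context: .              # deployment/ itself is the build context
      dockerfile: Dockerfile
    volumes:
      - ../src:/code/src      # project sources live one level above deployment/

# deployment/Dockerfile (sketch)
FROM python:3                 # assumed base image
COPY ./ca.cer /code/          # ca.cer sits at the root of the build context

Alternatively, you can keep COPY deployment/ca.cer /code/ and instead point the build context one level up (context: ..), so that the deployment folder is inside the context.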

Related

How to create multiple containers in the same pod when they have separate deployment.yaml files?

tl;dr: in docker-compose, inter-container communication is possible via localhost. I want to do the same in k8s; however, I have separate deployment.yaml files for each component. How do I link them?
I have a Kubernetes Helm chart that contains sub-charts. The folder structure is as follows:
A
├── Chart.yaml
├── values.yaml
├── charts
│ ├── component1
│ │ ├── Chart.yaml
│ │ ├── templates
│ │ │ ├── configmap.yaml
│ │ │ ├── deployment.yaml
│ │ │ ├── hpa.yaml
│ │ │ ├── ingress.yaml
│ │ │ ├── service.yaml
│ │ │ ├── serviceaccount.yaml
│ │ └── values.yaml
│ ├── component2
│ │ ├── Chart.yaml
│ │ ├── templates
│ │ │ ├── certs.yaml
│ │ │ ├── configmap.yaml
│ │ │ ├── pdb.yaml
│ │ │ ├── role.yaml
│ │ │ ├── statefulset.yaml
│ │ │ ├── pvc.yaml
│ │ │ └── svc.yaml
│ │ ├── values-production.yaml
│ │ └── values.yaml
In docker-compose, I was able to communicate between component1 and component2 via ports using localhost.
However, in this architecture, I have separate deployment.yaml files for those components. I know that if I keep them as containers in a single deployment.yaml file, I can communicate via localhost.
Question: How do I put these containers in the same pod, given that they are defined in separate deployment.yaml files?
That's not possible. Pods are the smallest deployable unit in Kubernetes and consist of one or more containers. All containers inside a pod share the same network namespace (among other things). From outside the pod, the containers can only be reached via FQDN or IP; for each container outside a pod, "localhost" means something completely different. Similar to running docker-compose on different hosts, they cannot connect using localhost.
You can use the service's name to get similar behaviour. Instead of calling http://localhost:8080 you can simply use http://component1:8080 to reach component1 from component2, assuming the service in component1/templates/service.yaml is named component1 and both are in the same namespace. Generally there is a DNS record for every service with the schema <service>.<namespace>, e.g. component1.default for component1 running in the default namespace. If component2 were in a different namespace you would use http://component1.default:8080.
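For illustration, a minimal sketch of what component1's service could look like so that this DNS name resolves (the label selector and port are assumptions; they must match what component1's deployment.yaml actually uses):

# component1/templates/service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: component1            # becomes the DNS name component1.<namespace>
spec:
  selector:
    app: component1           # must match the pod labels set in the deployment
  ports:
    - port: 8080              # component2 then calls http://component1:8080
      targetPort: 8080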

Docker mosquitto - Error unable to load auth plugin

I really need your help!
I'm encountering a problem loading a plugin in a Docker mosquitto container.
I tried to load it in a local installation of mosquitto and it worked well.
The error returned in the Docker console is:
dev_instance_mosquitto_1 exited with code 13
The errors returned in the mosquitto log file are:
1626352342: Loading plugin: /mosquitto/config/mosquitto_message_timestamp.so
1626352342: Error: Unable to load auth plugin "/mosquitto/config/mosquitto_message_timestamp.so".
1626352342: Load error: Error relocating /mosquitto/config/mosquitto_message_timestamp.so: __sprintf_chk: symbol not found
Here is a tree output of the project:
mosquitto/
├── Dockerfile
├── config
│ ├── acl
│ ├── ca_certificates
│ │ ├── README
│ │ ├── broker_CA.crt
│ │ ├── mqtt.test.perax.com.p12
│ │ ├── private_key.key
│ │ └── server_ca.crt
│ ├── certs
│ │ ├── CA_broker_mqtt.crt
│ │ ├── README
│ │ ├── serveur_broker.crt
│ │ └── serveur_broker.key
│ ├── conf.d
│ │ └── default.conf
│ ├── mosquitto.conf
│ ├── mosquitto_message_timestamp.so
│ └── pwfile
├── data
│ └── mosquitto.db
└── log
└── mosquitto.log
Here is the Dockerfile:
FROM eclipse-mosquitto
COPY config/ /mosquitto/config
COPY config/mosquitto_message_timestamp.so /usr/lib/mosquitto_message_timestamp.so
RUN install /usr/lib/mosquitto_message_timestamp.so /mosquitto/config/
Here is the docker-compose.yml:
mosquitto:
  restart: always
  build: ./mosquitto/
  image: "eclipse-mosquitto/latests"
  ports:
    - "1883:1883"
    - "9001:9001"
  volumes:
    - ./mosquitto/config/:/mosquitto/config/
    - ./mosquitto/data/:/mosquitto/data/
    - ./mosquitto/log/mosquitto.log:/mosquitto/log/mosquitto.log
  user: 1883:1883
  environment:
    - PUID=1883
    - PGID=1883
Here is the mosquitto.conf:
persistence true
persistence_location /mosquitto/data
log_dest file /mosquitto/log/mosquitto.log
include_dir /mosquitto/config/conf.d
plugin /mosquitto/config/mosquitto_message_timestamp.so
I'm using mosquitto 2.0.10 on an Ubuntu server, version 18.04.5 LTS.
Thank you in advance for your help.
Your best bet here is probably to set up a multi-stage Dockerfile that uses an Alpine-based image to build the plugin and then copies it into the eclipse-mosquitto image. The eclipse-mosquitto image is Alpine (musl libc) based, while the .so you are copying in was built against glibc, which is where the missing __sprintf_chk symbol comes from.
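A minimal sketch of such a multi-stage build, assuming the plugin is compiled from a single C source file with gcc (the source file name and build flags are assumptions; adjust them to your actual build):

# build stage: compile the plugin against the same musl libc the broker uses
FROM alpine:3.14 AS builder
RUN apk add --no-cache build-base mosquitto-dev
WORKDIR /build
COPY mosquitto_message_timestamp.c .
RUN gcc -fPIC -shared -o mosquitto_message_timestamp.so mosquitto_message_timestamp.c

# final stage: copy the freshly built .so into the official image
FROM eclipse-mosquitto
COPY config/ /mosquitto/config
COPY --from=builder /build/mosquitto_message_timestamp.so /mosquitto/config/mosquitto_message_timestamp.so

Note that your docker-compose.yml bind-mounts ./mosquitto/config/ over /mosquitto/config/, so the host copy of the .so would still shadow the one baked into the image; you would need to drop that mount or copy the rebuilt library back to the host folder.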

How to set up pm2-logrotate in Docker with Node.js running under pm2?

I have a Docker image based on keymetrics/pm2:8-jessie and my Node.js application runs fine with pm2. I tried to add pm2-logrotate to rotate the logs by size and date, so I added the following to my Dockerfile. The pm2-logrotate module starts, but the target PID is null. Can anyone help, please?
FROM keymetrics/pm2:8-jessie
RUN npm install
RUN pm2 install pm2-logrotate
RUN pm2 set pm2-logrotate:retain 90
RUN pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
RUN pm2 set pm2-logrotate:max_size 10M
RUN pm2 set pm2-logrotate:rotateInterval 0 0 * * *
RUN pm2 set pm2-logrotate:rotateModule true
RUN pm2 set pm2-logrotate:workerInterval 10
ENV NODE_ENV=$buildenv
ENV NPM_CONFIG_LOGLEVEL warn
CMD ["sh", "-c", "pm2-runtime start pm2.${NODE_ENV}.config.js"]
pm2 ls
┌──────────────┬────┬─────────┬─────────┬─────┬────────┬─────────┬────────┬─────┬────────────┬──────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├──────────────┼────┼─────────┼─────────┼─────┼────────┼─────────┼────────┼─────┼────────────┼──────┼──────────┤
│ app_server │ 1 │ 1.0.0 │ cluster │ 150 │ online │ 1 │ 2h │ 0% │ 104.4 MB │ root │ disabled │
└──────────────┴────┴─────────┴─────────┴─────┴────────┴─────────┴────────┴─────┴────────────┴──────┴──────────┘
Module
┌───────────────┬────┬─────────┬─────┬────────┬─────────┬─────┬───────────┬──────┐
│ Module │ id │ version │ pid │ status │ restart │ cpu │ memory │ user │
├───────────────┼────┼─────────┼─────┼────────┼─────────┼─────┼───────────┼──────┤
│ pm2-logrotate │ 2 │ 2.7.0 │ 205 │ online │ 0 │ 0% │ 44.5 MB │ root │
└───────────────┴────┴─────────┴─────┴────────┴─────────┴─────┴───────────┴──────┘
One thing to keep in mind is that pm2-logrotate is not the primary process of the Docker container but a process managed by pm2. You can verify this behaviour by stopping the main process defined in pm2.${NODE_ENV}.config.js: the container will die regardless of whether pm2-logrotate is running.
Also, I do not think the PID should be null; it should look something like this:
pm2 ls
┌─────┬──────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 1 │ www │ default │ 0.0.0 │ fork │ 26 │ 13s │ 0 │ online │ 0% │ 40.3mb │ root │ disabled │
└─────┴──────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
Module
┌────┬───────────────────────────────────────┬────────────────────┬───────┬──────────┬──────┬──────────┬──────────┬──────────┐
│ id │ module │ version │ pid │ status │ ↺ │ cpu │ mem │ user │
├────┼───────────────────────────────────────┼────────────────────┼───────┼──────────┼──────┼──────────┼──────────┼──────────┤
│ 0 │ pm2-logrotate │ 2.7.0 │ 17 │ online │ 0 │ 0.5% │ 43.1mb │ root │
└────┴───────────────────────────────────────┴────────────────────┴───────┴──────────┴──────┴──────────┴──────────┴──────────┘
I would also suggest using an Alpine base image, as the image above is very heavy: the image below is around 150 MB, while the one above is around 1 GB.
FROM node:alpine
RUN npm install pm2 -g
RUN pm2 install pm2-logrotate
RUN pm2 set pm2-logrotate:retain 90
RUN pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
RUN pm2 set pm2-logrotate:max_size 10M
# quote the cron expression so the shell does not split or glob it
RUN pm2 set pm2-logrotate:rotateInterval '0 0 * * *'
RUN pm2 set pm2-logrotate:rotateModule true
RUN pm2 set pm2-logrotate:workerInterval 10
ENV NODE_ENV=$buildenv
ENV NPM_CONFIG_LOGLEVEL warn
WORKDIR /app
COPY . /app
# install the app's dependencies after package.json has been copied in
RUN npm install
CMD ["sh", "-c", "pm2-runtime start pm2.${NODE_ENV}.config.js"]

Docker isn't mounting the directory? "OCI runtime create failed: container_linux.go:346: no such file or directory: unknown"

On my Windows 10 Home computer with Docker Toolbox, Docker is having trouble mounting the drives. I've already run dos2unix on the entrypoint.sh file.
The full error is as such:
ERROR: for users Cannot start service users: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/src/app/entrypoint.sh\": stat /usr/src/app/entrypoint.sh: no such file or directory": unknown
My docker-compose.yml:
version: '3.7'
services:
  users:
    build:
      context: ./services/users
      dockerfile: Dockerfile
    entrypoint: ['/usr/src/app/entrypoint.sh']
    volumes:
      - './services/users:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgresql://postgres:postgres@users-db:5432/users_dev
      - DATABASE_TEST_URL=postgresql://postgres:postgres@users-db:5432/users_test
    depends_on:
      - users-db
Curiously, when I comment out the "volumes" section, it works! But I want to be able to mount volumes in the future.
Directory structure can be seen as such:
D:\flask-react-auth
│ .gitignore
│ .gitlab-ci.yml
│ docker-compose.yml
│ README.md
│ release.sh
│
└───services
│
└───users
│ .coveragerc
│ .dockerignore
│ Dockerfile
│ Dockerfile.prod
│ entrypoint.sh
│ manage.py
│ requirements-dev.txt
│ requirements.txt
│ setup.cfg
│ tree.txt
│
└───project
│ config.py
│ __init__.py
│
├───api
│ │ ping.py
│ │ __init__.py
│ │
│ └───users
│ admin.py
│ crud.py
│ models.py
│ views.py
│ __init__.py
│
├───db
│ create.sql
│ Dockerfile
│
└───tests
conftest.py
pytest.ini
test_admin.py
test_config.py
test_ping.py
test_users.py
test_users_unit.py
__init__.py
I have added D:\flask-react-auth to the 'Shared Folders' in VirtualBox as well.
The answer seems obvious to me:
When you run the code as is:
* it mounts the current working directory to '/usr/src/app';
* the current working directory does not have a file 'entrypoint.sh';
* it tries to run '/usr/src/app/entrypoint.sh', but it is not there, so it fails.
When you comment out that volume mount:
* I assume the image already has '/usr/src/app/entrypoint.sh', so it just works.
I think you probably should change the mounting code from
volumes:
  - '.:/usr/src/app'
to
volumes:
  - './services/users:/usr/src/app'
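If you want to check what actually ends up inside the container, something along these lines (bypassing the entrypoint so the failing script is not executed; the service name comes from your compose file) will list the mounted directory:

docker-compose run --rm --entrypoint sh users -c "ls -la /usr/src/app"

If entrypoint.sh is missing from that listing, the problem is the contents of the mounted host folder, not the image.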

jenkins - pm2 restart all command in Jenkins not updating the node app for the public

I am trying to deploy a Node.js project on a server with the help of Jenkins. I have added a GitHub web-hook and everything is working fine. When I run pm2 restart index.js from my own user hamza, it picks up the newly pulled code, but when Jenkins runs the same command it finishes successfully without actually updating anything, even though I tried su in my shell script. The Jenkins build output:
+ ./script/deploy
su: must be run from a terminal
From https://github.com/hamza-younas94/node-app
* branch master -> FETCH_HEAD
a7e9a1a..188a395 master -> origin/master
Updating a7e9a1a..188a395
Fast-forward
index.js | 2 +-
script/deploy | 2 +-
test/test.js | 4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)
audited 190 packages in 1.706s
found 55 vulnerabilities (16 low, 19 moderate, 19 high, 1 critical)
run `npm audit fix` to fix them, or `npm audit` for details
Use --update-env to update environment variables
[PM2] Applying action restartProcessId on app [index.js](ids: 0)
[PM2] [index](0) ✓
┌──────────┬────┬─────────┬──────┬───────┬────────┬─────────┬────────┬─────┬───────────┬─────────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├──────────┼────┼─────────┼──────┼───────┼────────┼─────────┼────────┼─────┼───────────┼─────────┼──────────┤
│ index │ 0 │ 0.0.2 │ fork │ 10159 │ online │ 138 │ 0s │ 0% │ 22.0 MB │ jenkins │ disabled │
└──────────┴────┴─────────┴──────┴───────┴────────┴─────────┴────────┴─────┴───────────┴─────────┴──────────┘
Use `pm2 show <id|name>` to get more details about an app
Finished: SUCCESS
My Ubuntu terminal output of the pm2 command, which works fine:
┌──────────┬────┬─────────┬──────┬───────┬────────┬─────────┬────────┬─────┬───────────┬─────────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├──────────┼────┼─────────┼──────┼───────┼────────┼─────────┼────────┼─────┼───────────┼─────────┼──────────┤
│ index │ 0 │ 0.0.2 │ fork │ 10159 │ online │ 25 │ 0s │ 0% │ 22.0 MB │ hamza │ disabled │
└──────────┴────┴─────────┴──────┴───────┴────────┴─────────┴────────┴─────┴───────────┴─────────┴──────────┘
My deploy shell script:
#!/bin/sh
su - hamza
cd /home/hamza/node-app
git pull origin master
npm install --production
pm2 restart index.js
exit
Well, I did it via shell: in my shell script I connect to the other user via SSH.
* log in as the jenkins user and generate an SSH key;
* add the key to the other user's authorized_keys;
* write a shell script that connects as otheruser@my_ip_add and runs the commands I need.
Why did I have to do this?
Because pm2 restart all was running, but it was running as the jenkins user (you can see that in my question); when I restarted it as the other user, the one that originally started the app, it worked fine.
PS: pm2 may require the same user/session for these operations.
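For completeness, a sketch of what the deploy script could look like with this approach (the host address is a placeholder; the user, repository path and commands are taken from the original script):

#!/bin/sh
# run the deployment as the user that owns the pm2 daemon, over SSH,
# instead of trying to `su` from a non-interactive Jenkins shell
ssh hamza@your_server_ip <<'EOF'
cd /home/hamza/node-app
git pull origin master
npm install --production
pm2 restart index.js
EOF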
