Copy single file into container via docker-compose and environment variable - docker

Any ideas why this isn't putting the file into the container?
k6:
  image: loadimpact/k6:0.32.0
  environment:
    - ENVIRONMENT=${ENVIRONMENT}
    - AMOUNT_OF_USERS=${AMOUNT_OF_USERS}
    - RUN_TIME=${RUN_TIME}
  command: run /test.mjs
  volumes:
    - ./load-test.mjs:/test.mjs # <-- this works
    - ./${USERS_FILE}:/users.csv # <-- this does not work
I'm running with the command:
ENVIRONMENT=bla AMOUNT_OF_USERS=5 RUN_TIME=1m USERS_FILE=users2.csv docker-compose up
I did a check inside the container:
console.log(exists('users.csv'))
console.log(exists('test.mjs'))
Results:
k6_1 | time="2021-05-24T11:31:51Z" level=info msg=false source=console
k6_1 | time="2021-05-24T11:31:51Z" level=info msg=true source=console
The file referenced by USERS_FILE exists in the current working directory, i.e. users2.csv.
If I set the volume to the following it works:
- ./users2.csv:/users.csv

Build the host path from ${PWD} instead of the relative ./ prefix; the interpolated mount then resolves correctly:
volumes:
  - ./load-test.mjs:/test.mjs
  - ${PWD}/${USERS_FILE}:/users.csv
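To see what docker-compose actually resolves the volume entries to, you can render the interpolated file with docker-compose config (standard compose behaviour, shown here with the same variables as above):

ENVIRONMENT=bla AMOUNT_OF_USERS=5 RUN_TIME=1m USERS_FILE=users2.csv \
  docker-compose config

The volumes: section of the output shows the exact host paths that will be mounted, which makes a bad or empty variable expansion obvious.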

Related

FreeIPA Docker Compose WEB UI

After spending hours searching why I cannot access my web UI, I turn to you.
I set up FreeIPA on Docker using docker-compose. I opened some ports to gain remote access using host-ip:port from my own computer. FreeIPA is supposed to run on my server (let's say 192.168.1.2) and the web UI should be accessible from any other local computer on port 80 / 443 (192.168.1.4:80 or 192.168.1.4:443).
When I run my .yaml file, FreeIPA gets set up with a "the ipa-server-install command was successful" message.
I thought it could come from my tight iptables rules and tried to set all policies to ACCEPT to debug. That didn't help.
I'm a bit lost as to how I could debug this or find how to fix it.
OS: Ubuntu 20.04.3
Docker version: 20.10.12, build e91ed57
FreeIPA image: freeipa/freeipa-server:centos-8-stream
docker-compose version: 1.29.2, build 5becea4c
My .yaml file:
version: "3.8"
services:
freeipa:
image: freeipa/freeipa-server:centos-8-stream
hostname: sanctuary
domainname: serv.sanctuary.local
container_name: freeipa-dev
ports:
- 80:80
- 443:443
- 389:389
- 636:636
- 88:88
- 464:464
- 88:88/udp
- 464:464/udp
- 123:123/udp
dns:
- 10.64.0.1
- 1.1.1.1
- 1.0.0.1
restart: unless-stopped
tty: true
stdin_open: true
environment:
IPA_SERVER_HOSTNAME: serv.sanctuary.local
IPA_SERVER_IP: 192.168.1.100
TZ: "Europe/Paris"
command:
- -U
- --domain=sanctuary.local
- --realm=sanctuary.local
- --admin-password=pass
- --http-pin=pass
- --dirsrv-pin=pass
- --ds-password=pass
- --no-dnssec-validation
- --no-host-dns
- --setup-dns
- --auto-forwarders
- --allow-zone-overlap
- --unattended
cap_add:
- SYS_TIME
- NET_ADMIN
restart: unless-stopped
volumes:
- /etc/localtime:/etc/localtime:ro
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- ./data:/data
- ./logs:/var/logs
sysctls:
- net.ipv6.conf.all.disable_ipv6=0
- net.ipv6.conf.lo.disable_ipv6=0
security_opt:
- "seccomp:unconfined"
labels:
- dev
I tried to tinker with the deployment file (adding or removing configuration found on the internet, such as adding/removing IPA_SERVER_IP, or adding/removing an external bridge network).
Thank you very much for any help =)
Alright, for those who might have the same problem, I will explain everything I did to debug this.
I relied extensively on the answers found here: https://floblanc.wordpress.com/2017/09/11/troubleshooting-freeipa-pki-tomcatd-fails-to-start/
First, I checked the status of each service with ipactl status. Depending on the problem you might get different output, but mine was like this:
Directory Service: RUNNING
krb5kdc Service: RUNNING
kadmin Service: RUNNING
named Service: RUNNING
httpd Service: RUNNING
ipa-custodia Service: RUNNING
pki-tomcatd Service: STOPPED
ipa-otpd Service: RUNNING
ipa-dnskeysyncd Service: RUNNING
ipa: INFO: The ipactl command was successful
I therefore checked the tomcat logs in /var/log/pki/pki-tomcat/ca/debug-xxxx and realised I was getting "connection refused" errors related to the certificates.
Here, I first checked that my certificate was present in /etc/pki/pki-tomcat/alias using sudo certutil -L -d /etc/pki/pki-tomcat/alias -n 'subsystemCert cert-pki-ca'.
## output :
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4 (0x4)
...
...
Then I made sure that the private key could be read using the password found in /var/lib/pki/pki-tomcat/conf/password.conf (under the internal=… tag):
grep internal /var/lib/pki/pki-tomcat/conf/password.conf | cut -d= -f2 > /tmp/pwdfile.txt
certutil -K -d /etc/pki/pki-tomcat/alias -f /tmp/pwdfile.txt -n 'subsystemCert cert-pki-ca'
I still saw nothing strange, so at this point I assumed that:
- pki-tomcat is able to access the certificate and the private key
- the issue is likely on the LDAP server side
I tried to read the user entry in LDAP to compare it to the certificate using ldapsearch -LLL -D 'cn=directory manager' -W -b uid=pkidbuser,ou=people,o=ipaca userCertificate description seeAlso, but got an error after entering the password. Because my certs were OK and the LDAP service was running, I suspected something was off with the certificate dates.
Indeed, during the install FreeIPA sets up the certificates using your current system date as the base. But it also installs chrony for server time synchronization. After a reboot, my chrony configuration was wrong and had set my host date two years ahead.
I couldn't figure out the problem with the chrony configuration, so I stopped the service and set the date manually using timedatectl set-time "yyyy-mm-dd hh:mm:ss".
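A minimal sketch of that workaround (the date below is a placeholder; use the real current date and time):

# stop chrony so it cannot re-skew the clock
systemctl stop chronyd
# disable NTP sync so timedatectl accepts a manual date
timedatectl set-ntp false
# set the correct date/time by hand (placeholder value)
timedatectl set-time "2021-05-24 12:00:00"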
I restarted the FreeIPA services and my pki-tomcat service was working again.
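Restarting and re-checking can be done with ipactl (run inside the container):

ipactl restart
ipactl status   # pki-tomcatd Service should now show RUNNING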
After that, I set the FreeIPA IP as the DNS server in my router. I restarted the services and the computers on the local network so the DNS configuration was refreshed. After that, the web UI was accessible!

docker-compose unlink network from child containers when stopping parent containers?

This is a continuation of my journey of creating multiple Docker projects dynamically. I did not mention previously that, to make this process dynamic (I want devs to specify which projects they want to use), I'm using Ansible to bring up the local environment.
The logic is:
- run ansible-playbook run.yml -e "{projectsList: ['app-admin']}", providing the list of projects I want to start
- stop the existing main containers (in case they are still running from a previous run)
- start the main containers
- depending on the provided list of projects, run the role tasks (I have a separate role for each supported project):
  - stop the existing child project containers (in case they are still running from a previous run)
  - start the child project containers
  - apply some configuration depending on the role
And here is the issue (again) with the network: when I stop the main containers it fails with the message:
error while removing network: network appnetwork has active endpoints
That makes sense, as the child docker containers use the same network, but so far I see no way to change the ordering of the tasks: since I'm using roles, the main docker tasks always run before the role-specific tasks.
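To confirm which containers are still attached when this error fires, the network can be inspected (standard Docker CLI; appnetwork is the network name from the error above):

docker network inspect appnetwork \
  --format '{{range .Containers}}{{.Name}} {{end}}'

Every name printed is an endpoint that must be stopped or disconnected before the network can be removed.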
main ansible file:
---
#- import_playbook: './services/old.yml'
- hosts: localhost
  gather_facts: true
  vars:
    # add list of all supported projects, THIS SHOULD BE UPDATED FOR EACH NEW PROJECT!
    supportedProjects: ['all', 'app-admin', 'app-landing']
  vars_prompt:
    - name: "ansible_become_pass"
      prompt: "Sudo password"
      private: yes
  pre_tasks:
    # List of projects should be provided
    - fail: msg="List of projects you want to run the playbook for was not provided"
      when: (projectsList is not defined) or (projectsList|length == 0)
    # Remove unsupported projects from list
    - name: Filter out not supported projects
      set_fact:
        filteredProjectsList: "{{ projectsList | intersect(supportedProjects) }}"
    # Check if any projects remain after filtering
    - fail: msg="None of the projects you provided are supported. Supported projects: {{ supportedProjects }}"
      when: filteredProjectsList|length == 0
    # Always stop existing docker containers
    - name: stop existing common app docker containers
      docker_compose:
        project_src: ../docker/common/
        state: absent
    - name: start common app docker containers like nginx proxy, redis, mailcatcher etc. (this can take a while if running for the first time)
      docker_compose:
        project_src: ../docker/common/
        state: present
        build: no
        nocache: no
    - name: Get www-data id
      command: docker exec app-php id -u www-data
      register: wwwid
    - name: Get current user group id
      command: id -g
      register: userid
    - name: Register user and www-data ids
      set_fact:
        userid: "{{ userid.stdout }}"
        wwwdataid: "{{ wwwid.stdout }}"
  roles:
    - { role: app-landing, when: '"app-landing" in filteredProjectsList or "all" in filteredProjectsList' }
    - { role: app-admin, when: ("app-admin" in filteredProjectsList) or ("all" in filteredProjectsList) }
And an example role, app-admin/tasks/main.yml:
---
- name: Sync {{name}} with git (can take a while to clone the repo the first time)
  git:
    repo: "{{gitPath}}"
    dest: "{{destinationPath}}"
    version: "{{branch}}"
- name: stop existing {{name}} docker containers
  docker_compose:
    project_src: "{{dockerComposeFileDestination}}"
    state: absent
- name: start {{name}} docker containers (this can take a while if running for the first time)
  docker_compose:
    project_src: "{{dockerComposeFileDestination}}"
    state: present
    build: no
    nocache: no
- name: Copy {{name}} env file
  copy:
    src: development.env
    dest: "{{destinationPath}}.env"
    force: no
- name: Set file permissions for local {{name}} project files
  command: chmod -R ug+w {{projectPath}}
  become: yes
- name: Set execute permissions for local {{name}} bin folder
  command: chmod -R +x {{projectPath}}/bin
  become: yes
- name: Set user/group for {{name}} to {{wwwdataid}}:{{userid}}
  command: chown -R {{wwwdataid}}:{{userid}} {{projectPath}}
  become: yes
- name: Composer install for {{name}}
  command: docker-compose -f {{mainDockerComposeFileDestination}}docker-compose.yml exec -T app-php sh -c "cd {{containerProjectPath}} && composer install"
Maybe there is a way to somehow unlink the network when the main containers stop. I thought that marking the network as external in the child containers:
networks:
  appnetwork:
    external: true
would solve the issue, but it does not.
A quick experiment with an external network:
dc1/dc1.yml
version: "3.0"
services:
nginx:
image: nginx
ports:
- "8080:80"
networks:
- an0
networks:
an0:
external: true
dc2/dc2.yml
version: "3.0"
services:
redis:
image: redis
ports:
- "6379:6379"
networks:
- an0
networks:
an0:
external: true
Starting and stopping:
$ docker network create -d bridge an0
1e07251e32b0d3248b6e70aa70a0e0d0a94e457741ef553ca5f100f5cec4dea3
$ docker-compose -f dc1/dc1.yml up -d
Creating dc1_nginx_1 ... done
$ docker-compose -f dc2/dc2.yml up -d
Creating dc2_redis_1 ... done
$ docker-compose -f dc1/dc1.yml down
Stopping dc1_nginx_1 ... done
Removing dc1_nginx_1 ... done
Network an0 is external, skipping
$ docker-compose -f dc2/dc2.yml down
Stopping dc2_redis_1 ... done
Removing dc2_redis_1 ... done
Network an0 is external, skipping
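Since the network is external, compose leaves it in place on down (as the "skipping" lines show); remove it yourself once both projects are stopped:

docker network rm an0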

How to get Docker-Compose to run a shell command before starting IBM-MQ server?

I've been trying to get my docker-compose.yml file to simply cat the contents of a local file before it boots the IBM MQ server, but I can't get the MQ server to work correctly. I have a simple helloworld.txt in the files folder, containing just HELLO WORLD, that I'm trying to cat.
version: '3'
services:
  mq:
    image: ibmcom/mq:latest
    ports:
      - "1414:1414"
    environment:
      - LICENSE=accept
      - MQ_QMGR_NAME=MQA01
    volumes:
      - ./files:/var/mqm
    stdin_open: true
    tty: true
    restart: always
    command: >
      sh -c "cat helloworld.txt"
But running docker-compose up gives the error below:
mq_1 | 2019-09-11T18:15:24.360Z CPU architecture: amd64
mq_1 | 2019-09-11T18:15:24.360Z Linux kernel version: 4.15.0-20-generic
mq_1 | 2019-09-11T18:15:24.360Z Container runtime: docker
mq_1 | 2019-09-11T18:15:24.361Z Base image: Red Hat Enterprise Linux Server 7.6 (Maipo)
mq_1 | 2019-09-11T18:15:24.365Z Running as user ID 888 () with primary group 888, and supplementary groups 0
mq_1 | 2019-09-11T18:15:24.365Z Capabilities (bounding set): chown,dac_override,fowner,fsetid,kill,setgid,setuid,setpcap,net_bind_service,net_raw,sys_chroot,mknod,audit_write,setfcap
mq_1 | 2019-09-11T18:15:24.366Z seccomp enforcing mode: filtering
mq_1 | 2019-09-11T18:15:24.366Z Process security attributes: docker-default (enforce)
mq_1 | 2019-09-11T18:15:24.366Z Detected 'ext4' volume mounted to /mnt/mqm/data
mq_1 | 2019-09-11T18:15:24.467Z Set password for "admin" user
mq_1 | 2019-09-11T18:15:24.579Z Using queue manager name: MQA01
mq_1 | 2019-09-11T18:15:24.580Z Error: Unable to change ownership of /mnt/mqm/data
mq_1 | 2019-09-11T18:15:24.580Z chown /mnt/mqm/data: operation not permitted
EDIT - I changed volumes to:
volumes:
  - ./files/helloworld.txt:/usr/local/tomcat/webapps/helloworld.txt
But now it just seems like the queue manager runs indefinitely and the shell command cat helloworld.txt is never run.
The Error: Unable to change ownership of /mnt/mqm/data comes from the volume mount. The image documentation, under "Running with the default configuration and a volume", states:
The above example will not persist any configuration data or messages across container runs. In order to do this, you need to use a volume. For example, you can create a volume with the following command:
docker volume create qm1data
You can then run a queue manager using this volume as follows:
docker run \
  --env LICENSE=accept \
  --env MQ_QMGR_NAME=QM1 \
  --publish 1414:1414 \
  --publish 9443:9443 \
  --detach \
  --volume qm1data:/mnt/mqm \
  ibmcom/mq
Or, with docker-compose: once the volume has been created with docker volume create qm1data, reference it in the service and declare it as external at the top level:
services:
  mq:
    image: ibmcom/mq:latest
    ports:
      - "1414:1414"
    environment:
      - LICENSE=accept
      - MQ_QMGR_NAME=MQA01
    volumes:
      - qm1data:/var/mqm
    stdin_open: true
    tty: true
    restart: always
volumes:
  qm1data:
    external: true
The Docker image always uses /mnt/mqm for MQ data, which is correctly linked for you under /var/mqm at runtime. This is to handle problems with file permissions on some platforms.
(See the image documentation, section "Running with the default configuration and a volume".)
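As for the original goal of running a shell command before the server starts: the command: in the question replaces the image's normal startup entirely, so the queue manager never boots. A sketch of one way around this (not tested against this image; the entrypoint name must be taken from what docker inspect actually reports):

# 1. Discover the image's default entrypoint so it can be re-executed:
docker inspect --format '{{json .Config.Entrypoint}}' ibmcom/mq:latest

# 2. In docker-compose.yml, chain the extra command in front of it.
#    "runmqserver" is an assumption here -- substitute whatever step 1 printed:
#
#    command: sh -c "cat /usr/local/tomcat/webapps/helloworld.txt && exec runmqserver"

Using exec replaces the shell with the server process, so PID 1 and signal handling stay the same as in the unmodified image.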

Error output when running a postfix container

I have the following in my docker-compose.yml file:
simplemail:
  image: tozd/postfix
  ports:
    - "25:25"
So far, so good. But I get the following output when I run docker-compose run simplemail:
rsyslogd: cannot create '/var/spool/postfix/dev/log': No such file or directory
rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted.
rsyslogd: activation of module imklog failed [try http://www.rsyslog.com/e/2145 ]
rsyslogd: Could not open output pipe '/dev/xconsole': No such file or directory [try http://www.rsyslog.com/e/2039 ]
 * Starting Postfix Mail Transport Agent postfix   [ OK ]
What can I do to fix the errors above?
The documentation for the tozd/postfix image states:
You should make sure you mount the spool volume (/var/spool/postfix) so that you do not lose e-mail data when recreating the container. If a volume is empty, the image will initialize it at the first startup.
Your docker-compose.yml file should then be:
version: "3"
volumes:
postfix-data: {}
services:
simplemail:
image: tozd/postfix
ports:
- "25:25"
volumes:
- postfix-data:/var/spool/postfix
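To verify the fix, bring the service up in the background and follow its logs (standard compose commands):

docker-compose up -d simplemail
docker-compose logs -f simplemail

Once the named volume has been initialized by the image, the rsyslogd errors about paths under /var/spool/postfix should no longer appear.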

Ejabberd + Docker | Node name mismatch: I'm [ejabberd@...], the database is owned by [ejabberd@...]

I'm trying to set up an ejabberd server via Docker so I can use Pidgin to chat with my teammates.
I have the following docker compose file:
version: "2"
services:
ejabberd:
image: rroemhild/ejabberd
ports:
- "5222:5222"
- "5269:5269"
- "5280:5280"
environment:
- ERLANG_NODE=ejabberd
- XMPP_DOMAIN=localhost
- EJABBERD_ADMINS=admin
- EJABBERD_USERS=admin:pass1 user1:pass2 user2:pass3
volumes:
- ssl:/opt/ejabberd/ssl
- backup:/opt/ejabberd/backup
- upload:/opt/ejabberd/upload
- database:/opt/ejabberd/database
volumes:
ssl:
backup:
upload:
database:
Whenever I try to launch ejabberd I get this error:
ejabberd_1 | 05:52:58.912 [critical] Node name mismatch: I'm [ejabberd@986834bd1bc8], the database is owned by [ejabberd@319f85780c99]
ejabberd_1 | 05:52:58.912 [critical] Either set ERLANG_NODE in ejabberdctl.cfg or change node name in Mnesia
Is there something I'm missing?
Because your hostname changes each time Docker initialises the container, you'll need to override the ERLANG_NODE setting in /etc/ejabberd/ejabberdctl.cfg. For example:
ERLANG_NODE=ejabberd@mypermanenthostname
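In a compose file, the same effect can be achieved by pinning the container's hostname and pointing ERLANG_NODE at it (a sketch; ejabberd-host is a made-up name):

services:
  ejabberd:
    image: rroemhild/ejabberd
    hostname: ejabberd-host   # fixed hostname, so the Erlang node name never changes
    environment:
      - ERLANG_NODE=ejabberd@ejabberd-host

With a stable node name, the Mnesia database created on first start keeps matching the node on every subsequent start.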
If others are attempting to migrate an ejabberd instance, they can do something similar, but the instructions they really want are:
https://docs.ejabberd.im/admin/guide/managing/#ad-hoc-commands
