shell loop did not finish after a docker exec cmd?

I ran into a problem while running the code below. I want to loop over a directory and, within each subdirectory, run an inner loop:
cd /mysql_back
ls | while read line
do
    echo $line
    if [[ -d "$line" ]]; then
        echo $line
        cd $line
        ls *.txt | while read datafile
        do
            echo "start load data:" $datafile
            echo "copy ${datafile%.txt} from '/docker-entrypoint-initdb.d/mysql_back/${line}/${datafile}' delimiter ',' csv;" >> add_data.sql
        done
        docker exec -i $pg_continer psql -U postgres -d $line -f "/docker-entrypoint-initdb.d/mysql_back/${line}/add_data.sql" 2>/dev/null
        echo "start next dir"
        cd ../
    fi
done
cd ${RootPath}
cd ${RootPath}
The output:
dstore_notification
dstore_notification
start load data: global.txt
start load data: message.txt
start load data: templet_info.txt
start load data: templet.txt
start next dir
The loop ended after the docker command finished in the first subdirectory. After removing the docker command
docker exec -i $pg_continer psql -U postgres -d $line -f "/docker-entrypoint-initdb.d/mysql_back/${line}/add_data.sql" 2>/dev/null
the output becomes:
dstore_notification
dstore_notification
start load data: global.txt
start load data: message.txt
start load data: templet_info.txt
start load data: templet.txt
start next dir
dstore_rbac
dstore_rbac
start load data: rbac_admin.txt
start load data: rbac_app_admin_role.txt
start load data: rbac_app_admin.txt
start load data: rbac_app_role.txt
start load data: rbac_service_app.txt
start next dir
I can't figure out why.
Can anybody tell me why this happens, and how I should run the docker command in every subdirectory?
Thanks!

You are running the docker exec command with -i. This runs the command in interactive mode: the container keeps stdin open and reads from it. Inside your pipeline, that stdin is the pipe feeding the while read loop, so docker exec consumes the rest of the loop's input and the loop ends after the first subdirectory.
To run the command in the background instead, use -d in place of -i. This also lets the script continue as expected.
As a side note: don't parse the output of ls. Use "for datafile in *" instead.
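For reference, here is a minimal sketch of the loop rewritten along those lines, with for replacing the ls pipelines. It keeps the question's variable names (including $pg_continer as written); redirecting stdin from /dev/null is an alternative to -d for when you need psql to finish before moving on to the next directory:

cd /mysql_back || exit 1
for dir in */; do                 # the trailing / makes the glob match directories only
    dir=${dir%/}
    echo "$dir"
    (
        cd "$dir" || exit
        for datafile in *.txt; do
            echo "start load data: $datafile"
            echo "copy ${datafile%.txt} from '/docker-entrypoint-initdb.d/mysql_back/${dir}/${datafile}' delimiter ',' csv;" >> add_data.sql
        done
        # </dev/null keeps docker exec from swallowing this shell's stdin
        docker exec -i "$pg_continer" psql -U postgres -d "$dir" \
            -f "/docker-entrypoint-initdb.d/mysql_back/${dir}/add_data.sql" </dev/null 2>/dev/null
        echo "start next dir"
    )                             # the subshell makes the cd ../ bookkeeping unnecessary
done
cd "${RootPath}"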

Related

Are the files in the CLI for Docker Celery workers the same, and if not, what's a good way to create a common file for the threads to write to?

I have a legacy Docker application I'm working with that uses multiple Celery workers. There is a long-running process I need to track, and I'm able to write data to a file that is visible from the CLI of the worker thread. I'm writing to the file like this:
def log(msg):
    now = datetime.now()
    dt_string = now.strftime("%Y-%m-%d %H:%M:%S")
    fu.mkdirs(defs.LRP_LOG_DIR)
    fu.append_string_to_file(dt_string + ": " + msg + "\n", defs.LRP_LOG_FILE)

def append_string_to_file(string, file_path):
    with open(file_path, "a") as text_file:
        text_file.write(string)

LRP_LOG_DIR = "/opt/project/backend"
LRP_LOG_FILE = LRP_LOG_DIR + "/lrp-log.txt"
The question is: if I add multiple Celery workers, will they each write to their own file (not the desired behavior), or will they all write to a common /opt/project/backend/lrp-log.txt file (the desired behavior)?
If they don't write to a common file, what do I need to do to get multiple Celery workers to write to the same file?
Also, it would be nice if this file were available on the host file system (I'm running on a Windows machine).
I ended up writing a couple of .sh scripts for Cygwin (I'm on Windows). I would like to get the tail to work in the same script (see the sketch after the scripts below), but this is good enough for now.
Script to start Docker and write to log file
echo
echo
echo
# STOP CONTAINERS
echo "Stopping all Containers..."
docker kill $(docker ps -q)
# DELETE CONTAINERS
echo "Deleting Containers..."
docker rm $(docker ps -aq)
echo
# PRUNE VOLUMES
echo "Pruning orphaned volumes"
docker volume prune -f
echo
# CREATE LOG DIR (-p avoids an error if it already exists)
mkdir -p ./logs
# DELETE OLD FULL LOG FILE
echo "Deleting old full log file..."
touch ./logs/full-log.txt
rm ./logs/full-log.txt
touch ./logs/full-log.txt
# SET UP LRP LOG FILE
echo "Deleting old lrp log file..."
touch ./logs/lrp-log.txt
rm ./logs/lrp-log.txt
# TAIL THE LOG FILE (display the running process in a cygwin window)
cygstart tail -f ./logs/full-log.txt
cygstart tail -f ./logs/lrp-log.txt
# START AES
echo "Starting anonlink entity service (aes)..."
echo "Process is running and writing log to ./full-log.txt"
echo "Long Running Process Log (LRP) is being written to lrp-log.txt"
echo "! ! ! DO NOT CLOSE THIS WINDOW ! ! !"
echo "(<ctrl-c> to quit the process)"
docker-compose -p anonlink -f ../tools/docker-compose.yml up --remove-orphans > ./logs/full-log.txt
echo
echo
echo "Done."
echo
echo
Script to create truncated log file to track long running processes
tail -f ./logs/full-log.txt | grep --line-buffered "LOG_FILE:" > ./logs/lrp-log.txt
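Since the goal was to get the tail working in the same script, here is a minimal sketch of that (assuming the same ./logs layout as above): both tails run as background jobs and are killed when the compose process exits.

#!/bin/bash
# run the log tails in the background instead of separate Cygwin windows
tail -f ./logs/full-log.txt &
tail -f ./logs/full-log.txt | grep --line-buffered "LOG_FILE:" > ./logs/lrp-log.txt &
# stop the background tails when this script exits
trap 'kill $(jobs -p) 2>/dev/null' EXIT
docker-compose -p anonlink -f ../tools/docker-compose.yml up --remove-orphans > ./logs/full-log.txt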

Makefile docker wait for database to be ready

I'm attempting to create a Makefile that will launch my db container and wait for the database to be ready before launching the rest of my app.
I have 2 compose files.
docker-compose.db.yml
docker-compose.yml
My Makefile is as follows:
default:
	@echo "Preparing database"
	docker-compose -f docker-compose.db.yml build
	docker-compose -f docker-compose.db.yml pull
	docker-compose -f docker-compose.db.yml up -d
	@echo ""
	@echo "Waiting for database \"ready for connections\""
	@while [ -z "$(shell docker logs $(PROJECT_NAME)_mariadb 2>&1 | grep -o "ready for connections")" ]; \
	do \
		sleep 5; \
	done
	@echo "Database Ready for connections!"
	@echo ""
	@echo "Launching App Containers"
	docker-compose build
	docker-compose pull
	docker-compose up -d
What happens is that it immediately prints "Database Ready for connections!" even before the database is ready. If I run the same docker logs command in a terminal, it returns nothing for about the first 20 seconds and then finally prints "ready for connections".
Thank you in advance.
The GNU make $(shell ...) function runs once, when the Makefile is processed. So when your rule has
@while [ -z "$(shell docker logs $(PROJECT_NAME)_mariadb 2>&1 | grep -o "ready for connections")" ]
make first runs the docker logs command on its own, then substitutes the result into the shell command it runs:
while [ -z "ready for connections" ]
which is trivially false, so the loop exits immediately.
Instead, you want to escape the $ so the substitution happens in the shell, once per loop iteration:
@while [ -z "$$(docker-compose logs mariadb ...)" ]
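Putting that together, a corrected wait recipe might look like the sketch below (it assumes the $(PROJECT_NAME)_mariadb container name from the question; recipe lines must be indented with tabs):

wait-db:
	@echo "Waiting for database..."
	@until docker logs $(PROJECT_NAME)_mariadb 2>&1 | grep -q "ready for connections"; do \
		sleep 5; \
	done
	@echo "Database Ready for connections!"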
It's fairly typical to configure containers to be able to wait for the database startup themselves, and to run the application and database from the same docker-compose.yml file. Docker Compose wait for container X before starting Y describes this setup.
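For the in-compose approach, a sketch of that setup follows; the mysqladmin ping healthcheck is a common choice but an assumption here, and depends_on with condition: service_healthy requires a Compose version that supports it:

services:
  mariadb:
    image: mariadb:10
    healthcheck:
      # assumption: mysqladmin is available in the image
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      retries: 10
  app:
    build: .
    depends_on:
      mariadb:
        condition: service_healthy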

how to load all saved docker images in parallel

I have 20 images saved as tar archives, and now I want to load those images on another system. However, the loading itself takes 30 to 40 minutes. All images are independent of each other, so I believe they should be loadable in parallel.
I tried running the load command in the background (&) and waiting until loading finished, but observed that it took even more time. Any help here is highly appreciated.
Note: I'm not sure about the -i option to the docker load command.
Try
find /path/to/image/archives/ -iname "*.tar" -o -iname "*.tar.xz" | xargs -r -P4 -i docker load -i {}
This loads the Docker image archives in parallel (adjust -P4 to the desired number of parallel loads, or use -P0 for unlimited concurrency). docker load -i simply reads the image from the given tar archive instead of stdin.
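A variant sketch of the same command, with explicit grouping of the -iname tests, null-delimited names (safe for paths with spaces), and the non-deprecated -I flag:

find /path/to/image/archives/ \( -iname "*.tar" -o -iname "*.tar.xz" \) -print0 \
    | xargs -0 -r -P4 -I{} docker load -i {}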
To speed up the pulling/saving side, you can use ideas from the snippet below:
#!/usr/bin/env bash
TEMP_FILE="docker-compose.image.pull.yaml"

# keep only the leading name component (before the first ':' or '/')
image_name()
{
    local name="$1"
    echo "$name" | awk -F '[:/]' '{ print $1 }'
}

# generate a throwaway compose file with one service per image,
# so a single docker-compose pull fetches them all in parallel
pull_images_file_gen()
{
    local from_file="$1"
    cat <<EOF >"$TEMP_FILE"
version: '3.4'
services:
EOF
    while read -r line; do
        cat <<EOF >>"$TEMP_FILE"
  $(image_name "$line"):
    image: $line
EOF
    done < "$from_file"
}

# save each image to /tmp as a detached background job
save_images()
{
    local from_file="$1"
    while read -r line; do
        docker save -o /tmp/"$(image_name "$line")".tar "$line" &>/dev/null & disown
    done < "$from_file"
}

pull_images_file_gen "images"
docker-compose -f "$TEMP_FILE" pull
save_images "images"
rm -f "$TEMP_FILE"
images is a file containing the needed Docker image names, one per line.
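For illustration, a hypothetical images file could look like this (each line becomes a compose service named by image_name, i.e. the part before the first ':' or '/'):

nginx:1.25
redis:7.2
postgres:16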
Good luck!

issues in accessing docker environment variables in systemd service files

1) I am running a docker container with the following command (passing a few env variables with the -e option):
$ docker run --name=xyz -d -e CONTAINER_NAME=xyz -e SSH_PORT=22 -e NWMODE=HOST -e XDG_RUNTIME_DIR=/run/user/0 --net=host -v /mnt:/mnt -v /dev:/dev -v /etc/sysconfig/network-scripts:/etc/sysconfig/network-scripts -v /:/hostroot/ -v /etc/hostname:/etc/host_hostname -v /etc/localtime:/etc/localtime -v /var/run/docker.sock:/var/run/docker.sock --privileged=true cf3681e04bfb
2) After running the container as above, I check the env variable NWMODE inside the container, and it shows correctly:
$ docker exec -it xyz bash
$ env | grep NWMODE
NWMODE=HOST
3) Now I created a sample service 'b', shown below, which executes a script b.sh (where I try to access NWMODE):
root@ubuntu16:/etc/systemd/system# cat b.service
[Unit]
Description=testing service b
[Service]
ExecStart=/bin/bash /etc/systemd/system/b.sh
root@ubuntu16:/etc/systemd/system# cat b.sh
#!/bin/bash
systemctl import-environment
echo "NWMODE:" $NWMODE
4) Now if I start service 'b' and look at its logs, it shows that it cannot access the NWMODE env variable:
$ systemctl start b
$ journalctl -fu b
...
systemd[1]: Started testing service b.
bash[641]: NWMODE:          <-- blank for $NWMODE here
5) Now, rather than having 'systemctl import-environment' in b.sh, if I do the following, the b.service logs show the correct value of the NWMODE env variable:
$ systemctl import-environment
$ systemctl start b
Though step 5 above works, I can't use it, as all the services in my system will be started automatically by systemd. In that case, can anyone tell me how I can access the environment variables (passed using the 'docker run ...' command above) in a service file (e.g. in b.sh above)? Can this be achieved somehow with systemctl import-environment, or is there some other way?
systemd unsets all environment variables to provide a clean environment for services. As far as I know, that is intended to be a security feature.
Workaround: Create a file /etc/systemd/system.conf.d/myenvironment.conf:
[Manager]
DefaultEnvironment=CONTAINER_NAME=xyz NWMODE=HOST XDG_RUNTIME_DIR=/run/user/0
systemd will set the environment variables declared in this file.
You can set up an ENTRYPOINT script that automatically creates this file before running systemd. Example:
RUN echo '#! /bin/bash \n\
mkdir -p /etc/systemd/system.conf.d \n\
echo "[Manager] \n\
DefaultEnvironment=$(while read -r Line; do echo -n "$Line " ; done < <(env)) \n\
" >/etc/systemd/system.conf.d/myenvironment.conf \n\
exec /lib/systemd/systemd \n\
' >/usr/local/bin/setmyenv && chmod +x /usr/local/bin/setmyenv
ENTRYPOINT /usr/local/bin/setmyenv
Instead of creating the script within Dockerfile you can store it outside and add it with COPY:
#! /bin/bash
# make sure the drop-in directory exists before writing to it
mkdir -p /etc/systemd/system.conf.d
echo "[Manager]
DefaultEnvironment=$(while read -r Line; do echo -n "$Line " ; done < <(env))
" >/etc/systemd/system.conf.d/myenvironment.conf
exec /lib/systemd/systemd
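The matching Dockerfile lines would then be something like this sketch (the setmyenv.sh filename is an assumption):

COPY setmyenv.sh /usr/local/bin/setmyenv
RUN chmod +x /usr/local/bin/setmyenv
ENTRYPOINT ["/usr/local/bin/setmyenv"]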
TL;DR
Run the command using bash: first store the docker environment variables in a file (or just pipe them to awk), extract and export the variable, and finally run your main script.
ExecStart=/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /home/env_file; export MY_ENV_VARIABLE=$(awk -F= -v key="MY_ENV_VARIABLE" '$1==key {print $2}' /home/env_file); /usr/bin/python3 /usr/bin/my_python_script.py"
What @mviereck says is true, but I have found another solution to this problem.
My use case is to pass an environment variable to my systemd container in the docker run command (docker run -e MY_ENV_VARIABLE="some_val") and use that in the Python script that is run through the systemd unit file.
According to this post (https://forums.docker.com/t/where-are-stored-the-environment-variables/65762), the container's environment variables can be found in the running process /proc/1/environ inside the container. Performing a cat does show that the environment variable MY_ENV_VARIABLE=some_val exists, though in a somewhat mangled form.
$ cat /proc/1/environ
HOSTNAME=271fbnd986bdMY_ENV_VARIABLE=some_valcontainer=dockerLC_ALL=CDEBIAN_FRONTEND=noninteractiveHOME=/root
The main task now is to extract the MY_ENV_VARIABLE="some_val" value and pass it to the ExecStart directive in the systemd unit file.
(Extraction code referenced from How to grep for value in a key-value store from plain text.)
# this outputs a nice key,value pair
$ cat /proc/1/environ | tr '\0' '\n'
HOSTNAME=861f23cd1b33
MY_ENV_VARIABLE=some_val
container=docker
LC_ALL=C
DEBIAN_FRONTEND=noninteractive
HOME=/root
# we can store this in a file for use, too
$ cat /proc/1/environ | tr '\0' '\n' > /home/env_file
# we can then reuse the file to extract the value of interest against a key
$ awk -F= -v key="MY_ENV_VARIABLE" '$1==key {print $2}' /home/env_file
some_val
Now, in the ExecStart directive of the systemd unit file, we can do this:
[Service]
Type=simple
ExecStart=/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /home/env_file; export MY_ENV_VARIABLE=$(awk -F= -v key="MY_ENV_VARIABLE" '$1==key {print $2}' /home/env_file); /usr/bin/python3 /usr/bin/my_python_script.py"
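The same one-liner is easier to read and maintain as a small wrapper script; here is a sketch (the /usr/local/bin/run_with_env.sh path is hypothetical, the rest reuses the names from above):

#!/bin/bash
# dump PID 1's environment (one VAR=value per line) into a file
tr '\0' '\n' < /proc/1/environ > /home/env_file
# extract the one variable we need and export it for the child process
export MY_ENV_VARIABLE="$(awk -F= -v key="MY_ENV_VARIABLE" '$1==key {print $2}' /home/env_file)"
exec /usr/bin/python3 /usr/bin/my_python_script.py

The unit file then only needs ExecStart=/bin/bash /usr/local/bin/run_with_env.sh.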

How to workaround "the input device is not a TTY" when using grunt-shell to invoke a script that calls docker run?

When issuing grunt shell:test, I get the warning "the input device is not a TTY" and don't want to have to use -f:
$ grunt shell:test
Running "shell:test" (shell) task
the input device is not a TTY
Warning: Command failed: /bin/sh -c ./run.sh npm test
the input device is not a TTY
Use --force to continue.
Aborted due to warnings.
Here's the Gruntfile.js command:
shell: {
  test: {
    command: './run.sh npm test'
  }
}
Here's run.sh:
#!/bin/sh
# should use the latest available image to validate, but not LATEST
if [ -f .env ]; then
  RUN_ENV_FILE='--env-file .env'
fi
docker run $RUN_ENV_FILE -it --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 "$@"
Here's the relevant package.json scripts with command test:
"scripts": {
"test": "mocha --color=true -R spec test/*.test.js && npm run lint"
}
How can I get grunt to make docker happy with a TTY? Executing ./run.sh npm test outside of grunt works fine:
$ ./run.sh npm test
> yaktor@0.59.2-pre.0 test /app
> mocha --color=true -R spec test/*.test.js && npm run lint
[snip]
105 passing (3s)
> yaktor@0.59.2-pre.0 lint /app
> standard --verbose
Remove the -t from the docker run command:
docker run $RUN_ENV_FILE -i --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 "$@"
The -t tells docker to configure a tty, which won't work if you don't have a TTY yourself and try to attach to the container (the default when you don't pass -d).
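A common variant (a sketch, not from the answer above) keeps -t for interactive use but drops it when stdin is not a terminal, so the same run.sh works both in a shell and under grunt or cron:

#!/bin/sh
if [ -f .env ]; then
  RUN_ENV_FILE='--env-file .env'
fi
# [ -t 0 ] is true only when stdin is a terminal
if [ -t 0 ]; then
  TTY_FLAG='-t'
fi
docker run $RUN_ENV_FILE -i $TTY_FLAG --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 "$@"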
This solved an annoying issue for me. The script had these lines:
docker exec -it $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file
mutt -s "File is here" someone@somewhere.com < /var/tmp/temp.file
The script would run great when executed directly, and the mail would arrive with the correct output. However, when run from cron (crontab -e), the mail would arrive with no content. I tried many things around permissions, shells, paths, etc. However, no joy!
Finally found this:
*/20 * * * * scriptblah.sh > $HOME/cron.log 2>&1
And on that cron.log file found this output:
the input device is not a TTY
The search led me here, and after I removed the -t, it's working great now!
docker exec -i $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file
