Redirection of stdout and stderr to file not working on Raspbian

I am trying to redirect stdout and stderr to a log file on a Raspberry Pi.
My .sh script contains this line:
sudo ./main.py &> client.log &
The script runs correctly, in that it transfers data to and from my server, but the client.log file remains empty. I tried &>, &>>, and >> combined with 2>&1, as well as |&; none of them write any data to client.log.
sudo ./main.py
produces both stdout and stderr output. What am I doing wrong?

The syntax you are looking for is:
sudo ./main.py > client.log 2>&1 &
> client.log redirects standard output to the file client.log
2>&1 redirects stderr to stdout
& at the end of the line runs this in the background so you can continue working at the command prompt.
Note: if you log off while the background command is running, it will be killed. You can override this behavior by adding nohup to the beginning of the line. For more information, search for "bash jobs".
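For example, a minimal variant of the same line that survives logout (same script and log file as above):
nohup sudo ./main.py > client.log 2>&1 &
nohup detaches the command from the terminal's hangup signal, so closing the session no longer kills it.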
Edited to add additional information after the comment below. Revised syntax:
sudo stdbuf -o L -e L ./main.py > client.log 2>&1 &
stdbuf modifies the default Linux output buffering:
-o L flushes stdout at the end of every line
-e L flushes stderr at the end of every line

python -u test.py > output.txt &
Python buffers output by default, and simply killing the script doesn't flush that buffered standard output to disk; the -u flag disables this buffering.
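If you can't change how the interpreter is invoked, an equivalent sketch is to disable the buffering through the environment instead (PYTHONUNBUFFERED is honored by CPython):
PYTHONUNBUFFERED=1 python test.py > output.txt 2>&1 &
Calling print(..., flush=True) inside the script has the same effect for individual writes.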

Related

Why neo4j docker container restart causes the container to hang and quit

I created a custom Docker image in order to launch a wrapper script that loads initial data. The first time I launch the container it kind of works; sometimes it fails, but I guess something is cached or I don't wait long enough for neo4j to be up.
The problem comes when I stop the container and restart it. It downloads the plugins, then it seems to hang, and it fails to bring the process to the foreground:
./wrapper.sh: line 57: fg: job has terminated
In /logs/debug.log there is no log entry when I restart the container, so it is hard to understand what's going on. Some permission issue?
Here is my wrapper file:
#!/bin/bash
# THANK YOU! Special shout-out to @marcellodesales on GitHub
# https://github.com/marcellodesales/neo4j-with-cypher-seed-docker/blob/master/wrapper.sh for such a great example script

# Log the info with the same format as NEO4J outputs
log_info() {
  # https://www.howtogeek.com/410442/how-to-display-the-date-and-time-in-the-linux-terminal-and-use-it-in-bash-scripts/
  # printf '%s %s\n' "$(date -u +"%Y-%m-%d %H:%M:%S:%3N%z") INFO Wrapper: $1" # Display UTC time
  printf '%s %s\n' "$(date +"%Y-%m-%d %H:%M:%S:%3N%z") INFO Wrapper: $1" # Display local time (PST/PDT)
  return
}

# Adapted from https://github.com/neo4j/docker-neo4j/issues/166#issuecomment-486890785
# Alpine is not supported anymore, so this is newer
# Refactoring: Marcello.deSales+github#gmail.com

# turn on bash's job control
# https://stackoverflow.com/questions/11821378/what-does-bashno-job-control-in-this-shell-mean/46829294#46829294
set -m

# Start the primary process and put it in the background
/docker-entrypoint.sh neo4j &

# Wait for Neo4j
log_info "Checking to see if Neo4j has started at http://${DB_HOST}:${DB_PORT}..."
wget --quiet --tries=20 --waitretry=10 -O /dev/null http://${DB_HOST}:${DB_PORT}
log_info "Neo4j has started 🤓"
log_info "Importing data with auth ${NEO4J_AUTH}"

# Import data
log_info "Loading and importing Cypher file(s)..."
for cypherFile in /var/lib/neo4j/import/*.data.cypher; do
  [ -f "$cypherFile" ] || break
  log_info "Running cypher ${cypherFile}"
  cat ${cypherFile} | bin/cypher-shell -u ${NEO4J_USER} -p ${NEO4J_PASSWORD} --fail-fast --format plain
  log_info "Renaming import file ${cypherFile}"
  mv ${cypherFile} ${cypherFile}.applied
done
log_info "Finished loading data"

log_info "Running startup cypher script..."
for cypherFile in /var/lib/neo4j/import/*.startup.cypher; do
  [ -f "$cypherFile" ] || break
  log_info "Running cypher ${cypherFile}"
  cat ${cypherFile} | bin/cypher-shell -u ${NEO4J_USER} -p ${NEO4J_PASSWORD} --fail-fast --format plain
done
log_info "Finished running startup script"

# now we bring the primary process back into the foreground
# and leave it there
fg %1
And here is my Dockerfile:
FROM neo4j
ENV NEO4J_USER=neo4j
ENV NEO4J_PASSWORD=s3cr3t
ENV NEO4J_AUTH=${NEO4J_USER}/${NEO4J_PASSWORD}
ENV NEO4JLABS_PLUGINS='["apoc", "graph-data-science"]'
ENV NEO4J_HOME='/var/lib/neo4j'
ENV DB_HOST='localhost'
ENV DB_PORT=7474
ENV NEO4J_dbms_logs_debug_level='DEBUG'
ENV NEO4J_dbms_logs_user_stdout__enabled='true'
EXPOSE 7474 7473 7687
COPY initial-data/ /var/lib/neo4j/import/
COPY ./docker-scripts/wrapper.sh wrapper.sh
ENTRYPOINT ["./wrapper.sh"]
Any idea how to solve this issue, or at least how to understand what's wrong?
It seems to happen with the latest version; when I switched to neo4j:4.2 it started to work correctly.
I tried to run the clean images and both work, but using the wrapper script it seems to me that 4.4.10 has some issues shutting down, maybe leaving some inconsistent state.
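One hedged sketch (not a confirmed fix for 4.4.10): fg %1 fails exactly when the background job has already exited, so replacing the job-control hand-off with wait on the recorded PID at least surfaces neo4j's real exit status instead of the fg error:
/docker-entrypoint.sh neo4j &
NEO4J_PID=$!
# ... seeding steps as in the wrapper above ...
# wait reports neo4j's exit code even if it has already terminated
wait "$NEO4J_PID"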

Are the files in the CLI for Docker Celery workers the same? If not, what's a good way to create a common file for the threads to write to?

I have a legacy Docker application I'm working with that uses multiple Celery workers. There is a long-running process I need to track, and I'm able to write data to a file that is visible from the CLI interface of the worker thread.
I'm writing to the file like this:
def log(msg):
    now = datetime.now()
    dt_string = now.strftime("%Y-%m-%d %H:%M:%S")
    fu.mkdirs(defs.LRP_LOG_DIR)
    fu.append_string_to_file(dt_string + ": " + msg + "\n", defs.LRP_LOG_FILE)

def append_string_to_file(string, file_path):
    with open(file_path, "a") as text_file:
        text_file.write(string)

LRP_LOG_DIR = "/opt/project/backend"
LRP_LOG_FILE = LRP_LOG_DIR + "/lrp-log.txt"
The question is: if I add multiple Celery workers, will they each write to their own file (not the desired behavior) or will they all write to a common /opt/project/backend/lrp-log.txt file (the desired behavior)?
If they don't write to a common file, what do I need to do to get multiple Celery workers to write to the same file?
Also, it would be nice if this file was available on the host file system (I'm running on a Windows machine).
I ended up writing a couple of .sh scripts for Cygwin (I'm on Windows). I would like to get the tail to work in the same script, but this is good enough for now.
Script to start Docker and write to the log file:
echo
echo
echo
# STOP CONTAINERS
echo "Stopping all Containers..."
docker kill $(docker ps -q)
# DELETE CONTAINERS
echo "Deleting Containers..."
docker rm $(docker ps -aq)
echo
# PRUNE VOLUMES
echo "Pruning orphaned volumes"
docker volume prune -f
echo
# CREATE LOG DIR
mkdir ./logs
# DELETE OLD FULL LOG FILE
echo "Deleting old full log file..."
touch ./logs/full-log.txt
rm ./logs/full-log.txt
touch ./logs/full-log.txt
# SET UP LRP LOG FILE
echo "Deleting old lrp log file..."
touch ./logs/lrp-log.txt
rm ./logs/lrp-log.txt
# TAIL THE LOG FILE (display the running process in a cygwin window)
cygstart tail -f ./logs/full-log.txt
cygstart tail -f ./logs/lrp-log.txt
# START AES
echo "Starting anonlink entity service (aes)..."
echo "Process is running and writing log to ./full-log.txt"
echo "Long Running Process Log (LRP) is being written to lrp-log.txt"
echo "! ! ! DO NOT CLOSE THIS WINDOW ! ! !"
echo "(<ctrl-c> to quit the process)"
docker-compose -p anonlink -f ../tools/docker-compose.yml up --remove-orphans > ./logs/full-log.txt
echo
echo
echo "Done."
echo
echo
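As an aside, an alternative sketch that avoids tying the window to up itself is to start detached and follow the logs (same compose file and project name as above):
docker-compose -p anonlink -f ../tools/docker-compose.yml up -d --remove-orphans
docker-compose -p anonlink -f ../tools/docker-compose.yml logs -f > ./logs/full-log.txt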
Script to create a truncated log file to track long-running processes:
tail -f ./logs/full-log.txt | grep --line-buffered "LOG_FILE:" > ./logs/lrp-log.txt
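On the host-visibility point: a common approach (a sketch; the service name worker is an assumption about the compose file) is to bind-mount the log directory, so every worker container appends to the same file on the host:
services:
  worker:
    volumes:
      - ./logs:/opt/project/backend
With a shared mount like this, workers writing via open(..., "a") all append to one common file rather than each getting its own copy; short single-write appends generally interleave line by line without corrupting each other.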

Infinite while loop in sh does not work as expected on Docker startup

I have sh code (DashBoardImport.sh) like the below. It checks an API response in an infinite loop in order to import a Kibana dashboard; if it gets a success response, it breaks the loop:
#!/bin/sh
# use a while loop to check if kibana is running
while true
do
  response=$(curl -X POST elk:5601/api/saved_objects/_import -H "kbn-xsrf: true" --form file=@/etc/elasticsearch/CityCountDashBoard.ndjson | grep -oE "^\{\"success")
  #curl -X GET elk:9200/git-demo-topic | grep -oE "^\{\"git" > /dev/null
  #match=$?
  echo "$response"
  if [ "$response" = '{"success' ]
  then
    echo "Running import dashboard.."
    #curl -X POST elk:5601/api/saved_objects/_import -H "kbn-xsrf: true" --form file=@/etc/elasticsearch/CityCountDashBoard.ndjson
    break
  else
    echo "Kibana is not running yet"
    sleep 5
  fi
done
I run DashBoardImport.sh via my Dockerfile:
ADD ./CityCountDashBoard.ndjson /etc/elasticsearch/CityCountDashBoard.ndjson
ADD ./DashBoardImport.sh /etc/elasticsearch/DashBoardImport.sh
#ENTRYPOINT /etc/elasticsearch/DashBoardImport.sh &
USER root
RUN chmod +x /etc/elasticsearch/DashBoardImport.sh
#RUN /etc/elasticsearch/DashBoardImport.sh &
RUN nohup bash -c "/etc/elasticsearch/DashBoardImport.sh" >/dev/null 2>&1 &
I tried many options, as you can see commented out. The script works perfectly when I run it manually in the Docker container: I kill the Kibana service, then run the script; after I start Kibana again, the script successfully works as expected and imports the dashboard. But it does not work when it starts automatically in the container.
Do you have any idea?
Thanks a lot in advance :)
A RUN step executes in a temporary container until the command returns, and then Docker captures the changes to the filesystem as a new layer in your image. Nothing else remains: no environment variables, no running processes, only the filesystem changes.
So when you RUN nohup ... & that command immediately returns, since it's in the background (that is exactly what nohup ... & does), the container exits, killing any processes that were running in it, and Docker captures the filesystem changes made, if any, to your image.
If you want something to run when you start the container, add it to your ENTRYPOINT or CMD.
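A minimal sketch of that advice for this image (the path of the base image's original entrypoint is an assumption; adjust it to your image):
#!/bin/sh
# entrypoint-wrapper.sh: start the import loop alongside the main process
/etc/elasticsearch/DashBoardImport.sh &
# hand PID 1 back to the image's original entrypoint (path is an assumption)
exec /usr/local/bin/docker-entrypoint.sh "$@"
and in the Dockerfile:
COPY entrypoint-wrapper.sh /entrypoint-wrapper.sh
RUN chmod +x /entrypoint-wrapper.sh
ENTRYPOINT ["/entrypoint-wrapper.sh"]
Because the import script polls until Kibana answers, backgrounding it here is safe: it simply retries until the main process is up, then imports and exits.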

/usr/bin/sudo: Permission denied when calling sudo from sh script via telegram-cli with lua script

I'm trying to run my .sh script status.sh via a telegram message:
Ubuntu 20.04.1 LTS server
Telegram-cli with a Lua script to trigger the status.sh script
When I send the message "status" to my server via Telegram, it runs the status.sh script. In this script I have a bunch of stuff that gathers info for me and sends it back to Telegram so I can see what the status of my server is. However (I recently did a fresh install of the server), for some reason if the script has a line of code starting with sudo, I get:
line 38: /usr/bin/sudo: Permission denied
If I run the script from the command line with ./status.sh it runs without any problem, so I'm thinking it's because it is being called from telegram-cli or Lua?
Example of code that generates the error:
sudo ifconfig enp0s25 >> file
On the other hand, this line works without a problem:
sudo echo Time: $(date +"%H:%M:%S") > file
/usr/bin has permissions 0755
sudo has permissions 4755
The following command
sudo ifconfig enp0s25 >> file
would not work if file requires root privilege to be modified.
sudo affects ifconfig but not the redirection.
To fix it:
sudo sh -c 'ifconfig enp0s25 >> file'
As mentioned in Egor Skriptunoff's answer, sudo only affects the command being run with sudo, and not the redirect.
Perhaps nothing is being written to file in your case because ifconfig is writing the output you are interested in to stderr instead of to stdout.
If you want to append both stdout and stderr to file as root, use this command:
sudo sh -c 'ifconfig enp0s25 >> file 2>&1'
Here, sh is invoked via sudo so that the redirect to file will be done as root.
Without the 2>&1, only ifconfig's stdout will be appended to file. The 2>&1 tells the shell to redirect stderr to stdout.
If file can be written to without root, this may simplify to
sudo ifconfig enp0s25 >> file 2>&1
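Another common pattern that keeps the redirect out of a root shell entirely is tee -a, which does the append as root while the rest of the pipeline stays unprivileged:
sudo ifconfig enp0s25 2>&1 | sudo tee -a file > /dev/null
Here 2>&1 folds ifconfig's stderr into the pipe, tee -a appends to file with root rights, and > /dev/null discards tee's copy of the output.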

Crontab task doesn't work when I edit crontab by vim instead of "crontab -e" on docker container ubuntu18.04

step 1:
Use docker run to start a container; the image is Ubuntu 18.04.
step 2:
vim /var/spool/cron/crontabs/root
and write the following content to the root file:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
*/1 * * * * . /etc/profile; /bin/sh /test_cron/xx.sh 2>&1
step 3:
cd /
mkdir test_cron
cd test_cron
step 4:
Create /test_cron/xx.sh with the following content:
echo "cron job has start" >> /test_cron/run.log
step 5:
service cron restart
step 6:
There is no run.log in /test_cron/; that is to say, the crontab task doesn't work. But if I use "crontab -e" to open the /var/spool/cron/crontabs/root file and don't modify anything (just open and close it), the run.log file appears in /test_cron/. Amazingly, the crontab task worked. Could you tell me the reason?
A few points:
crontab -e ensures certain formatting and does error checking to an extent.
The crontab file should contain one empty line at the end.
There are certain permissions to set on the crontab file: chmod 600.
After completing these I could see the manual entry was working. However, it is not recommended to edit the crontab file directly; the best practice is to use crontab -e.
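In shell form, the manual-edit fix described above (paths from the question; the crontab group ownership is an assumption that varies by distro):
chmod 600 /var/spool/cron/crontabs/root
chown root:crontab /var/spool/cron/crontabs/root  # group name is an assumption; check your distro
printf '\n' >> /var/spool/cron/crontabs/root      # make sure the file ends with a newline
service cron restart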
EDIT: Actually, "empty line" should be corrected to "newline character or %", as per the man page:
The "sixth" field (the rest of the line) specifies the command to be
run. The entire command portion of the line, up to a newline or %
character, will be executed by /bin/sh or by the shell specified in
the SHELL variable of the cronfile. Percent-signs (%) in the command,
unless escaped with backslash (), will be changed into newline
characters, and all data after the first % will be sent to the command
as standard input.
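For illustration, a hypothetical job that uses this % behavior (everything after the first unescaped % is fed to the command on standard input):
* * * * * cat >> /tmp/demo.log%line one%line two
Each minute this appends two lines, "line one" and "line two", to /tmp/demo.log, because each % is turned into a newline and the remainder becomes cat's stdin.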
