OpenWrt script - autostart Shadowsocks

I would like to create a script for OpenWrt that updates some variables in the Shadowsocks configuration every day. This is the script, but I don't know where to put it or how to have it run every day or on every reboot of the router.
#!/bin/sh
restart=0
for i in $(uci show shadowsocks | grep alias | sed -r 's/.*\[(.*)\].*/\1/')
do
    server=$(uci get "shadowsocks.@servers[${i}].alias")
    result=$(nslookup "$server")
    new_ip=$(echo "${result}" | tail -n +3 | awk '/^Address 1/{ print $3 }')
    if [ -n "$new_ip" ]; then
        logger -t shadowsocks "nslookup $server -> $new_ip"
        old_ip=$(uci get "shadowsocks.@servers[${i}].server")
        if [ "$old_ip" != "$new_ip" ]; then
            logger -t shadowsocks "detected $server ip address change ($old_ip -> $new_ip)"
            restart=1
            uci set "shadowsocks.@servers[${i}].server=${new_ip}"
        fi
    else
        logger -t shadowsocks "nslookup $server failed"
    fi
done
if [ "$restart" -eq 1 ]; then
    logger -t shadowsocks "restarting for server ip address change"
    uci commit shadowsocks
    /etc/init.d/shadowsocks restart
fi

You can use the cron utility. Cron is a time-based job scheduler found on Unix-like operating systems; it lets you run jobs/programs/scripts at specified times.
OpenWrt comes with a cron system by default, provided by BusyBox.
Cron is not enabled by default, though, so your jobs won't run until you activate it. To activate cron on OpenWrt:
/etc/init.d/cron start
/etc/init.d/cron enable
Ref: https://oldwiki.archive.openwrt.org/doc/howto/cron
Now, considering your question: if you want to run the script every day, edit the crontab with the crontab -e command and add the line below.
0 0 * * * sh /path/to/your/script.sh
This entry will run your script at 00:00 (midnight, every day). You can easily modify it to schedule the job at any other time. A good reference for generating cron entries is https://crontab.guru/
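On OpenWrt, crontab -e edits /etc/crontabs/root, so you can also append the entry there directly. A sketch, assuming the script was saved as /root/update_shadowsocks.sh (a placeholder path) and made executable:
# add the daily job to root's crontab and reload cron
echo "0 0 * * * /root/update_shadowsocks.sh" >> /etc/crontabs/root
/etc/init.d/cron restart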
To see if cron is actually running your jobs, check the system log. On OpenWrt:
logread | grep crond
On systems that keep a syslog file:
tail -f /var/log/syslog | grep CRON
Now coming to your second question, "run the script at every reboot of the router":
You can put the call to your script in /etc/rc.local. On OpenWrt this file is executed as a shell script on every boot by /etc/rc.d/S95done. So just add sh /path/to/your/script.sh to /etc/rc.local (before the final exit 0), and make sure your script is executable and does its task properly.
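A sketch of what /etc/rc.local might end up looking like (the script path is just a placeholder):
# Put your custom commands here that should be executed once
# the system init finished.
sh /path/to/your/script.sh &   # run in the background so boot is not delayed
exit 0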

Related

Are the files in the CLI for Docker Celery workers the same, and if not, what's a good way to create a common file for the threads to write to?

I have a legacy Docker application I'm working with that uses multiple Celery workers. There is a long-running process I need to track, and I'm able to write data to a file that is visible from the CLI interface of the worker thread.
I'm writing to the file like this:
# fu and defs are the project's own helper and constants modules
from datetime import datetime

def log(msg):
    now = datetime.now()
    dt_string = now.strftime("%Y-%m-%d %H:%M:%S")
    fu.mkdirs(defs.LRP_LOG_DIR)
    fu.append_string_to_file(dt_string + ": " + msg + "\n", defs.LRP_LOG_FILE)

def append_string_to_file(string, file_path):
    with open(file_path, "a") as text_file:
        text_file.write(string)

LRP_LOG_DIR = "/opt/project/backend"
LRP_LOG_FILE = LRP_LOG_DIR + "/lrp-log.txt"
The question is: if I add multiple Celery workers, will they each write to their own file (not the desired behavior), or will they all write to a common /opt/project/backend/lrp-log.txt file (the desired behavior)?
If they don't write to a common file, what do I need to do to get multiple Celery workers to write to the same file?
Also, it would be nice if this file was available on the host file system (I'm running on a Windows machine).
I ended up writing a couple of .sh scripts for Cygwin (I'm on Windows). I would like to get the tail to work in the same script, but this is good enough for now.
Script to start Docker and write to log file
#!/bin/bash
echo
echo
echo
# STOP CONTAINERS
echo "Stopping all Containers..."
docker kill $(docker ps -q)
# DELETE CONTAINERS
echo "Deleting Containers..."
docker rm $(docker ps -aq)
echo
# PRUNE VOLUMES
echo "Pruning orphaned volumes"
docker volume prune -f
echo
# CREATE LOG DIR (no error if it already exists)
mkdir -p ./logs
# DELETE OLD FULL LOG FILE
echo "Deleting old full log file..."
touch ./logs/full-log.txt
rm ./logs/full-log.txt
touch ./logs/full-log.txt
# SET UP LRP LOG FILE
echo "Deleting old lrp log file..."
touch ./logs/lrp-log.txt
rm ./logs/lrp-log.txt
# TAIL THE LOG FILE (display the running process in a cygwin window)
cygstart tail -f ./logs/full-log.txt
cygstart tail -f ./logs/lrp-log.txt
# START AES
echo "Starting anonlink entity service (aes)..."
echo "Process is running and writing log to ./full-log.txt"
echo "Long Running Process Log (LRP) is being written to lrp-log.txt"
echo "! ! ! DO NOT CLOSE THIS WINDOW ! ! !"
echo "(<ctrl-c> to quit the process)"
docker-compose -p anonlink -f ../tools/docker-compose.yml up --remove-orphans > ./logs/full-log.txt
echo
echo
echo "Done."
echo
echo
Script to create a truncated log file to track long-running processes
tail -f ./logs/full-log.txt | grep --line-buffered "LOG_FILE:" > ./logs/lrp-log.txt

Makefile docker wait for database to be ready

I'm attempting to create a Makefile that will launch my db container and wait for it to be ready before launching the rest of my app.
I have 2 compose files.
docker-compose.db.yml
docker-compose.yml
My make file is as follows:
default:
	@echo "Preparing database"
	docker-compose -f docker-compose.db.yml build
	docker-compose -f docker-compose.db.yml pull
	docker-compose -f docker-compose.db.yml up -d
	@echo ""
	@echo "Waiting for database \"ready for connections\""
	@while [ -z "$(shell docker logs $(PROJECT_NAME)_mariadb 2>&1 | grep -o "ready for connections")" ]; \
	do \
		sleep 5; \
	done
	@echo "Database Ready for connections!"
	@echo ""
	@echo "Launching App Containers"
	docker-compose build
	docker-compose pull
	docker-compose up -d
What happens is that it prints "Database Ready for connections!" immediately, even before the database is ready. If I run the same docker logs command in a terminal, it returns nothing for about the first 20 seconds and then finally returns "ready for connections".
Thank you in advance
The GNU make $(shell ...) function gets run once when the Makefile is processed. So when your rule has
@while [ -z "$(shell docker logs $(PROJECT_NAME)_mariadb 2>&1 | grep -o "ready for connections")" ]
Make first runs the docker logs command on its own, then substitutes the result into the shell command it actually runs:
while [ -z "ready for connections" ]
which is trivially false, and the loop exits immediately.
Instead, you probably want to escape the $ so the substitution happens in the shell when the recipe runs:
@while [ -z "$$(docker-compose logs mariadb ...)" ]
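For reference, here is the wait loop written as plain shell; it's a sketch that assumes the compose service is named mariadb, and inside a make recipe every $ must be doubled to $$:
# poll the database container's logs until it reports it is ready
until docker-compose -f docker-compose.db.yml logs mariadb 2>&1 | grep -q "ready for connections"; do
    sleep 5
done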
It's fairly typical to configure containers to be able to wait for the database startup themselves, and to run the application and database from the same docker-compose.yml file. Docker Compose wait for container X before starting Y describes this setup.

How can I execute a shell script in my own Jenkins pipeline plugin?

My problem is that I want to execute a script inside my Jenkins pipeline plugin, and the 'perf script' command does not work.
My script is:
#!/bin/bash
if test $# -lt 2
then
    sudo perf record -F 99 -a -g -- sleep 20
    sudo perf script > info.perf
    echo "voila"
fi
exit 0
My Jenkins user can run sudo, so that is not the problem, and in my own Linux shell this script works perfectly.
How can I solve this?
I solved this by adding the -i option to the perf script command:
sudo perf record -F 99 -a -g -- sleep 20
sudo perf script -i perf.data > info.perf
echo "voila"
It seems Jenkins is not able to find perf.data without the -i option.
If the redirection does not work within the script, try and see if it works from the DSL Jenkinsfile itself.
You can call that script with the sh step, which supports returnStdout (JENKINS-26133):
res = sh(returnStdout: true, script: '/path/to/your/bash/script').trim()
You could process the result directly in res, bypassing the need for a file.

How to set environment variable in pre-start in Upstart script?

We have a custom C++ daemon application that forks once. So we've been doing this in our Upstart script on Ubuntu 12.04 and it works perfectly:
expect fork
exec /path/to/the/app
However now we need to pass in an argument to our app which contains the number of CPUs on the machine on which it runs:
cat /proc/cpuinfo | grep processor | wc -l
Our first attempt was this:
expect fork
exec /path/to/the/app -t `cat /proc/cpuinfo | grep processor | wc -l`
While that starts our app with the correct -t value, Upstart tracks the wrong pid, I'm assuming because the cat, grep and wc commands each launch a process in the exec stanza before our app does.
I also tried this, and even that doesn't work, I guess because setting an env var runs a process? Upstart still tracks the wrong pid:
expect fork
script
    NUM_CORES=32
    /path/to/the/app -t $NUM_CORES
end script
I've also tried doing this in an env stanza but apparently those don't run commands:
env num_cores=`cat /proc/cpuinfo | grep processor | wc -l`
Also tried doing this in pre-start, but env vars set there don't have any values in the exec stanza:
pre-start script
    NUM_CORES=32
end script
Any idea how to get this NUM_CORES set properly, and still get Upstart to track the correct pid for our app that forks once?
It's awkward. The recommended method is to write an env file in the pre-start stanza and then source it in the script stanza. It's ridiculous, I know.
expect fork
pre-start script
    exec >"/tmp/$UPSTART_JOB"
    echo "NUM_CORES=$(cat /proc/cpuinfo | grep processor | wc -l)"
end script
script
    . "/tmp/$UPSTART_JOB"
    /path/to/app -t "$NUM_CORES"
end script
post-start script
    rm -f "/tmp/$UPSTART_JOB"
end script
I use the exec line in the pre-start because I usually have multiple env variables and I don't want to repeat the redirection code.
This only works because the '.' command is a built-in in dash, so no extra process is spawned.
According to zram-config's upstart config:
script
    NUM_CORES=$(grep -c ^processor /proc/cpuinfo | sed 's/^0$/1/')
    /path/to/the/app -t $NUM_CORES
end script
I would add
export NUM_CORES
after assigning it a value in "script". Remember that /bin/sh may be symlinked to a non-Bash shell and may be the one running the script, so I would avoid Bash-only constructs.
Re: using the "env" stanza, it passes values literally and does not process them using shell conventions.
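Putting the zram-config style stanza and the export suggestion together, the job might look roughly like this. It's only a sketch: the exec (which replaces the shell with the app so no extra shell process lingers) is an addition of mine, not part of either answer above.
expect fork
script
    # export only matters if the app also reads NUM_CORES from its environment
    NUM_CORES=$(grep -c ^processor /proc/cpuinfo | sed 's/^0$/1/')
    export NUM_CORES
    exec /path/to/the/app -t "$NUM_CORES"
end script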

Run a cron job only once

Hi, I have to restart Apache from a Rails controller. I tried to do that with %x{} and system commands, but it fails, so I decided to do it with cron. Is it possible to make a cron task that will be executed only once?
The run-once version of cron is called at. See http://en.wikipedia.org/wiki/At_%28Unix%29 for an explanation, and note that specifying "now" as the time causes the job to run immediately.
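For example, to schedule a one-off Apache restart about a minute from now (a sketch; the init script name and path depend on your system):
echo "/etc/init.d/apache2 restart" | at now + 1 minute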
Scheduling a cron job to run only once is a bit tricky, but it can be done with a self-deleting script. Schedule the script in cron to run every minute (or at any other time you prefer):
* * * * * /path/to/self-deleting-script
The self-deleting script looks like this:
#!/bin/bash
# <your job here>
crontab -l | grep -v "$0" | crontab -   # remove this script's entry from the crontab
# restart your cron service here if needed
rm -f "$0"   # delete the script itself
It solved my problem on an OpenWrt router where I could not install the at command.
