From a base image of nginx, I installed certbot and successfully got the certs, and the website works fine over SSL. Now I want to put the renew script in a cron job, but it doesn't seem to be working. I just wanted to check that cron itself works with an echo Hello world:
* * * * * echo "Hello world!" >> /root/cron2.log 2>&1
But nothing shows up in /root. Also, there are no logs present in the usual directories like /var/log/, there is no syslog file anywhere except under /usr/include, and no rsyslog.
What am I doing wrong with cron? I want to assess my dry-run renew script in a log so I know it's working.
I needed to make sure that the cron service is actually running, with:
service cron start
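In the official nginx image there is no init system to launch cron for you, so it has to be started explicitly, for example from a custom entrypoint script (a minimal sketch, assuming a Debian-based image with the cron package installed):
#!/bin/sh
# start the cron daemon in the background, then keep nginx in the foreground
service cron start
exec nginx -g 'daemon off;'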
So I have MariaDB running in a container in /home/admin/containers/mariadb.
This directory also contains a .env file with MARIADB_ROOT_PASSWORD specified.
I want to backup the database using the following command:
* * * * * root docker exec --env-file /home/admin/containers/mariadb/.env mariadb sh -c 'exec mysqldump --databases dbname -uroot -p"$MARIADB_ROOT_PASSWORD"' > /home/admin/containers/mariadb/backups/dbname.sql
The command works when run from the terminal, but from crontab it only creates an empty SQL file.
I assume there are some issues with cron reading the .env file.
Bash command line is nice. Cron is "different". Let me count the ways. Here are things to pay attention to.
To simplify the description, let's assume you put the above instructions into backup.sh, so the crontab line is simply
* * * * * root sh /path/to/backup.sh
Cron is running under UID zero here. Test interactively with: $ sudo sh backup.sh
Cron uses a restricted $PATH. Test with: $ env PATH=/usr/bin:/bin sh backup.sh
Cron's $CWD won't be your home directory.
More generally, env will report different results than it does in an interactive shell, because your login dot files have not all been sourced.
Cron doesn't necessarily set umask to 0022. Likely not an issue here.
Output of ulimit -a might differ from what you see interactively.
Cron does not provide a pty, which can affect e.g. password prompts. Likely not an issue here.
Likely there are other details that differ.
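A quick way to see these differences for yourself is to let cron report its own environment and compare it against your interactive shell (a sketch; the /tmp paths are arbitrary):
* * * * * root (env; umask; ulimit -a) > /tmp/cron-env.txt 2>&1
Run (env; umask; ulimit -a) > /tmp/shell-env.txt from your login shell, then diff the two files.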
If you find that some aspect of the environment is crucial to a successful run, then arrange for that near the top of backup.sh. You might want to adjust PATH, source a file, or cd somewhere, as in the sketch below.
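For example, the top of backup.sh might start like this (a sketch; the PATH value is an assumption, and the directory is taken from the question):
#!/bin/sh
# restore the parts of the interactive environment the job relies on
PATH=/usr/local/bin:/usr/bin:/bin
export PATH
cd /home/admin/containers/mariadb || exit 1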
Now let's examine what diagnostic clues you're gathering from each cron run. The most important detail is that while you're logging stdout, you are regrettably discarding messages sent to FD 2, stderr. You can accomplish your logging on the crontab command line, or within the backup.sh script.
Use 2>&1 to merge stderr with stdout.
Or capture each stream separately:
docker ... 2> errors.txt > dbname.sql
With no errors, you will see a zero-byte errors.txt file.
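Applied to the original crontab entry, capturing each stream separately might look like this (the same command as in the question, with stderr sent to its own file):
* * * * * root docker exec --env-file /home/admin/containers/mariadb/.env mariadb sh -c 'exec mysqldump --databases dbname -uroot -p"$MARIADB_ROOT_PASSWORD"' > /home/admin/containers/mariadb/backups/dbname.sql 2> /home/admin/containers/mariadb/backups/errors.txt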
Also, remember the default behavior of crond. If you just run a command, with no redirect, cron assumes it should complete silently with zero exit status, as /usr/bin/true does. If there's a non-zero exit status, cron will report the error. If there's any stdout text, such as /usr/bin/date produces, cron wants to email you that text. If there's any stderr text, again it should be emailed to you.
Test your email setup. Set the cron MAILTO=me@some.where variable if the default of root isn't suitable. Interactively verify that email sending on that server actually works, and repair your setup for postfix or whatever MTA you use if you find that emails are not reliably being delivered.
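For example, near the top of the crontab (the address is a placeholder):
MAILTO=me@some.where
* * * * * root sh /path/to/backup.sh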
I'm trying to get a Jenkins job to run sfdx force:data:soql:query commands in order to migrate configuration data sets between our production org and our sandboxes after a refresh. Certain configurations do not persist on a refresh so we need a way to move that data.
Running the queries from the command line on the Jenkins server works as expected; however, when the job runs it fails with the following error:
'C:\Program' is not recognized as an internal or external command, operable program or batch file.
Build step 'Execute shell' marked build as failure
The job does three things:
Authorizes to the DevHub, lists out the connected orgs, and then performs a SOQL query to just print some data - 16 lines to be exact. Here are the commands in the shell script of the job:
sfdx force:auth:jwt:grant -i ${CONNECTED_APP_CONSUMER_KEY} -u ${DEV_HUB} -f ${JENKINS_HOME}/certs/prod/server.key -r [...] -a DevHub
sfdx force:org:list
sfdx force:data:soql:query -u ${DEV_HUB} -q "SELECT Id, Name FROM [...tablename...]" -r human
I am completely stumped on why this is happening. Again, running the SOQL command directly on the server through PowerShell or Command Line works as expected. I would appreciate any help with this.
This one stumped me for a long time but we finally got it figured out.
If you are seeing this error, make sure to check your machine's environment variables. I saw a TON of other answers pointing to this as the issue, where the SFDX install path had spaces in it, as in C:\Program Files\SFDX\bin, but they only showed some weird command-line FOR loop that made no sense whatsoever.
What we did was to completely uninstall all of SFDX, making sure none of it was left on the machine, and reinstall into a folder we made where there were no spaces in the path name.
Once we did that, our job worked like it was supposed to. I hope this helps others who run into this same issue.
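If reinstalling isn't an option, quoting the full path to the executable in the shell step may also get around the space (a sketch; whether your job resolves sfdx through this exact path is an assumption):
"C:/Program Files/SFDX/bin/sfdx" force:org:list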
I'm trying to run a script via crontab on my Raspberry Pi.
I have created the script: ScreenShot.sh
The content of the file is:
#!/bin/sh
export DISPLAY=:0
import -window root -resize 20% /pathtofolder/screenshot.jpg
This works fine when I run it via SSH
/home/pi/ScreenShot.sh
I have made the script executable.
I then added it to cron via sudo crontab -e
*/1 * * * * /home/pi/ScreenShot.sh
I want the script to run every 1 minute (I'll extend this later, but for testing purposes I have it at 1 minute).
For some reason the script does not run in crontab and does not take a screenshot.
I have noticed that if I run the script via sudo:
sudo /home/pi/ScreenShot.sh
I get the following error:
No protocol specified
import.im6: unable to open X server `:0' @ error/import.c/ImportImageCommand/368.
I'm assuming that when cron runs, it runs the script as root, which might be causing the failure.
I enabled logging on crontab and if I view the log I see the following:
Nov 6 06:26:01 IRDigitalDisplay /USR/SBIN/CRON[12634]: (root) CMD (/home/pi/ScreenShot.sh)
Nov 6 06:26:02 IRDigitalDisplay /USR/SBIN/CRON[12633]: (CRON) info (No MTA installed, discarding output)
So I'm assuming something goes wrong; however, it's not writing the error to the log, but rather trying to email it to me.
My question is:
How do I get my ImageMagick script to run in crontab, take a screen shot every X minutes, and save this into a predetermined folder?
You need to add the script to the "pi" user's crontab, not root's. Start the crontab editor with this command as user "pi":
crontab -e
No sudo needed.
The crontab entry has to be:
*/5 * * * * /home/pi/ScreenShot.sh
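If you do need the job in root's crontab instead, pointing the script at the pi user's X authority file may also work (a sketch; the .Xauthority location is an assumption):
#!/bin/sh
# let root talk to the pi user's X session
export DISPLAY=:0
export XAUTHORITY=/home/pi/.Xauthority
import -window root -resize 20% /pathtofolder/screenshot.jpg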
I'm currently struggling with executing a simple command which I know works when I run it manually, logged in as either root or a non-root user:
god -c path/to/app/queue_worker.god
I'm trying to run this when the server starts (I'm running Ubuntu 12.04), and I've investigated adding it to /etc/rc.local just to see if it runs. I know I can add it to /etc/init.d and then use update-rc.d, but as far as I understand it's basically the same thing.
My question is how I run this command after everything has booted up, as cleanly as possible and without any fuss.
I'm probably missing something in the lifecycle of how everything is initialized, and I'd gladly welcome some education! Are there alternative ways or places of putting this command?
Thanks!
You could write a bash script to determine when Apache has started and then set it to run as a cron job at a set interval...
if [ "$(pidof apache)" ]
then
# process was found
else
# process not found
fi
Of course, then you'll have a useless cron job running all the time, and you'll have to somehow flip a switch once it's run so it doesn't run again, as in the sketch below. This should give you an idea to start from.
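One way to implement that switch is a marker file the script checks before doing anything (a minimal sketch; the flag path is an assumption):
#!/bin/sh
FLAG=/var/run/god-queue-worker.started
[ -f "$FLAG" ] && exit 0   # already ran once, nothing to do
if [ "$(pidof apache)" ]
then
    god -c path/to/app/queue_worker.god
    touch "$FLAG"          # flip the switch so later runs exit early
fi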
I have three apps that I want to run with the rails server at the same time, and I also want the option to kill all the servers from one location.
I don't have much experience with Bash, so I'm not sure what command I would use to launch the server for a specific app. Since the script won't be in the app directory, a plain rails s won't work.
From there, I suppose if I can gather the PIDs of the processes the three servers are running on, I can have the script prompt for user input and whenever something is entered kill the three processes. I'm just unsure of how to get the PIDs.
Additionally, each app has a few environment variables that I want to have different values than those assigned in the apps' config files. Previously, I was using export var=value before rails s, but I'm not sure how to guarantee each separate process is getting the right variables.
Any help is much appreciated!
The Script
You could try something like the following:
#!/bin/bash
# remember where the pid files live (the directory the script is run from)
piddir=$(pwd)
case "$1" in
    start)
        pushd app/directory
        # note the &: $! only holds the PID of a backgrounded command
        (export FOO=bar; rails s ... & echo $! > "$piddir/pid1")
        (export FOO=bar; rails s ... & echo $! > "$piddir/pid2")
        (export FOO=bar; rails s ... & echo $! > "$piddir/pid3")
        popd
        ;;
    stop)
        kill $(cat "$piddir/pid1")
        kill $(cat "$piddir/pid2")
        kill $(cat "$piddir/pid3")
        rm "$piddir/pid1" "$piddir/pid2" "$piddir/pid3"
        ;;
    *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac
exit 0
Save this script into a file such as script.sh and chmod +x script.sh. You'd start the servers with ./script.sh start, and you can kill them all with ./script.sh stop. You'll need to fill in all the details in the three lines that start up the servers.
Explanation
First is the pushd: this will change the directory to where your apps live. The popd after the three startup lines will return you to the location where the script lives. The parentheses around the (export blah blah) create a subshell, so the environment variables that you set inside the parentheses, via export, shouldn't exist outside of the parentheses. Additionally, if your three apps live in different directories, you could put a cd inside each of the three parentheses to move to the app's directory before the rails s. The lines would then look something like: export FOO=bar; cd app1/directory; rails s ... & echo $! > "$piddir/pid1". Don't forget that semicolon after the cd command! In this case, you can also remove the pushd and popd lines.
In Bash, $! is the process ID of the most recently backgrounded command, which is why the rails s lines end with & before the echo. We echo that and redirect (with >) to a file called pid1 (or pid2 or pid3); the $piddir prefix keeps the pid files next to the script rather than in the app directory. Later, when we want to kill the servers, we run kill $(cat "$piddir/pid1"). The $(...) runs a command and returns its output inline. Since the pid files only contain the process ID, cat will just return the process ID number, which is then passed to kill. We also delete the pid files after we've killed the servers.
Disclaimer
This script could use some more work in terms of error checking and configuration, and I haven't tested it, but it should work. At the very least, it should give you a good starting point for writing your own script.
Additional Info
My favorite bash resource is the Advanced Bash-Scripting Guide. Bash is actually a fairly powerful language with some neat features. I definitely recommend learning how bash works!
Why don't you try Capistrano, a framework for executing commands in parallel on multiple remote machines via SSH? It has lots of recipes to do this.
You are probably better off setting up pow.cx, which would run each server as it's needed, rather than having to spin up and shut down servers manually.
You could use Foreman to run, monitor, and manage your processes.
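For example, a Procfile along these lines would let one foreman start bring up all three servers, and a single Ctrl-C stop them (a sketch; the paths, ports, and FOO variable are placeholders):
app1: cd /path/to/app1 && FOO=bar rails s -p 3000
app2: cd /path/to/app2 && FOO=bar rails s -p 3001
app3: cd /path/to/app3 && FOO=bar rails s -p 3002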
I realize I'm late to the party here, but after searching the internet for a good solution to this (finding this page but few others, and none with a full solution) and after trying unsuccessfully to get prax working, I decided to write my own solution to this problem and give it back to the community!
Check out my rdev bash script gist - a bash script you put in your ~/bin directory. It will create a new tab in gnome-terminal for each rails app, with the app name and port in the tab's title. It verifies each app launched successfully by checking that the port is in use and the process is actually running, and it verifies the rails app shutdown is successful by ensuring the port is no longer in use and the process is no longer running.
Setup is super easy, just change these two config values:
# collection of rails apps you want to start in development (should match directory name of rails project)
# note: the first app in the collection will receive port 3000, the second 3001 and so on
#
rails_apps=(app1 app2 app3 etc)
#
# The root directory of your rails projects (~/ is assumed, do not include)
#
projects_root="ruby/projects/root/path"
With this script you can start all your rails apps with one command, or stop them all, and you can stop, start, and restart individual rails apps as well. While the OP asked about three apps, this will let you run as many as you need, with ports assigned in order starting at 3000 for the first app in the list. Each app is started using the proper Ruby version thanks to chruby, and the .env is sourced on the way up, so your app will have everything it needs. Once you're done developing, just rdev stop and all your rails apps will be killed and the terminal tabs closed.
# Usage Examples:
#
# Show Help
# ~/> rdev
# Usage: rdev {start|stop|restart} [app port]
#
# start all rails apps
# ~/> rdev start
#
# start a single rails app
# ~/> rdev start app port
#
# stop all rails apps
# ~/> rdev stop
#
# stop a single rails app
# ~/> rdev stop app port
#
# restart a single rails app
# ~/> rdev restart app port
For the record, all testing was done on Ubuntu 18.04. This script requires: bash, chruby, gnome-terminal, lsof and takes advantage of the BASH_POST_RC trick.