Ruby background process STDOUT is empty

I'm having a weird issue with a start-up script which runs a Sinatra script using the "daemon" function from /etc/init.d/functions. The problem is that when I run the command at the command line, I get output to STDOUT. If I run the command at the command line exactly as it is in the script -- less the daemon part -- the output is correctly redirected to the output file. However, when the startup script runs it (see below), I get output to the STDERR log but nothing to the STDOUT log.
The relevant lines of the script:
#!/bin/sh
# (/bin/sh is, and has been, a symlink to /bin/bash)
# Source function library.
. /etc/init.d/functions
# Set Some Variables
RUNAS="joeuser"
PID=/var/run/myapp.pid
LOG="/var/log/myapp/app-out.log"
ERR_LOG="/var/log/myapp/app-err.log"
APPLICATION_COMMAND="RAILS_ENV=production ruby /opt/myapp/lib/daemons/my-sinatra-app.rb -p 8002 2>>${ERR_LOG} >>${LOG} &"
# Snip a bunch. This is the applicable line from the "start" case:
daemon --user $RUNAS --pidfile $PID $APPLICATION_COMMAND &> /dev/null
Now, the funky parts:
The error log is written to correctly via the redirect of STDERR.
If I reverse the order of the >> and the 2>> (I'm grasping at straws, here!), the behavior does not change: I still get STDERR logged correctly and STDOUT is empty.
If the output log doesn't exist, the STDOUT redirect creates the file. But, the file remains 0-length.
This used to work. The log directory is maintained by logrotate. All of the more-recent 'out' logs are 0-length. The older ones are not. It seems like it stopped working some time in April. The Ruby code didn't change at any time near then; neither did the startup script.
We're running three different services in this way. Two of them are ruby daemons (one uses sinatra, one does not) and the other is a background java process. This is occurring for BOTH of the ruby processes but is not happening on the java process. Maybe something changed in Ruby?
FTR, we've got ruby 1.8.5 and RHEL 5.4.
I've done some more probing. The daemon function does a bunch of stuff, but the meat of the matter is that it runs the program using runuser. The command essentially looks like this:
runuser -s /bin/bash - joeuser -c "ulimit -S -c 0 >/dev/null 2>&1 ; RAILS_ENV=production ruby /opt/myapp/lib/daemons/my-sinatra-app.rb -p 8002 '</dev/null' '>>/var/log/myapp/app-out.log' '2>>/var/log/myapp/app-err.log' '&'"
When I run exactly that at the command line (both with and without the single quotes that got added somewhere along the line), I get the exact same screwy behavior w.r.t. the output log. So it seems to me that this is an issue of how ruby (?) interacts with runuser?
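(For what it's worth, if the inner shell honors those single quotes, the redirections become literal arguments to ruby rather than redirections. Illustrating with echo:
bash -c "echo hello '>>out.log'"   # prints: hello >>out.log -- nothing is redirected
bash -c "echo hello >>out.log"     # appends hello to out.log
That doesn't obviously match the fact that the stderr redirect still works, but it's why those quotes look suspicious.)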

Too long to put in a comment :-)
Change the shebang to #!/bin/sh -x and verify that everything is expanded according to your expectations. Also, when executing from a terminal your .bashrc file is sourced, but when executing from a script it is not; there might be something in your environment that differs. One way to find out is to run env from the terminal and from the script, and diff the output:
env > env_terminal   # run this from your interactive shell
env > env_script     # add this line to the startup script
diff env_terminal env_script
Happy hunting...

Related

Cannot backup mariadb docker from crontab

So I have a mariadb in a container in /home/admin/containers/mariadb.
This directory also contains an .env-file with MARIADB_ROOT_PASSWORD specified.
I want to backup the database using the following command:
* * * * * root docker exec --env-file /home/admin/containers/mariadb/.env mariadb sh -c 'exec mysqldump --databases dbname -uroot -p"$MARIADB_ROOT_PASSWORD"' > /home/admin/containers/mariadb/backups/dbname.sql
The command works when running from the terminal but crontab only creates an empty sql file.
I assume there are some issues with cron reading the .env file.
The Bash command line is nice. Cron is "different". Let me count the ways. Here are things to pay attention to.
To simplify the description, let's assume you put the above instructions into backup.sh, so the crontab line is simply
* * * * * root sh backup.sh
Cron is running under UID zero here. Test interactively with: $ sudo sh backup.sh
Cron uses a restricted $PATH. Test with: $ env PATH=/usr/bin:/bin sh backup.sh
Cron's $CWD won't be your home directory.
More generally, env will report different results from interactive. Your login dot files have not all been sourced.
Cron doesn't necessarily set umask to 0022. Likely not an issue here.
Output of ulimit -a might differ from what you see interactively.
Cron does not provide a pty, which can affect e.g. password prompts. Likely not an issue here.
Likely there are other details that differ; the sketch below shows one way to approximate them.
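For example, using env -i (the variable values below mirror what cron typically provides, so treat this as an approximation, not an exact replica of cron's environment):
env -i HOME=/root PATH=/usr/bin:/bin SHELL=/bin/sh sh backup.sh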
If you find that some aspect of the environment is crucial to a successful run, then arrange for that near the top of backup.sh. You might want to adjust PATH, source a file, or cd somewhere.
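A minimal preamble along those lines might look like this (the PATH value is illustrative; the directory and variable name are taken from your question):
#!/bin/sh
# make the environment explicit instead of inheriting cron's stripped-down one
PATH=/usr/local/bin:/usr/bin:/bin
cd /home/admin/containers/mariadb || exit 1
# source the .env file only if the script itself needs MARIADB_ROOT_PASSWORD
# (this assumes the file is plain KEY=value shell syntax)
. ./.env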
Now let's examine what diagnostic clues you're gathering from each cron run. The most important detail is that while you're logging stdout, you are regrettably discarding messages sent to FD 2, stderr. You can accomplish your logging on the crontab command line, or within the backup.sh script. Use 2>&1 to merge stderr with stdout, or capture each stream separately:
docker ... 2> errors.txt > dbname.sql
With no errors, errors.txt will be a zero-byte text file.
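Done on the crontab line itself, merging both streams into one log might look like this (the log path is illustrative):
* * * * * root sh backup.sh >> /home/admin/containers/mariadb/backups/backup.log 2>&1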
Also, remember the default behavior of crond. If you just run a command, with no redirect, cron assumes it should complete silently with zero exit status, as /usr/bin/true does. If there's a non-zero status, cron will report the error. If there's any stdout text, such as /usr/bin/date produces, cron wants to email you that text. If there's any stderr text, again it should be emailed to you.
Test your email setup. Set the cron MAILTO=me@some.where variable if the default of root isn't suitable. Interactively verify that email sending on that server actually works, and repair your setup for postfix or whatever if you find that emails are not reliably being delivered.
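For instance, assuming the server has a working MTA and the mail command from mailx installed, a quick interactive check would be:
echo "cron email test" | mail -s "cron email test" root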

How to run repo from a script inside a container in a jenkins job

I am unable to run repo non-interactively inside a container as part of a freestyle job.
It prompts for the user-name and email. I got round that by doing a git config --global inside the job.
But then it does the color test, and that hangs indefinitely.
Looking at the source code for repo I see this
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
  if opt.config_name or self._ShouldConfigureUser():
    self._ConfigureUser()
  self._ConfigureColor()
So, I ran the following inside the container:
python -c "import os; print os.isatty(0), os.isatty(1)"
and, sure enough, it printed out True True
Looking at the Jenkins log, it launches the container with --tty specified, and there seems no way to configure that option.
I can't find a bash option to force a script to be run in a non-interactive shell. If I put the above python line in a file and execute it with almost any combination of commands and options, it still prints out True True
The only way I see something different is if I use I/O redirection
bash <a.sh
which prints out False True - i.e. stdin is not a tty, and
bash <a.sh >a.log
which prints False False.
For a complex script, are there any problems using the bash <script approach?
Does anyone know any jenkins magic to prevent docker being launched using --tty?
I know that the --tty is the culprit. I built the container locally and ran the following
$ docker run repotest python -c "import os;print os.isatty(0), os.isatty(1)"
False False
$ docker run --tty repotest python -c "import os;print os.isatty(0), os.isatty(1)"
True True
Running Versions:
repo: 1.12.37 (per Ubuntu 16.04 apt-get)
Jenkins: 2.149
Cloudbees Docker Plugin: 1.7.3
Container base is ubuntu:xenial
I'm using the "Build inside a docker container" option.
To run the bash script repo_script.sh "non-interactively", or more precisely without terminals associated with its standard streams, you could simply run it as
repo_script.sh < /dev/null 2>&1 | cat
assuming you want to see the output the way you would when running it simply as repo_script.sh. By piping standard output and error to a different process, those file descriptors appear to repo_script.sh as pipes, not TTYs. You could also direct the output to a file, or even to /dev/null if you do not care about it:
log_file=/dev/null
repo_script.sh < /dev/null > "${log_file}" 2>&1
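To confirm that the TTY association is really broken, you can run the question's own isatty one-liner under the same redirections (Python 2 print syntax, as in the question):
python -c "import os; print os.isatty(0), os.isatty(1)" < /dev/null 2>&1 | cat
# expected output: False False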
Running the script as
bash < repo_script.sh | cat
might work too, though it is a very unorthodox and, to my mind, hackish way of running a script just to break the association of the TTY with standard input. From the script engine's point of view, reading a script program from a file is different from reading it from standard input (which, if it is a terminal, is typically not seekable), so there may be subtle differences that could bite you in unexpected ways. It also does not communicate your intention as clearly to the next person who needs to understand your code, and may lead to partial hair loss in that person due to extraneous head scratching.
There is no need for any bash options; using redirections within the interpreting shell, as described above, is an easy-to-comprehend, multi-platform-compatible standard convention for changing the standard stream associations.
P.S. I think it should be enough for your repo script to just test whether standard input is a TTY. It looks to me like the author of that script did not think this through. There is simply no point waiting for input when no terminal device is associated with standard input: you could conclude from that alone that everything needs to run without user interaction, or stop with an error if that is not possible.
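A sketch of that simpler check, applied to the repo code quoted above (a suggested change, not the project's actual code):
if os.isatty(0) and not self.manifest.IsMirror:
  if opt.config_name or self._ShouldConfigureUser():
    self._ConfigureUser()
  self._ConfigureColor()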

User job not enabled

I have a codestains.conf file in my ~/.init folder:
description "Codestains"
author "Varun Mundra"
start on virtual-filesystems
stop on runlevel [06]
env PATH=/opt/www/codestains.com/current/bin:/usr/local/rbenv/shims:/usr/local/rbenv/bin:/usr/local/bin:/us$
env RAILS_ENV=production
env RACK_ENV=production
setuid ubuntu
setgid sudo
chdir /opt/www/codestains.com
pre-start script
exec >/home/ubuntu/codestains.log 2>&1
exec /opt/www/codestains.com/current/bin/unicorn -D -c /opt/www/codestains.com/current/config/unicorn.rb $
end script
post-stop script
exec kill 'cat /tmp/unicorn.codestains.pid'
end script
I have added https://gist.github.com/bradleyayers/1660182 to /etc/dbus-1/system.d/Upstart.conf to enable Upstart user jobs.
But every time I run
start codestains
sudo start codestains
I get "start: Unknown job: codestains".
I have tried a lot of things available online. Nothing seems to help.
Also,
init-checkconf codestains.conf
gives "File codestains.conf: syntax ok"
I spot one error that is certainly a problem; I do not know if it is the only problem. I haven't made any attempt to test it. However, this bit:
exec kill 'cat /tmp/unicorn.codestains.pid'
is definitely wrong: it would pass the literal string cat /tmp/unicorn.codestains.pid to the kill command, which will not do what you want.
You may have seen an example and missed that it used backtick characters; backticks cause the shell to execute cat /tmp/unicorn.codestains.pid, capture its STDOUT, and interpolate the result where you put them. IOW, it passes the contents of that pid file to the kill command.
Like this:
exec kill `cat /tmp/unicorn.codestains.pid`
Note the subtly different backtick character, which shells (bash, at least) treat specially, as described here: http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_04.html (see the section on "Command substitution").
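For the record, the POSIX $( ) form of command substitution does the same thing and is much harder to misread as a single quote:
exec kill $(cat /tmp/unicorn.codestains.pid)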
HTH

Edit bash PATH variable for Rails application

I have the application minizinc configured in my ~/.bashrc file, and I can call it from bash. I am building a Rails application that calls minizinc from bash, but I cannot get it to work. After executing this:
#cmd = ` bash -c "minizinc #{path} -n 1" `
I get the following error:
bash: minizinc: command not found
How can I change the Rails application user's PATH variable from the application? Or how do I tell the Rails application where this bash application is located?
You have several options here. The one I think best suits your case, and which I would recommend, is using the command directly instead of calling Bash to do what Ruby's backticks already do:
#cmd = `minizinc #{path} -n 1`
If you use it like this, the command is executed in a shell with an environment similar to the one where Ruby is running, which means the PATH variable will be the same. So if the dir containing the minizinc executable is in PATH when you start the Rails server, it should also be in the PATH variable of the shell running that command.
Now, if you really need to use Bash in the middle, it strikes me as odd that the PATH variable is not the same as in Ruby (I tried it using IRB and it seems to work as expected). You can check it by replacing your command with
bash -c "echo $PATH"
It should print the same value as
puts ENV['PATH']
when run in the Rails console.
If, after checking it, you see that the PATH variable of your Rails environment is incorrect, you can set it specifically for the Rails server:
PATH="<path_to_minizinc_dir>:$PATH" rails server
This sets the value of the PATH environment variable only for the command you are about to execute, in this case rails server.
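Since the backtick subshell inherits the Rails process's environment, another option is to prepend to PATH from inside the application before shelling out. A minimal sketch, with the minizinc directory as a placeholder:
ENV['PATH'] = "/full/path/to/minizinc_dir:#{ENV['PATH']}"
cmd = `minizinc #{path} -n 1`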
Alternatively, you can sidestep all of this by simply using the absolute path to the executable:
#cmd = `bash -c "/full/path/to/minizinc #{path} -n 1"`
If you provide the full path to the command you want to execute, the PATH environment variable simply won't come into play, but I imagine this would be suboptimal for your case.

Bash Script to run three different rails apps on the local server?

I have three apps that I want to run with the rails server at the same time, and I also want the option to kill all the servers from one location.
I don't have much experience with Bash, so I'm not sure what command I would use to launch the server for a specific app. Since the script won't be in the app directory, a plain rails s won't work.
From there, I suppose if I can gather the PIDs of the three server processes, I can have the script prompt for user input and, whenever something is entered, kill the three processes. I'm just unsure how to get the PIDs.
Additionally, each app has a few environment variables that I wanted to have different values than those assigned in the apps config files. Previously, I was using export var=value before rails s, but I'm not sure how to guarantee each separate process is getting the right variables.
Any help is much appreciated!
The Script
You could try something like the following:
#!/bin/bash
case "$1" in
start)
    pushd app/directory
    # exec turns each subshell into the rails process itself, so $! is the server's PID;
    # the pid files are written back to the directory the script was run from ($OLDPWD)
    (export FOO=bar; exec rails s ...) & echo $! > "$OLDPWD/pid1"
    (export FOO=bar; exec rails s ...) & echo $! > "$OLDPWD/pid2"
    (export FOO=bar; exec rails s ...) & echo $! > "$OLDPWD/pid3"
    popd
    ;;
stop)
    kill $(cat pid1)
    kill $(cat pid2)
    kill $(cat pid3)
    rm pid1 pid2 pid3
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
exit 0
Save this script into a file such as script.sh and chmod +x script.sh. You'd start the servers with ./script.sh start, and you can kill them all with ./script.sh stop. You'll need to fill in the details in the three lines that start up the servers.
Explanation
First is the pushd: this will change the directory to where your apps live. The popd after the three startup lines will return you to the location where the script lives. The parentheses around (export ...) create a subshell, so the environment variables you set inside them, via export, won't exist outside of the parentheses; the exec then makes each subshell become the rails process itself, and the trailing & puts it in the background. Because pushd has moved us into the app directory, the pid files are written via $OLDPWD back to the directory the script was run from, so the stop case can find them later. Additionally, if your three apps live in different directories, you could put a cd inside each of the three parentheses to move to the app's directory before the rails s. The lines would then look something like: (export FOO=bar; cd app1/directory; exec rails s ...) & echo $! > pid1. Don't forget the semicolons! In this case, you can also remove the pushd and popd lines and write the pid files as plain pid1, pid2, pid3.
In Bash, $! is the process ID of the most recently backgrounded command. We echo that and redirect (with >) to a file called pid1 (or pid2 or pid3). Later, when we want to kill the servers, we run kill $(cat pid1). The $(...) runs a command and substitutes its output inline. Since the pid files only contain the process ID, cat pid1 will just return the process ID number, which is then passed to kill. We also delete the pid files after we've killed the servers.
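As a quick illustration of $! and $(...), using sleep as a stand-in for a server:
sleep 30 &          # start a background job
echo $! > pid1      # $! holds the PID of the backgrounded sleep
kill $(cat pid1)    # $(cat pid1) substitutes the file's contents into kill
rm pid1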
Disclaimer
This script could use some more work in terms of error checking and configuration, and I haven't tested it, but it should work. At the very least, it should give you a good starting point for writing your own script.
Additional Info
My favorite bash resource is the Advanced Bash-Scripting Guide. Bash is actually a fairly powerful language with some neat features. I definitely recommend learning how bash works!
Why don't you try capistrano, a framework for executing commands in parallel on multiple remote machines via SSH? It has lots of recipes to do this.
You are probably better off setting up pow.cx, which would run each server as it's needed, rather than having to spin up and shut down servers manually.
You could use Foreman to run, monitor, and manage your processes.
I realize I'm late to the party here, but after searching the internet for a good solution (finding this page but few others, and none with a full solution) and after trying unsuccessfully to get prax working, I decided to write my own solution to this problem and give it back to the community!
Check out my rdev bash script gist - a bash script you put in your ~/bin directory. This will create a new tab in gnome-terminal for each rails app with the app name and port in the tab's title. It verifies the app launched successfully by checking the port is in use and the process is actually running. It also verifies the rails app shutdown is successful by ensuring the port is no longer in use and the process is no longer running.
Setup is super easy, just change these two config values:
# collection of rails apps you want to start in development (should match directory name of rails project)
# note: the first app in the collection will receive port 3000, the second 3001 and so on
#
rails_apps=(app1 app2 app3 etc)
#
# The root directory of your rails projects (~/ is assumed, do not include)
#
projects_root="ruby/projects/root/path"
With this script you can start all your rails apps with one command, or stop them all, and you can stop, start, and restart individual rails apps as well. While the OP asked to run 3 apps, this will let you run as many as you need, with ports assigned in order starting at 3000 for the first app in the list. Each app is started using the proper ruby version thanks to chruby, and the .env is sourced on the way up so your app will have everything it needs. Once you are done developing, just rdev stop and all your rails apps will be killed and the terminal windows closed.
# Usage Examples:
#
# Show Help
# ~/> rdev
# Usage: rdev {start|stop|restart} [app port]
#
# start all rails apps
# ~/> rdev start
#
# start a single rails app
# ~/> rdev start app port
#
# stop all rails apps
# ~/> rdev stop
#
# stop a single rails app
# ~/> rdev stop app port
#
# restart a single rails app
# ~/> rdev restart app port
For the record, all testing was done on Ubuntu 18.04. This script requires bash, chruby, gnome-terminal, and lsof, and takes advantage of the BASH_POST_RC trick.