Edit bash PATH variable for Rails application

I have the minizinc application configured in my ~/.bashrc, and I can call it from bash. I am building a Rails application that calls minizinc through bash, but I cannot get it to work. After executing this:
#cmd = ` bash -c "minizinc #{path} -n 1" `
I get the following error:
bash: minizinc: command not found
How can I change the Rails application user's PATH variable from the application? Or how do I tell the Rails application where this bash application is located?

You have several options here. The one I think best suits your case, and the one I would recommend, is running the command directly, instead of calling Bash to do what Ruby can already do:
#cmd = `minizinc #{path} -n 1`
If you use it like this, the command is executed in a shell whose environment is inherited from the one Ruby is running in, which means the PATH variable will be the same. So if the directory containing the minizinc executable is in PATH when you start the Rails server, it should also be in the PATH of the shell running that command.
Now, if you really need to use Bash in the middle, it strikes me as odd that the PATH variable is not the same as in Ruby (I tried it in IRB and it seems to work as expected). You can check by replacing your command with
bash -c "echo $PATH"
It should print the same value as
puts ENV['PATH']
when run in the Rails console.
If, after checking it, you see that the PATH variable of your Rails environment is incorrect, you can set it specifically for the Rails server:
PATH="<path_to_minizinc_dir>:$PATH" rails server
This sets the value of the PATH environment variable only for the command you are about to execute, in this case rails server.
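If you would rather have the change persist for the whole shell session, so that every command you run afterwards sees it, you can export it first; a minimal sketch, reusing the same <path_to_minizinc_dir> placeholder:
export PATH="<path_to_minizinc_dir>:$PATH"   # extends PATH for this shell session
rails server                                 # inherits the extended PATH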
Alternatively, you can bypass all of this by simply using the absolute path to the executable:
#cmd = `bash -c "/full/path/to/minizinc #{path} -n 1"`
If you provide the full path to the command you want to execute, the PATH environment variable simply won't come into play, but I imagine this would be suboptimal for your case.
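If you do go down this route, you can find the full path from any interactive shell where minizinc already works; a quick sketch (the path shown is only an example, yours will differ):
command -v minizinc
# /home/you/minizinc/bin/minizinc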

Related

cannot access variable environment from .profile using rails

In my Rails application, I exported my database variables in .profile using Ansible. The variables are accessible via the printenv command. However, when I run the application, or check ENV['NAME'] in rails c, the variables aren't there.
Does anyone have any idea why Rails doesn't load the variables from .profile?
The ~/.profile script is only meant to be read by your (presumably POSIX-compatible) shell, and even then only if it is what is known as a "login" shell. For example, from the bash man page:
When invoked as an interactive login shell, or a non-interactive shell with the --login option, it first attempts to read and execute commands from /etc/profile and ~/.profile, in that order.
If you're running your rails application from an interactive shell prompt it should have access to the env vars you're setting in ~/.profile. If you're starting your rails app some other way (e.g., from your window manager) then you'll need to find some other way to set the env vars that it inherits.
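For instance, if you start the app through a script you control, one minimal sketch (the exact rails invocation is illustrative) is to load ~/.profile explicitly before handing control to Rails:
#!/bin/sh
# Load the exported variables, then replace this shell with the Rails server,
# which inherits them.
. "$HOME/.profile"
exec rails server -e production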

Could not login with bash shell by default

I want to run a Ruby on Rails application. When I try to run it, it shows me this:
The program 'rails' is currently not installed. You can install it by typing:
sudo apt install ruby-railties
So I looked into it and found that the problem is that I am not logged into the bash shell. My terminal could not execute 'ruby' or 'ruby on rails' scripts. I checked the .bashrc and .bash_profile files to see whether the PATH variable is set to point to rvm.
When I did,
/bin/bash -l
it shows that ruby and rails are installed on the system, and I can start the Rails server successfully. But if I open another terminal window, the same problem occurs. Basically, I want to log into the bash shell by default. Please help me sort this out. Thanks!
If you are sure the location of your bash shell is /bin/bash you could use this command (replacing "username" with your username):
chsh -s /bin/bash username
That will change your default shell on most Unix-like operating systems.
Afterwards you can verify it by checking /etc/passwd, where you will see the default shell at the end of the line for your username.
Warning: Try it first with a new user, in order to avoid losing your shell access if the path to bash is different :-)
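For instance, a quick check could look like this (substitute your actual username; the output line is only an example):
grep '^username:' /etc/passwd
# username:x:1000:1000:User Name:/home/username:/bin/bash   <- the last field is the login shell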

csh script as executable does not setenv

I am not able to set environment variables through an executable csh/tcsh script. The variable is set inside the executable csh/tcsh script "myscript", whose contents are:
setenv MYVAR /abc/xyz
but the variable is not set in the calling shell, which reports "Undefined variable".
I made the csh/tcsh script executable with the following shell command:
chmod +x /home/xx/bin/myscript
and the path is updated with:
set path = (/home/xx/bin $path)
which myscript
/home/xx/bin/myscript
When I run the script on the command line and echo the variable:
myscript
echo $MYVAR
MYVAR "Undefined variable"
but if i source on command line
source /home/xx/bin/myscript
echo $MYVAR
/abc/xyz
You need to source your code rather than execute it, so that it is evaluated by the current shell, which is the one whose environment you want to modify.
You can of course embed
source /home/xx/bin/myscript
within your .cshrc
The script does not need to be executable or have a #! shebang line (though they don't hurt).
This is not how environment variables work.
An environment variable is set for a process (in this case, tcsh) and is passed on to all of its child processes. So when you do:
$ setenv LS_COLORS foo
$ ls
You first set LS_COLORS for the tcsh process; tcsh then starts the child process ls, which inherits tcsh's environment (including LS_COLORS) and can then use it.
However, what you're doing is setting the environment in a child process and then expecting to propagate that back to the parent process (somehow). This is not possible. It has nothing to do with tcsh; it works like this for any process on the system.
It works with source because source reads a file, and executes it line-by-line in the current process. So it doesn't start a new tcsh process.
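The same rule can be demonstrated in any shell; a quick sketch in bash, purely for illustration:
bash -c 'export MYVAR=/abc/xyz; echo "child sees: $MYVAR"'   # the child process sets and sees it
echo "parent sees: ${MYVAR:-unset}"                          # the parent still prints "unset"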
I will leave it as an exercise to you to think through the implications if this were possible :-) Do you really want to deal with unwise shell scripts that set random environment variables in your shell? And what about environment variables set by a PHP process; should those go back into the parent httpd process? :-)
You didn't really describe what goal you're trying to achieve, but in general, you want to do something like:
#!/bin/csh -f
# ... Do stuff ...
echo "Please copy this line to your environment:"
echo "setenv MYVAR $myvar"

How to add PATH variable to sudo in Fabric

When I tried to use Fabric to deploy an Apache server remotely, I encountered a problem. I tried to add a new path to the PATH variable first using sudo(), and then tried to echo $PATH using sudo() as well. However, it looks like the new path wasn't added to PATH at all. As a result, I cannot execute the binaries in that path via sudo().
[name#IP:port] Executing task 'reboot'
[name#IP:port] sudo: export PATH=$PATH:/new/path/to/add/install/bin
[name#IP:port] out: sudo password:
[name#IP:port] sudo: echo $PATH
[name#IP:port] out: sudo password:
[name#IP:port] out: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Could anyone tell me how to add a path variable to sudo command in Fabric? Thanks in advance.
1) It should be a habit to always give the full path to an executable when running it as root, to avoid having trojan horses pushed into your PATH.
2) Setting an environment variable via export works only for the current shell session, which here is the one invoked by sudo. Once your command (export, in this case) has run, that shell exits and takes your environment variable with it. The next time you execute sudo, a new shell (with a default environment) is set up, which knows nothing about your previous export.
3) The configuration file /etc/sudoers usually contains an entry like Defaults env_reset, the effect of which is that environment variables set in the calling environment are not copied into the environment sudo sets up. So calling export in your current environment and then executing sudo does not work either. This is done for security reasons (see 1) above).
4) It is possible to set up /etc/sudoers to make exceptions to 3), via env_keep. Refer to man sudoers for details. However, see 1); it is not a good idea.
5) There is the -E option to sudo, which allows keeping the caller's environment (including, e.g., an extended PATH), but this requires SETENV to be set in /etc/sudoers. Again, refer to man sudoers for details, and be mindful of 1).
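To make 5) concrete, a small sketch (whether PATH in particular survives also depends on any secure_path setting in /etc/sudoers):
sudo -E sh -c 'echo "$PATH"'   # -E asks sudo to preserve the caller's environment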
Use
sudo('PATH=$PATH:/new/path/to/add/install/bin command')
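A plain-shell sketch of the same idea, where some-command stands in for whatever you actually need to run: set PATH as part of the command that sudo executes, so env_reset never gets a chance to strip it (this assumes your sudoers policy lets you run env):
sudo env PATH="$PATH:/new/path/to/add/install/bin" some-command
# to verify what the command run under sudo actually sees:
sudo env PATH="$PATH:/new/path/to/add/install/bin" sh -c 'echo "$PATH"'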

Ruby background process STDOUT is empty

I'm having a weird issue with a start-up script that runs a Sinatra script using the daemon function from /etc/init.d/functions. The problem is this: when I run the command at the command line, I get output to STDOUT. If I run the command at the command line exactly as it appears in the script, minus the daemon part, the output is correctly redirected to the output file. However, when the startup script runs it (see below), I get output in the STDERR log but nothing in the STDOUT log.
The relevant lines of the script:
#!/bin/sh
# (which is, and has been, a symlink to /bin/bash)
# Source function library.
. /etc/init.d/functions
# Set Some Variables
RUNAS="joeuser"
PID=/var/run/myapp.pid
LOG="/var/log/myapp/app-out.log"
ERR_LOG="/var/log/myapp/app-err.log"
APPLICATION_COMMAND="RAILS_ENV=production ruby /opt/myapp/lib/daemons/my-sinatra-app.rb -p 8002 2>>${ERR_LOG} >>${LOG} &"
# Snip a bunch. This is the applicable line from the "start" case:
daemon --user $RUNAS --pidfile $PID $APPLICATION_COMMAND &> /dev/null
Now, the funky parts:
The error log is written to correctly via the redirect of STDERR.
If I reverse the order of the >> and the 2>> (I'm grasping at straws, here!), the behavior does not change: I still get STDERR logged correctly and STDOUT is empty.
If the output log doesn't exist, the STDOUT redirect creates the file. But, the file remains 0-length.
This used to work. The log directory is maintained by logrotate. All of the more recent 'out' logs are zero-length; the older ones are not. It seems to have stopped working sometime in April. The Ruby code didn't change anywhere near then; neither did the startup script.
We're running three different services this way. Two of them are Ruby daemons (one uses Sinatra, one does not) and the other is a background Java process. This is occurring for BOTH of the Ruby processes but is not happening with the Java process. Maybe something changed in Ruby?
For the record, we're on Ruby 1.8.5 and RHEL 5.4.
I've done some more probing. The daemon function does a bunch of stuff, but the meat of the matter is that it runs the program using runuser. The command essentially looks like this:
runuser -s /bin/bash - joeuser -c "ulimit -S -c 0 >/dev/null 2>&1 ; RAILS_ENV=production ruby /opt/myapp/lib/daemons/my-sinatra-app.rb -p 8002 '</dev/null' '>>/var/log/myapp/app-out.log' '2>>/var/log/myapp/app-err.log' '&'"
When I run exactly that at the command line (both with and without the single quotes that got added somewhere along the line), I get the exact same screwy behavior with respect to the output log. So it seems to me that this is an issue of how Ruby (?) interacts with runuser?
Too long to put in a comment :-)
Change the shebang to #!/bin/sh -x and verify that everything is expanded according to your expectations. Also, when executing from a terminal your .bashrc file is sourced, but when executing from the script it is not, so there might be something in your environment that differs. One way to find out is to run env from the terminal and from the script and diff the output:
env > env_terminal
env > env_script
diff env_terminal env_script
Happy hunting...
