Rails Env Issue - ruby-on-rails

So the other day I was trying to replicate some of the tests that were being run by CircleCI, and before some of the commands I called:
export RAILS_ENV=test
export RACK_ENV=test
and now I guess the problem is that I'm stuck in the test environment.
Also, I've tried running those same commands with each of the vars set to development, yet to no avail.
Any ideas?

Try 'unsetting' those env variables:
export RAILS_ENV=
export RACK_ENV=
or restart the server from a fresh terminal session to get back to the defaults.
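Note that export RAILS_ENV= leaves the variable defined but set to an empty string, which some tools treat differently from a variable that is not set at all. A minimal sketch, assuming a bash-like shell:
# remove the variables entirely instead of emptying them
unset RAILS_ENV
unset RACK_ENV
# confirm they are gone (prints an empty line)
echo "$RAILS_ENV$RACK_ENV"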

Related

Deployed DigitalOcean app isn't reading environment variables

I have deployed an app to digitalocean using the Ruby on Rails image. It is set up by default with a user called rails. My rails.service file looks like:
[Unit]
Description=OneMathsExamQuestions
Requires=network.target
[Service]
Type=simple
User=rails
Group=rails
WorkingDirectory=/home/rails/one_maths_exam_questions/
ExecStart=/bin/bash -lc 'bundle exec puma'
TimeoutSec=30s
RestartSec=30s
Restart=always
[Install]
WantedBy=multi-user.target
I need to use some environment variables in my application. So I have added some lines to my /home/rails/.bashrc and /root/.bashrc files (I suspect only the first one should be necessary but neither seems to work):
export A="val1"
export B="val2"
...
Now: if I call echo $A in a terminal I get the expected output. If I go into the Rails console and do ENV["A"] I get the expected output. But my app does not seem to behave correctly (the desired behaviour is connecting to Amazon S3; the exact error is not important).
If I go into my controller and explicitly log the env vars with Rails.logger.debug ENV I just get ENV, and Rails.logger.debug ENV["A"] returns empty string (I guess nil). Similarly if I try to do ENV["RAILS_ENV"] which should definitely work, I get the same. But Rails.env returns "development", as expected.
Moreover, if I explicitly write
ENV["A"] = "val1"
ENV["B"] = "val2"
...
in my config/application.rb, the app works correctly. But this is obviously not a permanent solution, since I can't commit this to version control.
I'm not using the figaro gem, which I think a lot of places are suggesting, but I don't see why I should have to since it works just fine on my local machine.
OK, it looks like if I export my environment variables in .profile then they are picked up by the server no problem. If I remove them from .bashrc the server still has no problems, but then I can't get at the variables in a terminal. I guess they just do different things?
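They do: the unit's ExecStart runs bash -lc, i.e. a login shell, which reads ~/.profile, while an interactive terminal reads ~/.bashrc. On Ubuntu-style setups the stock ~/.bashrc also returns early for non-interactive shells, so exports placed below that guard never reach the service. A sketch of the guard in question (the exact lines vary by distro):
# near the top of a typical ~/.bashrc
case $- in
    *i*) ;;       # interactive shell: carry on
    *) return;;   # non-interactive (e.g. the systemd-started bash): stop here
esac
If you would rather not rely on shell startup files at all, systemd can inject the variables directly; the file path below is just a placeholder:
# rails.service, under [Service]
EnvironmentFile=/home/rails/one_maths_exam_questions/app.env
# app.env contains plain KEY=value lines, no export keyword
A=val1
B=val2
After editing the unit, run systemctl daemon-reload and restart the service.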

How to run initialization script on starting a Rails server?

I have a simple shell script (happy.sh) that I am currently running by hand (. ./happy.sh) every time I restart a Rails server. It sets some environment variables that I need because of various APIs I'm using. However, I was wondering if there's some way of having Rails run the script itself whenever I run "guard" or "rails s"?
Thanks
If you use foreman, you can define all the processes you need started on application start in a Procfile (including bundle exec rails server -p $PORT).
By calling foreman start, all the processes start up.
More information is available in the RailsCast on foreman.
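A minimal Procfile sketch, with a guard process alongside the web server purely for illustration; foreman also loads a .env file from the same directory when it starts, which is a convenient home for the exports that happy.sh currently sets (the key below is a placeholder):
# Procfile
web: bundle exec rails server -p $PORT
guard: bundle exec guard
# .env, read automatically by foreman start
HAPPY_API_KEY=example-value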
The proper way of setting ENV variables is to put them in .bash_profile or .bashrc, depending on your Linux distro.
vi ~/.bash_profile
And then add something like
export MY_RAILS_VAR=123
Then you don't need to run any ENV initialization scripts on rails start.

delayed_job not seeing environment variables

A Rails 4 application is deployed to the production server using Capistrano 3. The delayed_job daemon starts. When a job is created, it does not recognize environment variables set in /etc/profile (reading them via ENV returns nil). Any thoughts?
Moved the environment variables from /etc/profile to /etc/environment and that seemed to fix the problem. Maybe that's the way I was supposed to do it in the first place... I'm not a Linux expert.
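For what it's worth, /etc/profile is a shell script that only login shells source, whereas /etc/environment is read by pam_env for every PAM session, including the SSH session Capistrano uses to start the daemon, which is likely why the move helped. The formats also differ; a minimal /etc/environment sketch (the variable name is a placeholder):
# /etc/environment is not a shell script: plain KEY=value lines, no export keyword
MY_API_KEY=some-value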

Capistrano deployment on Windows machine

I have been trying to deploy a Rails application using Capistrano. However, I am not able to reach the end of the deployment, and there is no obvious way to figure out why. The script returns an error code 256 and stops at the following line in the deploy output:
Command bundle exec rake assets:clean && EXECJS_RUNTIME='Node' JRUBY_OPTS='-J-d32 -X-C' bundle exec rake assets:precompile returned status code 256
There is no further explanation. Has anybody faced a similar issue while deploying on Windows?
You should provide more information on the context of your situation. For instance, show the relevant code from your config/deploy.rb file. That could tell us more about a possible source of the problem.
As a tip, when facing Capistrano deployment bugs, use the debugging flag to get more verbose output and step through the process:
cap deploy:cold -d
Anywho... I was facing a similar problem while optimizing my Capistrano deploy times. If you only want to modify your assets:precompile task to include your custom EXECJS_RUNTIME and JRUBY_OPTS values, you could try this in config/deploy.rb:
set :jruby_opts, "-X-C"
set :asset_env, "RAILS_GROUPS=assets EXECJS_RUNTIME='Node'"

Capistrano: Can I set an environment variable for the whole cap session?

I've got a staging server with both standard Ruby and Ruby Enterprise installed. As standard Ruby refuses to install a critical gem, I need to set $PATH so that ruby/gem/rake/etc. always refer to the REE versions. And since I use Capistrano to deploy to our machines, I need to do it in Capistrano.
How can I set an environment variable once, and have it persist throughout the Capistrano session?
1) It's easy to do in bashrc files, but Capistrano doesn't read bashrc files.
2) I'd use Capistrano's
default_environment['PATH'] = 'Whatever'
but Capistrano uses these environment variables like
env PATH=Whatever command arg ...
and they're lost whenever another shell is spun up within the executable passed to env. Like when you use sudo. Which is kinda important:
[holt@Michaela trunk]$ env VAR=hello ruby -e "puts ENV['VAR']"
hello
[holt@Michaela trunk]$ env VAR=hello sudo ruby -e "puts ENV['VAR']"
nil
3) And I can't use the bash export command either - Capistrano seems to start up a new shell for each command (or something like that), so anything exported is lost as well:
cap> export MYVAR=12
[establishing connection(s) to xxx.xxx.xxx.xxx]
cap> echo $MYVAR
** [out :: xxx.xxx.xxx.xxx]
cap>
4) I've tried messing with Capistrano's :shell and :pty options as well (and in combination with the other approaches), but no luck there, either.
So - what's the right way to do this? This seems like such a basic task that there should be a really simple way to accomplish it, but I'm out of ideas. Anyone?
Thanks in advance!
I have exactly the same problem, but I think this solution is better:
set :default_environment, {
  'env_var1' => 'value1',
  'env_var2' => 'value2'
}
This works for me like a charm.
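For the PATH problem in the original question, the same mechanism can point every remote command at the REE binaries. A sketch for config/deploy.rb, assuming a hypothetical REE install under /opt/ruby-enterprise:
# prepend the REE bin directory so ruby/gem/rake resolve to the REE versions
set :default_environment, {
  'PATH' => '/opt/ruby-enterprise/bin:/usr/local/bin:/usr/bin:/bin'
}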
If you need to set a variable on the remote host other than PATH, you should know that sshd only allows certain /etc/profile or ~/.bashrc environment variables by default, for security reasons. As Lou said, you can either do cap shell and use the cap> printenv command, or you can do cap COMMAND=printenv invoke in one command.
If you see the variable when you ssh into the remote shell normally, but you don't see it in the cap printenv command, here's one solution:
Set PermitUserEnvironment yes in your remote server's /etc/ssh/sshd_config file, and restart sshd
Edit the ~/.ssh/environment file for the remote user you are ssh'ing in as, and put your variable(s) there as VARIABLE=value
Now those should show up when you do cap COMMAND=printenv invoke
I think you have in fact 2 problems:
1) You want to change the PATH on your remote host(s).
Alter/set the path in your .bashrc on your remote host(s) and run printenv from the cap shell. If your path is right, go to #2; otherwise try adding export BASH_ENV=~/.bashrc to your /etc/profile (be careful: ~/.bashrc will then be run for every non-interactive shell, for all users).
2) You want sudo to keep your PATH
Run visudo on your remote host(s) and add:
Defaults exempt_group = "<your_user>"
I needed to set an environment variable for a specific task to work. The "run" command allows you to pass options which include :env:
run "cmd", :env => { 'name' => 'value' }
In my case, I wanted to add the environment variable to a task that I didn't write, so I used default_run_options which is used by all invocations of run. I added this to the top of my Capfile:
default_run_options[:env] = { 'name' => 'value' }
I tried unsuccessfully to use @brian-deterling's technique, which is pretty commonly used by others who have discussed this... Maybe I'm doing something wrong, but meanwhile I found the dotenv-rails gem, and it worked very nicely for loading values out of a .env file in my project root.
The instructions on their GitHub repo are pretty straightforward. I added Dotenv.load to my config/application.rb.
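A minimal sketch of that setup (dotenv-rails normally loads .env automatically through its railtie; the explicit Dotenv.load mirrors what is described above, and the key in .env is a placeholder):
# Gemfile
gem 'dotenv-rails', groups: [:development, :test]
# config/application.rb, after Bundler.require so the Dotenv constant is available
Dotenv.load
# .env in the project root (kept out of version control)
S3_BUCKET=my-bucket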
