We're running Elastic Beanstalk (64bit Amazon Linux 2016.09 v2.3.1 running Ruby 2.3 (Puma)) with a Rails app.
The app log is written to /var/app/current/log/production.log, as is standard. In the standard EB configuration, that file is symlinked into /var/app/containerfiles/logs/ and used for rotation and upload to S3.
For some reason, production.log appears to be overwritten or truncated every time we run eb deploy, which seems unintended.
Have we misconfigured something, and how would you suggest we debug this?
We came to the (perhaps obvious) conclusion that there is no log magic to EB deploys. A deploy simply replaces the /var/app/current/ directory, including /var/app/current/log, thereby deleting all existing logs.
Our solution therefore was to place the logs in a separate folder and patch EB to know where they live. By overriding the production.log symlink in app_log_dir (/var/app/containerfiles/logs/), we still rely on EB's normal procedure for rotation and publishing to S3.
.ebextensions/log-rotation.config
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/01a_override_log_symlinks.sh":
    mode: "000777"
    content: |
      #!/bin/bash
      EB_APP_LOG_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_log_dir)
      CUSTOM_APPLOG_DIR=/var/log/applog
      # Create the persistent log directory, owned by the app user
      mkdir -p $CUSTOM_APPLOG_DIR
      chown webapp $CUSTOM_APPLOG_DIR
      chmod 777 $CUSTOM_APPLOG_DIR
      # Repoint EB's rotated/published logs at the persistent copies
      cd $EB_APP_LOG_DIR
      ln -sf $CUSTOM_APPLOG_DIR/production.log production.log
      ln -sf $CUSTOM_APPLOG_DIR/development.log development.log
config/environments/production.rb
...
# Specific for Rails 5!
config.paths['log'] = "/var/log/applog/#{Rails.env}.log"
...
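To sanity-check the override after a deploy, something like this should show the new symlink chain (a quick check on the instance, using the paths configured above):

ls -l /var/app/containerfiles/logs/production.log   # should now point into /var/log/applog
tail -n 5 /var/log/applog/production.log            # confirm the app is writing here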
I was also really surprised when I found this; it seems to be the reverse of what we want. The master copies of the log files should live under /var/app/containerfiles.
On Amazon Linux 2, I just added a post-deploy hook to switch them back around, and it seems to work great... I had to do the same thing with Amazon Linux 1 (AMI) as well.
This is the contents of .platform/hooks/postdeploy/logs.sh:
#!/bin/bash
# Switch the master location of the log files to /var/app/containerfiles/logs/,
# with a symlink into /var/app/current/log/, so logs are kept between deploys.
# Effectively reversing this line of the deploy log:
#   [INFO] adding builtin Rails logging support
#   [INFO] log publish feature is enabled, setup configurations for rails
#   [INFO] create soft link from /var/app/current/log/production.log to /var/app/containerfiles/logs/production.log
if [ -L /var/app/containerfiles/logs/production.log ]; then
  # EB left the symlink on the containerfiles side; replace it with the real file
  unlink /var/app/containerfiles/logs/production.log
  mv /var/app/current/log/production.log /var/app/containerfiles/logs/production.log
fi
touch /var/app/containerfiles/logs/production.log
ln -sf /var/app/containerfiles/logs/production.log /var/app/current/log/production.log
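One gotcha: Elastic Beanstalk only runs .platform hooks that are marked executable, so set the file mode before deploying:

chmod +x .platform/hooks/postdeploy/logs.sh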
I have two users on my server, an Ubuntu 12.04 virtual server that I manage myself:
projectx is used to deploy the application and is the user/group for most files in /var/www/projectx
projectx_rails is used to run the Rails application. That way, the running Rails application doesn't have access to modify the source code.
Some directories, like public/uploads, are configured to belong to projectx_rails:projectx_rails, so that the Rails app can write the uploaded files.
My problem concerns the tmp directory. This directory is located in /var/www/projectx/shared and linked into each release in the usual Capistrano way of handling releases. The problem is that some files created during deployment are not writable by the running Rails app, and files created by the Rails app are not writable by the deployment process.
Is there a way to handle this? Having all the files there belong to projectx_rails:projectx_rails and be group writable would be good enough, but I'm not sure how to trigger this.
I'm using: Capistrano 3, Rails 3.2, Ruby 2.1.2, Unicorn 4.8.3, nginx.
Well, this is my theory. It is obviously hard to test on my end, so consider it conjecture.
First: make a group that both users belong to. Like projectx_shared.
Second: make this group the group owner of the tmp directory:
chown projectx_rails:projectx_shared tmp
Third: set the setgid bit on this directory:
chmod g+s tmp
Now, the group owner of files added to tmp should be set to projectx_shared automatically. I think this will apply to Capistrano tasks as well.
I'm assuming that when you deploy, files already get rw-rw-r-- permissions automatically. If not, you'll need to set your umask to 002 in, e.g., your .bashrc as well.
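Putting the steps together, a minimal sketch of the whole setup (run as root; the user, group, and path names are taken from the question):

# create the shared group and add both users to it
groupadd projectx_shared
usermod -aG projectx_shared projectx
usermod -aG projectx_shared projectx_rails
# hand tmp to the shared group and set the setgid bit
chown projectx_rails:projectx_shared /var/www/projectx/shared/tmp
chmod g+s /var/www/projectx/shared/tmp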
Let me know if it works...
Maybe use ACLs for the shared files? The only thing is that you have to enable ACL support in fstab.
setfacl -m d:u:projectx:rwx,u:projectx:rwx,\
d:u:projectx_rails:rwx,u:projectx_rails:rwx /var/www/projectx/shared/tmp
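For reference, a sketch of the fstab side (the device and filesystem type here are placeholders; edit the existing line for whatever filesystem holds /var/www):

# /etc/fstab -- add "acl" to the mount options
/dev/sda1  /  ext4  defaults,acl  0  1

# apply without a reboot:
mount -o remount,acl /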
You can run commands on the remote machine through Capistrano. You could run a directory owner change after, let's say, symlinking the application.
In your deploy.rb file, add a callback for it:
after 'deploy:create_symlink' do
  run "chown -R projectx_rails:projectx_rails #{current_release}/tmp"
end
My current solution is to have this task:
namespace :deploy do
  desc "Fix permissions"
  task :fix_permissions do
    on roles(:app) do
      execute "sudo chown -R projectx_rails:projectx_rails #{shared_path}/tmp"
      execute "sudo chmod ug+rwX,o+rw #{shared_path}/tmp"
    end
  end
end
and run it both at the beginning and the end of my deployment:
after "deploy:started", "deploy:fix_permissions"
before "deploy:restart", "deploy:fix_permissions"
and to make it work I had to add this to my sudoers:
projectx ALL=NOPASSWD: /bin/chown -R projectx_rails\:projectx_rails /var/www/projectx/shared/tmp
projectx ALL=NOPASSWD: /bin/chmod ug+rwX\,o+rw /var/www/projectx/shared/tmp
which makes me rather uncomfortable.
1) ensure both projectx and projectx_rails are members of the group projectx
2) add this to deploy:
task :change_tmp_pems do
  run "chmod -Rf 775 #{shared_path}/tmp"
end
after "deploy:started", :change_tmp_pems
The -f will silently fail / skip any files it doesn't have access to, so that won't be an issue.
Four lines of code, pretty succinct.
Don't mess about with chown, as it normally requires sudo and is unnecessary here.
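To double-check the group setup before deploying (a quick sanity check; both lists should include projectx):

id -nG projectx
id -nG projectx_rails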
I have a multi-app system running on a CentOS box, consisting of our main app and a deployer app. When a client wants a new instance of our app, they use the deployer and fill in some info, and the new install is created on our server. The issue I am having is that I can't get nginx to reload its config file automatically, so after the deploy, visiting the new app returns a 404 until I reload manually.
I've tried a few different ways, including chmodding /opt/nginx/sbin/nginx to 777, and chmodding the install script and deployer app to 777.
the script goes like this:
#create install directory -- works correctly
#copy files over -- works correctly
#run install script
## -- and then at this point I've tried multiple lines, including:
system("nginx -s reload") ## this works manually
system("/etc/init.d/nginx reload") ## this works manually
I've followed the directions here: Restart nginx without sudo? to create a script that runs without a sudo password, and then tried this:
system("sudo /var/www/vhosts/deployer/lib/nginx_reload")
Nothing seems to work. I'm assuming this is a permissions error, but maybe I'm wrong. If anyone could point me in any direction, that would be very helpful, since I've been trying to figure this out for a few days too long and I'm fresh out of new ideas.
sudo /etc/init.d/nginx reload
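For the passwordless-sudo route the question already links to, the sudoers entry would look something like this (the username "deployer" is an assumption; use whatever user the deployer app runs as):

# /etc/sudoers.d/nginx-reload -- the user "deployer" is hypothetical
deployer ALL=(root) NOPASSWD: /etc/init.d/nginx reload

With that in place, system("sudo /etc/init.d/nginx reload") from the deployer app should succeed without prompting.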
I am using the Juggernaut push server. How do I start Redis and Juggernaut in production mode? If I just run
juggernaut
or
redis-server
they keep showing me logs etc. in the foreground.
I am using Ruby on Rails 3.
EDIT
I followed these two guides to set up Juggernaut and Redis on the production server.
It seems like both servers are running smoothly now. But how can I access
:8080/application.js for Juggernaut?
I tried
my_ip:8080/application.js but got nothing.
For hosting I am using Linode.
EDIT2
When I try to stop/start the Redis server it gives me output, i.e.:
Starting/Stopping redis-server: redis-server.
But nothing when I do the same for Juggernaut. Check the screenshot.
EDIT
I can't see any log for Juggernaut. There is one for Redis, but nothing for Juggernaut.
EDIT
Executable file permissions on the /etc/init.d/juggernaut file -- YES
-rwxr-xr-x 1 fizzy fizzy 1310 Sep 19 11:06 juggernaut
PIDFILE=/var/run/juggernaut.pid is defined. Does that exist? -- NO
In the 'start' part it runs 'chown juggernaut:juggernaut'. Does the user juggernaut exist, and is it a member of the group juggernaut? -- YES/YES
cat /etc/group
redis:x:1002:
juggernaut:x:113:
groups juggernaut
juggernaut : juggernaut
EDIT
fizzy@li136-198:~$ sudo ls -l /usr/bin/juggernaut
ls: cannot access /usr/bin/juggernaut: No such file or directory
fizzy@li136-198:~$ sudo ls -l /usr/local/bin/juggernaut
lrwxrwxrwx 1 root root 40 Sep 20 02:48 /usr/local/bin/juggernaut -> ../lib/node_modules/juggernaut/server.js
I tried changing
DAEMON=/usr/bin/juggernaut
to
DAEMON=/usr/local/bin/juggernaut
after that I tried restarting Juggernaut using
sudo /etc/init.d/juggernaut start
The server started, but not as a background process/service.
EDIT
Running the script in debugging mode, i.e. changing the shebang line at the top to add a -x, e.g.
#!/bin/bash -x
Here is the output:
+ PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
+ DAEMON=/usr/bin/juggernaut
+ NAME=Juggernaut2
+ DESC=Juggernaut2
+ PIDFILE=/var/run/juggernaut.pid
+ test -x /usr/bin/juggernaut
+ exit 0
EDIT
Changed the path of my Juggernaut, since it seems my Juggernaut is installed somewhere else. Now here is the output:
fizzy@li136-198:~$ sudo /etc/init.d/juggernaut start
+ PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
+ DAEMON=/usr/local/bin/juggernaut
+ NAME=Juggernaut2
+ DESC=Juggernaut2
+ PIDFILE=/var/run/juggernaut.pid
+ test -x /usr/local/bin/juggernaut
+ set -e
+ case "$1" in
+ echo -n 'Starting Juggernaut2: '
Starting Juggernaut2: + touch /var/run/juggernaut.pid
+ chown juggernaut:juggernaut /var/run/juggernaut.pid
+ start-stop-daemon --start --quiet --umask 007 --pidfile /var/run/juggernaut.pid --chuid juggernaut:juggernaut --exec /usr/local/bin/juggernaut
20 Sep 06:41:16 - Your node instance does not have root privileges. This means that the flash XML policy file will be served inline instead of on port 843. This will slow down initial connections slightly.
20 Sep 06:41:16 - socket.io ready - accepting connections
node.js:134
throw e; // process.nextTick error, or 'error' event on first tick
^
Error: EADDRINUSE, Address already in use
at Server._doListen (net.js:1106:5)
at net.js:1077:14
at Object.lookup (dns.js:153:45)
at Server.listen (net.js:1071:20)
at Object.listen (/usr/local/lib/node_modules/juggernaut/lib/juggernaut/server.js:51:21)
at Object.listen (/usr/local/lib/node_modules/juggernaut/lib/juggernaut/index.js:9:10)
at Object.<anonymous> (/usr/local/lib/node_modules/juggernaut/server.js:21:12)
at Module._compile (module.js:402:26)
at Object..js (module.js:408:10)
at Module.load (module.js:334:31)
+ echo failed
failed
+ exit 0
You probably want to start Juggernaut and Redis as a service / background process. Starting them as services gives you the opportunity to redirect the logs to a file, which you can then inspect regularly.
To create a service that automatically starts at boot time, you have to do different things based on the OS you're using:
In Linux you can add an init.d script (Juggernaut Ubuntu example, Redis Ubuntu example)
In Mac OS X you use launchd.
In Windows you use this method.
After creating the services, either add them to the default runlevel (so they start automatically at boot time) or start them manually.
Adding a service to the default runlevel (Linux) is also part of both of the Linux tutorials above:
sudo update-rc.d -f juggernaut defaults
sudo update-rc.d -f redis-server defaults
After adding the service to the default runlevel, you still need to start the service manually (Linux):
sudo /etc/init.d/juggernaut start
sudo /etc/init.d/redis-server start
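To confirm both daemons actually came up (the redis-server init script normally supports a status action; the Juggernaut script from the tutorial may not, so fall back to ps):

sudo /etc/init.d/redis-server status
ps aux | grep [j]uggernaut   # the [j] keeps grep from matching itself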
I ran into the same problem (using Ubuntu 12.04 LTS). Using upstart did it for me.
Create a file 'juggernaut.conf' containing:
start on filesystem and started networking
stop on shutdown
script
  # We found $HOME is needed. Without it, we ran into problems
  export HOME="/root"
  exec /usr/local/bin/juggernaut >> /var/log/juggernaut.log 2>&1
end script
Save this file in /etc/init/ (not init.d) and make it executable (chmod +x). That's it; Juggernaut runs as a daemon once the server has started.
A note: in addition to Juggernaut's own juggernaut.log, there is a juggernaut.log located in /var/log/upstart/ where upstart writes information about its attempts to start Juggernaut.
I more or less copy-pasted the above script from this blog. However, the script shown there started with:
start on startup
This did not work for me, because the filesystem was not mounted properly at startup, so it was impossible to create juggernaut.log (read-only filesystem error). Credits to this post on Server Fault for solving that.
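Once the .conf file is in place, upstart's own tools can manage the job (standard upstart commands):

sudo start juggernaut     # or: sudo initctl start juggernaut
sudo status juggernaut
sudo stop juggernaut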
Hey dudes, I am having a problem while symlinking. I have successfully deployed a Ruby on Rails application on a server, and all the migrations are done. It is deployed with Phusion Passenger. The application is in /home/username/rails_apps/myapp. I want to symlink it to a subdomain on my site. The path to the subdomain is /home/username/public_html/subdom. So I used this command to symlink it:
ln -s '/home/username/rails_apps/myapp/public/' '/home/username/public_html/subdom'
When it is done, the app is served at http://subdom.maxsy.net/public
but it is supposed to be accessible at http://subdom.maxsy.net/
Does anybody have a sensible explanation for this problem? Thanks
If /home/username/public_html/subdom already exists as a directory, the symlink does not replace the directory: instead, you get /home/username/public_html/subdom/public as a symlink pointing to /home/username/rails_apps/myapp/public.
Since it appears that you really do want to replace /home/username/public_html/subdom with the symlink, you must first remove the /home/username/public_html/subdom directory before running ln -s /home/username/rails_apps/myapp/public /home/username/public_html/subdom.
I think you just have one extra /, and possibly an existing subdom
rm -rf /home/username/public_html/subdom
ln -s /home/username/rails_apps/myapp/public /home/username/public_html/subdom
I have a Rails script that I would like to run daily. I know there are many approaches, and that a cron'd script/runner approach is frowned upon by some, but it seems to meet my needs.
However, my script is not getting executed as scheduled.
My application lives at /data/myapp/current, and the script is in script/myscript.rb. I can run it manually without problem as root with:
/data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb
When I do that, the special log file (log/myscript.log) gets logged to as expected:
Tue Mar 03 13:16:00 -0500 2009 Starting to execute script...
...
Tue Mar 03 13:19:08 -0500 2009 Finished executing script in 188.075028 seconds
I have it set to run with cron every morning at 4 am. root's crontab:
$ crontab -l
0 4 * * * /data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb
In fact, it looks like it's tried to run as recently as this morning!
$ tail -100 /var/log/cron
...
Mar 2 04:00:01 hostname crond[8894]: (root) CMD (/data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb)
...
Mar 3 04:00:01 hostname crond[22398]: (root) CMD (/data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb)
...
However, there is no entry in my log file, and the data that it should update has not been getting updated. The log file permissions (as a test) were even set to globally writable:
$ ls -lh
total 19M
...
-rw-rw-rw- 1 myuser apps 7.4K Mar 3 13:19 myscript.log
...
I am running on CentOS 5.
So my questions are...
Where else can I look for information to debug this?
Could this be a SELinux issue? Is there a security context that I could set or change that might resolve this error?
Thank you!
Update
Thank you to Paul and Luke both. It did turn out to be an environment issue, and capturing the stderr to a log file enabled me to find the error.
$ cat cron.log
/usr/bin/env: ruby: No such file or directory
$ head /data/myapp/current/script/runner
#!/usr/bin/env ruby
require File.dirname(__FILE__) + '/../config/boot'
require 'commands/runner'
Adding the specific Ruby executable to the command did the trick:
$ crontab -l
0 4 * * * /usr/local/bin/ruby /data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb >> /data/myapp/current/log/cron.log 2>&1
By default cron mails its output to the user who ran it. You could look there.
It's very useful to redirect the output of scripts run by cron so that you can look at the results in a log file instead of some random user's local mail on the server.
Here's how you would redirect stdout and stderr to a log file:
cd /home/deploy/your_app/current; script/runner -e production ./script/my_cron_job.rb >> /home/deploy/your_app/current/log/my_file.log 2>&1
The >> redirects stdout to a file, and the 2>&1 redirects stderr to stdout, so any error messages will be logged as well.
Having done this, you will be able to examine the error messages to see what's really going on.
The usual problem when somebody discovers their script won't run in a cron job when it will run from the command line is that it relies on some piece of the environment that an interactive session has but cron doesn't get. Some frequent candidates are the "PATH" environment, and possibly "HOME".
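A common fix is to pin the environment at the top of the crontab itself; the values below are illustrative and should match whatever your interactive shell reports (the runner line reuses the question's command):

PATH=/usr/local/bin:/usr/bin:/bin
HOME=/root
0 4 * * * /data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb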
On Linux, make sure all the config files (/etc/crontab, /etc/cron.{daily,hourly,etc}/* and /etc/cron.d/*) are writable only by root and are not symlinks; otherwise they will not even be considered.
To allow non-root and/or symlinks, specify the -p option to the crond daemon.
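A quick audit along those lines (GNU find; lists any cron config that is a symlink or writable by group/other):

find /etc/crontab /etc/cron.d /etc/cron.daily /etc/cron.hourly -maxdepth 1 \( -type l -o -perm /022 \) -ls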