I am storing some .dat files in the public folder of my Rails app on Heroku. However, I cannot see those files from a Heroku bash session.
Notice the "total 4" in the ls -l result:
~/public/files $ ls
README.txt
~/public/files $ ls -a
. .. README.txt
~/public/files $ ls -l
total 4
-rw------- 1 u5517 5517 25 2013-04-13 23:35 README.txt
So I know they are there; they are just not being shown. I need to be able to look at them to verify my app is working correctly.
Thanks.
Heroku dynos have ephemeral file systems, so if you are writing files in your web process, they will not be available to other dynos, including one created from a heroku run bash session. Please also remember that dynos are restarted at least every 24 hours, so unless these are just temp files, it would be better to put them somewhere like S3 that has long-term persistence and can be accessed by all your dynos.
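If the goal is just to verify the generated files, one option is to have the process that writes them also push them to S3, then inspect them from your own machine. A rough sketch with the AWS CLI (this assumes the CLI is installed and configured with credentials; your-bucket and output.dat are placeholder names):
# on the dyno (or wherever the files are written), copy the file up before the dyno recycles
aws s3 cp public/files/output.dat s3://your-bucket/files/output.dat
# from your local machine, confirm it arrived
aws s3 ls s3://your-bucket/files/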
Related
I have a Rails app running on Heroku and on every deploy I get a warning:
Warning: Your slug size (368 MB) exceeds our soft limit (300 MB) which may affect boot time.
I want to get under 300 MB. So I ran du -sh .[^.]* * | sort -hr which returned:
2,1G .git
176M node_modules
79M vendor
25M tmp
5,4M app
5,1M public
1,2M db
420K test
168K config
132K log
116K package-lock.json
32K bin
12K lib
12K Gemfile.lock
8,0K dump.rdb
8,0K .DS_Store
4,0K package.json
4,0K config.ru
4,0K Rakefile
4,0K README.md
4,0K Procfile
4,0K Gemfile
4,0K .gitignore
0B storage
tmp/* and log/* are in my .gitignore and removed from git.
The others, if I sum them up, don't come to 368 MB. Where are those MB coming from?
I know that there are ways to reduce node_modules etc., but first I would like to solve the issue above.
It's possible that, before you added the .slugignore file, some large files were committed to the git repo, and they are now hanging around in the slug cache or as git refs.
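Before purging anything, it can be worth confirming how much of that weight is git history versus the working tree (git count-objects is standard git; heroku apps:info reports the current slug size, with appname as your app's name):
# total size of packed and loose git objects, human-readable
git count-objects -vH
# Heroku's view of the compiled slug
heroku apps:info -a appname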
You can remove the cache using the heroku-repo plugin
Install the plugin using the command
heroku plugins:install heroku-repo
gc
heroku repo:gc -a appname
This will run git gc --aggressive against the application's repo. This is done inside a run process on the application.
purge-cache
heroku repo:purge_cache -a appname
This will delete the contents of the build cache stored in the repository. This is done inside a run process on the application.
After following these steps you can push your code to Heroku again.
If that does not work, your dependencies may simply be so large that they push you over the slug size limit.
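On that note, a minimal .slugignore sketch for keeping non-runtime files out of the slug (adjust it to your app; the syntax is gitignore-like, and anything your app needs at runtime must not be listed here):
# .slugignore -- excluded when the slug is built
test/
spec/
*.psd
*.pdf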
It seems Heroku has since updated the way the build cache is cleared - from their docs: https://help.heroku.com/18PI5RSY/how-do-i-clear-the-build-cache
Clear The Build Cache
You can clear the build cache for an app by using the Heroku Builds plugin:
First install the plugin:
heroku plugins:install heroku-builds
Then use the following command to clear the cache:
heroku builds:cache:purge -a example-app
The cache will be rebuilt on the next deploy. If you do not have any new code to deploy, you can push an empty commit.
$ git commit --allow-empty -m "Purge cache"
$ git push heroku master
Where example-app is replaced by the name of the app you want to clear the cache for.
We're running Elastic Beanstalk (64bit Amazon Linux 2016.09 v2.3.1 running Ruby 2.3 (Puma)) with a Rails app.
The app log is written to /var/app/current/log/production.log, as standard. As configured by default with EB, that file is symlinked into /var/app/containerfiles/logs/ and used for rotation and upload to S3.
For some reason, production.log appears to be overridden or truncated every time we eb deploy, which seems unintended.
Have we misconfigured something and how would you suggest we debug?
We came to the (perhaps obvious) conclusion that there is no log magic in EB deploys. A deploy just replaces the /var/app/current/ directory, including /var/app/current/log, thereby deleting all existing logs.
Our solution therefore was to place logs in a separate folder and patch EB to know where the log is placed. By overriding the production.log symlink in app_log_dir (/var/app/containerfiles/logs/) we still rely on EB's normal procedure for rotation and publishing to S3.
.ebextensions/log-rotation.config
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/01a_override_log_symlinks.sh":
    mode: "000777"
    content: |
      #!/bin/bash
      EB_APP_LOG_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_log_dir)
      CUSTOM_APPLOG_DIR=/var/log/applog
      mkdir -p $CUSTOM_APPLOG_DIR
      chown webapp $CUSTOM_APPLOG_DIR
      chmod 777 $CUSTOM_APPLOG_DIR
      cd $EB_APP_LOG_DIR
      ln -sf $CUSTOM_APPLOG_DIR/production.log production.log
      ln -sf $CUSTOM_APPLOG_DIR/development.log development.log
config/environments/production.rb
...
# Specific for Rails 5!
config.paths['log'] = "/var/log/applog/#{Rails.env}.log"
...
I was also really surprised when I found this; it seems to be the reverse of what we want. The master copy of the log files should live under /var/app/containerfiles.
On Amazon Linux 2, I just added a post-deploy hook to switch them back around, and it seems to work great. I had to do this before with Amazon Linux 1 (AMI) as well.
This is the contents of .platform/hooks/postdeploy/logs.sh:
#!/bin/bash
# Switch over the master location of the log files to be /var/app/containerfiles/logs/, with a symlink into /var/app/current/log/ so logs are kept between deploys
# Effectively reversing this line:
# [INFO] adding builtin Rails logging support
# [INFO] log publish feature is enabled, setup configurations for rails
# [INFO] create soft link from /var/app/current/log/production.log to /var/app/containerfiles/logs/production.log
if [ -L /var/app/containerfiles/logs/production.log ]; then
  unlink /var/app/containerfiles/logs/production.log
  mv /var/app/current/log/production.log /var/app/containerfiles/logs/production.log
fi
touch /var/app/containerfiles/logs/production.log
ln -sf /var/app/containerfiles/logs/production.log /var/app/current/log/production.log
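One thing worth double-checking (based on how platform hooks are treated on Amazon Linux 2): the hook file needs execute permission, otherwise it is skipped during deploys. Before committing it:
chmod +x .platform/hooks/postdeploy/logs.sh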
I have Plesk 12, and I installed another version of PHP using this guide. I switched a specific domain to use this version. So in the hosting settings, it says
PHP support (Run PHP as [FastCGI application], PHP version 5.3.1
Now I need to edit the php.ini file to turn on short tags (eww, gross, I know), but I can't get them turned on. When I run service apache2 restart, it doesn't restart FastCGI. The new PHP is installed at /usr/local/php531-cgi.
ls -l
-rw-r--r-- 1 root root 1204 Mar 18 22:47 pear.conf
-rw-r--r-- 1 root root 69623 Mar 18 23:36 php.ini
I tried restarting the entire server, setting ini_set('short_open_tag', true);, and running service php5-fpm restart / service php-fpm restart.
But the result is the same: short_open_tag is still Off.
I followed @mario's advice and checked phpinfo(). I was using the wrong php.ini file: I was editing /usr/local/php531-cgi/etc/php.ini, but the one I needed was /var/www/vhosts/system/[domainname.com]/php.ini.
I didn't even need to restart anything. Thanks Mario!
A quick command to see which php.ini file you're using:
php -i | grep /php.ini
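Note that php -i only tells you what the CLI binary loads, which may not match what the FastCGI handler for the vhost uses. A quick way to check the web side is a temporary phpinfo() page in the docroot (the path below is a guess at a typical Plesk vhost layout, adjust it, and delete the file as soon as you are done):
echo '<?php phpinfo();' > /var/www/vhosts/domainname.com/httpdocs/info.php
# load http://domainname.com/info.php and read the "Loaded Configuration File" row, then clean up
rm /var/www/vhosts/domainname.com/httpdocs/info.php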
On my production server (hosted on DigitalOcean, if that helps), running Ubuntu 12.04, I have Rails 4 and rake 10.1.1.
When I deploy, I run rake assets:precompile, and I've noticed a strange issue where if I have a rails console session open when I do this, I get the following output
~# rake assets:precompile
~# Killed
It's mainly annoying, but the reason I want it resolved is that once we hire new developers, deploy/console conflicts will become a nightmare.
Thanks,
Brian
Your precompile process is probably being killed because you are running out of RAM. You can confirm this by running top in another ssh session. To fix this, create a swap file that will be used when RAM is full.
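Another way to confirm it is to check the kernel log right after a failed precompile; if it really was an out-of-memory kill, the OOM killer leaves a trace there:
dmesg | grep -i -E 'killed process|out of memory'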
Create SWAP Space on Ubuntu
You will probably end up needing some swap space if you plan on running Rails on a DigitalOcean 512 MB RAM droplet. Specifically, you will run out of RAM when compiling the assets, resulting in the process being quietly killed and preventing successful deployments.
To see if you have any swap files:
sudo swapon -s
No swap file shown? Check how much disk space you have:
df
To create a swap file:
Step 1: Allocate a file for swap
sudo fallocate -l 2048m /mnt/swap_file.swap
Step 2: Change permission
sudo chmod 600 /mnt/swap_file.swap
Step 3: Format the file for swapping device
sudo mkswap /mnt/swap_file.swap
Step 4: Enable the swap
sudo swapon /mnt/swap_file.swap
Step 5: Make sure the swap is mounted when you reboot. First, open fstab:
sudo nano /etc/fstab
Finally, add entry in fstab (only if it wasn't automatically added)
# /etc/fstab
/mnt/swap_file.swap none swap sw 0 0
Save and exit. You're done adding swap. Now your rake assets:precompile should complete without being killed.
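To double-check that the swap is live, the same tools from above work:
sudo swapon -s
free -m
The new file should be listed by swapon, and the Swap row of free -m should show a non-zero total.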
rake assets:precompile is a memory-hungry process, so make sure you have enough RAM before running that command.
I have an OpsWorks stack on AWS and I had to change my instance type.
I was using t1.micro and I just changed it to t1.small.
Thanks a lot.
This uses a lot of RAM. To check how much RAM you have free, use the command
free -m
This will show the available RAM in MB
A temporary solution would be to create swap space.
I was going to add this as a comment to Jason R's post above: before you go through his steps, just make sure it really is a RAM resource issue.
You could also run (as root; the value 1, 2 or 3 selects which caches to drop)
echo 3 > /proc/sys/vm/drop_caches
to clear cached memory, but it probably will not free up enough.
This might help someone. For me, since I couldn't use the fallocate command, I had to do:
sudo dd if=/dev/zero of=/mnt/4GB.swap bs=4096 count=1048576
sudo chmod 600 /mnt/4GB.swap
sudo mkswap /mnt/4GB.swap
sudo swapon /mnt/4GB.swap
I have a Rails script that I would like to run daily. I know there are many approaches, and that a cron'd script/runner approach is frowned upon by some, but it seems to meet my needs.
However, my script is not getting executed as scheduled.
My application lives at /data/myapp/current, and the script is in script/myscript.rb. I can run it manually without problem as root with:
/data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb
When I do that, the special log file (log/myscript.log) gets logged to as expected:
Tue Mar 03 13:16:00 -0500 2009 Starting to execute script...
...
Tue Mar 03 13:19:08 -0500 2009 Finished executing script in 188.075028 seconds
I have it set to run with cron every morning at 4 am. root's crontab:
$ crontab -l
0 4 * * * /data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb
In fact, it looks like it's tried to run as recently as this morning!
$ tail -100 /var/log/cron
...
Mar 2 04:00:01 hostname crond[8894]: (root) CMD (/data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb)
...
Mar 3 04:00:01 hostname crond[22398]: (root) CMD (/data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb)
...
However, there is no entry in my log file, and the data that it should update has not been getting updated. The log file permissions (as a test) were even set to globally writable:
$ ls -lh
total 19M
...
-rw-rw-rw- 1 myuser apps 7.4K Mar 3 13:19 myscript.log
...
I am running on CentOS 5.
So my questions are...
Where else can I look for information to debug this?
Could this be a SELinux issue? Is there a security context that I could set or change that might resolve this error?
Thank you!
Update
Thank you to Paul and Luke both. It did turn out to be an environment issue, and capturing the stderr to a log file enabled me to find the error.
$ cat cron.log
/usr/bin/env: ruby: No such file or directory
$ head /data/myapp/current/script/runner
#!/usr/bin/env ruby
require File.dirname(__FILE__) + '/../config/boot'
require 'commands/runner'
Adding the specific Ruby executable to the command did the trick:
$ crontab -l
0 4 * * * /usr/local/bin/ruby /data/myapp/current/script/runner -e production /data/myapp/current/script/myscript.rb >> /data/myapp/current/log/cron.log 2>&1
By default cron mails its output to the user who ran it. You could look there.
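On a typical CentOS setup that mail lands in the user's local spool (the path below is the common default, not guaranteed for every mail configuration):
sudo tail -n 50 /var/spool/mail/root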
It's very useful to redirect the output of scripts run by cron so that you can look at the results in a log file instead of some random user's local mail on the server.
Here's how you would redirect stdout and stderr to a log file:
cd /home/deploy/your_app/current; script/runner -e production ./script/my_cron_job.rb >> /home/deploy/your_app/current/log/my_file.log 2>&1
The >> redirects stdout to a file, and the 2>&1 redirects stderr to stdout, so any error messages will be logged as well.
Having done this, you will be able to examine the error messages to see what's really going on.
The usual problem, when somebody discovers that their script won't run from a cron job even though it runs from the command line, is that it relies on some piece of the environment that an interactive session has but cron doesn't get. The usual candidates are the PATH environment variable and possibly HOME.
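A quick way to see exactly what environment cron hands your job is a throwaway crontab entry that dumps it to a file (remove the entry once you have what you need):
# temporary debugging entry: write cron's environment once a minute
* * * * * env > /tmp/cron_env.txt 2>&1
Compare /tmp/cron_env.txt with the output of env in your interactive shell; PATH and HOME are the usual differences.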
On Linux, make sure all the config files (/etc/crontab, /etc/cron.{daily,hourly,etc}/* and /etc/cron.d/*) are writable only by root and are not symlinks, otherwise they will not even be considered.
To allow non-root and/or symlinks, specify the -p option to the crond daemon.
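A quick way to spot offenders (the directories below are the usual ones on CentOS; adjust to what exists on your box):
# symlinked entries are ignored unless crond runs with -p
find /etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.weekly /etc/cron.monthly -type l
# ownership and permissions should be root and not group/world-writable
ls -l /etc/crontab /etc/cron.d/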