My problem, in simple terms, is that I have an executable that can't be run on Heroku because it doesn't have the right permissions.
In more detail: I have a Rails application on Heroku and I want to use pdftk on the server. After installing it, I need to chmod the binary before I can use it. If I open a console from the Heroku dashboard, run the chmod command, and then run pdftk, it works, but only on that temporary dyno; it doesn't work on the production dynos.
I tried creating a .profile and putting the command in it, and that didn't work.
I tried creating a Procfile with release: chmod u+x /app/vendor/pdftk/bin/pdftk and that didn't work either.
I tried different variations with the release, web, and worker process types.
I tried creating a .sh file, putting the command in it, and running that file, and that didn't work either.
The command for setting the permission: chmod u+x /app/vendor/pdftk/bin/pdftk
If you need more info, please tell me.
Any help would be appreciated.
Okay, I figured out what the problem was.
I have a pipeline from GitLab, and the permissions just needed to be set through Git so that they were already correct when the files arrived in the production environment.
I needed to run this command: git update-index --add --chmod=+x pdftk
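For anyone else with the same kind of pipeline, here is a minimal sketch of the full sequence, assuming the binary is vendored at vendor/pdftk/bin/pdftk inside the repository (adjust the path to wherever your copy actually lives):
# Store the executable bit for the vendored binary in Git's index
git update-index --add --chmod=+x vendor/pdftk/bin/pdftk
# Commit and push so the pipeline deploys the file with the correct mode
git commit -m "Make vendored pdftk executable"
git push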
Related
I'm trying out MGT Development Environment 7.0 and installed a fresh copy of Magento 2.
Every time after running php bin/magento setup:upgrade and reloading the page, the generated files in var, pub, and generated have a different user and group, clp:clp.
Instead of running chmod -R 777 . every time, can anyone suggest a better solution?
Thanks in advance.
After viewing phpinfo(), I found out that PHP is running as the user clp.
Simply chown the webroot to clp:clp and run every php command with sudo -u clp php yourCommand, which solves the problem.
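A minimal sketch of that, assuming the Magento webroot is at /var/www/magento (substitute your actual path):
# Hand the webroot to the user PHP runs as (clp), then run CLI commands as that same user
sudo chown -R clp:clp /var/www/magento
cd /var/www/magento
sudo -u clp php bin/magento setup:upgrade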
I have a rails app that I inherited. In deploy.rb, it performs the following commands:
run "mv #{shared_path}/log/#{rails_env}.log #{shared_path}/log/#{rails_env}_old"
run "touch #{shared_path}/log/#{rails_env}.log && chmod -R 777 #{shared_path}/log"
So you can see it's moving the existing log file to one called _old and then creating a new one.
This causes a problem in some situations: if the first deploy fails and I deploy again, the _old file gets overwritten a second time and the previously existing logs are gone.
The thing is, I don't understand why the deploy script is doing this, or why it was written like this in the first place. I believe everybody would be fine if we just left the log files alone during the deploy.
Does anybody have any clues for me?
Remove it and use logrotate.
What the deploy script is doing has a point: log files get big quickly, and writing to a big file is costly. You should use logrotate or a similar utility (a sample config is sketched after the snippet below). But if you want to keep it simple, give the old files unique names by appending a timestamp:
run "mv #{shared_path}/log/#{rails_env}.log #{shared_path}/log/#{rails_env}_old_#{Time.now.to_i}"
run "touch #{shared_path}/log/#{rails_env}.log && chmod -R 777 #{shared_path}/log"
I have a multi-app system running on a CentOS box that consists of our main app and a deployer app. When a client wants a new instance of our app, they use our deployer and fill in some info, and the new install is created on our server. The issue I am having is that I can't get nginx to reload its config file automatically, so after a deploy, visiting the new app returns a 404 until I reload manually.
I've tried a few different things, including chmodding /opt/nginx/sbin/nginx to 777 and chmodding the install script and the deployer app to 777.
The script goes like this:
#create install directory -- works correctly
#copy files over -- works correctly
#run install script
## -- and at this point I've tried multiple lines, including:
system("nginx -s reload") ## this works manually
system("/etc/init.d/nginx reload") ## this works manually
I've followed the directions here: Restart nginx without sudo? to create a script that runs without a sudo password, and then tried this:
system("sudo /var/www/vhosts/deployer/lib/nginx_reload")
Nothing seems to work. I'm assuming this is a permissions error, but maybe I'm wrong. If anyone could point me in any direction, that would be very helpful, since I've been trying to figure this out for a few days too long and I'm fresh out of new ideas.
sudo /etc/init.d/nginx reload
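That only works non-interactively if the user the deployer runs as can sudo that exact command without a password. Here is a sketch of the sudoers setup, assuming the deployer runs as a user named deployer and that /etc/sudoers.d is included from the main sudoers file (check the real user with whoami from inside the app):
# Allow the deployer user to run the nginx reload command without a password
echo 'deployer ALL=(root) NOPASSWD: /etc/init.d/nginx reload' | sudo tee /etc/sudoers.d/nginx-reload
sudo chmod 440 /etc/sudoers.d/nginx-reload
# After that, system("sudo /etc/init.d/nginx reload") from the deployer should not prompt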
I am running Ruby on Rails 3.0.9 on a remote VPS machine running Ubuntu 10.04 LTS, and I would like to know if it is possible to run Linux folder and file permission commands "directly" by stating them in the RAILS_ROOT/config/environments/production.rb file. If so, how can I accomplish that to set my RAILS_ROOT/public directory and its subdirectories to 755 permissions?
I would like to do this because I want to automate the setup process.
Try this:
system "chmod 755 public"
Check out: System call from Ruby
You can use the system command to run OS commands. Just make sure you don't run into any security-related issues.
Or instead, see if you can change the permissions while deploying the code itself, as sketched below.
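As a sketch of that deploy-time alternative, the same command can be run once per deploy from whatever script or task already executes on the VPS, rather than on every app boot (the path is an assumption):
# Directories end up 755 and regular files 644; X avoids marking plain files executable
chmod -R u+rwX,go+rX,go-w /var/www/myapp/current/public
# Or, to match the question exactly:
# chmod -R 755 /var/www/myapp/current/public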
In a Windows environment I am getting the following error when trying to deploy to Heroku:
C:/Ruby/lib/ruby/gems/1.8/gems/heroku-1.9.13/lib/heroku/commands/base.rb:32:in `': No such file or directory - git remote (Errno::ENOENT)
    from C:/Ruby/lib/ruby/gems/1.8/gems/heroku-1.9.13/lib/heroku/commands/base.rb:32:in `shell'
    from C:/Ruby/lib/ruby/1.8/fileutils.rb:121:in `chdir'
    from C:/Ruby/lib/ruby/1.8/fileutils.rb:121:in `cd'
    from C:/Ruby/lib/ruby/gems/1.8/gems/heroku-1.9.13/lib/heroku/commands/base.rb:32:in `shell'
    from C:/Ruby/lib/ruby/gems/1.8/gems/heroku-1.9.13/lib/heroku/commands/app.rb:52:in `create'
    from C:/Ruby/lib/ruby/gems/1.8/gems/heroku-1.9.13/lib/heroku/command.rb:48:in `send'
    from C:/Ruby/lib/ruby/gems/1.8/gems/heroku-1.9.13/lib/heroku/command.rb:48:in `run_internal'
    from C:/Ruby/lib/ruby/gems/1.8/gems/heroku-1.9.13/lib/heroku/command.rb:20:in `run'
    from C:/Ruby/lib/ruby/gems/1.8/gems/heroku-1.9.13/bin/heroku:13
    from C:/Ruby/bin/heroku:19:in `load'
    from C:/Ruby/bin/heroku:19
Any idea how I can correct this? This is being run from the Ruby command line (which looks to me like the regular command prompt).
OK, so I figured out a way to make it work, and why it is likely happening.
For some reason I can only run the Ruby commands from the CMD prompt, while the Git commands only seem to work from Git Bash. When in Git Bash, the Ruby commands don't work.
When you run the Heroku command to create the app, it seems to want to run certain Git commands, which don't work from the CMD prompt the way I have it set up.
To get around this for the moment, I am adding the Heroku Git remote manually and then pushing to it manually when needed. It's an extra step, but everything still works as intended.
If you need help with the workaround, check out the information in this link: http://www.wiki.devchix.com/index.php?title=Working_around_the_%22heroku_create%22_error
I'd still recommend using Git Bash over the normal Windows CMD prompt, but I know how tedious that can be sometimes.
However, you can bypass the need for this and get your Heroku gem working properly in your Windows CMD prompt by adding your msysgit/bin path to your system Path variable.
That'll give your heroku gem access to the git command.
To add Heroku as a remote, use the following:
git remote add heroku git@heroku.com:yourappname.git
Then push your master copy to Heroku:
git push heroku master