Unable to deploy my Rails application to fly.io

I have a Rails application called gratify_me that runs on Rails 6.1.4.1 and Ruby 2.7.2.
I want to deploy it to fly.io. The first command I ran was fly launch, which successfully created the PostgreSQL database.
Now when I run fly deploy, it shows me this error:
...
...
=> ERROR [stage-4 8/8] RUN bin/rails fly:build 3.5s
------
> [stage-4 8/8] RUN bin/rails fly:build:
#25 3.411 Missing encryption key to decrypt file with. Ask your team for your master key and write it to /app/config/master.key or put it in the ENV['RAILS_MASTER_KEY'].
------
Error failed to fetch an image or build from source: error building: executor failed running [/bin/bash -o pipefail -c ${BUILD_COMMAND}]: exit code: 1
From the error, it looks like the build is looking for the master key but can't find it at the specified path, even though I do have the master.key file at that path.
Does anyone have an idea of how I can resolve this?
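One thing worth checking (not an answer from this thread, and the exact mechanism depends on the Dockerfile that fly launch generated): /app/config/master.key is a path inside the build container, and config/master.key is commonly excluded by .gitignore and .dockerignore, so the file may never reach the image even though it exists locally. A hedged sketch of supplying the key instead:
fly secrets set RAILS_MASTER_KEY=$(cat config/master.key)
# if the failure happens during the image build and your Dockerfile mounts build secrets:
fly deploy --build-secret RAILS_MASTER_KEY=$(cat config/master.key)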

Related

How to run heroku-buildpack-nginx locally in a docker container?

I am trying to run this buildpack, https://github.com/heroku/heroku-buildpack-nginx, locally in a Docker container by following the tutorial.
When I try to execute the second command after make shell, I get this error:
$ make shell
$ cp bin/nginx-$STACK bin/nginx
$ FORCE=1 bin/start-nginx
cp: cannot stat 'bin/nginx-heroku-18': No such file or directory
I want to be able to start the nginx buildpack to test it locally, but I am stuck at this error. Could someone help me, please? Thank you.
You could use it with the pack CLI by following this tutorial on Using Heroku Buildpacks with Pack.
Pack uses a newer version of the buildpack API, but the tutorial describes how to use an old buildpack with a shim so that it still works.
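A rough sketch of what that invocation can look like (the builder tag and the shimmed-buildpack path below are placeholders, not taken from the tutorial):
# build a local test image from the shimmed buildpack
pack build nginx-test --builder heroku/buildpacks:20 --buildpack ./shimmed-nginx-buildpack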

Elastic Beanstalk docker error

I'm getting a cryptic error when trying to update the configuration of a single-container Docker application. Anybody have an idea of what might cause this, or how to go about debugging it?
ERROR [3009] : Command execution failed:
[CMD-ConfigDeploy/ConfigDeployStage0/ConfigDeployPreHook/00run.sh]
command failed with error code 1:
/opt/elasticbeanstalk/hooks/configdeploy/pre/00run.sh
docker: "tag" requires 2 arguments. See 'docker tag --help'.
(ElasticBeanstalk::ActivityFatalError)
I've seen this one before, and I believe it happens when the Docker container fails to build. The command that failed is the one that runs your container, and it's failing (IIRC) because it can't find the container from the previous build step. Things to try:
Does the Docker container build successfully with eb local? (https://aws.amazon.com/blogs/aws/run-docker-apps-locally-using-the-elastic-beanstalk-eb-cli/; see the sketch after this list)
Try checking eb-activity.log for errors during the build process
Terminate the EC2 instance or rebuild the EB environment (sometimes smaller instances get out-of-memory errors that prevent further deployments)
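A minimal sketch of the local check from the first item (assumes the EB CLI is installed and configured for the application):
eb local run      # builds the image and runs the container locally
eb local status   # reports whether the local container is up and which ports are mapped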
It could happen if your application fails to start successfully the first time it deploys. I just started having this problem myself.
Take a look at /var/log/eb-activity.log on your server... you may see something like:
[2015-07-23T00:19:11.015Z] INFO [2624] - [CMD-Startup/StartupStage1/AppDeployEnactHook/00run.sh] : Starting activity...
[2015-07-23T00:19:17.506Z] INFO [2624] - [CMD-Startup/StartupStage1/AppDeployEnactHook/00run.sh] : Activity execution failed, because: jq: error: Cannot iterate over null
aca80d7accfe4800ff04992e2f89a1e05689423d286deee31b53bf470ce89afb
Docker container quit unexpectedly after launch: bleBeanFactory.java:942)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:533)
... 93 more. Check snapshot logs for details. (ElasticBeanstalk::ExternalInvocationError)
caused by: jq: error: Cannot iterate over null
aca80d7accfe4800ff04992e2f89a1e05689423d286deee31b53bf470ce89afb
Docker container quit unexpectedly after launch: bleBeanFactory.java:942)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:533)
... 93 more. Check snapshot logs for details. (Executor::NonZeroExitStatus)
[2015-07-23T00:19:17.506Z] INFO [2624] - [CMD-Startup/StartupStage1/AppDeployEnactHook/00run.sh] : Activity failed.
[2015-07-23T00:19:17.507Z] INFO [2624] - [CMD-Startup/StartupStage1/AppDeployEnactHook] : Activity failed.
[2015-07-23T00:19:17.507Z] INFO [2624] - [CMD-Startup/StartupStage1] : Activity failed.
[2015-07-23T00:19:17.507Z] INFO [2624] - [CMD-Startup] : Completed activity. Result:
Command CMD-Startup(stage 1) failed.
Next, look at /var/log/eb-docker/containers/eb-current-app. If you see an unexpected-quit.log, it should contain the errors that your application logged as it tried, unsuccessfully, to start.
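For reference, checking those locations looks like this (paths as given above; the exact file names can vary by platform version):
ls /var/log/eb-docker/containers/eb-current-app/
cat /var/log/eb-docker/containers/eb-current-app/unexpected-quit.log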
Unfortunately, in my case, it's failing to start because an environment variable is missing. However, AWS prevents me from updating the configuration while the beanstalk is in this state. And I can't specify the environment variables while I create the environment. So I'm not sure what I'll do to fix the problem.
I have the exact same issue as @Shannon's. My workaround is:
first, deploy a sample Dockerfile that is guaranteed to work,
then set up all the environment variables my real Docker app needs,
finally, redeploy the real Docker app (see the sketch after the Dockerfile below).
A sample Dockerfile copy-pasted from AWS documentation:
FROM ubuntu:12.04
RUN apt-get update
RUN apt-get install -y nginx zip curl
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN curl -o /usr/share/nginx/www/master.zip -L https://codeload.github.com/gabrielecirulli/2048/zip/master
RUN cd /usr/share/nginx/www/ && unzip master.zip && mv 2048-master/* . && rm -rf 2048-master master.zip
EXPOSE 80
CMD ["/usr/sbin/nginx", "-c", "/etc/nginx/nginx.conf"]
You can provide your environment variables on the command line with the eb create and eb clone commands. They are set before the create or clone task runs, so the environment comes up with them already in place.
See the EB CLI help. For example:
$ eb create -h
...
--envvars ENVVARS a comma-separated list of environment variables as
key=value pairs
...
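So a create call with the variables baked in might look like this (the environment name and variables are placeholders):
eb create my-docker-env --envvars MY_VAR=value,OTHER_VAR=value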

Capistrano 2: `tar' could not be found in the path on the local host

I am using Capistrano version 2 and trying to deploy code to a server,
but when I enter the cap deploy:check command I get the error below.
* executing "which tar"
servers: ["53.79.454.474"]
[53.79.454.474] executing command
command finished in 1088ms
The following dependencies failed. Please check them and try again:
--> `tar' could not be found in the path on the local host
I also tried to install tar on my remote Ubuntu machine, but I am still getting the same error.
sudo apt-get install tar
Reading package lists... Done
Building dependency tree
Reading state information... Done
tar is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
I don't know why I am getting this error. Please help.
Thanks,
This error appears because the machine that holds your repo does not have tar installed (Windows). You should have no problems once you move your :repo_url to your production server or any other server running a Linux distribution.
Edit:
Before moving the repo, you could try setting :copy_compression to :zip in your production.rb.
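That setting is a one-liner in the Capistrano 2 config (shown here as it would appear in production.rb, per the suggestion above):
set :copy_compression, :zip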
If that fails (it might), and you absolutely want/need to stick with Microsoft, there will be no way around installing Cygwin with tar enabled on the machine your repo lies on and adding Cygwin to your PATH variable.
Check this Google discussion for more information.

How do I run bundle remotely when RVM is involved

I am trying to create a tiny shell script that will deploy a Rails app by first rsync'ing, then running the bundle command remotely via ssh. My shell script looks like this:
#!/bin/bash
REMOTE_SERVER="myserver.com"
REMOTE_USER="me"
REMOTE_PATH="/home/me/"
BUNDLE_PATH="/usr/local/rvm/gems/ruby-2.0.0-p353/bin/bundle"
# Step 1: Rsync
rsync -ave ssh --exclude-from '.ignore' ./ $REMOTE_USER@$REMOTE_SERVER:$REMOTE_PATH
# Step 2: Bundle
ssh $REMOTE_USER@$REMOTE_SERVER "cd $REMOTE_PATH && $BUNDLE_PATH install"
Rsync'ing works fine but when RVM is involved, the bundle line throws the following error:
/usr/bin/env: ruby_executable_hooks: No such file or directory
So, I'm wondering ... Is it possible to run the bundle (and other commands like rake) as part of a single ssh command?
If it matters, the remote server is running Ubuntu 14.
This problem has already been solved by the community. It's called Capistrano.
http://capistranorb.com/
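For reference, a minimal sketch of the Capistrano 3 workflow (the stage name and gem grouping are just the common defaults, not from this thread); the capistrano-rvm and capistrano-bundler integrations are what take care of loading RVM and running bundle on the server:
# Gemfile: gem "capistrano", group: :development
bundle exec cap install           # generates Capfile, config/deploy.rb and the stage files
# edit config/deploy.rb and config/deploy/production.rb with your server, repo and RVM settings
bundle exec cap production deploy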

Capistrano deployment failure: /etc/init.d/unicorn: line 42: rvm-shell: command not found

I am deploying to a server using Unicorn and Capistrano in Rails.
But at the final step of the deployment, Capistrano exits with this error:
INFO [47010f4f] Running /usr/bin/env service unicorn_app restart on xyzdomain.com
DEBUG [47010f4f] Command: service unicorn_app restart
DEBUG [47010f4f] Couldn't reload, starting 'cd /var/www/app/current; rvm-shell 'default' -c 'bundle exec unicorn -D -c /var/www/app/shared/config/unicorn.rb -E staging'' instead
DEBUG [47010f4f]
DEBUG [47010f4f] /etc/init.d/unicorn_app: line 42: rvm-shell: command not found
DEBUG [47010f4f]
cap aborted!
When I run the rvm-shell command from a terminal on the server, it runs without error.
Note: rvm-shell is installed in ~/.rvm/bin, so it is not the same error as the one mentioned here: https://github.com/capistrano/capistrano/issues/43
Why is this happening?
Resolved the problem:
It was actually a permissions problem on the server.
When I executed the failing command directly on the server, it produced some directory permission errors. I resolved them by creating those directories manually:
cd /var/www/app/shared
mkdir pids
mkdir logs
Strangely, Capistrano didn't display the specific failure errors when the deployment failed, which led to a lot of confusion and wasted debugging time.
I hope my answer helps other people who get a similar kind of error and saves them a lot of time. :)
