Starting Rails Instance Server from an absolute path? - ruby-on-rails

I'm trying to create an autostart script to start a Rails server instance when my SUSE Linux server restarts.
I've created a shell script in /etc/init.d/rails_s_appname with the following content:
#!/bin/bash
/home/appname/public_html/rails s -p 3333 -d
I gave the script 755 permissions and started it.
The result is the following:
/etc/init.d/rails_s_appname
/etc/init.d/rails_s_appname: line 2: /home/appname/public_html/rails: No such file or directory
Does anyone have an idea how to start a Rails server instance from an absolute path?

It's better not to use the rails script to launch your app like that.
Try Thin or Unicorn. Both have a cwd configuration option to say where your APP_HOME is.
But if you really want to do that, use the script/rails command line inside your APP_HOME to do what you want:
/home/appname/script/rails s -p 3333 -d
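If the launcher has to stay in /etc/init.d, a minimal sketch along those lines could look like this (the application root /home/appname is an assumption based on the paths in the question):
#!/bin/bash
# Hypothetical /etc/init.d/rails_s_appname: change into the app home first,
# then start the server through script/rails as suggested above.
APP_HOME=/home/appname        # assumed application root
cd "$APP_HOME" || exit 1
script/rails s -p 3333 -d
Starting from inside APP_HOME avoids the "No such file or directory" error, because rails in that setup is the application's script/rails launcher, not a binary sitting in public_html.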

Related

Make rails and sidekiq work together from different Docker containers (but can't use docker-compose)

I'm moving a rails app from Heroku to a linux server and deploying it using Caprover. It's an app very dependent on background jobs, which I run with sidekiq.
I've managed to make it work by running both the rails server (bundle exec rails server -b 0.0.0.0 -p80 &) and sidekiq (bundle exec sidekiq &) from a script that launches both in the CMD of the Dockerfile.
But I guess it would be much better (separation of concerns) if the rails server was in one Docker container and sidekiq in another one. But I can't figure out how to connect them. How do I tell my rails app that sidekiq lives in another container?
Because I use Caprover I'm limited to Dockerfiles to deploy my images, so I can't use docker-compose.
Is there a way to tell rails that it should use a certain sidekiq found in a certain Docker container? Caprover uses Docker swarm if that is of any help.
Am I thinking about this the wrong way?
My setup, currently, is as follows:
1 Docker container with rails server + sidekiq
1 Docker container with the postgres DB
1 Docker container with the Redis DB
My desired setup would be:
1 Docker container with rails server
1 Docker container with sidekiq
1 Docker container with postgres DB
1 Docker container with Redis DB
Is that even possible with my current limitations?
My rails + sidekiq Dockerfile is as follows:
FROM ruby:2.6.4-alpine
#
RUN apk update && apk add nodejs yarn postgresql-client postgresql-dev tzdata build-base ffmpeg
RUN apk add --no-cache --upgrade bash
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install --deployment --without development test
COPY . /myapp
RUN yarn
RUN bundle exec rake yarn:install
# Set production environment
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
# Assets, to fix missing secret key issue during building
RUN SECRET_KEY_BASE=dumb bundle exec rails assets:precompile
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 80
COPY start_rails_and_sidekiq.sh /myapp/start_rails_and_sidekiq.sh
RUN chmod +x /myapp/start_rails_and_sidekiq.sh
# Start the main process.
WORKDIR /myapp
CMD ./start_rails_and_sidekiq.sh
The start_rails_and_sidekiq.sh script looks like this:
#!/bin/bash
# Start the first process
bundle exec rails server -b 0.0.0.0 -p80 &
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start Rails server: $status"
exit $status
fi
# Start the second process
bundle exec sidekiq &
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start Sidekiq: $status"
exit $status
fi
# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds
while sleep 60; do
ps aux |grep puma |grep -q -v grep
PROCESS_1_STATUS=$?
ps aux |grep sidekiq |grep -q -v grep
PROCESS_2_STATUS=$?
# If the greps above find anything, they exit with 0 status
# If they are not both 0, then something is wrong
if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
echo "One of the processes has already exited."
exit 1
fi
done
I'm totally lost!
Thanks in advance!
Method 1
According to CapRover docs, it seems that it is possible to run Docker Compose on CapRover, but I haven't tried it myself (yet).
Method 2
Although this CapRover example is for a different web app, the Internal Access principle is the same:
You can simply add a srv-captain-- prefix to the name of the container if you want to access it from another container.
However, isn't this method how you told your Rails web app where to find the PostgreSQL DB container? Or are you accessing it through an external subdomain name?
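To sketch what that could look like for Sidekiq (the app names, Redis port, and database number below are assumptions, not CapRover defaults): create two CapRover apps from the same image, give them different CMDs, and point both at the Redis container through its srv-captain-- name via an environment variable that Sidekiq reads by default:
# CapRover app "myapp-web": Dockerfile ends with the web server command
CMD bundle exec rails server -b 0.0.0.0 -p 80

# CapRover app "myapp-worker": same image, but the CMD runs Sidekiq instead
CMD bundle exec sidekiq

# Environment variable set on both apps (Sidekiq defaults to REDIS_URL;
# "redis" is the assumed CapRover app name of the Redis container)
REDIS_URL=redis://srv-captain--redis:6379/0
Both containers talk to the same Redis instance, so the Rails app only needs to enqueue jobs; it never has to know which container the Sidekiq process runs in.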

Custom shell script in crontab

I have a simple shell script that executes a docker exec command inside a container.
The script is located at /var/www/mysite-nginx/nginx-reload.sh and the permissions of this file are -rwxrwxr-x:
#!/bin/sh
docker exec -it mysite_nginx nginx -s reload
If I execute this script directly from the shell, it works. But if I add the script to my crontab with the following line, it doesn't work.
15 4 * * * /var/www/mysite-nginx/nginx-reload.sh
I suppose cron isn't executing the command, or is something else wrong?
On /var/log/syslog I have:
Jul 23 15:30:01 arrubiu CRON[29511]: (sergej) CMD (/var/www/mysite-nginx/nginx-reload.sh)
[EDIT] Solved in this way: docker exec is not working in cron
The issue seems to be that docker is not found. There are two ways around it:
You enter the full paths of all applications in your crontab script (you can find them using e.g. locate docker), so that it looks something like:
#!/bin/sh
/usr/bin/docker exec -it mysite_nginx nginx -s reload
Alternatively, you can set $PATH and the other environment variables in the same way they are set for a usual sh script. To do that, first back up what is saved in /etc/environment, and then flush it with the currently available variables by executing:
cp /etc/environment ~/my_etc_environment_backup
env >> /etc/environment
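Another option (not from the linked answer, just standard cron behaviour) is to set PATH at the top of the crontab itself so the script inherits it; the directory list here is only a typical default and may need adjusting:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
15 4 * * * /var/www/mysite-nginx/nginx-reload.sh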
Related questions on SO
Where can I set environment variables that crontab will use?

Running cron in a docker container on a windows host

I am having some problems trying to make a container that runs a cron job. I can see cron running using top inside the container, but it doesn't write to the log file as the example below attempts to. The file stays empty.
I have read answers to the same question here:
How to run a cron job inside a docker container?
Output of `tail -f` at the end of a docker CMD is not showing
But I could not make any of the suggestions work. For example, I used the Dockerfile from here: https://github.com/Ekito/docker-cron/
FROM ubuntu:latest
MAINTAINER docker#ekito.fr
# Add crontab file in the cron directory
ADD crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
#Install Cron
RUN apt-get update
RUN apt-get -y install cron
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
crontab:
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
# Don't remove the empty line at the end of this file. It is required to run the cron job
It didn't work on my machine (Windows 10). Apparently there is a Windows-specific issue also reported by someone else: https://github.com/Ekito/docker-cron/issues/3
To test whether it was just me doing something wrong, I tried the same thing in a virtual machine running Ubuntu (so an Ubuntu host instead of my Windows host) and there it worked: the log file grows as expected.
So what can I do to try to make this work?
I tried writing to a mounted (bind) folder and making a volume to write to. Neither worked.
rferalli's answer on the github issue did the trick for me:
"Had the same issue. Fixed it by changing line ending of the crontab file from CRLF to LF. Hope this helps!"
I have this problem too.
My workaround is to use Task Scheduler to run a .bat file that starts a container instead.
Using Task Scheduler: https://active-directory-wp.com/docs/Usage/How_to_add_a_cron_job_on_Windows.html
hello.bat
docker run hello-world
TaskScheduler Action
cmd /c hello.bat >> hello.log 2>&1
Hope this helps :)

Rails migration on ECS

I am trying to figure out how to run rake db:migrate on my ECS service but only on one machine after deployment.
Anyone has experience with that?
Thanks
You can do it via an Amazon ECS one-off task.
Build a Docker image with rake db:migrate as the "CMD" in your Dockerfile.
Create a task definition. You may choose one task per host while creating the task definition, with the desired task count set to "1".
Run a one-off ECS task inside your cluster. Make sure to run it outside of a service. Once it has completed the task, the container will stop automatically.
You can write a script to do this before your deployment. After that, you can define your other tasks as usual.
You can also refer to the container lifecycle in Amazon ECS here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_life_cycle.html. However, this is the default behavior of Docker.
Let me know if it works for you.
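For reference, launching such a one-off task from the command line could look roughly like this (a sketch; the cluster and task definition names are hypothetical):
# Run a single migration task outside of any service; the container stops
# on its own when rake db:migrate finishes.
aws ecs run-task \
  --cluster my-cluster \
  --task-definition myapp-migrate \
  --count 1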
I built a custom shell script to run when my Docker containers start (the CMD command in Docker):
#!/bin/sh
web_env=${WEB_ENV:-1}
rails_env=${RAILS_ENV:-staging}
rails_host=${HOST:-'https://www.example.com'}
echo "*****************RAILS_ENV is $RAILS_ENV default to $rails_env"
echo "***************** WEB_ENV is $WEB_ENV default to $web_env"
######## Rails migration ################################################
echo "Start rails migration"
echo "cd /home/app/example && bundle exec rake db:migrate RAILS_ENV=$rails_env"
cd /home/app/example
bundle exec rake db:migrate RAILS_ENV=$rails_env
echo "Complete migration"
if [ "$web_env" = "1" ]; then
######## Generate webapp.conf##########################################
web_app=/etc/nginx/sites-enabled/webapp.conf
replace_rails_env="s~{{rails_env}}~${rails_env}~g"
replace_rails_host="s~{{rails_host}}~${rails_host}~g"
# sed: -i may not be used with stdin in MacOsX
# Edit files in-place, saving backups with the specified extension.
# If a zero-length extension is given, no backup will be saved.
# we use -i.back as backup file for linux and
# In Macosx require the backup to be specified.
sed -i.back -e $replace_rails_env -e $replace_rails_host $web_app
rm "${web_app}.back" # remove webapp.conf.back cause nginx to fail.
# sed -i.back $replace_rails_host $web_app
# sed -i.back $replace_rails_server_name $web_app
######## Enable Web app ################################################
echo "Web app: enable nginx + passenger: rm -f /etc/service/nginx/down"
rm -f /etc/service/nginx/down
else
######## Create Daemon for background process ##########################
echo "Sidekiq service enable: /etc/service/sidekiq/run "
mkdir /etc/service/sidekiq
touch /etc/service/sidekiq/run
chmod +x /etc/service/sidekiq/run
echo "#!/bin/sh" > /etc/service/sidekiq/run
echo "cd /home/app/example && bundle exec sidekiq -e ${rails_env}" >> /etc/service/sidekiq/run
fi
echo "######## Custom Service setup properly"
What I did was build a Docker image that can be run either as a web server (Nginx + Passenger) or as a Sidekiq background process. The script decides whether it is a web or a Sidekiq container via the ENV variable WEB_ENV, and the Rails migration always gets executed.
This way I can be sure the migrations are always up to date. I think this will work perfectly for a single task.
I am using the Passenger Docker image, which has been designed to be very easy to customize, but if you use another Rails app server you can learn from Passenger's Docker design and apply it to your own.
For example, you can try something like:
In your Dockerfile:
CMD ["/start.sh"]
Then you create a start.sh where you put the commands which you want to execute:
start.sh
#! /usr/bin/env bash
echo "Migrating the database..."
rake db:migrate
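If that start.sh is also the container's only CMD, it usually needs to hand off to the app server after migrating, otherwise the container exits once the migration finishes. A hedged sketch of that (the server command is an assumption, mirroring the bundle exec rails server call used earlier in this thread):
#! /usr/bin/env bash
echo "Migrating the database..."
rake db:migrate
echo "Starting the app server..."
# exec replaces the shell, so the server receives signals (e.g. on docker stop)
exec bundle exec rails server -b 0.0.0.0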

Run command in Docker Container only on the first start

I have a Docker image which uses a script (/bin/bash /init.sh) as its entrypoint. I would like to execute this script only on the first start of a container. It should be skipped when the container is restarted or started again after a crash of the Docker daemon.
Is there any way to do this with Docker itself, or do I have to implement some kind of check in the script?
I had the same issue; here is a simple procedure (i.e. workaround) to solve it:
Step 1:
Create a "myStartupScript.sh" script that contains this code:
#!/bin/sh
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e $CONTAINER_ALREADY_STARTED ]; then
touch $CONTAINER_ALREADY_STARTED
echo "-- First container startup --"
# YOUR_JUST_ONCE_LOGIC_HERE
else
echo "-- Not first container startup --"
fi
Step 2:
Replace the line "# YOUR_JUST_ONCE_LOGIC_HERE" with the code you want to be executed only the first time the container is started
Step 3:
Set the script as the entrypoint of your Dockerfile:
ENTRYPOINT ["/myStartupScript.sh"]
In summary, the logic is quite simple: it checks whether a specific file is present in the filesystem; if not, it creates it and executes your just-once code. The next time you start the container, the file is already in the filesystem, so the code is not executed.
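A variant of the same idea, for images that should still run their normal CMD afterwards (the marker path here is an assumption), does the one-time work in the entrypoint and then execs the command passed to the container:
#!/bin/sh
# Run the one-time initialization only if the marker file is missing,
# then hand control over to the container's CMD.
MARKER=/var/lib/myapp/.container_initialized
if [ ! -e "$MARKER" ]; then
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
    mkdir -p "$(dirname "$MARKER")"
    touch "$MARKER"
fi
exec "$@"
With ENTRYPOINT set to this script and a normal CMD, Docker passes the CMD as arguments to the entrypoint, which exec "$@" then runs.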
The entry point for a Docker container tells the Docker daemon what to run when you want to "run" that specific container. Let's ask the questions: "what should the container run when it's started the second time?" or "what should the container run after being rebooted?"
Probably, what you are doing is following the same approach you use with "old-school" provisioning mechanisms. Your script is "installing" the needed scripts and you will run your app as a systemd/upstart service, right? If you are doing that, you should change that into a more "dockerized" definition.
The entry point for that container should be a script that actually launches your app instead of setting things up. Let's say that you need Java installed to be able to run your app. So in the Dockerfile you set up the base container to install all the things you need, like:
FROM alpine:edge
RUN apk --update upgrade && apk add openjdk8-jre-base
RUN mkdir -p /opt/your_app/ && adduser -HD userapp
ADD target/your_app.jar /opt/your_app/your-app.jar
ADD scripts/init.sh /opt/your_app/init.sh
USER userapp
EXPOSE 8081
CMD ["/bin/bash", "/opt/your_app/init.sh"]
At the company I work for, before running the actual app, our containers' init.sh script fetches the configs from Consul (instead of providing a mount point and placing the configs on the host, or embedding them into the container). So the script will look something like:
#!/bin/bash
echo "Downloading config from consul..."
confd -onetime -backend consul -node $CONSUL_URL -prefix /cfgs/$CONSUL_APP/$CONSUL_ENV_NAME
echo "Launching your-app..."
java -jar /opt/your_app/your-app.jar
One piece of advice I can give you (from my really short experience working with containers) is to treat your containers as if they were stateless once they are provisioned, that is, after all the commands you run before the entry point.
I had to do this, and I ended up doing a docker run -d, which just created a detached container and started bash (in the background), followed by a docker exec that did the necessary initialization. Here's an example:
docker run -itd --name=myContainer myImage /bin/bash
docker exec -it myContainer /bin/bash -c /init.sh
Now when I restart my container I can just do
docker start myContainer
docker attach myContainer
This may not be ideal, but it works fine for me.
I wanted to do the same in a Windows container. It can be achieved using Task Scheduler on Windows; the Linux equivalent of Task Scheduler is cron, so you can use that in your case. To do this, edit the Dockerfile and add the following lines at the end:
WORKDIR /app
COPY myTask.ps1 .
RUN schtasks /Create /TN myTask /SC ONSTART /TR "c:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe C:\app\myTask.ps1" /ru SYSTEM
This creates a task named myTask, runs it ONSTART, and the task itself executes a PowerShell script placed at "C:\app\myTask.ps1".
This myTask.ps1 script will do whatever initialization you need on container startup. Make sure you delete this task once it has executed successfully, or else it will run at every startup. To delete it, you can use the following command at the end of the myTask.ps1 script:
schtasks /Delete /TN myTask /F
