I use the following Dockerfile to create an image for my Rails 4.2 webapp:
FROM ruby:2.3.4
ENV LANG C.UTF-8
WORKDIR /usr/src/page
COPY Gemfile .
COPY Gemfile.lock .
RUN bundle install
COPY . .
EXPOSE 3000
ENV CERT_PATH ssl://0.0.0.0:3000?key=certificate.key&cert=certificate.crt
CMD rails server -b $CERT_PATH -e production
The Rails app uses the Puma (version 3.9.1) web server.
I build a Docker image and run it as a container on a web server, which works fine. I can access the webapp via the domain test.example.com or directly via the server's IP. I use HTTPS to access the webapp.
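(The build and run commands themselves are nothing special; the image name and port mapping below are just placeholders:)
docker build -t mypage .
docker run -d -p 443:3000 mypage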
Now the problem:
After visiting a few pages, the webapp stops responding (timeout error in the browser). I think this only happens after visiting the site via the IP address (https://255.255.255.255/login etc.). However, once that happens, accessing it via test.example.com does not work either.
I should add that the SSL certificate is self-signed.
In the Rails logs inside the container I can see that the last request (the one that failed) was never even logged, so my guess is that it could be a Docker problem?
However, sudo journalctl -fu docker.service does not seem to show any errors.
So my questions: Are there errors in my Dockerfile? Are there known bugs in any of the software I'm using? Does anybody know what my problem is? And which Docker commands can I use to find error logs?
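For example, would container-level commands along these lines show such errors? I have not dug into these yet:
docker ps                             # find the container ID
docker logs --tail 100 <container-id> # stdout/stderr of the Rails/Puma process
docker inspect <container-id>         # low-level state, e.g. whether the container was OOM-killed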
I did the following to solve the problem:
1) Used the newest Ruby version
2) Used the newest Puma version
3) Used Puma workers
4) Made a config file for Puma
For the time being, these changes seem to have solved the problem.
It appears that Docker was never the problem.
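With the config file from step 4 in place, the server can be started with a command along these lines (a sketch; the exact path and flags depend on your setup, and Puma's -C flag points at the config file):
bundle exec puma -C config/puma.rb -e production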
Related
So I've deployed an app to a remote server. The app is quite large, so I've made some production configs for bundling the client files with webpack.
Currently I'm running the server with docker-compose, and in the server container I'm running pm2-docker (as there are some workers that need to be run).
To bundle the client, I'm just using a command like npm run build, so I'm wondering how I could rebuild the client whenever I push changes to the remote server.
To boil it down: whenever there are changes, run npm run build.
What would be the best way to do this? Can you run some kind of command specified in the PM2 .yml file or in the docker-compose .yml file?
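One idea would be to simply chain the build into the container's start command, something like the line below (process.yml is just an example file name), but I'm not sure that's the intended way:
npm run build && pm2-docker process.yml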
Thankful for any help!
I have hosted a Rails application on AWS. Every time I want to access my website, I have to go through some steps that are quite repetitive.
1. ssh -i <my-keypair.pem> ec2-user@<AWS-IPv4-public-IP>
2. rails s -p <port> -b 0.0.0.0
After some time, I also get this error
'packet_write_wait: Connection to <AWS-IPv4-public-IP> port 22: Broken pipe'
I did some research and can't seem to find a way to keep my application running 24/7 without having to repeat these steps every time before accessing it.
My AWS instance is on 24/7, so the website should run 24/7 as well.
Would assigning an Elastic IP to my instance help?
Appreciate any guidance.
EDIT: I initially followed this tutorial: https://www.youtube.com/watch?v=jFBbcleSPoY, and that is where I found the steps mentioned above.
There are many ways to run the Rails server as a daemon. If you google "rails server as daemon", you will find many guides. I have not added any links here because many of the good ones are from hosting providers.
If you still want to run the Rails server through the shell for some reason, tmux is the way to go. The following excerpt is shamelessly copied from the tmux wiki:
tmux is a terminal multiplexer. It lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal.
You can open a tmux session and start the Rails server inside it. You can then detach from tmux and quit your SSH session. Whenever you wish, you can SSH back into your server and re-attach to the tmux session. Your Rails server will still be running just as you left it. This is a great way to run a development server in the foreground for debugging.
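A typical round trip looks like this (the session name is just an example):
tmux new -s rails                 # start a named session on the server
rails s -p <port> -b 0.0.0.0      # run the server inside the session
# detach with Ctrl-b d, then close your SSH connection
tmux attach -t rails              # after SSH-ing back in, re-attach to the session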
Resolved the issue with https://mosh.org/, for anyone who stumbles on this post in the future.
Download and install mosh (mobile shell)
Run a modified version of the command mentioned in my original question:
mosh --ssh="ssh -i <your-keypair.pem>" ec2-user@<AWS-Instance-IP>
This resolved my packet_write_wait issues and I don't have to keep restarting the rails server.
I'm trying to develop a Rails project without having to install Ruby and all the server tools on my local Windows machine. I've created my Docker containers (Ruby and MySQL) and installed the Docker plugin in RubyMine 2016.1; however, it doesn't seem very practical for daily development use, by which I mean the develop, run, debug cycle that happens before deploying to the test server.
Am I missing something to make this workflow possible? Or is Docker not suggested for this step in the development process?
I don't develop under Windows, but here's how I handle this problem under Mac OS X. First off, my Rails project has a Guardfile set up that launches Rails (the guard-rails gem) and also runs my tests whenever I make changes (the guard-minitest gem). That's important for fast turnaround time in development.
I launch Docker detached (daemonized), mounting a local directory into the container, with port 3000 exposed, running a never-ending command:
docker run -d -v {local Rails root}:/home/{railsapp} -p 3000:3000 {image id} tail -f /dev/null
I do this so I can connect to it with an arbitrary number of shells, to do any activities I can only do locally.
Ruby 2.2.5, Rails 5, and a bunch of Unix developer tools (heroku toolbelt, gcc et al.) are installed in the container. I don't set up a separate database container, as I use SQLite3 for development and pg for production (heroku). Eventually when my database use gets more complicated, I'll need to set that up, but until then it works very well to get off the ground.
I point RubyMine at the local Rails root. With this, any changes are immediately reflected in the container. In another command line, I spin up ($ is host, # is container):
$ docker exec -it {container id} /bin/bash
# cd /home/{railsapp}
# bundle install
# bundle exec rake db:migrate
# bundle exec guard
bundle install is only needed when I've made Gemfile changes, or the first time.
bundle exec rake db:migrate is only needed when I've made DB changes, or the first time.
At this point I typically have a Rails instance that I can browse to at localhost:3000, and the RubyMine project is 'synchronized' to the Docker image. I then mostly make my changes in RubyMine, ignoring messages about not having various gems installed, etc., and focus on keeping my tests running cleanly as I develop.
For handling a console when I get exceptions, I need to add:
config.web_console.whitelisted_ips = ['172.16.0.0/12', '192.168.0.0/16']
to config/environments/development.rb in order for it to allow a web debug console when exceptions happen in development. (The 192.168/* might not be necessary in all cases, but some folks have run into problems that require it.)
I still can't debug using RubyMine, but I don't miss it anywhere near as much as I thought I would, especially with web consoles being available. Plus it allows me to run all the cool tools completely in the development environment, and not pollute my host system at all.
I spent a day or so trying to get the remote debugger to work, but the core problem appears to be that (the way ruby-debug works) you need to allow the debugging process (in the docker container) to 'reach out' to the host's port to connect and send debugging information. Unfortunately binding ports puts them 'in use', and so you can't create a 'listen only' connection from the host/RubyMine to a specific container port. I believe it's just a limitation of Docker at present, and a change in either the way Docker handles networking, or in the way the ruby-debug-ide command handles transmitting debugging information would help fix it.
The upshot of this approach is that it allows me very fast turnaround time for testing, and equally fast turnaround time for in-browser development. This is optimal for new app development, but might not work as well if you have a large, old, crufty, and less-tested codebase.
Most/all of these capabilities should be present in the Windows version of Docker, as well.
I need to run my Rails server on system startup. I am running my RoR code on an Ubuntu server, and I need the Rails server to always be running. If the Ubuntu server shuts down because of some problem, then when it starts again my Rails server should start automatically. For this I made a script, given below:
cd /home/subhrajyoti/spa
rails server -p 8888 -b 10.25.25.100 -d
On Windows I put this file in the Startup folder and it runs automatically each time Windows starts. Now I need the same behaviour on Ubuntu.
Check this out, it might help with your issue:
Link
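If you prefer something self-contained, a minimal sketch that mirrors the Windows Startup-folder approach is an /etc/rc.local entry (this assumes your Ubuntu release still executes rc.local at boot, that the file is executable, and that rails is on root's PATH; adjust for rvm/rbenv setups):
#!/bin/sh -e
# /etc/rc.local -- executed once at the end of boot
cd /home/subhrajyoti/spa
rails server -p 8888 -b 10.25.25.100 -d
exit 0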
I configured my Rails 3 production app about 6 months ago on Ubuntu running nginx/passenger, using git and Capistrano for deployment.
Fast forward to last week: the data center I was using (DigitalOcean NYC) had a complete power failure (and the battery backup didn't work), resulting in my server shutting down completely.
I did not set passenger or mysql to start on reboot, so when the hardware server restarted, my app was still down.
I really did not know much about what I was doing at the time when I launched it (since it was my first production server that I have worked with), and I followed a guide to get it up and running.
When I attempted to get the app running again, I managed to start mysqld no problem - but for the life of me couldn't remember how to get nginx/passenger running again.
Since time was of the essence (my client needed the app up and running ASAP), I ended up getting the app back up and running by navigating to my app directory (/current) and using the command:
passenger start -p 80 -e production
This did the trick but actually started Passenger Standalone. It seems to work fine (it is not a big or complicated app at all, maybe a few users at a time). I can navigate back to my directory and start and stop it using the above command (and passenger stop -p 80).
However, now my Capistrano deploy (cap deploy) no longer restarts the server on deploy (it tries to run touch tmp/restart.txt), which does nothing even when I run it manually, since the server is running Passenger Standalone.
I can't remember how I got the server up and running in the first place because it was so long ago. I'm not using RVM - just the version of Ruby running directly on the server.
Does anyone know the correct command to start nginx/passenger (not standalone) on Ubuntu?
And even a step further - how I can get mysqld and nginx/passenger to automatically load on a hard server restart?
Capistrano does not restart the server because it actually creates a new app directory (/u/apps/.../releases/xxx), while Passenger Standalone is still running in the old app directory (/u/apps/.../releases/yyy). Therefore touching restart.txt doesn't work. Instead, you have to restart Passenger Standalone like this:
cd /path-to-previous-release && passenger stop -p 80
cd /path-to-current-release && passenger start -p 80 -e production
You mentioned that you want to start nginx/Passenger. I assume you mean the Nginx integration mode. Here's what you need to do:
Install Phusion Passenger using the official Passenger APT repository.
There is no step 2. If you did step 1, then the Ubuntu package will automatically configure Nginx to start at system boot, which will automatically start Passenger as well.
I don't understand why you ask how you can get mysqld to automatically start on a hard server restart. Mysqld is always started during system boot. You don't have to do anything.
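For completeness, on a systemd-based Ubuntu you can check and enable boot-time startup yourself (service names assumed; older releases used upstart and registered these services automatically as well):
sudo systemctl enable nginx mysql   # mark both services to start at boot
sudo systemctl status nginx mysql   # verify that both are currently running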