Starting Rails server issue - Jenkins Pipeline

We are starting to integrate our build and automated testing process into a Jenkins pipeline, and I have an issue with starting Rails servers.
First of all, this is our pipeline chart: a "Servers Configuration" stage with parallel Config steps (0, 1, 2), followed by a QA stage. [Pipeline chart image not shown.]
In each "Config" step (0, 1, 2), I start a different Rails app on a specific port using rails s -p XXXX -d, and just after the command executes, I run lsof -i:XXXX and I DO see the server running.
But in the QA stage, I want to use the servers I started in the Servers Configuration stage, and I get connection refused in our tests. Also, when I access the machines the apps ran on, I don't see them running anymore, even though I used -d to daemonize them.
Any ideas? It seems like the Rails servers ran only for the Servers Configuration stage and then closed. Is that possible, and if so, how do I handle it?
Thanks!
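A likely culprit, for anyone who hits the same thing: Jenkins' ProcessTreeKiller reaps every process spawned by a build step once that step finishes, which matches the behaviour described. A minimal sketch of a workaround inside one "Config" step; the port and path are placeholders, not taken from the original pipeline:

# body of a "Config" sh step; setting JENKINS_NODE_COOKIE=dontKillMe tells
# Jenkins' ProcessTreeKiller to leave the daemonized server alone after the
# step exits, so it is still listening when the QA stage runs
export JENKINS_NODE_COOKIE=dontKillMe
cd /path/to/app
rails s -p 3001 -d
lsof -i:3001   # confirm the server is listening before the stage ends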

Related

Re-bundle on changes with Docker or PM2

So I've deployed an app to a remote server. The app is quite large, so I've made some production configs for bundling the client files with webpack.
Currently I'm running the server with docker-compose, and in the server container I'm running pm2-docker (as there are some workers that need to be run).
To bundle the client, I'm just using a command like npm run build, so I'm wondering how I could rebuild the client whenever I push changes to the remote server.
To boil it down: whenever there are changes, run npm run build.
What would be the best way to do this? Is there some kind of command you can specify in the PM2 .yml file or in the docker-compose .yml file?
Thankful for any help!
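One low-tech option, as a sketch: if you push to a bare git repository on the server, a post-receive hook can check out the pushed code and rebuild the client. The paths here are hypothetical:

#!/bin/sh
# hooks/post-receive in the bare repo on the server
GIT_WORK_TREE=/srv/app git checkout -f   # check out the pushed code
cd /srv/app
npm run build                            # rebuild the client bundle
pm2 restart all                          # restart the workers, if needed

Alternatively, the container's start command in docker-compose could chain the build before launching pm2, e.g. sh -c "npm run build && pm2-docker process.yml" (assuming your PM2 process file is named process.yml), at the cost of rebuilding on every container start.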

Rails server on AWS hosting

I have hosted a rails application on AWS. Every time I want to access my website, I have to go through some steps which are quite repetitive.
1. ssh -i <my-keypair.pem> ec2-user@<AWS-IPv4-public-IP>
2. rails s -p <port> -b 0.0.0.0
After some time, I also get this error:
'packet_write_wait: Connection to <AWS-IPv4-public-IP> port 22: Broken pipe'
I did some research and can't seem to find a way to keep my application running 24/7 without having to go through these steps before every access.
My AWS instance is on 24/7, so the website should run 24/7 as well.
Would assigning an Elastic IP to my instance help?
Appreciate any guidance.
EDIT: I followed this tutorial initially https://www.youtube.com/watch?v=jFBbcleSPoY and that is where I found the steps mentioned above.
There are many ways to run the rails server as a daemon. If you google for "rails server as daemon", you will see many links. I have not added any here, as many of the good ones are from hosting service providers.
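For reference, a minimal sketch of the built-in daemon mode, run from the app root (port and bind address are placeholders):

rails server -d -p 3000 -b 0.0.0.0   # forks into the background
cat tmp/pids/server.pid              # the daemon's pid is written here
kill "$(cat tmp/pids/server.pid)"    # stop it later

This survives the ssh session ending, but not a reboot; for that you would still want something like an init script or process supervisor.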
If you still want to run the rails server through the shell for some reason, tmux is the way to go. The following excerpt is shamelessly copied from the Tmux Wiki.
tmux is a terminal multiplexer. It lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal.
You can open a tmux terminal and start the rails server, detach from tmux, and quit your ssh session. Whenever you wish, you can ssh back to your server and re-attach to the tmux session; your rails server will still be running as you left it. This is a great way to run a development server in the foreground for debugging.
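A minimal sketch of that workflow (session name and port are arbitrary):

tmux new -s rails            # start a named session on the server
rails s -p 3000 -b 0.0.0.0   # run the server in the foreground
# press Ctrl-b, then d, to detach; the session keeps running
tmux attach -t rails         # re-attach after your next ssh login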
Resolved the issue with https://mosh.org/, for anyone who stumbles on this post in the future.
Download and install mosh (mobile shell)
Run the modified version of the command mentioned in my original question
mosh --ssh="ssh -i <your-keypair.pem>" ec2-user@<AWS-Instance-IP>
This resolved my packet_write_wait issues and I don't have to keep restarting the rails server.

Use RubyMine and Docker for development

I'm trying to develop a Rails project without having to install Ruby and all the server tools on my local Windows machine. I've created my Docker containers (Ruby and MySQL) and installed the Docker plugin in RubyMine 2016.1; however, it doesn't seem very practical for daily development use, I mean the develop-run-debug cycle that happens before deploying to the test server.
Am I missing something to make this workflow possible? Or isn't Docker suggested for this step in the development process?
I don't develop under Windows, but here's how I handle this problem under Mac OS X. First off, my Rails project has a Guardfile set up that launches rails (guard-rails gem) and also manages running my tests whenever I make changes (guard-minitest gem). That's important to get fast turnaround time in development.
I launch docker daemonized, mounting a local directory into the docker image, with an exposed port 3000, running a never-ending command.
docker run -d -v {local Rails root}:/home/{railsapp} -p 3000:3000 {image id} tail -f /dev/null
I do this so I can connect to it with an arbitrary number of shells, to do any activities I can only do locally.
Ruby 2.2.5, Rails 5, and a bunch of Unix developer tools (heroku toolbelt, gcc et al.) are installed in the container. I don't set up a separate database container, as I use SQLite3 for development and pg for production (heroku). Eventually when my database use gets more complicated, I'll need to set that up, but until then it works very well to get off the ground.
I point RubyMine to the local rails root. With this, any changes are immediately reflected in the container. In another command line, I spin up ($ is host, # is container):
$ docker exec -it {container id} /bin/bash
# cd /home/{railsapp}
# bundle install
# bundle exec rake db:migrate
# bundle exec guard
bundle install is only needed when I've made Gemfile changes, or the first time.
bundle exec rake db:migrate is only needed when I've made DB changes, or the first time.
At this point I typically have a Rails instance that I can browse to at localhost:3000, and the RubyMine project is 'synchronized' to the Docker image. I then mostly make my changes in RubyMine, ignoring messages about not having various gems installed, etc., and focus on keeping my tests running cleanly as I develop.
For handling a console when I get exceptions, I need to add:
config.web_console.whitelisted_ips = ['172.16.0.0/12', '192.168.0.0/16']
to config/environments/development.rb in order for it to allow a web debug console when exceptions happen in development. (The 192.168/* might not be necessary in all cases, but some folks have run into problems that require it.)
I still can't debug using RubyMine, but I don't miss it anywhere near as much as I thought I would, especially with web consoles being available. Plus it allows me to run all the cool tools completely in the development environment, and not pollute my host system at all.
I spent a day or so trying to get the remote debugger to work, but the core problem appears to be that (the way ruby-debug works) the debugging process in the Docker container needs to 'reach out' to a port on the host to connect and send debugging information. Unfortunately, binding a port marks it as 'in use', so you can't create a 'listen only' connection from the host/RubyMine to a specific container port. I believe it's just a limitation of Docker at present; a change in either the way Docker handles networking, or in the way the ruby-debug-ide command transmits debugging information, would help fix it.
The upshot of this approach is that it allows me very fast turnaround time for testing, and equally fast turnaround time for in-browser development. This is optimal for new app development, but might not work as well if you have a large, old, crufty, and less-tested codebase.
Most/all of these capabilities should be present in the Windows version of Docker, as well.

Start Rails server automatically on Ubuntu startup

I need to run my Rails server on system startup. I am running my RoR code on an Ubuntu server, and the Rails server should always be up. Suppose the Ubuntu server shuts down because of some problem; when it starts again, my Rails server should start automatically. For this I made a script, given below:
#!/bin/sh
cd /home/subhrajyoti/spa
rails server -p 8888 -b 10.25.25.100 -d
On Windows I put this file in the Startup folder and it runs automatically each time Windows starts. Now I need the same behaviour on Ubuntu.
Check this out, it might help with your issue: Link
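If the linked approach doesn't suit, a simple sketch using cron's @reboot hook; it assumes the script above is saved as /home/subhrajyoti/start_rails.sh, is executable, and that rails is on the crontab user's PATH:

# in the app user's crontab (edit with: crontab -e)
@reboot /home/subhrajyoti/start_rails.sh

A systemd unit is the more robust route on modern Ubuntu, since it can also restart the server if it crashes, but the cron line is the closest analogue to the Windows Startup folder.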

Integrity CI Server not running builds when using Passenger / Apache webserver

The Integrity app is working fine for me in my OS X dev environment. I've deployed an instance to an Ubuntu server for my production setup, and I'm able to set up a new project. But when I trigger a manual build to test things for the first time, the build record is created, yet the build never runs.
I've added a bunch of logging to my application and have been able to track the point of failure to where the build job is added in ThreadPool#add. Everything appears to run fine up to the point where the job is added to the build pool, but the pool isn't actually running anything, despite being spawned and no exceptions being raised.
The environment I'm running is Ubuntu 11.04, RVM & Ruby 1.9.2-p290, Passenger / Apache, and running Integrity from master w/Sqlite3 and ThreadedBuilder.
UPDATE:
I found an article indicating this may be an issue with Apache & Passenger not loading the Ruby environment properly. This appears to be the case, since in dev I'm just running bundle exec rackup, while in production I was trying to use Passenger. So on the production machine I started an instance of Integrity using bundle exec rackup, and it does indeed run the builds, except that it didn't properly find the bundler gem as it should have. I'm sure I can track down a fix for that somehow.
So essentially the issue is running Integrity under Passenger rather than under rackup. The article that pointed me in this direction suggested getting Ruby into the Apache environment, but that solution didn't work for me. Can anyone help me determine how to properly run Integrity with Passenger?
The issue was in the way Passenger handles threading. By switching from the ThreadedBuilder to the DelayedBuilder, which uses DelayedJob for builds, I was able to use Passenger as the web server.
