How can I test whether Sidekiq is running on a remote server? I can start Sidekiq:
RAILS_ENV=production bundle exec sidekiq
but I don't know whether it works or not. I don't use Capistrano; I can only use cron.
I'm not sure this question makes a whole lot of sense. Have you gotten it running on your local development machine? If so, you should be able to do the same thing to test it remotely. Set up a job for it to run in the background, then ssh into your gear and use top or ps -ef to see whether the process is running.
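For reference, a quick check over SSH might look like this (the user and hostname are placeholders; adjust to your setup):

$ ssh deploy@your-server.example.com
$ ps -ef | grep '[s]idekiq'
$ top

If ps prints a sidekiq line for your RAILS_ENV, the worker is up. To be extra sure, enqueue a trivial job and watch the Sidekiq log to confirm it gets processed.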
I am using this command
bundle exec sidekiq -d
to run the Sidekiq server in the background, but I get this error message:
ERROR: Daemonization mode was removed in Sidekiq 6.0, please use a proper process supervisor to start and manage your services.
Sidekiq runs, but not in the background. After closing the console, Sidekiq automatically shuts down.
You may also think about using a process manager like Overmind, which will help you manage multiple processes (for instance, the server and Sidekiq):
https://github.com/DarthSim/overmind
There are other tools around the web; this is my personal choice.
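As a rough sketch of how that looks, assuming a Procfile in the project root (the process names and commands below are illustrative, not from the question):

# Procfile
web: bundle exec rails s -p 3000
worker: bundle exec sidekiq

$ overmind start

Overmind reads the Procfile and supervises both processes from one terminal, and lets you restart or connect to each process individually.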
You need to open another terminal tab (in Ubuntu, Ctrl + Shift + T) and run the command
bundle exec sidekiq
Daemonization was removed from the latest versions of Sidekiq to encourage users to learn the newer, better approaches. Here is the link to the discussion about it.
The discussion suggested using a process supervisor like systemd, upstart, foreman, etc. to manage Sidekiq.
So you need to write your own service file to start and stop Sidekiq. For reference, here is the link to Sidekiq's example systemd service:
https://github.com/mperham/sidekiq/blob/master/examples/systemd/sidekiq.service
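A minimal sketch of wiring that example up on a systemd host (the checkout path and the fields you edit are assumptions about your server):

$ sudo cp sidekiq/examples/systemd/sidekiq.service /etc/systemd/system/sidekiq.service
$ sudo nano /etc/systemd/system/sidekiq.service   # adjust WorkingDirectory, User and ExecStart for your app
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now sidekiq
$ systemctl status sidekiq

systemd then starts Sidekiq at boot and can restart it on failure, which is the supervision the removed -d flag never gave you.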
You didn't mention the operating system, so I'll just assume an Ubuntu production VM. You're going to want to set up Sidekiq with something like systemd or upstart. Sidekiq has some example configurations to get you started: https://github.com/mperham/sidekiq/tree/master/examples.
I haven't done this on a Mac before, but a quick Google search turned up this: Start sidekiq automatically on OSX.
We are starting to integrate our build and automated testing process into a Jenkins pipeline, and I have an issue with starting the Rails server.
First of all, this is our pipeline chart:
In each of the "Config" steps (0, 1, 2), I start a different Rails app on a specific port using rails s -p XXXX -d, and just after that command I run lsof -i:XXXX and I DO see the server running.
But in the QA stage, where I want to use the servers I started in the Servers Configuration stage, I get connection refused in our tests. Also, when I access the machines the apps ran on, I don't see them running anymore, even though I used -d to daemonize them.
Any ideas? It seems like the Rails servers ran only for the Servers Configuration stage and then shut down. Is that possible? And if so, how do I handle it?
Thanks!
I'm trying to develop a Rails project without having to install Ruby and all the server tools on my local Windows machine. I've created my Docker containers (Ruby and MySQL) and installed the Docker plugin in RubyMine 2016.1, but it doesn't seem very practical for daily development use, by which I mean the develop/run/debug cycle just before deployment to a test server.
Am I missing something to make this workflow possible? Or is Docker not suggested for this step in the development process?
I don't develop under Windows, but here's how I handle this problem under Mac OS X. First off, my Rails project has a Guardfile set up that launches Rails (guard-rails gem) and also manages running my tests whenever I make changes (guard-minitest gem). That's important for getting fast turnaround time in development.
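For reference, a rough sketch of getting that Guard setup in place (assuming the gems go in the Gemfile's development group; the init step only generates Guardfile entries that you then tweak):

# add guard, guard-rails and guard-minitest to the Gemfile (development group), then:
$ bundle install
$ bundle exec guard init rails minitest
$ bundle exec guard

guard init writes template blocks into the Guardfile; guard then watches the filesystem and reruns the Rails server and tests on changes.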
I launch docker daemonized, mounting a local directory into the docker image, with an exposed port 3000, running a never-ending command.
docker run -d -v {local Rails root}:/home/{railsapp} -p 3000:3000 {image id} tail -f /dev/null
I do this so I can connect to it with an arbitrary number of shells, to do any activities I can only do locally.
Ruby 2.2.5, Rails 5, and a bunch of Unix developer tools (heroku toolbelt, gcc et al.) are installed in the container. I don't set up a separate database container, as I use SQLite3 for development and pg for production (heroku). Eventually when my database use gets more complicated, I'll need to set that up, but until then it works very well to get off the ground.
I point RubyMine to the local rails root. With this, any changes are immediately reflected in the container. In another command line, I spin up ($ is host, # is container):
$ docker exec -it {container id} /bin/bash
# cd /home/{railsapp}
# bundle install
# bundle exec rake db:migrate
# bundle exec guard
bundle install is only needed when I've made Gemfile changes, or the first time.
bundle exec rake db:migrate is only needed when I've made DB changes, or the first time.
At this point I typically have a Rails instance that I can browse to at localhost:3000, and the RubyMine project is 'synchronized' to the Docker image. I then mostly make my changes in RubyMine, ignoring messages about not having various gems installed, etc., and focus on keeping my tests running cleanly as I develop.
For handling a console when I get exceptions, I need to add:
config.web_console.whitelisted_ips = ['172.16.0.0/12', '192.168.0.0/16']
to config/environments/development.rb in order for it to allow a web debug console when exceptions happen in development. (The 192.168/* might not be necessary in all cases, but some folks have run into problems that require it.)
I still can't debug using RubyMine, but I don't miss it anywhere near as much as I thought I would, especially with web consoles being available. Plus it allows me to run all the cool tools completely in the development environment, and not pollute my host system at all.
I spent a day or so trying to get the remote debugger to work, but the core problem appears to be that (the way ruby-debug works) you need to allow the debugging process (in the docker container) to 'reach out' to the host's port to connect and send debugging information. Unfortunately binding ports puts them 'in use', and so you can't create a 'listen only' connection from the host/RubyMine to a specific container port. I believe it's just a limitation of Docker at present, and a change in either the way Docker handles networking, or in the way the ruby-debug-ide command handles transmitting debugging information would help fix it.
The upshot of this approach is that it allows me very fast turnaround time for testing, and equally fast turnaround time for in-browser development. This is optimal for new app development, but might not work as well if you have a large, old, crufty, and less-tested codebase.
Most/all of these capabilities should be present in the Windows version of Docker, as well.
I'm using Rails (4.0.1) and delayed_job_active_record (4.0.1) for background tasks in my application. DJ works fine only when I use rake jobs:work in production mode. But I want it to run as a daemon process, and that process is no longer alive after some time.
If I run bin/delayed_job start RAILS_ENV=production, I can see the pid file in tmp/pids/delayed_job.pid and the process is alive, but nothing is working. Any clue about this issue?
Maybe the delayed jobs are failing due to some error. Try installing dj_mon (https://github.com/akshayrawat/dj_mon) in the Rails application and view the jobs log.
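If you'd rather check from the command line first, a quick sketch (assuming the standard delayed_job_active_record schema, where failures land in the last_error column):

$ RAILS_ENV=production bin/rails runner "puts Delayed::Job.count"
$ RAILS_ENV=production bin/rails runner "puts Delayed::Job.where.not(last_error: nil).first(3).map(&:last_error)"
$ tail -n 50 log/delayed_job.log

A growing count with non-empty last_error values usually means the jobs are being picked up but raising errors.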
If the delayed_job service is shutting down due to a memory leak or some unknown reason, you can try installing and configuring 'monit' on your server. It will restart the delayed_job service automatically.
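A minimal monit stanza for that, as a sketch (the app path and environment are assumptions; drop it into monit's include directory and reload):

$ sudo tee /etc/monit/conf.d/delayed_job >/dev/null <<'EOF'
check process delayed_job with pidfile /var/www/myapp/tmp/pids/delayed_job.pid
  start program = "/bin/bash -lc 'cd /var/www/myapp && RAILS_ENV=production bin/delayed_job start'"
  stop program  = "/bin/bash -lc 'cd /var/www/myapp && RAILS_ENV=production bin/delayed_job stop'"
EOF
$ sudo monit reload

monit polls the pid file on its check cycle and reruns the start program whenever the process disappears.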
I have a Rails application that needs to communicate with a background Java application using DBus. Is it possible to do this on Heroku, or will I need VPS hosting?
To use DBus, both applications need to be running inside the same operating system.
I have only run rake jobs on Heroku's cron, which is called 'Heroku Scheduler', but if you can load the Java app on Heroku you should be able to do it.
Heroku handles cron jobs via its own add-on called Scheduler. You can read how to use it here: https://addons.heroku.com/scheduler
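For reference, adding the add-on from the CLI is a one-liner, and the job itself is configured in its dashboard (which task you schedule is up to you):

$ heroku addons:create scheduler:standard
$ heroku addons:open scheduler

Note that Scheduler runs short, periodic tasks in one-off dynos; a long-lived background process like the Java app would need its own worker dyno instead.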