I want to have a separate instance for running Sidekiq in my production environment.
Currently I have db, web and app instances, and the app instance is also running Sidekiq, which is proving to be a mistake.
I created a new instance with the roles redis, redis_master and sidekiq, but when I SSH into it nothing is running, and if I do a cap deploy to it, bundle install fails saying "dotenv" is only for instances with the app role, and the deploy rolls back.
How do I set things up? Do I need to add the app role to the Sidekiq instance for it to work?
EDIT:
Okay, I've made it work by adding the app role to the instance running Sidekiq. I also removed the passenger and apache roles from it manually so it doesn't start an app server. The only problem I'm facing now is that rubber is not automatically starting, stopping and restarting Sidekiq during deploys. I need to figure that out.
Looking good though.
Are these ansible roles?
I'd recommend setting up separate rails and ruby roles in your ansible playbook, running the playbook on your new sidekiq-prod instance, and then doing a cap prod deploy.
If you are using a Rails app with Capistrano to do your Sidekiq deploys, you can access the Rails env in your setup of lib/capistrano/tasks/sidekiq.cap:
export RAILS_ENV=<%= fetch(:rails_env) %>
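For the automatic start/stop/restart during deploys, one way to wire it in is a plain Capistrano task plus a deploy hook. The sketch below is only an assumption about a typical setup, not tied to the export line above: it presumes Capistrano 3 (on/roles/execute), a pid file under shared/tmp/pids, a sidekiq role on the worker host, and an older Sidekiq that still supports the -d daemonize flag; adjust names and paths to your environment.

# lib/capistrano/tasks/sidekiq.cap -- minimal sketch
namespace :sidekiq do
  desc 'Restart Sidekiq on hosts with the sidekiq role'
  task :restart do
    on roles(:sidekiq) do
      within release_path do
        with rails_env: fetch(:rails_env) do
          pid_file = shared_path.join('tmp/pids/sidekiq.pid')
          # Stop the old worker if a pid file is present
          if test("[ -f #{pid_file} ]")
            execute :bundle, :exec, :sidekiqctl, 'stop', pid_file, '10'
          end
          # Start a new daemonized worker
          execute :bundle, :exec, :sidekiq, '-d',
                  '-e', fetch(:rails_env),
                  '-P', pid_file,
                  '-L', shared_path.join('log/sidekiq.log')
        end
      end
    end
  end
end

after 'deploy:published', 'sidekiq:restart'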
I'm trying to deploy a simple Ruby on Rails application (Ruby 2.7.7 and Rails ~> 6.1.7) using Elastic Beanstalk. Every time I deploy a new version or update the environment configuration, it takes a while, times out, and then fails with no changes applied.
Event details
I SSH'd into my instance and I can't see any errors in the eb-engine.log file. I restarted my instance, terminated the environment and created a new one, but the issue still happens.
I can't figure out why this happens. Please help me!
Many thanks.
So here's what I did, and the output I got:
root@ubuntu-512mb-sfo1-01:/var/lib/dokku/plugins# dokku postgres:link DATABASE ubuntu-512mb-sfo1-01
2016/02/18 05:24:38 open /var/lib/dokku/plugins/available/pg-plugin/plugin.toml: no such file or directory
2016/02/18 05:24:38 open /var/lib/dokku/plugins/available/pg-plugin/plugin.toml: no such file or directory
no config vars for ubuntu-512mb-sfo1-01
Can someone help me? I'm trying to deploy Rails to DigitalOcean.
I'm using this tutorial: http://blog.flatironschool.com/using-digital-ocean-and-dokku-for-easier-rails-app-deploys/, but it seems to be horribly outdated. I've run into so many errors that I'm thinking of giving up and staying with Heroku hosting.
It means that you don't have a Postgres docker container active. Take a look at the dokku-pg-plugin to learn how to configure and instantiate a Postgres docker container.
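For reference, the flow with the current official postgres plugin is roughly the following; command names have changed between plugin generations, so treat this as an assumption and check the README of the plugin you actually have installed (my-database and my-app are placeholders):

sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres
dokku postgres:create my-database
dokku postgres:link my-database my-app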
By the way, since your objective is to move from Heroku to DigitalOcean and you're having trouble with dokku, may I suggest using DeployBot instead? I managed to successfully deploy a Rails 4 app to DigitalOcean using DeployBot. Follow this tutorial. You can easily follow this guide with DeployBot, adapting the unicorn and nginx stop/start services to the hooks that DeployBot provides.
Edit:
Since you wanted a more specific answer for the DeployBot solution, here's my approach (this was roughly 3-4 months ago):
Create the droplet and follow the guide to install Ruby, Rails, Unicorn and nginx, plus the script to control Unicorn (it's in the tutorial).
Configure DeployBot and make sure you run bundle install and other Rails-specific commands (changing environments and so on) after the upload (this is a predefined hook).
The last command should be service nginx restart to restart the server (using the script from step 1); see the sketch after this list.
Profit!
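As a rough sketch, the post-upload hook ends up being something like the commands below; the exact bundler flags, environment and whether you precompile assets there are assumptions, and the nginx/unicorn control comes from the script set up in step 1:

bundle install --deployment --without development test
RAILS_ENV=production bundle exec rake db:migrate
RAILS_ENV=production bundle exec rake assets:precompile
sudo service nginx restart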
I have a Rails application that needs to communicate with a background Java application using DBus. Is it possible to do this on Heroku, or will I need VPS hosting?
To use DBus, both applications need to be running inside the same operating system.
I have only run rake jobs on Heroku's cron, which is called 'Heroku Scheduler', but if you can load the Java app on Heroku you should be able to do it.
Heroku handles cron jobs via its own add-on called Scheduler. You can read how to use it here: https://addons.heroku.com/scheduler
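Scheduler just runs a command (typically a rake task) on a one-off dyno at the interval you choose. A minimal sketch of such a task, where the task name and the Article model are purely illustrative:

# lib/tasks/scheduler.rake
desc 'Purge stale records (run daily via Heroku Scheduler)'
task purge_stale_records: :environment do
  puts 'Purging stale records...'
  Article.where('created_at < ?', 30.days.ago).destroy_all
  puts 'Done.'
end

In the Scheduler dashboard you would then schedule the command rake purge_stale_records.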
The Integrity App is working fine for me in my OSX dev environment. I've deployed an instance to an Ubuntu server for my production setup, and I'm able to set up a new project. But when I trigger a manual build to test things, the build record is created and the build itself never runs.
I've added a bunch of logging to my application and have been able to track the point of failure to where the build job is added in ThreadPool#add. It appears everything runs fine up to adding the job to the build pool, but the pool isn't actually running anything, despite being spawned and no exceptions being raised.
The environment I'm running is Ubuntu 11.04, RVM & Ruby 1.9.2-p290, Passenger / Apache, and running Integrity from master w/Sqlite3 and ThreadedBuilder.
UPDATE:
I found an article indicating this may be an issue with Apache & Passenger not loading the Ruby environment properly. This appears to be the case, since in dev I'm just running bundle exec rackup, and in production I was trying to use Passenger. So on the production machine I started an instance of Integrity using bundle exec rackup, which does indeed start running the builds, except that it didn't properly find the bundler gem as it should have. I'm sure I can track down a fix for that somehow.
So essentially the issue I am having is with running Integrity under Passenger rather than via rackup. The solution in the article that pointed me in this direction (getting Ruby into the Apache environment) didn't work for me, though. Can anyone help me determine how to properly run Integrity with Passenger?
The issue was in the way Passenger handles threading. By switching from the ThreadedBuilder to the DelayedBuilder, which uses DelayedJob for builds, I was able to use Passenger as the web server.
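For anyone hitting the same thing: the builder is selected in Integrity's init.rb. A sketch of the switch, assuming the DelayedJob builder options documented in Integrity's README (they may differ in your version), plus a separate worker process to actually run the queued builds:

Integrity.configure do |c|
  # was: c.builder :threaded, 5
  c.builder :dj, :adapter => "sqlite3", :database => "dj.db"
end

# and run the DelayedJob worker alongside the web app, e.g.:
# bundle exec rake jobs:work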
I'm new to Heroku and Ruby on Rails and this may seem trivial. But I could not find the answer.
Google App Engine has a web server application that emulates all of the App Engine services on your local computer. Does Heroku have something similar?
Basically I want to run/debug RoR app on local machine before pushing it to Heroku.
If you are on the Cedar stack, there is a local utility called foreman that can read your Procfile and simulate how the app will run on Heroku. More info about it is on Dev Center here:
https://devcenter.heroku.com/articles/procfile#developing-locally-with-foreman
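If your app doesn't have a Procfile yet, a minimal one for a typical Rails app might look like the two lines below (process names and commands are just an example):

web: bundle exec rails server -p $PORT
worker: bundle exec rake jobs:work

Then run foreman start from the app root, or foreman start web to run only the web process.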
The Heroku CLI has a local command to run your app locally. Without options, it runs the processes defined in the Procfile in the app root, using any environment variables defined in .env:
heroku local
For configuration options, such as using different paths for .env and the Procfile, and for the other local subcommands, see: https://devcenter.heroku.com/articles/heroku-local
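For example (the file names here are just placeholders):

heroku local web                            # run only the web process
heroku local -f Procfile.dev -e .env.local  # use an alternate Procfile and env file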
I use http://pow.cx/ and https://github.com/Rodreegez/powder for that. It doesn't emulate Heroku, but it lets you set up a 'production'-like environment quickly.
Also, check http://devcenter.heroku.com/articles/multiple-environments and consider whether you need a staging deploy.
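A staging setup is essentially a second Heroku app with its own git remote, roughly like this (the app name is a placeholder):

heroku create myapp-staging --remote staging
git push staging master
heroku run rake db:migrate --remote staging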
Nothing like that exists for Heroku, but to be honest you don't really need it. Develop locally and use Ruby 1.9.2, as that's the Heroku default these days; keep in mind the constraints of Heroku: http://devcenter.heroku.com/categories/platform-constraints. Use Postgres locally, since that is what the Heroku shared DB is, and you'll be off to a good start.