Adding a background process (Celery) to a scaling app on OpenShift - scalability

I am developing a scalable app for Red Hat OpenShift.
I plan to use three gears for the following purposes:
1) HAProxy and web cartridge
2) MySQL 5.1 database
3) Background process (Celery)
I have already set up the first two gears by creating a scalable app and adding the MySQL database cartridge. How can I assign the last gear to run Celery?

rhc cartridge-add https://raw.github.com/tresbailey/openshift-celery-cartridge/master/metadata/manifest.yml -a <appname>
I have not tested this cartridge; it's just one I found when searching for "openshift celery cartridge". But I can verify that it installs on its own gear in a scaled application. You can run rhc app show <appname> --gears to verify how many gears your application is using and which cartridges each is running (both before and after installing the above cartridge).
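For example, with a hypothetical app named myapp, you could compare the gear list before and after the install:

rhc app show myapp --gears
rhc cartridge-add https://raw.github.com/tresbailey/openshift-celery-cartridge/master/metadata/manifest.yml -a myapp
rhc app show myapp --gears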

Related

Is it possible to deploy an existing app to Docker?

I have an existing MEAN stack application. I have found many tutorials, but I cannot find anything about deploying an existing app to Docker. Is this possible?
As long as you can get the sources of your project on your deployment platform (the Ubuntu server), you can then follow the guide "Dockerizing a Node.js web app".
It shows how to create a simple web application in Node.js, then build a Docker image for that application, and lastly run the image as a container.
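As a minimal sketch, a Dockerfile for an existing Node.js app along the lines of that guide might look like this (the node:4 base image, port 3000, and server.js entry point are assumptions about your project):

# Base image and working directory (node:4 is an assumption; pick your version)
FROM node:4
WORKDIR /usr/src/app
# Install dependencies first so they are cached between builds
COPY package.json /usr/src/app/
RUN npm install
# Copy the rest of the application sources
COPY . /usr/src/app
EXPOSE 3000
CMD ["node", "server.js"]

Then build the image and run it as a container:

docker build -t mean-app .
docker run -p 3000:3000 -d mean-app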
You can see a more complete example at Semaphore.

Use RubyMine and Docker for development

I'm trying to develop a Rails project without having to install Ruby and all the server tools on my local Windows machine. I've created my Docker containers (Ruby and MySQL) and installed the Docker plugin in RubyMine 2016.1, but it doesn't seem very practical for daily development use, I mean the cycle of develop, run, debug just before deployment to a test server.
Am I missing something that would make this workflow possible? Or is Docker not suggested for this step in the development process?
I don't develop under Windows, but here's how I handle this problem under Mac OS X. First off, my Rails project has a Guardfile set up that launches rails (guard-rails gem) and also manages running my tests whenever I make changes (guard-minitest gem). That's important to get fast turnaround time in development.
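As a rough sketch, such a Guardfile might look like this (the watch patterns are assumptions; adjust them to your project layout):

# Restart the Rails server when config or lib files change (guard-rails gem)
guard 'rails' do
  watch('Gemfile.lock')
  watch(%r{^(config|lib)/.*})
end

# Re-run the matching test whenever an app or test file changes (guard-minitest gem)
guard :minitest do
  watch(%r{^test/(.*)_test\.rb$})
  watch(%r{^app/(.*)\.rb$}) { |m| "test/#{m[1]}_test.rb" }
  watch(%r{^test/test_helper\.rb$}) { 'test' }
end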
I launch docker daemonized, mounting a local directory into the docker image, with an exposed port 3000, running a never-ending command.
docker run -d -v {local Rails root}:/home/{railsapp} -p 3000:3000 {image id} tail -f /dev/null
I do this so I can connect to it with an arbitrary number of shells, to do any activities I can only do locally.
Ruby 2.2.5, Rails 5, and a bunch of Unix developer tools (Heroku Toolbelt, gcc, et al.) are installed in the container. I don't set up a separate database container, since I use SQLite3 for development and PostgreSQL for production (Heroku). Eventually, when my database use gets more complicated, I'll need to set that up, but until then this works very well to get off the ground.
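A minimal sketch of the config/database.yml split assumed here (Heroku supplies DATABASE_URL in production; pool and timeout values are just the usual defaults):

development:
  adapter: sqlite3
  database: db/development.sqlite3
  pool: 5
  timeout: 5000

production:
  adapter: postgresql
  url: <%= ENV['DATABASE_URL'] %>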
I point RubyMine to the local rails root. With this, any changes are immediately reflected in the container. In another command line, I spin up ($ is host, # is container):
$ docker exec -it {container id} /bin/bash
# cd /home/{railsapp}
# bundle install
# bundle exec rake db:migrate
# bundle exec guard
bundle install is only when I've made Gemfile changes or the first time.
bundle exec rake db:migrate is only when I've made DB changes or the first time.
At this point I typically have a Rails instance that I can browse to at localhost:3000, and the RubyMine project is 'synchronized' to the Docker image. I then mostly make my changes in RubyMine, ignoring messages about not having various gems installed, etc., and focus on keeping my tests running cleanly as I develop.
For handling a console when I get exceptions, I need to add:
config.web_console.whitelisted_ips = ['172.16.0.0/12', '192.168.0.0/16']
to config/environments/development.rb in order for it to allow a web debug console when exceptions happen in development. (The 192.168/* might not be necessary in all cases, but some folks have run into problems that require it.)
I still can't debug using RubyMine, but I don't miss it anywhere near as much as I thought I would, especially with web consoles being available. Plus it allows me to run all the cool tools completely in the development environment, and not pollute my host system at all.
I spent a day or so trying to get the remote debugger to work, but the core problem appears to be that (the way ruby-debug works) you need to allow the debugging process (in the docker container) to 'reach out' to the host's port to connect and send debugging information. Unfortunately binding ports puts them 'in use', and so you can't create a 'listen only' connection from the host/RubyMine to a specific container port. I believe it's just a limitation of Docker at present, and a change in either the way Docker handles networking, or in the way the ruby-debug-ide command handles transmitting debugging information would help fix it.
The upshot of this approach is that it allows me very fast turnaround time for testing, and equally fast turnaround time for in-browser development. This is optimal for new app development, but might not work as well if you have a large, old, crufty, and less-tested codebase.
Most/all of these capabilities should be present in the Windows version of Docker, as well.

Dokku error /var/lib/dokku/plugins/available/pg-plugin/plugin.toml: no such file or directory

So here's what I did, and the output that followed:
root@ubuntu-512mb-sfo1-01:/var/lib/dokku/plugins# dokku postgres:link DATABASE ubuntu-512mb-sfo1-01
2016/02/18 05:24:38 open /var/lib/dokku/plugins/available/pg-plugin/plugin.toml: no such file or directory
2016/02/18 05:24:38 open /var/lib/dokku/plugins/available/pg-plugin/plugin.toml: no such file or directory
no config vars for ubuntu-512mb-sfo1-01
Can someone help me? I'm trying to deploy Rails to DigitalOcean.
I used this tutorial: http://blog.flatironschool.com/using-digital-ocean-and-dokku-for-easier-rails-app-deploys/ - but it seems to be horribly outdated. I ran into so many errors that I'm thinking of giving up and staying with Heroku hosting.
It means that you don't have an active Postgres Docker container. Take a look at the dokku-pg-plugin to learn how to configure and instantiate a Postgres Docker container.
By the way, since your objective is to move from Heroku to DigitalOcean, and you're having trouble with dokku, may I suggest using DeployBot instead? I managed to successfully deploy a Rails 4 app to DigitalOcean using DeployBot. Follow this tutorial, and you can easily follow the guide with DeployBot, adapting the unicorn and nginx stop/start services with the hooks that DeployBot provides.
Edit:
Since you wanted a more specific answer for the DeployBot solution, here's my approach (this was about 3-4 months ago):
Create the droplet and follow the guide to set it up: install Ruby, Rails, unicorn, and nginx, plus the script to control unicorn (it's in the tutorial).
Configure DeployBot and make sure you run bundle install and other Rails-specific commands (changing environments and so on) after the upload (this is a predefined hook); see the sketch after these steps.
The last command should be service nginx restart to restart the server (using the script from step 1).
Profit!
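As a sketch, the post-upload hook commands from step 2 might look like this (the unicorn control script name from step 1 is hypothetical; use whatever the tutorial named it):

# Install gems and prepare the production app after DeployBot uploads the release
bundle install --deployment --without development test
RAILS_ENV=production bundle exec rake db:migrate
RAILS_ENV=production bundle exec rake assets:precompile
# Restart the app server (script from step 1) and then nginx
service unicorn_myapp restart
service nginx restart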

Database disappearing on OpenShift (OSE) running Postgres and Rails 4 / Ruby 2

I have a Ruby (2) on Rails (4) app deployed on OpenShift Enterprise running a Postgres database. After initial deployment the app worked perfectly: information was persisted in the database, routing was working, all the tests were passing, everything was good.
Then I deployed some new changes with git push openshift master. When I went back to the app it was still running, but all the database content (including table structure) was gone.
The output from the push was clean. I didn't write any hooks, or have any funky cron jobs running. I could repeat the process, rebuilding the database, and watch it get blown away on every deployment. This problem was not occurring in my local instance.
tl;dr: Make sure you have a .openshift directory at your project's root, use this as an example: https://github.com/openshift/rails4-example
Here's what was going on.
When I created the Rails app I didn't know where I would end up deploying it. Consequently, I didn't start with an OpenShift Rails skeleton app or use rhc app create ruby-X.X.X -a railsX.
When I was told to deploy on OpenShift, I just configured rhc and set up an openshift git remote.
This meant there was no .openshift directory in my project's root. Once I cloned https://github.com/openshift/rails4-example and moved that project's .openshift directory into my project root, I was able to deploy without losing my database.
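As a sketch, the fix boils down to copying that directory into your own repository and pushing again (the local paths and the openshift remote name are assumptions about your setup):

git clone https://github.com/openshift/rails4-example
cp -r rails4-example/.openshift my-rails-app/
cd my-rails-app
git add .openshift
git commit -m "Add .openshift action hooks and markers"
git push openshift master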

My app has no dynos running - No web processes running

Some of my apps on Heroku have no dynos anymore, although they previously worked fine:
heroku logs says No web processes running. My other applications are working well.
How do I fix it?
I was having the same problem; it really sucks. I was stuck on it for three hours or more. Eventually it was fixed just by deleting the whole Heroku app and then specifying the buildpack you're going to use, in your terminal, with this command: heroku buildpacks:set heroku/php. You can also set it directly when creating the app, which is what I did, and that fixed it:
heroku create myapp --buildpack heroku/php
The main reason was that a Python library had been installed that I wasn't even using, so Heroku found two buildpacks, Python and PHP, and used the Python one. Once I specified that I'm actually using PHP, everything was fine.
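A hedged follow-up: the buildpack change only takes effect on the next build, so push again and check the dyno state (this assumes the usual git-based Heroku deploy):

git commit --allow-empty -m "Rebuild with the PHP buildpack"
git push heroku master
heroku ps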
