cloudcontrol.com tcp port in use & procfile multiple commands & push hooks - ruby-on-rails

I'm trying to get Redmine running on cloudcontrol.com. I've got four questions:
I need to do more than start a webserver; for example, I need to run rake tasks each time I deploy. Can I put those in a one-liner? I've got the following in my Procfile for testing:
web: touch foobar; echo "barbarz"; bundle exec rails s -p $PORT -e production
but I neither see a file foobar nor do I get barbarz in the log files :(
When I log in to the server and want to start the application, it tells me TCP $PORT is already in use:
u24293@depvk7jw2mk-24293:~/www$ fuser $PORT/tcp # netstat and lsof are not available
24293/tcp: 10 13
u24293@depvk7jw2mk-24293:~/www$ ps axu | grep 13
u24293 13 0.0 0.0 52036 3268 ? SNs 15:22 0:00 sshd: u24293@pts/0
By sshd? Why would that be?
I need to change this default behaviour during push:
-----> Rails plugin injection
Injecting rails_log_stdout
Injecting rails3_serve_static_assets
or run something after it, as EasyRedmine doesn't like plugins in vendor/plugins (or I change the code of EasyRedmine quickly). How would I do that (not change the code, but run an after hook, like with Capistrano)?
We have our own GitLab on a dedicated server, and bundle needs to pull gems from it. How can I get the public key of the user running the app before the first deployment, so I can add it to GitLab?
Thanks in advance :)

The web command is only executed in the web containers. Using run bash connects you to a special ssh container of your app. See https://www.cloudcontrol.com/dev-center/Platform%20Documentation#secure-shell-ssh
Generally, you cannot put multiple commands in one Procfile line. Wrap them in a sh -c '<cmd1>; <cmd2>' call or use an explicit shell script.
Keep in mind that this script is executed in every container being started: once per container your app is deployed with, and again for any redeploys triggered by the platform during operation (node failures, addon changes etc.).
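A minimal sketch of the sh -c approach (the rake task and server command below are taken from the question; the Procfile line itself is an illustration, not verified against the platform):

```shell
# A Procfile "web" entry can chain setup commands before the server, e.g.:
#
#   web: sh -c 'bundle exec rake db:migrate; exec bundle exec rails s -p $PORT -e production'
#
# The same pattern, demonstrated with plain commands so it can run anywhere:
sh -c 'echo "running setup task"; echo "starting server"'
```

Using exec for the final command replaces the wrapping shell with the server process, so signals from the platform reach the server directly.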
In the ssh container the $PORT is used by the ssh server you are connected to.
If it is a runtime problem in Redmine, you could remove the plugins in the aforementioned startup script. If it's a problem during gem install, there is currently no way to circumvent this behavior.
Dependencies requiring special ssh keys are not supported right now. If your server supports basic auth over https, you can use the https://<username>:<password>@hostname syntax.
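Assuming your GitLab accepts HTTPS basic auth, Bundler can also pick the credentials up from an environment variable instead of embedding them in the Gemfile (the host name and credentials below are placeholders):

```shell
# Bundler reads per-host credentials from an environment variable derived
# from the host name: dots become double underscores, the name is upper-cased.
# gitlab.example.com and the credentials are made-up examples.
export BUNDLE_GITLAB__EXAMPLE__COM="deploy_user:secret_token"
# The Gemfile can then reference the repo without embedded credentials:
#   gem 'internal_gem', git: 'https://gitlab.example.com/group/internal_gem.git'
```

This keeps the secret out of version control; on the platform you would set the variable through the app's config/addon mechanism rather than in a shell.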

Related

Run `rails c` on GCloud instance with appengine gem

I have a Rails 6.0.0.rc1 application (with the appengine gem installed) that I deployed to GCP. Is there a way to log into a remote rails console on the instance that runs the application? I tried this:
bundle exec rake appengine:exec -- bundle exec rails c
which gives the following output:
...
---------- EXECUTE COMMAND ----------
bundle exec rails c
Loading production environment (Rails 6.0.0.rc1)
Switch to inspect mode.
...
so apparently it executed the command but closed the connection right after.
Is there an easy way to do this?
As reference: On Heroku this would simply be:
heroku run rails c --app my-application
There are a few steps involved:
https://gist.github.com/kyptin/e5da270a54abafac2fbfcd9b52cafb61
If you're running a Rails app in Google App Engine's flexible environment, it takes a bit of setup to get to a rails console attached to your deployed environment. I wanted to document the steps for my own reference and also as an aid to others.
Open the Google App Engine -> instances section of the Google Cloud Platform (GCP) console.
Select the "SSH" drop-down for a running instance. (Which instance? Both of my instances are in the same cluster, and both are running Rails, so it didn't matter for me. YMMV.) You have a choice about how to connect via ssh.
Choose "Open in browser window" to open a web-based SSH session, which is convenient but potentially awkward.
Choose "View gcloud command" to view and copy a gcloud command that you can use from a terminal, which lets you use your favorite terminal app but may require the extra steps of installing the gcloud command and authenticating the gcloud command with GCP.
When you're in the SSH session of your choice, run sudo docker ps to see what docker containers are presently running.
Identify the container of your app. Here's what my output looked like (abbreviated for easier reading). My app's container was the first one.
jeff@aef-default-425eaf...hvj:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND NAMES
38e......552 us.gcr.io/my-project/appengine/default... "/bin/sh -c 'exec bun" gaeapp
8c0......0ab gcr.io/google_appengine/cloud-sql-proxy "/cloud_sql_proxy -di" focused_lalande
855......f92 gcr.io/google_appengine/api-proxy "/proxy" api
7ce......0ce gcr.io/google_appengine/nginx-proxy "/var/lib/nginx/bin/s" nginx_proxy
25f......bb8 gcr.io/google_appengine/fluentd-logger "/opt/google-fluentd/" fluentd_logger
Note the container name of your app (gaeapp in my case), and run sudo docker exec -it gaeapp bash (substituting your container name) to get a shell inside it.
Add ruby and node to your environment: export PATH=$PATH:/rbenv/versions/2.3.4/bin:/rbenv/bin:/nodejs/bin
cd /app to get to your application code.
Add any necessary environment variables that your Rails application expects to your environment. For example: export DATABASE_URL='...'
If you don't know what your app needs, you can view the full environment of the app with cat app.yaml.
Run bin/rails console production to start a Rails console in the Rails production environment.
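The steps above, condensed into one session. This is a transcript sketch, not a runnable script: the instance name is the truncated one from the example output, the container name and env vars come from the steps above, and all of them will differ in your deployment.

```shell
# Condensed sketch of the walkthrough above (names are examples):
gcloud compute ssh aef-default-425eaf...hvj   # or use the browser SSH window
sudo docker ps                                 # find your app's container name
sudo docker exec -it gaeapp bash               # enter the app container
export PATH=$PATH:/rbenv/versions/2.3.4/bin:/rbenv/bin:/nodejs/bin
cd /app
export DATABASE_URL='...'                      # plus any other env vars from app.yaml
bin/rails console production
```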

Is it safe to run Ruby HTTP server from the user who owns codebase?

For example, in PHP-based apps it's recommended to run PHP-FPM as a user different from the one who owns the codebase, so that anyone who hacks your app from the web won't be able to write anything to the codebase (except the public assets directory). It seems like Ruby apps are designed to run HTTP/application servers like Unicorn and Puma as the same user. Puma (the default Rails server) doesn't even have config entries for specifying a user/group.
UPDATE
What I expect to do with a Ruby app (the same as I do with PHP-FPM) is to run sudo unicorn or sudo puma as a default user (the codebase owner) and specify a user/group (like www-data) in the config, so the HTTP server runs the master process as root and the child processes as www-data. Puma has no such settings, and I haven't found any related issues in Puma's repo. This made me think that maybe it's common in the Ruby ecosystem to run it like that. I still tried to run the latest Rails app (5.2.1) with Unicorn but hit another issue: Rails depends on the bootsnap gem, and when you run sudo unicorn -c config.rb it creates the tmp/cache/bootsnap-* directories as root (in the master process), which breaks everything because the child processes running as www-data won't have access to them. That made me wonder again whether I'm doing something wrong and it's OK to run Ruby apps as the codebase owner, although I see no argument for how that's different from running PHP-FPM.
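One workaround for the bootsnap ownership problem described above, sketched under the assumption that the master runs as root and the workers as www-data: create the cache directory before starting the server, so the root master does not create it root-owned.

```shell
# Sketch of a workaround for the bootsnap ownership problem: pre-create the
# cache directory before starting unicorn. Paths and user names are examples.
mkdir -p tmp/cache
# On the real server you would also hand it over to the worker user:
#   chown -R www-data:www-data tmp/cache
# and then start the server as before:
#   sudo unicorn -c config.rb
```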

Passing commands to deployed Ruby on Rails App in Bluemix

Finally, with the help of SO member JeffSloyer, I was able to deploy my RoR app to Bluemix. There seems to be an additional problem with the RoR app: I can't log in as admin.
http://csw-events.mybluemix.net/sign_in
The question here is not about the app itself; I have found a solution in the forum dedicated to this RoR app (currently inactive) -> SOLUTION.
The question is:
1: Can I pass commands to an already deployed app on Bluemix using CF, something like this?
cf -c "User.last.update_attribute(:admin, true)"
If not, what are the alternatives for passing such commands?
For this example, that would be:
bundle exec rails console
User.last.update_attribute(:admin, true)
You cannot pass commands to an already running CF application.
You can have the buildpack run commands for you at the start of the application by creating a manifest.yml file at the root of your application and specifying the command.
Sample manifest.yml:
---
applications:
- name: my-rails-app
command: bundle exec rake cf:on_first_instance db:migrate && bundle exec rails s -p $PORT -e $RAILS_ENV
Tips for Ruby Developers
You could also push the app again and add a command with the -c option:
cf push -f manifest.yml -c "User.last.update_attribute(:admin, true)"
It would mean some downtime but only a very limited amount of time.
If the app does not run anymore after a subsequent push, run the same cf command with -c "null"; Cloud Foundry is a bit buggy that way.
If it is only a one-off command you want to pass, this would be the recommended way, instead of putting it in the manifest file.

How to keep a ruby script running persistently within a Rails app on Heroku?

I have a Mailman server script that checks for incoming email and loads it into the Rails app database. The script should run continuously, checking for new email every 60 seconds. I was able to run the script on Heroku using heroku run:detached script/mailman_server, but when I checked back a few days later it wasn't running. How can I ensure it is always running?
You should use the Cedar stack and add a Procfile, e.g. something like...
web: bundle exec unicorn -p $PORT -c ./unicorn.rb
mailman: bundle exec script/mailman_server
Then:
heroku ps:scale mailman=1
Running that on the command line adds one worker. However, should the worker encounter an error and exit, you would need additional config to restart it.
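That extra config could be as small as a restart wrapper around the worker command. A hypothetical sketch (the run_forever helper and its retry policy are made up for illustration, not a Heroku feature):

```shell
# Hypothetical restart wrapper: keep re-running a worker command until it
# exits cleanly, so one crash doesn't leave the worker dead until redeploy.
run_forever() {
  until "$@"; do
    echo "worker exited non-zero; restarting" >&2
    sleep 1   # back off briefly between restarts
  done
}
```

Saved as a small shell script, the Procfile entry could then invoke that script with `bundle exec script/mailman_server` as its arguments instead of running the worker directly.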
SendGrid has a service which can accept incoming emails for your app:
http://docs.sendgrid.com/documentation/api/parse-api-2/
I haven't looked at the pricing.

Start Rails server from within Rails

Is it possible to start a Rails server from within a running Rails server?
I would also like to install the gems using bundle install.
I made a simple setup, but when I invoke bundle install, the gems of the running Rails server are installed, not the gems for the server I wish to start.
What would be the best strategy to launch another Rails server?
As others have stated in the comments, you can spawn shell commands from your Rails application. You have several options. http://mentalized.net/journal/2010/03/08/5_ways_to_run_commands_from_ruby/
If you want to inherit the user environment when running bundle commands, you might want to spawn a bash login shell and run the command there, e.g. /bin/bash -l -c "command here".
Although you didn't ask about killing them, if you're only going to run this and you're not building anything to keep track of process IDs, you could find and kill other instances with some ps aux | awk '/process name or unique path/ {print $2}' | xargs kill magic
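The awk step of that pipeline can be checked in isolation by feeding it a canned ps line (the user, PID, and path below are made-up examples):

```shell
# Demonstrate the PID-extraction step of the kill pipeline on a canned
# "ps aux" line; /srv/other_app is a made-up example path.
sample='deploy   4242  0.1  2.3  52036  3268 ?  Sl  15:22  0:00 ruby /srv/other_app/bin/rails s'
pid=$(printf '%s\n' "$sample" | awk '/other_app/ {print $2}')
echo "$pid"   # the field awk extracts; xargs kill would receive this
```

One caveat with the real pipeline: make the pattern specific enough that it doesn't match the grep/awk process itself or unrelated processes before piping into kill.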
