I'm running into a strange issue trying to dockerize a (fairly large) Rails app. I'm using the official Ruby Docker image for now, and mounting my code into the container. Using:
docker pull ruby:2.2.4
docker run -it -v $PWD:/code ruby:2.2.4 bash
Then in the container I ran:
cd /code
bundle install
rails c
This works fine (I can't connect to the database, obviously, since I didn't link it, but rails console otherwise runs normally and loads my app).
However, when I instead run bundle install --deployment, running rails c just hangs forever.
Any idea what could cause this?
I was hoping to use a local copy of gems because we're also using a bunch of npm modules (which install locally into node_modules), so I figured keeping gems in the local directory as well would be the most straightforward approach and would give the same persistence between docker runs in the development environment.
Never mind, I figured it out: it's not completely hanging, just going really, really slowly thanks to VirtualBox's default shared-folder type. The code is on my OS X filesystem, mounted as a shared folder into VirtualBox (using vagrant), and then mounted as a volume into Docker. When the gems are installed into vendor/bundle they all have to be loaded from this slow volume, and there are ~150 gems total with all the dependencies.
root@fa575694f86a:/code# time echo puts :ok | rails c
Loading development environment (Rails 4.2.1)
Switch to inspect mode.
irb: warn: can't alias context from irb_context.
puts :ok
ok
nil
real 2m0.937s
user 0m2.574s
sys 0m59.985s
So it takes 2 whole minutes just to launch Rails. I changed vagrant to use an NFS shared folder, and that brings the launch time down to ~12 seconds (literally 10x faster; I'd heard of NFS being 2x faster, but this is pretty crazy). Using plain bundle install to install the gems globally, there's no penalty: it's the usual 3-4 second Rails load time, the same as I get running on OS X.
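For reference, this is roughly the Vagrantfile change involved; a minimal sketch, assuming your box has a private network interface (the VirtualBox provider requires one for NFS):

Vagrant.configure("2") do |config|
  # NFS synced folders need a host-only/private network on the guest
  config.vm.network "private_network", ip: "192.168.50.4"
  # Replace the default (vboxsf) synced folder with NFS
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end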
Related
I have a somewhat peculiar situation involving Docker and Ruby on Rails.
I'm creating images of a Ruby on Rails project; the problem is that the images are getting too big.
The project in question is a monolith that will become a microservice.
I intend to use the images in a Kubernetes cluster, and due to their size this can be detrimental to the cluster and to deployment time.
Even using smaller official Ruby images like slim-buster and alpine, the images I created were not that small, reaching approximately 600MB.
The main reason is the mandatory execution of the bundle install command.
To execute the bundle exec puma command, bundle install first has to have been run.
The /usr directory is the biggest offender after running bundle install.
I tried to get around this by placing a /usr directory with the gems already installed on my local host and then mounting it into the container. Even so, the message "Install missing gem executables with bundle install" is reported after attempting to execute the bundle exec puma command.
Could you give me any tips on how to get around this situation with this project?
I'm avoiding reformulating the whole project to be migrated to microservices at this point, but I need some advice.
Thank you very much in advance
The most obvious advice would be to remove unnecessary folders like spec (if you don't run tests against your Docker image), node_modules, and app/assets, since you don't need them in production; assets are normally precompiled.
Answers could be much more helpful if you post your Dockerfile :)
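In the meantime, here's a minimal sketch of what a slimmer Dockerfile can look like; the Ruby version, paths, and cleanup list are assumptions, so adjust them to your project:

FROM ruby:2.6-slim-buster

WORKDIR /app

# Install gems first so this layer is cached between code changes
COPY Gemfile Gemfile.lock ./
RUN bundle install --without development test --jobs 4 \
 && rm -rf /usr/local/bundle/cache/*.gem

COPY . .

# Precompile assets (may need nodejs and a SECRET_KEY_BASE depending on
# your app), then drop folders that aren't needed at runtime
RUN RAILS_ENV=production bundle exec rake assets:precompile \
 && rm -rf spec node_modules tmp/cache

CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]

A multi-stage build can cut this down further: install gems with the build tools in one stage, then copy /usr/local/bundle into a clean slim final stage so compilers and headers never reach the final image.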
I have a Rails 6.0.0.rc1 application (with the appengine gem installed) that I deployed to GCP. Is there a way to log into a remote rails console on the instance that runs the application? I tried this:
bundle exec rake appengine:exec -- bundle exec rails c
which gives the following output:
...
---------- EXECUTE COMMAND ----------
bundle exec rails c
Loading production environment (Rails 6.0.0.rc1)
Switch to inspect mode.
...
so apparently it executed the command, but closed the connection right afterwards.
Is there an easy way to do this?
For reference: on Heroku this would simply be:
heroku run rails c --app my-application
There are a few steps involved:
https://gist.github.com/kyptin/e5da270a54abafac2fbfcd9b52cafb61
If you're running a Rails app in Google App Engine's flexible environment, it takes a bit of setup to get to a rails console attached to your deployed environment. I wanted to document the steps for my own reference and also as an aid to others (the whole sequence is consolidated into a single session sketch after the steps).
Open the Google App Engine -> instances section of the Google Cloud Platform (GCP) console.
Select the "SSH" drop-down for a running instance. (Which instance? Both of my instances are in the same cluster, and both are running Rails, so it didn't matter for me. YMMV.) You have a choice about how to connect via ssh.
Choose "Open in browser window" to open a web-based SSH session, which is convenient but potentially awkward.
Choose "View gcloud command" to view and copy a gcloud command that you can use from a terminal, which lets you use your favorite terminal app but may require the extra steps of installing the gcloud command and authenticating the gcloud command with GCP.
When you're in the SSH session of your choice, run sudo docker ps to see what docker containers are presently running.
Identify the container of your app. Here's what my output looked like (abbreviated for easier reading). My app's container was the first one.
jeff#aef-default-425eaf...hvj:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND NAMES
38e......552 us.gcr.io/my-project/appengine/default... "/bin/sh -c 'exec bun" gaeapp
8c0......0ab gcr.io/google_appengine/cloud-sql-proxy "/cloud_sql_proxy -di" focused_lalande
855......f92 gcr.io/google_appengine/api-proxy "/proxy" api
7ce......0ce gcr.io/google_appengine/nginx-proxy "/var/lib/nginx/bin/s" nginx_proxy
25f......bb8 gcr.io/google_appengine/fluentd-logger "/opt/google-fluentd/" fluentd_logger
Note the container name of your app (gaeapp in my case), and run sudo docker exec -it gaeapp bash (substituting your container name) to get a shell inside it.
Add ruby and node to your environment: export PATH=$PATH:/rbenv/versions/2.3.4/bin:/rbenv/bin:/nodejs/bin
cd /app to get to your application code.
Add any necessary environment variables that your Rails application expects to your environment. For example: export DATABASE_URL='...'
If you don't know what your app needs, you can view the full environment of the app with cat app.yaml.
Run bin/rails console production to start a Rails console in the Rails production environment.
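Putting it all together, the whole session looks something like this (container name, PATH, and env vars are from my setup above; yours may differ):

# on the GAE flex instance, via the SSH session
sudo docker ps                     # find your app's container (gaeapp here)
sudo docker exec -it gaeapp bash   # open a shell inside it

# now inside the container
export PATH=$PATH:/rbenv/versions/2.3.4/bin:/rbenv/bin:/nodejs/bin
cd /app
export DATABASE_URL='...'          # plus whatever else app.yaml lists
bin/rails console production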
For example, in PHP-based apps it's recommended to run PHP-FPM as a different user from the one who owns the codebase, so that if anyone hacks your app from the web, they won't be able to write anything to the codebase (except the public assets directory). It seems like Ruby apps are designed to run their HTTP/application servers, like Unicorn and Puma, as the same user. Puma (the default Rails server) doesn't even have config entries for specifying a user/group.
UPDATE
What I expect to do with a Ruby app (the same as I do with PHP-FPM) is to run sudo unicorn or sudo puma as the codebase owner and specify a user/group (like www-data) in the config, so the HTTP server runs the master process as root and the child processes as www-data. Puma has no such settings, and I haven't found any related issues in Puma's repo. This made me think that maybe it's a common thing in the Ruby ecosystem to run it like that. I still tried to run the latest Rails app (5.2.1) with Unicorn, but hit another issue: Rails has a dependency on the bootsnap gem, and when you run sudo unicorn -c config.rb it creates the tmp/cache/bootsnap-* directories as root (in the master process), which breaks everything because the child processes running as www-data won't have access to them. That made me consider again that maybe I'm doing something wrong and it's OK to run Ruby apps as the codebase owner, although I see no argument for how that's different from running PHP-FPM.
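For what it's worth, Unicorn does support privilege dropping in its config file (Unicorn::Configurator#user): the master keeps the user that started it, and workers switch after forking. A minimal sketch, with paths as assumptions:

# config/unicorn.rb - started as root, e.g. sudo unicorn -c config/unicorn.rb
working_directory "/var/www/app"
worker_processes 2
listen "/var/www/app/tmp/sockets/unicorn.sock"

# Worker processes drop privileges to www-data after forking;
# only the master stays as the user who launched it.
user "www-data", "www-data"

As for bootsnap, it honors a BOOTSNAP_CACHE_DIR environment variable, so pointing it at a directory the www-data workers can write to (or pre-creating tmp/cache with the right ownership) should avoid the root-owned cache problem.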
We have a Rails 3.2.9 app, and recently switched to Docker in development. Until now, I've always used zeus locally on my machine to preload my codebase and execute tests with RSpec faster.
But how would you achieve this with Docker? When I try to install zeus inside my container with gem install zeus and start it with zeus start, I get
Unable to accept socket connection.
It looks like Zeus is already running. If not, remove .zeus.sock and try again.
And there is a .zeus.soc file (notice the missing k at the end) left in my filesystem.
Has anybody got this working with Docker?
Apparently zeus is not able to create the .zeus.sock file on the vboxsf filesystem used by VirtualBox for sharing a volume with the host. So a solution is to tell zeus explicitly to create the file somewhere else by setting the ZEUSSOCK environment variable. This is discussed here: https://github.com/burke/zeus/issues/488
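So, something like this inside the container; /tmp/zeus.sock is just an arbitrary writable location off the shared volume, and the spec path is illustrative:

# keep the socket off the vboxsf-mounted volume
export ZEUSSOCK=/tmp/zeus.sock
zeus start

# in another shell in the same container:
ZEUSSOCK=/tmp/zeus.sock zeus test spec/models/user_spec.rb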
I'm using the capistrano3-foreman gem to deploy my app into production on a CentOS server, but capistrano is trying to run the foreman export command as root. Since I installed rvm and everything else from a user that has passwordless (NOPASSWD) sudo privileges in the sudoers file, the foreman export cannot be completed.
I'm getting the following error.
sh: /root/.rvm/bin/rvm: No such file or directory
How can I prevent capistrano-foreman from running the command as root, and make it use my user's home path instead?
Thanks in advance
OK. Since RHEL & CentOS 7 migrated to systemd, my first mistake was trying to export foreman to upstart.
But when I exported foreman to systemd, systemd did not recognise the foreman export scripts as a service, so that didn't work either.
After many hours of work & research I decided to take my chances with supervisord on CentOS 7, and now it works like a charm.
http://supervisord.org/installing.html
And please note that Debian & Ubuntu are also getting rid of upstart...
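In case it helps anyone, a supervisord program entry for a Rails app looks roughly like this; the paths, user, and app name are assumptions for illustration:

; /etc/supervisord.d/myapp.ini
[program:myapp]
command=/home/deploy/.rvm/bin/rvm default do bundle exec puma -C config/puma.rb
directory=/home/deploy/apps/myapp/current
user=deploy
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/myapp.log

Running the command through rvm ... do keeps the right Ruby on the PATH, and user=deploy avoids the run-as-root problem from the question entirely.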