I'm trying to develop a Rails project without installing Ruby and all the server tools on my local Windows machine. I've created my Docker containers (Ruby and MySQL) and installed the Docker plugin in RubyMine 2016.1, but it doesn't seem very practical for daily development use, i.e. the develop/run/debug cycle that happens before deployment to a test server.
Am I missing something to make this workflow possible? Or isn't Docker suggested for this step in the development process?
I don't develop under Windows, but here's how I handle this problem under Mac OS X. First off, my Rails project has a Guardfile set up that launches Rails (the guard-rails gem) and also runs my tests whenever I make changes (the guard-minitest gem). That's important for fast turnaround time in development.
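A Guardfile along those lines might look like this (a sketch only — the watch patterns and port are illustrative, not from the original setup):

```ruby
# Guardfile -- sketch combining guard-rails and guard-minitest
guard 'rails', port: 3000 do
  watch('Gemfile.lock')
  watch(%r{^(config|lib)/.*})
end

guard :minitest do
  # Re-run a test when it changes, or when its app counterpart changes
  watch(%r{^test/(.*)_test\.rb$})
  watch(%r{^app/(.*)\.rb$}) { |m| "test/#{m[1]}_test.rb" }
  watch(%r{^test/test_helper\.rb$}) { 'test' }
end
```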
I launch the container detached (daemonized), mounting a local directory into it, exposing port 3000, and running a never-ending command:
docker run -d -v {local Rails root}:/home/{railsapp} -p 3000:3000 {image id} tail -f /dev/null
I do this so I can connect to it with an arbitrary number of shells, to do any activities I can only do locally.
Ruby 2.2.5, Rails 5, and a bunch of Unix developer tools (heroku toolbelt, gcc et al.) are installed in the container. I don't set up a separate database container, as I use SQLite3 for development and pg for production (heroku). Eventually when my database use gets more complicated, I'll need to set that up, but until then it works very well to get off the ground.
I point RubyMine to the local rails root. With this, any changes are immediately reflected in the container. In another command line, I spin up ($ is host, # is container):
$ docker exec -it {container id} /bin/bash
# cd /home/{railsapp}
# bundle install
# bundle exec rake db:migrate
# bundle exec guard
bundle install is needed only when I've made Gemfile changes, or the first time.
bundle exec rake db:migrate is needed only when I've made DB changes, or the first time.
At this point I typically have a Rails instance that I can browse to at localhost:3000, and the RubyMine project is 'synchronized' to the Docker image. I then mostly make my changes in RubyMine, ignoring messages about not having various gems installed, etc., and focus on keeping my tests running cleanly as I develop.
For handling a console when I get exceptions, I need to add:
config.web_console.whitelisted_ips = ['172.16.0.0/12', '192.168.0.0/16']
to config/environments/development.rb in order for it to allow a web debug console when exceptions happen in development. (The 192.168/* might not be necessary in all cases, but some folks have run into problems that require it.)
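(Note: newer web-console releases renamed this option; on web-console 4.x, to the best of my knowledge, the equivalent is:)

```ruby
# config/environments/development.rb -- web-console 4.x renamed
# whitelisted_ips to allowed_ips; same IP ranges as above
config.web_console.allowed_ips = ['172.16.0.0/12', '192.168.0.0/16']
```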
I still can't debug using RubyMine, but I don't miss it anywhere near as much as I thought I would, especially with web consoles being available. Plus it allows me to run all the cool tools completely in the development environment, and not pollute my host system at all.
I spent a day or so trying to get the remote debugger to work, but the core problem appears to be that (the way ruby-debug works) the debugging process in the Docker container needs to 'reach out' to a port on the host to connect and send debugging information. Unfortunately, binding a port puts it 'in use', so you can't create a 'listen only' connection from the host/RubyMine to a specific container port. I believe it's simply a limitation of Docker at present; a change in either the way Docker handles networking, or in the way the ruby-debug-ide command transmits debugging information, would fix it.
The upshot of this approach is that it allows me very fast turnaround time for testing, and equally fast turnaround time for in-browser development. This is optimal for new app development, but might not work as well if you have a large, old, crufty, and less-tested codebase.
Most/all of these capabilities should be present in the Windows version of Docker, as well.
I have learned to use Docker as a development server (LAMP and MEAN), and now I feel I should take the next step by removing the PHP and Node binaries from my system and using the binaries from containers instead. So on a fresh Solus install I set up containers for PHP, Node, Ruby, etc. (Solus already recommends using containers for such tasks.) But I got stuck on the first day.
I installed VS Code (Code-OSS) and installed extensions (Prettier, PHPCS, etc.) on it, and they need the path of the installed binaries (path/to/phpcs, path/to/node, etc.).
I initially set up the configuration path as
docker run -it --rm herloct/phpcs phpcs
based on https://gist.github.com/barraq/e7f85262bc7a0af2d8d8884d27b62d2c but using a more up-to-date container. It didn't work, so I set it up as an alias, thinking it would fool VS Code into treating it as a native command, but that didn't work either. I have confirmed that running those commands directly from the terminal does work, but the VS Code PHP IntelliSense extension does not want to cooperate.
Any suggestion?
P.S. Any tips for keeping a container running in the background, to avoid the container boot-up delay every time I use PHPCS or javac from a container? I can keep the LAMP server running, but every time I use the terminal tools, a new container is started to execute the command and then killed, causing boot-up and shutdown delay.
In case it is still relevant to someone: You might want to create a VS Code development container to use dockerized binaries.
For this to work, a .devcontainer.json is required which could be as simple as:
{
"image": "mcr.microsoft.com/vscode/devcontainers/typescript-node:0-12"
}
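The same file can also tell VS Code which extensions to install inside the container, so tools like PHPCS resolve against the container's binaries rather than host paths. A slightly fuller sketch (the extension ID is just an example; note that older devcontainer versions used this top-level "extensions" key, while the current spec nests it under "customizations.vscode"):

```json
{
  "image": "mcr.microsoft.com/vscode/devcontainers/typescript-node:0-12",
  "extensions": [
    "esbenp.prettier-vscode"
  ]
}
```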
I want to use Docker as a development environment. I am familiar with the basic Docker concepts such as containers, images, volumes, etc. I am also reading this article.
I think that there are already images created specifically for RoR development. Could someone recommend a couple of images to start with?
Suppose that I create a container and mount my working folder (RoR projects). Besides code writing, there are also command-line jobs such as Linux tasks (update, install), Rails-specific commands (rake, migrations...). I may need to install new binaries or new gems, or change the Ruby version using rbenv. How can I accomplish these tasks under Docker? Do I type commands in a console, or SSH into the container?
I managed to create an Ubuntu container and run it as follows:
docker run -it -v /Users/me/Documents/Projects:/var/source_files ubuntu
It gives me a console in the container. Next I guess I can run commands like gem install, apt-get update, etc. Is this how we should configure our environment?
I cannot find information on how to run, how to maintain, how add/remove gems, etc.
It's really up to you and what you're most comfortable with. I'm assuming solo development on libraries of some kind rather than full-fledged apps[1].
I, for example, tend to use Makefiles when developing on specific Golang projects and have some separate images I use for different occasions. For example, if I have to test a Python or Node script, I simply type play and I get into a silly container with a few dependencies pre-installed:
https://github.com/odino/dev#play
https://github.com/odino/dev/blob/master/play/Dockerfile
In my personal experience, though, I've found that shell scripts / aliases work very well across projects, so I tend to have simple aliases that work on most projects. If I were you, for example, I would use a minimalistic approach and alias dev to docker run -ti -v $(pwd):/src $RUBY_IMAGE so that you can then run dev rake test, dev rails server, etc. from any project. Your $RUBY_IMAGE should have a few utilities installed (htop, curl and so on) and you should be good to go.
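As a concrete sketch of that alias (the image name and the --rm/-w flags are my additions, not from the answer):

```shell
# ~/.bashrc (or similar) -- RUBY_IMAGE is whatever image you build or pull
export RUBY_IMAGE=ruby:3.2
# --rm discards the throwaway container; -w makes the mount the working dir
alias dev='docker run -ti --rm -v "$PWD":/src -w /src "$RUBY_IMAGE"'
```

Then `dev rake test` or `dev bundle install` works from any project root (add a `-p` port mapping to the alias if you want `dev rails server` reachable from the host).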
Again, I must stress that it really depends what you're comfortable with -- most of the time I'm extremely productive with just a Makefile.
[1] if working on full-fledged apps, docker-compose works well for a lot of people and has a very good DX. minikube is a tool I'd recommend you pick up only if you know how to work with kubernetes. We used docker-compose for a long time but switched to minikube a few months ago, as it closely mirrors our production environment, and minikube works better (imho) when you have quite a few services talking to each other.
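For reference, a minimal docker-compose sketch for a Rails app plus a database service might look like this (every image name and credential here is illustrative, not from the answer):

```yaml
# docker-compose.yml -- sketch; images and credentials are examples
services:
  web:
    image: ruby:3.2
    working_dir: /src
    volumes:
      - .:/src          # mount the project so edits show up immediately
    command: bash -c "bundle install && bundle exec rails s -b 0.0.0.0"
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
```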
I am having some trouble understanding the vagrant workflow from their website.
I had previously been working on a project and had gone through the whole process of changing directory, setting up the vagrant box, etc. I had even run bundle install, which installed all of the gems of the forked project I am working on. I configured the web server and was even able to view the project in my browser through the web server connection.
Later on I had to go get dinner so I did
vagrant destroy
When I returned, in the same directory I ran
vagrant up
Then I did
vagrant ssh
followed by
cd /vagrant
when I get here I run
rails s
and I get the following error:
The program 'rails' is currently not installed. You can install it by typing:
sudo apt-get install rails
Shouldn't running vagrant up remember all of the work I had previously done? Or do I have to restart from scratch and rebuild all of my gems every time? Am I missing something?
vagrant destroy does literally what the command says: it destroys the started-up VM completely, along with its disk images. Every change (i.e. installation of software, results of running bundle install, etc.) is lost, except for changes in the /vagrant directory.
If you want to just stop the VM without destroying its disk images, use vagrant halt instead (or just power off the VM as you would a real server, i.e. by issuing poweroff).
The general workflow for deploying a vagrant-powered VM outlined in the documentation is that you distribute a Vagrantfile along with your sources that includes a provisioning section (config.vm.provision) which does the stuff you've described: installation of additional software not bundled in the box image (i.e. Rails, gems), setting up databases, etc. It can be implemented in several ways, from just running a simple shell script (a sequence of commands to execute) up to using full configuration management systems like Chef, Puppet, CFEngine, or Ansible.
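A minimal shell-provisioned Vagrantfile might look like this (a sketch only — the box name and package list are illustrative, not from the question):

```ruby
# Vagrantfile -- sketch; box and packages are examples
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "forwarded_port", guest: 3000, host: 3000

  # Runs on the first `vagrant up` (and again via `vagrant provision`),
  # so a destroyed VM can be rebuilt without manual steps
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y build-essential git
    gem install bundler
    cd /vagrant && bundle install
  SHELL
end
```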
A temporary break (like going for dinner) generally does not require even halting the VM, let alone destroying it. Even a full-fledged VM running under VirtualBox / VMware / KVM with a single-user Rails application hardly consumes enough resources to worry about.
I am developing 2 applications in Rails 3.1 (will upgrade soon), and have noticed that my current strategy has its drawbacks. What I am doing currently is:
Work directly in the development directory, with version control via Git (which works perfectly for me).
I have defined the databases like this (uninteresting parts omitted):

development:
  database: db/dev.db

production:
  database: db/dev.db
I have both applications running all the time in production mode, where the ports are defined as 3008 and 3009.
From time to time, I want to change little things, and start then a development server for one of the two applications directly with the defaults: rails s thin (port == 3000).
I have noticed that the following things don't work very well:
When I change CSS or JavaScript files, I often have to clean up (and after development, rebuild) the assets.
Sometimes the development server takes the CSS and JavaScript files from one server and uses them for the other. I have to manually clear the browser cache to avoid that.
What would be a better strategy to develop and use the two applications in parallel locally on my computer? Any tips and hints are welcome. Should I use a deployment tool (Capistrano) for that? Shall I roll my own Rake task for the divide? Or do I miss some magic switch that will heal the wounds (sounds pathetic :-))?
In the end, it is a mix of changes, so I answer my own question and hope that others may learn something from it. There are 2 major decisions (and some minor ones):
Work with different repositories for development and production, even on the same machine. Add a third, bare repository to synchronize the two. Push only from development, pull only from production.
Use different ports all the time for different applications. Make a scheme like:
appA: dev ==> 4001, prod ==> 3001
appB: dev ==> 4002, prod ==> 3002
...
Here are the changes that I have done. rails/root is the root directory of my application; the overall directory structure is the following:

rails/
  root/
  another/
  ...
  bare/
    root.git/
    another.git/
    ...
  production/
    root/
    another/
    ...
Create 2 new repositories from the old one: one as a bare repository, the other for production only:
mkdir rails/production
mkdir rails/bare
cd rails/bare
git clone --bare ../root
cd ../root
git remote add bare ../bare/root.git
cd ../production
git clone ../bare/root.git
cd root
git remote add bare ../../bare/root.git
Don't use one (the same) database for development and production, just to be sure that Git can do its magic.
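In database.yml that means something like the following (a sketch; the adapter lines are added for completeness and assume SQLite, as the db/dev.db path suggests):

```yaml
development:
  adapter: sqlite3
  database: db/dev.db

production:
  adapter: sqlite3
  database: db/prod.db
```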
Develop (only) on the development repository.
After enough tests, do the following 2 steps:
root> git push bare
root/../production/root> git pull bare
Start the development server (only) with: root> rails s thin -p 4009
and the production server (only) with: root/../production/root> rails s thin -e production -p 3009
So as a result I have a little more work to do to stage changes from development to production, but I have eliminated those small irritations that were around all the time.
Running production servers on the development machine, or developing on the production machine, is an unusual, even discouraged setup. Use your local machine to develop, run the server in development mode, and run your test suite. Commit changes to git. Then, from time to time, deploy to a server that runs in production mode. That's the recommended setup. As a production server you could set up your own (e.g. your own machine, or one in the cloud like EC2) and use Capistrano for deployment. More simply, and with a lot less trouble, you can deploy to a service like Heroku. All you need to do is a git push and the app will deploy. A single instance of concurrency on Heroku is even free.
Also, Windows is not a very well supported environment for running a Rails server, you're better off with Linux. For development, Windows may do the trick, but you'll definitely be in the minority. Most people are on Mac or Linux. Sometimes people recommend installing Ubuntu Linux on top of Windows in a virtual machine for Rails development.
I have a simple Ruby on Rails application that works in local testing (using either SQLite or the Ruby mysql2 gem).
I have a web server ready to upload my app online.
I understand that I need to create a new MySQL database, which is no problem, and obviously add the connection info in database.yml, but how do I properly upload the whole thing (the app root) to a public dir of my site?
Rails itself contains a few links to get you started with deployment. I was in your boat a while ago, and I got started with Passenger and Apache within half an hour (although I did have some light Apache experience going in).
Get started just to prove to yourself you can do it
Not that it's a good idea, but the balls to the wall easiest way to "deploy" is the following (assuming you've already pulled your application into your deployment environment, created your database, and run rake db:migrate and any application-specific steps like bundle install on Rails 3):
rails server -p 80 on Rails 3 (./script/server -p 80 on Rails 2).
There is no step 2.
This has to be run on a machine for which you have administrative rights and for which port 80 is not already being listened to by another application. This is suboptimal in many ways, most apparent of which is that it won't allow for virtual hosting (i.e., it won't cooperate with other "websites" being run from that server), but it's a good baby step to dip your feet into.
Go to the machine's FQDN or in fact any hostname that resolves to the machine's IP address (via a hosts file or an A record), and you'll see your application.
Now do it properly
You're going to want to do the following to bring your application "up to speed":
Deploy it behind a virtual host behind a webserver application like Apache
Use a production-oriented deployment setup (WEBrick's single-threadedness, among other factors, makes it unsuitable for production)
Actually use the "production" rails environment
I'll be recommending a very, very typical Apache/Passenger deployment environment. The reason is that (at least it seems to me) this particular stack is the most thoroughly supported across the Internet, so if you need to get help, you'll have the easiest time with this.
1. Set up Apache
I don't want to sound like a tool, but setting up Apache (if it's not already set up in your deployment environment) is left as an exercise for the reader. It also varies enough across platforms that I couldn't possibly write a catch-all guide. Coarsely: use your distribution's package manager (for Ubuntu, this is apt-get) to get it hooked up.
2. Set up Passenger
Passenger installation is even easier. You just run one command, and their guide runs you through all the steps. At this point, in your Rails application root, you'll be able to run passenger start instead of rails s to have Passenger fill the role that WEBrick once did.
3. Hook up Passenger with Apache
The Passenger guide fairly thoroughly documents, step by step, how to set it all up. The ServerName attribute in Apache's VirtualHost entry should be set to your hostname. Passenger will "find" the Rails application from the public directory you give Apache, and when you restart Apache, the first time the server gets a request for a page, Passenger will hook up your Rails application and start serving up files.
I'm not performing these steps as I'm writing this guide, so I'm not sure to what extent this is done automatically, but make sure that the site is enabled via a2ensite (in the case that you're putting this VirtualHost node in the sites-available directory) and that Passenger is enabled via a2enmod.
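For reference, a typical Passenger VirtualHost entry looks roughly like this (the hostname and paths are examples; check the Passenger guide for the exact directives your version needs):

```apache
# /etc/apache2/sites-available/myapp.conf -- sketch; names are examples
<VirtualHost *:80>
    ServerName myapp.example.com
    # Passenger detects the Rails app from the public/ DocumentRoot
    DocumentRoot /var/www/myapp/public
    <Directory /var/www/myapp/public>
        Allow from all
        Options -MultiViews
    </Directory>
</VirtualHost>
```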
4. Make sure your production environment is ready
This is likely the first time you're using the production environment. Most rake tasks don't act on the production environment by default, but you can conveniently force them to by including RAILS_ENV=production inline with any rake task. The one you'll very likely be running is rake db:migrate RAILS_ENV=production. Bundler in Rails 3 works independently of the environment.
5. Go
Restart Apache. The specifics on how to do this will vary by distribution, so you'll have to look it up. For Ubuntu, apache2ctl restart does it for me.
Visit your hostname as you defined in ServerName, and you should be seeing your application up and running.
I've heard gems like Capistrano can assist with this.
https://github.com/capistrano/capistrano
Heroku is an excellent (free) option: http://docs.heroku.com/quickstart
Also, deploying to Heroku is as easy as it gets!