Connecting two vagrant machines - ruby-on-rails

I am currently attempting to connect two Vagrant environments. One is a web application with an associated Postgres database. The other is an API application which makes calls to the Postgres database on the first Vagrant machine. Can anyone provide advice on how this can be achieved? I believe I will need to change my database.yml or environment.rb file, but I am not quite sure how. My Vagrantfiles and database.yml are currently as follows:
Front-End Machine Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "hashicorp/precise64"
config.vm.network "forwarded_port", guest: 3000, host: 3000
config.vm.synced_folder "../Base", "/Base"
config.vm.synced_folder "../api", "/API"
end
Front-End Machine database.yml:
default: &default
adapter: postgresql
database: chsh
development: &development
<<: *default
host: localhost
username: username
password: password
database: database_name
pool: 10
API Machine:
Vagrant.configure("2") do |config|
config.vm.box = "hashicorp/precise64"
config.vm.network "forwarded_port", guest: 3002, host: 3002
config.vm.synced_folder "../Base", "/Base"
config.vm.provider "virtualbox" do |vb|
vb.gui = true
end
end
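What I'm imagining, though I haven't tested it, is giving the front-end machine a private network address in its Vagrantfile (the IP below is made up):
config.vm.network "private_network", ip: "192.168.50.10"
and then pointing the API app's database.yml at that address instead of localhost:
development:
  <<: *default
  host: 192.168.50.10
  username: username
  password: password
  database: database_name
I assume Postgres on the front-end machine would also need to listen on that interface and allow the connection in pg_hba.conf, but that is the part I am least sure about.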

As I read the code, I didn't see any way to configure multiple machines.
You can work around this by reconfiguring before use:
module Vagrant
  extend self # expose the helpers below as module methods so Vagrant.set(...) works

  def set(name)
    send(name) if respond_to?(name)
  end

  def front_end
    Vagrant.configure("2") do |config|
      config.vm.box = "hashicorp/precise64"
      config.vm.network "forwarded_port", guest: 3000, host: 3000
      config.vm.synced_folder "../Base", "/Base"
      config.vm.synced_folder "../api", "/API"
    end
  end

  def api
    Vagrant.configure("2") do |config|
      config.vm.box = "hashicorp/precise64"
      config.vm.network "forwarded_port", guest: 3002, host: 3002
      config.vm.synced_folder "../Base", "/Base"
      config.vm.provider "virtualbox" do |vb|
        vb.gui = true
      end
    end
  end
end
You will then be able to do something like this:
Vagrant.set(:front_end)
Vagrant.set(:api)
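For comparison, the multi-machine questions further down this page declare several boxes in a single Vagrantfile with config.vm.define; if your Vagrant version supports that, a rough, untested sketch of these two machines in that style would be:
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"

  config.vm.define "front_end" do |frontend|
    frontend.vm.network "forwarded_port", guest: 3000, host: 3000
    frontend.vm.synced_folder "../Base", "/Base"
    frontend.vm.synced_folder "../api", "/API"
  end

  config.vm.define "api" do |api|
    api.vm.network "forwarded_port", guest: 3002, host: 3002
    api.vm.synced_folder "../Base", "/Base"
    api.vm.provider "virtualbox" do |vb|
      vb.gui = true
    end
  end
end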

Related

New Relic Insights is logging pageviews and User Agent info from local machine in dev environment

New Relic Insights is logging pageviews and User Agent info from my local machine in the development environment. Another developer in another city is also seeing development environment pageviews and other info being logged.
When I pull up samples, I see localhost:3000, which is my port.
However, the production info is also being logged.
I have New Relic running using Heroku's default setup. It automatically sets the license key as an environment variable. I do not have the license key anywhere in the app; it is only set through an environment variable.
If I pull up my local development environment, navigate to port 3000, and refresh, then query New Relic Insights for events in the last minute, I see my city, my user agent info, my visited URL and pageview. Our product is in beta, so there is really no chance that an actual user in my location is hitting the same random page.
I have tried turning developer mode off and monitor mode off. I cannot understand how this can be happening.
I do have some files hosted on AWS (images and some JS), if that matters.
Gemfile
group :production do
  gem 'rails_12factor'
  gem 'newrelic_rpm'
end
config/newrelic.yml
common: &default_settings
license_key: <%= ENV["NEW_RELIC_LICENSE_KEY"] %>
log_level: info
development:
<<: *default_settings
app_name: app-dev
developer_mode: false
monitor_mode: false
agent_enabled: false
test:
<<: *default_settings
monitor_mode: false
developer_mode: false
agent_enabled: false
production:
app_name: app-prod
monitor_mode: true
agent_enabled: false
<<: *default_settings
config/puma.rb
require 'puma_worker_killer'

before_fork do # assumed opener: the disconnect/PumaWorkerKiller calls are normally run from before_fork, and the trailing end implies a block
  ActiveRecord::Base.connection_pool.disconnect!

  PumaWorkerKiller.config do |config|
    config.ram = ENV['PUMA_WORKER_KILLER_RAM'] || 1024 # mb
    config.frequency = 5 # seconds
    config.percent_usage = 0.98
    config.rolling_restart_frequency = 12 * 3600 # 12 hours in seconds
  end
  PumaWorkerKiller.start
end
workers Integer(ENV['WEB_CONCURRENCY'] || 5)
min_threads_count = Integer(ENV['MIN_THREADS'] || 1)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads min_threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  # #sidekiq_pid ||= spawn('bundle exec sidekiq -c 2 -q default -q mailers')
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.establish_connection
  end
end
config/initializers/sidekiq.rb
require 'sidekiq'

redis_url = ENV['REDISTOGO_URL']

redis_config = {
  url: redis_url,
  namespace: 'oct',
}

Sidekiq.configure_server do |config|
  config.redis = {
    url: ENV["REDISTOGO_URL"], namespace: 'app',
    # ENV[...].to_i || 6 would never fall back (to_i always returns an integer), so apply the default before to_i
    size: (ENV["SIDEKIQ_SERVER_CONNECTIONS"] || 6).to_i
  }
  config.error_handlers << Proc.new do |exception, context_hash|
    SidekiqErrorService.new(exception, context_hash).notify
  end
end

Sidekiq.configure_client do |config|
  config.redis = {
    url: ENV["REDISTOGO_URL"], namespace: 'app',
    size: (ENV["REDIS_CLIENT_CONNECTION_SIZE"] || 2).to_i
  }
end
So I believe it was the New Relic Browser JS that I included in the head of my pages. Once I wrapped that in an if production_environment? check (my helper method), I only saw production environment traffic.
I believe something in that JS was reporting to New Relic.
Fixed now.
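For reference, the guard ended up looking roughly like this. production_environment? is my own helper, and I'm assuming here that the snippet is emitted via NewRelic::Agent.browser_timing_header rather than a pasted loader, so adjust to taste:
# app/helpers/application_helper.rb -- rough shape of the fix (helper name is mine)
def new_relic_browser_snippet
  return unless production_environment?           # skip dev/test so Insights only sees production traffic
  NewRelic::Agent.browser_timing_header.html_safe # emit the Browser agent JS for the page head
end
The layout head then calls <%= new_relic_browser_snippet %> instead of embedding the JS unconditionally.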

Rails Mailer does not pick up on the host

Site is in sub-directory /app
In development.rb:
config.action_mailer.default_url_options = {
  host: 'localhost:3000/app'
}
The URL generated in the mailer by item_url(1) is localhost/item/1, but it should be localhost:3000/app/item/1.
How do I replace localhost with localhost:3000/app?
This is in Rails 4.1.15.
To expand on tadman's comment... Your host should only have the 'localhost' portion and a separate port option:
config.action_mailer.default_url_options = {
  host: 'localhost',
  port: 3000
}
As for making the app load under /app:
config.relative_url_root = "/app"
(See Rails app in a subdirectory for details.)
Using script_name did the trick:
config.action_mailer.default_url_options = {
  host: 'localhost',
  port: 3000,
  script_name: '/app'
}
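With that in place, the generated URL should come out the way the question wanted (quick sanity check, assuming the same route as above):
item_url(1)
# => "http://localhost:3000/app/item/1"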

How to access Guest's port 3000 from Host?

Here's my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu-14.04-x64"

  # Sync'd folders
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.synced_folder "~/work", "/home/vagrant/work", create: true
  config.vm.synced_folder "~/apt-archives", "/var/cache/apt/archives/", create: true

  # Ubuntu VM
  config.vm.define "ubuntu" do |ubuntu|
    ubuntu.vm.provision "shell", path: "provision.sh", privileged: false
    ubuntu.vm.network "forwarded_port", guest: 3000, host: 8080 # http
    ubuntu.vm.network "private_network", ip: "10.20.30.100"
    ubuntu.vm.hostname = "ubuntu"

    # VirtualBox Specific Stuff
    # https://www.virtualbox.org/manual/ch08.html
    config.vm.provider "virtualbox" do |vb|
      # Set more RAM
      vb.customize ["modifyvm", :id, "--memory", "2048"]
      # More CPU Cores
      vb.customize ["modifyvm", :id, "--cpus", "2"]
    end # End config.vm.provider virtualbox
  end # End config.vm.define ubuntu
end
For example, when I run the Rails app on port 3000, I can access it from the guest machine at http://localhost:3000.
But I'm trying to access the app via the host's browser.
None of the below worked:
http://10.20.30.100:8080
https://10.20.30.100:8080
http://10.20.30.100:3000
https://10.20.30.100:3000
The browser on the host shows: ERR_CONNECTION_REFUSED
For security reasons, Rails 4.2 limits remote access while in development mode. This is done by binding the server to 'localhost' rather than '0.0.0.0' ....
To access Rails working on a VM (such as one created by Vagrant), you need to change the default Rails IP binding back to '0.0.0.0'.
See the answers on the following Stack Overflow question; there are a number of different approaches suggested.
The idea is to get Rails running either by running the following command:
rails s -b 0.0.0.0
Or by hardcoding the binding into the Rails app (which I found less desirable):
# add this to config/boot.rb
require 'rails/commands/server'

module Rails
  class Server
    def default_options
      super.merge(Host: '0.0.0.0')
    end
  end
end
Personally, I would have probably gone with the suggestion to use foreman and a Procfile:
# Procfile in Rails application root
web: bundle exec rails s -b 0.0.0.0
This would, I believe, keep the development setup closer to how the app is started in deployment.
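For what it's worth, once the server is bound to 0.0.0.0 and with the Vagrantfile above unchanged, I'd expect the app to be reachable from the host at (untested, based purely on the mappings shown):
http://localhost:8080 (guest port 3000 forwarded to host port 8080)
http://10.20.30.100:3000 (the private network IP, hitting the guest port directly)
http://10.20.30.100:8080 should still fail, since port 8080 only exists on the host side.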

Rails, Redis and Sentinel

I have a Redis cluster of 4 nodes, 1 master and 3 slaves, monitored by Sentinel.
Now, in Rails, I need to connect to this cluster, reading from the nearest replica and writing to the master, the same as I do with MongoDB.
Using the redis-rails gem, how is it possible to configure the cache_store to specify the Sentinels instead of a single node?
I may have missed it, but I wasn't aware that you could configure it to read from the slaves? However, this is my master + 2 slave configuration:
config.cache_store = :redis_store, {
  url: 'redis://prestwick/1',
  sentinels: [{host: 'prestwick.i', port: 26379}, {host: 'carnoustie.i', port: 26379}, {host: 'birkdale.i', port: 26379}],
  role: 'master',
  expires_in: 1.hour
}
And in case it's useful, my configuration for a generic REDIS object and Sidekiq (this is in config/initializers/001_redis.rb):
redis_options = {
  url: 'redis://prestwick/0',
  sentinels: [{host: 'prestwick.i', port: 26379}, {host: 'carnoustie.i', port: 26379}, {host: 'birkdale.i', port: 26379}],
  role: 'master'
}

redis_sidekiq_options = {
  url: 'redis://prestwick/2',
  sentinels: [{host: 'prestwick.i', port: 26379}, {host: 'carnoustie.i', port: 26379}, {host: 'birkdale.i', port: 26379}],
  role: 'master'
}

REDIS = Redis.new(redis_options)

Sidekiq.configure_server do |config|
  config.redis = redis_sidekiq_options
end

Sidekiq.configure_client do |config|
  config.redis = redis_sidekiq_options
end
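On the reading-from-replicas part: if I remember correctly, the plain redis gem's Sentinel support also accepts a role of slave, which asks Sentinel for a replica instead of the master. A separate read-only connection might then look like this (untested sketch reusing the hosts above; double-check against your redis gem version):
# Untested: a second connection resolved to a replica via Sentinel, used for reads only.
REDIS_REPLICA = Redis.new(
  url: 'redis://prestwick/0',
  sentinels: [{host: 'prestwick.i', port: 26379}, {host: 'carnoustie.i', port: 26379}, {host: 'birkdale.i', port: 26379}],
  role: 'slave'
)
Whether redis-rails exposes the same option for the cache_store is something I haven't verified.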

Port management strategy for Vagrant on Windows?

I was wondering if there is a community-accepted pattern for managing Vagrant port mappings? Auto-correction is great, but it doesn't help with integration testing. In my case, I spin up multiple boxes, provision with Chef, and then run Serverspec.
You can see below that, with auto port assignments, the rake config file is no longer predictable when multiple boxes are used. Before I start another yak-shaving project, I was wondering how others handle this situation?
Vagrant.configure("2") do |config|
config.vm.define 'wsus_server',primary: true do |config|
config.vm.network "forwarded_port", guest: 5985, host: 15985, auto_correct: true
config.vm.network "forwarded_port", guest: 3389, host: 13390, auto_correct: true
config.vm.network "forwarded_port", guest: 4000, host: 4000, auto_correct: true
end
config.vm.define 'wsus_client' do |config|
config.vm.network "forwarded_port", guest: 5985, host: 15985, auto_correct: true
config.vm.network "forwarded_port", guest: 3389, host: 13390, auto_correct: true
config.vm.network "forwarded_port", guest: 4000, host: 4000, auto_correct: true
end
end
serverspec (rake config):
require 'serverspec'
require 'winrm'

include Serverspec::Helper::WinRM
include Serverspec::Helper::Windows

RSpec.configure do |c|
  user = 'vagrant'
  pass = 'vagrant'
  endpoint = "http://localhost:15985/wsman"
  c.winrm = ::WinRM::WinRMWebService.new(endpoint, :ssl, :user => user, :pass => pass, :basic_auth_only => true)
  c.winrm.set_timeout 300 # 5 minutes max timeout for any operation
end
I think the community-accepted pattern for that these days is to use Test Kitchen instead of firing up individual VMs in Vagrant.
You can even use the kitchen-docker driver to avoid firing up VMs at all. If you're on OS X or Windows, dvm can help with creating a docker-server VM seamlessly for that.
You can also have a look at meez to get you started with that. I've written an article on InfoQ about that if you're interested.
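If you do stay with plain Vagrant, one workaround (just a sketch, not something from your setup) is to hand each box its own fixed host port and drop auto_correct, so the endpoint in the rake config stays predictable per machine:
# Sketch: fixed, per-machine host ports instead of auto_correct.
# The port numbers are arbitrary; pick any free ones and keep them unique per box.
MACHINES = {
  'wsus_server' => { winrm: 15985, rdp: 13390 },
  'wsus_client' => { winrm: 15986, rdp: 13391 },
}

Vagrant.configure("2") do |config|
  MACHINES.each do |name, ports|
    config.vm.define name do |machine|
      machine.vm.network "forwarded_port", guest: 5985, host: ports[:winrm]
      machine.vm.network "forwarded_port", guest: 3389, host: ports[:rdp]
    end
  end
end
The same hash (or a YAML file both files read) can then build the Serverspec endpoint, e.g. "http://localhost:#{MACHINES['wsus_server'][:winrm]}/wsman".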
