I was wondering if there is a community-accepted pattern for managing Vagrant port mappings? Auto-correction is great, but it doesn't help with integration testing. In my case, I spin up multiple boxes, provision with Chef, and then run Serverspec.
As you can see below, with auto-assigned ports the rake config file is no longer predictable when multiple boxes are used. Before I start another yak-shaving project, I was wondering how others handle this situation?
Vagrant.configure("2") do |config|
config.vm.define 'wsus_server',primary: true do |config|
config.vm.network "forwarded_port", guest: 5985, host: 15985, auto_correct: true
config.vm.network "forwarded_port", guest: 3389, host: 13390, auto_correct: true
config.vm.network "forwarded_port", guest: 4000, host: 4000, auto_correct: true
end
config.vm.define 'wsus_client' do |config|
config.vm.network "forwarded_port", guest: 5985, host: 15985, auto_correct: true
config.vm.network "forwarded_port", guest: 3389, host: 13390, auto_correct: true
config.vm.network "forwarded_port", guest: 4000, host: 4000, auto_correct: true
end
end
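One direction I've considered (a sketch only; the per-box offsets below are arbitrary numbers I made up) is deriving each host port from a fixed per-machine offset, so every mapping stays predictable without relying on auto-correction:

# Hypothetical fixed offsets per box; host ports never collide by construction.
OFFSETS = { 'wsus_server' => 0, 'wsus_client' => 1000 }

Vagrant.configure("2") do |config|
  OFFSETS.each do |name, offset|
    config.vm.define name, primary: (name == 'wsus_server') do |machine|
      machine.vm.network "forwarded_port", guest: 5985, host: 15985 + offset
      machine.vm.network "forwarded_port", guest: 3389, host: 13390 + offset
      machine.vm.network "forwarded_port", guest: 4000, host: 4000 + offset
    end
  end
end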
serverspec (rake config):
require 'serverspec'
require 'winrm'

include Serverspec::Helper::WinRM
include Serverspec::Helper::Windows

RSpec.configure do |c|
  user = 'vagrant'
  pass = 'vagrant'
  endpoint = "http://localhost:15985/wsman"

  c.winrm = ::WinRM::WinRMWebService.new(endpoint, :ssl, :user => user, :pass => pass, :basic_auth_only => true)
  c.winrm.set_timeout 300 # 5 minutes max timeout for any operation
end
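If the offset idea from the sketch above holds, the rake config could derive its endpoint from the same table instead of hardcoding 15985 (again just a sketch; TARGET_BOX is a made-up environment variable):

# Same hypothetical offsets as in the Vagrantfile sketch above
OFFSETS = { 'wsus_server' => 0, 'wsus_client' => 1000 }

box = ENV.fetch('TARGET_BOX', 'wsus_server')
endpoint = "http://localhost:#{15985 + OFFSETS.fetch(box)}/wsman"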
I think the community-accepted pattern for that these days is to use Test Kitchen instead of firing up individual VMs in Vagrant.
You can even use the kitchen-docker driver to avoid firing up VMs at all. If you're on OS X or Windows, dvm can help with seamlessly creating a docker-server VM for that.
You can also have a look at meez to get you started with that. I've written an article on InfoQ about that if you're interested.
I'd like to adjust the puma configuration when running Capybara tests. Changing settings in .env, .env.test (I use dotenv), or config/puma.rb has no effect.
Where can I change the configuration?
Rails 5.1, poltergeist 1.15.0, capybara 2.14.0, puma 2.8.2
Generally with Capybara you configure the server in a register_server block. The :puma server definition Capybara provides is
Capybara.register_server :puma do |app, port, host|
  require 'rack/handler/puma'
  Rack::Handler::Puma.run(app, Host: host, Port: port, Threads: "0:4")
end
If you're using Rails 5.1 system testing, it adds a layer of abstraction on top of that, with server configuration done in ActionDispatch::SystemTesting::Server#register, which is defined as
def register
  Capybara.register_server :rails_puma do |app, port, host|
    Rack::Handler::Puma.run(app, Port: port, Threads: "0:1")
  end
end
Either way, you should be able to overwrite one of the current server registrations
Capybara.register_server :rails_puma do |app, port, host|
  Rack::Handler::Puma.run(app, ...custom settings...)
end
or configure your own
Capybara.register_server :my_custom_puma do |app, port, host|
  Rack::Handler::Puma.run(app, ...custom settings...)
end

Capybara.server = :my_custom_puma
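As a concrete illustration (these particular settings are just an example, not a recommendation), a custom registration that runs Puma single-threaded and suppresses its startup banner might look like:

Capybara.register_server :my_custom_puma do |app, port, host|
  require 'rack/handler/puma'
  # Single-threaded, quiet Puma for more deterministic test runs
  Rack::Handler::Puma.run(app, Host: host, Port: port, Threads: "1:1", Silent: true)
end

Capybara.server = :my_custom_puma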
In ApplicationSystemTestCase, you can configure this by passing options to the default :puma server used by Rails in setup.
AFAIK this will work for any options, but I've only used:
Silent: true to silence the Puma startup output
Threads: '1:1' to configure the Puma process to use only one thread
Here's how I've set up Rails system tests to run inside Docker containers:
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400], options: { url: "http://selenium:4444/wd/hub" }

  def setup
    Capybara.server_host = '0.0.0.0' # bind to all interfaces
    Capybara.server = :puma, { Silent: true, Threads: '1:1' }
    host! "http://#{IPSocket.getaddress(Socket.gethostname)}"
    super
  end
end
I am currently attempting to connect two Vagrant environments. One is a web application with an associated Postgres database. The other is an API application which makes calls to the Postgres database on the first Vagrant machine. Can anyone provide advice as to how this can be achieved? I believe I will need to change my database.yml or environment.rb file, but I'm not quite sure how. My Vagrantfiles and database.yml are currently as follows:
Front-End Machine Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "hashicorp/precise64"
config.vm.network "forwarded_port", guest: 3000, host: 3000
config.vm.synced_folder "../Base", "/Base"
config.vm.synced_folder "../api", "/API"
end
Front-End Machine database.yml:
default: &default
adapter: postgresql
database: chsh
development: &development
<<: *default
host: localhost
username: username
password: password
database: database_name
pool: 10
API Machine:
Vagrant.configure("2") do |config|
config.vm.box = "hashicorp/precise64"
config.vm.network "forwarded_port", guest: 3002, host: 3002
config.vm.synced_folder "../Base", "/Base"
config.vm.provider "virtualbox" do |vb|
vb.gui = true
end
end
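What I imagine the change might look like (the addresses below are made up): put both VMs on the same private network so they can reach each other directly, for example:

# Front-end machine: fixed address on a host-only network
config.vm.network "private_network", ip: "192.168.50.10"

# API machine: a second address on the same network
config.vm.network "private_network", ip: "192.168.50.11"

Then the API app's database.yml would presumably point at host: 192.168.50.10 instead of localhost, and Postgres on the front-end VM would need to listen on that interface and allow the API VM's address in pg_hba.conf.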
As I read the code, I didn't see any way to configure multiple machines. You can bypass this by reconfiguring before use:
module Vagrant
  # Define these as module methods (def self....) so that calls like
  # Vagrant.set(:front_end) actually resolve.
  def self.set(name)
    send(name) if respond_to?(name)
  end

  def self.front_end
    Vagrant.configure("2") do |config|
      config.vm.box = "hashicorp/precise64"
      config.vm.network "forwarded_port", guest: 3000, host: 3000
      config.vm.synced_folder "../Base", "/Base"
      config.vm.synced_folder "../api", "/API"
    end
  end

  def self.api
    Vagrant.configure("2") do |config|
      config.vm.box = "hashicorp/precise64"
      config.vm.network "forwarded_port", guest: 3002, host: 3002
      config.vm.synced_folder "../Base", "/Base"

      config.vm.provider "virtualbox" do |vb|
        vb.gui = true
      end
    end
  end
end
You will then be able to do something like this:
Vagrant.set(:front_end)
Vagrant.set(:api)
I'm trying to build a simple Redis cluster locally on OS X, using Docker inside VirtualBox VMs. Specifically, I want to create a master Redis node (192.168.0.100) with two slaves (.101 and .102). This involves configuring each machine and issuing a command to each slave to tell it who the master is in the key-value-store relationship.
This worked great until I decided to make my configuration more flexible by doing some programmatic stuff inside my Vagrantfile. The result is that I am unable to even ping any of the machines besides .102, let alone use redis-cli commands (for example, redis-cli -h 192.168.0.100 INFO to see that the master has its slaves set up correctly).
My goals/constraints:
Run Docker inside a VirtualBox VM: I saw several ways to do this when reading the docs. I chose the way I do it because I don't want to refer to a ton of external files.
Run a sequence of commands on each VM after configuration, to set slaves and whatnot.
I'd prefer not to just use boot2docker, as I have a bunch of other stuff I'm going to be adding (twemproxy, Sentinel) and I want to simulate a production environment at least a little bit.
What am I doing wrong in my new file?
EDIT: I've investigated a bit more. One thing I tried was destroying all images, then running the old Vagrantfile to build the machines, and then reloading them with the new Vagrantfile. In this scenario the machines worked, though each time I'd see authentication-failure messages like redis1: Warning: Authentication failure. Retrying... after the machine had come up. I was able to ping the machine without problems, though.
New Vagrantfile (uses config file below):
require './config.rb'
include RedisClusterConfig

Vagrant.configure(2) do |config|
  config.vm.provider "virtualbox"

  ## Redis Nodes (Can define multiple triplets) ##
  for node in RedisClusterConfig.redisNodes
    config.vm.define node.name do |a|
      a.vm.hostname = node.name
      a.vm.box = "ubuntu/trusty64"

      # Skip checking for an updated Vagrant box
      a.vm.box_check_update = false

      # Networking
      a.vm.network :private_network, ip: node.address
      a.ssh.forward_agent = true
      a.ssh.insert_key = false

      # Set up docker with 'redis' image
      a.vm.provision "docker" do |d|
        d.images = ["redis"]
        d.run "redis", name: "redis", args: "-p " + node.port + ":" + node.port
      end

      # Additional configuration
      a.trigger.after [:up, :resume, :reload, :provision] do
        # Master/slave
        unless node.slaveOfAddress.nil?
          system("redis-cli -h " + node.address + " SLAVEOF " + node.slaveOfAddress + " " + node.slaveOfPort)
          puts "Finished setting up slave node: " + node.name
        else
          puts "Finished setting up master node: " + node.name
        end
      end
    end
  end
end
config.rb:
module RedisClusterConfig
  attr_reader :redisNodes

  RedisNode = Struct.new("RedisNode", :name, :address, :port, :slaveOfAddress, :slaveOfPort)

  p = "6379" # Default redis port
  baseAddr = "192.168.0."

  # Define a triplet
  redis0 = RedisNode.new("redis0", baseAddr + "100", p, nil, nil)
  redis1 = RedisNode.new("redis1", baseAddr + "101", p, baseAddr + "100", p)
  redis2 = RedisNode.new("redis2", baseAddr + "102", p, baseAddr + "100", p)

  #redisNodes = [redis0, redis1, redis2]
end
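One thing I'm now suspicious of while re-reading this: attr_reader defines an instance method, local variables don't escape the module body, and the redisNodes assignment is commented out anyway, so RedisClusterConfig.redisNodes may not resolve at all. If that's the problem, a module-level accessor would presumably look something like:

module RedisClusterConfig
  RedisNode = Struct.new("RedisNode", :name, :address, :port, :slaveOfAddress, :slaveOfPort)

  # Expose the node list as a module method so the Vagrantfile can call
  # RedisClusterConfig.redisNodes directly.
  def self.redisNodes
    p = "6379" # Default redis port
    baseAddr = "192.168.0."
    [
      RedisNode.new("redis0", baseAddr + "100", p, nil, nil),
      RedisNode.new("redis1", baseAddr + "101", p, baseAddr + "100", p),
      RedisNode.new("redis2", baseAddr + "102", p, baseAddr + "100", p)
    ]
  end
end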
Old Vagrantfile:
# Make sure you have triggers plugin:
# vagrant plugin install vagrant-triggers
#
# TODO:
# - We really would like to run some shell commands after everything is done, not after each machine is provisioned

Vagrant.configure(2) do |config|
  config.vm.provider "virtualbox"

  ## Redis Nodes ##

  ### Master Node ###
  config.vm.define "redis0" do |a|
    a.vm.hostname = "redis0"
    a.vm.box = "ubuntu/trusty64"

    # Skip checking for an updated Vagrant box
    a.vm.box_check_update = false

    # Networking
    a.vm.network :private_network, ip: "192.168.0.100"
    a.ssh.forward_agent = true
    a.ssh.insert_key = false

    a.vm.provision "docker" do |d|
      d.images = ["redis"]
      d.run "redis",
            name: "redis",
            args: "-p 6379:6379"
    end
  end

  ### Slave Nodes ###
  config.vm.define "redis1" do |a|
    a.vm.hostname = "redis1"
    a.vm.box = "ubuntu/trusty64"

    # Skip checking for an updated Vagrant box
    a.vm.box_check_update = false

    # Networking
    a.vm.network :private_network, ip: "192.168.0.101"
    a.ssh.forward_agent = true
    a.ssh.insert_key = false

    a.vm.provision "docker" do |d|
      d.images = ["redis"]
      d.run "redis",
            name: "redis",
            args: "-p 6379:6379"
    end
  end

  config.vm.define "redis2" do |a|
    a.vm.hostname = "redis2"
    a.vm.box = "ubuntu/trusty64"

    # Skip checking for an updated Vagrant box
    a.vm.box_check_update = false

    # Networking
    a.vm.network :private_network, ip: "192.168.0.102"
    a.ssh.forward_agent = true
    a.ssh.insert_key = false

    a.vm.provision "docker" do |d|
      d.images = ["redis"]
      d.run "redis",
            name: "redis",
            args: "-p 6379:6379"
    end
  end

  # After everything else has finished provisioning...
  config.trigger.after [:up, :resume, :reload] do
    # Configuring redis master/slave triplet
    system('echo Configuring redis master/slave triplet...')
    system('redis-cli -h 192.168.0.101 SLAVEOF 192.168.0.100 6379')
    system('redis-cli -h 192.168.0.102 SLAVEOF 192.168.0.100 6379')
  end
end
Here's my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu-14.04-x64"

  # Sync'd folders
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.synced_folder "~/work", "/home/vagrant/work", create: true
  config.vm.synced_folder "~/apt-archives", "/var/cache/apt/archives/", create: true

  # Ubuntu VM
  config.vm.define "ubuntu" do |ubuntu|
    ubuntu.vm.provision "shell", path: "provision.sh", privileged: false
    ubuntu.vm.network "forwarded_port", guest: 3000, host: 8080 # http
    ubuntu.vm.network "private_network", ip: "10.20.30.100"
    ubuntu.vm.hostname = "ubuntu"

    # VirtualBox Specific Stuff
    # https://www.virtualbox.org/manual/ch08.html
    config.vm.provider "virtualbox" do |vb|
      # Set more RAM
      vb.customize ["modifyvm", :id, "--memory", "2048"]
      # More CPU Cores
      vb.customize ["modifyvm", :id, "--cpus", "2"]
    end # End config.vm.provider virtualbox
  end # End config.vm.define ubuntu
end
For example, when I run a Rails app on port 3000, from the guest machine I would access http://localhost:3000.
But I'm trying to access the app via the host's browser.
None of the below worked:
http://10.20.30.100:8080
https://10.20.30.100:8080
http://10.20.30.100:3000
https://10.20.30.100:3000
The browser on the host shows: ERR_CONNECTION_REFUSED
For security reasons, Rails 4.2 limits remote access while in development mode. This is done by binding the server to 'localhost' rather than '0.0.0.0'.
To access Rails running on a VM (such as one created by Vagrant), you need to change the default Rails IP binding back to '0.0.0.0'.
See the answers on the following StackOverflow question; there are a number of different approaches suggested.
The idea is to get Rails running either by using the following command:
rails s -b 0.0.0.0
Or by hardcoding the binding into the Rails app (which I found less desirable):
# add this to config/boot.rb
require 'rails/commands/server'

module Rails
  class Server
    def default_options
      super.merge(Host: '0.0.0.0')
    end
  end
end
Personally, I would have probably gone with the suggestion to use foreman and a Procfile:
# Procfile in Rails application root
web: bundle exec rails s -b 0.0.0.0
This would, I believe, make it easier to keep development and deployment in sync.
vagrant#precise32:/vagrant$ rails s
=> Booting WEBrick
=> Rails 4.1.0 application starting in development on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
=> Notice: server is listening on all interfaces (0.0.0.0). Consider using 127.0.0.1 (--binding option)
=> Ctrl-C to shutdown server
[2014-08-24 23:33:39] INFO WEBrick 1.3.1
[2014-08-24 23:33:39] INFO ruby 2.1.2 (2014-05-08) [i686-linux]
[2014-08-24 23:33:39] INFO WEBrick::HTTPServer#start: pid=18812 port=3000
In Chrome I'm getting the "This webpage is not available" error.
I followed this tutorial to set up Vagrant with Rails: http://tutorials.jumpstartlab.com/topics/vagrant_setup.html
What could be wrong here?
Edit:
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
# All Vagrant configuration is done here. The most common configuration
# options are documented and commented below. For a complete reference,
# please see the online documentation at vagrantup.com.
# Every Vagrant virtual environment requires a box to build off of.
config.vm.box = "hashicorp/precise32"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
config.vm.network "forwarded_port", guest: 80, host: 3000
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"
# If true, then any SSH connections made will enable agent forwarding.
# Default value: false
# config.ssh.forward_agent = true
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider "virtualbox" do |vb|
# # Don't boot with headless mode
# vb.gui = true
#
# # Use VBoxManage to customize the VM. For example to change memory:
# vb.customize ["modifyvm", :id, "--memory", "1024"]
# end
#
# View the documentation for the provider you're using for more
# information on available options.
# Enable provisioning with CFEngine. CFEngine Community packages are
# automatically installed. For example, configure the host as a
# policy server and optionally a policy file to run:
#
# config.vm.provision "cfengine" do |cf|
# cf.am_policy_hub = true
# # cf.run_file = "motd.cf"
# end
#
# You can also configure and bootstrap a client to an existing
# policy server:
#
# config.vm.provision "cfengine" do |cf|
# cf.policy_server_address = "10.0.2.15"
# end
# Enable provisioning with Puppet stand alone. Puppet manifests
# are contained in a directory path relative to this Vagrantfile.
# You will need to create the manifests directory and a manifest in
# the file default.pp in the manifests_path directory.
#
# config.vm.provision "puppet" do |puppet|
# puppet.manifests_path = "manifests"
# puppet.manifest_file = "site.pp"
# end
# Enable provisioning with chef solo, specifying a cookbooks path, roles
# path, and data_bags path (all relative to this Vagrantfile), and adding
# some recipes and/or roles.
#
# config.vm.provision "chef_solo" do |chef|
# chef.cookbooks_path = "../my-recipes/cookbooks"
# chef.roles_path = "../my-recipes/roles"
# chef.data_bags_path = "../my-recipes/data_bags"
# chef.add_recipe "mysql"
# chef.add_role "web"
#
# # You may also specify custom JSON attributes:
# chef.json = { mysql_password: "foo" }
# end
# Enable provisioning with chef server, specifying the chef server URL,
# and the path to the validation key (relative to this Vagrantfile).
#
# The Opscode Platform uses HTTPS. Substitute your organization for
# ORGNAME in the URL and validation key.
#
# If you have your own Chef Server, use the appropriate URL, which may be
# HTTP instead of HTTPS depending on your configuration. Also change the
# validation key to validation.pem.
#
# config.vm.provision "chef_client" do |chef|
# chef.chef_server_url = "https://api.opscode.com/organizations/ORGNAME"
# chef.validation_key_path = "ORGNAME-validator.pem"
# end
#
# If you're using the Opscode platform, your validator client is
# ORGNAME-validator, replacing ORGNAME with your organization name.
#
# If you have your own Chef Server, the default validation client name is
# chef-validator, unless you changed the configuration.
#
# chef.validation_client_name = "ORGNAME-validator"
end
Post your Vagrantfile; you're probably missing the port-forwarding configuration for port 3000.
Update: not missing, just wrong. Change both port options to 3000 and you should be good.
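That is, assuming the Rails server runs on its default port 3000 inside the guest, the forwarded-port line would become:

config.vm.network "forwarded_port", guest: 3000, host: 3000

after which http://localhost:3000 on the host should reach the app.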