Vagrant: Passing docker run arguments to multiple containers using the Vagrant Docker provider

I'm using Vagrant to deploy a multi-VM environment for research purposes, and it has been great so far. But now I need to pin each Docker container to a specific CPU core, and I don't know how to do that with Vagrant. I know I can use the "args" clause in the Vagrantfile to pass the "--cpuset" parameter to the docker run command, but I don't know how to use it in a loop, since I'm launching multiple containers and I need to pin each container to a different CPU core (e.g. node1 pinned to core #0, node2 to core #1, etc.).
My current Vagrantfile is as follows, without the CPU pinning:
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

# choose how many machines the cluster will contain
N_VMS = 32

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "tknerr/baseimage-ubuntu-14.04"
  config.vm.network "private_network", ip: "192.168.121.2"

  config.vm.provider "docker" do |v|
    v.has_ssh = true
  end

  hosts_file = []

  1.upto(N_VMS) do |i|
    config.vm.define vm_name = "node#{i}" do |config|
      config.vm.hostname = vm_name
    end
  end

  script = <<-SCRIPT
    apt-get -y update
    apt-get -y install libcr-dev mpich2 mpich2-doc arp-scan openssh-server nano make
  SCRIPT

  script.sub! 'N_VMS', N_VMS.to_s

  config.vm.provision "shell", inline: script
end

In the end, I was looking in the wrong place. The correct spot to add "--cpuset-cpus" was inside the config.vm.define block.
The code ended up like this:
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

# choose how many machines the cluster will contain
N_VMS = 32

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "tknerr/baseimage-ubuntu-14.04"
  config.vm.network "private_network", ip: "192.168.121.2"

  config.vm.provider "docker" do |v|
    v.has_ssh = true
  end

  2.upto(N_VMS + 1) do |i|
    config.vm.define vm_name = "node#{i}" do |config|
      # CPU PINNING CONFIG
      config.vm.provider "docker" do |docker|
        docker.create_args = ['--cpuset-cpus=' + ((i / 2) - 1).to_s]
      end
      config.vm.hostname = vm_name
    end
  end

  script = <<-SCRIPT
    apt-get -y update
    apt-get -y install libcr-dev mpich2 mpich2-doc arp-scan openssh-server nano make
  SCRIPT

  script.sub! 'N_VMS', N_VMS.to_s

  config.vm.provision "shell", inline: script
end
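Note that with this loop each physical core ends up shared by two containers (node2 and node3 both get --cpuset-cpus=0, node4 and node5 get --cpuset-cpus=1, and so on). If you want the strict one-core-per-node mapping described in the question (node1 on core #0, node2 on core #1, ...), a minimal variant of the same loop would be:

  1.upto(N_VMS) do |i|
    config.vm.define vm_name = "node#{i}" do |config|
      # pin node i to core i-1 (node1 -> core #0, node2 -> core #1, ...)
      config.vm.provider "docker" do |docker|
        docker.create_args = ["--cpuset-cpus=#{i - 1}"]
      end
      config.vm.hostname = vm_name
    end
  end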

Suppose you want to configure the Vagrant VMs in a loop. Since the Vagrantfile is just Ruby, you have the full language available inside it. Here is an example; you can add your "--cpuset" configuration inside the vm define block.
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'json'

# read vm and chef configurations from JSON files
nodes_config = (JSON.parse(File.read("nodes.json")))['nodes']

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  nodes_config.each do |node|
    node_name   = node[0] # name of node
    node_values = node[1] # content of node
    # puts node_name
    # puts node_values['box']

    config.vm.define node_name do |config|
      config.vm.box = node_values['box']
      config.vm.hostname = node_name
      config.vm.network :private_network, ip: node_values['ip']

      config.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--memory", node_values['memory']]
        vb.customize ["modifyvm", :id, "--name", node_name]
      end
    end
  end
end
The nodes.json file defines the VMs:
{
  "nodes": {
    "jenkins.example.com": {
      "info": "jenkins master server",
      "box": "../Vgrant-boxes/centos65_virtualbox_50G.box",
      "ip": "192.168.35.101",
      "ports": [],
      "memory": 512
    },
    "node01.example.com": {
      "info": "tomcat app host server",
      "box": "../Vgrant-boxes/centos65_virtualbox_50G.box",
      "ip": "192.168.35.121",
      "ports": [],
      "memory": 512
    },
    "node02.example.com": {
      "info": "jboss app host server",
      "box": "../Vgrant-boxes/centos65_virtualbox_50G.box",
      "ip": "192.168.35.122",
      "ports": [],
      "memory": 512
    },
    "node03.example.com": {
      "info": "oracle xe server",
      "box": "../Vgrant-boxes/centos65_virtualbox_50G.box",
      "ip": "192.168.35.123",
      "ports": [],
      "memory": 512
    },
    "node04.example.com": {
      "info": "artifactory server",
      "box": "../Vgrant-boxes/centos65_virtualbox_50G.box",
      "ip": "192.168.35.124",
      "ports": [],
      "memory": 512
    }
  }
}
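To apply the CPU pinning from the original question with this approach, one option (a minimal sketch, assuming you add a hypothetical "cpuset" field to each entry in nodes.json) is to pass the value to the Docker provider inside the define block:

    config.vm.define node_name do |config|
      config.vm.hostname = node_name
      # hypothetical per-node "cpuset" value read from nodes.json, e.g. "0" or "0,1"
      config.vm.provider "docker" do |docker|
        docker.create_args = ["--cpuset-cpus=#{node_values['cpuset']}"]
      end
    end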

Related

Elixir OTP 24 doesn’t apply custom Logger backend

I have an umbrella application which has logger configuration in the root config.exs file:
config :logger,
  compile_time_purge_level: :debug,
  backends: [
    {LoggerFileBackend, :backends_log}
  ]

config :logger, :backends_log,
  path: "path/to/awesome.log",
  level: :debug,
  metadata: :all
After updating OTP from version 22 to 24 (Elixir 1.12.3, Erlang 24.1), backends_log no longer starts and all logs go to STDOUT through the default :console backend, even though the configuration is unchanged:
Application.get_all_env(:kernel)
=> [
  logger: [
    {:handler, :default, :logger_std_h,
     %{
       config: %{type: :standard_io},
       formatter: {:logger_formatter,
         %{legacy_header: true, single_line: false}}
     }}
  ],
  logger_sasl_compatible: false,
  logger_level: :notice,
  shell_docs_ansi: :auto
]
Application.get_all_env(:logger)
=> [
  handle_sasl_reports: true,
  discard_threshold: 5000,
  compile_time_purge_matching: [],
  sync_threshold: 10000,
  utc_log: false,
  console: [],
  backends_log: [path: "path/to/awesome.log", level: :debug, metadata: :all],
  start_options: [],
  pdu_format_reviewer_error_count: [level: :error],
  compile_time_application: nil,
  backends: [
    {LoggerFileBackend, :backends_log}
  ],
  discard_threshold_periodic_check: 30000,
  translators: [
    {Plug.Cowboy.Translator, :translate},
    {Logger.Translator, :translate}
  ],
  compile_time_purge_level: :debug,
  truncate: 8096,
  log_counter: [level: :debug],
  handle_otp_reports: true,
  discard_threshold_for_error_logger: 500,
  translator_inspect_opts: []
]
:logger.get_primary_config()
=> %{
  filter_default: :log,
  filters: [process_disabled: {&Logger.Filter.process_disabled/2, []}],
  level: :debug,
  metadata: %{}
}
I can fix this problem with Runtime Configuration:
Logger.add_backend({LoggerFileBackend, :backends_log})

Logger.configure_backend(
  {LoggerFileBackend, :backends_log},
  path: "path/to/awesome.log",
  level: :debug,
  metadata: :all
)

Logger.remove_backend(Logger.Backends.Console)
but I think Application Configuration is the more proper way.
How should I fix this problem?
Your config works for me. My setup is Ubuntu 20.04, Elixir 1.12.2, Erlang 24.1 (ESL Erlang). I tried to recreate your problem; take a look at it, it might help.
https://github.com/z5ottu/elixir_12_logger_test

Vagrant SSH command responded with a non-zero exit status

I'm trying to install Docker on an Ubuntu 18.04 VM (via Vagrant) using the setup below. Is there any way I can make the Docker installation succeed on the Vagrant Ubuntu 18.04 VM using the Vagrantfile? Note: I need to know how to apply the suggested solution in the Vagrantfile.
Vagrantfile:
servers = [
  {
    :hostname => "manager",
    :ip => "192.168.2.1",
    :box => "ubuntu/bionic64",
    :ram => 2048,
    :cpu => 4
  },
  {
    :hostname => "worker-1",
    :ip => "192.168.2.2",
    :box => "ubuntu/bionic64",
    :ram => 2048,
    :cpu => 4
  },
  {
    :hostname => "worker-2",
    :ip => "192.168.2.3",
    :box => "ubuntu/bionic64",
    :ram => 2048,
    :cpu => 4
  }
]

Vagrant.configure(2) do |config|
  servers.each do |machine|
    config.vm.define machine[:hostname] do |node|
      node.vm.box = machine[:box]
      node.vm.hostname = machine[:hostname]
      node.vm.network "private_network", ip: machine[:ip]

      if machine[:hostname] == "manager"
        node.vm.provision "docker",
          images: ["ubuntu/bionic64"]
      else
        node.vm.provision "docker"
      end

      node.vm.provider "virtualbox" do |vb|
        vb.customize ["modifyvm", :id, "--memory", machine[:ram]]
      end
    end
  end
end
Dockerfile:
FROM ubuntu:18.04
RUN apt-get install -y python python-pip --no-install-recommends
RUN apt-get install vim -y
RUN apt update -y
ADD app /home/app/
WORKDIR /home/app
EXPOSE 8080
Exception/Error Output Message:
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
curl -sSL https://get.docker.com/ | sh
Stdout from the command:
Executing docker install script, commit: 02d7c3c
Stderr from the command:
Either your platform is not easily detectable or is not supported by this
installer script.
Please visit the following URL for more detailed installation instructions:
https://docs.docker.com/engine/installation/
I finally figured out how to spawn virtual servers with Ubuntu 18.04 using Vagrant. The linked guide has all the instructions: Spawn virtual servers on the fly
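If you prefer to keep everything inside the Vagrantfile, one workaround (a minimal sketch, not taken from the linked guide) is to install Docker from Ubuntu's own repositories with a shell provisioner instead of relying on the docker provisioner's get.docker.com script, which is what fails above:

$install_docker = <<-SHELL
  # install Docker from the Ubuntu 18.04 repositories instead of get.docker.com
  apt-get update
  apt-get install -y docker.io
  # let the vagrant user run docker without sudo
  usermod -aG docker vagrant
SHELL

Vagrant.configure(2) do |config|
  servers.each do |machine| # `servers` is the array defined at the top of the Vagrantfile above
    config.vm.define machine[:hostname] do |node|
      node.vm.box = machine[:box]
      node.vm.hostname = machine[:hostname]
      node.vm.network "private_network", ip: machine[:ip]
      # replaces the failing docker provisioner
      node.vm.provision "shell", inline: $install_docker
    end
  end
end

The distribution's docker.io package is typically older than the one the get.docker.com script installs, but it installs cleanly on ubuntu/bionic64 and avoids the platform-detection step that fails here.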

Unable to load rspec configuration

When trying to run the rspec spec/controllers/api_controller_spec.rb:406 --color command on a Linux machine, we already have a Redis server running on port 6379, but the rspec spec_helper configuration fails to load and we get the error below:
Code:
REDIS_PID        = "/var/run/redis.pid"
REDIS_CACHE_PATH = "tmp/cache/"

Dir.mkdir "#{Rails.root}/tmp"       unless Dir.exists? "#{Rails.root}/tmp"
Dir.mkdir "#{Rails.root}/tmp/pids"  unless Dir.exists? "#{Rails.root}/tmp/pids"
Dir.mkdir "#{Rails.root}/tmp/cache" unless Dir.exists? "#{Rails.root}/tmp/cache"

config.before(:suite) do
  redis_options = {
    "daemonize" => 'yes',
    "pidfile" => 7528,
    "port" => 6379,
    "timeout" => 300,
    "save 900" => 1,
    "save 300" => 1,
    "save 60" => 10000,
    "dbfilename" => "dump.rdb",
    "dir" => REDIS_CACHE_PATH,
    "loglevel" => "debug",
    "logfile" => "stdout",
    "databases" => 16
  }.map { |k, v| "#{k} #{v}" }.join("\n")

  `echo '#{redis_options}' | redis-server -`
end

config.after(:suite) do
  %x{
    cat #{REDIS_PID} | xargs kill -QUIT
    rm -f #{REDIS_CACHE_PATH}dump.rdb
  }
end
Error:
------------
------------
------------
Finished searching in 0.09762763977050781 seconds.
Request#<ActionController::TestRequest:0x00000004139f58>
cat: /var/run/redis.pid: No such file or directory
usage: kill [ -s signal | -p ] [ -a ] pid ...
kill -l [ signal ]
------------
------------
------------
Restart the Redis server (redis-server stop/start). The redis.pid file is then generated, and rspec is able to access the Redis configuration during background processing.

Elixir exrm release crashes on eredis start_link

I'm fairly new to Elixir and this is the first app that I'm attempting to release using exrm. My app interacts with a Redis database for consuming jobs from a queue (using exq), and also stores results of processed jobs in Redis using eredis.
My app works perfectly when I run it via iex -S mix, and it also runs great when compiled into an escript. However when I use exrm, the application compiles without any issue, but it crashes when I run it.
This is the crash output:
$ ./rel/my_app/bin/my_app console
{"Kernel pid terminated",application_controller,"{application_start_failure,my_app,{bad_return,{{'Elixir.MyApp',start,[normal,[]]},{'EXIT',{{badmatch,{error,{{'EXIT',{{badmatch,{error,{undef,[{eredis,start_link,[],[]},{'Elixir.MyApp.Cache',init,1,[{file,\"lib/my_app/cache.ex\"},{line,8}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,306}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,237}]}]}}},[{'Elixir.MyApp.Cache',start_link,1,[{file,\"lib/my_app/cache.ex\"},{line,21}]},{supervisor,do_start_child,2,[{file,\"supervisor.erl\"},{line,314}]},{supervisor,handle_start_child,2,[{file,\"supervisor.erl\"},{line,685}]},{supervisor,handle_call,3,[{file,\"supervisor.erl\"},{line,394}]},{gen_server,try_handle_call,4,[{file,\"gen_server.erl\"},{line,607}]},{gen_server,handle_msg,5,[{file,\"gen_server.erl\"},{line,639}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,237}]}]}},{child,undefined,'Elixir.MyApp.Cache',{'Elixir.MyApp.Cache',start_link,[[{host,\"127.0.0.1\"},{port,6379},{database,0},{password,[]},{reconnect_timeout,100},{namespace,<<>>},{queues,[<<\"elixir\">>]}]]},permanent,5000,worker,['Elixir.MyApp.Cache']}}}},[{'Elixir.MyApp.Supervisor',start_cache,1,[{file,\"lib/my_app/supervisor.ex\"},{line,17}]},{'Elixir.MyApp.Supervisor',start_link,0,[{file,\"lib/my_app/supervisor.ex\"},{line,9}]},{'Elixir.MyApp',start,2,[{file,\"lib/my_app.ex\"},{line,10}]},{application_master,start_it_old,4,[{file,\"application_master.erl\"},{line,272}]}]}}}}}"}
Here is the mix.exs for my application:
defmodule MyApp.Mixfile do
  use Mix.Project

  def project do
    [
      app: :my_app,
      version: "0.0.1",
      name: "MyApp",
      elixir: "~> 1.0",
      escript: escript_config,
      deps: deps
    ]
  end

  def application do
    [
      applications: app_list(Mix.env),
      mod: { MyApp, [] },
      env: [ queue: 'elixir' ]
    ]
  end

  def included_applications do
    [ :logger, :httpoison, :eredis, :exq, :dotenv, :exjsx, :ex_doc, :oauth2, :sweet_xml ]
  end

  defp app_list(:dev), do: [:dotenv | app_list]
  defp app_list(_), do: app_list
  defp app_list, do: [:logger, :httpoison]

  def escript_config do
    [ main_module: MyApp ]
  end

  defp deps do
    [
      { :dotenv, github: "avdi/dotenv_elixir" },
      { :eredis, github: "wooga/eredis", tag: "v1.0.5" },
      { :exjsx, "~> 3.1.0" },
      { :exq, "~> 0.1.0", app: false },
      { :exrm, "~> 0.16.0" },
      { :ex_doc, github: "elixir-lang/ex_doc" },
      { :httpoison, "~> 0.4" },
      { :oauth2, "~> 0.1.1" },
      { :sweet_xml, "~> 0.2.1" }
    ]
  end
end
The crash appears to be happening in the following init function, where I call :eredis.start_link:
defmodule MyApp.Cache do
  use GenServer
  require Logger

  def init(client_opts) do
    { :ok, client } = :eredis.start_link(
      client_opts[:host],
      client_opts[:port],
      client_opts[:database],
      client_opts[:password],
      client_opts[:reconnect_timeout])
  end
end
Could it be because eredis is an Erlang library as opposed to Elixir?
You need to add :eredis to your app_list function so that it is packaged with the release; the same goes for the rest of your dependencies.

How to get the IP Address in Vagrant?

I'm trying to configure a Hadoop cluster, but to do so I need the IP address of the namenode.
The cluster itself is created by Vagrant, but I don't have the IP address until Vagrant creates the instance in AWS.
So, I have the following Vagrantfile:
current_dir = File.dirname(__FILE__)

$master_script = <<SCRIPT
// will write a script to configure
SCRIPT

Vagrant.configure("2") do |config|
  config.omnibus.chef_version = :latest

  config.vm.provider :aws do |aws, override|
    config.vm.box = "dummy"
    aws.access_key_id = "MY_KEY"
    aws.secret_access_key = "SECRET_KEY"
    aws.keypair_name = "my_key"
    aws.ami = "ami-7747d01e"
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "#{current_dir}/my_key.pem"
  end

  config.vm.provider :virtualbox do |v|
    config.vm.box = "precise64"
    config.vm.box_url = "https://vagrantcloud.com/chef/ubuntu-13.04/version/1/provider/virtualbox.box"
    v.customize ["modifyvm", :id, "--memory", "1024"]
  end

  config.vm.define :namenode do |namenode|
    namenode.vm.box = "dummy"
    namenode.vm.provision :chef_solo do |chef|
      chef.cookbooks_path = "cookbooks"
      chef.roles_path = "roles"
      chef.add_role "cluster"
    end
    namenode.vm.provision :hostmanager
    namenode.vm.provision "shell", :inline => $master_script
  end

  config.vm.define :slave do |slave|
    slave.vm.box = "dummy"
    slave.vm.provision :chef_solo do |chef|
      chef.cookbooks_path = "cookbooks"
      chef.roles_path = "roles"
      chef.add_role "cluster"
    end
    slave.vm.provision :hostmanager
    slave.vm.provision "shell", :inline => $master_script
  end
end
I need to update the mapred-site.xml and core-site.xml files with the IP address of the namenode. How can I get the IP address of the namenode box so that I can update the Hadoop config files? Is there a better option in the cookbook that I can use to accomplish this?
Suppose I have 1 namenode and 5 slaves; the mapred-site.xml.erb template looks like this:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://<%= node[:ipaddress] %>:8021</value>
  </property>
</configuration>
However, I need the namenode and all of the slaves to have the IP address of the namenode only. How can I accomplish that in Chef?
Either way works for me, although I prefer the Chef solution.
You could:
1- Use the instance metadata service on the namenode instance to find out its own ip:
curl http://169.254.169.254/latest/meta-data/local-ipv4
see: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html
2- Tag the namenode (e.g. HADOOP_ROLE=NAMENODE) and use the AWS CLI on any instance to find the private IP of the namenode:
aws ec2 describe-instances \
    --region=us-east-1 \
    --filter "Name=tag:HADOOP_ROLE,Values=NAMENODE" \
    --query='Reservations[*].Instances[*].PrivateIpAddress' \
    --output=text
see: http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html
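To tie option 1 back into the Vagrantfile above, here is a minimal sketch; it assumes (hypothetically) that the Hadoop configs live at /etc/hadoop/conf/ on the instances and that the templates render a NAMENODE_IP_PLACEHOLDER token instead of a real address. The namenode's shell provisioner then queries the metadata service and substitutes its own IP:

$master_script = <<SCRIPT
# ask the EC2 metadata service for this instance's private IP
NAMENODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
# replace the placeholder written by the template with the real namenode IP
sed -i "s/NAMENODE_IP_PLACEHOLDER/${NAMENODE_IP}/g" /etc/hadoop/conf/mapred-site.xml /etc/hadoop/conf/core-site.xml
SCRIPT

If you stay with Chef, the ohai ec2 plugin exposes the same value as node['ec2']['local_ipv4'] on EC2 instances, so the namenode's own templates could use that instead of node[:ipaddress]; the slaves would still need to look the value up, for example with the tag-based AWS CLI query from option 2.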
