I'm a bit stuck implementing ipsets for iptables with Chef using data bags. You may say this solution is not elegant or ideal, but believe me, I have my reasons. What I'm trying to achieve: I need to create the ip set "allowed_subnet" for later use with iptables, to whitelist some IP addresses. The "allowed" IP addresses are in a data bag. Unfortunately, I could not find an ipset resource in Chef, so I have to use execute. Please correct me if I'm wrong.
Right, I have data bag with the IP list:
{
  "id": "ipset_for_iptables",
  "ip_list": [
    "1.1.1.1",
    "1.1.1.2",
    "1.1.1.3",
    "1.1.1.4"
  ]
}
The data bag name is equal to the "id".
And I have my default recipe file default.rb where I've added the following code:
package 'ipset'

execute 'create timeout ipset' do
  command 'ipset create allow_selected hash:ip timeout 120'
  not_if 'ipset -L allow_selected'
end

execute 'create ipset' do
  command 'ipset create allowed_subnet hash:ip hashsize 8192'
  not_if 'ipset -L allowed_subnet'
end
# data_bag_item fetches a single item; data_bag alone only lists item names
servers = data_bag_item('ipset_for_iptables', 'ipset_for_iptables')
template "/opt/data/data_hosts.txt" do
source 'ipset.erb'
owner 'ipset'
group 'ipset'
action :create
variables :properties => servers['ip_list']
end
And now, my question: how do I add the IP addresses from the data bag to the ip set "allowed_subnet", using "execute" and the "ipset" Linux command?
Here is the template "ipset.erb" content:
<% @properties.each do |host| %>
<%= host['ipaddress'] %>
<% end %>
BTW, I'm not sure this template is correct; it's legacy from a previous admin.
I would really appreciate it if somebody could help me and also point me to documentation that will help me in the future, as I have a lot of inherited stuff like this in my zoo. I have tried to find out how to do this by reading the official Chef documentation, but I guess it is something beyond Chef itself and more of a Ruby thing.
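A minimal sketch of one possible approach, reusing the servers variable from the recipe above (illustrative and untested; ipset test exits with success when the address is already in the set, which keeps each resource idempotent):

servers['ip_list'].each do |ip|
  execute "add #{ip} to allowed_subnet" do
    command "ipset add allowed_subnet #{ip}"
    not_if "ipset test allowed_subnet #{ip}"
  end
end

Note that because ip_list holds plain strings, the legacy template would need <%= host %> rather than <%= host['ipaddress'] %> to render anything.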
For my work, we are trying to spin up a docker swarm cluster with Puppet. We use puppetlabs-docker for this, which provides docker::swarm. This allows you to instantiate a docker swarm manager on your master node. This works so far.
On the docker workers, you can join the docker swarm manager with exported resources:
node 'manager' {
  @@docker::swarm { 'cluster_worker':
    join           => true,
    advertise_addr => '192.168.1.2',
    listen_addr    => '192.168.1.2',
    manager_ip     => '192.168.1.1',
    token          => 'your_join_token',
    tag            => 'docker-join',
  }
}
However, the your_join_token needs to be retrieved from the docker swarm manager with docker swarm join-token worker -q. This is possible with Exec.
My question is: is there a way (without breaking Puppet philosophy on idempotent and convergence) to get the output from the join-token Exec and pass this along to the exported resource, so that my workers can join master?
No, because the properties of resource declarations, exported or otherwise, are determined when the target node's catalog is built (on the master), whereas the command of an Exec resource is run only later, when the fully-built catalog is applied to the target node.
I'm uncertain about the detailed requirements for token generation, but possibly you could use Puppet's generate() function to obtain one at need, during catalog building on the master.
Update
Another alternative would be an external (or custom) fact. This is the conventional mechanism for gathering information from a node to be used during catalog building for that node, and as such, it might be more suited to your particular needs. There are some potential issues with this, but I'm unsure how many actually apply:
The fact has to know for which nodes to generate join tokens. This might be easier / more robust or trickier / more brittle depending on factors including
whether join tokens are node-specific (harder if they are)
whether it is important to avoid generating multiple join tokens for the same node (over multiple Puppet runs; harder if this is important)
notwithstanding the preceding, whether there is ever a need to generate a new join token for a node for which one was already generated (harder if this is a requirement)
If implemented as a dynamically-generated external fact -- which seems a reasonable option -- then when a new node is added to the list, the fact implementation will be updated on the manager's next puppet run, but the data will not be available until the following one. This is not necessarily a big deal, however, as it is a one-time delay with respect to each new node, and you can always manually perform a catalog run on the manager node to move things along more quickly.
It has more moving parts, with more complex relationships among them, hence there is a larger cross-section for bugs and unexpected behavior.
Thanks to @John Bollinger I seem to have fixed my issue. In the end, it was a bit more work than I envisioned, but this is the idea:
My puppet setup now uses PuppetDB for storing facts and sharing exported resources.
I have added an additional custom fact to the code base of Docker (in ./lib/facter/docker.rb).
The bare minimum site.pp file now contains:
node 'manager' {
  docker::swarm { 'cluster_manager':
    init           => true,
    advertise_addr => "${::ipaddress}",
    listen_addr    => "${::ipaddress}",
    require        => Class['docker'],
  }

  @@docker::swarm { 'cluster_worker':
    join       => true,
    manager_ip => "${::ipaddress}",
    token      => "${worker_join_token}",
    tag        => "cluster_join_command",
    require    => Class['docker'],
  }
}

node 'worker' {
  Docker::Swarm<<| tag == 'cluster_join_command' |>> {
    advertise_addr => "${::ipaddress}",
    listen_addr    => "${::ipaddress}",
  }
}
Do keep in mind that for this to work, puppet agent -t has to be run twice on the manager node, and once (after this) on the worker node. The first run on the manager will start the cluster_manager, while the second one will fetch the worker_join_token and upload it to PuppetDB. After this fact is set, the manifest for the worker can be properly compiled and run.
In the case of a different module, you have to add a custom fact yourself. When I was researching how to do this, I added the custom fact to Ruby's LOAD_PATH, but was unable to find it in my PuppetDB. After some browsing I found that facts from a module are uploaded to PuppetDB, which is why I tweaked the upstream Docker module.
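For illustration, a minimal sketch of what such a fact could look like (the file name, fact name, and guard are my assumptions; the upstream module's actual implementation may differ):

# lib/facter/worker_join_token.rb -- hypothetical custom fact exposing the
# swarm worker join token on a manager node.
Facter.add(:worker_join_token) do
  setcode do
    if Facter::Core::Execution.which('docker')
      token = Facter::Core::Execution.exec('docker swarm join-token worker -q')
      token unless token.nil? || token.empty?
    end
  end
end

On a node that is not a swarm manager the command fails and the fact simply resolves to nothing, so the token is only reported where it can actually be generated.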
I have a Rails web application. I want to create a class that takes an email address, say "matt@trucksandstuff.com", parses out the domain, and then checks whether the domain is found in the Spamhaus DBL. I am having no luck with the dig or host commands as described on their website, and the Charon gem doesn't seem to work with their sample URL either. Any ideas?
EDIT: Here is what is on the website:
In response to "How can I test the DBL?" they said:
First, the DBL follows RFC5782 for determining whether a URI zone is operational with an entry for TEST. Second, the DBL has a specific domain for testing DBL applications: dbltest.com. To test functionality of the DBL use the host or dig command to do a manual query. (If you need to look up a domain in the DBL via the web, use the domain lookup form at our Blocklist Removal Center. Do not query our website with automated tools.)
I have tried using the Charon gem, which I think should be as simple as running
Charon.query('dbltest.com')
with variations that remove the parentheses, add a space, etc.
Also tried
resolver = Resolv::DNS.new
name = 'dbltest.com'
resolver.getresources("#{name}.zen.spamhaus.org", Resolv::DNS::Resource::IN::A)
in the Rails console.
The Zen database is only for IP addresses; the DBL is for hostnames. Therefore Charon (which queries Zen) only works with IP addresses. To test hostnames, query them with Resolv against dbl.spamhaus.org:
require 'resolv'

def is_spammer?(host)
  !Resolv::DNS.new.getresources("#{host}.dbl.spamhaus.org",
                                Resolv::DNS::Resource::IN::A).empty?
end
is_spammer?('dbltest.com')
=> true
is_spammer?('google.com')
=> false
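To tie this back to the original question (taking an email address and parsing out the domain first), a small hypothetical helper along the same lines:

require 'resolv'

# Hypothetical helper: extract the domain from an email address and
# check it against the Spamhaus DBL.
def spammer_email?(email)
  domain = email.to_s.split('@').last
  return false if domain.nil? || domain.empty?
  !Resolv::DNS.new.getresources("#{domain}.dbl.spamhaus.org",
                                Resolv::DNS::Resource::IN::A).empty?
end

spammer_email?('someone@dbltest.com') # => true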
I am using Ruby on Rails 3 and a MySQL database. I would like to retrieve a regex from the database and then use that value to validate email addresses.
I aim not to put the regex value inline in my RoR application code, but outside of it, so that the value can be reused for other purposes and from other places.
In order to populate the database, I put the following in 'RAILS_ROOT/db/seed.rb':
Parameter.find_or_create(
  :param_name  => 'email_regex',
  :param_value => "[a-z0-9!#\$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#\$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"
)
Notice: in the 'seed.rb' file I edited the original regex from www.regular-expressions.info a little, adding two \ characters just before the $s. Here is the difference:
#original from www.regular-expressions.info
[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?
#edited by me
[a-z0-9!#\$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#\$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?
After running rake db:seed in the Terminal, in the MySQL database I have this value (without the \ near the $):
[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?
Then in my RoR application I use the regex this way:
def validate(string)
  email_regex = Regexp.new(Parameter.find_by_param_name('email_regex').param_value)
  if email_regex.match(string)
    return true
  else
    return false
  end
end
The problem with the above regex is that it also successfully validates email addresses with a double '@' or without the final part, like these:
name@surname@gmail.com # Note the double '@'
test@gmail
Of course I would like to reject those email addresses. So, how can I adjust that and get what I want?
I also tried to seed these regexes:
#case 1
\A[a-z0-9!#\$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#\$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\Z
#case 2
\\A[a-z0-9!#\$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#\$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\Z
#case 3
^[a-z0-9!#\$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#\$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$
which in the MySQL database become, respectively:
#case 1
A[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?Z
#case 2
\A[a-z0-9!#\$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#\$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\Z
#case 3
^[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$
but they don't work as expected either.
UPDATE
Debugging, I get
--- !ruby/regexp /[a-z0-9!#$%&'*+\/=?^_`{|}~-]+(?:.[a-z0-9!#$%&'*+\/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?/
which means that Ruby added a \ just before every / character. Could that be my problem? In the 'seed.rb' file I tried to escape all the / characters by adding \, but the debug output is always the same.
There are so many things wrong on so many levels here…
Storing application configuration in your database isn't recommended: slower performance, potential catch-22s (like: how do you configure your database, from your database?), and so on. Try something like SettingsLogic if you don't want to build your own singleton configuration or use an initializer.
Rails has built-in validation functionality as a mixin that's automatically part of any model inheriting from ActiveRecord::Base. You should use it rather than defining your own validation routines, especially for basic cases like this.
You can actually have an email address with multiple @ signs, provided the first is escaped with a backslash or the local portion of the address is quoted.
Why are you escaping $ characters in a character class where they have no special meaning?
Regular expressions are okay for a very basic validation of an email address to make sure you didn't get complete garbage data to pass off to your mail server, but they aren't the best way to verify an email address.
I suggest you have a good look at the validations guide at RubyOnRails.org.
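As an illustration of the built-in approach (the model name and the deliberately loose pattern are mine, chosen for the sketch, not taken from the question):

# Illustrative sketch: a simple format check using the validations built
# into ActiveRecord instead of a hand-rolled routine. The pattern only
# rejects obvious garbage such as a missing @ or a missing domain part.
class User < ActiveRecord::Base
  validates_format_of :email,
                      :with    => /\A[^@\s]+@[^@\s]+\.[a-z]{2,}\z/i,
                      :message => 'does not look like a valid email address'
end

The \A and \z anchors also catch the two bad examples from the question: neither a second @ nor a bare "test@gmail" can match the whole string.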
You shouldn't reinvent this wheel. See http://lindsaar.net/2010/1/31/validates_rails_3_awesome_is_true for a standard way to validate email addresses in Rails 3.
If you do choose to reinvent the wheel, don't use a regular expression. The gory details of why this is a bad idea are explained in http://oreilly.com/catalog/9780596528126, along with a very, very complicated regular expression that almost does it.
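As for the UPDATE in the question: a likely explanation for the vanishing backslashes (assuming the seed value is a double-quoted Ruby string, as shown) is that Ruby drops the backslash from escape sequences it does not recognize in double-quoted strings, so the pattern is mangled before it ever reaches MySQL. Single quotes preserve them:

# Double-quoted strings drop the backslash from unrecognized escapes,
# so \A, \. and the escaped \$ never reach the database intact:
"\A[a-z]+\.\z"  # => "A[a-z]+.z"
# Single-quoted strings only treat \\ and \' specially, so the
# backslashes survive:
'\A[a-z]+\.\z'  # => "\\A[a-z]+\\.\\z"  (i.e. \A[a-z]+\.\z)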
Stuff I've already figured out
I'm learning how to create a multi-tenant application in Rails that serves data from different schemas based on what domain or subdomain is used to view the application.
I already have a few concerns answered:
How can you get subdomain-fu to work with domains as well? Here's someone who asked the same question, which leads you to this blog.
What database, and how will it be structured? Here's an excellent talk by Guy Naor, and good question about PostgreSQL and schemas.
I already know my schemas will all have the same structure. They will differ in the data they hold. So, how can you run migrations for all schemas? Here's an answer.
Those three points cover a lot of the general stuff I need to know. However, in the next steps I seem to have many ways of implementing things. I'm hoping that there's a better, easier way.
Finally, to my question
When a new user signs up, I can easily create the schema. However, what would be the best and easiest way to load the structure that the rest of the schemas already have? Here are some questions/scenarios that might give you a better idea.
Should I pass it on to a shell script that dumps the public schema into a temporary one and imports it back into my main database (pretty much what Guy Naor says in his video)? Here's a quick summary/script I got from the helpful #postgres on freenode. While this will probably work, I'm going to have to do a lot of stuff outside of Rails, which makes me a bit uncomfortable, and which also brings me to the next question.
Is there a way to do this straight from Ruby on Rails? Like create a PostgreSQL schema, then just load the Rails database schema (schema.rb - I know, it's confusing) into that PostgreSQL schema.
Is there a gem/plugin that has these things already? Methods like "create_pg_schema_and_load_rails_schema(the_new_schema_name)". If there's none, I'll probably work at making one, but I'm doubtful about how well tested it'll be with all the moving parts (especially if I end up using a shell script to create and manage new PostgreSQL schemas).
Thanks, and I hope that wasn't too long!
Update Dec 5, 2011
Thanks to Brad Robertson and his team, there's the Apartment gem. It's very useful and does a lot of the heavy lifting.
However, if you'll be tinkering with schemas, I strongly suggest knowing how it actually works. Familiarize yourself with Jerod Santo's walkthrough, so you'll know what the Apartment gem is more or less doing.
Update Aug 20, 2011 11:23 GMT+8
Someone created a blog post that walks through this whole process pretty well.
Update May 11, 2010 11:26 GMT+8
Since last night I've been able to get a method to work that creates a new schema and loads schema.rb into it. I'm not sure if what I'm doing is correct (it seems to work fine, so far), but it's at least a step closer. If there's a better way, please let me know.
module SchemaUtils
  def self.add_schema_to_path(schema)
    conn = ActiveRecord::Base.connection
    conn.execute "SET search_path TO #{schema}, #{conn.schema_search_path}"
  end

  def self.reset_search_path
    conn = ActiveRecord::Base.connection
    conn.execute "SET search_path TO #{conn.schema_search_path}"
  end

  def self.create_and_migrate_schema(schema_name)
    conn = ActiveRecord::Base.connection
    schemas = conn.select_values("select * from pg_namespace where nspname != 'information_schema' AND nspname NOT LIKE 'pg%'")

    if schemas.include?(schema_name)
      tables = conn.tables
      Rails.logger.info "#{schema_name} exists already with these tables #{tables.inspect}"
    else
      Rails.logger.info "About to create #{schema_name}"
      conn.execute "create schema #{schema_name}"
    end

    # Save the old search path so we can set it back at the end of this method
    old_search_path = conn.schema_search_path

    # Tried to set the search path like in the methods above (from Guy Naor):
    # [METHOD 1]: conn.execute "SET search_path TO #{schema_name}"
    # But the connection itself seems to remember the old search path.
    # When Rails executes a schema, it first asks if the table it will load already exists and if :force => true.
    # If both are true, it will drop the table and then load it.
    # The problem is that with the METHOD 1 way of setting things, ActiveRecord::Base.connection.schema_search_path still returns $user,public.
    # That means that when Rails tries to load the schema and asks if the tables exist, it searches for these tables in the public schema.
    # See line 655 in Rails 2.3.5 activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb
    # That's why I kept running into the error of the table existing when it didn't (in the newly created schema).
    # Used this way [METHOD 2], it works. ActiveRecord::Base.connection.schema_search_path returns the string we pass it.
    conn.schema_search_path = schema_name

    # Directly from databases.rake.
    # In Rails 2.3.5, databases.rake can be found in railties/lib/tasks/databases.rake
    file = "#{Rails.root}/db/schema.rb"
    if File.exists?(file)
      Rails.logger.info "About to load the schema #{file}"
      load(file)
    else
      abort %{#{file} doesn't exist yet. It's possible that you just ran a migration!}
    end

    Rails.logger.info "About to set search path back to #{old_search_path}."
    conn.schema_search_path = old_search_path
  end
end
Change the line that sets the search path (conn.schema_search_path = schema_name) to:
conn.schema_search_path = "#{schema_name}, #{old_search_path}"
I presume that Postgres is trying to look up existing table names when loading schema.rb, and since you've set the search_path to contain only the new schema, it fails. This is, of course, presuming you still have the public schema in your database.
Hope that helps.
Is there a gem/plugin that has these things already?
pg_power provides this functionality to create/drop PostgreSQL schemas in migrations, like this:
def change
  # Create schema
  create_schema 'demography'

  # Create new table in a specific schema
  create_table "countries", :schema => "demography" do |t|
    # columns go here
  end

  # Drop schema
  drop_schema 'politics'
end
It also takes care of correctly dumping schemas into the schema.rb file.
I would like to let my users create Ruby scripts that do computation on some data residing on the web server and then output the results. The scripts are executed on the server. Is there any way to do this securely?
More specifically, I would like to:
restrict the resources the script can use (memory and cpu), and limit its running time
restrict which core classes the script can use (e.g. String, Fixnum, Float, Math etc)
let the script access and return data
output any errors to the user
Are there any libraries or projects that do what I'm asking for? If not in Ruby, maybe some other language?
You can use a "blank slate" as a clean room, and a sandbox in which to set the safe level to 4.
A blank slate is an object you've stripped all the methods from:
class BlankSlate
  instance_methods.each do |name|
    class_eval do
      unless name =~ /^__|^instance_eval$|^binding$|^object_id$/
        undef_method name
      end
    end
  end
end
A clean room is an object in whose context you evaluate other code:
clean_room = BlankSlate.new
Read a command from an untrusted source, then untaint it. Unless untainted, Ruby will refuse to eval the string in a sandbox.
command = gets
command.untaint
Now execute the string in a sandbox, cranking the safe level up as high as it will go. The $SAFE level will go back to normal when the proc ends. We execute the command in the context of the clean room's binding, so that it can only see the methods and variables that the clean room can see (remember, though, that like any object, the clean room can see anything in global scope).
result = proc do
  $SAFE = 4
  clean_room.instance_eval do
    binding
  end.eval(command)
end.call
Print the result:
p result