Ruby Rack App - Couchbase DNS/Hostname lookup error - ruby-on-rails

I'm using Couchbase as session storage in my Rack application (couchbase gem v1.3.9).
When I test the Rack app under heavier load (for example, 50 parallel threads in JMeter)
or just reload the app many times, I always get this error:
Rack app error: Couchbase::Error::UnknownHost: bootstrap error, DNS/Hostname lookup failed (error=0x15)
My questions:
Has anyone else seen this error when using Couchbase with Ruby, and how can I solve it?
What about the performance of Couchbase as a session store in a Ruby Rack application?
Additional information:
My config.ru
session_options = PlainRackApplication::Config.session_options
use ActionDispatch::Session::CouchbaseStore, session_options
run RackApp.new
and my couchbase options
module PlainRackApplication
  class Config
    @session_options = {
      path: '/',
      namespace: 'sessions_',
      key: 'foo_session',
      expire_after: 30.days,
      couchbase: { bucket: "foo",
                   username: 'foo',
                   password: 'bar',
                   default_format: :json }
    }
  end
end

In what environment did you encounter this error?
If this happens on your localhost, verify that
127.0.0.1 localhost
is included in your /etc/hosts. Worked for me.

The (error=0x15) error message suggests that one of the host names in the bootstrap list is incorrect.
The client randomises the bootstrap list, which explains why you only see the error when you make more requests or reload the application a number of times.
Furthermore, creating and destroying Couchbase client objects can slow down your application. If you can, use a long-lived persistent connection that is shared by all of your requests, as in the sketch below.
A number of users do use Couchbase as a session store, mainly because of its high performance.
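A minimal sketch of a shared connection, assuming the 1.x gem's Couchbase.connection_options and Couchbase.bucket helpers (verify against your gem version); the bucket name and credentials mirror the question's config:

# config/initializers/couchbase.rb
require 'couchbase'

# Assumed settings, copied from the question's session options.
Couchbase.connection_options = {
  bucket:   'foo',
  username: 'foo',
  password: 'bar'
}

# In the 1.x gem, Couchbase.bucket memoizes one connection per thread,
# so request code can call it repeatedly without reconnecting (and
# without re-resolving the bootstrap hosts each time).
SESSIONS = Couchbase.bucket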

Related

Sharing a connection pool for all Rails uses of Redis

Summary: I'm using a single Redis instance for the Rails cache, Action Cable, and (non-cache) use in my Rails code. Should all these uses share a single connection pool, and if so, how can I configure this, since there seem to be totally different ways to set up the pooling for each?
Details follow since people seem to like to see them.
I'm using redis as the adapter for rails cache using the following config.
config.cache_store = :redis_cache_store, {
  url: "redis://XXX.net:6379/0",
  pool_size: ENV.fetch('RAILS_MAX_THREADS') { 5 },
  password: Rails.application.credentials.dig(:redis, :password),
  expires_in: 24.hours,
  pool_timeout: 5
}
I've set the expires_in option so that I can configure Redis to evict keys that have an expiration set, letting me use the same Redis instance for both cache and non-cache data. Now I also want to access Redis directly for non-cache tasks, via something like the example config below.
pool_size = ENV.fetch("RAILS_MAX_THREADS", 5)
redis_pool = ConnectionPool.new(size: pool_size) do
  Redis.new(
    url: "redis://XXX.net:6379/0",
  )
end
But I'm not sure if that is correct. Shouldn't I be sharing a connection pool between the cache_store connections and the other connections to Redis? If so, how can I do this?
To complicate matters further I'm also using Redis for actioncable via a config like
production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://XXX.net:6379/0" } %>
  password: <%= Rails.application.credentials.dig(:redis, :password) %>
I've seen suggestions that Action Cable will automatically handle connection pooling with Redis if I'm using the connection_pool gem (is this right?), but I feel like all these connections should draw from the same pool. If so, how can I make that happen?
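For reference, a hedged sketch of one way to share a pool: build a single ConnectionPool and hand it to both the cache store and your own code. Whether the :redis option of RedisCacheStore accepts a ConnectionPool depends on the Rails version, so verify against your release before relying on it:

# config/initializers/redis_pool.rb
require 'connection_pool'
require 'redis'

REDIS_POOL = ConnectionPool.new(
  size:    ENV.fetch('RAILS_MAX_THREADS', 5).to_i,
  timeout: 5
) do
  Redis.new(
    url:      'redis://XXX.net:6379/0',
    password: Rails.application.credentials.dig(:redis, :password)
  )
end

# In config/environments/production.rb (assumption: :redis accepts a pool):
# config.cache_store = :redis_cache_store, { redis: REDIS_POOL, expires_in: 24.hours }

# Non-cache access elsewhere in the app:
# REDIS_POOL.with { |redis| redis.set('some_key', 'value') }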

geocoder not working (possibly due to network proxy?)

I am a beginner in Rails. I learned about the exciting geocoder feature from RailsCasts
[ http://railscasts.com/episodes/273-geocoder ]
but the same source code downloaded from there doesn't work behind a proxy: it doesn't populate any longitudes or latitudes.
How do I deal with the proxy server on my workspace network?
From another machine with a direct internet connection, things work fine.
geocoder has HTTP proxy support, but it's not obvious from the documentation where to configure it.
You can find it by looking at the initializer that gets created by your rails generate call: https://github.com/alexreisner/geocoder/blob/master/lib/generators/geocoder/config/templates/initializer.rb
Geocoder.configure(
  [...]
  # :http_proxy => nil,  # HTTP proxy server (user:pass@host:port)
  # :https_proxy => nil, # HTTPS proxy server (user:pass@host:port)
)
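So a filled-in configuration could look like this minimal sketch, where the host, port, and credentials are placeholders for your workspace proxy:

# config/initializers/geocoder.rb
Geocoder.configure(
  http_proxy:  'user:pass@proxy.example.com:8080',  # placeholder proxy
  https_proxy: 'user:pass@proxy.example.com:8080'
)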

Faye on Heroku: Cross-Domain Issues

I'm currently hosting both my rails app and a faye-server app on Heroku. The faye server has been cloned from here (https://github.com/ntenisOT/Faye-Heroku-Cedar) and seems to be running correctly. I have disabled websockets, as they are not supported on Heroku. Despite the claim on Faye's site that:
"Faye clients and servers transparently support cross-domain communication, so your client can connect to a server on any domain you like without further configuration."
I am still running into this error when I try to post to a faye channel:
XMLHttpRequest cannot load http://MYFAYESERVER.herokuapp.com. Origin http://MYAPPURL.herokuapp.com is not allowed by Access-Control-Allow-Origin.
I have read about CORS and tried implementing some solutions outlined here: http://www.tsheffler.com/blog/?p=428 but have so far had no luck. I'd love to hear from someone who:
1) Has a rails app hosted on Heroku
2) Has a faye server hosted on Heroku
3) Has the two of them successfully communicating with each other!
Thanks so much.
I just got my Faye and Rails apps hosted on Heroku communicating within the past hour or so... here are my observations:
Make sure your FAYE_TOKEN is set on all of your servers if you're using an env variable.
Disable websockets, which you've already done... client.disable(...) didn't work for me, so I used Faye.Transport.WebSocket.isUsable = function(_,c) { c(false) } instead.
This may or may not apply to you, but it was the hardest thing to track down for me: in my dev environment, the port my application runs on gets tacked onto the end of the specified hostname for my Faye server, and this appeared to cause a failure to communicate in production. I worked around that by creating a broadcast_server method in application_controller.rb that handles inclusion of a port when necessary, and then used that anywhere I spin up a new channel.
class ApplicationController < ActionController::Base
  # In dev the request port isn't 80, so pin :80 explicitly to stop
  # the client tacking the dev port onto the Faye hostname.
  def broadcast_server
    if request.port.to_i != 80
      "http://my-faye-server.herokuapp.com:80/faye"
    else
      "http://my-faye-server.herokuapp.com/faye"
    end
  end
  helper_method :broadcast_server

  def broadcast_message(channel, data)
    message = { :ext => { :auth_token => FAYE_TOKEN }, :channel => channel, :data => data }
    uri = URI.parse(broadcast_server)
    Net::HTTP.post_form(uri, :message => message.to_json)
  end
end
And in my app JavaScript, including:
<script>
  var broadcast_server = "<%= broadcast_server %>";
  var faye;
  $(function() {
    faye = new Faye.Client(broadcast_server);
    faye.setHeader('Access-Control-Allow-Origin', '*');
    faye.connect();
    Faye.Transport.WebSocket.isUsable = function(_, c) { c(false); };
    // spin off your subscriptions here
  });
</script>
FWIW, I wouldn't stress about setting Access-Control-Allow-Origin, as it doesn't seem to make a difference either way - I see XMLHttpRequest cannot load http://... regardless, but this should still work well enough to get you unblocked. (Although I'd love to learn of a cleaner solution...)
Can't say I have used Rails/Faye on Heroku but have you tried setting the Access-Control-Allow-Origin header to something like Access-Control-Allow-Origin: your-domain.com?
For testing you could also do Access-Control-Allow-Origin: * to see if that helps
Custom headers
Some services require the use of additional HTTP headers to connect to
their Bayeux server. You can add these headers using the setHeader()
method, and they will be sent if the underlying transport supports
user-defined headers (currently long-polling only).
client.setHeader('Authorization', 'OAuth abcd-1234');
Source: http://faye.jcoglan.com/browser.html
So try client.setHeader('Access-Control-Allow-Origin', '*');
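Note that Access-Control-Allow-Origin is ultimately a response header, so if setting it on the client doesn't help, one option is to set it on the Faye server itself. A hedged sketch of a small Rack middleware in the Faye app's config.ru (the middleware class name is an assumption; Faye::RackAdapter is the faye gem's standard Rack endpoint):

require 'faye'

class AllowCrossOrigin
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    # '*' for testing; tighten to your Rails app's domain in production
    headers['Access-Control-Allow-Origin'] = '*'
    [status, headers, body]
  end
end

use AllowCrossOrigin
run Faye::RackAdapter.new(:mount => '/faye', :timeout => 25)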

ruby interprocess communication

I have a Rails project and two ruby mini-daemons running in the background. What's the best way to communicate between them?
Communication like below should be possible:
Rails -> Process 1 -> Process 2 -> Rails
Some requests would be sync, other async.
Queues (something like AMQ, or custom Redis based) or RPC HTTP calls?
Check DRb as well.
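DRb ships with Ruby's standard library, so a minimal sketch costs nothing to try; the URI, class, and method names here are illustrative:

require 'drb/drb'

# Illustrative object the daemon would expose.
class Worker
  def process(payload)
    "processed #{payload}"
  end
end

# Daemon side: serve the object over a druby:// URI.
DRb.start_service('druby://localhost:8787', Worker.new)

# Rails side: obtain a proxy and call it like a local object (synchronous).
worker = DRbObject.new_with_uri('druby://localhost:8787')
puts worker.process('hello')  # => "processed hello"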
I implemented a system via RabbitMQ + the bunny gem.
Update:
After reading http://blog.brightbox.co.uk/posts/queues-and-callbacks I decided to try out RabbitMQ. There are two gems, amqp (async, EventMachine based) and bunny (sync). amqp is great, but if you're using Rails with Passenger it can do some weird things.
The system works like this: the daemons listen on a queue for messages:
# The incoming data should be a JSON encoded hash that looks like:
# { "method" => method_to_call, "opts" => [ Array of opts for method ],
#   "output" => "a queue where to send the result (optional)" }
# If output is specified it will publish the JSON encoded response there.
def listen_on(queue_name, klass)  # `class` is a reserved word in Ruby, hence `klass`
  BUNNY.start
  queue = BUNNY.queue(queue_name)
  queue.subscribe do |msg|
    msg = JSON.parse(msg[:payload])
    result = klass.new.send(msg["method"], *msg["opts"])
    if msg["output"]
      BUNNY.queue(msg["output"]).publish(result.to_json)
    end
  end
end
So once a message is received, it calls a method on a class. One thing to note: it would have been ideal to use bunny for Rails and amqp in the daemons, but I like to use one gem per service.
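From the Rails side, publishing into the system could look like this hedged sketch, assuming the same old-style shared Bunny client (BUNNY) as above; the queue name and payload are illustrative:

payload = {
  "method" => "resize_image",  # illustrative method on the daemon's class
  "opts"   => [42],
  "output" => "results"        # optional reply queue
}
BUNNY.queue("daemon_commands").publish(payload.to_json)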

Mongodb server goes down, how to prevent Rails app from timing out?

I'm using central_logger to store logs from our Rails app in mongodb. When the mongo server went down recently our app started timing out on mongo inserts. How can I prevent Rails from timing out if the mongo server goes down?
The Ruby driver supports timeouts like so:
@conn = Connection.new("localhost", 27017, :pool_size => 5, :timeout => 5)
But the central_logger gem isn't using that. So you can either fork it to add that in, or monkey-patch the CentralLogger::MongoLogger.connect method.
It currently has
def connect
  @mongo_connection ||= Mongo::Connection.new(@db_configuration['host'],
                                              @db_configuration['port'],
                                              :auto_reconnect => true).db(@db_configuration['database'])
  if @db_configuration['username'] && @db_configuration['password']
    # the driver stores credentials in case reconnection is required
    @authenticated = @mongo_connection.authenticate(@db_configuration['username'],
                                                    @db_configuration['password'])
  end
end
You would need to monkey-patch :timeout => 5 (or whatever) into the Mongo::Connection.new call.
I would bet the author of central_logger would like to have this in there, so a fork and pull request would likely be welcome.
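A hedged sketch of what that monkey-patch could look like, reopening the class in an initializer; the method shape follows the gem code quoted above:

# config/initializers/central_logger_timeout.rb
module CentralLogger
  class MongoLogger
    def connect
      # Same as the gem's method, plus :timeout so a down mongod
      # fails fast instead of hanging the request.
      @mongo_connection ||= Mongo::Connection.new(@db_configuration['host'],
                                                  @db_configuration['port'],
                                                  :auto_reconnect => true,
                                                  :timeout => 5).db(@db_configuration['database'])
      if @db_configuration['username'] && @db_configuration['password']
        @authenticated = @mongo_connection.authenticate(@db_configuration['username'],
                                                        @db_configuration['password'])
      end
    end
  end
end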
You could use replica sets - so if the master goes down, it can failover automatically to one of the replicas.
Usually the database insert should be fast, so you could work with Ruby's timeout:
require 'timeout'
begin
  Timeout::timeout(0.2) do
    # ... write to log server
  end
rescue Timeout::Error
  # logging failed; carry on serving the request
end
This times out after 200 milliseconds, and with the rescue the request continues in any case.
