Get Rails application domain name without request

Is there a way to get a Rails app's domain name without using the request?
I know how to get the URL from the request, but what if there is no request and Rails is just running a delayed job task? Can I get the domain name of the server where the Rails app is hosted?

There isn't a built-in way of doing this, due to how Rails is designed, but if it is a big enough project the host name might be stored somewhere.
So you could look for a place where it is being set and use that elsewhere.
For example, if the project uses request_store you could search for something like:
RequestStore.store[:host] = SomeTable.stored_host_name
And then use SomeTable.stored_host_name in your worker, migration, or wherever else you need it without a request at hand.
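If nothing like that exists in the project, a common convention (not something Rails gives you automatically) is to configure the host once per environment and read it back wherever there is no request. A minimal sketch for Rails 3+, with example.com standing in for your real domain:

# config/environments/production.rb
# Configure the host once so code without a request (mailers, jobs)
# can still build full URLs.
config.action_mailer.default_url_options = { :host => "example.com", :protocol => "https" }

# Later, e.g. inside a delayed job:
host = Rails.application.config.action_mailer.default_url_options[:host]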
Have a great day!

Related

Get Paperclip full URL in model

I publish some data to Redis after saving my model, and I need to send the full URL of the attachment to Redis, but I'm stuck on getting the Rails root URL (with port) or the full attachment URL.
How can I get the full URL of Paperclip attachments, or the Rails app root URL, in a model file?
Models don't know how you deploy the application. Controllers, however, do, via the request object:
"#{request.protocol}#{request.host}"
You could pass it down to the model level, but that badly breaks the abstraction.
I would consider changing the design.
The use case also comes up with WebSockets. Use request.host, request.port, request.protocol, or request.params to get the necessary request information. @Danil's answer has the right string interpolation for the question.
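For completeness, a minimal sketch of the pass-it-down approach mentioned above (the model, accessor, and attachment names are all hypothetical):

# app/models/photo.rb
class Photo < ActiveRecord::Base
  has_attached_file :image    # Paperclip attachment; name is illustrative
  attr_accessor :base_url     # set by the controller, not persisted

  def full_image_url
    "#{base_url}#{image.url}" # image.url is only a path like /system/...
  end
end

# In the controller, where the request is available:
@photo.base_url = "#{request.protocol}#{request.host_with_port}"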

Reading pending work from PhantomJS

I am building a pool of PhantomJS instances, and I am trying to make it so that each instance is autonomous (it fetches the next job to be done).
My concern is to choose between these two:
Right now I have a Rails app that can tell PhantomJS which URL needs to be parsed next. So I could do an HTTP GET call from PhantomJS to my Rails app, and Rails would respond with a URL that is pending (most likely Rails would get it from a queue).
Alternatively, I am thinking of building a standalone Redis server that PhantomJS would access via Webdis, so Rails would push the jobs there and the PhantomJS instances would fetch them directly.
I am trying to decide which is the better choice in terms of performance: PhantomJS hitting the Rails server (so Rails needs to get the job from the queue and send it to PhantomJS), or having PhantomJS access a Redis server directly.
Maybe I need more info, but why isn't the performance answer obvious? PhantomJS hitting the Redis server directly means fewer layers to go through.
I'd consider developing whatever is easier to maintain. What's the ballpark number of requests per minute? What sort of company is this (how funded / resource-strapped are you)?
There are also more out-of-the-box solutions like IronMQ that may ease the pain.
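If you do go the direct-to-Redis route, the queue itself is only a few calls. A minimal sketch using the redis gem, with an illustrative queue key (the PhantomJS side would issue the equivalent commands through Webdis's HTTP interface):

require "redis"

redis = Redis.new

# Rails side: push a URL that needs parsing.
redis.rpush("phantomjs:urls", "http://example.com/page-to-parse")

# Worker side: block until a job is available (BLPOP returns [key, value]).
_queue, url = redis.blpop("phantomjs:urls")
puts "Next job: #{url}"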

Sending data from an analytics engine to a Rails server

I have an analytics engine which periodically packages a bunch of stats in JSON format. I want to send these packages to a Rails server. Upon a package arriving, the Rails server should examine it, generate a model instance out of it (for historical purposes), and then display the contents to the user. I've thought of two approaches.
1) Have a little app residing on the same host as the Rails server listening for these packages (using ZeroMQ). Upon receiving a package, the app would invoke a Rails action through cURL, passing the package as a parameter. My concern with this approach is that my Rails server checks that only signed-in users can access actions which affect models. By creating an action accessible to this listening app (and therefore to other entities), am I exposing myself to a major security flaw?
2) The second approach is to simply have the listening app dump the package into a special database table. The Rails server will then periodically check this table for new packages. Upon detecting one or more, it will process them and remove them from the table.
This is the first time I'm doing something like this, so if you have techniques or experiences you can share for better solutions, I'd love to learn.
Thank you.
You can restrict access to a certain call by limiting the IP address that is allowed for the request in routes.rb:
post "/analytics" => "analytics#create", :constraints => { :ip => /127\.0\.0\.1/ }
If you want the users to see updates, you can use polling to refresh the page every minute or so.
1) Yes, you are exposing a major security hole unless:
your ZeroMQ app provides the data needed to do authentication and authorization on the Rails side;
your Rails app is configured to listen only on the 127.0.0.1 interface and is thus not accessible from the outside;
or, as Benjamin suggests, you restrict specific routes to certain IPs.
2) This approach looks a lot like what delayed_job does. You might want to take a look at https://github.com/collectiveidea/delayed_job and use a rake task to add a new job.
In short, your listening app would call a rake task that adds a custom delayed_job when it receives a packet, then let delayed_job handle the load. You benefit from delayed_job's goodness (different queues, scaling, ...). The hard part is getting the result back.
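A rough sketch of the enqueue path (the task name, job class, and fields are all hypothetical):

# lib/tasks/analytics.rake
namespace :analytics do
  desc "Enqueue an incoming package for background processing"
  task :enqueue, [:payload] => :environment do |_t, args|
    # Any object with a #perform method can be enqueued with delayed_job.
    Delayed::Job.enqueue ProcessPackageJob.new(args[:payload])
  end
end

# ProcessPackageJob is a hypothetical plain Ruby job:
ProcessPackageJob = Struct.new(:payload) do
  def perform
    stats = JSON.parse(payload)
    # ... create the historical model instance, compute the result ...
  end
end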
One idea would be to associate a unique ID with each job and have the delayed_job task write the result to a data store which associates the job ID with the result. This data store can be a simple relational table
+----+--------+
| ID | Result |
+----+--------+
or a memcached/Redis/whatever instance. You just need to poll that data store for the result associated with the job ID, and delete everything once it has been displayed to the user.
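Polling the store could look something like this (JobResult is an illustrative ActiveRecord model with job_uid and result columns):

# In the controller action the user's page polls:
result = JobResult.find_by_job_uid(params[:job_uid])  # nil until the worker finishes
if result
  render :json => result.result
  result.destroy  # clean up once it has been displayed
else
  head :no_content
end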
3) Why don't you directly POST the data to the Rails server?
Following Benjamin's lead, I implemented a filter for this particular action.
# Registered as a before_filter on the action the listening app calls.
def verify_ip
  @ips = ['127.0.0.1']
  redirect_to root_url unless @ips.include?(request.remote_ip)
end
The listening app on localhost now invokes the action, passing the JSON package received from the analytics engine as a param. Thank you.

Sharing session data between Rails and Node?

The main question is: Can I read Rails session data in Node?
More details:
I have a project that is written in Ruby on Rails. It works but I want to add to it and eventually replace it using NodeJS. Both are running on the same server, just on different ports.
As of now, RoR will serve up all the HTML files (and continue handling the existing functionality), and then I'll connect to the Node server via AJAX. Node will just dish up JSON for the time being.
The problem is, how can I work with session variables between the two? More specifically, can I get at RoR's session variables in Node? Mostly I just need to know which user is logged in.
If it matters, I am running Rails 2.3.5, Ruby 1.8.7, and Node 0.8.17.
I haven't tried exactly this myself, but we did something similar with Sinatra and Java.
I won't comment on your approach to application design, but in case you don't mind using the memcached session store in your Rails application, yes, it is possible. Configuring memcached with a Ruby app is explained in the Heroku docs.
In the Node application you can use a memcached client like 3rd-Eden's and read the session data from memcached.
You would have to explicitly pass the session id generated by Rails to Node.
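A minimal sketch of the Rails side for Rails 2.3 (exact options depend on your memcached client; by default sessions go to a local memcached on port 11211):

# config/environment.rb (Rails 2.3)
Rails::Initializer.run do |config|
  # Store sessions in memcached instead of a cookie, so another process
  # can look them up by session id.
  config.action_controller.session_store = :mem_cache_store
end

One caveat: Rails serializes session data with Ruby's Marshal format by default, so the Node side has to be able to decode that, or you can store just the simple values Node needs under separate, predictable keys.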

How to get the HTTP_HOST from Rails?

I need to set some server specific variables in a rails application that will run on at least two different servers. What is the best way to get the request's HTTP_HOST value in order to know what is the current server and set these variables accordingly?
I'm using Apache 2 with Passenger.
Think you're looking for request.env["SERVER_ADDR"].
Are the 2 servers involved in load balancing? If so, you won't want to grab the host from the request, because it'll be the same in both cases. You'll want to grab it with some generic Ruby call. It's been a while since I've done Ruby, but probably something like (untested, rough idea):
require 'resolv'
hosts = Resolv::Hosts.new
hosts.getnames(hosts.getaddress('localhost'))
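If they're not load balanced, a simple per-host lookup from the request works. A sketch (HOST_SETTINGS and its contents are purely illustrative):

# config/initializers/host_settings.rb (hypothetical)
HOST_SETTINGS = {
  "app1.example.com" => { :upload_dir => "/srv/app1" },
  "app2.example.com" => { :upload_dir => "/srv/app2" },
}

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_filter :load_host_settings

  private

  def load_host_settings
    @host_settings = HOST_SETTINGS[request.host] || {}
  end
end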
