Store SSH connections in Rails

I have a Rails app that needs to communicate with a couple of servers through SSH. I'm using the Net::SSH library and it works great. I would like, however, to be able to cache/store the SSH connections somehow between requests (something like OpenSSH multiplexing).
So I can't store them in a key-value store like Memcached or Redis (because SSH connections are not serializable).
I don't want to store them in a session because they are meant to be used by all users (and besides, I think they would need to be serializable as well).
I managed to get this working with class variables and initializer constants. I know that class variables don't replicate between servers (in production), and I'm pretty certain initializer constants don't either. Something like:
initializer:

SSH = {}

model:

class Server
  def connection
    require 'net/ssh'
    SSH[name] ||= Net::SSH.start(ip, "root", :password => password)
  end
end
OpenSSH multiplexing would be great, but I'm not sure whether I could do that through the Net::SSH Ruby library (I'm back to storing the master connection somewhere).
Are there any other solutions? Or if not, which one is the least evil of them all?

Perhaps, rather than trying to share sockets across requests, which is bound to end up causing pain and suffering, you could delegate to a background processor of some kind? You could set up an SSH tunnel and use DRb to talk across it as if it were just a local network daemon, or use any of the large number of networked asynchronous job-handling daemons.
http://ruby-toolbox.com/categories/queueing.html

To keep the SSH connection up between requests, you'll need to spawn off a background process. The background process can open up a pipe or some other interprocess communication channel, and a handle to that channel is something you can store in a serializable way.
Note that this is a non-trivial exercise, which is why I've only described it at a high level.
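To make that concrete, here is a minimal sketch of the DRb approach suggested above, not something from the original answers: the SshBroker class, the port, and the exec method are all illustrative. The daemon owns the Net::SSH sessions, and the Rails app only sends it commands over DRb, so nothing connection-shaped ever needs to be serialized or shared between web workers.

# ssh_broker.rb -- a long-running daemon that owns the SSH sessions.
require 'drb/drb'
require 'net/ssh'

class SshBroker
  def initialize
    @sessions = {}
  end

  # Run a command on the named server, reusing an open session when possible.
  def exec(name, ip, password, command)
    session = (@sessions[name] ||= Net::SSH.start(ip, 'root', password: password))
    session.exec!(command)
  end
end

DRb.start_service('druby://localhost:8787', SshBroker.new)
DRb.thread.join

# In the Rails app (e.g. inside the Server model), the web process never
# holds the connection itself:
#
#   require 'drb/drb'
#   broker = DRbObject.new_with_uri('druby://localhost:8787')
#   output = broker.exec(name, ip, password, 'uptime')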


Rails pub/sub with ActiveMQ

I'd like my Rails app to be able to listen and publish to ActiveMQ queues.
This article gives examples of how to use a ruby STOMP client, and a gem activemessaging that integrates that client into a Rails app. The functionality there seems ideal, but the activemessaging gem seems to no longer be maintained.
There are lots of resources on using rabbitMQ instead of ActiveMQ, but I'm trying to improve my Rails app's integration with an existing Java stack that's already using ActiveMQ.
So does anyone know of a gem I can use to achieve similar functionality to that of the activemessaging gem? I can't find one, so failing that:
How would I initialise a Stomp client with a persistent connection to my ActiveMQ instance inside the context of my Rails app, such that 1) the lifecycle of the client is tied to that of the Ruby process running my app, not the request-response cycle, and 2) I get to consume messages using code such as Active Record models or service objects defined in my app?
Thanks in advance.
According to the ActiveMessaging project website:
ActiveMessaging is a generic framework to ease using messaging, but is not tied to any particular messaging system - in fact, it now has support for Stomp, AMQP, beanstalk, Amazon Simple Queue Service (SQS), JMS (using StompConnect or direct on JRuby), WebSphere MQ...
So, it's an interface to simplify integration between various messaging protocols and/or providers. However, since you're using a standardized messaging protocol (i.e. STOMP), you don't really need it.
I recommend you simply use the STOMP gem that is referenced in the original article.
STOMP, as the name suggests, is a very simple protocol. You should be able to use it however you need in your application.
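For a sense of how little code that involves, here is a rough sketch of connecting and publishing with the stomp gem; the broker host, port, credentials, and queue name are placeholders rather than values from the question.

require 'stomp'
require 'json'

# Login, passcode, host, and port are placeholders; point them at your ActiveMQ broker.
client = Stomp::Client.new('admin', 'admin', 'localhost', 61613)

# Publish a JSON message to a queue that a Java consumer could read as well.
client.publish('/queue/orders', { id: 42 }.to_json, 'content-type' => 'application/json')

client.close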
As there's so little out there on this topic, I thought I'd share the solution I came up with. Having established that using the STOMP gem directly is the way forward, let me reiterate the key challenges:
How would I initialise a Stomp client with a persistent connection to
my activeMQ instance inside the context of my Rails app, such that
1) The lifecycle of the client is tied to that of the ruby process
running my app, not the request-response procedure, and
2) I get to consume messages using code such as Active Record models or service
objects defined in my app?
Part 1) turned out to be a bad idea. I managed to achieve it using a Rails initializer, which worked fine locally. However, when I ran it in a staging environment I found that my message listeners died mysteriously. What seems to happen is that production web servers spawn the app (running the initializers), fork worker processes (without running them again), and kill processes at random, eventually killing the listeners without ever replacing them.
Instead, I used the daemons gem to create a background process that's easy to start and stop. My code in lib/daemons/message_listener.rb looked something like this:
require 'daemons'

# Usage (from the daemons dir):
#   ruby message_listener start
#   ruby message_listener status
#   ruby message_listener stop
# See https://github.com/thuehlinger/daemons for full docs.

# Require this to get your app code.
require_relative '../../config/environment'

Daemons.run_proc('listener.rb') do
  client = nil

  at_exit do
    begin
      client.close
    rescue # probably means there's no connection to close, do nothing to handle it.
    end
  end

  client = Stomp::Client.new(your_config_options)

  # Your message handling code using your rails app goes here

  loop do
    # I'd expected that subscribing to a stomp queue would be blocking,
    # but it doesn't seem to be.
    sleep(0.001)
  end
end
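To fill in the "your message handling code" placeholder above, the subscription can hand each frame to whatever Active Record model or service object the app already defines. The queue name and the OrderProcessor service object below are made-up examples, and client acknowledgement is just one reasonable choice:

# Goes where the "message handling code" comment sits, inside run_proc.
client.subscribe('/queue/orders', ack: 'client') do |message|
  begin
    payload = JSON.parse(message.body)
    OrderProcessor.new(payload).call # any service object or model from the app
    client.acknowledge(message)
  rescue StandardError => e
    Rails.logger.error("Failed to process #{message.body}: #{e.message}")
  end
end

Because config/environment is required at the top of the daemon, the full Rails environment (models, service objects, Rails.logger) is available inside that block.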

Getting "ECONNREFUSED" error when trying to upload to Wolkenkit Blob Server

I'm currently developing a Wolkenkit application which is run on my local machine.
I want to upload a file from the Wolkenkit app to the blob server (as documented here).
When sending a POST request from the server to https://local.wolkenkit.io:3001/, Node.js gives me the error ECONNREFUSED.
I've tested the POST request with another program and it works there. Any idea why it doesn't work from the wolkenkit application itself?
Thanks!
The Storing files sample you linked to shows code that is to be run in the browser, not in the backend itself. Of course, both should work, but there are a few minor differences you need to watch out for.
Fixing the host name
First, I suppose that local.wolkenkit.io in your case maps to 127.0.0.1, which is the default for wolkenkit. That means that when you try to connect to this domain from within a Docker container, the container does not call out to the blob storage container, but stays within itself. So, the first thing that needs to be fixed is the host name.
Basically, there are two options for this: You can either set up local.wolkenkit.io so that it resolves to the external IP address of your machine. This would work, but is pretty cumbersome. The other option is to directly address the appropriate container that is responsible for blob storage, by its internal name. The internal name is <name-of-your-app>-depot-file. So you need to replace https://local.wolkenkit.io:3001/ with https://<...>-depot-file.wolkenkit.io:3001/.
Fixing the port
Second, the port is wrong. This is because the blob storage service is internally running on port 3000, externally on 3001. So instead of https://<...>-depot-file.wolkenkit.io:3001/ you need to use https://<...>-depot-file.wolkenkit.io:3000/.
Once you have done this you should not get any more errors like ECONNREFUSED, since now the service can be found.
Fixing SSL issues
Third, since you are now connecting to the blob storage service using a different domain name, the SSL certificate doesn't match any more, since it was issued for local.wolkenkit.io. As a result, you will get SSL errors when trying to connect.
The simplest way to get around this is to disable any SSL checks (albeit this is also the most insecure way to handle this!). How to do this depends on the HTTP client module you are using. E.g., in request there is an option called strictSSL that you can set to false.
Of course, what you actually should do is either use a custom certificate that includes this domain name as well, or write a function that handles the certificate check yourself and accepts the presented certificate in this specific case.
If you do all of this, things should work :-)
PS: I am one of the authors of wolkenkit. Thanks a lot for bringing up this issue, and we will take care of this in the future, to make storing blobs easier.

Launch a script on a separate server from a Rails app

In my Rails app, when a user clicks a button it will currently launch an in-house created script in the background. For simplicity, let's just call it myScript. So in my Rails app, I basically have:
def run!
  `myScript with some arguments`
end
Now this script will run as a process on the same machine that the Rails application is running on.
We want to host all of our Ruby/Rails apps on one server, and utilize a separate server for running the scripts. Is it possible to launch that script, but on a different machine? Let me know if you need additional information.
I use ssh for these types of things.
require 'net/ssh'

Net::SSH.start('server.com', 'username', password: "asdasd") do |ssh|
  $stdout.print ssh.exec!("cdc && curl https://gist.github.com/mhenrixon/asdasd123123/raw/123123asdasd/update.rb | rails c production")
end
That's the easiest way of doing it, I think, but the Sinatra/Rails listener isn't a bad idea either.
To flat out steal Dogbert's answer: I'd go with an HTTP solution. Create a background job (Sidekiq, Queue Classic) and have a simple job that does a GET or a POST or whatever on that second server.
The HTTP solution will involve a bit of setup cost (time and learning, probably), but in the end it will be a bit more robust than the SSH solution, as you won't have to worry about IPs, users, etc., just a straight-up URL. Plus, if you are doing things with Capistrano, etc., your deployments will be super easy.
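A rough sketch of what that could look like, assuming Sidekiq on the Rails side and a small Sinatra listener on the script server; the URL, shared token, and job/route names are all placeholders rather than parts of either original app.

# Rails side (app/jobs/run_script_job.rb): a Sidekiq worker that asks the
# script server to run the script instead of shelling out locally.
require 'net/http'
require 'uri'

class RunScriptJob
  include Sidekiq::Worker

  def perform(args)
    uri = URI('http://scripts.internal.example:4567/run')
    Net::HTTP.post_form(uri, 'args' => args, 'token' => ENV['SCRIPT_TOKEN'])
  end
end

# Script server side (a separate app): a minimal Sinatra listener that launches myScript.
require 'sinatra'

post '/run' do
  halt 403 unless params['token'] == ENV['SCRIPT_TOKEN']
  pid = spawn('myScript', *params['args'].to_s.split)
  Process.detach(pid)
  "started #{pid}"
end

The Rails action then just calls RunScriptJob.perform_async('with some arguments'), and only the script server needs the script installed.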
Is there a reason why these jobs couldn't be run on your web server, but with a background process?

Is it secure to communicate with localhost via a socket without TLS or similar?

I'm writing a library that implements a distributed object system over a socket connection. I'm requiring that users sign any messages sent, at least when communicating over a network, as otherwise an attacker could pose as one of the participants and remotely call methods on the other, which would be a Bad Thing.
The main use of this library is for network communications. However I want to make it as simple as possible to get a 'hello world' example running locally without compromising someone's machine. Is it reasonable to assume that incoming data from a connection to localhost is really from localhost without securing it in some other way? Are there any other reasons that this might not be secure?
In case it's relevant, I'm working on OSX/iOS.
Connection on loopback is secure unless you have remote login enabled on the machine. Users can easily redirect connections with ssh(1).
Whether it is a good idea to complicate your code by not verifying messages from loopback is a different question that you have to ask yourself.

How can I update a DataSnap server while clients are still connected?

We use stateful DataSnap servers for some business logic tasks and also to provide clientdataset data.
If we have to update the server to modify a business rule, we copy the new version into a new empty folder and register it (depending on the Delphi version, just by launching or by running the TRegSvr utility).
We can do this even while the old server instance is running. However, after registering the new version, all new client connections will still use the currently running (old) server instance. All clients have to disconnect first, then the new server will be used for the next clients.
Is there a way to direct all new client connections to the new server, immediately after registering?
(I know that new or changed method signatures will also require a change and restart of the clients but this question is about internal modifications which do not affect the interface)
We are using Socket connections, and all clients share the same server application (only one application window is open). In the early days we used a different configuration of the remote data module, which resulted in one app window per client. Maybe this could be a solution? (Because every new client would launch the currently registered executable.)
Update: Does Delphi XE offer some support for 'hot deployment' (of updated servers)? We use Delphi 2009 at the moment but would upgrade to XE if it offers an easier implementation of 'hot deployment'.
You could separate your app server into two new servers: one a simple proxy object redirecting all methods (and optionally containing state info, if any) to the second one, which actually implements your business logic. You would also need to implement a "silent reconnect" feature in your proxy server so as not to disturb connected clients if you decide to replace the business app server at any time. I never did such a design myself before, but I hope the idea is clear.
Have you tried renaming the current server and placing the new one in the same location with the correct name (versus changing the registry location)? I have done this for COM libraries before with success. I am not sure if it would apply to remote launch rules though, as it may look for an existing instance to attach to instead of a completely fresh server.
It may be a bit hackish, but you could have the client call a method on the server indicating that a newer version is available. This would allow it to perform any necessary cleanup so it doesn't end up talking to both the existing server instance and the new server instance at the same time.
There is probably not a simple answer to this question, and I suspect that you will have to modify the client. The simplest solution I can think of is to have a flag on the server (a property, or an out parameter on some commonly called method) that the client checks periodically and that tells it to disconnect and reconnect (called something like ImBeingRetired).
It's also possible to write callbacks under certain circumstances for datasnap (although I've never done this). This would allow the server to inform the client that it should restart or reconnect.
The last option I can think of (that hasn't already been mentioned) would be to make the client/server stateless, so that every time the client wants something it connects, gets what it wants then disconnects.
Unfortunately none of these options are the answer you want to your question, but might give you some ideas.
1. (Optional) Set up VMware vSphere, ESX, or find a hosting service that already has one.
2. Store the session variables in a db.
3. Prepare 2 web boxes with 2 distinct IP addresses and deploy your stuff.
4. Set up DNS, firewall, load balancer, or a BSD vm so that the name "example.com" resolves to web box 1.
5. Deploy the new version to web box 2.
6. Switch over to web box 2 using whatever routing method you chose.
7. Deploy the new version to web box 1 if things look OK.
Using DNS is probably easiest, but it takes time for the mapping to propagate to the client (if the client is outside your LAN), and two clients may see different results. Some firewalls have an IP address mapping feature that lets you map a public IP address to an internal IP address. The ideal way is to use a load balancer and configure it to 50:50, then change it to 100:0 when you want to do the upgrade, but that costs money. A cheaper alternative is to run a software load balancer on a BSD vm, but it probably requires some work.
Edit: What I meant to say is session variables, not session. You said the server is stateful. If it contains some business logic that uses session variables, they need to be stored externally to be preserved across the reconnection during switch-over. The actual DataSnap session will be lost, so when you shut down web box 1 during the upgrade, the client will get a "Session {some-uuid} is not found" error from web box 1, and it will reconnect to web box 2.
Also, you could use 3 IP addresses (1 public and 2 private) so the client always sees one address, which is a better method.
I have done something similar by having a specific table which held my "data version". Each time I would update the server or change a system-wide global setting, I would increment this field. When a client starts it always checks this value, and will check again before any transactions/queries. If the value was ever different from when it first started, then the client needed to go through my re-initialization logic, which could easily include a re-login to an updated server.
I was using IIS to publish my app servers, so the data that would change would be the path to the app server. I kept the old ones available to respond to any existing transactions that were still in play. Eventually these would be removed once I knew there were no more client connections to that version.
You could easily handle knowing which versions to keep around if you log which server the client last connected to (and therefore would know about).
For newer versions (Delphi 2010 and up), there is an interesting solution for systems using the HTTP transport: Implementing Failover and Load Balancing in DataSnap 2010 by Andreano Lanusse, and a related question for the TCP/IP transport: How to direct DataSnap client connections to various DS Servers?
