A web app that connects to a local server; is that safe? - ruby-on-rails

I am writing a Rails app that ships with a local program for the user to run. That program starts a local Node.js server, which connects to the user's browser through WebSockets. On certain actions by the user in the browser, a signal is sent through the socket to the local Node server, which then executes a pre-defined command on the user's local machine. The command differs depending on the user's specific action in the browser, but there is no way for the user to send custom data to the local server (i.e. different actions send pre-defined information to it). I was wondering whether there are security implications to doing this, and what some possible exploits might be if so.
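For concreteness, here is a minimal sketch of the kind of local server I mean, written with Node's ws package in TypeScript (the action names, commands, origin, and port are placeholders, not my real ones):

```typescript
// Minimal sketch of the local command server, assuming the "ws" package.
// Action names, commands, origin, and port below are placeholders.
import { WebSocketServer } from "ws";
import { execFile } from "child_process";

// Only pre-defined commands can ever run; anything else is ignored.
const COMMANDS: Record<string, [string, string[]]> = {
  "open-editor": ["code", ["."]],   // hypothetical action -> command mapping
  "run-build":   ["make", ["build"]],
};

const wss = new WebSocketServer({ port: 8081 });

wss.on("connection", (socket, request) => {
  // Reject connections that do not originate from the web app: any other
  // local page or process could otherwise try to drive this socket.
  if (request.headers.origin !== "https://yourapp.example.com") {
    socket.close();
    return;
  }

  socket.on("message", (data) => {
    const entry = COMMANDS[data.toString()];
    if (!entry) return; // unknown signal: do nothing
    const [cmd, args] = entry;
    // execFile (not exec) avoids shell interpretation of the arguments.
    execFile(cmd, args, (err) => {
      if (err) console.error(`command failed: ${err.message}`);
    });
  });
});
```

Even with a fixed whitelist and an Origin check like this, I'm unsure what an attacker could still do.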

I had trouble with a Chrome extension communicating with a local environment in a setup like this:
You may run into problems with HTTP access control (CORS); the cors gem could fix that if it happens: http://rubygems.org/gems/cors
Another possible symptom is a "406 Not Acceptable" response, also caused by CORS.
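If it's the local Node server that has to satisfy the browser, the fix amounts to answering the preflight with the right headers. A minimal sketch in TypeScript on Node's built-in http module (the allowed origin and port are placeholders):

```typescript
// Sketch: answering CORS preflight on the local server so the browser
// will allow cross-origin requests from your site.
import http from "http";

http.createServer((req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "https://yourapp.example.com");
  res.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type");
  if (req.method === "OPTIONS") { // the preflight request itself
    res.writeHead(204);
    res.end();
    return;
  }
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ status: "ok" }));
}).listen(8082);
```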

Related

.NET MVC Custom Mail Server

I'm in the process of planning the development of a mail server to handle the sending of email across our multiple websites. Below is a description of what we are planning to implement, and I'd like your opinion/suggestions.
We use ASP.NET MVC and have many websites hosted on Azure. We currently send mail internally within each of the web applications using SMTPServer.Send(). Obviously this is not the ideal way to send emails when you have a decently busy set of websites, because the send-mail call is blocking and cannot guarantee mails are sent. With this in mind, I'm worried about getting an influx of mail requests when we launch our next website (we think it'll get a decent amount of traffic, and lots of emails will be sent).
My plan of action: build a centralised mail server that runs in the background (we use Azure, and this will simply be another web application). When one of our web applications wants to send a mail, instead of doing this internally it will call a web method on the mail server, sendMail(). This function will accept certain parameters and insert the mail parameters, content, etc. into a database. The mail server will then poll the database at fixed intervals, select a set of unsent emails, and attempt to send them using the same SMTPServer.Send() function. If an email fails for some reason, we won't flag it as sent, so in the next poll interval it will be selected again and another send attempt will be made (we will cap the number of send attempts at, say, 20).
This will allow each of the websites to run smoothly without loads of blocking send-mail calls internally, and the mail server will handle all the sending sequentially, in a controlled environment, as a separate standalone web application.
Thanks in advance!
Looks like a good design. I don't know the entire scenario that led you to build something like an email server; the problem has been solved well by existing services like Office 365.
That said, your design is good. My suggestions would be the following:
You can use Azure WebJobs to build the polling agent. You can run it as a scheduled WebJob that does the polling and sends the mail, and it can be written very cleanly as a simple console app (see the sketch after these suggestions).
You can use Azure API Apps to build the SendMail() call, and you can use Azure AD auth on the API to authenticate callers via the Authentication and Authorization feature, easily securing your email server. You can also enable CORS to make sure you can receive requests from your other websites and process them.
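A rough sketch of that polling loop, in TypeScript purely for illustration (the real WebJob would be a C# console app; the table, columns, and the Database/SmtpClient interfaces here are made up):

```typescript
// Illustrative sketch of the polling agent's core loop. The table name,
// columns, and these two interfaces are assumptions, not a real API.
interface Database {
  query(sql: string, params?: unknown[]): Promise<any>;
}
interface SmtpClient {
  send(to: string, subject: string, body: string): Promise<void>;
}

interface QueuedMail {
  id: number;
  to: string;
  subject: string;
  body: string;
  attempts: number;
}

const MAX_ATTEMPTS = 20; // cap from the plan above

async function pollOnce(db: Database, smtp: SmtpClient): Promise<void> {
  // Select a batch of unsent mails that haven't exhausted their attempts.
  const batch: QueuedMail[] = await db.query(
    "SELECT * FROM mail_queue WHERE sent = 0 AND attempts < ? LIMIT 100",
    [MAX_ATTEMPTS]
  );

  for (const mail of batch) {
    try {
      await smtp.send(mail.to, mail.subject, mail.body);
      await db.query("UPDATE mail_queue SET sent = 1 WHERE id = ?", [mail.id]);
    } catch {
      // Leave sent = 0 so the next poll retries; just count the attempt.
      await db.query(
        "UPDATE mail_queue SET attempts = attempts + 1 WHERE id = ?",
        [mail.id]
      );
    }
  }
}
// A scheduled WebJob would simply call pollOnce() at each interval.
```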
Some issues I foresee for you:
Volume and scaling: you can only process so much email between polls. If that isn't enough, you will need to create another polling agent, which complicates things because the agents must coordinate so they pick different sets of emails to send. If your volume is going to be low, you should be fine.
Challenge: why can't the websites send the mail themselves, and then record it in the database for tracking? All you have to do is build a module or component that they use on their web pages to create and send the mail. Polymer 1.0 works well for this scenario.
Hope this helps to get you started.

How to keep the application alive without a client

My goal is to send an email out every 5 min from my application even if there isn't a browser open to the application.
I'm using FluentScheduler to manage the tasks, which works up until the server decides to kill the application for inactivity.
My big constraints are:
I can't touch the server. It is how it is and I have to work around it.
I can't rely on a client refreshing a browser or anything else along the lines of using client side scripts.
I can't use any scheduler that uses a database.
What I have been focusing on is trying to create an artificial postback.
Note: the server is load-balanced, so a solution could take advantage of that.
Is there any way that I can keep my application from getting killed by the server?
You could use a monitoring service like https://www.pingdom.com/ to ping the server at regular intervals. Just make sure it hits an endpoint that invokes .NET code and not a static resource.
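If you'd rather see the shape of a self-hosted pinger, it is just a loop. A minimal sketch in TypeScript (URL and interval are placeholders; a service like Pingdom exists precisely because this loop has to run somewhere that is always on):

```typescript
// Minimal sketch of a self-hosted pinger, assuming Node 18+ for global fetch.
const KEEPALIVE_URL = "https://yourapp.example.com/keepalive";

setInterval(async () => {
  try {
    // Hit a real controller action so .NET code executes and the app
    // stays warm, keeping FluentScheduler's timers alive.
    const res = await fetch(KEEPALIVE_URL);
    console.log(`keepalive: ${res.status}`);
  } catch (err) {
    console.error("keepalive failed:", err);
  }
}, 4 * 60 * 1000); // comfortably under typical idle-timeout windows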

Ruby on Rails chat application over port 80 which is hosting-site agnostic (no Flash and no WebSockets)

I want to build a chat-like application (i.e. bidirectional message passing to multiple connected clients). I looked at the Faye gem, but it opens a new port apart from port 80.
The big problem is that if the client is behind a firewall, access to all ports except 80 is restricted, and not all hosting sites support opening additional ports.
ActionController::Live has no mechanism to register clients, so a message cannot be pushed to the registered clients when a specific event occurs.
I'm looking for a solution where the live clients are stored in a collection (an array or something like that), and when any live client sends a message, the collection can be iterated and the message written to each connection. All of this must happen over port 80 only.
Good question - having implemented something similar, let me explain how it works:
Connections
A "live" web application is not really "live" at all - it's just got a persistent request; meaning it still works exactly the same as a "normal" Rails app, except clients don't close the connection (hence why you're interested in opening another port)
The way you handle the request is where the magic happens. This is as much to do with the client-side, as it is with Rails (server-side)
Clients
When you connect to a "chat" application, your browser opens a live connection with the server. This is typically done with server-sent events, Ajax long polling, or WebSockets.
The way this works is to open the connection using the normal Rails ActionDispatch middleware, which then allows the client to stay connected.
If you've played with the ActionController::Live functionality, you'll find it's not a typical controller action. It works more like a separate piece of infrastructure (in the way Resque or Redis does) that you drive from a controller action. This gives you room to do cool things.
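To make the "collection of alive clients" idea from the question concrete, here is a minimal sketch using server-sent events so everything stays on one HTTP port. It's written in TypeScript/Node rather than Rails, and the paths are placeholders:

```typescript
// Sketch of the "collection of live clients" idea using server-sent
// events. Everything runs over a single port.
import http from "http";

const clients = new Set<http.ServerResponse>(); // the "alive clients" collection

http.createServer((req, res) => {
  if (req.url === "/stream") {
    // A client registers by opening a persistent SSE connection.
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    });
    clients.add(res);
    req.on("close", () => clients.delete(res)); // deregister on disconnect
  } else if (req.url === "/send" && req.method === "POST") {
    // Any client can post a message; iterate the collection and broadcast.
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      for (const client of clients) client.write(`data: ${body}\n\n`);
      res.writeHead(204).end();
    });
  } else {
    res.writeHead(404).end();
  }
}).listen(80); // one port for both the app and the live stream
```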
Server
The way you'd handle something like this is to separate the "live" functionality from the "normal" Rails app. It's one of the current shortcomings of Rails - it's probably better to implement something like Node.js with socket.io to handle the live data (with an endpoint like chat.yourapp.com), whilst using Rails to handle authentication & authorization.
From a server perspective, its job is to handle incoming & outgoing requests -- not to maintain persistent connections. So I guess you may want to look at ways you could "outsource" the websocket connectivity. Admittedly, my experience is slightly thin in this area, so you may do well searching the net.
Solutions
We've had a lot of success using a third-party system called Pusher
This is a WebSocket system which allows you to open a persistent connection as a client, and it integrates with Rails in a similar way to Redis (you can push to it).
This means you can host the "chat" application with Rails (http://yourapp.com/chat), send the messages to your Rails app (http://yourapp.com/chat/send), and handle the incoming chats from Pusher (or similar).
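To give a feel for the shape of it, here is the trigger side sketched with Pusher's Node SDK (the Rails pusher gem mirrors this call; credentials and channel names are placeholders):

```typescript
// Rough shape of the integration using Pusher's Node SDK.
import Pusher from "pusher";

const pusher = new Pusher({
  appId: "APP_ID",
  key: "APP_KEY",
  secret: "APP_SECRET",
  cluster: "mt1",
});

// Your /chat/send action validates the message, then triggers an event;
// every browser subscribed to the "chat" channel receives it over
// Pusher's persistent connection instead of yours.
pusher.trigger("chat", "new-message", { user: "alice", text: "hello" });
```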
Maybe you want to use my open-source comet web server (https://github.com/TorstenRobitzki/Sioux). There is a Ruby web chat example. I used it to implement an interactive role-playing map with Rails (http://dungeonpilot.com).

Delphi XE2 - How to get the IP of a specified website?

I have a program which checks a PHP file on a web server to see if the user is verified. The PHP file runs through the DB and echoes "verified" if they are.
Now, people are easily bypassing the verification system by installing XAMPP, routing my server to 127.0.0.1 in their hosts file, and then setting up a script that echoes "verified".
I want to be able to check the IP address of my domain to see whether it is routing to 127.0.0.1.
How would I go about resolving the IP address of a domain in Delphi?
I used to use a similar hack to get around ICQ server-side verifications. Very convenient when I wanted to test alpha/beta builds that I was not invited to :-)
Indy, which ships with Delphi, has a TIdStack.ResolveHost() function, and a separate TIdDNSResolver component, both of which can be used to get the domain's IP(s). It also has a TIdStack.LocalAddresses property to retrieve the local IPv4 addresses. Or you can just use the socket API gethostbyname() or getaddrinfo() functions directly, along with platform-specific APIs to enumerate the local IPs, like the GetAdaptersAddresses() function on Windows.
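To illustrate the concept (not the Delphi API), here is the check sketched in TypeScript; the domain name is a placeholder. dns.lookup goes through the OS resolver, so a hosts-file override will show up here:

```typescript
// Concept sketch (in Delphi you'd use Indy's TIdStack.ResolveHost()).
// Node's dns.lookup consults the hosts file, unlike dns.resolve4,
// which queries DNS directly and would miss the override.
import { lookup } from "dns/promises";

async function isRoutedToLoopback(domain: string): Promise<boolean> {
  const { address } = await lookup(domain, { family: 4 });
  return address.startsWith("127."); // hosts-file hijack to localhost
}

isRoutedToLoopback("yourdomain.example.com").then((hijacked) =>
  console.log(hijacked ? "routed to loopback" : "looks normal")
);
```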
However, rather than having the PHP script simply echo plain text back to your app, a much more secure option that does not require you to verify IPs is to have your app create a dynamically generated nonce value and send it to the PHP script; have the script process it (hash it, whatever is needed) using an algorithm that only you know, and then send it back to the app. The app can perform the same algorithm and compare the results. Unless someone takes the time to reverse engineer your app, they will not be able to reproduce your algorithm or fake its results with their custom XAMPP scripts.
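Here is the shape of that nonce exchange, sketched in TypeScript with HMAC-SHA256 standing in for the "algorithm only you know" (the secret is a placeholder, and in reality one side is your Delphi app and the other is the PHP script):

```typescript
// Sketch of the nonce/challenge idea. Both sides share a secret and an
// algorithm -- here, HMAC-SHA256 as a stand-in.
import { createHmac, randomBytes, timingSafeEqual } from "crypto";

const SHARED_SECRET = "replace-with-your-secret"; // embedded in app & script

// App side: generate a fresh nonce per verification request.
const nonce = randomBytes(16).toString("hex");

// Server side (the PHP script, sketched here): hash the nonce and echo it.
function serverResponse(receivedNonce: string): string {
  return createHmac("sha256", SHARED_SECRET).update(receivedNonce).digest("hex");
}

// App side: compute the same value locally and compare in constant time.
const expected = createHmac("sha256", SHARED_SECRET).update(nonce).digest("hex");
const actual = serverResponse(nonce); // in reality, an HTTP round trip
const verified = timingSafeEqual(Buffer.from(expected), Buffer.from(actual));
console.log(verified ? "verified" : "tampered");
```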
Even better, use SSL/TLS to encrypt your connection to your domain server, and give your domain server an SSL certificate that your app can verify before it exchanges any data with your PHP script. If you do just this much, you can continue using the plain-text echo since SSL/TLS will verify you are connected to your domain for you.

User Disconnection Detection (i.e. "Online Status") Daemon

Summary: is there a daemon that will do postbacks when a user connects/disconnects via TCP, or is it a good idea to write one?
Details:
There are a number of questions based around this already, but I believe this is a different "twist" on it. We're writing a Ruby on Rails web application, and we would like to be able to tell if a user is "online" or "offline", where the following definitions apply:
"online" - the user's browser is open and maintaining a TCP connection to one of our servers.
"offline" - the user's browser is no longer connected to one of our servers.
We're thinking that a convenient way of doing this is to run a completely separate "online state" server that each of our users will connect to (exactly once):
when a connection is made to the "online state" server, it will postback to our actual RoR site and let it know "this user just logged on".
when a connection is lost from the "online state" server, it will postback to our actual RoR site and let it know "this user just logged off".
This methodology seems reasonable and keeps things quite modularized (the online state server, for instance, will be quite simple, which is nice). We're able to write this online state server, but we have the following questions:
Any specific problems with the above architecture that we haven't taken into account?
Is there a daemon or application out there that does this already? Why reinvent the wheel, if it has already been written?
Is there a push server out there that offers this functionality (i.e. it maintains connections to the users, but will postback or send notifications upstream to the web servers when a user connects or disconnects?)
Is this something you envisage users would install on their systems?
If you are looking for a browser-based system, WebSockets are probably your only option, using something like Socket.IO (http://socket.io/).
The node.js socket server provided as part of this project can be found on github: http://github.com/LearnBoost/Socket.IO-node
Node.js is a great platform designed for exactly this problem domain and there are a number of WebSocket servers for node.
Unless your app is entirely Ajax-based and uses a single parent page, you would need to create a persistent parent frame containing the socket that wraps your application; each time the user clicks a link, the page unloads and reloads, resulting in disconnection and reconnection from the state server.
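To sketch how the pieces could fit together with Socket.IO and Node (the Rails callback URLs and the userId query parameter are assumptions for illustration):

```typescript
// Sketch of the "online state" server using Socket.IO with postbacks
// to the Rails app. URLs and the userId query param are placeholders.
import { Server } from "socket.io";

const io = new Server(3000, { cors: { origin: "https://yourapp.example.com" } });

async function postback(event: "online" | "offline", userId: string) {
  // Notify the Rails app; in production you'd sign or authenticate this.
  await fetch(`https://yourapp.example.com/presence/${event}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ user_id: userId }),
  });
}

io.on("connection", (socket) => {
  // Identify the user; here via a (hypothetical) query param sent on connect.
  const userId = String(socket.handshake.query.userId ?? "");
  if (!userId) return socket.disconnect(true);

  postback("online", userId);
  socket.on("disconnect", () => postback("offline", userId));
});
```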
