Trying to wrap my head around some logic.
I'm creating a simple turn-based game with Node.js and Socket.io. The idea is that each user logs in via a front-end framework (a quick Ruby on Rails scaffold), and once they are logged in they can refresh the page or close the browser and remain logged in, as usual, until they log out.
I want this same persisted authentication over a WebSocket, so that 'in-game' users can close their browser and come back at any point, and I can relay to other users when a player disconnects or reconnects. A player could also join and leave games, but could obviously only be in one at a time.
My guess is that on each page load the new socket.id a user connects with needs to be stored in the database, perhaps in the users table? Or is there a simpler way to tie the current user to a socket.id?
Am I going about this the wrong way? I can't seem to find any good examples/documentation similar to what I want. Some code examples or a starter app to push me in the right direction to achieve the basic idea of what I am trying to accomplish would be amazing.
You can make use of tokens for this purpose. A good option would be using JSON Web Tokens (JWT).
JSON Web Token (JWT) is an open standard that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA.
- https://jwt.io
One possible way to use JWTs in your case goes like this:
1. The user logs in.
2. The server issues a new JWT on a successful login.
3. The JWT is sent to the client and stored client side (e.g. in a cookie or local storage).
4. On every socket connection event, the client sends this token to the server for validation.
5. The server accepts the connection only if the provided token validates, and closes the connection otherwise.
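As a minimal sketch of steps 4 and 5, assuming a Node server with Socket.io v4 and the jsonwebtoken npm package (the secret, claim names, and event names are illustrative):

```js
// Server side: validate the JWT on every connection attempt (steps 4-5).
const jwt = require('jsonwebtoken');
const { Server } = require('socket.io');

const io = new Server(3000);
const SECRET = process.env.JWT_SECRET; // same secret used to sign tokens at login

io.use((socket, next) => {
  // Socket.io v4 clients pass credentials as io(url, { auth: { token } })
  const token = socket.handshake.auth.token;
  try {
    socket.user = jwt.verify(token, SECRET); // throws if invalid or expired
    next();
  } catch (err) {
    next(new Error('unauthorized')); // the connection is refused
  }
});

io.on('connection', (socket) => {
  // The verified claims tie this socket to a user, so no socket.id needs
  // to be persisted in the database across page loads.
  io.emit('player-connected', socket.user.sub);
  socket.on('disconnect', () => io.emit('player-disconnected', socket.user.sub));
});
```

Since the client supplies the stored token each time it connects, a page refresh simply reconnects with the same token and the player stays authenticated as far as the game is concerned.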
The main advantage of using JWTs is that there is no need to store additional data server side because tokens can be validated using cryptographic methods such as digital signatures.
More on JWTs: https://jwt.io
Related
I can't help but think I've implemented OpenID slightly incorrectly, but I also can't establish whether the way I've implemented it is bad or not.
Scenario:
Website - Used forms authentication before being updated to use OWIN. Forms auth has been stripped out.
The website now supports OpenID to Okta. This is being implemented for a large company among our users, to facilitate their logins. This is functional.
The method I use for the site models how Microsoft does logins. On email domain detection, we redirect the user to the login page for their domain - in this case, Okta. We receive the callback, look up the user in our existing data, and generate a cookie based on that data (or create a new user account if they don't have one).
Essentially, just using Okta to confirm they are a valid user, and then we log them in with our user data. We foresee doing this for other companies as well.
Problem:
I have a desktop (WPF) client that requires a login to our website. This talks to APIs that already exist, using an auth key/token system we built many years ago. Ideally, we do something similar: use Okta to verify the user is a user of that system, then generate a token that can be used for these APIs.
Here is where I'm not sure I've done this appropriately.
The desktop client calls an API endpoint on our site with the email domain the user entered. We verify that the user's domain is allowed to use SSO, and if so, we issue back a challenge endpoint for the client to call. The desktop client then opens this challenge endpoint in the user's default browser.
This challenge endpoint is an endpoint on OUR website that triggers the challenge to the IdP. After login, a callback on OUR website processes the auth response. We verify that the user's account is valid and take the refresh token from the response. The refresh token, along with an identifier for the user, is then sent back to the desktop client via localhost:randomPort so the client can consume them. (Note that I do encrypt the refresh token and identifier before returning them to the client.)
This refresh token is then POSTed to OUR website along with the identifier (so we can identify which IdP we should call), and we use an OIDC client to verify that the refresh token is still valid. If it is, we generate an app token and return it.
Is there a glaring issue with how this is implemented that I'm not seeing? How can I do this differently?
You seem to be missing the role of an Authorization Server (AS) that you own, to manage connections to other systems and to issue tokens to your apps.
You seem to have good separation and to be doing quite a few things well - e.g. you are using your own tokens rather than foreign Okta tokens. The main issue is likely to be growing the system.
PREFERRED BEHAVIOUR
An AS should result in simpler code and a system that is easier to extend:
- You can add new authentication methods quickly.
- This should involve just adding a connection (e.g. Okta) to your AS.
- Doing so requires zero code changes in your UIs and APIs.
- Your UIs just use standard OpenID Connect flows and call AS endpoints, regardless of the authentication method used.
- Your APIs just verify tokens issued by the AS, then authorize requests, regardless of the authentication method used.
Some scripting is needed in the AS, but typically this is small.
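To make the API side of this concrete, here is a minimal sketch of verifying an AS-issued token in a Node API using the jose package; the issuer, JWKS URL, and audience are placeholders for whatever your AS actually exposes:

```js
// API side: verify an access token issued by your own AS.
const { createRemoteJWKSet, jwtVerify } = require('jose');

// The AS publishes its token-signing keys at a JWKS endpoint
const JWKS = createRemoteJWKSet(new URL('https://login.example.com/oauth/jwks'));

async function authorize(req) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  // Signature, issuer and audience are all checked here; which IdP the
  // user came through (Okta or otherwise) is invisible to the API.
  const { payload } = await jwtVerify(token, JWKS, {
    issuer: 'https://login.example.com', // placeholder
    audience: 'desktop-api',             // placeholder
  });
  return payload; // sub, scope, etc. drive the authorization decision
}
```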
FEATURES
In terms of what an AS should do for you, have a browse of the Curity Concepts Pages. I work there, and we try to write about the science of OAuth and the common extensibility features software companies need.
CHOOSING YOUR MOMENTS
Integrating an AS and getting past all the blocking issues is a gradual journey, though, and involves learning. So it requires choosing your moments, running spikes, and getting buy-in from your stakeholders.
The main objective should always be simple and standard code in your apps, that is easy to scale. OAuth and the Authorization Server give you design patterns that help with this.
I am building an API for my Rails app. Through that API I will log users in and allow them to interact with their data.
On top of that user authentication, I would also like to make sure that only my iOS app, and eventually my own web app, has access to the API.
I want to make sure no one else can use the API, so on top of the user authentication I would like to protect the API with a token for each of my apps.
How do you usually solve this problem? Would I have to pass that token on each call in order to verify that the call is coming from a valid client, and to identify which client it is (web vs iOS)?
I would very much appreciate any pointers, or a link to an article explaining how to deal with this.
As you are already using JWTs to authenticate your users, why not use JWT's ability to include additional information in the token - in this instance, some form of hashed string that you can verify server side as a valid "client id"?
On each request you could refresh the string.
A kind of dual authentication on each request: the user and the client.
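A minimal sketch of the idea with the jsonwebtoken package (the claim name, client ids, and expiry are illustrative):

```js
// The JWT carries both the user and a client identifier.
const jwt = require('jsonwebtoken');
const SECRET = process.env.JWT_SECRET;

function issueToken(userId, clientId) {
  // clientId would be e.g. 'ios' or 'web', set when the token is issued
  return jwt.sign({ sub: userId, client: clientId }, SECRET, { expiresIn: '15m' });
}

function verifyRequest(token) {
  const payload = jwt.verify(token, SECRET); // throws if invalid or expired
  if (!['ios', 'web'].includes(payload.client)) {
    throw new Error('unknown client');
  }
  return payload; // payload.client tells you which app is calling
}
```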
I am working with an Arduino that I want to send data to a remote or local Rails RESTful API of mine. When building its front end, I can log in with Devise and authenticate. But I am wondering what happens when you want a third-party device to POST data to the backend?
One choice could be to use randomly generated long hashes as keys, as Twitter does (for example a client key and an API key), which of course is not secure in itself but decreases the chances that someone can easily POST data to another account.
However, if I am right, the data will be sent over an HTTP connection, so it could be easily sniffed. There is no problem sending temperature data, but if someone decides to send RFID IDs and names etc., it could be a vulnerability.
How could I send data with a POST request to a RESTful Rails backend API:
authenticated?
secured?
authenticated?
You will need an endpoint that the third party can call (let's call him Zed). Zed sends a POST request to that endpoint with his email address. Devise then sends an email to Zed with a confirmation link that contains a confirm_token. Zed clicks the link, which opens a page where he can enter a password. Once entered, he is logged in and an auth_token is stored against his user id. Subsequently he can use that auth_token to make further requests by passing it in an Authorization header. The confirm_token is throwaway (you can set it to auto-expire after a given time period).
Obviously this requires Zed to manually create his account and log in. Even if you set up a third-party 'developer program', you still need those developers to sign up, and you need to generate tokens for them that they can pass in requests to your API. All of this should of course be done over HTTPS. Devise provides almost all of this capability out of the box.
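For illustration, this is roughly what one of Zed's subsequent authenticated requests could look like, sketched in Node (the endpoint and payload are hypothetical; the `Token token=...` header format matches Rails' built-in HTTP token authentication):

```js
// Device side: once Zed has an auth_token, every request carries it.
// Uses Node 18+'s built-in fetch; AUTH_TOKEN is whatever the flow above issued.
const AUTH_TOKEN = process.env.AUTH_TOKEN;

async function postReading(temperatureC) {
  const res = await fetch('https://api.example.com/readings', {
    method: 'POST',
    headers: {
      // Format accepted by Rails' authenticate_with_http_token
      'Authorization': `Token token=${AUTH_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ temperature: temperatureC }),
  });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
}
```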
secured?
HTTPS helps with the 'sniffing' aspect. The method above is secure, since only people who provide an email account they actually control can create accounts and get tokens that they can then use for later requests. However, you could also use a mobile phone number/SMS as a second factor (google 'two-factor authentication').
Without authentication - well, sort of
The only other option I can think of is that you issue known users a 'signing key'. They encrypt their request with this key, and only the server holds the matching key to decrypt it. Since the keys should only be known to those two parties, the data can be sent over HTTP: if anyone sniffs it, they almost certainly cannot decrypt it to see what the real data is. All they can really do is mimic the request and keep sending that same request to the server repeatedly in a DoS attack.
But you still have to solve the problem of how you verify WHO you are giving keys to - i.e. you still need to verify who Zed is somehow. Do you plan to do that offline and then email the 'verified' individual their key? If you're using RoR, I still recommend sticking with Devise, as most of the grunt work is done for you already.
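A simpler symmetric variant of that signing idea is an HMAC over the request body using a per-device shared secret issued out of band; this sketch uses Node's built-in crypto module rather than the key-pair scheme described above. Note that a signature authenticates the request but does not hide the payload, so confidentiality still needs encryption or HTTPS:

```js
const crypto = require('crypto');

function signRequest(body, deviceSecret) {
  // The device sends this digest alongside the request, e.g. in a header
  return crypto.createHmac('sha256', deviceSecret)
               .update(JSON.stringify(body))
               .digest('hex');
}

function verifySignature(body, signature, deviceSecret) {
  const expected = Buffer.from(signRequest(body, deviceSecret), 'hex');
  const received = Buffer.from(signature, 'hex');
  // timingSafeEqual throws on length mismatch, so guard first
  return expected.length === received.length &&
         crypto.timingSafeEqual(expected, received);
}
```

A replayed request would still verify, so in practice you would also sign a timestamp or nonce along with the body.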
Currently I have two servers set up, each handling their own thing, but I want to have a unified login between them. Right now one portal's login form simply sends the username/password through an API to the Rails portal, which sends back an auth token; we then store that in our session and use it for future authentication and API calls.
So the problem becomes that a user visiting our site has to log in once in each portal, since the Ruby API doesn't communicate with ours, and the Ruby side doesn't do anything with the session when the API is pinged other than send us back an auth token.
My initial idea was to have the Rails side create the session when we send the credentials to the API, but apparently that won't work, as it won't be able to set the session id in the user's browser - or at least that's what I was told.
If the Ruby side moved over to using the database for session storage, would that alleviate this issue? Basically, I want to keep most of the changes on the Ruby side for this.
I have implemented session sharing between Ruby on Rails and PHP using memcached, and got it working. If you are familiar with the memcached approach it should be useful for you, and if you need any help with it I can share what I did.
We wound up going a slightly different route. Basically, each side looks for the auth token in the database, and we pass it around via query strings on each link to the other. For example, if the user logs in on the PHP side, the Ruby side receives the username and password via the API, creates an auth token and updates the database, then sends back the token. The PHP side then stores that token in the session and sends it back via query strings (?authToken=blahblah) to the Ruby side, which is always listening for them. If it sees the auth token, it checks the database to make sure there's a match, and if there is, the user is authenticated in the Rails session.
Conversely, the Ruby side's login form simply updates the auth token in the database, and the links that point to the PHP side also pass the auth token. That side does the same check and will authenticate in the case that there is a match.
I'm trying to build the foundation for my iPhone app and server. I have users who will sign up and sign in from the iPhone app. In a normal website login, the HTTP server provides cookies that allow the user's subsequent requests to remain authenticated. How should I handle this on the iPhone? Should I just send the username/password every single time I make an NSURLConnection GET or POST? That seems excessive. Or do I use the ASIHTTPRequest framework to handle cookies? Can anyone point me in the right direction for a proper implementation?
Thanks!
Sending username and password in every request is not great.
You can use anything you want to send cookies; they're just another HTTP header. But that raises the question of what goes in the cookie, and that depends on your client/server architecture. Web apps use session keys because traditionally web clients haven't held any state, so the app server had to. Native clients can hold all sorts of state, and so generally don't need the server to provide it.
But you do need authentication. That's what things like OAuth and OAuth 2 are for. They allow you to authenticate once and then use tokens that can be invalidated server-side - kind of like very long-lived sessions without data.
They are a bit complicated, but there are open source libraries for both the server and client pieces, or you can roll your own. Most of the complication is in obtaining the original token, which you can short-circuit if you own both the client and the server. OAuth 1 can get pretty complicated because all requests are signed with a secret token; OAuth 2 can be as simple as a shared secret (thus requiring SSL) in a cookie.
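As a rough sketch of that "long-lived token" idea when you own both client and server - Node/Express assumed, with a stub credential check and an in-memory token store that a real deployment would replace with a database:

```js
// Authenticate once, then send the token on every request.
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

const tokens = new Map(); // token -> username; server-side, so revocable

// Stub credential check, for illustration only
const checkPassword = (user, password) => user === 'demo' && password === 'demo';

app.post('/login', (req, res) => {
  const { user, password } = req.body;
  if (!checkPassword(user, password)) return res.sendStatus(401);
  const token = crypto.randomBytes(32).toString('hex');
  tokens.set(token, user);
  res.json({ token }); // the client stores this instead of the password
});

app.get('/me', (req, res) => {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  const user = tokens.get(token);
  if (!user) return res.sendStatus(401); // tokens.delete(token) revokes access
  res.json({ user });
});

app.listen(3000);
```

The iPhone client then sends `Authorization: Bearer <token>` on each request instead of the username/password pair.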