I'm using the Mosquitto broker. I add each user to the users list in the password file so that users are authenticated when they subscribe to the broker. Now, if the number of users in this file grows beyond 100,000, will that put a heavy load on Mosquitto? I may end up with more than a million users.
I also want to restrict each user to their own topic, so I have to add a topic name and username entry to the ACL file, which makes that file huge as well.
How do I manage authentication and ACLs for all these users without problems?
For that many users a flat file is a REALLY bad idea. Even if the data were parsed into an easily searched data structure, finding users in a file that size to edit or remove them would be a real problem.
This is exactly what the authentication plugin interface is for.
You can write your own using the API, or use the plugin JPMens has written, which lets you keep the username/password pairs and the ACL in a selection of different databases (e.g. MySQL, PostgreSQL, MongoDB, Redis...).
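For example, with that plugin (mosquitto-auth-plug) backed by MySQL, the broker configuration looks roughly like this. This is a sketch only, so check the option names and query formats against the README of the version you build:

    auth_plugin /usr/local/lib/auth-plug.so
    auth_opt_backends mysql
    auth_opt_host localhost
    auth_opt_port 3306
    auth_opt_dbname mqtt
    auth_opt_user mqtt
    auth_opt_pass secret
    auth_opt_userquery SELECT pw FROM users WHERE username = '%s'
    auth_opt_aclquery SELECT topic FROM acls WHERE username = '%s'

With users and ACL rows in a database, adding, editing, and removing a user becomes an indexed query instead of a scan through a million-line file.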
Is it possible to secure the Neo4j browser so users can only execute specific queries? I would like to provide generally open access to the browser, but not allow users to delete anything.
Neo4j 3.1.x security features include role-based permissions. Since browser users must log in with a username and password, they are also subject to this security model.
The authentication and authorization section of the Neo4j operations guide should be helpful to you. The section describing native roles already available to you gives a good visual of what is allowed per role.
It sounds like the reader native role is the one that would make sense for browser users, as deletion requires write permissions.
Finer-grained permissions are possible, but they are based entirely upon user-defined procedures, so they are nowhere near as simple as using the provided native roles and permissions.
However, if certain users should only be able to run a limited set of well-defined queries, then custom roles and user-defined procedures should do the trick.
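For example, creating a browser user and limiting them to read-only access is just two procedure calls (as provided in Neo4j 3.1.x Enterprise; procedure names may differ in later versions):

    // Create the user, then grant the built-in read-only role.
    CALL dbms.security.createUser('browseruser', 'initialPassword', true);
    CALL dbms.security.addRoleToUser('reader', 'browseruser');

The third argument to createUser forces a password change on first login.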
I have a web application (ASP.NET MVC) which uses Azure Blob Storage for storing documents and images. Each user has specific access rights to the blobs, and these rights are stored in the web application's database.
Currently I have a quick temporary solution that uses the web application as a middle layer: it performs the authorization check and, if the client has read access to the blob, retrieves it from Azure and delivers it to the client. This is of course not the optimal way of doing it, for many reasons.
I have started to rebuild this part using SAS (Shared Access Signatures), but I can't find a good source on setting up a system that will scale well as the number of users and files grows. I am expecting around 100 users and around 100,000 blobs.
As I see it, I have two options.
1) Each file has one signature, stored in the web application's database, which is used for all users who have access to the file. This would be the easy way to do it, but if a user's access is later revoked, they can still reach the file if they kept the link from earlier.
2) Each file has a separate signature for every user who has access to it. This makes it easy to revoke access, but the number of signatures will be massive; will that have any side effects?
Are there any more options?
Any thoughts on this are greatly appreciated!
Rather than creating a SAS per user, it would be better to group the files by role and map users to roles; that scales easily regardless of the number of users.
Also, giving users direct access to blobs is not recommended, since you want to distribute your blob content through your application. So grant access to the application, scoped to the role of the user.
See the article below on generating a SAS that expires after two minutes, so that users holding a link cannot access the image for long:
http://www.dotnetcurry.com/windows-azure/901/protect-azure-blob-storage-shared-access-signature
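For illustration, here is a minimal sketch of issuing such a short-lived, read-only SAS on demand. It uses the Python azure-storage-blob SDK with placeholder account and container names; the same pattern exists in the .NET SDK your ASP.NET app would use:

    # Minimal sketch: issue a read-only SAS URL that expires in two minutes.
    from datetime import datetime, timedelta, timezone

    from azure.storage.blob import BlobSasPermissions, generate_blob_sas

    ACCOUNT_NAME = "mystorageaccount"  # placeholder
    ACCOUNT_KEY = "<account-key>"      # keep this server-side only
    CONTAINER = "documents"            # placeholder

    def short_lived_url(blob_name: str) -> str:
        """Return a blob URL whose SAS token expires two minutes from now."""
        sas = generate_blob_sas(
            account_name=ACCOUNT_NAME,
            container_name=CONTAINER,
            blob_name=blob_name,
            account_key=ACCOUNT_KEY,
            permission=BlobSasPermissions(read=True),
            expiry=datetime.now(timezone.utc) + timedelta(minutes=2),
        )
        return (f"https://{ACCOUNT_NAME}.blob.core.windows.net/"
                f"{CONTAINER}/{blob_name}?{sas}")

Because the token is generated per request and expires almost immediately, you don't need to store signatures at all, which sidesteps both options in the question.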
Hope this helps. :)
I have a Rails app acting as an OAuth 2.0 provider (using the oauth2-provider gem). It stores all the information related to users (accounts, personal information, and roles). There are 2 client apps that both authenticate through this app. The client apps can use the client_credentials grant type to find users by email and do other things that don't require an authorization code. Users can also log in to the client apps using the password grant type.
Now the issue we're facing is that the users' roles are defined globally on the resource host. So if a user is given an admin role on the resource host, that user is admin on both clients. My question is: what should we do to have more fine-grained access control? I.e. a user can be an editor for app1 but not for app2.
I guess the easy way to do this would be to change the role names like so: app1-admin, app2-admin, app1-editor, app2-editor, etc. The bigger question is: are we implementing this whole system correctly; that is, should we be storing so much info on the resource host, or should we denormalize the data onto the client apps?
A denormalized architecture would look like this: all user data on the resource host, localized user data on each client host. So user@example.com would have his personal info on the resource host and have his editor role stored on client app1. If he never uses it, app2 could be completely oblivious of his existence.
The drawback to the denormalized model is that there would be a lot of duplication of data (account ids, roles) and code (User and Role models on each client, separate management interfaces, etc.).
Are there any drawbacks to keeping the data separate? The client apps are both highly trusted (we made them both), but we are likely to add additional client apps in the future which will not be under our control.
The most proper way to use OAuth and similar external authorization methods, as I see it, is strictly for authentication purposes. All the business/authorization logic should be handled on your server side at all times, and you should always keep a central record of the user, linked to the external info per external type of auth service.
Having a multilevel/multipart set of access rules is also a must if you want your setup to be scalable and future-proof. This is a standard design that is separate from any authorization logic and always in direct relation to the business rules.
Stack Overflow does something like this, asking you to create an actual account on the site after you log in using an external method.
Update: If the sites are really similar, you can subset this design into an object per application that keeps the application-specific access rules. That object should also inherit from a global object holding the global rules (so you can, for example, impose a ban application-wide or enterprise-wide).
I would go for objects that contain access settings, plus roles that can be related to instances of both application-level and global settings, purely to automate and compact the assignment of access.
Actually, you can use this design even if the applications are not very similar. It will help you avoid redundant settings and roles that are meaningless business-wise. You can identify a role purely by its job title/purpose, and then impose your restrictions by linking it to an appropriate access-settings setup.
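A minimal sketch of that layering in Python (the names are hypothetical, not from the oauth2-provider gem; a global ban overrides everything, and each app only sees its own settings object):

    from dataclasses import dataclass, field

    @dataclass
    class AccessSettings:
        """Global rules; an enterprise-wide ban overrides everything."""
        banned: bool = False
        permissions: set[str] = field(default_factory=set)

    @dataclass
    class AppAccessSettings(AccessSettings):
        """Application-specific rules layered on top of the global ones."""
        app_id: str = ""

    @dataclass
    class User:
        email: str
        global_settings: AccessSettings = field(default_factory=AccessSettings)
        app_settings: dict[str, AppAccessSettings] = field(default_factory=dict)

        def can(self, app_id: str, permission: str) -> bool:
            if self.global_settings.banned:
                return False  # global ban wins
            app = self.app_settings.get(app_id)
            if app is None or app.banned:
                return False  # unknown app, or app-level ban
            return (permission in app.permissions
                    or permission in self.global_settings.permissions)

    # user@example.com is an editor on app1 only; app2 never sees that role.
    u = User("user@example.com")
    u.app_settings["app1"] = AppAccessSettings(permissions={"edit"}, app_id="app1")
    assert u.can("app1", "edit") and not u.can("app2", "edit")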
I have ejabberd set up using external_auth to authenticate against the user database of my web application. What I would like is to be able to create a MUC (chat room) for site moderators, and automatically add those users to the chat, to the exclusion of all other users.
Eventually I would also like to be able to map my site's groups functionality to MUCs in ejabberd.
The external authentication API for ejabberd doesn't seem to provide for fine-grained access control, basically only allowing you to query whether a user is registered and whether a username / password combination successfully authenticates a user.
The only reference I've seen to ACLs for MUCs is here:
http://www.ejabberd.im/aclpopulate
But that seems to require setting privileges through the web admin interface.
Is there no way to do this automatically from external auth?
To answer my own question: it doesn't seem possible to do what I need using external auth.
I ended up integrating ejabberd commands into the user/group lifecycle of my web app. That was quicker than I had anticipated, and has the added bonus of being a zillion times faster than using external auth. I use ejabberd's internal user database, calling ejabberdctl to create users, update passwords, add users to and remove them from shared rosters, and create MUCs.
To help with that process I created a PHP wrapper for ejabberdctl which is freely available on github:
https://github.com/tomlancaster/Ejabberd-Wrapper-PHP
Please feel free to use and abuse it as you wish.
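If you're not on PHP, the same idea is just a thin wrapper around the ejabberdctl binary. A minimal Python sketch (paths and hosts are placeholders; create_room is provided by mod_muc_admin, so enable that module first, and command names can vary slightly between ejabberd versions):

    import subprocess

    EJABBERDCTL = "/usr/sbin/ejabberdctl"  # placeholder path

    def ctl(*args: str) -> None:
        """Run an ejabberdctl command, raising if it exits non-zero."""
        subprocess.run([EJABBERDCTL, *args], check=True)

    def register(user: str, host: str, password: str) -> None:
        ctl("register", user, host, password)

    def change_password(user: str, host: str, password: str) -> None:
        ctl("change_password", user, host, password)

    def create_room(room: str, muc_service: str, host: str) -> None:
        # Requires mod_muc_admin to be enabled in ejabberd's config.
        ctl("create_room", room, muc_service, host)

    if __name__ == "__main__":
        register("moderator1", "example.com", "s3cret")
        create_room("moderators", "conference.example.com", "example.com")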
If you have your own authentication module, you can redirect ejabberd's authentication process to it. In the ejabberd_auth.erl file, redirect authentication by modifying the two functions check_password_with_authmodule/3 and check_password_with_authmodule/5, and make your authentication module return the same terms those two functions return.
If your authentication module runs on a different machine, make a socket connection to communicate with it, fetch the result, and hand that result back to check_password_with_authmodule.
After these changes, rebuild ejabberd and start it.
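For the remote case, the service on the other machine can be anything that answers auth requests over the socket. Here is a minimal Python sketch using a hypothetical line-based "user:password" protocol; the actual wire format is whatever you implement on the Erlang side:

    # Hypothetical auth service: reads "user:password\n" per connection,
    # answers b"1\n" (accept) or b"0\n" (deny).
    import socketserver

    USERS = {"alice": "wonderland"}  # stand-in for your real user store

    class AuthHandler(socketserver.StreamRequestHandler):
        def handle(self) -> None:
            line = self.rfile.readline().decode().rstrip("\n")
            user, _, password = line.partition(":")
            ok = USERS.get(user) == password
            self.wfile.write(b"1\n" if ok else b"0\n")

    if __name__ == "__main__":
        with socketserver.TCPServer(("0.0.0.0", 5555), AuthHandler) as srv:
            srv.serve_forever()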
I am about to write a Ruby on Rails app which will use sub-domains to authenticate users. We will have two types of accounts:
user accounts
domain accounts
Users will thus be able to belong to multiple domain accounts using the same credentials. I would like a domain account administrator to be able to search for particular users and add them to their domain.
In addition to simply creating a domain account in the database, I want to setup an actual account on the machine (linux-based) so that users can drop files into a special directory and we can run some scripts to import that new data. Alternatively, I may write a client/server script to make this process easier.
All of this I believe I can do, however, as soon as the project attains a certain number of domain accounts, it will be necessary to figure out how to cluster the domain accounts appropriately so that we can have multiple machines.
From a database standpoint, this is fairly easy and there are lots of tutorials on how to cluster MySQL or whichever SQL server I decide to use. So my question really pertains more to machine accounts as well as how to cluster a Rails app.
If you want a comparison, think of this project as something like GitHub or Beanstalk, but with data that isn't source-control related.
Does anybody have any experience with this or know of any really good articles/books to get me started?
Thanks very much!
I suggest you look at using one of the PAM modules that lets you do account authentication against a SQL database. That way you just add the domain account to the SQL database and you get UNIX accounts (on all your servers) automagically, for free. So the clustering should just happen for free too...
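For example, with pam_mysql the auth stanza might look roughly like this. This is a sketch with placeholder database, table, and column names; you would typically pair it with libnss-mysql so the accounts are also visible to the system:

    # /etc/pam.d/common-auth (fragment)
    auth sufficient pam_mysql.so user=pamuser passwd=secret host=dbhost \
        db=appdb table=domain_accounts usercolumn=name \
        passwdcolumn=password crypt=3

Every app server pointed at the same database then sees the same UNIX accounts, so adding a machine to the cluster is just a matter of installing the same PAM/NSS configuration.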