What URI scheme should be used for a local concept in RDF?

Consider the case where you have some knowledge you want to name and put into a knowledge graph format like the Resource Description Framework (RDF). However, you don't have an email address, a web domain, or access to a namespace authority with which to generate a URI for the RDF knowledge graph.
This rules out tag URIs, Cool URIs, and most other schemes, respectively.
Here are some possible options, none of which I am entirely happy with, for the reasons given:
http://localhost/myConcept, but this implies a resolvable location. It might also imply identical concepts across all interpreters of your knowledge graph, even though each machine's localhost is different.
file:///myConcept (file scheme), but this implies there is a resolvable physical location.
urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6 (uuid scheme), but this doesn't let you put a human-readable component in the URI (see the sketch below these options). It would be great if the uuid scheme allowed urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6/myConcept
Magnet URIs were envisioned to help bridge local machines and the web. But they remain a draft, aren't well defined, and the examples reuse other schemes that depend on naming authorities.
data:,myConcept (data scheme), but this depends on registering a MIME type, and as far as I can tell there aren't any MIME types for abstract concepts. It also fails to encode any kind of uniqueness (as would be the case with encoded files) or to communicate that the concept is only locally unique.
Informal schemes like urn:sha1:, but these imply that there is some content to be hashed, and concepts with identical names but different meanings would be assigned the same hash.
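For illustration, the closest I can get with the uuid scheme is to mint an opaque urn:uuid and attach the human-readable name as a label rather than embedding it in the URI. A minimal sketch using Python's rdflib library (the concept name is hypothetical):

    import uuid
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import RDFS

    # Mint a globally unique, authority-free URN for the concept.
    concept = URIRef(f"urn:uuid:{uuid.uuid4()}")

    g = Graph()
    # The human-readable name lives in a label, not in the URI itself.
    g.add((concept, RDFS.label, Literal("myConcept")))

    print(g.serialize(format="turtle"))

Of course, this doesn't satisfy the requirement of having the name visible in the URI, which is exactly the limitation described above.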
What I am looking for is a scheme that identifies a concept uniquely on a local machine and that, when communicated to others, implies the concept name is unique only within that single communication and may not be integrated with other knowledge graphs before being altered to be globally unique. It also shouldn't rely on any namespace authority or on email addresses (which likewise require registration). Does such a scheme exist (perhaps informally)? What would you do given these constraints?
Edit: I just want to clarify my view on emails and web domains. Email is easy, and the registration process is completely automated: you can sign up for an address immediately. However, you are then dependent on that organization to maintain the email registry, not kick you out (e.g., if your account goes inactive), and not go out of business. Personal web domains require a subscription, and publishers of data should not be required to pay an upkeep fee; that would likely lead to deregistration once they no longer want to pay, at which point the data can become ambiguous if another user reuses those URIs for other purposes. Free web domains like yourName.github.io have the same issues as email addresses.

Related

OAuth client implementation w/ multiple resources, multiple auth servers

I'm trying to understand OAuth best practice implementation strategies for systems requiring access to protected resources backed by different authorization servers. The default answer is to use the access tokens provided by each authorization service and write the logic to store them on an as-needed basis, but the use case of systems requiring multiple, federated protected resources seems common enough that there might be a protocol/framework-level solution. If so, I haven't been able to find it.
Here's a hypothetical example to clarify:
I'm a user with an account on Dropbox, Google Drive, and Boxx. I'd like to make a backend API to report the total number of files I own across all three systems, i.e., Result = FileCount(Dropbox) + FileCount(Drive) + FileCount(Boxx). How do I organize the system in such a way that I can easily manage authorizations? A few cases:
Single-account: If I only have, say, a Drive account, the setup is easy. There's one protected resource (my folders), one authorization server (Google), and so I only have one token to think about. By changing the authorization endpoints and redefining the FileCount function, I can make this app work for any storage client I care about (Dropbox, Google, Boxx).
Multi-account: If I want to aggregate data from each protected resource, I now need three separate authorizations, because each protected resource is managed by a separate authorization server. AFAIK, I can't "link" these clients to use a single authorization server. As a result, if I have N protected resources backed by N authorization servers, I'll have N access tokens to manage for a given request/session. Assuming this is true, what provisions do software frameworks provide to handle this (any example in any language is fine)? It just seems too common of a problem not to be abstracted.
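For concreteness, the by-hand bookkeeping I'm describing looks something like this (the helpers are hypothetical placeholders, not any real framework):

    from typing import Dict

    def get_token(provider: str) -> str:
        # Hypothetical stand-in for each provider's full OAuth flow
        # (redirect, consent, code exchange); returns an access token.
        return f"token-for-{provider}"

    def file_count(provider: str, token: str) -> int:
        # Hypothetical stand-in for calling that provider's file-listing
        # API with 'Authorization: Bearer <token>'.
        return 0

    # The crux: one token per authorization server, managed by hand.
    tokens: Dict[str, str] = {p: get_token(p) for p in ("dropbox", "drive", "boxx")}

    result = sum(file_count(p, t) for p, t in tokens.items())
    print(result)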
The closest related question I can find is probably this one. The accepted answer seems completely reasonable: one application should not be able to masquerade as another without explicit consent. What I'm looking for is (I think) slightly different: some standard methodology/framework/approach to managing multiple simultaneous access tokens per session. I've also wondered about the possibility of an independent authorization server that can proxy the others as needed and manage the token bookkeeping (still requiring user consent for each), but I think this amounts to the same thing.
Thanks in advance.

How to achieve decentralized membership in Hyperledger Fabric 1.0

Currently in Hyperledger Fabric 1.0 there is a central membership service. I want a way to make it decentralized so that at least 50% of the members have to agree for a new member to join the network. How can I achieve this?
The idea is basically to put the membership logic in chaincode and let the membership service fetch data from the chaincode at enrollment time. But how do we enforce this? That is, how do we know the membership service is actually reading from the blockchain and not cheating?
This is actually natively supported by Hyperledger Fabric, and the behavior you describe is the default for channel membership changes.
Each channel begins life with a genesis block. The contents of this genesis block define the channel members, as well as policies for which users from these organizations are authorized to perform different functions on the blockchain. For instance, some users may be allowed to submit transactions, but not read the whole blockchain, while others could do both.
To change the channel membership, you submit a channel reconfiguration transaction. This transaction specifies the new membership, and must include enough signatures to authorize this modification. By default, this is signatures from the admins of a majority of the organizations.
The policy framework is actually quite powerful, and with a little knowledge, you can define even more elaborate rules. For instance, you could require that OrgA and 3 of 10 other organizations sign off to change membership. Or you could require that all but one org agree to make any membership change, or any number of other permutations.
Some links you might find helpful:
http://hyperledger-fabric.readthedocs.io/en/latest/configtxgen.html
http://hyperledger-fabric.readthedocs.io/en/latest/policies.html
http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html
The documentation and tools around reconfiguration are a little lacking at the time of this writing. The most useful place you can probably look is:
https://github.com/hyperledger/fabric/tree/release/examples/configtxupdate
There are two protobuf structures you must familiarize yourself with: common.ConfigUpdate and common.Config. Channels are created by submitting a signed config update to the ordering service, which generates a corresponding config embedded in the genesis block.
The policy which governs membership changes for a channel is specified as the mod_policy field of the Application group, which is a subgroup of the Channel group. This field defaults to Admins, which refers to the policy definition Admins within the Application group. By default, this policy is set to MAJORITY of the Admins policies for the organization groups defined under the Application group.
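For orientation, once decoded to JSON (as described below) the default Admins policy looks roughly like the structure here, sketched as a Python literal; the field names follow the common.Policy protobuf, and this is an approximation from memory rather than verbatim tool output:

    # Approximate decoded form of the Application group's Admins policy:
    admins_policy = {
        "mod_policy": "Admins",
        "policy": {
            "type": 3,  # IMPLICIT_META: defers to each org group's sub_policy
            "value": {"rule": "MAJORITY", "sub_policy": "Admins"},
        },
    }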
So, to modify this policy before creating your channel, you would decode the configtx to JSON using the configtxlator tool, make your modifications, and then encode it back using the configtxlator tool once again. Submitting this new transaction will create the channel with the policy you specified.
If you wish to modify membership after the fact, the process is similar. Retrieve the current channel configuration, decode and modify it, then use configtxlator to compute a config update structure which represents your change. Gather signatures via peer channel signconfigtx, then submit it to modify your channel's configuration.
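Since configtxlator runs as a REST service (see the note below), this decode/modify/encode/compute cycle can be scripted from any language. A rough Python sketch, assuming configtxlator is listening on its default port 7059 and the current channel config has already been fetched to config.pb (endpoint paths as in the Fabric 1.0 tool):

    import json
    import requests

    BASE = "http://localhost:7059"  # configtxlator's default port

    # 1. Decode the current common.Config protobuf into editable JSON.
    with open("config.pb", "rb") as f:
        config = requests.post(f"{BASE}/protolator/decode/common.Config",
                               data=f.read()).json()

    # 2. Make your membership/policy modifications on a deep copy.
    updated = json.loads(json.dumps(config))
    # ... e.g. edit updated["channel_group"]["groups"]["Application"] here

    # 3. Encode both versions back to protobuf.
    def encode(cfg):
        return requests.post(f"{BASE}/protolator/encode/common.Config",
                             json=cfg).content

    # 4. Compute the delta as a common.ConfigUpdate.
    update_pb = requests.post(
        f"{BASE}/configtxlator/compute/update-from-configs",
        data={"channel": "mychannel"},
        files={"original": encode(config), "updated": encode(updated)},
    ).content

    with open("config_update.pb", "wb") as f:
        f.write(update_pb)  # sign via 'peer channel signconfigtx', then submit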
This process is obviously all a bit manual at the moment, but in the future, common tasks should be automated by the SDKs and the tooling should improve as well.
Note: configtxlator is a REST service so that it can be accessed conveniently from inside your SDK application, independent of language.
As a quick addendum: you asked how you can be sure that no one is 'cheating' by not actually gathering the required signatures. This is also built into the system. All changes to the channel configuration are validated not only by the ordering network but by all peers in the system. If a configuration arrives which cannot be validated, all nodes in the network will notice and will halt usage of that channel until corrective administrative action is taken.
For decentralised membership, that is not dependent on a centralized CA, take a look at Blockstack.

How should I set the OAuth redirect_uri for the LinkedIn API on multiple subdomains (different instances of the same app) without violating the TOS?

I know this isn't exactly a how-to question, but LinkedIn Support directed me to Stack Overflow when I asked them this question, and I cannot find the answer anywhere after googling/searching the forums:
Per the LinkedIn APIs Terms of Use (https://developer.linkedin.com/documents/linkedin-apis-terms-use), section E.1, second bullet point:
Don’t try to exceed or circumvent your limitations on calls and use.
This includes creating multiple Applications for identical, or largely
similar, usage (e.g., having one Application per customer). If we
believe that you have exceeded or circumvented our limitations, or if
you have tried to, we may temporarily suspend or permanently block
your access to the APIs, disable your developer account, or both.
It sounds like I'm not allowed to create multiple instances of an application. However, the nature of my software is that each of my clients gets a subdomain and runs an instance of my app on a server particular to that client. Each client thus needs their own OAuth redirect_uri, and the only solution that I can think of is to create an application for each of my clients (which are organizations and not individual users).
Does this practice violate the TOS, and if so, what's a viable alternative?
If this practice is allowed, what is the maximum number of applications (and API keys) I can create?
Thanks in advance.
Register a single client/app but add multiple redirect URIs for that instance, one for each customer/domain. This is allowed per LinkedIn's documentation by adding multiple URLs in the OAuth 2.0 Redirect URLs text area, separated by commas:
OAuth 2.0 Redirect URLs: Comma separated list of absolute URLs allowed
for OAuth 2.0 redirections.
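At runtime, each customer's instance then passes its own redirect_uri (which must exactly match one of the registered URLs) when building the authorization request. A minimal sketch, where the subdomain pattern and the endpoint path are assumptions (LinkedIn's exact OAuth 2.0 paths and scope names depend on API version):

    from urllib.parse import urlencode

    AUTH_ENDPOINT = "https://www.linkedin.com/oauth/v2/authorization"  # verify for your API version
    CLIENT_ID = "YOUR_CLIENT_ID"  # one registered app shared by all tenants

    def authorization_url(tenant: str, state: str) -> str:
        # Each tenant subdomain uses its own registered redirect URL.
        params = {
            "response_type": "code",
            "client_id": CLIENT_ID,
            "redirect_uri": f"https://{tenant}.example.com/oauth/callback",
            "state": state,
        }
        return f"{AUTH_ENDPOINT}?{urlencode(params)}"

    print(authorization_url("customer1", "random-state-123"))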

Security Design for iOS App

I'm having trouble determining what the best approach is for the following scenario.
My application POSTs to my web service.
POST URL includes several parameters, including device info + a shared secret
The device is stored in my database IF the shared secret is correct
At the moment, this shared secret is hard-coded in the app and the connection to my web service is over SSL.
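In other words, the server-side check amounts to something like this (a simplified sketch with made-up names):

    import hmac

    SHARED_SECRET = "hard-coded-in-the-app"  # same value shipped in the binary

    def save_device(device_info: dict) -> None:
        pass  # stand-in for the real database insert

    def register_device(device_info: dict, secret: str) -> bool:
        # Constant-time comparison avoids leaking the secret via timing.
        if not hmac.compare_digest(secret, SHARED_SECRET):
            return False  # wrong secret: reject the request
        save_device(device_info)
        return True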
This limits people's ability to find out the shared secret and abuse my web service.
However, this approach isn't as secure as I'd like, since the app could be decompiled to extract the secret.
Is there a better way of doing this, as opposed to the shared secret approach?
With local keys, almost every security approach can be leaked by somebody, somehow. Of course, that does not mean we shouldn't put in any effort at all.
If people download your app, they can investigate the code further through reverse engineering or refactoring.
However, if there is no other way than putting the secret key within your app's binary, you're left with a (weaker) alternative often called security through obscurity.
There are many ways to do this, and you can probably find a lot of discussion about this topic on the internet, so here are just some ideas:
Split the key into multiple parts scattered across classes and throughout your code (see the sketch after this list)
Disguise your key as a string that could be used in a normal way within your app
Hash some data or code segments on startup and include them in your key
Use all of the methods named above together
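To illustrate the splitting/obfuscation idea (shown in Python for brevity; the same approach translates to Objective-C or Swift), one might do something like:

    # The real key never appears as a single literal in the binary.
    # In practice these fragments would live in different classes/files.
    PART_A_MASKED = bytes([0x52, 0x64, 0x42])  # 'SeC' XOR'd with 0x01
    PART_B = "rEt".encode()                    # second fragment, plain

    def assemble_key() -> str:
        # Recombine and de-obfuscate only at the moment the key is needed.
        part_a = bytes(b ^ 0x01 for b in PART_A_MASKED)
        return (part_a + PART_B).decode()

    print(assemble_key())  # "SeCrEt"

This only raises the bar for an attacker with a disassembler; it does not make extraction impossible.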
There are even some frameworks out there, like UAObfuscatedString, which might help you implement your logic.
Keep in mind, the best way is always not to hardcode a secret key in your app's binary at all, but to somehow "load" the secret from your server, which e.g. calculates the key…

OAuth2 and role-based access control

I have a Rails app acting as an OAuth 2.0 provider (using the oauth2-provider gem). It stores all the information related to users (accounts, personal information, and roles). There are 2 client apps that both authenticate through this app. The client apps can use the client_credentials grant type to find users by email and do other things that don't require an authorization code. Users can also log in to the client apps using the password grant type.
Now the issue we're facing is that the users' roles are defined globally on the resource host. So if a user is given an admin role on the resource host, that user is admin on both clients. My question is: what should we do to have more fine-grained access control? I.e. a user can be an editor for app1 but not for app2.
I guess the easy way to do this would be to change the role names like so: app1-admin, app2-admin, app1-editor, app2-editor, etc. The bigger question is: are we implementing this whole system correctly; that is, should we be storing so much info on the resource host, or should we denormalize the data onto the client apps?
A denormalized architecture would look like this: all user data on the resource host, localized user data on each client host. So user@example.com would have his personal info on the resource host and his editor role stored on client app1. If he never uses it, app2 could be completely oblivious to his existence.
The drawback to the denormalized model is that there would be a lot of duplication of data (account ids, roles) and code (User and Role models on each client, separate management interfaces, etc.).
Are there any drawbacks to keeping the data separate? The client apps are both highly trusted--we made them both--but we are likely to add additional client apps, which are not under our control, in the future.
As I see it, the most proper way to use OAuth and similar external authorization methods is strictly for authentication purposes. All business/authorization logic should be handled by your server at all times, and you should always keep a central record of the user that links to the external info for each external auth service type.
Having a multilevel/multipart set of access rules is also a must if you want your setup to be scalable and future-proof. This is a standard design that is separate from any authorization logic and always relates directly to business rules.
Stack Overflow does something like this, asking you to create an actual account on the site after you log in using an external method.
Update: If the sites are really similar, you can specialize this design into one object per application that holds the application-specific access rules. This object should also inherit from a global object that holds global rules (so you can, for example, impose a ban application-wide or enterprise-wide).
I would go for objects that contain access settings, plus roles that can be related to instances of both application-level settings and global settings, used only to automate/compact the assignment of access.
Actually, you can use this design even if the apps are not very similar. This will help you avoid redundant settings and roles that are meaningless business-wise. You can identify a role purely by its job title/purpose, and then impose your restrictions by linking it to an appropriate access-settings setup.
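A minimal sketch of that layering (class, role, and permission names are illustrative, not from any particular framework):

    from dataclasses import dataclass, field
    from typing import Dict, Set

    @dataclass
    class AccessSettings:
        permissions: Set[str] = field(default_factory=set)

    @dataclass
    class User:
        # Global settings (e.g., an enterprise-wide ban) plus per-app overrides.
        global_access: AccessSettings = field(default_factory=AccessSettings)
        app_access: Dict[str, AccessSettings] = field(default_factory=dict)

        def can(self, app: str, permission: str) -> bool:
            if "banned" in self.global_access.permissions:
                return False  # global rules always win
            app_settings = self.app_access.get(app, AccessSettings())
            return permission in (app_settings.permissions
                                  | self.global_access.permissions)

    # An editor on app1 only:
    u = User()
    u.app_access["app1"] = AccessSettings({"edit"})
    print(u.can("app1", "edit"), u.can("app2", "edit"))  # True False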
