How to list users who are counted towards the Crowd License using Rest API - atlassian-crowd

Requirement: I need to derive the number of users who are counted towards the license for Crowd.
Application structure in Crowd:
Currently we have 2 applications defined in Crowd, namely App1 and App2.
We also have 2 directories, dir1 and dir2.
Both directories are mapped to both applications in the same order.
I have created two groups, "grp1" and "grp2", one in each directory respectively, and added some users to each group.
In the "Who can authenticate" section of "app1", I have mapped "grp1" under "dir1" and "grp2" under "dir2".
The same goes for "app2": in its "Who can authenticate" section, "grp1" is mapped under "dir1" and "grp2" under "dir2".
Now I need to fetch the number of users counted towards the license from the above setup, using REST APIs.
Can anyone list a REST endpoint available in Crowd for achieving this, or point out an approach using the existing Crowd REST APIs?
Any help would be appreciated.

Given that the number of users counted towards your Crowd license should be the number of distinct users (by the field mapped as username in your directories) across both grp1 and grp2, the REST endpoints you'd likely need are:
/rest/usermanagement/1/group/user/direct?groupname=grp1
/rest/usermanagement/1/group/user/direct?groupname=grp2
If group nesting is enabled and used, you'll want the nested variants instead, which resolve users of nested groups as well:
/rest/usermanagement/1/group/user/nested?groupname=grp1
/rest/usermanagement/1/group/user/nested?groupname=grp2
Get the users from both, merge, and de-dupe.
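A minimal sketch of that merge-and-de-dupe, assuming Crowd exposes a `group/user/nested` resource for listing a group's users and that the base URL and credentials below are placeholders (check your Crowd version's REST docs):

```python
import json
import urllib.request

# Hypothetical base URL; Crowd also requires application credentials
# via HTTP Basic auth (not shown here).
BASE = "http://localhost:8095/crowd/rest/usermanagement/1"

def fetch_group_users(groupname, nested=False):
    """Fetch usernames in a group via Crowd's REST API."""
    kind = "nested" if nested else "direct"
    url = f"{BASE}/group/user/{kind}?groupname={groupname}"
    with urllib.request.urlopen(urllib.request.Request(url)) as resp:
        data = json.load(resp)
    return [u["name"] for u in data.get("users", [])]

def count_licensed_users(groups, fetch=fetch_group_users):
    """Merge and de-duplicate users across groups. Crowd usernames are
    case-insensitive, so normalize before de-duping."""
    seen = set()
    for g in groups:
        for name in fetch(g, nested=True):
            seen.add(name.lower())
    return len(seen)
```

The `fetch` parameter makes the counting logic testable without a live Crowd server.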
Refs:
- https://www.atlassian.com/licensing/crowd#serverlicenses-5
- https://docs.atlassian.com/atlassian-crowd/3.3.0/REST/#usermanagement/1/group


Is Sales Transaction modeled as Hub or a Link in Data Vault 2.0

I'm a rookie in Data Vault, so please excuse my ignorance. I am currently ramping up and modeling a Raw Data Vault in parallel using Data Vault 2.0. I have a few assumptions and need help validating them.
1) Individual Hubs are modeled for:
a) Product(holds pk-Product_Hkey, BK,Metadata),
b) Customer(holds pk-Customer_Hkey,BK,Metadata),
c) Store(holds pk-Store_Hkey,BK,Metadata).
Now a Sales Txn that involves all the above business objects should be modeled as a Link table:
d) Link table - Sales_Link (holds pk-Sales_Hkey, Sales Txn ID, Product_Hkey (fk), Customer_Hkey (fk), Store_Hkey (fk), Metadata), with a Satellite associated to the Link table holding descriptive data about the Link.
Is the above approach valid? My rationale for the above Link table is that I consider Sales Txn ID a non-BK, and hence Sales Txns must be hosted in a Link as opposed to a Hub.
2) Operational data has different types of customers (Retail, Professional). All customers (agnostic to type) should be modeled in one Hub, and the distinction between customer types should be made by modeling different Satellites (one for Retail, one for Professional) tied to the Customer Hub.
Is the above valid?
I have researched online technical forums, but got conflicting theories, so I'm posting it here.
There is no code applicable here
I would suggest modeling Sales as a Hub if you are fine with the points below; otherwise a Link is a perfectly good design.
Sales transaction as a hub (Sales_Hub):
What's the business key? Can you consider "Sales Txn ID" (a unique number) a BK?
Is this Hub, or the same BK, used in another Link (other than Sales_Link), i.e. link-on-link?
Are you OK with Sales_Link having no Satellite, since all the descriptive data lives with Sales_Hub?
Also, it will store the same BK + audit metadata in two places (Hub and Link), and require additional joins to fetch data from the Hub's Satellite.
2) Is valid when customer information (Retail, Professional, etc.) is stored in separate tables at the source system(s).
If the data comes through a single source table, you should model one Satellite and apply soft rules to bifurcate customers into their types in the Business Data Vault.
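Whichever table ends up hosting the Sales Txn, the `_Hkey` columns above are, by Data Vault 2.0 convention, hashes of the business keys. A minimal sketch of that convention (MD5, `||` delimiter, and uppercase normalization are common choices, not mandated; all key values below are made up):

```python
import hashlib

def hkey(*business_keys, delimiter="||"):
    """Hash a (possibly composite) business key into a hub/link key.
    Normalization (trim + uppercase) must be applied consistently so
    the same BK always yields the same Hkey."""
    normalized = delimiter.join(str(k).strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# Hub keys, one per business key:
product_hkey = hkey("PROD-001")
customer_hkey = hkey("CUST-042")
store_hkey = hkey("STORE-7")

# Link key for the sales transaction, hashed over all participating BKs:
sales_link_hkey = hkey("PROD-001", "CUST-042", "STORE-7", "TXN-12345")
```

The delimiter matters: hashing `("a", "b")` must differ from hashing `("ab",)`, which is why the keys are joined with a separator before hashing.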

Microservices (Application-Level joins) more API calls - leads to more latency?

I have 2 microservices, one for Orders and one for Customers, exactly like the example at
http://microservices.io/patterns/data/database-per-service.html
which works without any problem. I can list Customer data and Order data based on an input CustomerId.
But now there is a new requirement to develop a screen that shows the Orders for an input Date, with the CustomerName beside each Order.
When implementing this, I can fetch the list of Orders for the input Date, but to show the corresponding CustomerNames I make multiple API calls to the Customer microservice, one per CustomerId, to get each CustomerName. This leads to more latency.
I know the above solution is a bad one, so any ideas please?
The point of a microservices architecture is to split your problem domain into (technically, organizationally and semantically) independent parts. Making the "microservices" glorified (apified) tables actually creates more problems than it solves, if it solves any problem at all.
Here are a few things to do first:
List your architectural constraints (i.e. the reasons for doing microservices): separate scaling ability, organizational problems, making teams independent, etc.
List business-relevant boundaries in the problem domain (i.e. parts that theoretically don't need each other to work, or don't require synchronous communication).
With that information, here are a few ways to fix the problem:
Restructure the services based on business boundaries instead of technical ones. This means not using tables or layers or other technical stuff to split functions. Services should be a complete vertical slice of the problem domain.
Or, as a work-around, create a third system which aggregates data and can create reports.
Or if you find there is actually no reason to keep the microservices approach, just do it in a way you are used to.
The new requirement needs data across domains. Here are the options:
- Return the customer Id and Name in every call. The issue is latency, as there would be multiple round trips.
- Keep a cache of all CustomerNames by ID in the Order service (assuming there is a finite number of customers). The issue is when to refresh or invalidate the cache; for that you may need to expose a REST call to invalidate entries. For new customers that are not in the cache, fetch from the DB and update the cache for future use.
- Use CQRS, where all the needed data (Orders, Customers, etc.) goes to a separate table. In that schema you can run a single composite SQL query, which removes the round trips.
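The cache option can be sketched in a few lines. `fetch_customer` below stands in for the real HTTP call to the Customer microservice (a hypothetical `GET /customers/{id}`); everything else is plain in-memory logic:

```python
class CustomerNameCache:
    """In-memory CustomerName cache held inside the Order service."""

    def __init__(self, fetch_customer):
        self._fetch = fetch_customer   # e.g. HTTP GET /customers/{id}
        self._names = {}               # customer_id -> name

    def name_for(self, customer_id):
        """Return a cached name, fetching and caching on a miss."""
        if customer_id not in self._names:
            self._names[customer_id] = self._fetch(customer_id)
        return self._names[customer_id]

    def invalidate(self, customer_id):
        """Called from a REST hook when a customer is renamed."""
        self._names.pop(customer_id, None)

def decorate_orders(orders, cache):
    """Attach customerName to each order; repeated customers cost no
    extra round trips."""
    return [dict(order, customerName=cache.name_for(order["customerId"]))
            for order in orders]
```

With this in place, a screen showing N orders for M distinct customers costs at most M remote calls instead of N.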

How do we enforce privacy while providing tracing of provenance using multiple channels in Hyperledger Fabric v1.0?

In Hyperledger Fabric v0.6, a supply chain app can be implemented that allows tracing of provenance and avoids double-spending (i.e., distributing/selling more items than it has) and thus avoids counterfeit. As an example, when a supplier supplies 500 units of an item to a distributor, this data is stored in the ledger. The distributor can distribute a specified quantity of an item to a particular reseller by calling a "transfer" function. The transfer function does the following:
checks if the distributor has enough quantity of an item to distribute to a particular reseller (i.e., if quantity to transfer <= current quantity)
updates the ledger (i.e., deducts the current quantity of the distributor and adds this to the current quantity of the reseller)
With this approach, the distributor cannot distribute more (i.e., double spend) than what it has (e.g., distributing counterfeit/smuggled items).
In addition, a consumer can trace the provenance (e.g., an item was purchased from reseller1, which came from a distributor, which came from a supplier) by looking at the ledger.
However, since it uses a single ledger, privacy is an issue (e.g., reseller2 can see the quantity of items ordered by reseller1, etc.)
A proposed solution to impose privacy is to use multiple channels in Hyperledger Fabric v1.0. In this approach, a separate channel/ledger is used by the supplier and distributor. Similarly, a separate channel/ledger is used by the distributor and reseller1, and another separate channel/ledger for the distributor and reseller2.
However, since the resellers (i.e., reseller1 and reseller2) have no access to the channel/ledger of the supplier and distributor, the resellers have no idea of the real quantity supplied by the supplier to the distributor. For example, if the supplier supplied only 500 units to the distributor, the distributor can claim to the resellers that it procured 1000 units from the supplier. With this approach, double spending / counterfeiting is not avoided.
In addition, how will tracing of provenance be implemented? Will a consumer be given access to all the channels/ledgers? If this is the case, then privacy becomes an issue again.
Given this, how can we use multiple channels in Hyperledger Fabric v1.0 while allowing tracing of provenance and prohibiting double spending?
As Artem points out, there is no straightforward way to do this today.
Chaincodes may read across channels, but only weakly, and they may not make the content of this read a contingency of the commit. Similarly, transactions across channels are not ordered, which creates other complications.
However, it should be possible to safely move an asset across channels, so long as there is at least one trusted participant in both channels. You can think of this as the regulatory or auditor role.
To accomplish this, the application would essentially have to implement a mutex on top of fabric which ensures a resource does not migrate to two different channels at once.
Consider a scenario with companies A, B, and regulator R. A is known to have control over an asset Q in channel A-R, and B wants to safely take control over asset Q in channel A-B-R.
To safely accomplish this, A may do the following:
A proposes to lock Q at sequence 0 in A-R to channel A-B-R. Accepted and committed.
A proposes the existence of Q at sequence 0 in A-B-R, endorsed by R (who performs a cross channel read to A-R to verify the asset is locked to A-B-R). Accepted and committed.
A proposes to transfer Q to B in A-B-R, at sequence 0. All endorsers check that the record for Q at sequence 0 exists, include it in their readset, and set it to sequence 1 in their writeset.
The green path is done. Now, let's say instead that B decided not to purchase Q, and A wishes to sell it to C in A-C-R. We start by assuming (1) and (2) have completed above.
A proposes to remove asset Q from consideration in channel A-B-R. R reads Q at sequence 0, writes it at sequence 1, and marks it as unavailable.
A proposes to unlock asset Q in A-R. R performs a cross channel read in A-B-R and confirms that the sequence is 1, endorses the unlock in A-R.
A proposes the existence of Q at sequence 1 in A-C-R, and proceeds as in (1)
Attack path: assume (1) and (2) are done once more.
A proposes the existence of Q at sequence 0 in A-C-R. R will read A-R and find it is not locked to A-C-R, will not endorse.
A proposes to remove the asset Q from consideration in A-R after a transaction in A-B-R has moved control to B. Both the move and unlock transaction read that value at the same version, so only one will succeed.
The key here is that B trusts the regulator to enforce that Q cannot be unlocked in A-R until Q has been released in A-B-R. The unordered reads are fine across the channels, so long as you include a monotonic sequence number to ensure that the asset is locked at the correct version.
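The lock/sequence discipline above can be sketched in miniature. Plain Python objects stand in for per-channel ledger records, and the `ValueError`s stand in for R refusing to endorse; this illustrates the bookkeeping only, not Fabric's chaincode API:

```python
class AssetRecord:
    """One channel's record of asset Q."""
    def __init__(self, seq=0, locked_to=None, available=True):
        self.seq = seq               # monotonic version of the asset
        self.locked_to = locked_to   # channel Q is locked to, if any
        self.available = available

def lock(home_record, target_channel):
    """Step 1: lock Q in its home channel (A-R) to a target channel."""
    if home_record.locked_to is not None:
        raise ValueError("asset already locked to another channel")
    home_record.locked_to = target_channel

def admit(target_channel, home_record):
    """Step 2: R endorses Q's existence in the target channel only after
    a cross-channel read confirms the home-channel lock points there."""
    if home_record.locked_to != target_channel:
        raise ValueError("R refuses: not locked to this channel")
    return AssetRecord(seq=home_record.seq)

def transfer(record, expected_seq):
    """Step 3: the transfer reads Q at expected_seq and bumps it, so two
    conflicting transactions at the same version cannot both commit.
    (The remove/unlock steps 4-5 would use the same sequence check.)"""
    if record.seq != expected_seq or not record.available:
        raise ValueError("read-set conflict: sequence moved")
    record.seq += 1
```

On the attack path, `admit("A-C-R", home)` fails while the home record is still locked to A-B-R, which is exactly R's refusal to endorse in step (1) of the attack.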
At the moment there is no straightforward way of providing provenance across two different channels within Hyperledger Fabric 1.0. There are a few directions for supporting such scenarios:
The first is the ability to keep portions of the ledger data segregated within the channel; this work item is described in FAB-1151.
Additionally, a proposal for adding support for private data, while maintaining the ability to prove existence and ownership of a claimed asset, was posted on the mailing list.
What you can do currently is leverage application-side encryption to provide privacy and keep all related transactions on the same channel, i.e. the same ledger (pretty much similar to the approach you had back in v0.6).
Starting in v1.2, Fabric offers the ability to create private data collections, which allow a defined subset of organizations on a channel the ability to endorse, commit, or query private data without having to create a separate channel.
Now in your case, you can make the relevant subset of your reseller data private to a particular entity without creating a separate channel.
For more info, refer to the Fabric documentation.

Implement privacy settings in status updates in Neo4j database

Social networks nowadays allow a user to set privacy settings for each post. For example, in Facebook a user can set the privacy setting of a status update to "just me", "friends", "public", etc. How would one design such a system in Neo4j (there is no documentation on something like this)?
One way would be to set a property on each post. But this doesn't look scalable, as the feature cannot be extended to something like "share with group 1" (assuming users are allowed to create their own groups, or circles in Google+). Also, when a user tries to access the newsfeed, merging separate results for "just me", "friends", and "public" sounds like a nightmare.
Any suggestions?
For the sharing level I'd set a label on the content (:SharePublic (or leave that off), :ShareFriend, :ShareGroup, :ShareIndividual).
For groups and individuals create a relationship from the content.
To aggregate the newsfeed for a user, go over potential news items until the limit is reached, and:
- allow public items
- allow individual items if pointing to me
- allow group items if pointing to one of my groups
- allow friend items if I am friends with the author (check if there is a friendship relationship from me to the author)
As the feed is a core high-performance use case, I'd probably not do it in Cypher but in a server extension.
Cypher "pseudocode"
MATCH (u:User {login:{login}})
MATCH (:Head)-[:NEXT*..1000]->(p:Post)
WHERE p:SharePublic
OR p:ShareIndividual AND (p)-[:SHARED_WITH]->(u)
OR p:ShareFriend AND (p)<-[:AUTHOR]-()-[:FRIEND]-(u)
OR p:ShareGroup AND (p)-[:SHARED_WITH]->(:Group)<-[:IN_GROUP]-(u)
RETURN p
LIMIT 30
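The visibility rules in that WHERE clause can be mirrored as a plain predicate, which is roughly what a server extension would evaluate per candidate item. In-memory dicts and sets stand in for the graph here; the field names are hypothetical:

```python
def visible_to(post, user):
    """Return True if `user` may see `post` under the label scheme above.
    `labels` is the post's set of share labels; `shared_with` holds the
    user/group ids the post points at via SHARED_WITH."""
    if "SharePublic" in post["labels"]:
        return True
    if "ShareIndividual" in post["labels"] and user["id"] in post["shared_with"]:
        return True
    if "ShareFriend" in post["labels"] and post["author"] in user["friends"]:
        return True
    if "ShareGroup" in post["labels"] and post["shared_with"] & user["groups"]:
        return True
    return False

def newsfeed(posts, user, limit=30):
    """Walk candidate posts newest-first and keep the first `limit`
    visible ones, like the LIMIT 30 in the query."""
    feed = []
    for post in posts:
        if visible_to(post, user):
            feed.append(post)
            if len(feed) == limit:
                break
    return feed
```

The early `break` is the point of doing this outside Cypher: you stop scanning the feed chain as soon as enough visible items are found.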

Desire 2 Learn Org Unit ID

What is the API call for finding the orgUnit ID for a particular course? I am trying to pull grades and a class list from the API, but I cannot do it without the orgUnitId.
There are potentially a few ways to go about this, depending on the kind of use case you're in. First, you can traverse the organizational structure to find the details of the course offering you're looking for. Start from the organization's node (the root org) and use the route that retrieves an org's descendants to work your way down; you'll want to restrict this call to course-offering type nodes only (org unit type ID '3' by default). This process will almost certainly require fetching a large amount of data, and then parsing through it.
If you know the course offering's Code (the unique identifier your organization uses to define course offerings), or the name, then you can likely find the offering in the list of descendants by matching against those values.
You can also make this search at a smaller scope in a number of ways:
If you already know the Org Unit ID for a node in the structure that's related to the course offering (for example, the Department or Semester that's a parent of the course offering), you can start your search from that node and you'll have a lot fewer nodes to parse through.
If your calling user context (or a user context that you know, and can authenticate as) is enrolled in the course offering, or in a known parent org (like a Department), then you can fetch the list of all that user's enrollments, and parse through those to find the single course offering you're looking for. (Note that this enrollments route sends back data as a paged result set, and not as a simple JSON array, so you may have to make several calls to work your way through a number of data pages before finding the one you want.)
In all these scenarios, the process will end up with you retrieving a JSON structure that will contain the Org Unit ID which you can then persist and use directly later.
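The paged-result walk described above can be sketched as follows. `fetch_page` stands in for an authenticated GET against the enrollments route; the `Items` / `OrgUnit` / `PagingInfo` field names follow the paged-result-set shape in the Valence docs, but treat them as assumptions and verify against your API version:

```python
def find_org_unit_id(fetch_page, course_code):
    """Page through an enrollments result set until an org unit whose
    Code matches `course_code` appears; return its Id, or None."""
    bookmark = None
    while True:
        page = fetch_page(bookmark)          # one paged result set
        for item in page["Items"]:
            org = item["OrgUnit"]
            if org.get("Code") == course_code:
                return org["Id"]             # the Org Unit ID to persist
        if not page["PagingInfo"]["HasMoreItems"]:
            return None                      # exhausted all pages
        bookmark = page["PagingInfo"]["Bookmark"]
```

Once found, persist the Org Unit ID so later grade and classlist calls don't need to repeat the search.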
