I have a domain in which team members can assign tasks to each other. I have two bounded contexts:
TeamBC: management of the team members and their info.
TaskBC: management of tasks and their assignments.
TeamBC is upstream and TaskBC is downstream. The concept "member" in TeamBC is the concept "recipient" in TaskBC. The recipient of a task is the team member to whom the task is assigned.
I use synchronous integration, with a REST API in TeamBC and an ACL in TaskBC. Recipient is a VO in TaskBC.
My question:
When integrating with a REST API (not using messaging between BCs), does the downstream context have to duplicate any data from the upstream? In my case, does TaskBC have to store any data from the member entity of TeamBC in its database?
It doesn't have to duplicate anything but can.
With event-only integration via messaging, BC1 has no choice but to store the interesting bits of information it got from BC2 upon message reception, because it cannot re-request them whenever it wants.
With REST API integration, there's no such restriction. However, forcing yourself to only store local copies instead of reaching out to the other BC at will still has advantages: relative asynchrony and/or fewer direct calls.
Actually, you probably get one of the two: asynchrony if you choose polling, and fewer direct calls if you choose BC1 => BC2 notification via the API.
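Either way, the ACL keeps the translation in one place. A minimal sketch of what the TaskBC side might look like, assuming a hypothetical /members/{id} endpoint and an invented MemberDto shape (none of these names are from the original design):

using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Recipient as an immutable value object in TaskBC.
public sealed record Recipient(string MemberId, string DisplayName);

// Upstream representation, as TeamBC happens to expose it (assumed shape).
public sealed record MemberDto(string Id, string FirstName, string LastName);

public class TeamMemberAcl
{
    private readonly HttpClient _http;
    public TeamMemberAcl(HttpClient http) => _http = http;

    // Translate the upstream "member" into the downstream "recipient",
    // keeping TeamBC's model out of TaskBC's domain.
    public async Task<Recipient?> GetRecipientAsync(string memberId)
    {
        var dto = await _http.GetFromJsonAsync<MemberDto>(
            $"https://teambc.example.com/members/{memberId}");
        return dto is null
            ? null
            : new Recipient(dto.Id, $"{dto.FirstName} {dto.LastName}");
    }
}

Whether you call GetRecipientAsync on demand or cache its result locally is exactly the trade-off discussed above.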
I am replicating SuccessFactors Employee Central data (including FO, MDF, BG elements, etc.) via the OData API to a local database for a third-party integration.
I can trace changed records by filtering on the last-modified date. However, deleted records cannot be captured from the OData API, so I cannot delete a record in my local database when the corresponding EC record is deleted.
Is there any way I can get the deleted records from the API or other sources? Thanks.
OData API is not able to handle this task.
Extract from SF OData API doc:
Don't use our OData APIs when:
● Your system cannot consume either OData APIs or SOAP for an initial data load. In this case, you would go for Import/Export with a CSV. Automation via FTP would also be a possibility.
● You need employee replication field level delta, snapshot, or read modified employees only, then SOAP Compound Employee API is your tool of choice. You can find more information in the guide Implementing the Employee Central Compound Employee API.
● You only need to read data, then the SOAP Compound Employee API would also be your tool of choice.
However, with SFAPI (the SuccessFactors CompoundEmployee API) it's easy. SFAPI has a special parameter, changedSegmentsOnly, that does exactly what you want: in delta transmission the API returns only changed segments, i.e. those with an action code not equal to NO_CHANGE.
You make a query, for example, for changed employee data:
<urn:query>
  <urn:queryString>
    select person,
           personal_information,
           address_information,
           email_information,
           phone_information,
           employment_information,
           job_information,
           compensation_information,
           paycompensation_recurring,
           paycompensation_non_recurring,
           direct_deposit,
           national_id_card,
           payment_information
    from CompoundEmployee
    where person_id_external = 'cgrant'
  </urn:queryString>
  <urn:param>
    <urn:name>resultOptions</urn:name>
    <urn:value>changedSegmentsOnly</urn:value>
  </urn:param>
  <urn:param>
    <urn:name>maxRows</urn:name>
    <urn:value>50</urn:value>
  </urn:param>
</urn:query>
This query will return the employees that were changed or deleted, with an action code on each segment. You can then filter the response by that field.
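As a sketch of that filtering step in C#, assuming the segment-level action code comes back as an action element (check the Compound Employee response schema for the exact element name and namespace):

using System;
using System.Linq;
using System.Xml.Linq;

class DeltaFilter
{
    static void Main()
    {
        // Response from the CompoundEmployee query above, saved to a file.
        var doc = XDocument.Load("compound_employee_response.xml");

        // Keep only segments whose action code signals an actual change
        // (e.g. INSERT, CHANGE, DELETE), skipping NO_CHANGE.
        var changed = doc.Descendants()
            .Where(e => (string?)e.Element("action") is string code
                        && code != "NO_CHANGE")
            .ToList();

        foreach (var segment in changed)
            Console.WriteLine(
                $"{segment.Name.LocalName}: {(string?)segment.Element("action")}");
    }
}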
ADDENDUM: any change to an MDF entity, both standard and custom, can be tracked via OData audit logs. To enable them, go to the audit log settings in the API Center and enable the switch (it can be enabled for SFAPI as well).
This way the payloads of all API calls will be saved, and you will be able to see which entity was changed by each call.
Prerequisite: the object must be visible to the API, and MDF version history must be enabled.
More about API types for SuccessFactors:
SAP Note 2613670
OData API reference Guide
SFAPI reference Guide
We are going to have a multi-tenant application that scans files for each user per organization, for multiple orgs. We would like to get a notification whenever any user uploads or changes a file in their drive. There are at least two options to retrieve those: either store a delta link for each user and poll periodically to get the changes, or subscribe with webhooks to get notified on change. If we have 10k+ users, I am not sure the first option is feasible. For the latter, my only concern with webhooks is: do I have to register for each user separately? That is, does the resource need to be /users/{id}/drive/root, or should it just be /drive/root? Since there is a limitation on the number of webhooks per app/tenant, I am not sure creating webhooks for each user is the right approach.
Please advise.
The limitation applies to the users/groups objects (i.e. if you wanted to subscribe to users/groups being updated), not to the drives.
Yes, you need to subscribe to each drive individually, and to renew the subscriptions individually as well. If you want to cut down on round trips, you can group those operations in a batch.
You can also combine the delta + webhooks: do the initial sync with delta and store the token, register your webhook. And then trigger querying the delta link upon receiving a change notification. This way you get the best of both features and avoid regularly polling delta when there might not be any change.
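For illustration, here is a minimal sketch of creating one such subscription per drive with plain HttpClient. The access token, notification URL, and user id are placeholders; in practice you would send these calls through $batch as mentioned above:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

class DriveSubscriber
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access-token>");

        var subscription = new
        {
            changeType = "updated",                   // drive items only raise "updated"
            notificationUrl = "https://example.com/api/notifications",
            resource = "/users/{user-id}/drive/root", // one subscription per user's drive
            expirationDateTime = DateTime.UtcNow.AddDays(2).ToString("o"),
            clientState = "per-subscription-secret"
        };

        var response = await http.PostAsJsonAsync(
            "https://graph.microsoft.com/v1.0/subscriptions", subscription);
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}

On each notification, validate clientState and then query the stored delta link for that drive rather than trusting the notification payload itself.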
I'm trying to send different message cards to multiple Teams channels.
I have already created a webhook (telekom/webhook) for this, which gives me the right variables via JSON.
There are four department receiver channels (telekom/rest-api-component), which are also configured to send pre-formatted Teams message cards with the variables submitted to them.
Currently this happens on all channels at the same time. In between, I would need an "action" in which I can decide which of the channels is served based on the input values. Unfortunately, I can't find anything suitable among the variety of APIs. Do you know how I could realize this? Something like: if value department = Backoffice, then (Teams "Account Management") action.
In order to talk to the different Office 365 applications, I wanted to use the Microsoft Graph API, which has been available for some time now. I couldn't find it in Flowground. Are you planning to include this module?
For implementing Office 365 flows, this would be absolutely necessary for me.
I want to come back to this question: the CBR is indeed a good choice for executing decisions. But is it the best solution in every situation? I do not think so.
Assume the following task:
Depending on an input parameter test you want to fire a request to different web services (WS1:google.de and WS2:bing.de)
Solution 1: You realize the requests with dedicated connectors for WS1 and WS2.
In this case you need the CBR in front of the WS1 and WS2 connectors to decide which connector has to be used next.
Solution 2: You realize both requests with the REST-API connector. In this case you can use a JSONATA expression as the URL mapping, e.g.
(test = "google") ? "http://google.de" : "http://bing.de"
With JSONATA expressions, every connector has a (limited) capability for executing decisions.
Solution 2 has a big advantage when you are using realtime flows: it reduces the number of connectors needed to run the flow and, very important from a cost perspective, the number of tokens permanently claimed by the flow.
For reducing the complexity of JSONATA expressions (e.g. when you add further search engines) and for separation of individual configuration items you can use the configuration connector (we can discuss this in a separate thread if needed).
Solution 1 is the choice without alternative when you have to decide between different structures/connectors that need to be executed within a flow.
Please try the Content-Based-Router: https://doc.flowground.net/guides/content-based-router.html, it is available on the Connector Catalog.
I am building multitenancy into the unit of work for a set of services. I want to keep the tenancy question out of the way of day-to-day business domain work, and I do not want to touch every existing consumer in the system (I am retrofitting multitenancy onto a system without any prior concept of a tenant).
Most messages in the system will be contexted by a tenant. However, there will be some infrastructure messages which will not be, particularly for the purpose of automating tenant creation. I need a way of determining whether to use a tenant-contexted unit of work or an infrastructure unit of work with no tenant context, because the way I interact with the database differs depending on whether I have a tenant. The unit of work is built while spinning up the dependencies of the consumer.
As such, I need a way of peeking at the message or its metadata before consuming it, and specifically, I need to be able to peek at it during the dependency building. I intended to have a tag interface to mark tenant-management messages out from normal business-domain messages, but any way of identifying the difference could work. In a unit of work resulting from an HTTP request, I can look at WebApi's HttpContext.Current and see the headers of the current request, etc. How do I do something analogous if I am in a unit of work resulting from messaging?
I see there is a way to intercept messages with BeforeConsumingMessage(), but I need a way of correlating it to the current unit of work I am spinning up, and I'm not seeing how that would work for me. Pseudocode for what I am trying to do:
if MessageContext.Message is ITenantInfrastructureMessage:
    database = new Database(...)
else:
    tenantId = MessageContext.Headers.TenantId
    database = new TenantDatabase(..., tenantId)
I am working in C#/.NET using MassTransit with RabbitMQ and Autofac with MassTransit's built-in support for both.
Your best option is to override at the IConsumerFactory<T> extension point, extract the tenant from the message (either via a message header or some message property), and register it in the container's child lifetime scope so that subsequent resolutions by the actual consumer class (and its dependencies) are properly matched to the tenant in the message.
In our systems, we have a TenantContext that is registered in a newly created LifetimeScope (we're using Autofac), after which we resolve the consumer from the child scope; the dependencies that use the tenant context get the proper value since it's registered as part of building the child container for the message scope.
It works extremely well; we even built extension methods that make it easy for developers registering consumers to specify "tenant context providers" that go from a message type to the proper tenant id, which is used to build the TenantContext.
You can do similar things with activity factories in Courier routing slips (which are a specialization of a consumer).
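A minimal sketch of the child-scope idea, assuming the tenant id arrives in a message header; ITenantContext and TenantContext are illustrative names from our system, not MassTransit or Autofac types:

using Autofac;

public interface ITenantContext { string TenantId { get; } }

public sealed class TenantContext : ITenantContext
{
    public TenantContext(string tenantId) => TenantId = tenantId;
    public string TenantId { get; }
}

public static class TenantScopeFactory
{
    // Build one child lifetime scope per message; the consumer and its
    // dependencies resolved from this scope all see the same tenant.
    public static ILifetimeScope BeginTenantScope(ILifetimeScope root, string? tenantId)
    {
        return root.BeginLifetimeScope(builder =>
        {
            if (tenantId is not null)
                builder.RegisterInstance(new TenantContext(tenantId))
                       .As<ITenantContext>();
            // For infrastructure messages (no tenant id), nothing is
            // registered here and resolutions fall back to the
            // non-tenant Database registration.
        });
    }
}

The "tenant context provider" extension methods mentioned above are then just small functions from a message type to that tenant id.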
I have one more question about the RuntimeStore.
I am able to exchange strings using the RuntimeStore, but I want objects to be exchanged as well.
Example: there are three independent applications, A, B, and C.
A creates an object of C and shares it with B using the RuntimeStore; B then uses the same object and invokes the methods or data of C.
Can we do something like this using the RuntimeStore? I couldn't find a way.
If you have any ideas, please share them with me.
Thanks.
The RuntimeStore can be used for inter-application communication. As long as your two applications maintain the same data schema, you shouldn't have a problem allowing for upgrades.
There is an example at the link that should help you get going.
The RuntimeStore can be used to exchange data. It would also be good to know when the data is exchanged. You could use a GlobalEventListener or the Notifications Manager for this purpose: you express interest in certain types of events and register a listener (the same way as an action listener on a button), and when such an event occurs you read the data from the RuntimeStore. The callback of the GlobalEventListener itself can accommodate the data exchange as well.
Hope it helps!
Here is an example for you to check out. It is an honest example of IPC that really answers your question. You might also want to consider the security of the data you are exchanging.
I am not sure what your intention and objective are. Let me present different scenarios and provide plausible solutions.
There exists a third-party application A and you are authoring application B. Your application is interested in invoking some services of A, or some protected/sensitive/personal features. In this case, you could use the specific, predefined, and well-known permissions defined here and ask the system to grant them. The system in turn informs the user that application B is requesting permission for a list of operations, and a UI is presented to the user to request the permissions. If the user grants them, your application will in turn be granted access; otherwise it will be denied.
You are the sole author of two applications A and B and have complete control over both. Your objective is to exchange some data securely, even in the presence of rogue applications that could sniff the data. In this case, you could use ApplicationManager's postGlobalEvent and a GlobalEventListener to signal when exactly to exchange the data. You can then use the RuntimeStore to exchange the data; to secure the exchange, you sign the data with your keys and place it in the RuntimeStore, so only entities that can provide the right credentials are granted access to it. This is called controlled access to private data:
RuntimeStore.put( MY_DATA_ID, new ControlledAccess( myHashtable, codeSigningKey ) ); // in application A
Hashtable myHashtable = (Hashtable) RuntimeStore.get( MY_DATA_ID, codeSigningKey ); // in Application B
The notification between applications A and B can also be facilitated by the Notifications Manager.
So, let us know what exactly you are trying to accomplish. We can then direct you to code examples accordingly.