How to differentiate XML transactions in Liaison Delta and ECS - EDI

Can someone please help with the following scenario?
I have two XML messages (ORDERS and SHIPRTN) being placed into an SFTP location. Using ECS I pick them up and translate them with Delta, but how do I differentiate ORDERS from SHIPRTN and call the respective map in Delta?

You have a few ways to do this. You can put a condition on the event rule: if the data contains one identifying value, call map "A"; if the data contains the other, call map "B" (see the sketch below).
The other way is to set identity values in the model so that TPM can figure it out.
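Delta's event-rule syntax is product-specific, so purely as a hypothetical illustration of the first approach, here is a minimal Java sketch that inspects the root element of an incoming XML file, which is enough to tell an ORDERS document from a SHIPRTN one (the file path argument and the routing printouts are placeholders for whatever actually invokes the maps):

```java
import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class XmlTypeSniffer {

    // Returns the local name of the document's root element,
    // e.g. "ORDERS" or "SHIPRTN", so the caller can pick the matching map.
    static String rootElement(String path) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        try (FileInputStream in = new FileInputStream(path)) {
            XMLStreamReader reader = factory.createXMLStreamReader(in);
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    return reader.getLocalName(); // first START_ELEMENT is the root
                }
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        String root = rootElement(args[0]);
        if ("ORDERS".equals(root)) {
            System.out.println("route to map A");   // placeholder for calling map A
        } else if ("SHIPRTN".equals(root)) {
            System.out.println("route to map B");   // placeholder for calling map B
        }
    }
}
```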

Related

EventStoreDB: temporal queries

Regarding the question asked here:
Suppose we have ProductCreated and ProductRenamed events, both of which contain the title of the product. Now we want to query EventStoreDB for all events of type ProductCreated and ProductRenamed with a given title. I want all these events so I can check whether any product in the system has been created with, or renamed to, the given title, so that I can throw a duplicate-title exception in the domain.
I am using MongoDB to build UI reports from all the published events, and everything is fine there. But to check some invariants, like uniqueness of values, I have to query the event store for certain events and, by iterating over them, decide whether there is a product that was created with the same title and has not since been renamed, or a product that was renamed to the same title.
For such queries, the only mechanism the event store provides is creating a one-time projection with the appropriate JavaScript code, which filters and emits the required events to a new stream. Then all I have to do is fetch the events from the newly generated stream that the projection fills.
Now the odd thing is: projections are great for subscriptions and for generating new streams, but they seem ill-suited to real-time queries. Immediately after I create a projection with the HTTP API, I check the new resulting stream for the query result, but it seems the workers have not yet had a chance to produce it and I get a 404 response. After waiting a few seconds, though, the new stream pops up and gets filled with the result.
There are too many things wrong with this approach:
First, if the event store is filled with millions of events across many streams, it won't be able to process and filter all of them into the resulting stream immediately. It does not even create the stream immediately, let alone populate it, so I have to wait for some time and poll for the result, hoping the projection is done.
Second, I have to fetch multiple times and issue multiple HTTP GET requests, which seems slow. The new JVM client is not ready yet.
Third, I have to delete the resulting stream once I'm done with the result; failing to do so will leave the event store with millions of orphaned query-result streams.
I wish I could pass the JavaScript to some API and get the result page by page, like querying MongoDB, without worrying about the projection, the new streams, and the timing issues.
I have seen a Query section in the Admin UI, but I don't know what it's for, and unfortunately the documentation doesn't help much.
Am I expecting the event store to do something that is impossible?
Do I have to create a read model inside the bounded context for doing such checks?
I am using my events to rehydrate the aggregates and would like to use the same events for such simple queries without bringing in other techniques.
I believe it would not be a separate bounded context since the check you want to perform belongs to the same bounded context where your Product aggregate lives. So, the projection that is solely used to prevent duplicate product names would be a part of the same context.
You can indeed use a custom projection to check it but I believe the complexity of such a solution would be higher than having a simple read model in MongoDB.
It is also fine to use an existing projection, if you have one, to do the check. It might not be what you would otherwise prefer if the aim of the existing projection is to show things in the UI.
For the collection that you could use for duplicates check, you can have the document schema limited to the id only (string), which would be the product title. Since collections are automatically indexed by the id, you won't need any additional indexes to support the duplicate check query. When the product gets renamed, you'd need to delete the document for the old title and add a new one.
Again, you will get a small time window when the duplicate can slip in. It's then up to the business to decide if the concern is real (it's not, most of the time) and what's the consequence of the situation if it happens one day. You'd be able to find a duplicate when projecting events quite easily and decide what to do when it happens.
Practically, when you have such a projection, all it takes is to build a simple domain service bool ProductTitleAlreadyExists.
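As a hedged sketch of what that could look like with the MongoDB Java driver (the database, collection, and method names below are my own, not from the original post), the _id of each document is the product title itself, so the built-in _id index is the only index the check needs:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import static com.mongodb.client.model.Filters.eq;

public class ProductTitleIndex {
    private final MongoCollection<Document> titles;

    public ProductTitleIndex(MongoClient client) {
        // Each document's _id IS the product title, so the default
        // _id index is the only index the uniqueness check needs.
        this.titles = client.getDatabase("readmodels").getCollection("productTitles");
    }

    // The domain service consulted before accepting a create or rename command.
    public boolean productTitleAlreadyExists(String title) {
        return titles.find(eq("_id", title)).first() != null;
    }

    // Projection handlers that keep the collection in sync with the events.
    public void onProductCreated(String title) {
        titles.insertOne(new Document("_id", title));
    }

    public void onProductRenamed(String oldTitle, String newTitle) {
        titles.deleteOne(eq("_id", oldTitle));
        titles.insertOne(new Document("_id", newTitle));
    }

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            ProductTitleIndex index = new ProductTitleIndex(client);
            index.onProductCreated("Widget");
            System.out.println(index.productTitleAlreadyExists("Widget")); // true
        }
    }
}
```

A duplicate-key error from insertOne in the projection is then the signal that a concurrent create slipped through the small window mentioned above, and the business can decide how to handle it.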

Dynamic query usage while streaming with Google Dataflow?

I have a Dataflow pipeline that is set up to receive information (JSON), transform it into a DTO, and then insert it into my database. This works great for inserts, but where I run into issues is handling delete records. The information I receive includes a deleted tag in the JSON to specify when a record is actually being deleted. After some research and experimenting, I am at a loss as to whether this is possible.
My question: is there a way to dynamically choose (or change) which SQL statement the pipeline uses while streaming?
To achieve this with Dataflow you need to think more in terms of water flowing through pipes than in terms of if-then-else coding.
You need to classify your records into INSERTs and DELETEs and route each set to a different sink that will do what you tell them to. You can use tags for that.
In this pipeline design example, instead of startsWithATag and startsWithBTag you can use tags for Insert and Delete.
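As a hedged sketch with the Beam Java SDK (the KV<id, deletedFlag> element type and the logging "sinks" are stand-ins for the real DTO and, say, two JdbcIO writers), a single DoFn routes each record to an insert tag or a delete tag, and each tagged output then gets its own sink:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.beam.sdk.values.TupleTagList;

public class RouteInsertsAndDeletes {
    // KV<id, deletedFlag> stands in for the real DTO.
    static final TupleTag<KV<String, Boolean>> INSERTS = new TupleTag<KV<String, Boolean>>() {};
    static final TupleTag<KV<String, Boolean>> DELETES = new TupleTag<KV<String, Boolean>>() {};

    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollectionTuple routed = p
            .apply(Create.of(KV.of("a", false), KV.of("b", true)))
            .apply("RouteByDeletedFlag",
                ParDo.of(new DoFn<KV<String, Boolean>, KV<String, Boolean>>() {
                    @ProcessElement
                    public void processElement(ProcessContext c) {
                        if (c.element().getValue()) {
                            c.output(DELETES, c.element()); // deleted:true -> DELETE branch
                        } else {
                            c.output(c.element());          // main output -> INSERT branch
                        }
                    }
                }).withOutputTags(INSERTS, TupleTagList.of(DELETES)));

        // Each branch gets its own sink; in a real pipeline these would be
        // two JdbcIO.write() transforms, one with an INSERT and one with a DELETE.
        routed.get(INSERTS).apply("InsertSink", ParDo.of(new LogFn("INSERT")));
        routed.get(DELETES).apply("DeleteSink", ParDo.of(new LogFn("DELETE")));

        p.run().waitUntilFinish();
    }

    // Stand-in sink that just logs; swap for a real writer.
    static class LogFn extends DoFn<KV<String, Boolean>, Void> {
        private final String action;
        LogFn(String action) { this.action = action; }
        @ProcessElement
        public void processElement(ProcessContext c) {
            System.out.println(action + " " + c.element().getKey());
        }
    }
}
```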

Talend: tMap not accepting lookup connection

I have database queries which return two sets of data from the same table. I need to map the results of those queries to each other based on the values they have in common. This is how far I got:
My original idea was to use the output from tLogRow_2 as a lookup, but Talend won't let me do that. What am I doing wrong?
You can't have a closed loop in Talend. You need to remove the 2nd OnComponentOk link (to tMysqlInput_1) if you want to use that output as a lookup. You only need to hook the OnComponentOk trigger to the first component of your next subjob (a startable component).

How to apply ParDo to a PDone object

My pipeline has the following two steps:
Extract data from BigQuery, then translate it and store it in Cloud Storage.
Extract data from Cloud Storage, then translate it and store it in S3.
After step 1 I get a PDone object, but it has no apply or read method.
Is there any way to attach the step 2 process after the step 1 process?
You can write the data extracted from BigQuery to several places at the same time. E.g. see this answer.
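In other words, PDone just marks the end of a branch; instead of reading back what step 1 wrote, you can branch the translated PCollection and feed both sinks from it. A rough sketch with the Beam Java SDK (the table, bucket names, and toString "translation" are placeholders, and the s3:// path assumes the AWS filesystem module is on the classpath):

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

public class FanOutFromBigQuery {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Read once, translate once...
        PCollection<String> translated = p
            .apply(BigQueryIO.readTableRows().from("my-project:my_dataset.my_table"))
            .apply(MapElements.into(TypeDescriptors.strings())
                .via((TableRow row) -> row.toString())); // stand-in for the real translation

        // ...then fan out: the same PCollection feeds both sinks,
        // so there is no need to re-read from Cloud Storage.
        translated.apply("WriteToGcs", TextIO.write().to("gs://my-bucket/out"));
        translated.apply("WriteToS3", TextIO.write().to("s3://my-bucket/out"));

        p.run().waitUntilFinish();
    }
}
```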

Visual Studio REST API Iteration and Area IDs

I am working with the VSO REST API and have a question on how Iteration and Area IDs are assigned. Specifically, why is it that when I assign a work item to the root Iteration or Area, the ID that is returned for the WIT is not returned when I query the classification nodes?
For example, imagine I have this hierarchy when I query /DefaultCollection/my project/_apis/wit/classificationnodes?$depth=2
My Project: id=1234
Area 1: id=5678
Area 2: id=9012
And I then query for a work item using /DefaultCollection/_apis/wit/workItems/1?$expand=all
If the work item is in Area 1 or Area 2, the System.AreaId field is as expected (5678 and 9012, respectively). However, if I assign the work item to My Project, the System.AreaId is some value that is not included when I query for all classification nodes. There appears to be some kind of relationship between the IDs, as they are serial (e.g. the ID returned by the classification node query is 1232 for the area and 1233 for the iteration), but I can't seem to find a way to query for the actual ID returned by the work item query.
In fact, not only is the ID returned for a work item absent when I query for all classification nodes; if I assign the work item to both the root iteration and the root area, the ID returned for both fields is the same value, which is likewise not included in the classification node query.
What I need is a way to look at a work item and figure out the area and iteration it belongs to. I could probably do something with the path field strings that are returned, but that seems error prone since users can change them.
****Edit****
The above appears to be a bug in the REST API, but for anyone who comes across this post there is a way to get a usable iteration ID by the path string. Structure your REST call like so:
/DefaultCollection/[Project Name]/_apis/wit/classificationnodes/iterations/Release 1/Sprint 1 (etc.)
I have never done this with the ID; I only use the path. In the classification service you can get the node by path easily enough.
For example, using the REST API - you can access this url to get the data about a specific iteration:
/DefaultCollection/[Project Name]/_apis/wit/classificationnodes/iterations/[Release X]/[Sprint Y]
Note that trying to access the default iteration path (the project name instead of a specific iteration) will return an error:
/DefaultCollection/[Project Name]/_apis/wit/classificationnodes/iterations/[Project Name]
will give :
{"$id":"1","innerException":null,"message":"VS402485: The node name is not recognized: [Project Name]","typeName":"Microsoft.TeamFoundation.WorkItemTracking.Server.Metadata.WorkItemTrackingTreeNodeNotFoundException, Microsoft.TeamFoundation.WorkItemTracking.Server","typeKey":"WorkItemTrackingTreeNodeNotFoundException","errorCode":0,"eventId":3200}
So if you do batch work, you have to filter those out before querying the API.
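As a hedged illustration of that call from code, here is a Java 11 HttpClient sketch that fetches an iteration node by path (the account, project, and iteration names, and the PAT environment variable, are all placeholders); the returned JSON carries the node's integer id:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class GetIterationNode {
    public static void main(String[] args) throws Exception {
        // Hypothetical account, project, and iteration names; PAT auth assumed.
        String url = "https://myaccount.visualstudio.com/DefaultCollection/MyProject"
                + "/_apis/wit/classificationnodes/iterations/Release%201/Sprint%201"
                + "?api-version=1.0";
        String pat = System.getenv("VSO_PAT");
        String auth = Base64.getEncoder().encodeToString((":" + pat).getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON body contains the node's integer "id" field,
        // which is what you would compare against the work item's values.
        System.out.println(response.body());
    }
}
```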
There are three ways of identifying an Area (everything I post applies equally to Iterations): the Path (string), the ID (int), and a GUID. Each of them is used in different ways and has different ramifications.
For example, renaming an Area does NOT change its identity, and therefore does not update a work item (the path returned in a work item is dynamic).
It is also possible to delete and recreate an identical path, but it will have a different ID.
The GUID is used primarily for the Excel reports (such as are part of the SharePoint portal).
How you want things to react determines the appropriate identifier to use.
I have not seen any issues with the ID that you mention, and if you could create a simple repro, I would be glad to look at it.
david(dot)corbin(at)dynconcepts(dot)com
