Header values set after splitter are lost after aggregation using Spring Integration DSL

I am using Spring Integration DSL for the requirement below.
I have to split a message using a splitter, add a header value to the split messages during a transformation, and aggregate them later. But after aggregation, the header value I added post-split is lost. Is this due to the aggregation policy? I am using the default policy.
Please suggest a way to persist the header so I can use it after aggregation for a custom transformation.

I solved this by aggregating the whole messages instead of just the payloads. That preserved the individual message headers along with the payloads, and I used both in the aggregator when building the final response.
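The difference can be illustrated outside Spring (a sketch in Python, with (payload, headers) tuples standing in for Spring Message objects; the function names are illustrative, not Spring API): aggregating only the payloads discards the per-message headers, while aggregating whole messages keeps them available when building the final response.

```python
# Each "message" is a (payload, headers) pair, mimicking a Spring Message.
def split(items):
    # Splitter: one message per item.
    return [(item, {"seq": i}) for i, item in enumerate(items)]

def transform(messages):
    # Post-split transformation that adds a custom header to each message.
    return [(p, {**h, "myHeader": f"value-{p}"}) for p, h in messages]

def aggregate_payloads(messages):
    # Payload-only aggregation: the custom headers are gone afterwards.
    return ([p for p, _ in messages], {})

def aggregate_messages(messages):
    # Whole-message aggregation: individual headers remain accessible.
    payloads = [p for p, _ in messages]
    headers = [h for _, h in messages]
    return (payloads, {"individualHeaders": headers})

msgs = transform(split(["a", "b"]))
lost = aggregate_payloads(msgs)   # custom header not available
kept = aggregate_messages(msgs)   # custom header recoverable per message
```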


How to expand all custom field in Jira API

When I make a call to the JIRA API, the response contains a large number of custom fields with different IDs.
Is it possible to resolve them directly in the API response?
I tried calling /rest/api/2/field to obtain all Jira custom fields, but after that I have to run a manual mapping script, which is quite annoying. Is there a solution for this problem?
Later I will also need to export, for every issue, the entire activity, attachments, and comments, to store in an external archive that can be consulted in the future.
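The issue endpoint cannot be made to return field names instead of customfield_* IDs, so a small mapping step is unavoidable. A minimal sketch of that mapping in Python (the sample data stands in for real API responses; /rest/api/2/field does return objects with "id" and "name" keys):

```python
def build_field_map(fields):
    # /rest/api/2/field returns a list of {"id": ..., "name": ...} objects.
    return {f["id"]: f["name"] for f in fields}

def rename_custom_fields(issue_fields, field_map):
    # Replace opaque customfield_* keys with their human-readable names.
    return {field_map.get(key, key): value for key, value in issue_fields.items()}

# Sample data standing in for the real API responses:
fields = [
    {"id": "customfield_10001", "name": "Story Points"},
    {"id": "summary", "name": "Summary"},
]
issue = {"summary": "Fix login bug", "customfield_10001": 5}
readable = rename_custom_fields(issue, build_field_map(fields))
```

Fetching the field list once and caching it keeps the per-issue remapping cheap.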

How to make a request to get all information of a tenant like microsoft teams does?

I was reading the Microsoft Graph API documentation on batching queries and did not find what I need.
Basically I need to combine two or more requests, but one depends on a value from another. I know there is a "dependsOn" feature to make one request wait for another; that is not what I am looking for.
Request one: GET '/me/joinedTeams';
Request two: GET 'teams/{groupId}/channels';
Request one returns an array of groups, and each element of that array has an id property. Can I batch these two requests, using the values from the first to build the second?
I am looking for a way to do a single GET that returns all values for one tenant, the way the Microsoft Teams application does, returning all teams, all chats, etc. Batching requests is the closest we can get, I think.
Or is there another way to generate the token for the https://chatsvcagg.teams.microsoft.com/api/v1/teams/users/me URL like Microsoft does?
@Gaspar, multiple API calls can be batched using JSON batching, but a batch cannot handle calls that depend on each other's response values.
If you have such a dependency, you have to make separate calls.
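In practice that means one plain call for the parent request, then a single JSON batch for all the dependent calls. A sketch of building the $batch body in Python (request construction only; the group IDs are placeholders that would come from a real /me/joinedTeams response):

```python
import json

def build_channels_batch(group_ids):
    # One batch entry per team; Graph allows up to 20 requests per $batch call.
    requests = [
        {"id": str(i), "method": "GET", "url": f"/teams/{gid}/channels"}
        for i, gid in enumerate(group_ids, start=1)
    ]
    return {"requests": requests}

# group_ids would be extracted from the /me/joinedTeams response:
group_ids = ["11111111-aaaa", "22222222-bbbb"]
batch_body = build_channels_batch(group_ids)
payload = json.dumps(batch_body)
# POST payload to https://graph.microsoft.com/v1.0/$batch
```

Responses come back keyed by the "id" you assigned, so each channels result can be matched back to its team.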

XSLT transform/mapping work - what questions will you ask?

I have been given an assignment. It pertains to integration/transformation using XML/XSLT and is deliberately vague. I have been given a sample integration (shown below) and tasked with listing several questions I would ask before delivering this design.
The hypothetical integration
Data Source --> Mapping ---> output
The question is so vague I couldn't think of much. I am not looking for anyone to plagiarise from, but I am hoping someone could post some sample questions to help me get started.
Pertinent Information
Note: Stack Overflow is not a place for you to cheat on an interview process. I am providing this information for other users who are looking to familiarize themselves with integrations. If you don't already know what questions to ask here and are applying for an SOA job, you will likely be fired within a month. Dishonesty can cost a business a lot of money, and if you cheat your way into a job, don't be surprised when you get blackballed or, worse, perpetuate a harmful stereotype.
There are a variety of questions you would need to ask before implementing this type of integration. Here are a few things that come to mind.
1. What type of integration is this?
There are a variety of different integration paradigms. I would need to know if it is
An app/request driven orchestration
A scheduled orchestration
A file transfer
A pub/sub subscription
2. Is it invoked or triggered?
An invoked integration is one that begins when it is specifically called. If I had a REST service that returned a list of countries, and you called that service every time a button was clicked, that would be an invocation-based integration.
Integrations can also be trigger-based. Let's say you had a table that stored customers, and you want to send an email whenever a new customer is added to that table. If you set your initial data source (adapter) as a trigger source on a row insert, the integration would essentially run without being explicitly called.
3. What is the data source?
I would need to know if the data source is REST, SOAP, a database (DB2, MySQL, Oracle DB, etc.), a custom adapter, etc. Is the data source adapter the entry point here, or is the initial app adapter not shown?
4. What is the schema definition of the request / response body, and how is it specified?
You have a data source (which appears to be your initial app adapter), then a transformation, and a response. You can't do any transformation (or build an integration) if you don't know what the input / output will be (with some exceptions). This is really a multi-level question.
How do I specify the request and response? Do I need to draft a JSON Schema or XSD document? Some platforms let you paste sample XML or JSON and will do their best to generate a schema for you.
What is the request and response content type? You can specify the request / response in whatever format is acceptable, but that doesn't necessarily mean that is the actual content type. For example, some platforms let you specify your request body with an XSD while the content type is actually JSON. Is it XML, JSON, plain text, other?
5. What about other parameters
Basically, what does the endpoint look like? Are there query parameters, template parameters, custom header parameters, etc?
6. How is this integration secured?
Is this integration secured using OAuth? If so, what type of tokens are used (JWT, etc.)? Does it use basic authentication?
Based on the answers to the previous questions you may then have questions about the mapping. For example, if I was provided a schema definition for the output that had an attribute called "zip", I might ask how they wish to format it. I wouldn't ask anything about what technology is used for the mapping: firstly, because it's almost always XPath/XSLT; secondly, that isn't something you need to ask, it's something you would figure out on your own.

How to enable Watson conversation service to use your own database for serving user's request

I want to build a smart search agent which would use Watson Conversation to process requests and give responses, but would use my own database, say SQL Server, to search for the desired output.
In short: instead of writing intents and dialogs manually or importing them from a CSV file, I want to write my own code in .NET such that all requests and responses are driven by the data stored in my database. I only intend to use Watson's processing and interpreting capability, but the processing must happen on my data.
For example, if the user searches for a word such as "Dog", the Watson Conversation service must search my database and give the user relevant answers based on that search.
Take a look at the solution architecture in the Watson Conversation documentation. Your database would be one of the depicted backend systems. Your application would be, as you mentioned, written in .NET and would use WCS to process the user input. It would return a response with all the associated metadata. Instead of having complete answers configured in a dialog, you would use something I have described as "replaced markers" in my collection of examples. Those markers are kind of hints to your application of which database query or which action to perform.
Note that WCS requires some intents and entities to work on. If you want to rely just on the detected intents and entities, you could work with one or two generic dialog nodes. As another technique you could use data from your database to generate intents and entities as an initial setup. In my "Mutating EgoBot" I use the Watson Conversation API to add intents and entities on the fly.
I believe you should use the standard trick:
instead of defining responses in the nodes of your dialog, define an action on the output object of the node and let your application take care of providing the response (see https://console.bluemix.net/docs/services/conversation/develop-app.html#building-a-client-application).
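The action approach can be sketched as follows (Python and an in-memory SQLite table stand in for the .NET application and SQL Server; the output shape with an "action" field is an application-level convention you define in your dialog nodes, not a fixed WCS format):

```python
import sqlite3

# The dialog node's output carries an action marker instead of a full answer:
wcs_response = {"intents": [{"intent": "search", "confidence": 0.97}],
                "output": {"action": "db_lookup", "term": "Dog"}}

# Application-side database that actually holds the answers:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE answers (term TEXT, answer TEXT)")
db.execute("INSERT INTO answers VALUES ('Dog', 'Dogs are domesticated canines.')")

def handle(response, conn):
    # Let WCS interpret the input; fulfil the marked action from our own data.
    if response["output"].get("action") == "db_lookup":
        row = conn.execute("SELECT answer FROM answers WHERE term = ?",
                           (response["output"]["term"],)).fetchone()
        return row[0] if row else "No match found."
    return response["output"].get("text", "")

answer = handle(wcs_response, db)
```

WCS still decides *which* action applies (via intents and entities); only the answer content comes from your database.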

Surveymonkey: Get all responses from a single day on a single transaction

Is there a way to get ALL the responses for a single day in one transaction for a specific survey? From the API docs, I know there is the /surveys/{id}/responses/bulk option, and I can even send the start_created_at parameter.
But I think the API response has a maximum number of records it can return; in that case, what could the solution be? Paging through the results?
I'm using the .NET API wrapper found at this site, but I can build my own wrapper if necessary.
Reference link to the API doc: /Surveys/SURVEY_ID/responses/bulk
Yes, you're right: the /surveys/{id}/responses/bulk endpoint is what you're looking for, and you can use start_created_at and end_created_at to filter the data to a date range.
The SurveyMonkey API doesn't allow a full dump of all your data; it will always be paginated. By default it paginates 50 at a time, but you can change that with the per_page GET parameter.
The max per_page varies by endpoint; for responses bulk it is 100. So you'll have to fetch 100 at a time, looping through the pages to get all your data.
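The paging loop looks roughly like this (a Python sketch with a stubbed fetch standing in for the .NET wrapper's /responses/bulk call; the page and per_page parameter names match the API, and a real fetch would also pass start_created_at / end_created_at):

```python
def fetch_page(page, per_page):
    # Stub for GET /surveys/{id}/responses/bulk?page=...&per_page=...
    all_responses = [{"id": i} for i in range(250)]  # pretend 250 responses exist
    start = (page - 1) * per_page
    return all_responses[start:start + per_page]

def fetch_all(per_page=100):
    # per_page is capped at 100 for the responses bulk endpoint.
    results, page = [], 1
    while True:
        batch = fetch_page(page, per_page)
        results.extend(batch)
        if len(batch) < per_page:   # a short page means we've reached the end
            break
        page += 1
    return results

responses = fetch_all()
```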
One alternative is to use webhooks and set up a subscriber; that way you can receive new responses in real time and fetch them one by one, keeping the data on your side up to date as responses come in rather than running a script to bulk dump everything. But this depends on your use case: if you're building something like an export feature, you'll have to go the paginated route.