I have created a scenario where I iterate through multiple modules with an array of data. This works fine.
After this completes, I want to run a module once before the scenario completes.
How do I add a module that won't get called in the loop?
There are a few ways to achieve this:
Use a Router to create a new route that is triggered after the first route completes.
Trigger a new scenario via a webhook after you are done with the current scenario.
If you are working with an array, an Array Aggregator (or another aggregator) will let you complete the iteration first and then trigger the module you want to run last.
I am not sure exactly what you want to do after the iteration is complete, but setting up the scenario as outlined below should help you get started on this.
Using a Router
For this you can create a router. The top route of the router is always executed first, so the iterator and the other operations are done there. After that, the next route is executed, and that is where you place the module you want to trigger last.
However, if you want to pass some values from the first route to the last one, you will need to set a variable on the first route and fetch it on the second. See details here: https://www.integromat.com/en/help/converger
Using an Aggregator Module
You can use an Array, Text, or Numeric Aggregator to aggregate all the iteration operations and then trigger the module that you want to run last.
As far as I know, there is no default Integromat module that can be configured to run just before the scenario ends. In the future, we may be able to leverage the Integromat API (currently in development) to do so.
I found a filter to be the easiest way of doing this: essentially checking whether the current bundle position is equal to the total number of bundles!
If you're interested in doing something on the last iteration only, you can use a filter to check whether the current bundle number equals the total number of bundles.
(screenshot: last bundle filter)
They won't let me paste pics, sigh.
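Expressed as a filter between the Iterator and the final module, the condition might look like this (assuming the Iterator is module 1; __IMTINDEX__ and __IMTLENGTH__ are the system variables behind "Bundle order position" and "Total number of bundles" - the exact names may vary by version):

{{1.__IMTINDEX__}}   Equal to   {{1.__IMTLENGTH__}}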
Context
I want to create a Power-Automate flow that automatically creates a sub-task in Azure DevOps when the Effort of a PBI is set.
When the Effort field goes from blank to a positive value, the task should be added (using the newly set Effort value as the task's Original Estimate and Remaining Work).
I managed to create a flow that does that using the "When a work item is updated" trigger.
Problem
The flow runs too often (whenever the work item changes, as long as the Effort is > 0).
Question
What would be the best way to ensure this flow runs only once per PBI?
Thoughts
Perhaps check for the presence of child tasks?
Perhaps set a hidden property when adding the task the first time and check that property afterwards?
You could add a trigger condition to the settings of your trigger action. Use the following expression:
@greater(triggerOutputs()?['body/fields/Microsoft_VSTS_Scheduling_Effort'], 0)
Add a "Send an HTTP request to Azure DevOps" action directly after the trigger. Use the following relative URI:
YourProjectName/_apis/wit/workItems/@{triggerOutputs()?['body/id']}/updates?api-version=6.0
Add a Filter Array action. Use this expression for the From field:
body('Send_an_HTTP_request_to_Azure_DevOps')['value']
In the Where of the Filter Array, use the following expression, which you add via the advanced mode:
@greater(length(string(item()?['fields']?['Microsoft.VSTS.Scheduling.Effort']?['newValue'])), 0)
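For context, each entry in the value array returned by the updates endpoint looks roughly like this (abridged; an update only carries an entry under fields for a field that actually changed in that revision):

{
  "id": 3,
  "rev": 3,
  "fields": {
    "Microsoft.VSTS.Scheduling.Effort": {
      "newValue": 5
    }
  }
}

So the Filter Array keeps exactly those updates in which Effort received a new value.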
In a Condition, check whether the Filter Array returned exactly one result, i.e. only the update that just fired the trigger. If so, the value of Effort has not been changed in the past and you can safely create your new task:
length(body('Filter_array'))
is equal to 1
Actually, I am new to the Data Movement SDK. I want to know how we can use the Data Movement SDK to remove a collection from documents that match a specific condition, in real time, in MarkLogic.
Yes, DMSDK can reprocess documents in the database, including modifying the collections on the documents.
The most efficient way to change document collections on the server might be to take an approach similar to the out-of-the-box ApplyTransformListener (as summarized by
https://docs.marklogic.com/guide/java/data-movement#id_51555) but to execute a custom module instead of a transform.
Summarizing the main points:
Write an SJS (Server-Side JavaScript) module that declares a variable (using the JavaScript var statement) to receive the document URIs sent by the client and modifies the collections on those documents using a function such as
https://docs.marklogic.com/xdmp.documentSetCollections
Install the SJS module in the modules database as described here
https://docs.marklogic.com/guide/java/resourceservices#id_13008
Create a QueryBatcher to get the document URIs either from a query on the database or from a client iterator as described here:
https://docs.marklogic.com/guide/java/data-movement#id_46947
Supply a lambda function for the QueryBatcher.onUrisReady() method - see
https://docs.marklogic.com/javadoc/client/com/marklogic/client/datamovement/QueryBatcher.html#onUrisReady-com.marklogic.client.datamovement.QueryBatchListener-
In the lambda function, construct and execute a ServerEvaluationCall to the SJS module, assigning the variable to the URIs passed to the lambda function - see:
https://docs.marklogic.com/guide/java/resourceservices#id_84134
Be sure to register failure listeners using the QueryBatcher.onQueryFailure() and ApplyTransformListener.onFailure() methods to log errors or otherwise respond to the unexpected.
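Putting those pieces together, here is a minimal sketch (the module path /ext/set-collections.sjs, the uris variable, the "to-process" and "processed" collection names, and the connection details are all assumptions for illustration; joining URIs with commas assumes no URI contains a comma).

First, the SJS module, installed in the modules database:

    // /ext/set-collections.sjs
    // 'uris' is declared with var so the Java client can bind it externally
    var uris;
    declareUpdate();
    for (const uri of uris.split(',')) {
      // replace the document's collections; xdmp.documentAddCollections and
      // xdmp.documentRemoveCollections are alternatives for finer control
      xdmp.documentSetCollections(uri, ['processed']);
    }

Then the Java side driving it with a QueryBatcher:

    import com.marklogic.client.DatabaseClient;
    import com.marklogic.client.DatabaseClientFactory;
    import com.marklogic.client.datamovement.DataMovementManager;
    import com.marklogic.client.datamovement.QueryBatcher;
    import com.marklogic.client.query.StructuredQueryDefinition;

    public class SetCollectionsJob {
        public static void main(String[] args) {
            DatabaseClient client = DatabaseClientFactory.newClient(
                "localhost", 8000,
                new DatabaseClientFactory.DigestAuthContext("myUser", "myPassword"));
            DataMovementManager dmm = client.newDataMovementManager();

            // select the documents whose collections should change
            StructuredQueryDefinition query = client.newQueryManager()
                .newStructuredQueryBuilder().collection("to-process");

            QueryBatcher batcher = dmm.newQueryBatcher(query)
                .withBatchSize(100)
                .withThreadCount(4)
                // for each batch of URIs, evaluate the installed SJS module,
                // binding the batch's URIs to the module's 'uris' variable
                .onUrisReady(batch -> batch.getClient().newServerEval()
                    .modulePath("/ext/set-collections.sjs")
                    .addVariable("uris", String.join(",", batch.getItems()))
                    .evalAs(String.class))
                // log failures rather than failing silently
                .onQueryFailure(failure -> failure.printStackTrace());

            dmm.startJob(batcher);
            batcher.awaitCompletion();
            dmm.stopJob(batcher);
            client.release();
        }
    }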
Hoping that helps,
This question already has an answer here:
Karate API Testing - Reusing variables in different scenarios in the same feature file
I have a test case where one API is called and its response data is used in the next API call, all as part of one Scenario.
I am passing test data via an Examples table with 4 records. Within the one Scenario, the output of the first Given API call is passed to the second API call, and to compare the results I need the first API call's output to compare against the second API call's results.
So is there any way to capture the first API call's data for all four test records in one variable (updating the variable each time)?
For example:
* def var = 'hello'
* def var = var + 'world'
Please help.
Please read the docs, copied below for convenience: https://github.com/intuit/karate#script-structure
Variables set using def in the Background will be re-set before every Scenario. If you are looking for a way to do something only once per Feature, take a look at callonce. On the other hand, if you are expecting a variable in the Background to be modified by one Scenario so that later ones can see the updated value - that is not how you should think of them, and you should combine your 'flow' into one scenario. Keep in mind that you should be able to comment-out a Scenario or skip some via tags without impacting any others. Note that the parallel runner will run Scenario-s in parallel, which means they can run in any order.
So please don't expect a variable in one Scenario to be update-able by another Scenario.
But within a Scenario if you want to "collect" data, there are many ways. For example try appending to a list - refer: https://github.com/intuit/karate#json-transforms
* def init = []
# do some API call
* karate.appendTo(init, response)
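A slightly fuller sketch of that pattern within one Scenario (the URLs and the id field are placeholders, not from the original question):

* def results = []
* url 'https://example.com/api/first'
* method get
* status 200
* karate.appendTo(results, response)
* url 'https://example.com/api/second'
* method get
* status 200
* karate.appendTo(results, response)
# both responses are now collected and can be compared
* match results[0].id == results[1].id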
I have an application that uses a combination of ContentService.Saved & ContentService.Saving to extend Umbraco to manage content.
I have two websites in one Umbraco installation, and I am using those methods to keep content up to date in different parts of the tree.
So far I have got everything working the way I wanted to.
Now I want to add a feature that, depending on which Umbraco user is logged in, will either publish the content or simply send it for approval.
So I have changed some lines of code from:
cs.SaveAndPublishWithStatus(savedNode, 0, false)
To this:
cs.SendToPublication(savedNode);
Now the problem I am finding is that, unlike the SaveAndPublishWithStatus() method, cs.SendToPublication() doesn't have the option of passing false so that a save event is not raised. So I get into an infinite loop.
When I attach the debugger and manually stop the infinite loop the first time it calls cs.SendToPublication(savedNode); I get exactly the behavior I want.
Any ideas about how I can get round this problem? Is there a different method that I should be using?
You are correct in saying that it currently isn't possible to set raiseEvents to false when sending an item to publication - that's a problem.
I've added that overload in v. 7.6 (http://issues.umbraco.org/issue/U4-9490).
However, considering that you need this now, an interim solution could be to make sure your code only runs once when triggered by the .Saved / .Saving events.
One way to do this would be to check the last saved date (UpdateDate) in your code. If the content was saved within the last second of the current save operation, you know that this save event was triggered by the save inside the SendToPublication action. You then also know that the item has already been sent to publication and that this doesn't need to happen again, thereby preventing the endless loop.
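A minimal sketch of that guard in a Saved handler (assuming Umbraco 7.x event signatures and using System, Umbraco.Core.Events, Umbraco.Core.Models, and Umbraco.Core.Services; the one-second threshold is the assumption suggested above, not a library constant):

    private void ContentService_Saved(IContentService sender, SaveEventArgs<IContent> e)
    {
        foreach (var node in e.SavedEntities)
        {
            // If this node was saved within the last second, assume the save
            // was raised by our own SendToPublication call and skip it,
            // which breaks the infinite loop.
            if ((DateTime.Now - node.UpdateDate).TotalSeconds < 1)
                continue;

            sender.SendToPublication(node);
        }
    }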
My setup: Rails 2.3.10, Ruby 1.8.7
I need to implement an API that is essentially a GET but, depending on a date, could involve DELETE and POST actions as well. Let me explain: for a particular day, the API needs to add 10 items to one table, randomly selected from another table, but this is only done once a day. If the items added are from the previous day, then the API needs to delete those items and randomly add 10 new ones. If multiple calls are made to the API in the same day, then it's just a GET after the initial creation. Hope this makes some sense.
How would I implement this as a RESTful API, if that is at all possible?
How about?
GET /Items
If the next day has arrived, then generate 10 new items before returning them. If the next day has not arrived, then return the same 10 items you previously returned. There is no reason the server cannot update the items based on a GET. The client is not requesting an update so the request is still considered safe.
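In Rails 2.3 terms, that could be sketched roughly like this (the DailyPick and SourceItem models, the MySQL-specific RAND() ordering, and the JSON rendering are all assumptions for illustration, not the definitive implementation):

    class ItemsController < ApplicationController
      def index
        # regenerate the picks if none were created today
        if DailyPick.count(:conditions => ['created_at >= ?', Time.now.beginning_of_day]) == 0
          DailyPick.delete_all
          # pick 10 random rows from the source table (RAND() is MySQL-only)
          SourceItem.all(:order => 'RAND()', :limit => 10).each do |item|
            DailyPick.create!(:source_item_id => item.id)
          end
        end
        render :json => DailyPick.all
      end
    end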
Not sure if I'm understanding you correctly, but just by looking at this, all I can think is the following: what a horrible thing, a read which, depending on what has previously been added, performs a delete. No disrespect, but seriously. Or maybe it is just the way you are describing it.
Whatever the case, if you want to have a RESTful API, then you have to treat GET and PUT distinctly.
I don't think you have a clear picture of the use cases for your API (or your system, for that matter). My suggestion would be to re-model this as follows:
Define a URI for your resource, say /random-items
a GET /random-items gets you between 0 and 10 items currently in the system.
a PUT /random-items with an empty body does the following:
delete any random items added on or before yesterday
add as many random items as necessary to complete 10
an invocation to DELETE /random-items should return a 405 Method Not Allowed http error code.
an invocation to POST /random-items should add no more than 10 items, deleting as needed.
/random-items/x is a valid URI so long as x is one of the items currently under /random-items.
A GET to it should return a representation for it or a 404 if it does not exist
A DELETE to it deletes it from under /random-items or 404 if it does not exist
A PUT to it should change its value if it makes sense (or return a 405)
A POST to it should return a 405 always
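The once-a-day refresh then plays out along these lines (a sketch; the status codes are illustrative):

    PUT /random-items          -> 204 No Content  (stale picks deleted, topped up to 10)
    GET /random-items          -> 200 OK, the current 0-10 items
    GET /random-items/42       -> 200 OK, or 404 if 42 is not a current item
    DELETE /random-items/42    -> 200/204, or 404 if it does not exist
    POST /random-items/42      -> 405 Method Not Allowed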
That should give you the skeleton of a sorta-RESTful API.
However, if you insist, or need to overload GET so that it performs the additions and deletions behind the scenes, then you are making it non-RESTful.
That in itself is not a bad thing if you legitimately have a need for it (as no architectural paradigm is universally applicable), but you need to understand what RESTful means and when/why/how to break it.