Azure Data Factory - query pipelines by data annotations

Is there a way to programmatically query (using the .NET SDK) the list of running pipelines by data annotations?
I can set an annotation when I run a pipeline, as explained here, but I am not sure how to query or filter pipelines by that annotation.

According to the API documentation, querying pipelines by annotation is not supported. The filter parameter supports the following operands:
Pipeline runs: PipelineName, RunStart, RunEnd, Status
Activity runs: ActivityName, ActivityRunStart, ActivityRunEnd, ActivityType, Status
Trigger runs: TriggerName, TriggerRunTimestamp, Status
Possible values include: 'PipelineName', 'Status', 'RunStart', 'RunEnd', 'ActivityName', 'ActivityRunStart', 'ActivityRunEnd', 'ActivityType', 'TriggerName', 'TriggerRunTimestamp', 'RunGroupId', 'LatestOnly'
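For reference, here is a minimal sketch of filtering pipeline runs on one of the supported operands with the .NET SDK (Microsoft.Azure.Management.DataFactory), assuming an authenticated DataFactoryManagementClient named client; the resource group, factory, and pipeline names are placeholders:
// Sketch: query pipeline runs by a supported operand (PipelineName).
var filter = new RunFilterParameters(
    lastUpdatedAfter: DateTime.UtcNow.AddDays(-1),
    lastUpdatedBefore: DateTime.UtcNow,
    filters: new List<RunQueryFilter>
    {
        new RunQueryFilter(RunQueryFilterOperand.PipelineName,
                           RunQueryFilterOperator.Equals,
                           new List<string> { "MyPipeline" })
    });
PipelineRunsQueryResponse runs =
    client.PipelineRuns.QueryByFactory("myResourceGroup", "myDataFactory", filter);
foreach (PipelineRun run in runs.Value)
{
    Console.WriteLine($"{run.RunId}: {run.Status}");
}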


How does an agent pass data to TaskRouter?

I have an issue. Here is my flow:
agentA --> TaskRouter --> agentB
I know how to pass data (some additional customer info) from TaskRouter to agentB (via task attributes), but I don't know how agentA can pass data to TaskRouter (or how TaskRouter receives that data).
Twilio developer evangelist here.
To pass data to other agents on a task through Flex, you can add further attributes to your task from within the Flex interface.
You would need to build a Flex plugin that allows you to add that data. I have an example Flex plugin that adds a text input to the task panel and allows agents to use it to set the customer name in the task attributes.
Once you have access to the task object, you can update the attributes with the function:
task.setAttributes(newAttributes);
Note that setAttributes overwrites the existing attributes, so merge the current attributes into the new object to make sure you don't lose them:
const newAttributes = { ...task.attributes, name: this.state.name };
task.setAttributes(newAttributes);
This will update the attributes on the task that another agent will then see when they are assigned the task.
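Putting those two lines together, a task-panel handler in such a plugin might look like the following sketch (the handler name and state field are illustrative; setAttributes returning a promise is an assumption based on recent Flex versions):
// Illustrative Flex plugin handler: merge the agent's input into the
// existing task attributes, then persist them on the task.
handleSave(task) {
  const newAttributes = { ...task.attributes, name: this.state.name };
  return task.setAttributes(newAttributes)
    .then(() => console.log('Task attributes updated'));
}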

Is using Durable entities a good way to store result of the workflow?

I want my orchestrator function to return an object representing what happened in my workflow, basically some stats about what my workflow has done: users retrieved from an API, users inserted in a database, ...
What I was doing until now was to return this information from my activity functions and aggregate it in my orchestrator before returning it:
return new
{
    UsersInserted = myActivity1.InsertedUsersNumber,
    UsersRetrievedFromApi = myActivity2.RetrievedUserNumber
};
However, I now have activities that run in parallel (thanks to a Task.WhenAll(myActivity1, myActivity2)), so I can't return results with different types.
That's why I was wondering if using a Durable Entity in my code to store everything I want to return at the end in my orchestrator was a good solution.
I don’t think you need Durable Entities to store the results of your workflows. The syntax you’re using will still work even if your activity functions return values of different types.
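For example, here is a minimal sketch of fanning out two differently typed activities and still aggregating their results (the activity names, result types, and the IDurableOrchestrationContext variable named context are assumptions):
// Sketch: Task.WhenAll accepts tasks of different result types because
// Task<T> derives from Task; each strongly typed result is read afterwards.
Task<int> insertTask = context.CallActivityAsync<int>("InsertUsers", input);
Task<List<string>> fetchTask = context.CallActivityAsync<List<string>>("FetchUsersFromApi", input);
await Task.WhenAll(insertTask, fetchTask);
return new
{
    UsersInserted = insertTask.Result,
    UsersRetrievedFromApi = fetchTask.Result.Count
};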
That said, Durable Entities might be a good option if you want to save the results of activities outside of your orchestration. Then they could be queried independently and don’t even require your orchestration to be complete.

How to use ValueProviders to assign table and columns for SpannerIO at runtime?

I have a use case where I have to pick up selected data from a Spanner table and dump it to BigQuery.
The catch is that, for this batch job, the name of the table and the columns to select are only known at runtime.
It seems that Dataflow's SpannerIO doesn't accept the table and the columns at runtime. Please refer to the code below for a better understanding:
p.apply(SpannerIO.read()
    .withSpannerConfig(spannerConfig)
    .withTable("tablename")
    .withColumns(columns)); // 'columns' is a list or array of column names
These methods only accept Strings, not ValueProviders. How can I make this work?
To access runtime values you need to use the ReadAll transform and build an instance of ReadOperation in a previous step.
See "Reading data from all available tables" in the examples.
Yes, the withColumns and withTable methods of SpannerIO.Read do not take a ValueProvider by default.
Could you write code outside your pipeline to get the table and column names, and then pass them into withTable and withColumns as strings at pipeline-construction time?
If they can be passed in as command-line arguments, consider using PipelineOptions, as sketched below.
Here is a simple example. More docs on using Dataflow connectors from Cloud Spanner can be found here.
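For the PipelineOptions route, a sketch of a custom options interface exposing the runtime value (the interface and method names are illustrative, matching the getMyParamValueProvider() call used in the accepted code below):
// Sketch: expose the runtime parameter as a ValueProvider via custom options.
public interface MyOptions extends PipelineOptions {
    @Description("Columns to read, resolved at runtime")
    ValueProvider<String> getMyParamValueProvider();
    void setMyParamValueProvider(ValueProvider<String> value);
}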
I used ReadOperation as suggested by Mairbek:
p.apply(Create.ofProvider(options.getMyParamValueProvider(), StringUtf8Coder.of()))
    .apply(MapElements.via(new SimpleFunction<String, ReadOperation>() {
        @Override
        public ReadOperation apply(String value) {
            return ReadOperation.create()
                .withTable("TableName")
                .withColumns(value);
        }
    }))
    .apply(SpannerIO.readAll().withSpannerConfig(spannerConfig));

Get list item version history in SharePoint 2016 provider hosted app

I have a provider-hosted app, and I want to retrieve the list item version history, with all the columns of an item, using CSOM.
I tried using a CAML query, but I was not able to achieve the required functionality.
Any pointers will be helpful.
When loading the file, use lambda expressions to explicitly request all the required properties:
Microsoft.SharePoint.Client.File file = context.Web.GetFileByUrl("URL_of_FILE"); // the file can be referenced however you want
context.Load(file, x => x.ListItemAllFields, x => x.Versions);
// Execute the query to populate the requested properties
context.ExecuteQuery();
To reference a specific column value:
string columnVal = file.ListItemAllFields["col_val_id"].ToString();
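After ExecuteQuery, the loaded Versions collection can be enumerated; a minimal sketch listing each version's label and creation date:
// Sketch: enumerate the file's version history that was loaded above.
foreach (FileVersion version in file.Versions)
{
    Console.WriteLine($"Version {version.VersionLabel} created {version.Created}");
}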

spring-data-neo4j query using dynamic key and dynamic value

I want to write a spring-data-neo4j query using a dynamic key and a dynamic value, like the following code:
public interface NodeRepository extends Neo4jRepository<Node, Long> {
    @Query("MATCH(n:Node{{key}={value}})return n")
    Iterable<Node> queryByProperty(@Param("key") String key, @Param("value") String value);
}
But it says the {key} must be something like a variable in the string, as in MATCH(n:Node{name={value}})return n; it can't be {key}. But my property's key is dynamic, just like the value. How can I implement this, and is it possible?
Short answer: the query will be sent "as is" to the database, and because Cypher only supports placeholders for values, this will cause an error.
Slightly longer answer: when it comes to executing the method, Spring Data Neo4j checks whether it has already pre-processed the query, and either processes and caches it or just loads it from the cache. This is done to reduce the time it takes to execute the method from the application.
Pre-processing means SDN knows what parameters are in the query and just inserts the values in the right places when the method is called.
If SDN were to support more features in the query than Cypher does, the query would have to be re-processed every time the method gets called, to create a new query that can be sent to Neo4j.
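A common workaround (a sketch, not an SDN feature): drop down to the OGM Session in a custom repository implementation, build the property key into the Cypher string after validating it against a whitelist (keys cannot be parameterized), and parameterize only the value. The injected session field and the java.util.Collections import are assumptions:
// Sketch: dynamic property key via string construction. Validate 'key' against
// a whitelist before interpolating it; only the value is a real Cypher parameter.
public Iterable<Node> queryByProperty(String key, String value) {
    String cypher = "MATCH (n:Node) WHERE n.`" + key + "` = {value} RETURN n";
    return session.query(Node.class, cypher, Collections.singletonMap("value", value));
}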
