Azure DevOps REST: Retrieve all warning messages from build

I'm using the Microsoft.TeamFoundationServer.Client NuGet package to access these REST APIs; I have this working well for a number of different scenarios already. A new requirement is to surface all the warning messages emitted during a build pipeline.
My initial call to
Timeline timeline = await BuildHttpClient.GetBuildTimelineAsync(project, buildId);
seemed to be what I was looking for: calling timeline.Records.Select(r => r.WarningCount).Sum() returns the same number of warnings as displayed on the build summary in the web UI, and warnings do appear in the Issues property of the Records instances. However, for a build that produced 14,000+ warnings, timeline.Records.SelectMany(r => r.Issues) returns only 57 in total; the API appears to return only the first ten or so warnings from each task the pipeline executed.
I've seen elsewhere that I could do something like
Stream s = await BuildHttpClient.GetBuildLogAsync(project, buildId, logId);
to get the logs individually for each task (after enumerating them with a separate call) and then parse the returned stream for warning codes. However, since there are quite a few tasks in this pipeline, some of them with rather verbose logging, I would prefer to avoid hammering the service more than necessary, as well as downloading large quantities of data unnecessarily.
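If it does come down to pulling logs task by task, the warning lines can at least be filtered cheaply as each log streams in. A minimal Python sketch of that filtering step (the `##[warning]`/`##[error]` markers are how Azure Pipelines task logs flag issues; treat the exact log format as an assumption to verify against your own logs):

```python
# Sketch: extract issue lines from raw Azure Pipelines task log text.
# The "##[warning]" / "##[error]" markers are assumed from typical task
# log output; timestamps precede the marker, so we match mid-line.
def extract_issues(log_text, prefixes=("##[warning]", "##[error]")):
    issues = []
    for line in log_text.splitlines():
        for prefix in prefixes:
            idx = line.find(prefix)
            if idx != -1:
                issues.append(line[idx:])  # keep from the marker onward
                break
    return issues

sample = "\n".join([
    "2023-01-01T00:00:00Z ##[warning]CS0168: unused variable",
    "2023-01-01T00:00:01Z plain log line",
    "2023-01-01T00:00:02Z ##[error]MSB3021: copy failed",
])
```

Filtering line by line like this keeps memory flat even for verbose task logs, though it does not avoid the download itself.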
Is there a more efficient mechanism for retrieving a collection of all warning (and, optionally, error) messages, or something I've missed in the REST docs, such as a way of getting the timeline service call to return them all?

Related

Error handling in a data pipeline using Project Reactor

I'm writing a data pipeline using Reactor and Reactor Kafka, and I use Spring's Message<> to save the ReceiverOffset of the ReceiverRecord in the headers, so that I can call ReceiverOffset.acknowledge() when processing finishes. I also have the out-of-order commit feature enabled.
When an event fails processing, I want to be able to log the error, write to another topic that represents all the failure events, and commit to the source topic. I'm currently solving that by returning Either<Message<Error>, Message<MyPojo>> from each processing stage; that way the stream is not stopped by exceptions, and I'm able to keep the original event headers and eventually commit the failed messages at the bottom of the pipeline.
The problem is that each step of the pipeline gets Either<> as input and needs to filter out the previous errors and apply its logic only to the Either.right, which can be cumbersome, especially when working with buffers, where the operator gets List<Either<>> as input. I would like to keep my business pipeline clean and receive only Message<MyPojo> as input, while still not missing errors that need to be handled.
I read that sending those error messages to another channel or stream is a solution for that.
Spring Integration uses that pattern for error handling, and I also read an article (link to article) that solves this problem in Akka Streams using divertTo().
I couldn't find documentation or code examples of how to implement that in Reactor. Is there any way to use Spring Integration's error channel with Reactor, or are there any other ideas for implementing this?
Not familiar with Reactor per se, but you can keep the stream linear. The trick, since Vavr's Either is right-biased, is to use flatMap, which takes a function from Message<MyPojo> to Either<Message<Error>, Message<MyPojo>>. If the incoming Either is a right (i.e. a Message<MyPojo>), the function gets invoked; otherwise it just gets passed through.
// Apologies if the Java is atrocious... haven't written Java since pre-Java 8
incomingEither.flatMap(
    myPojoMessage -> ... // compute a new Either
);
Presumably at some point you want to do something (publish to a dead-letter topic, tickle metrics, whatever) with the Message<Error> case, so for that, orElseRun will come in handy.
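The same right-biased behavior can be sketched outside Java. Below is a minimal Python stand-in for Vavr's Either (the stage names like parse_stage are hypothetical) showing how flatMap keeps the pipeline linear while an orElseRun-style hook handles the error case:

```python
# Minimal sketch of Vavr-style right-biased Either semantics, in Python.
class Either:
    def __init__(self, value, is_right):
        self.value = value
        self.is_right = is_right

    @staticmethod
    def right(value):
        return Either(value, True)

    @staticmethod
    def left(value):
        return Either(value, False)

    def flat_map(self, fn):
        # Right-biased: fn runs only on rights; lefts pass through untouched.
        return fn(self.value) if self.is_right else self

    def or_else_run(self, action):
        # Side effect on lefts, e.g. publish to a dead-letter topic.
        if not self.is_right:
            action(self.value)
        return self

def parse_stage(event):
    # Hypothetical stage: fail events that lack an "id" field.
    if "id" not in event:
        return Either.left({"error": "missing id", "original": event})
    return Either.right(event)

def enrich_stage(event):
    # Hypothetical stage: only ever sees the happy path.
    return Either.right({**event, "enriched": True})

ok = Either.right({"id": 1}).flat_map(parse_stage).flat_map(enrich_stage)
bad = Either.right({}).flat_map(parse_stage).flat_map(enrich_stage)
```

Each stage is written against the plain payload; errors produced upstream skip every later flat_map and surface once at the end.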

How to know if a Neo4j background job completed successfully?

I'm using Neo4j. For large data imports from external CSVs, Parquet files, etc., there is a very handy "fire and forget" command, apoc.periodic.submit. There is also apoc.periodic.list, which lists the background jobs.
While the background job is executing, it appears in the output of apoc.periodic.list. But after it finishes, whether with an error or successfully, it disappears from this list without any feedback about the completion status.
Is there a general way to check a background job's finish status? Is there a more suitable API for my purposes?
If there is a way to directly check the error status of the fire-and-forget routines, I don't see it in the documentation (they are fire-and-forget, so perhaps it comes with the territory?).
Ideas
Don't background the query itself; background a process/task that waits for a blocking Cypher execution to finish and captures the error code.
Check for success instead of failure (if it didn't succeed, you know it failed, right?). This may be evident from what the Cypher does, or you could add a graph content update for this purpose, e.g. set a last_updated property on a node. Do that last, so that if the Cypher fails, the property is not updated.
You could enable the query log and then check there to see what happened; most likely this query has a unique signature, and the last execution can be found easily in the log (with its status/error code).
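Combining the marker idea with polling, a hedged sketch in Python (names are hypothetical and the Cypher is illustrative, not tested against a live Neo4j): submit the job with a success marker written as its final step, poll apoc.periodic.list until the job name disappears, then check the marker.

```python
# Illustrative Cypher: the marker SET only runs if the import completed.
SUBMIT = """
CALL apoc.periodic.submit('my-import', '
  LOAD CSV WITH HEADERS FROM "file:///data.csv" AS row
  CREATE (:Item {id: row.id})
  WITH count(*) AS done
  MERGE (s:ImportStatus {name: "my-import"})
  SET s.last_updated = timestamp()
')
"""

def job_still_running(list_rows, job_name):
    # list_rows: dicts as returned by CALL apoc.periodic.list()
    return any(row.get("name") == job_name for row in list_rows)

def job_succeeded(marker_rows):
    # marker_rows: values of s.last_updated for the ImportStatus node;
    # empty or null means the import never reached its final step.
    return len(marker_rows) > 0 and marker_rows[0] is not None
```

The two helpers are pure so the polling loop (driver session, sleep between checks) can be wired up however your application prefers.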

How can I save additional messages that would normally be excluded by Loglevel in case of errors

I have a basic Serilog usage scenario: logging messages from a web application. In production I set the log level to Information.
Now my question: is it possible to write the last ~100 debug/trace messages to the log after an error occurs, so that I have a short history of detailed messages from before the error? This would keep my log clean and give me enough information to track down errors.
I created such a mechanism years ago for another application and logging framework, but I'm curious whether that's already possible with Serilog.
If not, where in the pipeline would be the place to implement such logic?
This is not something that Serilog has out of the box, but it would be possible to implement by writing a custom sink that wraps all other sinks, caches the most recent ~100 Debug messages, and forwards them to the wrapped sinks when an Error message occurs.
You might want to look at the code of Serilog.Sinks.Async for inspiration, as it shows you a way of wrapping multiple sinks into one.
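To give a feel for the buffering logic (this is not Serilog's sink API; it's a Python analogue built on the standard logging module), a wrapping handler can keep a ring buffer of sub-threshold records and replay them only when an error arrives:

```python
import logging
from collections import deque

class RecentHistoryHandler(logging.Handler):
    """Buffer low-level records; replay the last N of them on ERROR."""

    def __init__(self, target, capacity=100, flush_level=logging.ERROR):
        super().__init__(level=logging.DEBUG)
        self.target = target                  # the "real" wrapped handler
        self.buffer = deque(maxlen=capacity)  # oldest entries drop off
        self.flush_level = flush_level

    def emit(self, record):
        if record.levelno >= self.flush_level:
            # Replay the buffered history, then the error itself.
            for buffered in self.buffer:
                self.target.handle(buffered)
            self.buffer.clear()
            self.target.handle(record)
        elif record.levelno >= self.target.level:
            self.target.handle(record)  # normal records pass straight through
        else:
            self.buffer.append(record)  # below threshold: remember silently

class ListHandler(logging.Handler):
    """Demo target that collects records in a list."""
    def __init__(self):
        super().__init__(level=logging.INFO)
        self.records = []
    def emit(self, record):
        self.records.append(record)

target = ListHandler()
logger = logging.getLogger("buffered-demo")
logger.setLevel(logging.DEBUG)
logger.handlers = [RecentHistoryHandler(target, capacity=3)]
logger.propagate = False
logger.debug("d1")
logger.debug("d2")
logger.info("i1")
logger.error("boom")
```

The deque with maxlen gives the "last ~100" semantics for free: older debug records silently age out instead of forcing a flush, which is the key difference from a buffer that flushes when full.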

Show all failing builds in TFS/VSTS

I've created a widget for TFS/VSTS which allows you to see the number of failing builds. This number is based on the last build result for each build definition. I've read the REST API documentation, but the only way to get this result is:
Get the list of definitions
Get the list of builds filtered by definitions=[allIds], maxBuildsPerDefinition=1, resultFilter=failed
This is actually pretty slow (two calls, lots of response data), and I thought it should be possible in a single query. One of the problems is that maxBuildsPerDefinition doesn't work without the definitions filter. Does anyone have an idea how to load this data more efficiently?
I'm afraid the answer is no. The approach you're using is the most efficient one available for now.
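For what it's worth, the aggregation half of the two-step approach is easy to factor out and test on its own. A small Python sketch (field names are simplified stand-ins for the real build objects returned by the API):

```python
# Given a flat list of builds, keep only the newest build per definition
# and count how many of those latest builds failed. This mirrors what
# maxBuildsPerDefinition=1 plus resultFilter would do server-side.
def count_failing(builds):
    latest = {}
    for b in builds:
        current = latest.get(b["definition_id"])
        if current is None or b["finish_time"] > current["finish_time"]:
            latest[b["definition_id"]] = b
    return sum(1 for b in latest.values() if b["result"] == "failed")
```

Doing the reduction client-side like this means a single unfiltered builds call can suffice, at the cost of a larger response body, which is exactly the trade-off described in the question.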

Async logging of a static queue in Objective-C

I'd like some advice on how to implement the following in Objective-C. The actual application is related to an A/B testing framework I'm working on, but that context shouldn't matter too much.
I have an iOS application, and when a certain event happens I'd like to send a log message to an HTTP service endpoint.
I don't want to send a message every time the event happens. Instead I'd prefer to aggregate them and, when the count reaches some (configurable) number, send them off asynchronously.
I'm thinking of wrapping a static NSMutableArray in a class with an add method. That method can check whether we have reached the configurable maximum; if we have, aggregate and send asynchronously.
Does Objective-C offer any better constructs for storing this data? Perhaps one that helps handle concurrency issues? Maybe some kind of message queue?
I've also seen some solutions using dispatch queues that I'm still trying to get my head around (I'm new).
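The buffering pattern itself is language-agnostic; here is a minimal thread-safe sketch in Python (names hypothetical) of the "accumulate under a lock, flush at a threshold" idea, where the flush callback stands in for the async HTTP upload:

```python
import threading

class BatchingLogger:
    """Collect events; hand off a batch once the threshold is reached."""

    def __init__(self, threshold, flush):
        self.threshold = threshold
        self.flush = flush          # e.g. schedule an async HTTP upload
        self.buffer = []
        self.lock = threading.Lock()

    def add(self, event):
        to_send = None
        with self.lock:
            self.buffer.append(event)
            if len(self.buffer) >= self.threshold:
                to_send = self.buffer
                self.buffer = []    # swap the batch out under the lock
        if to_send is not None:
            self.flush(to_send)     # do the I/O outside the lock
```

The Objective-C equivalent of the lock-and-swap would typically be a serial dispatch queue guarding the NSMutableArray, with the upload dispatched to a background queue.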
If the log messages are important, keeping them in memory (in an array) might not suffice: if the app quits or crashes, the NSArray will not persist to the next execution.
Instead, you should save them to a database with a 'synced' flag. You can trigger the sync module on every insert to check whether the number of entries with the flag set to false has reached the threshold; if so, trigger the upload and then either set the flag to true for all uploaded records or simply delete the synced records. This also lets you separate your logging module from your syncing module, so the two work independently.
You will find a lot of help online for syncing an SQLite db or Core Data; check these links, or simply google "iOS database sync". If your requirements are not very complex and you don't mind using third-party or open-source code, it is usually better to go with a readily available solution than to reinvent the wheel.
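The sync-flag idea can be sketched with SQLite (Python's stdlib sqlite3 standing in for the on-device store; the schema and threshold here are made up for illustration):

```python
import sqlite3

def make_db():
    # In-memory DB for the sketch; on iOS this would be a file-backed store.
    db = sqlite3.connect(":memory:")
    db.execute(
        "CREATE TABLE log ("
        "  id INTEGER PRIMARY KEY,"
        "  message TEXT,"
        "  synced INTEGER DEFAULT 0)"
    )
    return db

def log_event(db, message, threshold, upload):
    db.execute("INSERT INTO log (message) VALUES (?)", (message,))
    (pending,) = db.execute(
        "SELECT COUNT(*) FROM log WHERE synced = 0"
    ).fetchone()
    if pending >= threshold:
        rows = db.execute(
            "SELECT id, message FROM log WHERE synced = 0"
        ).fetchall()
        upload([m for _, m in rows])   # real code: async POST, mark on success
        db.execute("UPDATE log SET synced = 1 WHERE synced = 0")
    db.commit()
```

Because events hit the database before any upload is attempted, a crash between insert and upload loses nothing; the unsynced rows are simply picked up by the next flush.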
