Is there an API to access the Import/Copy Components log files in D2L?

Using the GUI in Desire2Learn, if you have sufficient permissions you can look at the history of import/copy requests for a target org unit. Is there a programmatic way to access that history?
We copy lots of items using Valence, and the only indication we get is whether the job itself failed. Often a job will succeed, yet some part of the copy fails, and we want to know that.

A new experimental API route to retrieve the logs for an import job will be released in v10.4.10 of Brightspace through continuous release. The docs for it are coming very soon, but the route won't be available to clients on platforms older than v10.4.10.
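In the meantime, here is a rough sketch in Python of how you might call it once it ships, using the d2lvalence SDK to sign the request. The route path, IDs, and keys below are placeholders (the route wasn't documented yet at the time of writing), and the SDK parameter names may differ slightly by version:

# Sketch: fetch import job logs over Valence. The route is a placeholder;
# substitute the documented path once the v10.4.10 docs are published.
import requests
from d2lvalence import auth as d2lauth

app_ctx = d2lauth.fashion_app_context(app_id='APP_ID', app_key='APP_KEY')
user_ctx = app_ctx.create_user_context(
    d2l_user_id='USER_ID', d2l_user_key='USER_KEY',
    host='lms.example.edu', encrypt_requests=True)

# Hypothetical route shape -- check the Valence docs for the real one.
route = '/d2l/api/le/1.5/import/6606/imports/JOB_TOKEN/logs'
url = user_ctx.create_authenticated_url(route, method='GET')

resp = requests.get(url)
resp.raise_for_status()
print(resp.json())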

Related

Parameterized Remote Trigger plugin doesn't respect Build Token Root

I'm trying to trigger a job from one Jenkins (A) on another one (B). I've installed 2 plugins:
Parameterized Remote Trigger
Build Token Root
My issue is that I'm able to trigger a build on Jenkins (B) using, for example, curl and a token only, which means the Build Token Root plugin is working as expected, but the Parameterized Remote Trigger plugin doesn't seem to respect it.
I probably should mention that I've tried different auth options, Trust All certs, etc.
My Jenkins (A) config:
Logs are the same with and without Build Token Root support enabled.
Logs I'm getting:
I was able to get this working by allowing Anonymous users Overall Read and Job Read access. It appears this is necessary because the Parameterized Remote Trigger plugin attempts to call additional APIs apart from just /build and /buildWithParameters, and those calls are the ones that fail.
It makes sense that, in order to have the default blocking capability, it needs to call additional APIs to poll, but even setting blockBuildUntilComplete: false did not fix the issue. Considering that the Parameterized Remote Trigger plugin plainly says in its documentation that it "plays well" with the Build Token Root plugin, it really is not an easy feat to make them work together.
In my opinion, using the two together isn't an ideal solution because of the need to allow unauthenticated users to browse your Jenkins instance via the UI. I suspect you could (although I haven't tried it) use an API token for a user with only Overall Read and Job Read access instead of giving all Anonymous users those rights, but that brings the overhead of managing a user and an API token, which defeats our primary motivation for using the Build Token Root plugin in the first place.
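For comparison, here is a minimal Python sketch of the token-only trigger that does work, going through the Build Token Root plugin's /buildByToken endpoint; the Jenkins URL, job name, token, and parameter below are all placeholders:

# Trigger a parameterized build on Jenkins (B) via the Build Token Root
# plugin's endpoint, which accepts the token without other authentication.
import requests

resp = requests.post(
    'https://jenkins-b.example.com/buildByToken/buildWithParameters',
    params={
        'job': 'my-downstream-job',  # job to trigger on Jenkins (B)
        'token': 'MY_BUILD_TOKEN',   # token configured in that job
        'SOME_PARAM': 'some-value',  # any build parameters
    },
    timeout=30,
)
resp.raise_for_status()
print('Triggered, HTTP', resp.status_code)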

How to do this type of testing in Dataflow (called feature testing at Twitter)?

We do something called feature testing, like so -> https://blog.twitter.com/engineering/en_us/topics/insights/2017/the-testing-renaissance.html
TL;DR of that article: we send a request to the microservice (a REST POST with a body), mock GCP Storage, and mock the downstream API call so the entire microservice can be refactored. Also, we can swap out our platforms/libs with no changes to our tests, which makes us extremely agile.
My first question is: can Dataflow (Apache Beam) receive a REST request to trigger a job? I see much of the API is around 'create job', but I don't see 'execute job' in the docs, while I do see that 'get status' returns the status of job execution. I just don't see a way to trigger a job to:
read from my storage API (which is mockable and sits in front of GCP)
process the file, hopefully across many nodes
call the downstream APIs (which are also mockable)
Then, in my test, I simply want to simulate the HTTP call; when the file is read, return a real customer file; and after it's done, verify that all the correct requests were sent to the downstream APIs.
We are using Apache Beam in our feature tests, though I'm not sure if it's the same version as Google's Dataflow :( as that would be ideal! Hmm, is there a documented Apache Beam version for Google's Dataflow we can get?
thanks,
Dean
Apache Beam's DirectRunner should be very close to Dataflow's environment, and it's what we recommend for this type of single-process pipeline test.
My advice would be the same: use the DirectRunner for your feature tests.
You can also use the Dataflow runner, but that sounds like it would be a full integration test. Depending on the data source / data sink, you may be able to pass it mocking utilities.
BigQueryIO is a good example. In the Java SDK it has a withTestServices method that you can use to pass objects that mock the behavior of external services.
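For the Python SDK, a feature-style test on the DirectRunner can look like the sketch below; FormatCustomer is a hypothetical stand-in for the transform under test, and in a real feature test you would read a fixture file rather than use Create:

# A DirectRunner feature test with the Beam Python SDK. TestPipeline runs
# on the DirectRunner by default, so this executes in-process.
import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that, equal_to


class FormatCustomer(beam.DoFn):
    """Hypothetical transform under test: uppercases the customer name."""
    def process(self, record):
        yield {'id': record['id'], 'name': record['name'].upper()}


def test_format_customer():
    with TestPipeline() as p:
        rows = p | beam.Create([{'id': 1, 'name': 'dean'}])
        out = rows | beam.ParDo(FormatCustomer())
        assert_that(out, equal_to([{'id': 1, 'name': 'DEAN'}]))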

Getting change IDs of dependent changes from the Gerrit REST or SSH API?

Gerrit allows associating external changes with a single change request via "Depends-On" in the commit message. However, by the looks of it, the REST API does not expose these dependencies.
I can of course get the commit message, parse it, and then fetch the change request for each external change.
Does anyone know of a more streamlined option to achieve the same?
You can get the related changes using REST API:
GET /changes/{change-id}/revisions/{revision-id}/related
Retrieves related changes of a revision. Related changes are changes that either depend on, or are dependencies of, the revision.
Request:
GET /changes/gerrit~master~I5e4fc08ce34d33c090c9e0bf320de1b17309f774/revisions/b1cb4caa6be46d12b94c25aa68aebabcbb3f53fe/related HTTP/1.0
See more info in the Gerrit REST API documentation.
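For example, in Python the call (with placeholder host, change ID, and revision) looks like this; note that Gerrit prefixes its JSON responses with a magic )]}' line that has to be stripped before parsing:

# Fetch the related changes of a revision over the Gerrit REST API.
import json
import requests

host = 'https://gerrit.example.com'
change_id = 'gerrit~master~I5e4fc08ce34d33c090c9e0bf320de1b17309f774'
revision = 'b1cb4caa6be46d12b94c25aa68aebabcbb3f53fe'

resp = requests.get(f'{host}/changes/{change_id}/revisions/{revision}/related')
resp.raise_for_status()

# Gerrit prepends ")]}'" to JSON to defeat XSSI; drop the first line.
payload = json.loads(resp.text.split('\n', 1)[1])
for change in payload['changes']:
    print(change['change_id'], change.get('status'))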

Dashboard using output results from Jenkins API

I am new to the Jenkins API. I just got an assignment at my company where the PL asked me to create a new job in Jenkins that runs all the testing and build steps on my code, and it should produce a dashboard where all the figures and graphs are shown. He said it's feasible. Can anyone please guide me on how to do this?
Check out the Sectioned View Plugin.
Create a job in Jenkins and add /api after its URL. You will see the API information related to the job you have just created. The API exposes GET endpoints for retrieving the data, available as JSON as well as XML, which you can parse and use as the source of info for your dashboard. You can also trigger a new build by using the POST API.
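As a starting point, here is a small Python sketch that pulls recent build results from the JSON API; the Jenkins URL, job name, and credentials are placeholders:

# Pull recent build results for one job from the Jenkins JSON API and print
# the numbers a dashboard would chart.
import requests

JENKINS = 'https://jenkins.example.com'
JOB = 'my-job'
AUTH = ('user', 'api-token')  # or None if anonymous read is allowed

resp = requests.get(
    f'{JENKINS}/job/{JOB}/api/json',
    params={'tree': 'builds[number,result,duration,timestamp]'},
    auth=AUTH,
)
resp.raise_for_status()

for build in resp.json()['builds']:
    print(build['number'], build['result'], f"{build['duration'] / 1000:.0f}s")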

TFS 2010 build email notification when build is running beyond permissible time span

I am using the TFS 2010 build service. I need to send an email if a build is running for a long time.
For example: suppose the build normally runs for 10 minutes; if it has now been running for more than 20 minutes, I need to send an email notification.
May I have your help on this?
This functionality is not available out of the box. It would, however, make a great feature request; raise it for consideration here => http://visualstudio.uservoice.com/forums/121579-visual-studio
However, to get this to work, here is what you can do: write a TFS build activity that uses the TFS API to extract the last build's execution time, and insert it at various places in the process workflow, ideally before and after each workflow task, to check how much time the build has already consumed against the expected time. Use the email notification task to send out an email accordingly.
Here is an example that shows you how to get the last build details: http://blogs.microsoft.co.il/blogs/shair/archive/2011/01/11/tfs-api-part-33-get-build-definitions-and-build-details.aspx and here is a custom task example: http://msdn.microsoft.com/en-us/library/t9883dzc.aspx
Alternatively, query the TFS build queue and check the runtime of the builds in progress. When any build exceeds the defined threshold, send the email. This can be done in a Windows service with relative ease.
You'd use the TFS Client Object Model to query the builds; Tarun has already provided a nice link for that.
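The object model itself is .NET, but the watchdog logic is straightforward; here is a conceptual sketch in Python where query_builds_in_progress is a hypothetical stand-in for the Client Object Model query (IBuildServer.QueryBuilds filtered to in-progress builds in Microsoft.TeamFoundation.Build.Client):

# Conceptual watchdog loop. query_builds_in_progress() is a hypothetical
# placeholder for the TFS query; the threshold check and email are real.
import smtplib
import time
from datetime import datetime, timedelta
from email.message import EmailMessage

THRESHOLD = timedelta(minutes=20)

def query_builds_in_progress():
    # Placeholder: return (build_name, utc_start_time) pairs for builds
    # currently in progress, however you obtain them from TFS.
    return []

def notify(build_name, running_for):
    msg = EmailMessage()
    msg['Subject'] = f'Build {build_name} has been running for {running_for}'
    msg['From'] = 'tfs-watchdog@example.com'
    msg['To'] = 'team@example.com'
    msg.set_content(f'{build_name} exceeded the {THRESHOLD} threshold.')
    with smtplib.SMTP('mail.example.com') as smtp:
        smtp.send_message(msg)

while True:
    for name, started in query_builds_in_progress():
        running_for = datetime.utcnow() - started
        if running_for > THRESHOLD:
            notify(name, running_for)
    time.sleep(60)  # poll once a minute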
