Azure Data Factory (ADF) trigger not working properly - OData

I have created an ADF pipeline that fetches data from an OData source, along with a schedule trigger to run it. It worked fine for more than a month, but for the last few days, when the pipeline runs at the scheduled time, the copy activity returns only the table name instead of the data. All inputs are passed correctly. When I trigger the pipeline manually, I get the same error, yet when I use the Debug option, it works fine. What could be the reason for this, and how can I resolve it?

Related

How to re-run a build with the same parameters scheduled by cron

I know about the rebuild and replay functionality, but both of them are manual triggers. So here is our problem:
We have multiple servers, each of which can be deployed with any existing branch, but this deploy is done manually. We want to ensure that at least once a day the latest version of that branch is deployed, so that no server is left outdated.
So what I want to do is create a scheduled job that runs once a day and triggers a Jenkins job to rebuild the last build using the exact same parameters.
Would be great if someone has some input here :-)
You can try out the Persistent Parameter plugin and use it to define the relevant parameters inside the deploy job that you want to reuse.
This plugin enables you to set your input parameters (string, text, boolean and choice) with default values taken from the previous build - so every time you run the build with parameters manually, or trigger it from another job, the values that are used are the ones from the last execution.
Your caller job can still pass parameters to the deploy job during the daily execution - for all parameters that are not passed, their latest values will be used.
You can also override parameters defined as persistent, since persistence only affects the default value.
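As a hedged sketch, the daily caller could itself be defined via job-dsl as a cron-triggered job that fires the deploy job through the parameterized trigger; the job names and the single pinned parameter here are assumptions for illustration:

```groovy
// Sketch: a daily scheduler job that re-triggers the deploy job.
// 'daily-redeploy' and 'deploy-server' are hypothetical names;
// persistent parameters on 'deploy-server' supply all values
// that the caller does not pass explicitly.
job('daily-redeploy') {
    triggers {
        cron('H 3 * * *')   // once a day, hash-spread within the hour
    }
    steps {
        downstreamParameterized {
            trigger('deploy-server') {
                parameters {
                    // pin only what must be fixed; the rest defaults
                    // to the values of the previous deploy run
                    predefinedProp('TRIGGERED_BY', 'daily-cron')
                }
            }
        }
    }
}
```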

Seed job repeats building infinitely every minute due to 'job_template' change

I use the jenkins-job-dsl plugin. I created a seed job to run a myJobs.jenkins_jobs file, in which I have defined a job called job_template and another job that uses 'job_template'. However, after the seed job is built, it keeps building again and again until I disable it.
In https://jenkinsci.github.io/job-dsl-plugin/#path/job-using I see
Creates a new job configuration, based on the job template referenced by the parameter and stores this. When the template is changed, the seed job will attempt to re-run, which has the side-effect of cascading changes of the template the jobs generated from it.
However, I'm not sure what I could do to get rid of this constant rebuilding.
My myJobs.jenkins_jobs file looks like this:
job('job_template') {
}
job('railgun-db-importer-DSL') {
    using 'job_template'
}
SOLUTION
The error was that the template job had its 'description' field updated with the date after every run - this caused it to change on every run, and therefore to run again every time. After giving each job its own 'description' and hardcoding the template job's description, so that it no longer changes itself when run, I got rid of the perpetual runs.
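The fix described above can be sketched in the DSL file like this (the description strings are illustrative placeholders):

```groovy
// Template job with a hardcoded, static description: no DSL run
// ever changes it, so the seed job is not re-triggered.
job('job_template') {
    description('Static template - do not edit per run')
}
// Generated job carries its own description instead of inheriting
// a date-stamped one from the template.
job('railgun-db-importer-DSL') {
    using 'job_template'
    description('Imports the railgun DB')
}
```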
You must not maintain template jobs via job-dsl.
The idea behind a template job is that you can create new jobs via job-dsl, based on an existing job that is not maintained by job-dsl (this is the template job).
Typically, you want to do that if there is some complex plugin configuration which is hard to implement in job-dsl directly -- in those cases, it can be simpler to create a template job manually and use it as a basis for further configuration via job-dsl.
In your example, every DSL run will touch the template job; since modifications of the template job trigger the DSL again, this leads to the infinite loop that you observe.

Retrieve builds triggered by the 'build' step after builds are finalized

I am using a Pipeline job to trigger several freestyle jobs during its execution, using the native 'build' step.
Later, after all the runs are finalized, I'd like to read info about those runs (name, number, duration, timestamp...) to collect metrics/stats regarding the overall runs.
I'm using the Groovy Event Listener plugin for that task.
Now, I was thinking of reading the WorkflowRun object's actions (looking for BuildTriggerAction instances) to get the downstream builds, but they are not available:
run.getActions(BuildTriggerAction.class)
returns an empty list.
I've seen that the action is indeed removed every time the triggered build completes, as described in this ticket: https://issues.jenkins-ci.org/browse/JENKINS-28673
My questions are:
Do you know how to retrieve a handle to the triggered builds in another way? Not at runtime (I don't have the RunWrapper object, but the WorkflowRun instead). Maybe using the flow objects?
Is there another way to fix the issues described in the ticket, other than removing the BuildTriggerAction actions?
Thanks in advance!
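One alternative, as a hedged sketch: instead of the removed BuildTriggerAction, scan builds for an UpstreamCause that points back at the parent WorkflowRun. This uses only core Run/Cause APIs, so it works after the downstream runs are finalized, though iterating every job's builds can be slow on large instances:

```groovy
// Sketch: collect metrics for builds whose UpstreamCause points
// back at the given upstream run. Intended for a Jenkins script
// context (e.g. an event-listener script); not standalone code.
import hudson.model.Cause
import hudson.model.Job
import hudson.model.Run
import jenkins.model.Jenkins

def findDownstream(Run upstream) {
    def results = []
    Jenkins.get().getAllItems(Job.class).each { job ->
        job.builds.each { Run b ->
            def cause = b.getCause(Cause.UpstreamCause.class)
            if (cause != null
                    && cause.upstreamProject == upstream.parent.fullName
                    && cause.upstreamBuild == upstream.number) {
                // basic per-run metrics for later aggregation
                results << [name     : b.fullDisplayName,
                            number   : b.number,
                            duration : b.duration,
                            timestamp: b.timeInMillis]
            }
        }
    }
    return results
}
```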

Jenkins plugin code that should execute before any kind of job is run in Jenkins

I am new to Jenkins plugin development. I'm trying to write a plugin that should be executed before any multi-configuration job runs in Jenkins.
In this plugin I want to write rules that check which configuration parameters the user selected when submitting the job; based on the selected parameters, I want to decide whether to allow the job to run or to restrict it.
The user should be shown the reason why the job cannot be run in the Console Output.
Does anyone have any ideas which class I need to extend or which interface I need to implement in order to get a hook into Jenkins job run?
You could look at the Matrix Execution Strategy plugin, which allows a groovy script to select which matrix combinations to run. I would think that if your script threw an exception, it would stop the build.
For background, multi-configuration projects run a control job (or "flyweight") which runs the SCM phase and then starts all the actual combinations. This plugin runs after the flyweight's SCM checkout.
If nothing else, this will give you a working plugin to start from
Disclaimer: I wrote this plugin
The Block Queued Job plugin was what I needed.
Out of the box, that plugin supports two ways to block jobs:
Based on the result of the last run of another project.
Based on the result of the last run of the current project.
In that plugin, BlockQueueItemTaskDispatcher.java extends Jenkins' QueueTaskDispatcher, providing a hook into Jenkins' logic to allow or block the jobs present in the queue from running.
I used this plugin as a starting point for developing a new plugin that allows us to restrict project based on parameters selected and the current time. Ultimate goal is to restrict production migrations from running during the day.
Overriding the isBlocked() method of QueueTaskDispatcher gave me access to a hudson.model.Queue.Item instance as an argument. I then used the Item instance's getParams method to access the build parameters selected by the user at runtime and parsed the lifecycle value from them. If the lifecycle was Production and the current time was during the day, I restricted the job by returning a non-null CauseOfBlockage from isBlocked(); otherwise I returned null, allowing the queued job to run.
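A minimal sketch of that approach, assuming the canRun(Queue.Item) hook that QueueTaskDispatcher exposes as its extension point (the isBlocked() mentioned above appears to be the plugin's own helper) and an illustrative 08:00-18:00 "day" window:

```groovy
// Sketch of a dispatcher that blocks queued Production jobs
// during an assumed daytime window. Requires Jenkins core on the
// classpath; not standalone code.
import hudson.Extension
import hudson.model.Queue
import hudson.model.queue.CauseOfBlockage
import hudson.model.queue.QueueTaskDispatcher

@Extension
class LifecycleBlockDispatcher extends QueueTaskDispatcher {
    @Override
    CauseOfBlockage canRun(Queue.Item item) {
        // getParams() renders the queued item's parameters as text,
        // e.g. "\nlifecycle=Production\n..." - parsed naively here
        boolean production = item.getParams().contains('lifecycle=Production')
        int hour = Calendar.getInstance().get(Calendar.HOUR_OF_DAY)
        boolean daytime = hour >= 8 && hour < 18  // assumed window
        if (production && daytime) {
            return new CauseOfBlockage() {
                String getShortDescription() {
                    'Production migrations are blocked during the day'
                }
            }
        }
        return null  // allow the queued job to run
    }
}
```

Returning null lets the item proceed; a non-null CauseOfBlockage keeps it in the queue and surfaces the short description as the reason.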

Unable to add build step in jenkins job

I was able to create the new job but I am NOT able to add any build step.
This behaviour is reproducible: it occurs when I try to do it from the initial “configure” page I get after job creation, as well as in a later configure attempt, and it persists for all job types.
It does not depend on whether I am logged in or not.
The problem is that when I open “Add build step” I get a selection of possible step types (“shell script”, “windows batch”, …), but when I select one of them, nothing more happens.
I also have other jobs of this type already up and running, and I am likewise not able to add more build steps to those.
I had this with v1.625.3, all of a sudden. The problem occurred in Chrome and Firefox; the workaround was to use IE.