SCDF keeping task schedules while undeploying and redeploying tasks

Back in the SCDF 2.1.x releases, when undeploying a task and redeploying it with a new version of the application, the schedules never actually got removed.
Now that we are on a newer version (2.9.x), what is the best way to keep the schedule while doing a "task destroy"?
If "task destroy" always removes the schedule, is there another way to upgrade the application registration for a task without removing its schedules?

Related

What happens to running Jenkins builds when Jenkins is preparing for safe shutdown?

Does the currently running build on Jenkins still complete when you click "Prepare for Shutdown"?
I want to take admin control of Jenkins to make some config changes, so I am clicking "Prepare for Shutdown", but I want to ensure the current builds complete before shutting down.
As CloudBees describes:
The Prepare for Shutdown feature prevents any more jobs from running. When all jobs are finished, you still need to restart/stop the instance manually.
It is the http://<jenkins.server>/quietDown command
They also describe various other options:
http://<jenkins.server>/restart
http://<jenkins.server>/safeRestart
http://<jenkins.server>/exit
http://<jenkins.server>/safeExit
http://<jenkins.server>/quietDown
http://<jenkins.server>/cancelQuietDown
Personally, I think they should have called it something different, like setToIdle or blockQueue.
In your case, you will want to quietDown, then safeExit or safeRestart (to be safe). Normally you can just safeRestart.
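All of these endpoints are plain HTTP POSTs, so they are easy to script. A minimal sketch with curl (host, user, and API token are placeholders; depending on your security setup you may also need a CSRF crumb):

    # Stop scheduling new builds; running builds keep going
    curl -X POST -u admin:API_TOKEN http://jenkins.example.com/quietDown

    # Restart once the running builds have finished
    curl -X POST -u admin:API_TOKEN http://jenkins.example.com/safeRestart

    # Changed your mind: let the queue flow again
    curl -X POST -u admin:API_TOKEN http://jenkins.example.com/cancelQuietDown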
There are some nuances regarding pipelines or jobs that trigger downstream jobs.
There are some plugins that add a button to achieve the same effect.
SafeRestart just adds buttons.
Lenient Shutdown handles downstream jobs and also lets you manage nodes.
There are others that stop all jobs (brute force) and re-queue them afterwards (nice for admins). Others also manage the queue.
IIRC, pipelines will complete the stage and can resume after the restart or cancel.

Triggering child jobs and having them check for Perforce changes first?

I'm trying to set up a build server for a process we're trying to automate, and have run into an issue that I'm hoping there's an easy fix (or plugin) for.
The end goal is to have a set of jobs that each check for changes in a small subset of our Perforce depot, and then perform a task. The issue is that each of these tasks requires the entire depot in order to execute properly, and since the depot is so large (30+GB) and there are so many of these tasks (50+), actually duplicating the depot and syncing it would be an extreme waste of disk space and network bandwidth.
Instead, I'd like to have one "master" job that deals with syncing the depot (which all the child jobs share), and then have each child job use their own workspace and the "Preview Check Only" populate option in the Jenkins Perforce plugin (which syncs with p4 sync -k). Each child job's workspace would then exist only for the job to be able to detect when changes it is interested in have happened, after which they would run their tasks from inside the "master" workspace depot, and everything should just work!
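To make the intent concrete, here is a sketch of the two sync styles at the p4 command line (workspace names and depot paths are made up): the master does a real sync, while each child only updates its have-list, which is what p4 sync -k does.

    # Master job: real sync, actually transfers files into the shared workspace
    p4 -c master-workspace sync //depot/...

    # Child job: update the have-list only, no file transfer; the child
    # workspace exists purely so polling can detect changes in its view
    p4 -c child-a-workspace sync -k //depot/project-a/...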
The catch is that I have not been able to figure out how to trigger the child jobs AND have them check for changes to their local workspace before deciding to run. Is there any way to have the children run all the checks they would normally run (as they would if they were simply scheduled to run every once in a while) when triggered by another job?
Or maybe there's a better way to do what I'm trying to do? Something that would allow me to share a single Perforce depot, but have child jobs that only run when part of that depot changes? There will also be more child jobs created over time, so being able to set them up and configure them easily would also be a nice thing to have.
I finally figured out a solution here, but it's pretty convoluted. Here goes!
The master job does the Perforce sync and then runs a batch script that uses curl to trigger polling of each child job: curl -X POST http://jenkins/view/Job/polling. When the child jobs have Poll SCM enabled (but no schedule set), this lets them poll the SCM when they receive the polling request from the web API. Kinda messy, but it works!
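For anyone reproducing this, here is a sketch of what that post-sync step could look like (job names and the Jenkins URL are placeholders; I've written it with the /job/<name>/polling path, the usual way to address a job, in place of the view path quoted above, and secured instances may also require an authentication token):

    #!/bin/sh
    # Ask each child job to poll Perforce; a build is queued only if the
    # poll sees changes inside that job's own workspace view.
    for job in project-a-child project-b-child project-c-child; do
      curl -X POST "http://jenkins/job/${job}/polling"
    done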

How to enforce synchronous execution of separate Jenkins jobs?

Our Jenkins server is configured with two primary jobs to build APKs. Each of those jobs has a child job that installs the APK onto an Android device attached to the build server and executes UI tests.
For example:
Project-A-apk
Project-A-tests
Project-B-apk
Project-B-tests
where
Project-A-apk kicks off Project-A-tests
Project-B-apk kicks off Project-B-tests
and
both Project-A-tests and Project-B-tests install and run on the same test device.
The issue is, we can't have the test jobs running at the same time, as they will both try to interact with the same device.
Is there a way to configure a job to wait until some other job (not in its parent chain) finishes before executing?
I use the Throttle Concurrent Builds Plugin to control when jobs should run concurrently.
Set up a category name such as android-device with Maximum Concurrent Builds Per Node set to 1. Assign this category to the jobs that run the tests on the Android device. Once the plugin is installed, there is a place to assign the category on the job configuration page.
All jobs with the same category name assigned will execute serially instead of concurrently.
We use the Jenkins Exclusion Plugin to manage our builds that have to share a couple of different DB resources. This plugin works by allowing you to define Critical Blocks in your build steps. Critical blocks will only run if they can acquire a specific resource, and at the end of the block the resource is released. This means that you don't have to block an entire job, just the parts that need the resource.
You should try the Build Blocker Plugin:
This plugin keeps the actual job in the queue if at least one of the currently running jobs' names matches one of the given regular expressions.
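For the Project-A/Project-B setup above, a sketch of the configuration: in each test job, enable the plugin and put a regular expression matching the test jobs into the "Blocking Jobs" field (one expression per line), for example:

    Project-.*-tests

A queued job is not "running", so a pattern that also matches the job itself should be harmless here, and it additionally prevents concurrent builds of that same job.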
Another way would be to limit the number of Jenkins executors to one. This would ensure that only one job runs at a time while the others wait in the queue. However, this might also block future jobs that do not access the test device.
One more: Heavy Job Plugin
You can weight your Android jobs to block all executors.

Timer Jobs don't run after changing the schedule

I created a custom timer job. It ran fine during development and initial testing, when I was using SPMinuteSchedule to schedule it every minute or every 5 minutes. The intention is to run it once a day in production, so I changed the schedule to use SPDailySchedule, and it stopped running. I sort of fixed it by clearing the server's cache each time I change the schedule.
I deploy the job using a feature with Web Application scope.
Am I missing something here?
From http://www.mstechblogs.com/shailaja/reschedule-or-change-the-interval-of-a-sharepoint-custom-timer-job/:
To reschedule or change the interval of a SharePoint custom timer job:
1. Change the schedule interval in the custom code using the SPDailySchedule class of the SharePoint object model.
2. Build and deploy to the GAC.
3. Run IISRESET.
4. Open a command prompt and navigate to the 12 hive of SharePoint.
5. Uninstall the existing timer feature, for example: stsadm.exe -o uninstallfeature -filename yourfeaturename\feature.xml
6. Install the new timer feature, for example: stsadm.exe -o installfeature -filename yourfeaturename\feature.xml
7. Go to the Windows services and restart the Windows SharePoint Services Timer service.
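The same sequence as a batch sketch (assuming stsadm.exe from the 12 hive bin directory is on the PATH and the feature folder name from the steps above; on WSS 3.0 the timer service is named SPTimerV3):

    REM Deploy the rebuilt assembly to the GAC first, then:
    iisreset

    REM Re-install the timer feature so the new schedule is read
    stsadm.exe -o uninstallfeature -filename yourfeaturename\feature.xml
    stsadm.exe -o installfeature -filename yourfeaturename\feature.xml

    REM Restart the Windows SharePoint Services Timer
    net stop SPTimerV3
    net start SPTimerV3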

Update Quartz.NET Job DLL without Service Restart

I just started with Quartz.NET and I have it running as a service. I created a job, moved the resulting .dll to the Quartz folder, and added a new entry to the jobs.xml file to kick it off every 3 seconds.
I updated the job .dll, but it is in use by Quartz (or is locked).
Is it possible to update the .dll without restarting the Quartz service? If not what would happen to a long running job if I did stop/start the Quartz service?
You cannot update the job DLL without restarting the service. Once the server has started, it loads the job DLL and the loaded types stay in memory; this is how the .NET runtime works. To achieve something like dynamic reloading you would need to use programmatically created app domains, etc.
If you stop the scheduler, you can pass a bool parameter indicating whether to wait for jobs to complete first. That way running jobs finish safely, and no new ones are spawned while the scheduler is shutting down.
