Is there a way for one ant script to check if another is already running?

We have several automated build scripts, some of which are run automatically every 2 hours, and some of which are only ever run manually.
If a build script is started manually while another is already running, it can cause problems, such as merging untested branches into the production branch.
I'd like to prevent this happening again, and the simplest solution in my mind is to have each build script start by checking that another is not currently running.
Is there a way in ant to directly check if another ant instance/script is currently running?
If not, what's the simplest way to add such a check? My first thought is a file created at the beginning and deleted at the end of a build. I'd prefer a way that handles user-cancelled builds nicely, but it's not necessary. It needs to work if a build succeeded and if a build failed (but was not killed by the user).

If these are separate Ant processes, then I think the only solution is to define a lockfile of some sort that each Ant process needs to acquire before it can continue.
Perhaps the tempfile task could be used for this?
Actually, a sort-of semaphore based on a directory might be better, since tempfile really does generate a unique temporary file each time and so can't serve as a shared lock. The first thing your script does is use mkdir to create a shared resource directory, but only if the directory does not already exist.
Upon exit it invokes delete on this shared resource name.
The idea is that the content and name of the directory is meaningless -- it only serves as an "IPC" cooperative locking mechanism.
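For instance, a minimal sketch of that directory-based lock (the lock path, property and target names here are placeholders, not anything from your existing scripts):

    <property name="build.lock.dir" location="${java.io.tmpdir}/mybuild.lock"/>

    <target name="acquire-lock">
        <!-- fail fast if another build has already created the lock directory -->
        <available file="${build.lock.dir}" type="dir" property="build.lock.present"/>
        <fail if="build.lock.present"
              message="Another build appears to be running (found ${build.lock.dir})"/>
        <mkdir dir="${build.lock.dir}"/>
    </target>

    <target name="release-lock">
        <delete dir="${build.lock.dir}"/>
    </target>

    <!-- the real work depends on the lock and releases it at the end -->
    <target name="build" depends="acquire-lock">
        <!-- ... actual build steps ... -->
        <antcall target="release-lock"/>
    </target>

Note that a build which dies between acquiring and releasing leaves the directory behind; releasing it even on failure needs something like ant-contrib's trycatch (or a wrapper script), otherwise you'll occasionally be deleting the lock by hand.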
This isn't particularly elegant, but I think your only other option is to set up a build server that handles scheduled and continuous builds based on various triggers. One that many people use is Jenkins (formerly Hudson).
[Update]
Perhaps "Do I have a way to check the existence of a directory in Ant (not a file)?" would do the trick?
To be honest, this approach may work in the short term, but it just moves the problem around. Instead of resetting unit test results you'll be removing lockfiles by hand to get builds working again. My advice is to set up a CI build system, but I recognize this is a fair amount of work (and introduces a whole different set of future problems.)

Related

How do I get workspace status in bazel

I would like to version build artefacts with the CI build number, which is passed to Bazel via workspace_status_command. Sometimes I would also like to include the build number in the name of the artefact.
Is there a way to access ctx when writing a macro (I was trying to use ctx.info_file)? So far it seems I can only access such info when creating a new rule, which in this case is a bit awkward.
I guess that having a build number or similar info is a pretty common use case, so I wonder if there is a simpler way to access it.
No, you really need to define a custom rule to be able to consume information passed from workspace_status_command through info_file and version_file, and even then you cannot just access their values from Starlark; you can only pass the file to your tooling (a wrapper) and process the inputs there. After all, (build) rules do not execute anything, they emit actions to be executed at a later phase.
Be careful though, because if you depend on info_file (the STABLE_* entries), changes to the file invalidate targets depending on it. For something like a CI build number, that's usually not what you want, and version_file is more likely what you are after. You may want to record the ID, but you usually do not want to rebuild stuff just because the build ID has changed (it's a new CI run). However, even simple inclusion of IDs could be considered problematic if you want your results to be completely reproducible.
Having variable artifact names is a whole new problem, and there are good reasons not to do it. But generally, since (as proposed) the name would be decided during execution of actions (by reading version_file in your tool), you are past the analysis phase where what comes out of an action is decided. The only way I am currently aware of to do that (for an out-of-tree source of variable input; you can of course always define a Starlark variable and load it from your BUILD file) is to use tree artifacts (using declare_directory in your rule).

Exporting and Importing Jenkins Pipeline script approvals

I have a significant set of Groovy pipeline scripts for our Jenkins build process. I am in the process of moving those scripts onto another instance, and would like to replicate the set of approved scripts that were not originally whitelisted.
Is it possible to export the list of approved signatures and import them into another instance?
The only other solution I have is to constantly run and rerun the scripts, approving each signature as it breaks the build. Since the scripts are quite complex, and not every run is guaranteed to hit each line, this is not going to be a quick process.
The other option would be to create a master 'whitelist' script which runs all the currently non-approved scripts again and again until everything has been approved.
Neither of these options is great, so I'm hoping for a simple import/export to avoid having to do this work altogether, but I certainly can't see an option available to me in the UI.
Cheers
I do not believe there is import/export functionality by default but maybe there's a plugin that'll do it.
If you have access to the directory Jenkins is installed in or runs from, you should be able to find the scriptApproval.xml file.
If you explore that, you'll find approvedScriptHashes, approvedSignatures etc. You can lift this file entirely and paste it into the new instance, or copy across the specifics you need (either way you'll need a restart).
Looks like there's an open request for this sort of functionality here

Jenkins pipeline selective delete

I'm slowly replacing traditional jobs with Jenkins pipelines. We've got some jobs which I've previously optimised by only deleting some key files from the workspace of a previous build - thus we end up with incremental builds rather than full ones. FTR this makes our basic builds 3-4 times faster, and I'm keen to preserve it.
I need to delete the files (simplifying the real scenario) whose names contain "cache". I currently use "**/cache" as an include parameter to the Delete Workspace build step. Question: is there something similar already in the pipeline steps? I could probably do it using find or similar, but this has to work on Windows too, and that has portability implications.
You could use the cleanWs step to clean up certain parts of the workspace. However, it comes from a plugin, which you can find here: Workspace Cleanup Plugin.
You can find the syntax for this step via the snippet generator at your-jenkins-url/pipeline-syntax/.
I've switched away from using cleanWs, having used it; instead I am using the file operations steps to explicitly delete the files concerned.
The file operations act there and then. cleanWs acts at the end of a run and can't be relied upon if that run went wrong and did not finish - e.g. a syntax error - or if it was running a different script.

Best way to manage dependent ant builds over multiple servers?

I have these Ant scripts that build and deploy my appservers. My system, though, actually spans 3 servers. They all use the same deploy script (with flags) and all work fine.
The problem is there are some dependencies. They all use the same database, so I need a way to stop all appservers across all machines before the build first happens on machine 1. Then I need the deployment on machine 1 to complete first, as it's the one that deals with the database build (which all appservers need in order to start).
I've had a search around and there are some tools that might be useful but they all seem overkill for what I need.
What do you think would be the best tool to sync and manage the ant builds over multiple machines (all running linux)?
Thanks,
Ryuzaki
You could make your database changes non-breaking, run your database change scripts first and then deploy to your appservers. This way your code changes aren't intrinsically tied to your database changes and both can happen independently.
When I say non-breaking I mean that database changes are written in such a way that 2 different versions of code can function against the same database. For example, rather than renaming columns, you add a new one instead.
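If you do want Ant itself to enforce that ordering across the machines, a rough sketch running from machine 1 might look like the following (every hostname, user, path and command here is invented, and sshexec is an optional Ant task that needs the JSch jar on Ant's classpath):

    <target name="stop-appservers">
        <sshexec host="app2.example.com" username="deploy"
                 keyfile="${user.home}/.ssh/id_rsa" trust="true"
                 command="/opt/appserver/bin/stop.sh"/>
        <sshexec host="app3.example.com" username="deploy"
                 keyfile="${user.home}/.ssh/id_rsa" trust="true"
                 command="/opt/appserver/bin/stop.sh"/>
    </target>

    <target name="database" depends="stop-appservers">
        <!-- machine 1 applies the (non-breaking) database change scripts first -->
        <exec executable="./run-db-changes.sh" failonerror="true"/>
    </target>

    <target name="deploy-all" depends="database">
        <!-- with the database in place, deploy and restart each appserver;
             the local deploy on machine 1 would go here too -->
        <sshexec host="app2.example.com" username="deploy"
                 keyfile="${user.home}/.ssh/id_rsa" trust="true"
                 command="cd /opt/build &amp;&amp; ant deploy"/>
    </target>

The target dependencies guarantee the order: stop everything, change the database, then deploy.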

ant ask user which target to call if not specified

I have this Ant build.xml file with 3 targets in it:
target1, target2 and target3.
If the user simply runs ant and not an explicit ant target1 or something like that, I want to prompt the user asking which target he would like to call.
Remember, the user should only be prompted for this if he doesn't explicitly call a target when running ant.
Ant is not a programming language, it's a dependency matrix language. There's a big difference between the two.
In a programming language, you can specify the absolute order of execution. Plus, you have a lot more flexibility in doing things. In Ant, you don't specify the execution order. You specify various short "how to build this" steps and then specify their dependencies. Ant will automatically figure out the execution order needed.
It's one of the hardest things for developers to learn about Ant. I've seen too many times when developers try to force execution order and end up executing the same set of targets dozens of times over and over. I recently had a build here that took almost 10 minutes, and I rewrote the build.xml to produce the same build in under 2 minutes.
You could use <input/> to get the user input, then use <exec> or <java> to execute another Ant process to run the requested target. However, this breaks the way Ant is supposed to work.
The default target should be the default target that developers would want to execute on a regular basis while they program. It should not clean the build. It should not run 10 minutes of testing. It should compile any changed files, and rebuild the war or jar. That's what I want about 99% of the time. The whole process takes 10 seconds.
I get really, really pissed when someone doesn't understand this. I hate it when I type ant and I get directions on how to execute my build. I get really irritated when the default target cleans out my previous compiles. And, I get filled with the deadly desire to pummel the person who wrote the damn build file with a large blunt object if I am prompted for something. That's because I will run Ant, do something else while the build happens, then come back to that command window when I think the build is done. Nothing makes me angrier than to come back to a build only to find it sitting there waiting for me to tell it which target to run.
If you really, really need to do this, use a shell script called build.sh. Don't futz with the build.xml to do this, because that affects development.
What you really need to do is teach everyone how to use Ant:
Ant will list user executable targets when you type in ant -p. This will list all targets, and their descriptions. If a target doesn't have a description, it won't list it. This is great for internal targets that a user shouldn't execute on their own. (For example, a target that merely does some sort of test to see if another target should execute). To make this work, make sure your targets have descriptions. I get angry when the person who wrote the Ant file puts a description for some minor target that I don't want, but forgets the description of the target I do want (like compile). Don't make David angry. You don't want to make David angry.
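As a made-up illustration of both points, here is a sketch of a build file whose default target only does the fast, everyday work, with descriptions on the targets a developer should run and none on the internal helper (all names are invented):

    <project name="myproject" default="build" basedir=".">

        <target name="build" depends="jar"
                description="Incremental build: compile changed sources and rebuild the jar"/>

        <target name="compile" description="Compile changed sources into target/classes">
            <mkdir dir="target/classes"/>
            <javac srcdir="src" destdir="target/classes" includeantruntime="false"/>
        </target>

        <!-- internal helper: no description, so "ant -p" does not list it -->
        <target name="check-jar-uptodate">
            <uptodate property="jar.uptodate" targetfile="target/myproject.jar">
                <srcfiles dir="src" includes="**/*.java"/>
            </uptodate>
        </target>

        <target name="jar" depends="compile, check-jar-uptodate" unless="jar.uptodate"
                description="Package the compiled classes into target/myproject.jar">
            <jar destfile="target/myproject.jar" basedir="target/classes"/>
        </target>

    </project>

Typing ant -p against something like this lists build, compile and jar with their descriptions and hides the helper.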
Use standard target names across your group. That way, I know what targets do what across the entire project, instead of one project using BUILD vs. build-programs vs. Compile vs. build-my-stuff vs. StuffBuild. We standardized on the Maven lifecycle names. They're documented and there are no arguments or debates.
Do not use <ant/> or <antcall> to enforce build order. Do not divide your build.xml into a dozen separate build.xml programs. All of these probably break Ant's ability to build a target dependency matrix. Besides, many Ant tools that show the dependency hierarchy in a build can't work across multiple build files.
Do not wrap your builds inside a shell script. If you do this, you're probably not understanding how builds work.
The build should not update any files in my working directory that were checked out by me. It shouldn't pollute my working directory with all sorts of build artifacts spread out all over the place. It shouldn't do anything outside of the working directory (except maybe some sort of deploy, but only when I run the deploy target). In fact, all build processing should take place in a sub-directory INSIDE my working directory. A clean should merely delete this one directory. Sometimes this is called build, sometimes dist. I usually call it target because I've adopted the Maven naming conventions.
Your build script should be a build script. It shouldn't do checkouts or updates -- at least not automatically. I know that if you use CruiseControl as a continuous build process, you have to have update and checkout functionality inside your build.xml. It's one of the reasons I now use Jenkins.
Sorry about this answer not necessarily being the one you're looking for. You didn't really state what you're doing with Ant. If you're doing builds, don't do what you're trying to do. If you're writing some sort of program, use a real programming language and not Ant.
An Ant build should typically finish in under a minute or two, and redoing a build because you changed a file shouldn't take more than 30 seconds. This is important to understand because I want to encourage my developers to build with Ant, and to use the same targets that my Jenkins server uses. That way, they can test out their build the same way my Jenkins server will do the official build.
You may use the input task provided by Ant and make it your default target.
<input message="Please enter Target ID (1,2 or 3):"
       validargs="1,2,3"
       addproperty="targetID"/>
Use the value of this property to decide which target to execute; one way of wiring that up is sketched after the documentation excerpt below.
From the ant documentation:
message: The message which gets displayed to the user during the build run.
validargs: Comma separated string containing valid input arguments. If set, the input task will reject any input not defined here.
You may pass any arguments according to your needs.
addproperty: The name of a property to be created from the input. Behaviour is equal to the property task, which means that existing properties cannot be overridden.
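Putting that together, a rough sketch (assuming the three targets really are named target1, target2 and target3, as in the question):

    <project name="demo" default="choose" basedir=".">

        <target name="choose">
            <!-- only reached on a plain "ant"; "ant target2" runs target2 directly and never prompts -->
            <input message="Please enter Target ID (1,2 or 3):"
                   validargs="1,2,3"
                   addproperty="targetID"/>
            <antcall target="target${targetID}"/>
        </target>

        <target name="target1"><echo message="running target1"/></target>
        <target name="target2"><echo message="running target2"/></target>
        <target name="target3"><echo message="running target3"/></target>

    </project>

(Bear in mind the caveat from the other answer: dispatching through antcall sidesteps Ant's normal dependency handling.)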
