I have many targets in my build.xml for Ant. Generally I am running two via a shell script, one to construct the application and one for cleaning up. The shell script checks the exit status of the construction to see if it should clean up or leave the clutter behind so I can determine what went wrong and fix it.
So when all is going well, which is the majority of the time, Ant is executed once for construction and once for clean up. This results in my build.number being incremented for each execution, so in steady state my build.number increments by 2.
How can I tell Ant not to increment the build.number? I would want this for the clean up run, since it hasn't built anything.
I know the obvious answer of creating a separate script for clean up only, but I'd rather keep the entire build.xml in one file.
Why not define one target (the complete build) as being dependent on a compilation target and a cleanup target? That way Ant only has to execute once, and if the compilation bails out it won't execute the cleanup target.
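A minimal sketch of that layout (the target names here are just illustrative, not taken from your build.xml):

<target name="construct">
    <buildnumber/>
    <!-- compile, package, etc. -->
</target>

<target name="cleanup">
    <!-- remove the intermediate clutter -->
    <delete dir="tmp"/>
</target>

<!-- "ant all" bumps build.number exactly once; if construct fails,
     Ant stops before cleanup, leaving the clutter behind for inspection. -->
<target name="all" depends="construct, cleanup"/>

Your shell script then only ever invokes Ant once (ant all), and whether the clutter gets cleaned up falls out of whether construction succeeded.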
I am currently trying to measure the time bazel build //api/... takes to build our "api" project with different --spawn_strategy settings.
I am having a hard time doing so because Bazel doesn't rebuild anything as long as I don't touch the source files.
I was able to force rebuilds by editing all files inside our "api" project, but doing this repeatedly is cumbersome.
What is the best way to force Bazel to rebuild so that I can measure build times for our repository?
Preferably, I would like to use something like bazel build //api/... --some_option_which_forces_rebuilding.
A bit dirty, but you could use --action_env to change the build environment and invalidate all the actions. From the docs:
Environment variables are considered an essential part of an action. In other words, an action is expected to produce a different output, if the environment it is invoked in differs; in particular, a previously cached value cannot be taken if the effective environment changes.
Also (from the docs):
[...] The value of those environment variable can be enforced from the command line with the --action_env flag (but this flag will invalidate every action of the build).
Just setting an arbitrary variable to a new value on each run should be enough:
> bazel build --action_env="avariable=1" :mytarget
> bazel build --action_env="avariable=2" :mytarget
> ...
If you delete everything in the output directory for the api package and its subpackages (it should be under the bazel-bin symlink), Bazel will rerun all the actions that produced those outputs, and it will not rerun actions that produced outputs in other packages. This also avoids rerunning the analysis phase, which changing a config_setting or --action_env would trigger.
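For example, something along these lines (assuming the api package's outputs land under bazel-bin/api; adjust the path to your output layout):

rm -rf bazel-bin/api
bazel build //api/...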
I am currently learning the ins and outs of Jenkins and Pipeline.
One thing I do not yet understand is the following:
A Jenkins job by default can be executed concurrently (I can check the checkbox "Do not allow concurrent builds" if I don't want that).
What I don't understand is the following:
Let say Jenkins checks out code in /var/lib/jenkins/workspace/my-project-workspace/
Now how would it be possible to run concurrent builds without conflicts?
Let's say that build nr 1 checks out code in that path and starts testing it, and while doing that, build nr 2 is started and checks out code in that same path.
How will that not conflict with build nr 1?
I am probably missing something obvious here... Please help :)
The subdirectory inside the workspace/ folder will not always be just your project name; for concurrent builds Jenkins generates a separate workspace directory for each running build (see the suffix behaviour quoted below). That's all the magic.
When this option is checked, multiple builds of this project may be executed in parallel.
By default, only a single build of a project is executed at a time — any other requests to start building that project will remain in the build queue until the first build is complete.
This is a safe default, as projects can often require exclusive access to certain resources, such as a database, or a piece of hardware.
But with this option enabled, if there are enough build executors available that can handle this project, then multiple builds of this project will take place in parallel. If there are not enough available executors at any point, any further build requests will be held in the build queue as normal.
Enabling concurrent builds is useful for projects that execute lengthy test suites, as it allows each build to contain a smaller number of changes, while the total turnaround time decreases as subsequent builds do not need to wait for previous test runs to complete.
This feature is also useful for parameterized projects, whose individual build executions — depending on the parameters used — can be completely independent from one another.
Each concurrently executed build occurs in its own build workspace, isolated from any other builds. By default, Jenkins appends "@" to the workspace directory name, e.g. "@2".
The separator "@" can be changed by setting the hudson.slaves.WorkspaceList Java system property when starting Jenkins. For example, "hudson.slaves.WorkspaceList=-" would change the separator to a hyphen (an example of passing this property follows below).
For more information on setting system properties, see the wiki page.
However, if you enable the Use custom workspace option, all builds will be executed in the same workspace. Therefore caution is required, as multiple builds may end up altering the same directory at the same time.
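For instance, if Jenkins is started directly via java -jar (an assumption; service wrappers and containers pass system properties differently), the separator could be changed like this:

java -Dhudson.slaves.WorkspaceList=- -jar jenkins.war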
We have several automated build scripts, some of which are run automatically every 2 hours, and some of which are only ever run manually.
If a build script is started manually while another is already running, it can cause problems, such as merging untested branches into the production branch.
I'd like to prevent this happening again, and the simplest solution in my mind is to have each build script start by checking that another is not currently running.
Is there a way in Ant to directly check whether another Ant instance/script is currently running?
If not, what's the simplest way to add such a check? My first thought is a file created at the beginning and deleted at the end of a build. I'd prefer a way that handles user-cancelled builds nicely, but it's not necessary. It needs to work if a build succeeded and if a build failed (but was not killed by the user).
If these are separate Ant processes, then I think the only solution is to define a lockfile of some sort that each Ant process needs to acquire before it can continue.
Perhaps the tempfile task could be used for this?
Actually, a sort-of semaphore based on a directory might be better, because tempfile really does generate a unique temp file each time. The first thing your script does is use mkdir to create a well-known shared resource directory, but only if that directory does not already exist.
Upon exit it invokes delete on this shared resource name.
The idea is that the content and name of the directory is meaningless -- it only serves as an "IPC" cooperative locking mechanism.
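A minimal sketch of that idea (the lock.dir property and the target names are made up for illustration):

<target name="acquire-lock">
    <available file="${lock.dir}" type="dir" property="lock.present"/>
    <fail if="lock.present" message="Another build appears to be running: ${lock.dir} exists."/>
    <mkdir dir="${lock.dir}"/>
</target>

<target name="release-lock">
    <delete dir="${lock.dir}"/>
</target>

Make your main target depend on acquire-lock and call release-lock as its last step (or from the wrapping shell script), and a second build started in the middle will fail fast instead of interfering.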
This isn't particularly elegant, but I think your only other option is to set up a build server that handles scheduled and continuous builds based on various triggers. One that many people use is Jenkins (formerly Hudson).
[Update]
Perhaps Do I have a way to check the existence of a directory in Ant (not a file)? would do the trick?
To be honest, this approach may work in the short term, but it just moves the problem around. Instead of resetting unit test results you'll be removing lockfiles by hand to get builds working again. My advice is to set up a CI build system, but I recognize this is a fair amount of work (and introduces a whole different set of future problems.)
I have an Ant build.xml file with 3 targets in it:
target1, target2 and target3.
If the user simply runs ant, and not an explicit ant target1 or something like that, I want to prompt the user asking which target he would like to call.
Remember, the user should only be prompted for this if he doesn't explicitly call a target while running ant.
Ant is not a programming language, it's a dependency matrix language. There's a big difference between the two.
In a programming language, you can specify the absolute order of execution. Plus, you have a lot more flexibility in doing things. In Ant, you don't specify the execution order. You specify various short "how to build this" steps and then specify their dependencies. Ant will automatically figure out the execution order needed.
It's one of the hardest things for developers to learn about Ant. I've seen too many times when developers try to force execution order and end up executing the same set of targets dozens of times over and over. I recently had a build here that took almost 10 minutes, and I rewrote the build.xml to produce the same build in under 2 minutes.
You could use <input/> to get the user input, then use <exec> or <java> to execute another Ant process that runs the requested target. However, this breaks the way Ant is supposed to work.
The default target should be the one that developers would want to execute on a regular basis while they program. It should not clean the build. It should not run 10 minutes of testing. It should compile any changed files and rebuild the war or jar. That's what I want about 99% of the time. The whole process takes 10 seconds.
I get really, really pissed when someone doesn't understand this. I hate it when I type ant and I get directions on how to execute my build. I get really irritated when the default target cleans out my previous compiles. And I get filled with the deadly desire to pummel the person who wrote the damn build file with a large blunt object if I am prompted for something. That's because I will run Ant, do something else while the build happens, then come back to that command window when I think the build is done. Nothing makes me angrier than to come back to a build only to find it sitting there waiting for me to tell it which target to run.
If you really, really need to do this, use a shell script called build.sh. Don't futz with the build.xml to do this, because that affects development.
What you really need to do is teach everyone how to use Ant:
Ant will list user executable targets when you type in ant -p. This will list all targets, and their descriptions. If a target doesn't have a description, it won't list it. This is great for internal targets that a user shouldn't execute on their own. (For example, a target that merely does some sort of test to see if another target should execute). To make this work, make sure your targets have descriptions. I get angry when the person who wrote the Ant file puts a description for some minor target that I don't want, but forgets the description of the target I do want (like compile). Don't make David angry. You don't want to make David angry.
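For example, a target only shows up in the ant -p listing if it carries a description (the target names and paths below are just illustrative):

<!-- Listed by "ant -p" because it has a description. -->
<target name="compile" description="Compile changed sources and rebuild the jar">
    <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
</target>

<!-- Internal helper with no description: "ant -p" keeps it out of the listing. -->
<target name="check-uptodate">
    <uptodate property="jar.uptodate" targetfile="build/app.jar">
        <srcfiles dir="src" includes="**/*.java"/>
    </uptodate>
</target>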
Use default target names for your group. That way, I know what targets do what across the entire project, instead of one project using BUILD vs. build-programs vs. Compile vs. build-my-stuff vs. StuffBuild. We standardized on the Maven lifecycle names. They're documented and there are no arguments or debates.
Do not use <ant/> or <antcall> to enforce build order. Do not divide your build.xml into a dozen separate build.xml programs. All of these probably break Ant's ability to build a target dependency matrix. Besides, many Ant tools that show the dependency hierarchy of a build can't work across multiple build files.
Do not wrap your builds inside a shell script. If you do this, you're probably not understanding how builds work.
The build should not update any files in my working directory that were checked out by me. It shouldn't pollute my working directory with all sorts of build artifacts spread out all over the place. It shouldn't do anything outside of the working directory (except maybe do some sort of deploy, but only when I run the deploy target). In fact, all build processing should take place in a sub-directory INSIDE my working directory. A clean should merely delete this one directory. Sometimes this is called build, sometimes dist. I usually call it target because I've adopted Maven naming conventions.
Your build script should be a build script. It shouldn't do checkouts or updates -- at least not automatically. I know that if you use CruiseControl as a continuous build process, you have to have update and checkout functionality inside your build.xml. It's one of the reasons I now use Jenkins.
Sorry about this answer not necessarily being the one you're looking for. You didn't really state what you're doing with Ant. If you're doing builds, don't do what you're trying to do. If you're writing some sort of program, use a real programming language and not Ant.
An Ant build should typically finish in under a minute or two, and redoing a build because you changed a file shouldn't take more than 30 seconds. This is important to understand because I want to encourage my developers to build with Ant, and to use the same targets that my Jenkins server uses. That way, they can test out their build the same way my Jenkins server will do the official build.
You may use the input task provided by Ant and make the prompting target your default target.
<input
message="Please enter Target ID (1,2 or 3):"
validargs="1,2,3"
addproperty="targetID"
/>
Use the value of this property to decide which target to execute.
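For instance, here is a sketch of a default target that prompts and then dispatches with <antcall> (bearing in mind the reservations about <antcall> in the previous answer; the echo bodies are placeholders):

<project name="demo" default="choose">

    <target name="choose" description="Prompt for a target when none is given">
        <input
            message="Please enter Target ID (1,2 or 3):"
            validargs="1,2,3"
            addproperty="targetID"
        />
        <!-- Expands to target1, target2 or target3. -->
        <antcall target="target${targetID}"/>
    </target>

    <target name="target1"><echo>running target1</echo></target>
    <target name="target2"><echo>running target2</echo></target>
    <target name="target3"><echo>running target3</echo></target>

</project>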
From the Ant documentation:
message: The message which gets displayed to the user during the build run.
validargs: Comma-separated String containing valid input arguments. If set, the input task will reject any input not defined here.
You may pass any arguments according to your needs.
addproperty: The name of a property to be created from the input. Behaviour is equal to the property task, which means that existing properties cannot be overridden.
I have an Ant task which runs only if a lock file does not exist.
But if the build fails, the lock file is not deleted at the end of the task, and subsequently the task is no longer invoked from my scheduled jobs.
Is there any way to handle this so that, even if the build fails, I can still call my cleanUp task to delete the lock files?
Look at this: Testing and exception handling with Ant
It shows a macrodef with trycatch, so the cleanup can run even when the build fails.
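As a sketch, here is the same idea using the ant-contrib trycatch task instead of a hand-rolled macrodef (this assumes ant-contrib is installed and on Ant's path; ${lock.file} and the target names are made up for illustration):

<target name="guarded-build">
    <taskdef resource="net/sf/antcontrib/antcontrib.properties"/>
    <trycatch>
        <try>
            <!-- the real work; may fail -->
            <antcall target="build"/>
        </try>
        <finally>
            <!-- runs whether the nested build succeeded or failed -->
            <delete file="${lock.file}" quiet="true"/>
        </finally>
    </trycatch>
</target>

Without a <catch> block the original failure still propagates after <finally> has run, so scheduled jobs still see the error, but the lock file is gone.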
This sounds to me like something that should be cleaned up at the beginning of any build.
Do you have an init task or some task on which all other tasks depend? I would just put the deletion of that file in there so that it always gets deleted even if a previous build failed.
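Something along these lines, for instance (the lock file property and target names are illustrative):

<target name="init">
    <!-- Remove a stale lock left over from a failed build. -->
    <delete file="${lock.file}" quiet="true"/>
</target>

<target name="build" depends="init">
    <!-- actual work goes here -->
</target>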
However, it's a confusing requirement and it doesn't sound very idiomatic. Ordinarily, target execution is controlled through dependencies and conditional properties. See the targets section of the manual for more details about if and unless. Creating a file is an expensive way to get functionality already present in Ant's core.
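For example, a sketch of the conditional-property approach (property and target names are made up):

<target name="check-running">
    <!-- Sets build.locked only if the lock file exists. -->
    <available file="${lock.file}" property="build.locked"/>
</target>

<target name="scheduled-build" depends="check-running" unless="build.locked">
    <!-- Skipped entirely when build.locked is set. -->
    <echo>Running the scheduled build...</echo>
</target>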