Ant: ask user which target to call if not specified

I have an Ant build.xml file with 3 targets in it:
target1, target2 and target3.
If the user simply runs ant rather than an explicit ant target1 or something like that, I want to prompt the user asking which target they would like to call.
Remember, the user should only be prompted for this if they don't explicitly call a target while running ant.

Ant is not a programming language, it's a dependency matrix language. There's a big difference between the two.
In a programming language, you can specify the absolute order of execution. Plus, you have a lot more flexibility in doing things. In Ant, you don't specify the execution order. You specify various short "how to build this" steps and then specify their dependencies, and Ant will automatically figure out the execution order needed.
It's one of the hardest things for developers to learn about Ant. I've seen too many times when developers try to force execution order and end up executing the same set of targets dozens of times over and over. I recently had a build here that took almost 10 minutes, and I rewrote the build.xml to produce the same build in under 2 minutes.
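To make that concrete, here is a minimal sketch (the target names are illustrative, not from the question's build file): you only declare what each target depends on, and running ant jar executes init, compile and jar exactly once each, in an order Ant works out itself.

<project name="example" default="compile">
    <target name="init">
        <mkdir dir="target/classes"/>
    </target>
    <target name="compile" depends="init" description="Compile changed sources">
        <javac srcdir="src" destdir="target/classes" includeantruntime="false"/>
    </target>
    <target name="jar" depends="compile" description="Package the application jar">
        <jar destfile="target/example.jar" basedir="target/classes"/>
    </target>
</project>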
You could use <input/> to get the user input, then use <exec> or <java> to execute another Ant process to execute the requested target. However, this breaks the way Ant is supposed to work.
The default target should be the default target that developers would want to execute on a regular basis while they program. It should not clean the build. It should not run 10 minutes of testing. It should compile any changed files, and rebuild the war or jar. That's what I want about 99% of the time. The whole process takes 10 seconds.
I get really, really pissed when someone doesn't understand this. I hate it when I type ant and I get directions on how to execute my build. I get really irritated when the default target cleans out my previous compiles. And, I get filled with the deadly desire to pummel the person who wrote the damn build file with a large blunt object if I am prompted for something. That's because I will run Ant, do something else while the build happens, then come back to that command window when I think the build is done. Nothing makes me angrier than coming back to a build only to find out it's sitting there waiting for me to tell it which target to run.
If you really, really need to do this, use a shell script called build.sh. Don't futz with the build.xml to do this, because that affects development.
What you really need to do is teach everyone how to use Ant:
Ant will list user-executable targets when you type in ant -p. This will list all targets and their descriptions. If a target doesn't have a description, it won't list it. This is great for internal targets that a user shouldn't execute on their own. (For example, a target that merely does some sort of test to see if another target should execute.) To make this work, make sure your targets have descriptions. I get angry when the person who wrote the Ant file puts a description on some minor target that I don't want, but forgets the description of the target I do want (like compile). Don't make David angry. You don't want to make David angry.
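For instance (an illustrative fragment, not from the question's build file), ant -p would list compile below but skip the helper target, because only compile carries a description attribute:

<target name="compile" description="Compile changed sources and rebuild the jar">
    <javac srcdir="src" destdir="target/classes" includeantruntime="false"/>
</target>

<!-- No description attribute, so ant -p will not list this internal helper -->
<target name="check-jar-uptodate">
    <uptodate property="jar.uptodate" targetfile="target/example.jar">
        <srcfiles dir="src" includes="**/*.java"/>
    </uptodate>
</target>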
Use default target names for your group. That way, I know what targets do what across the entire project instead of one project using BUILD vs. build-programs vs. Compile vs. build-my-stuff vs. StuffBuild. We standardized on Maven lifecycle names. They're documented, and there are no arguments or debates.
Do not use <ant/> or <antcall> to enforce build order. Do not divide your build.xml into a dozen separate build.xml programs. All of these probably break Ant's ability to build a target dependency matrix. Besides, many Ant tools that show the dependency hierarchy of a build can't work across multiple build files.
Do not wrap your builds inside a shell script. If you do this, you're probably not understanding how builds work.
The build should not update any files in my working directory that were checked out by me. It shouldn't pollute my working directory with all sorts of build artifacts spread out all over the place. It shouldn't do anything outside of the working directory (except maybe do some sort of deploy, but only when I run the deploy target). In fact, all build processing should take place in a sub-directory INSIDE my working directory. A clean should merely delete this one directory. Sometimes, this is called build, sometimes dist. I usually call it target because I've adopted Maven naming conventions.
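A minimal sketch of that layout (the directory name follows the Maven convention mentioned above): every target writes its output under one directory inside the working copy, and clean just deletes that directory.

<!-- All build output lives under a single directory inside the working copy -->
<property name="build.dir" location="target"/>

<target name="clean" description="Delete everything the build produced">
    <delete dir="${build.dir}"/>
</target>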
Your build script should be a build script. It shouldn't do checkouts or updates -- at least not automatically. I know that if you use CruiseControl as a continuous build process, you have to have update and checkout functionality inside your build.xml. It's one of the reasons I now use Jenkins.
Sorry about this answer not necessarily being the one you're looking for. You didn't really state what you're doing with Ant. If you're doing builds, don't do what you're trying to do. If you're writing some sort of program, use a real programming language and not Ant.
An Ant build should typically finish in under a minute or two, and redoing a build because you changed a file shouldn't take more than 30 seconds. This is important to understand because I want to encourage my developers to build with Ant, and to use the same targets that my Jenkins server uses. That way, they can test out their build the same way my Jenkins server will do the official build.

You may use the input task provided by Ant and make it your default target.
<input
    message="Please enter Target ID (1,2 or 3):"
    validargs="1,2,3"
    addproperty="targetID"
/>
Use the value of this property to decide which target to execute.
From the Ant documentation:
message: The Message which gets displayed to the user during the build run.
validargs: Comma-separated String containing valid input arguments. If set, the input task will reject any input not defined here.
You may pass any arguments according to your needs.
addproperty: The name of a property to be created from input. Behaviour is equal to the property task, which means that existing properties cannot be overridden.
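Putting the pieces together, here is a minimal sketch using the three target names from the question (the default "choose" target is illustrative, and it dispatches with <antcall>, which the first answer cautions against, so treat it as a convenience rather than good Ant style): running plain ant prompts for the target, while ant target2 runs that target directly with no prompt.

<project name="example" default="choose">
    <target name="choose">
        <input
            message="Please enter Target ID (1,2 or 3):"
            validargs="1,2,3"
            addproperty="targetID"
        />
        <!-- targetID expands to 1, 2 or 3, so this calls target1, target2 or target3 -->
        <antcall target="target${targetID}"/>
    </target>

    <target name="target1">
        <echo message="Running target1"/>
    </target>
    <target name="target2">
        <echo message="Running target2"/>
    </target>
    <target name="target3">
        <echo message="Running target3"/>
    </target>
</project>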

Related

Jenkins workspaces and concurrent builds, how do they work?

I am currently learning the ins and outs of Jenkins and Pipeline.
One thing I do not yet understand is the following:
A Jenkins job by default can be executed concurrently (I can check the checkbox "Do not allow concurrent builds" if I don't want that).
What I don't understand is the following:
Let's say Jenkins checks out code in /var/lib/jenkins/workspace/my-project-workspace/
Now how would it be possible to run concurrent builds without conflicts?
Let's say that build nr 1 checks out code in that path and starts testing it, and while doing that, build nr 2 is started and checks out code in that same path.
How will that not conflict with build nr 1?
I am probably missing something obvious here... Please help :)
The subdirectory inside the workspace/ folder will not always be your project name, but a (randomly) generated directory name. That's all the magic.
When this option is checked, multiple builds of this project may be executed in parallel.
By default, only a single build of a project is executed at a time — any other requests to start building that project will remain in the build queue until the first build is complete.
This is a safe default, as projects can often require exclusive access to certain resources, such as a database, or a piece of hardware.
But with this option enabled, if there are enough build executors available that can handle this project, then multiple builds of this project will take place in parallel. If there are not enough available executors at any point, any further build requests will be held in the build queue as normal.
Enabling concurrent builds is useful for projects that execute lengthy test suites, as it allows each build to contain a smaller number of changes, while the total turnaround time decreases as subsequent builds do not need to wait for previous test runs to complete.
This feature is also useful for parameterized projects, whose individual build executions — depending on the parameters used — can be completely independent from one another.
Each concurrently executed build occurs in its own build workspace, isolated from any other builds. By default, Jenkins appends "@" to the workspace directory name, e.g. "@2".
The separator "@" can be changed by setting the hudson.slaves.WorkspaceList Java system property when starting Jenkins. For example, "hudson.slaves.WorkspaceList=-" would change the separator to a hyphen.
For more information on setting system properties, see the wiki page.
However, if you enable the Use custom workspace option, all builds will be executed in the same workspace. Therefore caution is required, as multiple builds may end up altering the same directory at the same time.

Why is Bazel faster than Gradle?

Originally, I used Gradle to build my Android project, but recently I migrated it to Bazel, and I find that Bazel is truly faster than Gradle. I want to know why, but the Bazel docs don't give much insight into this. Can anyone help me?
Thanks very much!
Full disclosure: I work on Bazel.
That's not an easy question to answer for two reasons. First, performance is highly dependent on the scenario. For example, we'd generally expect a clean build to be slower than a build where only a single file has changed. Second, I don't know how Gradle works internally, and they've done a lot of work recently to improve Gradle performance.
But I can talk about Bazel and what we're doing to make it fast. We've been working on build performance for ~10 years, starting long before we made it public.
The key feature is that we require all dependencies to be declared, and we track them explicitly. If you use a header file in C++, or depend on a Java library, you must declare this dependency in your BUILD file (and we enforce that these are declared by sandboxing individual actions). There are three effects from this:
First, we can heavily parallelize the build, because we know which things depend on which other things.
Second, we can make incremental builds very fast, because we can tell what parts of the build have to be re-done when you change a specific file (BUILD file, header file, source file, ...).
Third, we almost never have to do clean builds. Other build tools often require 'make clean' to get into a predictable state - since Bazel knows all the dependencies, it can get to a predictable state on every single build.
Another effect is that we can cache remotely (i.e., across users), and even execute on another machine, although neither of these are fully supported at the time of this writing.

Is there a way for one ant script to check if another is already running?

We have several automated build scripts, some of which are run automatically every 2 hours, and some of which are only ever run manually.
If a build script is started manually while another is already running, it can cause...problems. Such as merging untested branches into the production branch.
I'd like to prevent this happening again, and the simplest solution in my mind is to have each build script start by checking that another is not currently running.
Is there a way in ant to directly check if another ant instance/script is currently running?
If not, what's the simplest way to add such a check? My first thought is a file created at the beginning and deleted at the end of a build. I'd prefer a way that handles user-cancelled builds nicely, but it's not necessary. It needs to work if a build succeeded and if a build failed (but was not killed by the user).
If these are separate Ant processes, then I think the only solution is to define a lockfile of some sort that each Ant process needs to acquire before it can continue.
Perhaps the tempfile task could be used for this?
Actually, a sort-of semaphore based on a directory might be better, because the tempfile really is a unique tempfile. The first thing your script does is use mkdir to create a directory with a shared, agreed-upon name, but only if the directory does not already exist.
Upon exit it invokes delete on this shared resource name.
The idea is that the content and name of the directory is meaningless -- it only serves as an "IPC" cooperative locking mechanism.
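A minimal sketch of that directory-based lock, assuming the ant-contrib library is available for its trycatch task (the lock directory and target names are illustrative): the finally block releases the lock whether the build succeeds or fails, though a build killed by the user would still leave the directory behind.

<project name="locked-build" default="build">
    <!-- ant-contrib provides <trycatch>; its jar must be on Ant's classpath -->
    <taskdef resource="net/sf/antcontrib/antlib.xml"/>
    <property name="lock.dir" location=".build-lock"/>

    <target name="build">
        <fail message="Another build appears to be running (${lock.dir} exists).">
            <condition>
                <available file="${lock.dir}" type="dir"/>
            </condition>
        </fail>
        <mkdir dir="${lock.dir}"/>
        <trycatch>
            <try>
                <!-- the real build steps go here -->
                <echo message="building..."/>
            </try>
            <finally>
                <delete dir="${lock.dir}"/>
            </finally>
        </trycatch>
    </target>
</project>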
This isn't particularly elegant, but I think your only other option is to set up a build server that handles scheduled and continuous builds based on various triggers. One that many people use is Jenkins (or has it been renamed?)
[Update]
Perhaps "Do I have a way to check the existence of a directory in Ant (not a file)?" would do the trick?
To be honest, this approach may work in the short term, but it just moves the problem around. Instead of resetting unit test results you'll be removing lockfiles by hand to get builds working again. My advice is to set up a CI build system, but I recognize this is a fair amount of work (and introduces a whole different set of future problems.)

Jenkins Ant and dynamic build.properties files

I have a single code base being used to build an application for multiple platforms.
Locally I have setup a main build-env.properties file, and a series of additional *.properties files that I use to switch settings for the different platforms I am publishing to.
Doing my build on the command line I simply use the command:
ant build -propertyfile dev-build.properties
How can I do this in Jenkins?
I currently use the "Invoke Ant" Build Step with the target set to build, but am at a loss for how to specify the secondary propertyfile.
Although not exactly the same, you can take the contents of those properties files and put them into the Jenkins Invoke Ant build step, using the advanced Properties field.
The most basic way:
You will need to create a new task for each different set of sub properties you wish to utilize.
In your "Invoke Ant" build step, if you press Advanced..., this reveals a "Properties" field, you can copy the properties from one of your *.properties files into that field.
Repeat for each different properties file you wish to utilize.
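For example (these key/value pairs are purely illustrative), if dev-build.properties contains lines such as the following, they are pasted into the Properties field exactly as they appear in the file:

platform=dev
debug.enabled=true
api.endpoint=https://dev.example.com/api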
The parameterized build plugin might help you. This is assuming the number of properties you are changing is one or two. So when you run a job, you get a drop-down to select your OS and go.
Though, as I have mentioned here, what goes against this plugin is that it makes the process manual.
On the thread "Hudson / Jenkins: share parameters between several jobs", you can read the 2nd option in Anders's answer as an alternate approach.
A better approach for this is using a parameterised job with a file parameter (refer to the docs for creating such builds). Mentioning the file location as "propertyfile" would help. This would be better than reconfiguring the job again and again to run a build (to copy the properties file to the input location).

ANT returning properties from an externally called build

I'm currently trying to refactor a couple of our ANT scripts and what I'd like to do is try and centralise some of the common targets they use into some kind of shared area.
The target I'm trying to work on just now is configuration. Both our scripts currently have the same code which loads in external properties and sets up our classes.
What I've tried to do is move this target into another build script called configuration.xml and call this through <ant antfile="configuration.xml"> from each script.
What's happening, though, is the target is running, but I can't work out how I can get the values being set to return to the parent build script. Is there any way I can do that?
Another approach I thought of was to create some kind of "base" script that the other two could inherit from. I don't think that's ideal in the long term but it's an option I thought I could try. Again though I can't find anything online to say ANT can do this.
It depends on what version of Ant you're using as to what methods you have available, but there is the <import file="your-include.xml"/> option.
Here's an example of some imports, if it helps.
http://subversion.assembla.com/svn/cfdistro/trunk/cfdistro/src/cfdistro/scm.xml
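A minimal sketch of the <import> approach (file and target names are illustrative apart from the configuration.xml mentioned in the question): because <import> pulls the shared targets into the same Ant project, any properties the configure target sets are visible to the importing script, unlike a separate <ant antfile="..."> invocation.

configuration.xml:

<project name="shared-configuration">
    <target name="configure">
        <!-- load external properties once, for every build that imports this file -->
        <property file="shared.properties"/>
        <property name="classes.dir" location="target/classes"/>
    </target>
</project>

build.xml (one of the two existing scripts):

<project name="app" default="compile">
    <import file="configuration.xml"/>
    <target name="compile" depends="configure">
        <mkdir dir="${classes.dir}"/>
        <javac srcdir="src" destdir="${classes.dir}" includeantruntime="false"/>
    </target>
</project>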
