H5BP build script - dependencies - ant

I'm an Ant novice, but my expectation was that the build script in h5bp would use last-modified information to ensure that it only generated new files when it needed to. This does not happen - everything seems to run on every invocation of ant build. Is there a way I can prevent this? Is it a design feature of the h5bp build script?
I've extended project.xml to FTP to my server when I need to (and also to copy to my development server), which I find really useful. However, since the images copied from the publish folder have new dates even though they are unchanged, they are all FTP'd up each time, which is slow and unnecessary. FWIW I'm running this on Windows 7 (uploading to a Linux/Apache server).
Looking through the build.xml file, I see plenty of overwrite settings - most set to "no" and a few to "yes" - so I guess some conscious decisions have been made. I'd appreciate understanding why.
Grateful for any suggestions.
Thanks
Abo
PS Apart from this, the build script is just great!

So, I think I can answer part of my own question.
The publish (and intermediate) directories are deleted on each build, so the published image files are always regenerated.
Anyone know if there's a reason that the img directory cannot be left in publish and intermediate?
Thanks

It generates a new build from the source each time you build one. Leaving old images around would just result in a lot of unnecessary files.
Perhaps you could set a flag when the image files are FTP'd for the first time, and then not upload the images unless specifically asked to do so?
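Alternatively, Ant's optional ftp task can do the timestamp comparison itself. A minimal sketch, assuming the commons-net jar is on Ant's classpath and that the ${ftp.*} and ${dir.publish} properties are defined elsewhere (they are placeholders here); note this only helps if the publish step preserves file timestamps, e.g. by copying with preservelastmodified="true":

    <target name="upload">
        <!-- depends="yes" makes the ftp task transfer only files that
             are newer than the copies already on the server.
             All ${...} properties below are placeholders. -->
        <ftp server="${ftp.server}"
             userid="${ftp.user}"
             password="${ftp.password}"
             remotedir="${ftp.remotedir}"
             depends="yes">
            <fileset dir="${dir.publish}"/>
        </ftp>
    </target>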

Related

How to display configuration differences between two Jenkins builds?

I want to display non-code differences between the current build and the latest known successful build on Jenkins.
By non-code differences I mean things like:
Environment variables, including Jenkins parameters (e.g. the output of set), maybe with some filter
Versions of system tool packages (rpm -qa | sort)
Versions of python packages installed (pip freeze)
While I know how to save and archive these files as part of the build, the only part that is not clear is how to generate the diff/change report of the differences between the current build and the last successful build.
Please note that I am looking for a pipeline compatible solution and ideally I would prefer to make this report easily accessible on Jenkins UI, like we currently have with SCM changelogs.
Or to rephrase this: how do I create a build manifest and diff it against the last known successful one? If anyone knows a standard manifest format that can easily be used to combine all this information, that would be great.
you always ask the most baller questions, nice work. :)
we always try to push as many things into code as possible because of the same sort of lack of traceability you're describing with non-code configuration. we start with using Jenkinsfiles, so we capture a lot of the build configuration there (in a way that still shows changes in source control).

for system tool packages, we get that into the app by using docker and by inheriting from a specific tag of the docker base image. so even if we want to change system packages or even the python version, for example, that would manifest as an update of the FROM line in the app's Dockerfile. Even environment variables can be micromanaged by docker, to address your other example.

There's more detail about how we try to sidestep your question at https://jenkins.io/blog/2017/07/13/speaker-blog-rosetta-stone/.
there will always be things that are hard to capture as code, and builds will therefore still fail and be hard to debug occasionally, so i hope someone pipes up with a clean solution to your question.
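For the manifest-and-diff part itself, one rough approach is to archive a manifest from every build and pull back the one from the last successful build to compare against. A minimal declarative-pipeline sketch, assuming the Copy Artifact plugin is installed (the stage name and file names are hypothetical):

    pipeline {
        agent any
        stages {
            stage('Manifest') {
                steps {
                    // Capture the non-code state of this build in one file.
                    sh '(env | sort; rpm -qa | sort; pip freeze) > manifest.txt'
                    // Fetch the manifest archived by the last successful build;
                    // optional: true keeps the very first build from failing.
                    copyArtifacts projectName: env.JOB_NAME,
                                  selector: lastSuccessful(),
                                  filter: 'manifest.txt',
                                  target: 'previous',
                                  optional: true
                    // diff exits non-zero when the files differ, so swallow that.
                    sh 'diff -u previous/manifest.txt manifest.txt > manifest.diff || true'
                    archiveArtifacts artifacts: 'manifest.txt, manifest.diff'
                }
            }
        }
    }

The archived manifest.diff is then one click away on the build page, though it is not as polished as the SCM changelog view.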

Is there a way for one ant script to check if another is already running?

We have several automated build scripts, some of which are run automatically every 2 hours, and some of which are only ever run manually.
If a build script is started manually while another is already running, it can cause...problems. Such as merging untested branches into the production branch.
I'd like to prevent this happening again, and the simplest solution in my mind is to have each build script start by checking that another is not currently running.
Is there a way in ant to directly check if another ant instance/script is currently running?
If not, what's the simplest way to add such a check? My first thought is a file created at the beginning and deleted at the end of a build. I'd prefer a way that handles user-cancelled builds nicely, but it's not necessary. It needs to work if a build succeeded and if a build failed (but was not killed by the user).
If these are separate Ant processes, then I think the only solution is to define a lockfile of some sort that each Ant process needs to acquire before it can continue.
Perhaps the tempfile task could be used for this?
Actually, a sort-of semaphore based on a directory might be better, because the tempfile task really does generate a unique file name each time (so it can't serve as a fixed, shared lock name). The first thing your script does is use mkdir to create a shared resource directory name, but it only does this if the directory does not exist.
Upon exit it invokes delete on this shared resource name.
The idea is that the content and name of the directory is meaningless -- it only serves as an "IPC" cooperative locking mechanism.
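A rough Ant sketch of that idea (the property and target names are made up; note the check-then-mkdir pair is not atomic, so this is cooperative locking only):

    <property name="lock.dir" location="${java.io.tmpdir}/build.lock"/>

    <target name="acquire-lock">
        <!-- Fail fast if another build already holds the lock. -->
        <available file="${lock.dir}" type="dir" property="lock.held"/>
        <fail if="lock.held"
              message="Another build appears to be running (${lock.dir} exists)."/>
        <mkdir dir="${lock.dir}"/>
    </target>

    <target name="release-lock">
        <delete dir="${lock.dir}"/>
    </target>

    <target name="build" depends="acquire-lock">
        <!-- ...the real build steps go here... -->
        <antcall target="release-lock"/>
    </target>

If the build dies before release-lock runs, the directory has to be removed by hand - which is exactly the failure mode mentioned below.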
This isn't particularly elegant, but I think your only other option is to set up a build server that handles scheduled and continuous builds based on various triggers. One that many people use is Jenkins (or has it been renamed?)
[Update]
Perhaps "Do I have a way to check the existence of a directory in Ant (not a file)?" would do the trick?
To be honest, this approach may work in the short term, but it just moves the problem around. Instead of resetting unit test results you'll be removing lockfiles by hand to get builds working again. My advice is to set up a CI build system, but I recognize this is a fair amount of work (and introduces a whole different set of future problems.)

TFS MSBuild Copy Files from Network Location Into Build Directory

We are using TFS to build our solutions. We have some help files that we don't include in our projects as we don't want to grant our document writer access to the source. These files are placed in a folder on our network.
When the build kicks off we want the process to grab the files from the network location and place them into a help folder that is part of source.
I have found an activity in the XAML for the build process called CopyDirectory. I think this may work, but I'm not sure what values to place into its Destination and Source properties. After each successful build, the output is copied out to a network location. We want to copy the files from one network location into the new build directory.
I may be approaching this the wrong way, but any help would be much appreciated.
Thanks.
First, you might want to consider having your documentation author place his documents in TFS. You can give him access to a separate folder or project without granting access to your source code. The advantages of this are:
Everything is in source control. Files dropped in a network folder are easily misplaced or corrupted, and you have no history of changes to them. The ideal for any project is that everything related to the project is captured in source control so you can lift out a complete historical version whenever one is needed.
You can map the documentation to a different local folder on your build server such that simply executing the "get" of the source code automatically copies the documentation exactly where it's needed.
The disadvantage is that you may need an extra CAL for him to be able to do this.
Another (more laborious) approach is to let him save to the network location, and have a developer check the new files into TFS periodically. If the docs aren't updated often this may be an acceptable compromise.
However, if you wish to copy the docs from the network during your build, you can use one of the MSBuild Copy commands (as you are already aware), or you can use Exec. The copy commands are more complicated to use because they are often populated with filename lists generated from the outputs of other build targets, and are usually used with solution-relative pathnames. But if you're happy with DOS commands (xcopy/robocopy), then you may find it much easier just to use Exec to run an xcopy/robocopy command. You can then "develop" and test the xcopy command outside the MSBuild environment and then just paste it into the MSBuild script with confidence that it will work - much easier than trialling copy settings as part of your full build process.
Exec is documented here. The example shows pretty well how to do what you want, but in your case you can probably just replace the Command attribute with the entire xcopy/robocopy command (or even the name of a batch file) you want to use, so you won't need to set up the ItemGroup etc.
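As a concrete sketch of that approach, the BeforeBuild target override in a C#/VB project file is the classic place to hang such a command; the share path below is made up:

    <!-- Pull the help files from the (hypothetical) network share before
         compiling. /E copies subfolders, /I treats the destination as a
         folder, /Y suppresses overwrite prompts. -->
    <Target Name="BeforeBuild">
        <Exec Command='xcopy "\\fileserver\docs\help" "$(SolutionDir)Help\" /E /I /Y' />
    </Target>

If you prefer robocopy, note that it returns non-zero exit codes even on success (e.g. 1 means files were copied), so you would need IgnoreExitCode="true" plus your own check on the Exec task.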

Copy files to another folder during check in (TFS Preview)

I have the following scenario: the company edits aspx/xml/xslt files and copies them manually to the servers in order to publish them. So, no build is done. For the sake of control, we've decided to adopt TFS Preview since it tracks the version, who edited, and so on. Needless to say, it works like a charm. :)
The problem is that since we are unable to build the apps, we can't set up a build definition to automate the copying of files to another place which, as I've stated before, is done manually.
My question is: is it possible to copy the files to another place (a folder on a server, or local) during check-in? If so, how? (Remember, we don't build, so we can't customize the build process...)
You have two options.
1) Create a custom check-in policy. I'm not familiar enough with this process to give you any pointers, but I believe it can be done.
2) Create a custom build template, and use that for your builds. You should be able to wipe the build template down to nothing, and then only add the copy operation to it. This is probably the route I would take. Get started here.
You mention you are using TFS Preview, which is hosted in the cloud, so it won't be able to access any machines in your network unless you're prepared to open up your firewalls. :)
You can copy source-controlled files around the TFS instance (say, into a source-controlled drop folder) and then check this out after the build completes.
Start by familiarising yourself with customising the TFS Build Process.
When you're up to speed, you need to look at adding a "Copy" Activity in the Workflow to move the files to the drop folder.
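For reference, the stock build templates ship a CopyDirectory activity; dropped into the workflow XAML it looks roughly like this (the paths and display name here are made up, and mtbwa is the prefix conventionally mapped to Microsoft.TeamFoundation.Build.Workflow.Activities):

    <!-- Hypothetical example: copy a source-controlled Docs folder to the drop. -->
    <mtbwa:CopyDirectory DisplayName="Copy docs to drop"
                         Source="[SourcesDirectory + &quot;\Docs&quot;]"
                         Destination="[BuildDetail.DropLocation + &quot;\Docs&quot;]" />

In practice you would add it through the workflow designer rather than editing the XAML by hand.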

TFS MSBuild: $(ProjectDir) blank or random

I have a vcproj file that includes a simple pre-build event along the lines of:
Helpertask.exe $(ProjectDir)
This works fine on developer PCs, but when the solution is built on our TFS 2008 build server under MSBuild, $(ProjectDir) is either blank or points to an unrelated folder on the server!
So far the best workaround I have managed is to hard code the developer and server paths instead:
if exist C:\DeveloperCode\MyProject HelperTask.exe C:\DeveloperCode\MyProject
if exist D:\BuildServerCode\MyProject HelperTask.exe D:\BuildServerCode\MyProject
This hack works in post-build steps but it doesn't work for a pre-build step (the Pre-build task now does nothing at all under MSBuild!)
Do you have any ideas for a fix or workaround? I have very little hair left!
$(MSBuildProjectDirectory) worked for me
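In other words, the pre-build event becomes something like:

    Helpertask.exe $(MSBuildProjectDirectory)

$(MSBuildProjectDirectory) is one of MSBuild's reserved properties, so it is populated consistently whether the project is built in Visual Studio or on the build server.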
I think your problem may be related to how items are initialized. An item's Include attribute is evaluated at the beginning of a build, so if you depend on files that are created during the build process you must declare these as dynamic items. Dynamic items are those defined inside of a target, or by using the CreateItem task. I've detailed this on my blog: MSBuild: Item and Property Evaluation.
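A small illustration of the difference (the item, target, and path names are made up):

    <!-- Static: this Include is evaluated when the build starts,
         so it misses files generated later in the build. -->
    <ItemGroup>
        <GeneratedFiles Include="$(OutDir)**\*.xml" />
    </ItemGroup>

    <!-- Dynamic: CreateItem runs only when the target executes,
         so it sees files created by earlier targets. -->
    <Target Name="ListGenerated">
        <CreateItem Include="$(OutDir)**\*.xml">
            <Output TaskParameter="Include" ItemName="FreshFiles" />
        </CreateItem>
        <Message Text="Found: @(FreshFiles)" />
    </Target>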
Sayed Ibrahim Hashimi
My Book: Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build
I think the problem is that the build server's workspace probably isn't initialized properly.
I just kept getting problems with this - I tried many different approaches but they all failed in mysterious ways.
Once $(ProjectDir) started behaving properly again, the pre-build step stopped executing the command (I added echo commands above and below it - they were both executed, but the program in between them was not. No errors or output of any kind were generated to indicate why it failed).
I don't know if this is a dodgy server or if MSBuild is having a laugh.
I've given up now. I gave the build server a big kick and have changed tack: we now run this tool offline (manually) and check in the results for the build server to use. So much for an automated build :-( If only MSBuild would run solutions in the same way as Visual Studio does - it's maddening that it sets up the environment completely differently (different paths coming out of the solution variables, outputs redirected into different folders so you can't find them where they're supposed to be, etc.)
I branched an existing project and $(ProjectDir) kept the old directory in the newly branched code. But that's because I had some compiling errors. Once every project in the solution compiled without errors, $(ProjectDir) changed to the correct path.
Carlos A Merighe
