Best way to manage dependent ant builds over multiple servers?

I have these Ant scripts that build and deploy my appservers. My system, though, actually spans 3 servers. They all use the same deploy script (with flags) and all work fine.
The problem is that there are some dependencies. They all use the same database, so I need a way to stop all appservers across all machines before the build happens on machine 1. Then the deployment on machine 1 needs to run and complete first, as it's the one that handles the database build (which all appservers need before they can start).
I've had a search around and there are some tools that might be useful, but they all seem like overkill for what I need.
What do you think would be the best tool to sync and manage the Ant builds over multiple machines (all running Linux)?
Thanks,
Ryuzaki

You could make your database changes non-breaking: run your database change scripts first, then deploy to your appservers. This way your code changes aren't intrinsically tied to your database changes and both can happen independently.
When I say non-breaking I mean that database changes are written in such a way that two different versions of the code can function against the same database. For example, rather than renaming a column, you add a new one instead.
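A minimal sketch of that idea as an Ant target, assuming a hypothetical customers table where surname is being superseded by family_name (the connection settings are placeholders, db.password would be passed in on the command line, and the JDBC driver jar must be on Ant's classpath):

    <project name="db-migrate" default="migrate">
      <!-- Placeholder connection details: adjust for your database -->
      <property name="db.url" value="jdbc:mysql://dbhost/appdb"/>

      <target name="migrate">
        <sql driver="com.mysql.jdbc.Driver" url="${db.url}"
             userid="builduser" password="${db.password}">
          -- Non-breaking: add the new column alongside the old one and
          -- backfill it. Old appserver code keeps reading surname while
          -- new code reads family_name; the old column is dropped only
          -- after every appserver has been redeployed.
          ALTER TABLE customers ADD COLUMN family_name VARCHAR(255);
          UPDATE customers SET family_name = surname;
        </sql>
      </target>
    </project>

Because the migration breaks nothing, this target can run first on machine 1 and the appserver deployments can then proceed in any order.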

Related

Best practice for moving fastlane deployment of whitelabel apps off local machine and to a server/service

We create iOS and Android apps that are white-labeled. They all use a single code base (one for iOS and one for Android). Whenever we need to make changes to all of our apps (> 100 live in App Store) we rely on Fastlane. We have a "bulk" command that submits each new build to Apple, changing out config variables first and a few files so each app is unique.
This has worked well for us... but... it's getting really slow. We'd love to be able to take advantage of some of the continuous integration services out there. It seems like they weren't necessarily made for this use case, but it might still work?
Ideally, instead of running bulk on a local machine, we could spin up 100 instances on something like CircleCI and have them all run side by side, using our fastlane script to build, submit, etc.
We started by looking into CircleCI. The problem we are running into is that they don't allow injection of variables into a job (https://ideas.circleci.com/ideas/CCI-I-690).
Is there a better service for this goal? Is there a tool that was built to achieve this? Struggling to find an alternative to hacking together a bunch of smaller tools.
I think you already identified your first step: you will have to split your fastlane (and other tooling) configuration so that each app can be built in isolation.
Then you can trigger a job for each app on a CI service such as Travis CI or Azure Pipelines (both have a simple API you can use to start jobs and pass them parameters that will be available inside the job) that builds and releases the app.
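As a rough sketch of what that trigger could look like, here is an Ant target (staying with the build tool used elsewhere on this page) that calls the Travis CI v3 "create request" API via curl, once per app, injecting the app identifier as an environment variable. The repo slug, the token property, and the APP_ID variable are all assumptions for illustration; check the API docs of whichever service you pick:

    <project name="trigger-ci" default="trigger-all">
      <!-- Hypothetical repo slug (URL-encoded) and API token property -->
      <property name="travis.slug" value="myorg%2Fwhitelabel-apps"/>

      <macrodef name="trigger-app">
        <attribute name="appid"/>
        <sequential>
          <!-- POST a build request whose config injects APP_ID -->
          <exec executable="curl" failonerror="true">
            <arg value="-X"/><arg value="POST"/>
            <arg value="-H"/><arg value="Travis-API-Version: 3"/>
            <arg value="-H"/><arg value="Authorization: token ${travis.token}"/>
            <arg value="-H"/><arg value="Content-Type: application/json"/>
            <arg value="-d"/>
            <arg value='{"request":{"branch":"master","config":{"env":{"APP_ID":"@{appid}"}}}}'/>
            <arg value="https://api.travis-ci.com/repo/${travis.slug}/requests"/>
          </exec>
        </sequential>
      </macrodef>

      <target name="trigger-all">
        <trigger-app appid="app-one"/>
        <trigger-app appid="app-two"/>
        <!-- ...one call per white-label app -->
      </target>
    </project>

Each triggered job would then check out the shared code base and run the per-app fastlane lane using the injected APP_ID.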
All the other things (e.g. one big build vs. many small build steps) are just implementation details and will depend on the individual service or tools you choose.

Any quick way to convert VS .net manual build into Jenkins?

We are migrating 50+ .NET projects from TFS to GitHub; at the same time, we want to use Jenkins to automate the builds. Currently all the builds are done manually inside Visual Studio. I know how to automate these builds using MSBuild, and we already have a lot of these projects building inside Jenkins.
My question: is there a way to set up these 50+ projects quickly without creating them one by one manually? Any way to script them? E.g. since a Jenkins project has everything inside a folder, I could copy a sample project/folder to create a new one and modify something. Or create a Jenkins project using a script reading a config file? Any idea that can save some time is appreciated.
Not a direct answer, but too long for a comment, so here it goes anyway. Following the Joel test (which is in no way dogmatic for me but does make a lot of good points), and in my experience, you should already have an msbuild file now to build all those projects 'in one click'. Then, setting up a build server, in fact any build server, is just a matter of making it build that single parent project. This might not work for everyone, but for several projects I've worked on this had the following advantages:
the entire build process gets defined by developers, working locally on their machine, using 'standard' tools
as such they don't need to spend hours in a web interface figuring out the appropriate build steps, dependencies and whatnot (also those hours would have been worthless in the end if switching to a different build server)
since a complete build is now just a matter of msbuild master.proj, possibly along with some options to define configuration/platform/output directories, getting this running on any build server should be painless and quick (see the sketch after this list)
in the same manner this makes it easy to test different build servers with a minimum of time and migrate between them (also no need to ask SO questions on how to set everything up :)
this also makes it easy for other developers to get complete builds as well, without having to go through a build server
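The answer is about MSBuild, but the same "one parent build" idea transposes directly to the Ant builds discussed elsewhere on this page; a minimal sketch, assuming a hypothetical projects directory holding one build file per sub-project:

    <project name="master" default="build-all">
      <!-- Build every sub-project's own build file in one command;
           the same idea as an MSBuild master.proj that runs the
           MSBuild task over all .csproj files -->
      <target name="build-all">
        <subant target="build" failonerror="true">
          <fileset dir="projects" includes="*/build.xml"/>
        </subant>
      </target>
    </project>

Any build server then only needs to run this one file, which is the portability the list above argues for.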
Anecdote: we once had Jenkins running on multiple different projects as well. It took us days to get everything running, with the templates etc., and we found the web interface slow and cumbersome (and getting to know the API would have taken even more days). Then one day I got sick of this and made a bunch of msbuild scripts which could build everything from one msbuild command. That took much less time than setting up Jenkins, a couple of hours or so. Then I took a TeamCity installation we already had and made it build the new master project. It took like an hour and everything worked. Just recently I took the same project and got it working on Visual Studio Online, again in no time.
If those projects are more or less similar to build, you will probably be interested in the Template plugin for Jenkins. There you configure a dummy project such that it does what is common to (most of) the 50+ projects.
Afterwards you create a separate project for each: create the first project and make it use the template project for each of the steps that can be shared with the template project (use build step from other project). All subsequent projects can then be created as slightly adapted copies of this first 'real' project.
I use it such that the variable $JOB_NAME (that is, the actual project name in Jenkins) is part of the repository path, so I can clone from http://example.org/$JOB_NAME/
Configured that way, I can include the source code management step in the templating job and use it unmodified. The same goes for the build step and post-build step: they are run by a script which is somewhat universal across all my projects (mostly calling make and guessing deployment/publication paths from $JOB_NAME again).
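For an Ant-based job, the same $JOB_NAME convention might look like the sketch below; the deploy directory and the build steps are hypothetical, and example.org is the placeholder host from the answer above:

    <project name="universal-job" default="build">
      <!-- Expose Jenkins environment variables as Ant properties -->
      <property environment="env"/>
      <!-- Derive repository and deployment paths from the job name -->
      <property name="repo.url" value="http://example.org/${env.JOB_NAME}/"/>
      <property name="deploy.dir" value="/var/deploy/${env.JOB_NAME}"/>

      <target name="build">
        <!-- The universal script in the answer mostly calls make -->
        <exec executable="make" failonerror="true"/>
        <copy todir="${deploy.dir}">
          <fileset dir="out"/>
        </copy>
      </target>
    </project>

Because every path is derived from the job name, this one build file can be shared unmodified across all the templated jobs.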

Is there a way for one ant script to check if another is already running?

We have several automated build scripts, some of which are run automatically every 2 hours, and some of which are only ever run manually.
If a build script is started manually while another is already running, it can cause... problems, such as merging untested branches into the production branch.
I'd like to prevent this happening again, and the simplest solution in my mind is to have each build script start by checking that another is not currently running.
Is there a way in Ant to directly check if another Ant instance/script is currently running?
If not, what's the simplest way to add such a check? My first thought is a file created at the beginning of a build and deleted at the end. I'd prefer a way that handles user-cancelled builds nicely, but it's not essential. It does need to work whether a build succeeded or failed (as long as it was not killed by the user).
If these are separate Ant processes, then I think the only solution is to define a lockfile of some sort that each Ant process needs to acquire before it can continue.
Perhaps the tempfile task could be used for this?
Actually, a sort of semaphore based on a directory might be better, because a tempfile really is a unique temporary file. The first thing your script does is use mkdir to create a directory with a well-known shared name, but only if that directory does not already exist.
Upon exit it invokes delete on this shared directory name.
The idea is that the content and name of the directory are meaningless -- it only serves as an "IPC" cooperative locking mechanism.
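A minimal sketch of that cooperative lock in Ant (the lock directory path is an arbitrary choice):

    <project name="locked-build" default="build">
      <property name="lock.dir" location="/tmp/ant-build.lock"/>

      <target name="acquire-lock">
        <!-- Refuse to start if another build already holds the lock.
             Note the check-then-create is not atomic, so a very close
             race is still possible; this is a cooperative guard, not
             a bulletproof mutex. -->
        <available file="${lock.dir}" type="dir" property="lock.held"/>
        <fail if="lock.held"
              message="Another build appears to be running (${lock.dir} exists)."/>
        <mkdir dir="${lock.dir}"/>
      </target>

      <target name="release-lock">
        <delete dir="${lock.dir}"/>
      </target>

      <target name="build" depends="acquire-lock">
        <!-- ...real build steps go here... -->
        <antcall target="release-lock"/>
      </target>
    </project>

One caveat: if the build fails partway through, plain Ant never reaches the release step, so you would either wrap the body in something like ant-contrib's trycatch task or delete the stale directory by hand, which is exactly the trade-off noted in the update below.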
This isn't particularly elegant, but I think your only other option is to set up a build server that handles scheduled and continuous builds based on various triggers. One that many people use is Jenkins (or has it been renamed?)
[Update]
Perhaps "Do I have a way to check the existence of a directory in Ant (not a file)?" would do the trick?
To be honest, this approach may work in the short term, but it just moves the problem around: instead of resetting unit test results you'll be removing lockfiles by hand to get builds working again. My advice is to set up a CI build system, but I recognize this is a fair amount of work (and introduces a whole different set of future problems).

Continuous TFS local deployment

I have CI configured with TFS. What are the best ways to organize post-build (or, even better, post-test) deployment? My binaries are a few libraries plus a single executable.
Here is what I need:
Build on each commit. (This is configured and done)
When the build (or the tests) succeed, grab the binaries and drop them into some specific folder on the same build machine, fully replacing the previous files and folders. (I'd like to be able to configure the folder location somehow.)
Launch the application with some parameters; I need standard output redirection. For example: App.exe param=paramValue > log.txt
And before starting the application I need to kill its previous instance. (This is some kind of server instance that is alive all the time.)
The most obvious solution I tried was to do this with a post-build script, but that attempt failed. See here
Use Release Management in conjunction with PowerShell (or better still, Desired State Configuration) scripts. Depending on your MSDN licensing, it could be free for you, and it's specifically designed from the ground up to handle managing releases.
Overextending the build process to also do deployment is an awful idea. The build tools were designed to build, and they're good at it! They're not good at the types of considerations you have when you're trying to do deployments.
The problem is that most CI solutions (TFS included) would get you to the point where you had binaries, then say "Welp, you're on your own! Have fun figuring out how to deploy this stuff!" This never ends well -- you end up with something inflexible and very difficult to troubleshoot and maintain.
The modern "devops" approach here is to have your application's requirements in source control, treated as code (in this case, as a DSC script or scripts).
One other consideration: It sounds like you're trying to treat a console application as a service. This is going to be a big, big pain for you, since most software that handles releases will not run in an interactive session. Turn it into a true Windows service and your life will be easier.

How to Sandbox Ant Builds within Hudson

I am evaluating the Hudson build system for use as a centralized, "sterile" build environment for a large company with very distributed development (from both a geographical and managerial perspective). One goal is to ensure that builds are only a function of the contents of a source control tree and a build script (also part of that tree). This way, we can be certain that the code placed into a production environment actually originated from our source control system.
Hudson seems to give the Ant script the full set of rights assigned to the user running the Hudson server itself. Because we want to allow individual development groups to modify their build scripts without administrator intervention, we would like a way to sandbox the build process to (1) limit the potential harm caused by an errant build script, and (2) avoid all the games one might play to insert malicious code into a build.
Here's what I think I want (at least for Ant, we aren't using Maven/Ivy right now):
The Ant build script only has access to its workspace directory
It can only read from the source tree (so that svn updates can be trusted and no other code is inserted).
It could perhaps be allowed read access to certain directories (Ant distribution, JDK, etc.) that are required for the build classpath.
I can think of three ways to implement this:
Write an Ant wrapper that uses the Java security model to constrain access (a rough sketch of this appears after this list)
Create a user for each build and assign the rights described above. Launch builds in this user space.
(Updated) Use Linux "jails" to avoid the burden of creating a new user account for each build process. I know little about these, but we will be running our builds on a Linux box with a recent RedHat EL distro.
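For the first option, one plausible shape is a wrapper build file that forks the real build into a fresh JVM running under a SecurityManager with a restrictive policy. The paths and the policy file name are assumptions; the policy itself would grant read access to the JDK and Ant directories and read/write access only under the workspace:

    <project name="sandboxed-build" default="run">
      <!-- Hypothetical workspace location for the job being built -->
      <property name="workspace" location="/var/hudson/workspace/myjob"/>

      <target name="run">
        <!-- Fork Ant's launcher in a new JVM so the untrusted
             build.xml executes under the security policy -->
        <java classname="org.apache.tools.ant.launch.Launcher"
              fork="true" failonerror="true" dir="${workspace}">
          <classpath>
            <fileset dir="${ant.home}/lib" includes="ant-launcher.jar"/>
          </classpath>
          <jvmarg value="-Djava.security.manager"/>
          <jvmarg value="-Djava.security.policy=${workspace}/sandbox.policy"/>
          <arg value="-f"/>
          <arg value="${workspace}/build.xml"/>
        </java>
      </target>
    </project>

Note that the Java security model constrains only the JVM itself; anything the build shells out to via exec escapes it, which is an argument for the per-user or jail approaches.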
Am I thinking about this problem correctly? What have other people done?
Update: This guy considered the chroot jail idea:
https://www.thebedells.org/blog/2008/02/29/l33t-iphone-c0d1ng-ski1lz
Update 2: Trust is an interesting word. Do we think that any developers might attempt anything malicious? Nope. However, I'd bet that, with 30 projects building over the course of a year with developer-updated build scripts, there will be several instances of (1) accidental clobbering of filesystem areas outside of the project workspace, and (2) build corruptions that take a lot of time to figure out. Do we trust all our developers to not mess up? Nope. I don't trust myself to that level, that's for sure.
With respect to malicious code insertion, the real goal is to be able to eliminate the possibility from consideration if someone thinks that such a thing might have happened.
Also, with controls in place, developers can modify their own build scripts and test them without fear of catastrophe. This will lead to more build "innovation" and higher levels of quality enforced by the build process (unit test execution, etc.)
This may not be something you can change, but if you can't trust the developers then you have a larger problem than what they can or cannot do to your build machine.
You could go about this a different way: if you can't trust what is going to be run, you may need a dedicated person (or people) to act as build master, to not only verify changes to your SCM but also execute the builds.
Then you have a clear path of responsibility: builds are not modified after the fact and only come from that build system.
Another option is to firewall off outbound requests from the build machine, allowing only certain resources like your SCM server and your other operational network resources (e-mail, OS updates, etc.).
This would prevent Ant scripts from pulling in resources from outside the build system that are not in source control.
When using Hudson you can set up a Master/Slave configuration and then not allow builds to be performed on the Master. If you configure the Slaves to be in a virtual machine that can be easily snapshotted and restored, then you don't have to worry about a person messing up the build environment. If you apply a firewall to these Slaves, it should solve your isolation needs.
I suggest you have 1 Hudson master instance, which is an entry point for everyone to see/configure/build the projects. Then you can set up multiple Hudson slaves, which might very well be virtual machines or (not 100% sure if this is possible) simply unprivileged users on the same machine.
Once you have this set up, you can tie builds to specific nodes, which are not allowed - either by virtual machine boundaries or by Linux filesystem permissions - to modify other workspaces.
How many projects will Hudson be building? Perhaps one Hudson instance would be too big, given the security concerns you are expressing. Have you considered distributing the Hudson instances out, one per team? This avoids the permission issue entirely.
