My project has multiple, mostly independent modules, each with its own Ivy file. A couple of the modules are 'top-level' in that nothing depends on them; they just depend on other modules.
I'd like to generate a POM for those modules, to be used to publish the dependencies for my users to consume.
The makepom task only accepts a single file via the ivyfile attribute, as far as I can tell. I created a master module which declares a dependency on each top-level module and provided that to makepom, but the generated POM does not transitively include the dependencies of the top-level modules; it only lists the top-level modules themselves.
I realize I could just provide several POM files, but for my sanity I'd prefer to keep it to just one.
So I am wondering if you can somehow pass multiple Ivy files to the makepom task, or if there is a way to get it to list all the dependencies when I use a master Ivy file. Or, at worst, is there an easy way to merge POM files without doing it by hand?
If this is a documentation concern, configure each module to generate a report of its transitive dependencies; see the Ivy report task.
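A minimal report target might look something like this sketch (it assumes the Ivy Ant tasks are loaded under the xmlns:ivy="antlib:org.apache.ivy.ant" namespace, and the output directory is arbitrary):

<target name="report">
    <!-- resolve first so Ivy knows the full transitive graph -->
    <ivy:resolve/>
    <!-- writes an HTML report of the transitive dependencies per configuration -->
    <ivy:report todir="build/ivy-report"/>
</target>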
The makepom task only accepts a single file because, just as in a Maven project, each module has a single file that declares its dependencies.
Perhaps what you could do is create a parent ivy file which has a dependency on each child module?
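For example, something like this sketch (the organisation, module names, and revisions are placeholders):

<ivy-module version="2.0">
    <info organisation="com.example" module="master"/>
    <dependencies>
        <!-- one entry per top-level module -->
        <dependency org="com.example" name="top-level-a" rev="1.0"/>
        <dependency org="com.example" name="top-level-b" rev="1.0"/>
    </dependencies>
</ivy-module>

which you would then feed to makepom:

<ivy:makepom ivyfile="ivy-master.xml" pomfile="build/master.pom"/>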
As Mark O'Connor pointed out, the <ivy:report> task does an excellent job of documenting all the transitive jars, showing which ones are required for runtime, compilation, and so on.
Another possibility is to use <ivy:retrieve> to copy all the jars used by your master module into an empty directory, even if your project doesn't otherwise need this step. The list of required jars is then a simple directory listing.
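A sketch of that idea (the paths and the master Ivy file name are assumptions):

<target name="list-dependencies">
    <ivy:resolve file="ivy-master.xml"/>
    <!-- copies every resolved jar into an otherwise empty directory -->
    <ivy:retrieve pattern="build/all-deps/[artifact]-[revision].[ext]"/>
</target>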
Related
I am still somewhat green when it comes to Grails. It seems to me that there are multiple locations where you can put dependent JARs, among them BuildConfig.groovy, the dependencies.txt file in the Grails folder, and then any number of 'lib' folders.
What is the difference between these? When should you use one over the other? Why can't Grails have one central place where all dependencies are kept?
The preferred method for managing your dependencies is BuildConfig.groovy, since it uses repositories (Maven, for example) to make those resources available.
The reason grails-app/lib is also available is that some resources aren't kept in a repository for one reason or another, and you need a way to include the resource directly with the application itself.
When in doubt always use BuildConfig.groovy unless you have a use case where you can't.
Update
The dependencies.txt file is simply a listing of the dependencies used by Grails itself and is not used to resolve them. You can read more about it in the documentation:
"You can find a list of dependencies required by Grails in the "dependencies.txt" file in the root directory of the unpacked distribution."
One of the tips Burt Beckwith provides when creating plugins is to delete files you don't use.
So if you don't use UrlMappings.groovy - delete it.
I was wondering about directories. If you have no controllers, should you delete the controller directory?
Thanks
The short answer is "Yes, you should." Looking at some other plugins, you can see this is pretty standard practice; see, for example, the Redis plugin on GitHub.
You can delete directories, but they'll get re-created after running various scripts, in particular package-plugin. I tend to remove them as source folders in GGTS so they're not distracting - I like to see only directories that are being used. I used to use an Ant script to do various build tasks for plugins, but at this point all I use it for is the post-package-cleanup task that deletes unused folders, e.g. https://github.com/grails-plugins/grails-spring-security-core/blob/master/build.xml.
It turns out that only three plugin files are required - all of the rest can be deleted if they're not used. These are the plugin descriptor, application.properties (although this is only used to specify the Grails version), and BuildConfig.groovy. BuildConfig.groovy might be optional too if you don't need to publish the plugin to a repo and have no dependencies. At a minimum it's needed to specify the release plugin, but if you don't need that then you can probably get by with just 2 files :)
Being a Maven newbie, I want to know if it's possible to use multiple classifiers at once; in my case it would be for generating different jars in a single run. I use this command to build my project:
mvn -Dclassifier=bootstrap package
Logically I would think that this is possible:
mvn -Dclassifier=bootstrap,api package
I am using Maven 3.0.4.
Your project seems like a candidate for refactoring into a couple of what Maven calls "modules". This involves splitting the code into separate projects within a single directory tree, where the topmost level is normally a parent or aggregator POM with <packaging>pom</packaging> and a <modules/> list containing the sub-project directory names.
Then, I'd advise putting the API interfaces/exceptions/whatnot into an api/ subdirectory with its own pom.xml, and putting the bootstrap classes into a bootstrap/ subdirectory with its own pom.xml. The top-level pom.xml would then list the modules like this:
<modules>
    <module>api</module>
    <module>bootstrap</module>
</modules>
Once you've refactored the project, you will probably want to add a dependency from the bootstrap module to the api module, since I'm guessing the bootstrap will depend on interfaces/etc. from the api.
Now, you should be able to go into the top level of the directory structure and simply call:
mvn clean install
This approach is good because it forces you to think about how different use cases are supported in your code, and it makes dependency cycles between classes harder to miss.
If you want an example to follow, have a look at one of my github projects: Aprox.
NOTE: If you have many modules dependent on the api module, you might want to list it in the top-level pom.xml in the <dependencyManagement/> section, so you can leave off the version in submodule dependency declarations (see Introduction to the Dependency Mechanism).
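For instance, the top-level pom.xml might contain something like this (the groupId and version are placeholders):

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.example</groupId>
            <artifactId>api</artifactId>
            <version>${project.version}</version>
        </dependency>
    </dependencies>
</dependencyManagement>

after which submodules can declare the api dependency without repeating the version.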
UPDATE: Legacy Considerations
If you can't refactor the codebase for legacy reasons, etc. then you basically have two options:
Construct a series of pom.xml files in an empty multimodule structure, and use the build-helper-maven-plugin along with source includes/excludes to fragment the codebase and allocate the classes to different modules out of a single source tree.
Maybe use a plugin like the assembly plugin to carve up the compiled classes in target/classes (${project.build.outputDirectory}) and allocate them to the different jars. In this scenario, each assembly descriptor requires an <id/>, and by default this value becomes the classifier for that assembly jar. Under this plan, the "main" jar output will still be the monolithic one created by the Maven build. If you don't want this, you can use a separate execution of the assembly plugin and set <appendAssemblyId>false</appendAssemblyId> in its configuration. If the output of that assembly is a jar, it will effectively replace the old output from the jar plugin. If you decide to pursue this approach, you might want to read the assembly plugin documentation to get as much exposure to different examples as you can.
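As a rough sketch of that second option (the descriptor path is hypothetical, and the descriptor's <id> would be, say, "api"):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <executions>
        <execution>
            <id>api-jar</id>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
            <configuration>
                <!-- the descriptor's <id> becomes the classifier of the extra jar -->
                <descriptors>
                    <descriptor>src/main/assembly/api.xml</descriptor>
                </descriptors>
            </configuration>
        </execution>
    </executions>
</plugin>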
Also, I should note that in both cases you would be stuck manipulating the list of artifacts produced via a set of profiles in the POM that turn different parts of the build on and off. I'd highly recommend making the default, unqualified build the one that produces everything. This makes it more likely that tools like the release plugin will catch everything you want to release, up-rev versions, and so on.
These solutions are usually what I promote as migration steps when you can't refactor the codebase all at once. They are especially useful when migrating from something like an Ant build that produces multiple jars out of a single source tree.
I am considering switching a Maven project that I manage to Apache-Ant/Ivy. I need more control over the build process and am getting very frustrated with Maven. Please no comments about how great Maven is. My question is about Ivy.
I would like to set up a "standard" Ant build template that can later be used for other projects with minimal changes.
I will set up a central "enterprise" repository where we can place third-party libraries that are not available in the public Maven repositories (e.g. commercial libraries, Sun libraries, proprietary libraries, etc.). This enterprise repository will be available on our local LAN, but may not be available from outside the office.
Each developer will have a private repository in ~/.ivy/repository. I would like the Ant build to automatically update this private repository with changed versions of libraries from the enterprise repository.
In ~/.ivy/ant, I plan on placing "standard" modules for including in the individual project build.xml files, using the include task in Ant 1.8. These modules will provide things like Scala and Clojure compilation targets with different versions for different Scala and Clojure versions (e.g.: scala-compile-2.9.1.xml, clojure-compile-1.3.xml, etc.) The build modules will be available in the enterprise repository and should be updated automatically in the private repositories if they change.
Each project will follow a standard Maven directory structure: ${project}/src/main/java, ${project}/target/classes, etc.
In the past, I tried using Ivy but the Ant build files got to be very large (> 500 lines for the template build file) and hard to manage/edit. I am hoping that by putting standard targets in their own build modules in the ~/.ivy/ant directory, I can avoid that code bloat.
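The include mechanism itself is simple enough; a project build file might pull in one of those shared modules roughly like this (the path and target names are hypothetical):

<project name="my-project" default="build">
    <!-- targets from the included file are exposed with the "scala." prefix -->
    <include file="${user.home}/.ivy/ant/scala-compile-2.9.1.xml" as="scala"/>

    <target name="build" depends="scala.compile"/>
</project>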
Can this be done? Am I way off base? The only documentation I can find on Ivy is at the Apache web site (http://ant.apache.org/ivy). Is there any other documentation available, including books?
Dividing the template build file into includable helper files is a rather sensible idea. I'm currently switching a really large project from plain Ant (no dependency management at all, just copying files from FTP) to an Ant/Ivy solution, and here is how I did it. I have a file with milestone targets, e.g. ready-to-compile, compiled, ready-to-archiving, archived, and so on; I think you get the idea. I've configured dependencies between these targets (dependencies in the Ant sense, don't get me wrong): compiled depends on ready-to-compile, ready-to-compile depends on initialized, and so forth. These targets have no body; they exist to be included in the build file of every module of your multi-module project. Their sole purpose is to maintain the STATE of the build, because with all the import machinery things get tricky and it becomes hard to know which target was overridden and when it will run. With this file I can easily hook into the build at every sensible milestone. Say one module needs to compile help files with an external exe. No problem: in that project, ready-to-archiving simply depends on the target that compiles the help. And since the milestone targets are included, I can override only some of them; all the others preserve the desired way of building the project.
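In sketch form, the milestone file looks something like this (the targets are deliberately empty; only the dependency chain matters):

<project name="milestones">
    <target name="initialized"/>
    <target name="ready-to-compile" depends="initialized"/>
    <target name="compiled" depends="ready-to-compile"/>
    <target name="ready-to-archiving" depends="compiled"/>
    <target name="archived" depends="ready-to-archiving"/>
</project>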
Another part of my strategy is mixin build files, one for each specific concern. For example, I have a file for Ivy, where I put initializing, resolving, publishing, and so on. When I want to use Ivy, I just include this file and manage the dependencies through my milestone targets. If a build is typical, including this one file gives me convention-over-configuration behavior, all out of the box. How? By combining it with other mixins; mixins may include other mixins in order to depend on them. So each mixin is a reusable, single-concern unit of my build strategy, as in OOP. In your case it would be a Scala mixin with targets specific to the Scala tooling.
Then I have a delegate.xml that delegates common build activities to the child projects. It has dist, all, test, and whatever else you want for a multi-module project. The build order is computed with the Ivy buildlist Ant task.
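In sketch form, the delegation might use buildlist like this (the directory layout is illustrative):

<!-- compute a dependency-ordered list of module build files -->
<ivy:buildlist reference="ordered.builds">
    <fileset dir="." includes="*/build.xml"/>
</ivy:buildlist>
<!-- run the requested target across all modules in that order -->
<subant target="dist" buildpathref="ordered.builds"/>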
There are also some other files, but these are the strategically important ones that gave me a reusable and maintainable build for this BIG and VERY conservative project. So if you are interested in the details, don't be shy and contact me. I will be very pleased to help you, because the Ivy docs really are complicated and incomplete.
EDIT: About books - Ant in Action may help you; I took several ideas from that book and highly recommend that everyone read it. You can find Ivy material there as well. As for the Ivy docs - sorry, that's all there is. But when I was struggling with this cumbersome Ivy+Ant combination, I found several interesting articles on private blogs, so those may fill the gap to some extent.
I would like to simplify my main build scripts, and I'd like to have the ability to reuse certain common ant tasks as a library, but I'd also like them to be easily available as a package.
For instance, I've got a task which sets up the Flex environment variables that I need to build a variety of projects. I can (and currently do) include those scripts by relative path from another location in source control. But what I want is a single downloadable package that I can grab via Ivy and that contains all of these generic tasks.
A jar seems the most natural solution, since this is doable from Java (use the class loader to access the file inside the jar), but I can't seem to find a "native" way in Ant to just get at the XML file.
In short, I want to do:
<import file="some.jar!bootstrap.xml">
But that doesn't work.
Is there some way to do this? Any other suggestions for making a library of Ant scripts would be much appreciated as well.
From what I understand, you're trying to extract a file containing more Ant tasks from your jar and then tell Ant to execute the tasks in those extracted files. Since the files are static, you'd probably be better off creating actual Java Task definitions in your jar and declaring them in your Ant build file. However, if you don't want to do that, you can use the Unzip Ant task to extract the resource out of the jar onto the file system, and then use the Ant task to execute the extracted file.
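Something like this sketch, assuming the jar has been resolved into a lib directory, the fragment is named bootstrap.xml, and the target name is hypothetical:

<!-- pull the build fragment out of the jar... -->
<unzip src="lib/ant-library.jar" dest="${build.dir}/imported">
    <patternset includes="bootstrap.xml"/>
</unzip>
<!-- ...then run a target from the extracted file -->
<ant antfile="${build.dir}/imported/bootstrap.xml" target="setup-flex-env"/>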
IIRC there's ongoing work in Ant to support this but it's not supported in any published version.