My Flyway config (using mvn package to run Flyway):
<plugin>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-maven-plugin</artifactId>
<version>4.2.0</version>
<configuration>
<locations>
<location>db.migration.h2</location>
</locations>
</configuration>
</plugin>
I have a Maven app that works just fine for one DB (using the H2 database engine). I need Flyway to support other DB systems (DB2, Oracle EE, Postgres). On another project we do something similar with a Flyway config file on Flyway 3.2.1 to manage table creation between H2 and TimesTen.
(New findings) When I use a Flyway locations entry or a configuration-file entry in the POM file, "mvn clean package" works just fine. However, "mvn verify" gives me an error saying there are multiple V#_# files.
I had the H2 Flyway files in the following directory structure:
atdd/src/main/java/db/migration/V1_2__comment.java
atdd/src/main/resources/db/migration/V1_1__create_tables.sql
I created a subdirectory "h2" under migration and moved the Flyway files into that subdirectory.
I made DB2 and Oracle EE copies of those files in "db/migration/db2" and "db/migration/oracle_ee".
Running mvn package now gives me:
Caused by: org.flywaydb.core.api.FlywayException: Found more than one migration with version 1.1
Offenders:
->/Users/XXXXX/Documents/fun/atdd/target/classes/db/migration/h2/V1_1__create_tables.sql (SQL)
->/Users/xxxxxx/Documents/fun/atdd/target/classes/db/migration/db2/V1_1__create_tables.sql (SQL)
I have tried using a properties file, and that does not work either:
<plugin>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-maven-plugin</artifactId>
<version>4.2.0</version>
<configuration>
<configFile>./flyway.properties</configFile>
</configuration>
</plugin>
atdd/flyway.properties has
flyway.locations=db.migration.h2
It seems to be a problem with whatever runs the regression tests (the Surefire plugin? I am fairly new to Maven).
Is there anything special about the Surefire plugin?
There is a workaround: you could try setting the locations explicitly from your program using Flyway.setLocations("db/migration/h2"); note that locations point at folders (or packages) of migrations, not at individual files.
I was able to get it working by adding config arguments to the Failsafe plugin:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<version>2.19.1</version><!--$NO-MVN-MAN-VER$ -->
<configuration><argLine>-Dflyway.locations=db.migration.h2</argLine></configuration>
<executions>
<execution>
<goals><goal>integration-test</goal><goal>verify</goal></goals>
</execution>
</executions>
</plugin>
together with:
mvn clean package verify site -Dflyway.locations=db.migration.h2
Oddly, it does not work if I don't include "-Dflyway.locations=db.migration.h2" on the command line.
Flyway searches the entire classpath recursively, looking for migrations to apply. "Recursively" means it also examines folders nested inside other folders.
So all of your SQL files were found. There is no way for Flyway to know which of those nested folders should be used or should be ignored.
As the other answer suggests, you must specify the desired folders explicitly if you want some of them ignored.
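One convenient way to do that is a Maven profile per database, each setting the flyway.locations property that the plugin reads. This is a sketch under assumptions: the profile ids are illustrative, and it relies on the plugin picking up flyway.locations from Maven properties (which the -D behaviour above suggests it does).
<!-- in the project's pom.xml; activate with e.g. mvn clean verify -Ph2 -->
<profiles>
<profile>
<id>h2</id>
<properties>
<flyway.locations>db/migration/h2</flyway.locations>
</properties>
</profile>
<profile>
<id>db2</id>
<properties>
<flyway.locations>db/migration/db2</flyway.locations>
</properties>
</profile>
</profiles>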
I've started using Travis CI to automate my builds. I have several open source projects, and they all deploy to Nexus Sonatype, from where they go to Maven Central. They're all Java projects that use Maven to build and GitHub as a repo.
I've been doing this manually for years and I have appropriate keys and logins and my pom is compatible etc.
Implementing the first one was easy enough: it is a single-module project, and it builds and deploys just fine. Then I did a second one, a multi-module project, and got that working in much the same way. My third, however, is baffling me.
The Maven build on this thing is a bit tricky, but it does run fine locally, and I even have the actual build running on Travis successfully. But the deploy doesn't work.
The problem is that when it tries to connect to Nexus Sonatype, I get an authorisation error:
Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project madura-bundles:
Failed to deploy artifacts: Could not transfer artifact nz.co.senanque:madura-bundles:pom:4.5.6 from/to sonatype-nexus-staging (https://oss.sonatype.org/service/local/staging/deploy/maven2/):
Failed to transfer file: https://oss.sonatype.org/service/local/staging/deploy/maven2/nz/co/senanque/madura-bundles/4.5.6/madura-bundles-4.5.6.pom.
Return code is: 401, ReasonPhrase: Unauthorized.
It looks like I have not set up my Sonatype credentials correctly. But I have set them up the same way as I did for the other two projects. Specifically, I go into Nexus Sonatype, get my Access User Token, and add the values to my environment (SONATYPE_USERNAME and SONATYPE_PASSWORD; I deleted both of these and re-entered them in case it was a typo). I also add references to them in my local Maven settings file:
...
<server>
<id>ossrh</id>
<username>${env.SONATYPE_USERNAME}</username>
<password>${env.SONATYPE_PASSWORD}</password>
</server>
...
The local Maven settings file is a file in my project, and the Maven commands in .travis.yml refer to it. The .travis.yml file has a deploy section identical to the other two (working) projects, except that I have been adding extra bits to try to make it work. But none of the differences there look relevant. The working deploys look like this:
deploy:
provider: script
script: "mvn versions:set -DnewVersion=${TRAVIS_TAG} && mvn clean deploy -B -U -P release --settings travis/settings.xml"
on:
tags: true
so this is only going to kick off if the repo has been tagged and it uses the tag as the version number. In the other projects this works fine, but not in the one I'm trying to get working. The tag does trigger the deploy as it should, but the deploy fails.
Does anyone know why I get the deploy on one project but not another? Thanks for any help.
Okay, I figured it out. The problem is that the parent POM of the failing project does not have a release profile, while the parent POM of the working projects does have one. The release profile in both cases looks like this:
<profile>
<id>release</id>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-gpg-plugin</artifactId>
<version>1.5</version>
<executions>
<execution>
<id>sign-artifacts</id>
<phase>verify</phase>
<goals>
<goal>sign</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.sonatype.plugins</groupId>
<artifactId>nexus-staging-maven-plugin</artifactId>
<version>1.6.3</version>
<extensions>true</extensions>
<configuration>
<serverId>ossrh</serverId>
<nexusUrl>https://oss.sonatype.org/</nexusUrl>
<autoReleaseAfterClose>true</autoReleaseAfterClose>
</configuration>
</plugin>
</plugins>
</build>
</profile>
This profile is needed to sign the generated artifacts (JAR files, Javadoc files, etc.) with the GPG plugin and to deploy them to Nexus. Without it the deploy to Nexus is still attempted, but because there is no reference to the ossrh serverId, Maven doesn't pick up the credentials from the settings file, and I get an authorization error.
The release profile needs to be on the parent project and all the module projects. I had added it to the modules but forgot the parent.
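For completeness, the id matching is the crux here: credentials in settings.xml are looked up by the id of whatever the build deploys through, whether that is a distributionManagement repository or the staging plugin's serverId. A typical OSSRH distributionManagement block looks like this (these are the standard OSSRH URLs, shown for illustration):
<distributionManagement>
<snapshotRepository>
<!-- the id must match a <server> id in settings.xml -->
<id>ossrh</id>
<url>https://oss.sonatype.org/content/repositories/snapshots</url>
</snapshotRepository>
<repository>
<id>ossrh</id>
<url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url>
</repository>
</distributionManagement>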
Our organization has a convention for naming RPMs. Typically, the RPM will have a shorter base name than the Maven project. There is also a convention around how releases are named. So we want a name like
${shortname}-${project.version}-${release}.noarch.rpm.
I want to build such RPMs using the rpm-maven-plugin rather than older nmake technology.
And this is easily accomplished using the plugin's parameters. The rpm generated in the target directory has the desired name.
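Something like this, to sketch the idea: the parameter names are the rpm-maven-plugin's own, while the plugin version and group value are illustrative, and ${rpm.name}/${rpm.release} are the properties reused in the install-file configuration further down.
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>rpm-maven-plugin</artifactId>
<version>2.1.5</version>
<configuration>
<!-- short RPM name per our convention, instead of the artifactId -->
<name>${rpm.name}</name>
<!-- our release naming convention -->
<release>${rpm.release}</release>
<group>Applications/System</group>
</configuration>
</plugin>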
However, when mvn install installs this RPM into the Maven repository, it insists on storing it the "Maven way": ${project.artifactId}-${project.version}.rpm.
I want the RPM stored in the standard Maven repository directory using the name that is initially created on package.
How may I accomplish this?
Update:
I tried using the maven-install-plugin (install-file goal) and did not get the results I was after. That was partly because I wasn't invoking it properly: it wasn't being invoked at all. If you define an install-file goal, it must be explicitly tied to the install phase. Doing so, i.e. adding <phase>install</phase> to the execution, at least invoked the install I wanted, but it still did not let me name the RPM as desired.
According to Karl Heinz Marbaise, a committer on the Apache Maven Project, what I am trying to do is impossible, and should not be attempted.
However, I need what I need, and I have found a compromise that gives me most of it. The only thing I had to sacrifice was the assumption that the repository RPM must live in the same repository directory as the rest of the project. This is a very minor sacrifice. Once I make it, I can store the RPM, named as I want it to be, in a Maven repository directory named with the short name.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-install-plugin</artifactId>
<version>2.5.2</version>
<executions>
<execution>
<id>install-rpm</id>
<goals>
<goal>install-file</goal>
</goals>
<phase>install</phase>
<configuration>
<file>${project.build.directory}/rpm/${rpm.name}/RPMS/noarch/${rpm.name}-${project.version}-${rpm.release}.noarch.rpm</file>
<groupId>${project.groupId}.rpms</groupId>
<artifactId>${rpm.name}</artifactId>
<version>${project.version}-${rpm.release}</version>
<classifier>noarch</classifier>
<packaging>rpm</packaging>
</configuration>
</execution>
</executions>
</plugin>
Using a groupId of ${project.groupId}.rpms rather than just ${project.groupId} allows all RPMs built this way to live outside the main branch of the repository, which is useful in our situation.
Using a version of ${project.version}-${rpm.release} rather than just ${project.version} allows the release to be incorporated into the name.
And the noarch classifier gets that into the name as well.
I am currently working on a multi-module project with the following structure.
root
-module A
-module B
What I want to do is execute module B (the main method of the module) after module B has been compiled (module B depends on module A). But I need to do this with a customized command, e.g.:
mvn runb
I know that the exec maven plugin can be used to run a project using Maven. What I don't understand is how to create a custom command (phase) in Maven. Is there any way to achieve this without writing a Maven plugin?
I referred to various guides such as https://community.jboss.org/wiki/CreatingACustomLifecycleInMaven trying to achieve this. But they require creating components.xml and lifecycle.xml files under src/resources/META-INF. I don't understand how to apply that file structure to my project, since it is a multi-module project where each module has separate src directories.
(I'm using Maven 3.)
You cannot create a custom lifecycle without writing a Maven plugin.
And without hacking Maven itself, at least as of Maven 3.0.5, it is not possible to add a custom phase to Maven through a plugin. The phases are loaded up by the core of Maven from its configuration before any plugins are processed.
If you really have your heart set on using one command to do what you want, writing a plugin is the only way. With some pluginGroup mappings in your settings.xml, this can be made simpler (you can specify mvn my:plugin rather than mvn com.mygroupid:plugin).
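For reference, that mapping goes in your settings.xml and looks like this (reusing the com.mygroupid placeholder from above):
<settings>
<pluginGroups>
<!-- lets you run mvn my:plugin instead of mvn com.mygroupid:plugin -->
<pluginGroup>com.mygroupid</pluginGroup>
</pluginGroups>
</settings>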
But if you are willing to have a slightly more verbose syntax on the command line, what you want could be achieved through profiles and the exec maven plugin.
Add a profile to module B that uses the exec plugin to run itself.
Something like this:
<project>
...
<profiles>
<!-- This profile runs module B's main class as part of the build -->
<profile>
<id>execb</id>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.2.1</version>
<executions>
<execution>
<id>runb</id>
<goals>
<goal>java</goal>
</goals>
<phase>verify</phase> <!-- Anything after package phase -->
<configuration>
<!-- Exec plugin configuration goes here -->
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
</project>
You'll need to configure the exec plugin depending on how you run your JAR; see the exec-maven-plugin documentation for details.
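For a plain main method, the configuration placeholder above might be filled in like this (com.example.ModuleBMain is a stand-in for module B's actual main class):
<configuration>
<!-- the class whose main method the exec:java goal invokes -->
<mainClass>com.example.ModuleBMain</mainClass>
</configuration>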
What this does is run the exec plugin as part of module B's build, but only if the execb profile is activated.
Now, when you just want to build your project (without any exec), build like normal (e.g. mvn install).
When you want to build and run, use the command line:
mvn install -Pexecb
and it will do the exec as part of the build.
I have a multi-module Maven project. The project layout is described below:
PARENT
|-CHILD1
|-CHILD2
The PARENT project has pom packaging and declares the CHILD1 and CHILD2 projects as modules. The PARENT project also declares a dev profile, which defines some property.
The CHILD1 project has jar packaging and "overrides" the PARENT dev profile by adding some dependency (a dependency on commons-collections, for example).
The CHILD2 project has war packaging and depends on the CHILD1 project. CHILD2 also "overrides" the parent dev profile by adding another dependency (a dependency on commons-io, for example; I mean a dependency that is not related to the one in project CHILD1).
Then, when I run mvn clean install -Pdev, Maven doesn't put commons-collections.jar (the dependency declared in the CHILD1 project) into WEB-INF/lib of the CHILD2 project, but commons-io.jar is there.
So, the question is: why doesn't Maven pick up the dependencies from profiles declared in the projects that the target project depends on, when the target project declares another set of dependencies in that same profile?
Actually, I have many more projects and many more dependencies that vary between profiles. And I want to declare project-specific dependencies in each project's pom.xml (supposing that declaring the profile in a project will "override" the parent profile declaration).
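For illustration, the profile "override" in each child looks roughly like this (a sketch of CHILD1; the version is just an example):
<!-- CHILD1/pom.xml -->
<profiles>
<profile>
<id>dev</id>
<dependencies>
<!-- dependency that should only apply in the dev environment -->
<dependency>
<groupId>commons-collections</groupId>
<artifactId>commons-collections</artifactId>
<version>3.2.1</version>
</dependency>
</dependencies>
</profile>
</profiles>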
I am assuming that you want to be able to test locally when developing, test your changes against a staging environment and finally deploy to production.
The critical thing to keep in mind is that when an artifact gets deployed to the local/remote repository, the active profiles are not part of what gets deployed. So when you add dependencies via profiles, things become very dangerous: you have no way of knowing whether the webapp was built with the DEV profile or the PROD profile active, and when that built artifact gets deployed into production you could be royally screwed over.
So the short of this is: ensure that your artifacts are independent of the deployment environment.
This means that, for example, you will pick up configuration from:
files on the classpath
system properties
JNDI entries
So for example, if deploying to Tomcat, you might put a configuration.properties into $CATALINA_HOME/lib
Your webapp on startup will use getClass().getResource("/configuration.properties") to resolve the properties file, and fail to start up if the file is missing (fail-fast).
You can let your unit/integration tests use a different config by putting a test version of configuration.properties in src/test/resources.
You use the same principle for the <scope>provided</scope> style dependencies of your application. In other words, a dependency that the container is contracted with providing should be provided by the container. So you might build the production version of Tomcat/Jetty for yourself using Maven as well, and add the required dependencies into that assembly. For example, if the production version uses a MySQL database, you need to add the mysql-jdbc driver into $CATALINA_HOME/lib. It is relatively easy to do this with the assembly plugin, as you are really just repacking a zip with some bits included and others excluded.
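For example, the repack can be driven by a small assembly descriptor along these lines (a sketch; the descriptor id, format, and include are illustrative):
<!-- src/main/assembly/tomcat-prod.xml -->
<assembly>
<id>tomcat-prod</id>
<formats>
<format>zip</format>
</formats>
<dependencySets>
<dependencySet>
<!-- place container-provided jars such as the MySQL driver into Tomcat's lib directory -->
<outputDirectory>lib</outputDirectory>
<includes>
<include>mysql:mysql-connector-java</include>
</includes>
</dependencySet>
</dependencySets>
</assembly>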
When testing locally, you will want to make use of the helper plugins' run goals, such as jetty:run and tomcat:run. There is nothing wrong with giving these plugins dependencies via profiles, because you are not affecting the dependencies of the artifact; you are only affecting the plugin's classpath.
e.g.
<project>
<!-- ... some stuff .. -->
<profiles>
<profile>
<id>DEV</id>
<build>
<plugins>
<plugin>
<groupId>org.mortbay.jetty</groupId>
<artifactId>jetty-maven-plugin</artifactId>
<dependencies>
<dependency>
<groupId>commons-dbcp</groupId>
<artifactId>commons-dbcp</artifactId>
<version>1.4</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.18</version>
</dependency>
</dependencies>
</plugin>
</plugins>
</build>
</profile>
</profiles>
</project>
You can also configure system properties or classpath additions to pull in the required configuration file.
The net result of all this is that the artifact remains environment-independent, and you can test easily against the various environments.
Hope this answers your question (even if sideways).
I'm using the Flyway Maven plugin to migrate a database:
<build>
[...]
<plugin>
<groupId>com.googlecode.flyway</groupId>
<artifactId>flyway-maven-plugin</artifactId>
<version>1.6.1</version>
<configuration>
<user>${hibernate.connection.username}</user>
<password>${hibernate.connection.password}</password>
<driver>${driver}</driver>
<url>${url}</url>
</configuration>
</plugin>
</build>
I have three environments (dev, pre, pro) and a profile for each. Every environment sets its own properties, so I can run Flyway with a concrete profile and make the DB migrations I want.
Flyway has a clean goal; this goal drops all objects in the schema without dropping the schema itself.
Is there some way to deactivate this goal in just one of my profiles? (In prod, obviously :P)
You can override the call of the Flyway plugin in the prod profile using the none phase: http://thomaswabner.wordpress.com/2010/02/16/howto-disable-inherited-maven-plugin/
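A minimal sketch of that technique, assuming the clean goal is bound elsewhere in the build to an execution with id clean-db (the execution id is illustrative and must match the one you are overriding):
<profile>
<id>pro</id>
<build>
<plugins>
<plugin>
<groupId>com.googlecode.flyway</groupId>
<artifactId>flyway-maven-plugin</artifactId>
<executions>
<execution>
<id>clean-db</id>
<!-- re-bind the inherited execution to the nonexistent "none" phase so it never runs -->
<phase>none</phase>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>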