How to maintain a minimal/isolated classpath during Maven Surefire tests, when additional test dependencies are in use? - maven-3

Let's say I am developing an MQ client wrapper library W, based on an existing MQ library suite M. M has m-client and m-server artifacts, and my W uses m-client imported via Maven as a compile-scope dependency.
Now when I write e2e tests for W using Maven Surefire, I can easily add m-server as a test-scope dependency; spawn a temporary M server alongside my test; configure and run my W against it; and tear down the M server when done.
However at runtime, modules of the M suite are known to "interact" in "unpredictable" ways, varying their internal behavior depending on the M classes/resources currently available on the classpath.
For example, m-server has a hard dependency on another module m-extras, and m-client also has an optional dependency on m-extras; so m-client may behave differently when m-extras is present in the runtime/classpath.
W's intended/production usage environment will not have m-server or m-extras; just m-client pulled in by W. I would like to keep my e2e test runtime as close to the production environment as possible, i.e. free of the side effects caused by m-extras. However, because I use m-server which mandatorily pulls in m-extras, I currently have no way of isolating the (W, m-client) "runtime context" within my test (because the test has a flat classpath).
I currently see two workarounds:
Remove m-server from dependencies (so it and m-extras won't be present in W's test-runtime classpath, at all); and launch a separate process running M (m-server) for the duration of the test. This comes with the complexities of having to separately resolve a dependency graph for M, compose its classpath, and the whole process management overhead (plus the risk of leaving behind stray processes in case the test is force-stopped or crashes).
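The first workaround can be sketched as follows; this is a minimal sketch, in which the server classpath string and the org.example.MServerMain entry point are hypothetical (the classpath would have to be resolved separately, e.g. via the maven-dependency-plugin's build-classpath goal):

```java
import java.io.IOException;

public class ServerProcessSketch {

    // Launches the M server as a separate JVM with its own classpath, so that
    // m-server and m-extras never enter the Surefire test classpath.
    // Both arguments are hypothetical placeholders.
    static Process launchServer(String serverClasspath, String mainClass) throws IOException {
        Process p = new ProcessBuilder("java", "-cp", serverClasspath, mainClass)
                .inheritIO()
                .start();
        // Mitigate the stray-process risk: kill the server when this JVM exits.
        Runtime.getRuntime().addShutdownHook(new Thread(p::destroyForcibly));
        return p;
    }
}
```

Note that a shutdown hook only mitigates the stray-process risk; it will not run if the test JVM is killed hard.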
Within the test, spawn M using a separate classloader that has m-server (and its tree) on its classpath (or do the same for W); still, this requires resolving M's/W's dependency graph to compose the loader's classpath. I looked at the environment and system properties available within a Surefire-driven test to see if Maven/Surefire exposes the test- and compile-scoped libraries in a "separable" way (so that I could launch either W (client) or M (server) with its own constrained classloader), but could not find anything helpful. The test's classpath is "flatly resolved", with compile and test dependencies mixed together, so splitting the main classpath into a "compile context" and a "test context" also does not seem to be possible.
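The second workaround hinges on parent selection: a URLClassLoader constructed with a null parent delegates only to the bootstrap loader, so the flat test classpath becomes invisible to it. A minimal sketch of the isolation mechanism (the jar array is a placeholder for M's resolved dependency tree):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class IsolatedLoaderSketch {

    // Returns true if className is NOT visible through an isolated loader
    // whose parent is the bootstrap loader, i.e. the flat test classpath
    // is hidden from it.
    static boolean hiddenFromIsolatedLoader(String className) throws Exception {
        URL[] serverJars = new URL[0]; // placeholder: m-server's resolved jars would go here
        try (URLClassLoader loader = new URLClassLoader(serverJars, null)) {
            loader.loadClass(className);
            return false;
        } catch (ClassNotFoundException e) {
            return true;
        }
    }
}
```

With m-server's jars in serverJars, the server would see only the bootstrap classes plus M's own tree, never the test classpath; the hard part (as the question says) remains resolving that jar list.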
What would be the proper way to compose and run my test, so that the server dependencies (esp. m-extras) will be transparent to the client - while keeping things simple, stable and maintainable?

Related

Paket + FAKE + swapping dependencies in CI tool

I'm messing about with FAKE and Paket (on F#) and Jenkins; I'm not really sure I know what I'm doing, but I know what I WANT to do.
The short description is I want the build server to build a whole family of related services against a referenced package, but the package comes in different flavours (but share the same basic namespace/module names).
The long description: I have a family of services that sit on top of an external API, i.e. they all reference some external package and access it through modules etc. For example:

ServiceA.fsproj
...
let f (x : ExternalApi.Foo) = ....

ServiceB.fsproj
...
let g (x : ExternalApi.Foo) = ....
The developer will probably develop against the most common flavour, let's say ExternalApiVanilla.
The developer will be using Paket, and Fake for build tools, and Jenkins.
When the code is checked in, though, I want the build server to attempt to build it against the vanilla flavour... but also against chocolate, strawberry and banana.
The flavours are not "versions" in the sense of a version number; they are distinct products with their own NuGet packages. So I think (somehow) I want to parameterise a Jenkins folder (containing all the jobs) with the name of the API package, pass that into the build script, and then get the build script to swap out whatever the engineer has referenced and reference the parameter instead.
Of course some compilations will fail; we have to develop different variants of services to handle some of the variants of the API. But 90% of our stuff works on all versions, and we just need an automated way to check the build, then create new variants of services and jobs to handle the rest.
As an aside, we are doing some things with C# and Cake/NuGet, controlling the versioning by passing the NuGet folder in and forcing the build to find specific versions of one flavour. I understand this, though I wouldn't be able to write it, but I want to go one step further and replace the reference itself with a different one.
I'll try looking at the paket.dependencies/paket.references files in the build script, removing the existing reference and adding the Jenkins-defined one from a shell with Paket, and see what happens. I don't especially like it; I'm dependent on the format of these files, and I was hoping this would be mainstream.
I have solved this, at least in the context of cake + nuget (and the same solution will apply), by simply search replacing the package reference (using XDocument) in the cake script with a reference parameter set up in the job parameters.
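The answer used C#'s XDocument, but the same search-and-replace translates to any DOM API. A sketch in Java for illustration (the PackageReference element and Include attribute assume a NuGet-style project file; the flavour names are hypothetical):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class ReferenceSwapSketch {

    // Rewrites every <PackageReference Include="from" ...> to Include="to".
    // Element/attribute names are assumptions about the project file format.
    static String swapPackageReference(String projectXml, String from, String to) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(projectXml)));
        NodeList refs = doc.getElementsByTagName("PackageReference");
        for (int i = 0; i < refs.getLength(); i++) {
            Element ref = (Element) refs.item(i);
            if (from.equals(ref.getAttribute("Include"))) {
                ref.setAttribute("Include", to);
            }
        }
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }
}
```

The Jenkins job parameter would supply the "to" value, so each flavour build rewrites the reference before compiling.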
I'll now implement it in the FAKE version of this build, though I may simply drop Paket altogether.

Ant: Is it possible to create a dynamic ant script?

So, at work, I frequently have to create virtually identical ant scripts. Basically the application we provide to our clients is designed to be easily extensible, and we offer a service of designing and creating custom modules for it. Because of the complexity of our application, with lots of cross dependencies, I tend to develop the module within our core dev environment, compile it using IntelliJ, and then run a basic ant script that does the following tasks:
1) Clean build directory
2) Create build directory and directory hierarchy based on package paths.
3) Copy class files (and source files to a separate sources directory).
4) Jar it up.
The thing is, to do this I need to go through the script line by line and change a bunch of property names, so it works for the new use case. I also save all the scripts in case I need to go back to them.
This isn't the worst thing in the world, but I'm always looking for a better way to do things. Hence my idea:
For each specific implementation I would provide an ant script (or other file) of just properties. Key-value pairs, which would have specific prefixes for each key based on what it's used for. I would then want my ant script to run the various tasks, executing each one for the key-value pairs that are appropriate.
For example, copying the class files. I would have a property with a name like "classFile.filePath". I would want the script to call the task for every property it detects that starts with "classFile...".
Honestly, from my current research so far, I'm not confident that this is possible. But... I'm super stubborn, and always looking for new creative options. So, what options do I have? Or are there none?
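The prefix-driven iteration described above is in fact possible with the ant-contrib tasks. A sketch, assuming ant-contrib is on Ant's classpath and using hypothetical property and directory names:

```xml
<!-- Requires ant-contrib: <taskdef resource="net/sf/antcontrib/antlib.xml"/> -->

<!-- Collect the names of all properties starting with "classFile." -->
<propertyselector property="classfile.keys"
                  match="classFile\..*"/>

<!-- Run the copy once per matching property name -->
<for list="${classfile.keys}" param="key">
  <sequential>
    <!-- @{key} is the property name, e.g. classFile.filePath;
         ${@{key}} expands to that property's value -->
    <copy file="${@{key}}" todir="${build.dir}/classes"/>
  </sequential>
</for>
```

Whether this beats a handful of copied scripts is debatable; as the answer below notes, Ant is not a programming language, and this pattern pushes it close to one.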
It's possible to dynamically generate Ant scripts; for example, the following does this using an XML input file:
Use pure Ant to search if list of files exists and take action based on condition
Personally I would always try and avoid this level of complexity. Ant is not a programming language.
Looking at what you're trying to achieve it does appear you could benefit from packaging your dependencies as jars and using a Maven repository manager like Nexus or Artifactory for storage. This would simplify each sub-project build. When building projects that depend on these published libraries you can use a dependency management tool like Apache ivy to download them.
Hope that helps; your question is fairly broad.

How to functionally test dependent Grails applications

I am currently working on a distributed system consisting of two Grails apps (3.x), let's call them A and B, where A depends on B. For my functional tests I am wondering: How can I automatically start B when I am running the test suite of A? I am thinking about something like JUnit rules, but I could not find any docs on how to programmatically start/manage Grails apps.
As a side note, for nice and clean IDE integration I do not want to launch B as part of my build test phase, but as part of my test suite setup.
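In the absence of a documented API for programmatically starting a Grails app from another app's test suite, one pragmatic sketch is to launch B as a separate process during suite setup and block until its port accepts connections. This is plain Java; the gradlew command, the ../app-b path, and the port are all assumptions:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class AppBSetup {

    // Starts app B in its own process. "./gradlew bootRun" and the
    // directory are hypothetical; adjust to how B is actually launched.
    static Process startAppB() throws IOException {
        Process p = new ProcessBuilder("./gradlew", "bootRun")
                .directory(new java.io.File("../app-b"))
                .inheritIO()
                .start();
        Runtime.getRuntime().addShutdownHook(new Thread(p::destroyForcibly));
        return p;
    }

    // Polls until host:port accepts TCP connections or the timeout elapses,
    // so tests only start once B is actually up.
    static boolean waitForPort(String host, int port, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 250);
                return true;
            } catch (IOException retry) {
                try { Thread.sleep(100); } catch (InterruptedException e) { return false; }
            }
        }
        return false;
    }
}
```

Calling startAppB() plus waitForPort("localhost", 8081, 60_000) from a JUnit @BeforeClass (or an ExternalResource rule), and destroying the process in teardown, keeps the launch inside the test suite setup rather than the build's test phase.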
A couple of months later, and more deeply into the topic of microservices, I would suggest not treating system tests as candidates for one single project. While I would still keep my unit- and service-level tests (i.e. API testing with mocked collaborators) in one project with the affected service, I would probably spin up a system landscape via Gradle and Docker and then run an end-to-end test suite in the form of UI tests.

Robot Framework use cases

Robot Framework is a keyword-based testing framework. I have to test a remote server, so I need to do some prerequisite steps like:
i) copy the artifact to the remote machine
ii) start the application server on the remote machine
iii) run the tests on the remote server
Before Robot Framework we did this using an Ant script. I can run only the test script with Robot. But can we do all of these tasks using Robot scripting, and if yes, what is the advantage of this?
Yes, you could do this all with robot. You can write a keyword in python that does all of those steps. You could then call that keyword in the suite setup step of a test suite.
I'm not sure what the advantages would be. What you're trying to do are two conceptually different tasks: one is deployment and one is testing. I don't see any advantage in combining them. One distinct disadvantage is that you then can't run your tests against an already deployed system. Though, I guess your keyword could be smart enough to first check if the application has been deployed, and only deploy it if it hasn't.
One advantage is that you have one less tool in your toolchain, which might reduce the complexity of your system as a whole. That means people can run your tests without first having installed ant (unless your system also needs to be built with ant).
If you are asking why you would use Robot Framework instead of writing a script to do the testing: the answer is that the framework provides all the metrics and reports you would otherwise have to script yourself.
Choosing a framework makes your entire QA easier to manage and saves you the effort of writing code for the parts that are common to the QA process, so you can focus on writing code to test your product.
Furthermore, since there is an ecosystem around the framework, you can probably find existing code to do just about everything you may need, and get answers to how to do something instead of changing your script.
Yes, you can do this with robot, decently easily.
The first two can be done easily with SSHLibrary, and the third one depends. Do you mean for the Robot Framework test case to be run locally on the other server? That can indeed be done with configuration files to define what server to run the test case on.
Here are the SSHLibrary keywords you can use for each step.
To copy the artifact to the remote machine:
Open Connection
Login or Login With Private Key
Put Directory or Put File
To start the application server on the remote machine:
Execute Command
To run the tests on the remote machine (assuming the setup is already there):
Execute Command (use pybot path_to_test_file)
You may experience connection losses, but once the tests are triggered they will keep running on the remote machine.

Multiple classifiers in Maven

Being a Maven newbie, I want to know if it's possible to use multiple classifiers at once; in my case it would be for generating different jars in a single run. I use this command to build my project:
mvn -Dclassifier=bootstrap package
Logically I would think that this is possible:
mvn -Dclassifier=bootstrap,api package
I am using Maven 3.0.4
Your project seems like a candidate for refactoring into a couple of what Maven calls "modules". This involves splitting the code into separate projects within a single directory tree, where the topmost level is normally a parent or aggregator POM with <packaging>pom</packaging> and a <modules/> list containing the sub-project directory names.
Then, I'd advise putting the API interfaces/exceptions/whatnot into an api/ subdirectory with its own pom.xml, and putting the bootstrap classes into a bootstrap/ subdirectory with its own pom.xml. The top-level pom.xml would then list the modules like this:
<modules>
<module>api</module>
<module>bootstrap</module>
</modules>
Once you've refactored the project, you will probably want to add a dependency from the bootstrap module to the api module, since I'm guessing the bootstrap will depend on interfaces/etc. from the api.
Now, you should be able to go into the top level of the directory structure and simply call:
mvn clean install
This approach is good because it forces you to think about how different use cases are supported in your code, and it makes dependency cycles between classes harder to miss.
If you want an example to follow, have a look at one of my github projects: Aprox.
NOTE: If you have many modules dependent on the api module, you might want to list it in the top-level pom.xml in the <dependencyManagement/> section, so you can leave off the version in submodule dependency declarations (see Introduction to the Dependency Mechanism).
UPDATE: Legacy Considerations
If you can't refactor the codebase for legacy reasons, etc. then you basically have two options:
Construct a series of pom.xml files in an empty multimodule structure, and use the build-helper-maven-plugin along with source includes/excludes to fragment the codebase and allocate the classes to different modules out of a single source tree.
Maybe use a plugin like the assembly plugin to carve up the target/classes directory (${project.build.directory}) and allocate classes to the different jars. In this scenario, each assembly descriptor requires an <id/> and by default this value becomes the classifier for that assembly jar. Under this plan, the "main" jar output will still be the monolithic one created by the Maven build. If you don't want this, you can use a separate execution of the assembly plugin, and in the configuration use <appendAssemblyId>false</appendAssemblyId>. If the output of that assembly is a jar, then it will effectively replace the old output from the jar plugin. If you decide to pursue this approach, you might want to read the assembly plugin documents to get as much exposure to different examples as you can.
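For option 2, a per-jar assembly descriptor might look roughly like this; the com/example/api package path is hypothetical, and the <id/> value becomes the jar's classifier:

```xml
<assembly>
  <!-- becomes the classifier: myproject-1.0-api.jar -->
  <id>api</id>
  <formats>
    <format>jar</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <fileSets>
    <fileSet>
      <directory>${project.build.outputDirectory}</directory>
      <outputDirectory>/</outputDirectory>
      <includes>
        <!-- hypothetical package path for the API classes -->
        <include>com/example/api/**</include>
      </includes>
    </fileSet>
  </fileSets>
</assembly>
```

One such descriptor per desired jar, each carving a different package subtree out of target/classes, reproduces the multi-classifier output within a single build.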
Also, I should note that in both cases you would be stuck with manipulating the list of things produced by using a set of profiles in the pom in order to turn on/off different parts of the build. I'd highly recommend making the default, un-qualified build one that produces everything. This makes it more likely for things like the release plugin to catch everything you want to release, and up-rev versions, etc. appropriately.
These solutions are usually what I promote as migration steps when you can't refactor the codebase all at once. They are especially useful when migrating from something like an Ant build that produces multiple jars out of a single source tree.
