We are using Jenkins for CI and have some complex, interdependent jobs. I just want to represent the dependencies between jobs graphically, or in some other way. I investigated and installed a dependency graph plugin with Graphviz rendering, but it produces figures that are too complex. Does anyone know of any other plugins?
Consider some view plugins:
https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin (shows you dependency chains)
https://wiki.jenkins-ci.org/display/JENKINS/Delivery+Pipeline+Plugin (shows you dependency chains that affected previous runs)
Jenkins recently added first-class support for workflow, written and supported by Kohsuke.
I use Weld in a Java SE application, and I was wondering whether generating the Jandex index at build time with a plugin would improve startup.
So far, I haven't noticed a performance improvement.
But I found two plugins for generating the index:
https://github.com/smallrye/jandex
and
https://github.com/wildfly/jandex-maven-plugin
Any idea which one to use?
Thanks!
So far, I haven't noticed a performance improvement.
This would only be noticeable for large deployments. The reasoning behind it is that having a Jandex index makes it possible to skip the discovery phase via reflection and instead allow Weld to browse a pre-built index.
That being said, there is no harm in using Jandex even on smaller deployments, it's just that the difference won't really be noticeable.
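For a sense of what these build-time plugins actually produce, here is a small sketch using the plain Jandex API (written in Groovy, which reads close to Java); the target/classes paths are illustrative assumptions, and either plugin effectively automates this step during the build:

```groovy
// Sketch of what the Jandex plugins do at build time: walk the compiled
// classes, index their metadata, and write the index to
// META-INF/jandex.idx so Weld can read it instead of scanning via
// reflection. The paths below are illustrative assumptions.
import org.jboss.jandex.Indexer
import org.jboss.jandex.IndexWriter

def indexer = new Indexer()
new File('target/classes').eachFileRecurse { f ->
    if (f.name.endsWith('.class')) {
        f.withInputStream { indexer.index(it) } // record the class's metadata
    }
}
def index = indexer.complete()

new File('target/classes/META-INF').mkdirs()
new File('target/classes/META-INF/jandex.idx').withOutputStream {
    new IndexWriter(it).write(index) // the pre-built index Weld picks up
}
```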
Any idea which one to use?
Short answer: as of July 2022, if you want the Maven plugin variant, use the WildFly one. If you want the core artifact, use the one with the org.jboss.jandex artifact group ID.
Longer answer: the SmallRye one is the original repo, recently migrated from https://github.com/wildfly/jandex to https://github.com/smallrye/jandex.
It holds the sources of what used to be the org.jboss.jandex artifact group ID (and is now io.smallrye). It also has a Maven plugin module, but there is no public release yet, and it will be a major version bump as well. Therefore, going forward, there will be an artifact group ID change, but otherwise it will retain the same artifact ID, and it will all be hosted in one repo.
I'm developing a private webapp in JSF which is available over the internet, and I have now reached a stage where I want to introduce CI (which I'm fairly new to) into the whole process. My current project setup looks like this:
myApp-persistence: maven project that handles DB access (DAOs and hibernate stuff)
myApp-core: maven project that includes all the Java code (beans and utils). It has a dependency on myApp-persistence.jar
myApp-a: maven project just with frontend code (xhtml, css, JS). Has a dependency on myApp-core.jar
myApp-b: maven project just with frontend code (xhtml, css, JS). Has a dependency on myApp-core.jar
myApp-a and myApp-b are independent of each other; they are just different instances of the core for two different platforms and only display certain components differently or call different bean methods.
Currently I'm deploying manually, i.e. I use the Eclipse built-in export-as-WAR function and then manually upload the WAR to the deployments dir of my WildFly server on prod. I'm using BitBucket for version control and just recently discovered Pipelines in BitBucket, and I implemented one for each repository (every project is a separate repo). Now myApp-persistence builds perfectly fine because all its dependencies are available from the public Maven repo, but myApp-core (and hence myApp-a and myApp-b, too) fails, of course, because myApp-persistence isn't published on the central Maven repo.
Is it possible to tell BitBucket somehow to use the myApp-persistence.jar in the corresponding repo on BitBucket?
If yes, how? And can I also tell BitBucket to deploy directly to prod if the build, including tests, succeeds?
If no, what would be a best practice for doing that? I was thinking of using a second dev server (already available, so no big deal) as a CI server, but then I would still need some advice or recommendations on which tools (Jenkins, Artifactory, etc.) to use.
One important note, maybe: I'm the only person working on this project, so this might seem like overkill, but for me the process of setting this up is valuable experience. That said, I'm not necessarily looking for the quickest solution but for the most professional and convenient one.
From my point of view, you can find the solution in this post: https://christiangalsterer.wordpress.com/2015/04/23/continuous-integration-for-pull-requests-with-jenkins-and-stash/. It guides you step by step through setting everything up. The post is from 2015, but the process and the idea are still the same. Hope it helps.
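If you do go the Jenkins route mentioned in the question, a minimal declarative Jenkinsfile along these lines would cover the build-order problem. The stage layout and Maven invocations are illustrative assumptions, and it presumes all four projects are checked out side by side in the workspace:

```groovy
// Minimal declarative pipeline sketch (assumptions: the four projects are
// checked out side by side, and Maven is available on the agent's PATH).
pipeline {
    agent any
    stages {
        stage('persistence') {
            steps {
                // 'install' puts myApp-persistence.jar into the agent's
                // local Maven repo so the downstream modules can resolve it
                dir('myApp-persistence') { sh 'mvn clean install' }
            }
        }
        stage('core') {
            steps { dir('myApp-core') { sh 'mvn clean install' } }
        }
        stage('frontends') {
            steps {
                dir('myApp-a') { sh 'mvn clean package' }
                dir('myApp-b') { sh 'mvn clean package' }
            }
        }
    }
}
```

Note that `mvn install` only works while all builds run on the same agent; a shared artifact repository such as Artifactory, which the question already mentions, is the cleaner long-term answer.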
I'm trying to get into Groovy scripting in Jenkins, but there seem to be no docs about this, and the API is split between the hudson and jenkins packages. I understand that Hudson is Jenkins' former name, and my first guess is that the devs didn't rename the old packages but used the new name for newer code, effectively creating a mess. Is this true, or am I missing something?
You are somewhat right. Jenkins originated from Hudson, so the old packages and classes were left as they were, for two obvious reasons:
To support the legacy classes and code written and used all over the world. Otherwise it would be difficult for every developer to change their code or to resort to things like @Deprecated or @SuppressWarnings.
It would take a lot of time to change/restructure all the classes for Jenkins. Even the Java community avoids such things and only restructures code when it is truly required, as it did with the Dictionary and Vector classes, or with the Collections framework during the generics implementation.
You can also refer to this page for some more answers: How to choose between Hudson and Jenkins?
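You can see the split directly in the script console. This small sketch (illustrative, not from the original answer) uses the newer jenkins.model.Jenkins entry point to iterate over jobs whose model class still lives under the legacy hudson.model package:

```groovy
// Script-console sketch showing the two package trees side by side:
// the entry point lives in the newer jenkins.* packages, while most
// model classes kept their original hudson.* names.
import jenkins.model.Jenkins
import hudson.model.Job

def jenkins = Jenkins.instance            // jenkins.* (the newer tree)
jenkins.getAllItems(Job).each { job ->    // hudson.model.Job (legacy tree)
    println "${job.fullName} -> ${job.lastBuild?.result}"
}
```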
I have a build that's currently using the old build flow plugin that I'm trying to convert to pipeline.
This build can be massively parallelized (many units of work can run on many different nodes) but we only want to extract the source code once at the beginning, preferably with the Pipeline script from SCM option. I'm at a loss to understand how I can share the source extract (which apparently is on the master) with all of the "downstream" nodes that will be used by the pipeline script.
For build flow we extracted to a well-known location on a shared file system, and all of the downstream jobs invoked by the flow were passed (or could derive) that location. That always felt icky, and I was hoping that pipeline would have solved this problem, but I can't find anything to suggest that it has. What am I missing?
I believe the official recommendation for this is to make bundles of the source and then use "stash" and "unstash" to make them available to deeper steps of your pipeline script.
See https://www.cloudbees.com/blog/parallelism-and-distributed-builds-jenkins
Keep in mind that this doesn't do anything to help with line endings. If you have builds that span OSes with different line endings, you either need to make OS-specific stashes or just check out to a safe label in each downstream step.
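A minimal scripted-pipeline sketch of that pattern might look like this; the node labels, stash name, and test scripts are illustrative assumptions, and `checkout scm` assumes the "Pipeline script from SCM" setup the question prefers:

```groovy
// Check out once, bundle the workspace, and fan the bundle out to the
// parallel nodes. Labels ('linux') and the stash name are assumptions.
node {
    checkout scm              // single checkout at the start
    stash name: 'sources'     // defaults to stashing the whole workspace
}

parallel 'unit-a': {
    node('linux') {
        unstash 'sources'     // fresh copy, no shared filesystem needed
        sh './run-tests.sh a'
    }
}, 'unit-b': {
    node('linux') {
        unstash 'sources'
        sh './run-tests.sh b'
    }
}
```

Both branches use the same OS label here, in keeping with the line-endings caveat above.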
After further research it seems like the External Workspace Manager Plugin does what I'm looking for.
Hi, I'm using Jenkins, but I need to ask: is it a code inspector?
Or does it have more functions that I can use? I can't find them, and this is important to me.
Jenkins itself is not a code inspector. It is more of a continuous integration server that provides a plugin API so that it can be extended to do different things. There are a lot of plugins available, so it really depends on your language and what you want your inspector to do (static analysis, code coverage, etc.).
A quick search turned up this page: Jenkins Plugins.
I have Jenkins installed (I don't have network access to it at the moment), and it has a plugins page you can visit to see the available plugins. That is probably the most up-to-date list.
We currently use Jenkins with a number of plugins to do builds and analysis on every commit. If you want to just run an analysis, you may want a standalone analysis tool.
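As an example of the plugin route, here is a minimal declarative pipeline sketch. The recordIssues step and the java()/checkStyle() tools come from the Warnings Next Generation plugin, which is assumed to be installed, and the Maven invocation is likewise an assumption:

```groovy
// Sketch: run the build, then collect static-analysis results with the
// Warnings Next Generation plugin (assumed installed). Assumes the Maven
// build produces compiler warnings and CheckStyle reports to pick up.
pipeline {
    agent any
    stages {
        stage('build') {
            steps { sh 'mvn clean verify' }
        }
    }
    post {
        always {
            recordIssues tools: [java(), checkStyle()]
        }
    }
}
```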