Option to auto-generate Dockerfile and other deployment tooling in IntelliJ? - docker

I'm exploring the initial steps of containerising a Tomcat-based Java project using Docker. With IntelliJ as my preferred IDE, I have successfully:
written a proof-of-concept Servlet;
set up a build artefact to create the resulting WAR;
with the IntelliJ Docker plugin and one of the official Tomcat Docker images, set up a container configuration that includes the WAR contents as one of its mount points;
deployed the container to Docker locally through IntelliJ and confirmed that I can successfully hit the Servlet through my local browser.
So in terms of the basic development cycle, I'm up and running.
But when I eventually come to external deployment (and even at some point during the development process), I will need to add libraries and resources and generate a truly self-contained container. In other words, I will need to go from the simple "image with mount points" deployment that the IntelliJ plugin currently performs to a full-fledged Dockerfile with all the relevant configuration specified, including my mounts effectively being translated into instructions to copy the relevant content into the image.
Now my question: how do people generally achieve this? Is there tooling built into IntelliJ that will assist with this? In the container deployment configuration settings in IntelliJ (where the mount points, base image etc. are specified), there doesn't seem to be an option to configure resources to copy, for example (or an option to "copy into standalone container rather than mount from host FS"). Am I missing a tool/option somewhere, or is the writing of the Dockerfile essentially a manual process? Or am I just barking up the wrong tree with my whole approach? I'd appreciate any advice on the process that people generally use for this!

Jib by Google
I think Jib would provide what you need. It provides plugins for both Maven and Gradle, and the respective plugin can be triggered in IntelliJ via a Run/Debug configuration (see the example at the very bottom).
What is Jib?
Jib builds optimized Docker and OCI images for your Java applications without a Docker daemon - and without deep mastery of Docker best-practices. It is available as plugins for Maven and Gradle and as a Java library.
What does Jib do?
Jib handles all steps of packaging your application into a container image. You don't need to know best practices for creating Dockerfiles or have Docker installed.
Jib organizes your application into distinct layers; dependencies, resources, and classes; and utilizes Docker image layer caching to keep builds fast by only rebuilding changes. Jib's layer organization and small base image keeps overall image size small, which improves performance and portability.
Configuration
You can check the documentation. It contains a lot of information about the different configuration options for creating and deploying a Docker image, where you can also simply make use of environment variables.
For projects with Maven:
https://github.com/GoogleContainerTools/jib/tree/master/jib-maven-plugin
Build your image
Build to Docker daemon
Build an image tarball
For projects with Gradle:
https://github.com/GoogleContainerTools/jib/tree/master/jib-gradle-plugin
Same options as for Maven
Regarding your question, check this part, for example: Adding Arbitrary Files to the Image
In the container deployment configuration settings in IntelliJ (where the mount points, base image etc are specified), there doesn't seem to be an option to configure resources to copy, for example (or an option to "copy into standalone container rather than mount from host FS").
Demo
For demonstration purposes, I've created a simple project with Maven, where I also used the base image tomcat:9.0.36-jdk8-openjdk, which is optional by the way (see Jib WAR Projects):
Servlet:
import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = {"/hello-world"})
public class HelloWorld extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("Hello World");
    }
}
pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.example</groupId>
    <artifactId>mvn-jib-example</artifactId>
    <version>1.0</version>
    <packaging>war</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <failOnMissingWebXml>false</failOnMissingWebXml>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>javax.servlet-api</artifactId>
            <version>4.0.1</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>

    <build>
        <finalName>servlet-hello-world</finalName>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>com.google.cloud.tools</groupId>
                <artifactId>jib-maven-plugin</artifactId>
                <version>2.5.0</version>
                <configuration>
                    <allowInsecureRegistries>true</allowInsecureRegistries>
                    <from>
                        <image>tomcat:9.0.36-jdk8-openjdk</image>
                    </from>
                    <to>
                        <image>registry.localhost/hello-world</image>
                        <auth>
                            <username>registry_username</username>
                            <password>registry_password</password>
                        </auth>
                        <tags>
                            <tag>latest</tag>
                        </tags>
                    </to>
                    <container>
                        <appRoot>/usr/local/tomcat/webapps/ROOT</appRoot>
                    </container>
                    <extraDirectories>
                        <paths>
                            <path>
                                <from>./src/main/resources/extra-stuff</from>
                                <into>/path/in/docker/image/extra-stuff</into>
                            </path>
                            <path>
                                <from>/absolute/path/to/other/stuff</from>
                                <into>/path/in/docker/image/other-stuff</into>
                            </path>
                        </paths>
                    </extraDirectories>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
Executing the following goals will create the Docker image on the fly:
mvn clean package jib:dockerBuild
Confirm that the image was created:
docker image ls
Start a container from the image:
docker run --rm -p 8082:8080 registry.localhost/hello-world:latest
Result:
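Assuming the port mapping above and the /hello-world URL pattern from the demo servlet, you can verify from the host:
curl http://localhost:8082/hello-world
This should print Hello World.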
Deployment
To deploy the image to an external docker registry, you can check the sections below:
Authentication Methods
Using Specific Credentials
IDE
Last but not least, since you are working with IntelliJ IDEA, you can simply create Run/Debug configurations to automate image creation and deployment at the push of a button, e.g. one configuration for building the image, one for deploying it to localhost, one for deploying to an external registry, and so on.
Here is an example of such a Run/Debug configuration for Maven:

The project that I am working on right now uses Spring Boot, which has Tomcat embedded. I use the Docker Gradle plugin (https://plugins.gradle.org/plugin/com.bmuschko.docker-spring-boot-application) to build and push the Docker image to a registry, which can be Docker Hub or AWS ECR. The combination plays well with IntelliJ, as it is just a Gradle task anyway.
Because it is Spring Boot, the plugin can build the image on top of any basic JRE image (I use https://hub.docker.com/_/adoptopenjdk) with minimal configuration. You do not need to write your own Dockerfile at all.
docker {
    def registryHost = 'xxx.dkr.ecr.us-west-2.amazonaws.com'
    springBootApplication {
        baseImage = "${registryHost}/caelus:springboot-jdk14-openj9"
        images = ["${registryHost}/caelus:app"]
        ports = [8080, 8081]
        jvmArgs = ['-Djdk.httpclient.allowRestrictedHeaders=content-length']
    }
}
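With this configuration in place, building and pushing the image should come down to running the tasks the plugin provides; a sketch (task names per the plugin's documentation):
./gradlew dockerBuildImage
./gradlew dockerPushImage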

I recommend these questions:
How to use docker in the development phase of a devops life cycle?
How to deploy java application in a cloud instance from the scratch to an advanced architecture?
What code-repository should the Dockerfile get committed to?
As a summary, IntelliJ, Eclipse and Visual Studio are just IDEs, so they are not an option for deployment to real environments.
If you are talking about external deployment, you need somewhere to store your Docker images (a registry) and, at minimum, a continuous integration server (Jenkins, Travis, Bamboo, Circle CI, buddy.works).
Basic Flow
An architect, sysadmin, senior developer, or someone else with infrastructure skills must create the Dockerfile and other required files.
The developer does not need to worry about Docker, volumes, ports, etc. The developer only needs to develop code (Java in your case).
The developer performs a git push.
Your continuous integration server detects this event and starts the docker build.
After the build, the continuous integration server pushes the newly created Docker image to your Docker Hub repository.
Using some configuration, your continuous integration server knows where deployment is required (external deployment, as you say). A classic example would be the testing or staging environment; in this case, deployment is just the download of the requested Docker image.
If the Quality Assurance team and the automated tests confirm that everything is fine, your continuous integration server performs the last step: deploying the Docker image to your production environment, which again is just a Docker image download. A minimal pipeline sketch follows below.
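For illustration, here is a minimal declarative Jenkinsfile sketch of that flow; the image name, registry URL and credentials ID are hypothetical placeholders:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // The developer's code is built as usual; the Dockerfile lives in the repo.
                sh 'mvn clean package'
            }
        }
        stage('Docker build and push') {
            steps {
                script {
                    // Requires the Docker Pipeline plugin; 'myorg/myapp' and
                    // 'dockerhub-creds' are placeholder names.
                    def image = docker.build("myorg/myapp:${env.BUILD_NUMBER}")
                    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub-creds') {
                        image.push()
                    }
                }
            }
        }
        stage('Deploy to staging') {
            steps {
                // In this flow, deployment is just downloading the new image
                // on the target environment.
                sh "docker pull myorg/myapp:${env.BUILD_NUMBER}"
            }
        }
    }
}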
Your questions
is the scripting of the Docker file essentially a manual process?
As I explained, the Dockerfile is the cornerstone of all of this. Its creation is manual, and it can be fun or challenging when fine-grained, artisanal work is needed, for example:
Tomcat user configuration at container start
advanced Tomcat variable configuration
any advanced Tomcat configuration in which a human is required, but which you want to automate
A Java WAR inside tomcat/webapps is a very common requirement, so you will find a lot of Dockerfiles for it, or you could use the one generated by IntelliJ if it meets your requirements; a minimal sketch follows below.
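For reference, a minimal sketch of such a Dockerfile, reusing the base image and WAR name from the Jib demo above (adjust paths to your build):
# Base image tag taken from the Jib demo above; adjust as needed.
FROM tomcat:9.0.36-jdk8-openjdk

# Remove the default web applications shipped with the image (optional).
RUN rm -rf /usr/local/tomcat/webapps/*

# Copy the WAR produced by the Maven build in as the ROOT application;
# 'servlet-hello-world.war' matches the <finalName> from the demo pom.
COPY target/servlet-hello-world.war /usr/local/tomcat/webapps/ROOT.war

EXPOSE 8080
CMD ["catalina.sh", "run"]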
Feel free to contact me if you don't find a Dockerfile for your Java app.

Related

Use multiple tags when creating Docker image using Spring Boot Maven Plugin

I am using the Spring Boot Maven Plugin to create Docker images. They are tagged with latest, but I would like to have two tags added.
This is my current config:
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <image>
            <name>myacr.azurecr.io/${project.artifactId}</name>
        </image>
    </configuration>
</plugin>
I would like to have latest and a certain build number (which will come from the Azure DevOps pipeline).
Is this possible with the Maven plugin? I could not find any info in the docs about it.
Indeed, as the user pointed out, the Spring Boot Maven Plugin does not support multiple tags when creating a Docker image at the moment.
Add option to create tags for the built image
As a workaround, we could use another plugin, such as jib-maven-plugin:
-Djib.to.tags=a,b,c
You could check this thread for some more details.
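For example, a sketch of passing the build number in from the pipeline, assuming the predefined Azure DevOps variable Build.BuildNumber (jib:build pushes directly to the configured registry):
mvn compile jib:build -Djib.to.tags=latest,$(Build.BuildNumber)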

Q: How can I save an artifact into Nexus Repository using a groovy pipeline?

My question is about saving artifacts into a repository. Specifically, I am trying to upload artifacts and release versions to the Nexus Repository after the execution of a build pipeline for a Maven project (through Jenkins).
The only way I want to do this is with a pipeline written in Groovy, so that it integrates with Jenkins.
Note: I want the artifact version number to always be the same; if it changes, it should change dynamically (not manually).
Is there a command or code generally which enables me to do that?
You are on the wrong level; this should happen in Maven.
In pom.xml you need the following (more here):
<distributionManagement>
    <snapshotRepository>
        <id>nexus-snapshots</id>
        <url>http://localhost:8081/nexus/content/repositories/snapshots</url>
    </snapshotRepository>
</distributionManagement>
and then, in the plugins section:
<plugin>
    <artifactId>maven-deploy-plugin</artifactId>
    <version>2.8.2</version>
    <executions>
        <execution>
            <id>default-deploy</id>
            <phase>deploy</phase>
            <goals>
                <goal>deploy</goal>
            </goals>
        </execution>
    </executions>
</plugin>
and you should be able to just run mvn clean deploy from your pipeline.
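In the Groovy pipeline itself, that is then just a shell step; a minimal sketch, assuming Maven is available on the Jenkins agent:
stage('Deploy to Nexus') {
    steps {
        // Runs the maven-deploy-plugin bound above and uploads the artifacts
        // to the repository configured in <distributionManagement>.
        sh 'mvn clean deploy'
    }
}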
EDIT
There is another way, with the Nexus Artifact Uploader plugin:
nexusArtifactUploader {
    nexusVersion('nexus2')
    protocol('http')
    nexusUrl('localhost:8080/nexus')
    groupId('sp.sd')
    version("2.4.${env.BUILD_NUMBER}")
    repository('NexusArtifactUploader')
    credentialsId('44620c50-1589-4617-a677-7563985e46e1')
    artifact {
        artifactId('nexus-artifact-uploader')
        type('jar')
        classifier('debug')
        file('nexus-artifact-uploader.jar')
    }
    artifact {
        artifactId('nexus-artifact-uploader')
        type('hpi')
        classifier('debug')
        file('nexus-artifact-uploader.hpi')
    }
}
As @hakamairi already said, it is not recommended to re-upload artifacts with the same version to a Nexus repository; Maven is built around the idea that an artifact's GAV always corresponds to a unique artifact.
However, if you want to allow re-deployment, you need to set the deployment policy of the release repository to "allow redeploy"; then you can redeploy the same version. You cannot do that without allowing it on the repository side.
And for deploying to Nexus repo, you can use either Nexus Platform Plugin or Nexus Artifact Uploader.
ADDITIONAL SOLUTION THAT ALSO WORKS
I executed it manually and exported the result of the Nexus call. The result was the following command, which needs to be inserted into the Jenkins pipeline as Groovy code:
nexusPublisher nexusInstanceId: 'nexus',
    nexusRepositoryId: 'maven-play-ground',
    packages: [[$class: 'MavenPackage',
        mavenAssetList: [[classifier: '', extension: '', filePath: '**PATH_NAME_OF_THE_ARTIFACT**.jar']],
        mavenCoordinate: [artifactId: '**YOUR_CUSTOM_ARTIFACT_ID**', groupId: 'maven-play-ground', packaging: 'jar', version: '1.0']]],
    tagName: '**NAME_OF_THE_FILE_IN_THE_REPOSITORY**'
In the filePath field we need to insert the path and name of the artifact .jar file.
In the artifactId field we need to insert the custom artifact id (for my artifact in this case).
In the tagName field we need to insert the custom name of the directory inside the Nexus repository.
This solution runs automatically, without manual changes or edits. Once we have created the directory in the Nexus repository, this executes without any issue and without the need to change the version number.
Note: we also need to enable the redeploy feature in the Nexus repository settings.

travis-ci failing to deploy to sonatype

I've started using travis-ci to automate my builds. I have several open source projects and they all deploy to Nexus Sonatype, from where they go to Maven Central. They're all Java projects that use Maven to build and GitHub as a repo.
I've been doing this manually for years and I have appropriate keys and logins and my pom is compatible etc.
Implementing the first one was easy enough; it is a single-module project and it builds and deploys just fine. Then I did a second one, a multi-module project, and got that working in much the same way. My third, however, is baffling me.
The Maven build on this thing is a bit tricky, but it does run fine locally and I even have it running the actual build on Travis successfully. But the deploy doesn't work.
The problem is that when it tries to connect to nexus sonatype I get an authorisation error:
Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project madura-bundles:
Failed to deploy artifacts: Could not transfer artifact nz.co.senanque:madura-bundles:pom:4.5.6 from/to sonatype-nexus-staging (https://oss.sonatype.org/service/local/staging/deploy/maven2/):
Failed to transfer file: https://oss.sonatype.org/service/local/staging/deploy/maven2/nz/co/senanque/madura-bundles/4.5.6/madura-bundles-4.5.6.pom.
Return code is: 401, ReasonPhrase: Unauthorized.
It looks like I have not set up my Sonatype credentials correctly. But I have set them up the same way as for the other two projects. Specifically, I go into Nexus Sonatype, get my Access User Token, and add the values to my environment (SONATYPE_USERNAME and SONATYPE_PASSWORD; I deleted both of these and re-entered them in case it was a typo). I also add references to them in my local Maven settings file:
...
<server>
    <id>ossrh</id>
    <username>${env.SONATYPE_USERNAME}</username>
    <password>${env.SONATYPE_PASSWORD}</password>
</server>
...
The local Maven settings file is a file in my project, and the Maven commands in .travis.yml refer to it. The .travis.yml file has a deploy section identical to the other two (working) projects, except that I have been adding extra bits to try to make it work. But none of the differences there look relevant. The working deploys look like this:
deploy:
  provider: script
  script: "mvn versions:set -DnewVersion=${TRAVIS_TAG} && mvn clean deploy -B -U -P release --settings travis/settings.xml"
  on:
    tags: true
so this is only going to kick off if the repo has been tagged and it uses the tag as the version number. In the other projects this works fine, but not in the one I'm trying to get working. The tag does trigger the deploy as it should, but the deploy fails.
Does anyone know why I get the deploy on one project but not another? Thanks for any help.
Okay, I figured it out. The problem is that the parent pom of the failing project does not have a release profile, while the parent pom of the working project does. The release profile in both cases looks like this:
<profile>
    <id>release</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-gpg-plugin</artifactId>
                <version>1.5</version>
                <executions>
                    <execution>
                        <id>sign-artifacts</id>
                        <phase>verify</phase>
                        <goals>
                            <goal>sign</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.sonatype.plugins</groupId>
                <artifactId>nexus-staging-maven-plugin</artifactId>
                <version>1.6.3</version>
                <extensions>true</extensions>
                <configuration>
                    <serverId>ossrh</serverId>
                    <nexusUrl>https://oss.sonatype.org/</nexusUrl>
                    <autoReleaseAfterClose>true</autoReleaseAfterClose>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>
The profile is needed to sign the generated artifacts (jar files, javadoc files, etc.) with the GPG plugin and to deploy them to Nexus. Without it the deploy to Nexus is still attempted, but because there is no reference to the ossrh serverId, it doesn't pick up the credentials from the Maven settings file, and therefore I get an authorization error.
The release profile needs to be on the parent project and all the module projects. I had added it to the modules but forgot the parent.

How Jenkins plugins works

I have developed my own Jenkins plugin for the first time. The main purpose of the plugin is to publish a message to Google Cloud Platform. All the code that I have written works fine in the local environment from Eclipse, but when I use the same code in Jenkins it causes some dependency errors. Any help is really appreciated.
Thank you.
Note: Jenkins and Eclipse are on the same machine
How Jenkins resolves its dependencies is really a concern here for me.
Eclipse uses the M2eclipse plugin to add your dependencies to the classpath when running your plugin from Eclipse.
Jenkins only resolves dependencies between plugins. Furthermore, Jenkins expects the .hpi packages to be self-contained, i.e. to contain all the JAR dependencies you need. mvn package should copy the jars of all your dependencies into the WEB-INF/lib folder of the .hpi file.
In your specific case it seems that the Google Cloud implementation expects some implementation of a channel service provider on the classpath, so you should add a dependency on grpc-okhttp or grpc-netty so that it gets packaged into the .hpi file as well.
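For example, a sketch of such a dependency in the plugin's pom.xml (the version is only illustrative; pick one that matches your other gRPC artifacts):
<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-okhttp</artifactId>
    <version>1.31.0</version>
</dependency>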
Sometimes there can be a class loader issue, so add the following line of code before calling Google's classes:
Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
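A slightly safer variant (a sketch, assuming this runs inside an instance method of your plugin) saves and restores the original context class loader:
// Remember the current context class loader so it can be restored.
ClassLoader original = Thread.currentThread().getContextClassLoader();
try {
    // Use the plugin's own class loader while invoking the Google Cloud client.
    Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
    // ... call the Google Cloud publishing code here ...
} finally {
    // Restore the original loader so later build steps are unaffected.
    Thread.currentThread().setContextClassLoader(original);
}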
Also add the following to the Jenkins plugin's pom.xml to tell Jenkins that the dependencies in the pom.xml should be loaded before Jenkins' own dependencies:
<pluginManagement>
    <plugins>
        <plugin>
            <groupId>org.jenkins-ci.tools</groupId>
            <artifactId>maven-hpi-plugin</artifactId>
            <configuration>
                <pluginFirstClassLoader>true</pluginFirstClassLoader>
            </configuration>
        </plugin>
    </plugins>
</pluginManagement>

How to stop maven-deploy-plugin:deploy-file deploying source?

I have a WAR project that also produces some extra artefacts that I want to deploy to my artifact repo, so I have configured executions under the deploy plugin to deploy each of the extra artefacts:
<execution>
    <id>deploy-exe</id>
    <phase>deploy</phase>
    <goals>
        <goal>deploy-file</goal>
    </goals>
    <configuration>
        <file>target/${project.build.finalName}.exe</file>
        <packaging>exe</packaging>
        <!-- pom, sources and javadoc already deployed by project. Release repo will fail redeployment -->
        <generatePom>false</generatePom>
        <sources/>
        <javadoc/>
    </configuration>
</execution>
But each execution still tries to deploy the javadoc and sources for the project, even though I have tried to explicitly switch them off for the execution. NB: I want javadoc and sources for the project, but I only want them deployed once (by the deploy mojo).
This isn't a big deal until it comes to release time, at which point my build breaks because it tries to deploy the javadoc and source for the deploy mojo, as well as each of the deploy-file mojo executions, to a release repo that doesn't allow redeployment of artifacts.
Is it possible to configure the maven-deploy-plugin to not deploy source & javadoc for the deploy-file mojo?
