Jib - where to copy webapp folder inside image?

I am creating a Docker image using Google's Jib Maven plugin. The image gets created successfully and the backend services work fine, but my webapp folder is not part of that image. Before Jib I was creating a zip containing everything (including the webapp folder in the root of that zip, along with the executable jar), which worked fine.
Now the image created by Jib has classes, libs, and resources in the app root. How and where should I copy the webapp folder?

It worked for me by using the extra directories support provided by the Jib Maven plugin.
<extraDirectories>
  <paths>
    <path>
      <from>webapp</from>
      <into>/app/webapp</into>
    </path>
  </paths>
</extraDirectories>

I am currently running Spring Boot 2.4.10 and the application is packaged as a jar.
My project has JSPs at:
src/main/webapp/WEB-INF/jsp
This is important because it allows me to run the application as an executable jar: java -jar ./target/app.jar --spring.profiles.active=prod
The Jib plugin doesn't copy the src/main/webapp directory to the container image by default, so we need to add it manually with the following configuration.
<extraDirectories>
  <paths>
    <path>
      <from>src/main/webapp/WEB-INF</from>
      <into>/app/resources/META-INF/resources/WEB-INF</into>
    </path>
  </paths>
</extraDirectories>
I provide the Jib plugin with a custom entrypoint.sh, located at src/main/jib:
#!/bin/sh
echo "The application will start in ${APPLICATION_SLEEP}s..." && sleep ${APPLICATION_SLEEP}
exec java ${JAVA_OPTS} -noverify -XX:+AlwaysPreTouch \
  -Djava.security.egd=file:/dev/./urandom -cp /app/resources/:/app/classes/:/app/libs/* \
  "com.demo.application.Application" "$@"
My final jib-plugin configuration is the following:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>${jib-maven-plugin.version}</version>
  <configuration>
    <from>
      <image>adoptopenjdk:11-jre-hotspot</image>
    </from>
    <to>
      <image>myprivateregistry/app/${project.name}</image>
      <tags>
        <tag>latest</tag>
        <tag>${project.version}</tag>
      </tags>
    </to>
    <container>
      <entrypoint>
        <shell>bash</shell>
        <option>-c</option>
        <arg>/entrypoint.sh</arg>
      </entrypoint>
      <ports>
        <port>8080</port>
      </ports>
      <environment>
        <SPRING_OUTPUT_ANSI_ENABLED>ALWAYS</SPRING_OUTPUT_ANSI_ENABLED>
        <APPLICATION_SLEEP>0</APPLICATION_SLEEP>
      </environment>
      <creationTime>USE_CURRENT_TIMESTAMP</creationTime>
    </container>
    <extraDirectories>
      <paths>
        <path>src/main/jib</path>
        <path>
          <from>src/main/webapp/WEB-INF</from>
          <into>/app/resources/META-INF/resources/WEB-INF</into>
        </path>
      </paths>
      <permissions>
        <permission>
          <file>/entrypoint.sh</file>
          <mode>755</mode>
        </permission>
      </permissions>
    </extraDirectories>
  </configuration>
  <!-- Make the Jib plugin run during the package phase of the lifecycle -->
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The above didn't work for me, but the following did.
<extraDirectories>
  <paths>
    <path>
      <from>../path/to/frontend/app/dist</from>
      <into>/app/resources/static</into>
    </path>
  </paths>
</extraDirectories>

Related

Docker cp maven package into running container by maven

For local development I'd like to copy a Maven war package into a Docker container right after mvn package has created the package. How can I accomplish this?
My workflow as of right now, with a specific dockerid:
$ mvn clean package
$ docker cp the.war dockerid:/usr/local/tomcat/webapps/the.war
A Tomcat server in the Docker container picks up the new war and redeploys it.
I tried adding the maven-antrun-plugin. However, the war is not deployed, whether I use the install or the package phase. I get no error or warning with e.g. mvn clean package, but the.war file is not copied into the Docker container.
Below, the dockerid is hardcoded for the moment.
<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <version.jdk>1.8</version.jdk>
  <version.maven.compiler>3.6.1</version.maven.compiler>
  ...
</properties>
...
<dependency>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <version>3.1.0</version>
</dependency>
...
...
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <configuration>
        <target>
          <exec executable="docker">
            <arg value="cp"/>
            <arg value="${basedir}/target/the.war"/>
            <arg value="dockerid:/usr/local/tomcat/webapps/the.war"/>
          </exec>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>
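One thing worth double-checking here (an assumption on my part, not a confirmed fix): the maven-antrun-plugin only executes its target when the run goal is bound to the execution, and the plugin normally belongs under build/plugins rather than dependencies. A sketch of the same execution with the goal bound (keeping the hardcoded dockerid) would look like this:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <version>3.1.0</version>
  <executions>
    <execution>
      <id>copy-war-to-container</id> <!-- hypothetical id -->
      <phase>package</phase>
      <goals>
        <!-- without this binding the target is never executed -->
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <!-- failonerror surfaces docker cp failures instead of passing silently -->
          <exec executable="docker" failonerror="true">
            <arg value="cp"/>
            <arg value="${basedir}/target/the.war"/>
            <arg value="dockerid:/usr/local/tomcat/webapps/the.war"/>
          </exec>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>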

maven built dockerized project that runs a node application

I have a project that is built via Maven; it is a dockerized project for a Node application.
I want to be able to customize my CMD/ENTRYPOINT based on the Maven build arguments.
I know that when we do docker run and provide it the arguments, they are accepted and that works fine, but I want to do the same thing from the Maven command line.
Is there a way to let docker run know about an argument passed on the Maven command line?
Or, even better, can I read Maven command-line arguments in the Dockerfile and use them in the ENTRYPOINT?
Thanks in advance,
Minakshi
Based on this, you can either use the maven-resources-plugin to replace instances of ${...} with the values set in Maven before you build the Docker image.
Example:
<plugin>
  <artifactId>maven-resources-plugin</artifactId>
  <version>3.0.2</version>
  <executions>
    <execution>
      <id>filter-dockerfile</id>
      <phase>generate-resources</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>${project.build.directory}</outputDirectory>
        <resources>
          <resource>
            <directory>src/main/docker</directory>
            <filtering>true</filtering>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
This assumes your Dockerfile is under the src/main/docker/ path. The filtered Dockerfile will be copied to the ${project.build.directory} path.
Or, based on this comment, you could pass build arguments to the Dockerfile.
Example:
On your Maven Docker plugin:
<configuration>
  <buildArgs>
    <artifactId>${project.artifactId}</artifactId>
    <groupId>${project.groupId}</groupId>
  </buildArgs>
</configuration>
Then access those properties as ARGs in the Dockerfile:
ARG artifactId
ARG groupId
ENV ARTIFACT_ID=${artifactId} GROUP_ID=${groupId}
Hope this helps answer your question.
Thank you for the responses.
I used resource filtering in Maven to solve my problem:
<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <!-- the variable that you want to pass along -->
  <userdefvariable></userdefvariable>
</properties>
<build>
  <resources>
    <resource>
      <!-- path to the file (can be anything) -->
      <directory>src/main/resource</directory>
      <!-- must be true; this is what does the replacement -->
      <filtering>true</filtering>
    </resource>
  </resources>
</build>
Add to the Maven command line: "resources:resources -Duserdefvariable=value"
This setup generates a file in the target folder after running the mvn commands, with the variable replaced by the value given by the user.
In the Dockerfile you can now put in a command that runs the file:
CMD ["sh", "path to the script in target folder"]
# this script should contain the commands that you want to use

Maven dockerfile plugin not able to tag the image

I am trying to integrate the Dockerfile Maven plugin with my project. I have multiple modules under my Maven project, and I have modified the pom.xml of the module I want to build and tag images for, as shown below. Running the mvn dockerfile:build command creates a docker-info.jar under the target folder. I am not sure where the images are being built, and when I try to run the mvn dockerfile:tag command I see the error below.
Failed to execute goal com.spotify:dockerfile-maven-plugin:1.4.4:tag
(default-cli) on project drs-web: The parameters 'repository' for goal
com.spotify:dockerfile-maven-plugin:1.4.4:tag are missing or invalid
pom.xml:
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>dockerfile-maven-plugin</artifactId>
  <version>${docker.maven.plugin.version}</version>
  <executions>
    <execution>
      <id>build</id>
      <goals>
        <goal>build</goal>
      </goals>
      <configuration>
        <buildArgs>
          <WAR_FILE>${project.build.finalName}.war</WAR_FILE>
        </buildArgs>
      </configuration>
    </execution>
    <execution>
      <id>tag</id>
      <goals>
        <goal>tag</goal>
      </goals>
      <configuration>
        <repository>XXX/XXX-api</repository>
        <tag>${project.version}</tag>
      </configuration>
    </execution>
  </executions>
</plugin>
Dockerfile:
FROM tomcat:9.0.10-jre8-slim
ENV CATALINA_HOME /usr/local/tomcat
MAINTAINER XXX
EXPOSE 8080
ADD target/${WAR_FILE} ${CATALINA_HOME}/webapps/XXX-api.war
To fix the error you should use the same parameters in both sections of your pom.xml. You didn't define the repository name for the build goal:
<configuration>
  <repository>XXX/XXX-api</repository>
</configuration>
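One way to sketch this (an illustration, not the only option) is to move repository and tag up into the plugin-level configuration, so that both bound executions and a plain mvn dockerfile:tag run from the command line see the same values:
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>dockerfile-maven-plugin</artifactId>
  <version>${docker.maven.plugin.version}</version>
  <configuration>
    <!-- shared by the build and tag goals, including command-line invocations -->
    <repository>XXX/XXX-api</repository>
    <tag>${project.version}</tag>
    <buildArgs>
      <WAR_FILE>${project.build.finalName}.war</WAR_FILE>
    </buildArgs>
  </configuration>
  <executions>
    <execution>
      <id>build-and-tag</id>
      <goals>
        <goal>build</goal>
        <goal>tag</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Note, too, that execution-level configuration is generally not picked up when a goal is invoked directly from the command line, which would explain the default-cli in the error message.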
The fact that docker-info.jar was created in your target directory most likely means that the Docker image was built successfully.
The image should be available locally under the name "XXX/XXX-api", and you can check this from a console with the command:
docker image ls
P.S. You can avoid generating docker-info.jar by adding the following parameter to the configuration section of the dockerfile-maven-plugin:
<configuration>
  <skipDockerInfo>true</skipDockerInfo>
</configuration>

Cannot find 'dockerFileDir' in class io.fabric8.maven.docker.config.ImageConfiguration

I am building a Docker image through the fabric8 Maven plugin. I get the error below when I run mvn clean package docker:build.
Unable to parse configuration of mojo io.fabric8:docker-maven-plugin:0.21.0:build for parameter dockerFileDir: Cannot find 'dockerFileDir' in class io.fabric8.maven.docker.config.ImageConfiguration -> [Help 1]
Plugin configuration:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.21.0</version>
  <configuration>
    <dockerHost>unix:///var/run/docker.sock</dockerHost>
    <verbose>true</verbose>
    <images>
      <image>
        <name>${docker.image.prefix}/${docker.image.name}</name>
        <dockerFileDir>${project.basedir}/src/main/docker/</dockerFileDir>
        <!-- copies artifact to docker build dir in target -->
        <assembly>
          <descriptorRef>artifact</descriptorRef>
        </assembly>
        <tags>
          <tag>latest</tag>
          <tag>${project.version}</tag>
        </tags>
      </image>
    </images>
  </configuration>
</plugin>
Can anyone help me with this issue?
Thanks in advance.
dockerFileDir as well as tags should be within the build tag:
<image>
  <name>${docker.image.prefix}/${docker.image.name}</name>
  <build>
    <dockerFileDir>${project.basedir}/src/main/docker/</dockerFileDir>
    <tags>
      <tag>latest</tag>
      <tag>${project.version}</tag>
    </tags>
  </build>
  ...
</image>

Dynamically create the version number within Ambari's metainfo.xml file using the Maven build process

I don't want to hardcode my service version into metainfo.xml. Can I avoid doing that?
<service>
  <name>DUMMY_APP</name>
  <displayName>My Dummy APP</displayName>
  <comment>This is a distributed app.</comment>
  <version>0.1</version> <!-- this is what I don't want to hardcode -->
  <components>
  ...
  </components>
</service>
I am using maven as my build tool.
This can be done by using Maven's resource filtering. Three steps are required:
Define a Maven property that will hold the version number.
Reference that Maven property, wrapped in ${...} filter delimiters, in the metainfo.xml file.
Note in the pom that the metainfo.xml file needs to be filtered.
For example, let's assume you want to use the project's Maven version identifier, ${project.version}, as the version in the metainfo.xml file. You would replace <version>0.1</version> with <version>${project.version}</version>. Then in your pom file you would need to list the metainfo.xml file as needing to be filtered. The procedure for this step will vary depending on how you are bundling the custom Ambari service (rpm, assembly, etc.). In general, whichever plugin you are using, when you list the sources (the content to include in the bundle) you will need to specify the path to the metainfo.xml file and make sure filtering is set to true.
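For instance, the relevant part of metainfo.xml (taken from the question) would then read:
<service>
  <name>DUMMY_APP</name>
  <displayName>My Dummy APP</displayName>
  <comment>This is a distributed app.</comment>
  <!-- filled in by Maven resource filtering at build time -->
  <version>${project.version}</version>
  <components>
  ...
  </components>
</service>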
Now let's assume you want to create an rpm that will install your artifact. It would look something like this:
Your project structure should be as follows:
src
  main
    resources
      configuration
      scripts
      metainfo.xml
Your pom file would look like this:
<?xml version="1.0"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <version>0.0.1-SNAPSHOT</version>
  <artifactId>com.example.project</artifactId>
  <packaging>pom</packaging>
  <properties>
    <hdp.name>HDP</hdp.name>
    <hdp.version>2.3</hdp.version>
    <stack.dir.prefix>/var/lib/ambari-server/resources/stacks</stack.dir.prefix>
  </properties>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>rpm-maven-plugin</artifactId>
        <version>2.1.2</version>
        <extensions>true</extensions>
        <executions>
          <execution>
            <id>generate-hdp-rpm</id>
            <phase>package</phase>
            <goals>
              <goal>attached-rpm</goal>
            </goals>
            <configuration>
              <classifier>hdp</classifier>
              <needarch>true</needarch>
              <sourceEncoding>UTF-8</sourceEncoding>
              <distribution>blah</distribution>
              <group>something/Services</group>
              <packager>company</packager>
              <vendor>company</vendor>
              <name>SERVICENAME-ambari-hdp</name>
              <defineStatements>
                <!-- I use the line below to prevent compiled python scripts from failing the build -->
                <defineStatement>_unpackaged_files_terminate_build 0</defineStatement>
                <defineStatement>platform_stack_directory ${stack.dir.prefix}/${hdp.name}/${hdp.version}</defineStatement>
              </defineStatements>
              <requires>
                <require>ambari-server</require>
              </requires>
              <mappings>
                <mapping>
                  <directory>${stack.dir.prefix}/${hdp.name}/${hdp.version}/services/SERVICENAME</directory>
                  <filemode>755</filemode>
                  <username>root</username>
                  <groupname>root</groupname>
                </mapping>
                <mapping>
                  <directory>${stack.dir.prefix}/${hdp.name}/${hdp.version}/services/SERVICENAME</directory>
                  <directoryIncluded>false</directoryIncluded>
                  <filemode>755</filemode>
                  <username>root</username>
                  <groupname>root</groupname>
                  <sources>
                    <source>
                      <location>src/main/resources/metainfo.xml</location>
                      <filter>true</filter>
                    </source>
                  </sources>
                </mapping>
                <mapping>
                  <directory>${stack.dir.prefix}/${hdp.name}/${hdp.version}/services/SERVICENAME/configuration</directory>
                  <filemode>755</filemode>
                  <username>root</username>
                  <groupname>root</groupname>
                </mapping>
                <mapping>
                  <directory>${stack.dir.prefix}/${hdp.name}/${hdp.version}/services/SERVICENAME/configuration</directory>
                  <directoryIncluded>false</directoryIncluded>
                  <filemode>755</filemode>
                  <username>root</username>
                  <groupname>root</groupname>
                  <sources>
                    <source>
                      <location>src/main/resources/configuration</location>
                    </source>
                  </sources>
                </mapping>
                <mapping>
                  <directory>${stack.dir.prefix}/${hdp.name}/${hdp.version}/services/SERVICENAME/package</directory>
                  <filemode>755</filemode>
                  <username>root</username>
                  <groupname>root</groupname>
                </mapping>
                <mapping>
                  <directory>${stack.dir.prefix}/${hdp.name}/${hdp.version}/services/SERVICENAME/package/scripts</directory>
                  <filemode>755</filemode>
                  <username>root</username>
                  <groupname>root</groupname>
                </mapping>
                <mapping>
                  <directory>${stack.dir.prefix}/${hdp.name}/${hdp.version}/services/SERVICENAME/package/scripts</directory>
                  <directoryIncluded>false</directoryIncluded>
                  <filemode>755</filemode>
                  <username>root</username>
                  <groupname>root</groupname>
                  <sources>
                    <source>
                      <location>src/main/resources/scripts</location>
                    </source>
                  </sources>
                </mapping>
              </mappings>
            </configuration>
          </execution>
          <!-- You may have multiple executions if you want to create rpms for stacks other than HDP -->
        </executions>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <!-- List any dependencies you need -->
  </dependencies>
</project>
This will create an rpm that, when installed, will add your service to the HDP 2.3 stack. After installing the rpm you would have to restart ambari-server to make sure the new service definition is picked up.
Update:
To create additional RPMs for other stacks, you will need to do the following (a trimmed sketch follows this list):
Duplicate the execution block in the rpm-maven-plugin section of the pom.
Change the id element of the new execution to be unique.
Modify the mappings to reflect the directory/file structure you want for the other stack.
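A trimmed sketch of what such a duplicated execution could look like (the HDF stack name, version, and id here are hypothetical placeholders):
<execution>
  <id>generate-hdf-rpm</id> <!-- must differ from generate-hdp-rpm -->
  <phase>package</phase>
  <goals>
    <goal>attached-rpm</goal>
  </goals>
  <configuration>
    <classifier>hdf</classifier> <!-- a distinct classifier keeps the attached rpms from clashing -->
    <name>SERVICENAME-ambari-hdf</name>
    <mappings>
      <!-- adjust the directories to the other stack's layout, for example: -->
      <mapping>
        <directory>${stack.dir.prefix}/HDF/3.1/services/SERVICENAME</directory>
        <filemode>755</filemode>
        <username>root</username>
        <groupname>root</groupname>
      </mapping>
      <!-- repeat the remaining mappings from the HDP execution with updated paths -->
    </mappings>
  </configuration>
</execution>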
