Building & running a Dockerfile in IntelliJ - docker

My problem is the following:
If I type this in the command console, it works:
docker build -f src/main/docker/Dockerfile.jvm -t hello . & docker run --name hello --rm -p 8080:8080 hello
But if I try to run it with the "Run" option in IntelliJ, it doesn't work.
My command above has 9 steps like the IntelliJ one, but it seems that the 5th fails. Here is the config:
Here is the output from the failed build:
And here is the output from the successful one:
It doesn't even create the image tag the way my manual command does.
And last but not least, here is the Dockerfile:
FROM fabric8/java-alpine-openjdk11-jre:latest
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
ENV AB_ENABLED=jmx_exporter
# Be prepared for running in OpenShift too
RUN adduser -G root --no-create-home --disabled-password 1001 \
&& chown -R 1001 /deployments \
&& chmod -R "g+rwX" /deployments \
&& chown -R 1001:root /deployments
COPY target/lib/* /deployments/lib/
COPY target/*-runner.jar /deployments/app.jar
EXPOSE 8080
# run with user 1001
USER 1001
ENTRYPOINT [ "/deployments/run-java.sh" ]
Where is the key difference? I can stick with the manual command, but the run configuration would be smoother.

There are only 3 files in your build context, which seems wrong. You might want to set the "Context folder" option so it points at the directory you run the manual command from (the project root).
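For reference, in the manual command the trailing "." is what sets the build context:
# the final "." makes the current directory (the project root) the build context,
# which is what the IntelliJ "Context folder" option has to match
docker build -f src/main/docker/Dockerfile.jvm -t hello .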

Related

getting "exec ./bin/activemq: no such file or directory" on docker image run

Using the Dockerfile below:
FROM docker.io/eclipse-temurin:11-jre
ENV ACTIVEMQ_HOME /opt/activemq
RUN mkdir -p /opt/activemq && chmod 755 /opt/activemq
COPY apache-activemq-5.17.1/. /opt/activemq/
RUN apt update -y && apt upgrade -y
RUN addgroup --system activemq && adduser --system --home $ACTIVEMQ_HOME --uid 10001 --group activemq && chown -R activemq:activemq $ACTIVEMQ_HOME && chown -h activemq:activemq $ACTIVEMQ_HOME
USER 10001
WORKDIR $ACTIVEMQ_HOME
CMD ["./bin/activemq","console","-Djetty.host=0.0.0.0"]
EXPOSE 61616 8161
I build the Docker image using docker build -t 123:11 .
When I try to run the image using docker run -it 123:11, I get exec ./bin/activemq: no such file or directory.
The same image worked on one server but is not working on another.
I tried overriding the entrypoint with --entrypoint /bin/bash and verified that the files were copied successfully.
Any reason it works on one server but not on the other?
I'm using Docker Desktop on Windows servers.
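A reconstruction of that entrypoint check, plus one extra line worth running (commands are illustrative, not verbatim from the question):
# open a shell in the image instead of the default CMD
docker run --rm -it --entrypoint /bin/bash 123:11
# then, inside the container (WORKDIR is /opt/activemq):
ls -l bin/activemq    # confirm the script exists and is executable
head -1 bin/activemq  # a shebang ending in CRLF ("\r") yields exactly this "no such file or directory" error
On Windows hosts, CRLF line endings picked up when apache-activemq-5.17.1 was unpacked or checked out on one server but not the other would explain the inconsistent behaviour.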

Docker compose throws - adduser: group 'www-data' in use

I have a docker-compose.yml file which used to work just fine a couple of months ago, but now when I run it, it throws an error.
First, this is my file structure.
.data/db
logs
mariadb
nginx
php7-fpm
src/public
.env
.gitignore
README
docker-compose.yml
The only mention of www-data (the subject of the error) is in two files: php7-fpm/Dockerfile and nginx/Dockerfile.
Here is the content of these files:
php-fpm/Dockerfile
....
RUN apt-get update && apt-get install -y procps
RUN usermod -u 1000 www-data
USER www-data
WORKDIR /var/www
nginx/Dockerfile
FROM nginx:alpine
COPY ./config/nginx.conf /etc/nginx/
COPY ./sites /etc/nginx/sites-available
RUN apk update \
&& apk upgrade \
&& apk add --no-cache bash \
&& adduser -D -H -u 1000 -s /bin/bash www-data
ARG PHP_UPSTREAM_CONTAINER=php-fpm
ARG PHP_UPSTREAM_PORT=9000
# Set upstream conf and remove the default conf
RUN echo "upstream php-upstream { server ${PHP_UPSTREAM_CONTAINER}:${PHP_UPSTREAM_PORT}; }" > /etc/nginx/conf.d/upstream.conf \
&& rm /etc/nginx/conf.d/default.conf
CMD ["nginx"]
The docker-compose.yml file is a generic one; there is no tampering with user groups. Here is a pastebin for anyone who wants to take a look:
https://pastebin.com/ivRfPvZz
This is the partial output from the docker-compose up -d command.
Image for service php-fpm was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building nginx
Step 1/8 : FROM nginx:alpine
alpine: Pulling from library/nginx
Digest: sha256:17bd1698318e9c0f9ba2c5ed49f53d690684dab7fe3e8019b855c352528d57be
Status: Downloaded newer image for nginx:alpine
---> ea1193fd3dde
Step 2/8 : COPY ./config/nginx.conf /etc/nginx/
---> 65c115482d37
Step 3/8 : COPY ./sites /etc/nginx/sites-available
---> 1fbe81620355
Step 4/8 : RUN apk update && apk upgrade && apk add --no-cache bash && adduser -D -H -u 1000 -s /bin/bash www-data
---> Running in c631ccdf63f2
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
v3.9.4-61-g22a1991b6a [http://dl-cdn.alpinelinux.org/alpine/v3.9/main]
v3.9.4-57-gb40ea6190b [http://dl-cdn.alpinelinux.org/alpine/v3.9/community]
OK: 9776 distinct packages available
(1/1) Upgrading libbz2 (1.0.6-r6 -> 1.0.6-r7)
OK: 27 MiB in 37 packages
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
(1/2) Installing readline (7.0.003-r1)
(2/2) Installing bash (4.4.19-r1)
Executing bash-4.4.19-r1.post-install
Executing busybox-1.29.3-r10.trigger
OK: 29 MiB in 39 packages
adduser: group 'www-data' in use
Service 'nginx' failed to build: The command '/bin/sh -c apk update && apk upgrade && apk add --no-cache bash && adduser -D -H -u 1000 -s /bin/bash www-data' returned a non-zero code: 1
You can see the error is:
adduser: group 'www-data' in use
Service 'nginx' failed to build: The command '/bin/sh -c apk update && apk upgrade && apk add --no-cache bash && adduser -D -H -u 1000 -s /bin/bash www-data' returned a non-zero code: 1
but I don't know how to fix this.
When you use FROM nginx:alpine, it is currently the same as using nginx:1.17.1-alpine, because they are just different tags for the same image ID.
But several months ago nginx:alpine may have pointed at a different image, e.g. nginx:1.14.2-alpine, so rebuilding with the same Dockerfile actually changed your base image. I strongly suggest using an explicit tag rather than a floating one for the base image, to keep builds deterministic.
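For example, pinning the base image in nginx/Dockerfile (the exact version here is only an illustration):
FROM nginx:1.17.1-alpine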
So what actually changed over those months?
With nginx:1.14.2-alpine (maybe not this exact version, it's just an example):
$ docker run --rm -it nginx:1.14.2-alpine cat /etc/group | grep www-data
You can see there is no www-data group in that image, so the following command could add both a new user and a new group named www-data:
adduser -D -H -u 1000 -s /bin/bash www-data
With nginx:1.17.1-alpine, which is currently the same as nginx:alpine:
$ docker run --rm -it nginx:1.17.1-alpine cat /etc/group | grep www-data
www-data:x:82:
You can see there is a default www-data group in this image. I don't know how it got there; in short, the image update brought a difference.
So, since a www-data group is already there, what you have to do is change your command to join the existing group:
adduser -D -H -u 1000 -s /bin/bash www-data -G www-data
You can find the Dockerfile inside the laradock/nginx folder. Just change the line
&& adduser -D -H -u 1000 -s /bin/bash www-data
to
&& adduser -D -H -u 1000 -s /bin/bash www-data -G www-data
This specifies the group that the user is a member of. Once done, build and bring your containers up with
docker-compose build --no-cache nginx
docker-compose up -d
I'm having this issue with Alpine 3.14, where the www-data group already exists in the image.
Adding (delgroup www-data || true) before the adduser line fixes the problem:
RUN apk update \
&& apk upgrade \
&& apk add --no-cache bash \
&& (delgroup www-data || true) \
&& adduser -D -H -u 1000 -s /bin/bash www-data
The parentheses together with || true ensure the command doesn't fail when the group does not exist, so the change is backward compatible.

Why "docker build" fails on local image

Probably I'm missing something obvious, but could someone please explain the following:
When I pull and run an image, e.g. docker pull dgraziotin/lamp && docker run -t -i -p 80:80 -p 3306:3306 --name osxlamp dgraziotin/lamp, it works just fine.
Now I want to play with the Dockerfile and build the image manually on my computer (I can do this, right?).
So I download the source files from GitHub (https://github.com/dgraziotin/osx-docker-lamp), cd to the unpacked folder, and run docker build -t test .
The build process starts, but I see a lot of weird errors like "Package php5-mysql is not available". I tried different images with the same result. How do I properly build local images?
UPD:
Dockerfile
FROM phusion/baseimage:latest
MAINTAINER Daniel Graziotin <daniel@ineed.coffee>
ENV REFRESHED_AT 2016-03-29
# based on tutumcloud/tutum-docker-lamp
# MAINTAINER Fernando Mayo <fernando@tutum.co>, Feng Honglin <hfeng@tutum.co>
ENV DOCKER_USER_ID 501
ENV DOCKER_USER_GID 20
ENV BOOT2DOCKER_ID 1000
ENV BOOT2DOCKER_GID 50
# Tweaks to give Apache/PHP write permissions to the app
RUN usermod -u ${BOOT2DOCKER_ID} www-data && \
usermod -G staff www-data && \
useradd -r mysql && \
usermod -G staff mysql
RUN groupmod -g $(($BOOT2DOCKER_GID + 10000)) $(getent group $BOOT2DOCKER_GID | cut -d: -f1)
RUN groupmod -g ${BOOT2DOCKER_GID} staff
# Install packages
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
apt-get -y install supervisor wget git apache2 libapache2-mod-php5 mysql-server php5-mysql pwgen php-apc php5-mcrypt zip unzip && \
echo "ServerName localhost" >> /etc/apache2/apache2.conf
# needed for phpMyAdmin
RUN php5enmod mcrypt
# Add image configuration and scripts
ADD start-apache2.sh /start-apache2.sh
ADD start-mysqld.sh /start-mysqld.sh
ADD run.sh /run.sh
RUN chmod 755 /*.sh
ADD supervisord-apache2.conf /etc/supervisor/conf.d/supervisord-apache2.conf
ADD supervisord-mysqld.conf /etc/supervisor/conf.d/supervisord-mysqld.conf
# Remove pre-installed database
RUN rm -rf /var/lib/mysql
# Add MySQL utils
ADD create_mysql_users.sh /create_mysql_users.sh
RUN chmod 755 /*.sh
# Add phpmyadmin
RUN wget -O /tmp/phpmyadmin.tar.gz https://files.phpmyadmin.net/phpMyAdmin/4.6.0/phpMyAdmin-4.6.0-all-languages.tar.gz
RUN tar xfvz /tmp/phpmyadmin.tar.gz -C /var/www
RUN ln -s /var/www/phpMyAdmin-4.6.0-all-languages /var/www/phpmyadmin
RUN mv /var/www/phpmyadmin/config.sample.inc.php /var/www/phpmyadmin/config.inc.php
ENV MYSQL_PASS:-$(pwgen -s 12 1)
# config to enable .htaccess
ADD apache_default /etc/apache2/sites-available/000-default.conf
RUN a2enmod rewrite
# Configure /app folder with sample app
RUN mkdir -p /app && rm -fr /var/www/html && ln -s /app /var/www/html
ADD app/ /app
#Environment variables to configure php
ENV PHP_UPLOAD_MAX_FILESIZE 10M
ENV PHP_POST_MAX_SIZE 10M
# Add volumes for the app and MySql
VOLUME ["/etc/mysql", "/var/lib/mysql", "/app" ]
EXPOSE 80 3306
CMD ["/run.sh"]
SOLVED: As I understand it, many custom images contain outdated or broken code and should be avoided as much as possible. We should rely on official, well-known, and supported images.
Unrelated to the exact problem, but your Dockerfile could use some rework based on Best Practices for writing Dockerfiles.
I'd like to point out the ADD vs COPY best practice and the deprecated MAINTAINER instruction (you should use LABEL maintainer="Daniel Graziotin <daniel@ineed.coffee>" instead).
Also, in the part where you add phpMyAdmin, using RUN wget instead of ADD buys you nothing unless you extract and delete the archive in the same layer (chaining the commands in one RUN). This is also covered under the ADD vs COPY best practices.
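A sketch of that single-layer pattern, applied to the phpMyAdmin steps from the Dockerfile above:
# download, extract, link, and clean up in one layer so the archive never persists in the image
RUN wget -O /tmp/phpmyadmin.tar.gz https://files.phpmyadmin.net/phpMyAdmin/4.6.0/phpMyAdmin-4.6.0-all-languages.tar.gz \
&& tar xfz /tmp/phpmyadmin.tar.gz -C /var/www \
&& ln -s /var/www/phpMyAdmin-4.6.0-all-languages /var/www/phpmyadmin \
&& rm /tmp/phpmyadmin.tar.gz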
Other than that, I'd say this is a pretty solid Dockerfile! Sad that it won't work because of the application...

How to access docker daemon from container with other user than root

I'm trying to run a Jenkins container that builds Docker images. I started with Docker last week, and I'm a bit confused by the use of volumes from the host and how users are handled.
Searching the internet, I found a GitHub issue where someone posted a solution for accessing the Docker daemon from inside the container. Basically, the idea is to mount inside the Jenkins container the volumes that contain the docker binary and docker.sock from the host, like this:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /usr/local/bin/docker:/usr/local/bin/docker
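A quick way to verify the mount from inside the container:
docker ps    # goes through the mounted socket, so it lists the host's containers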
I've done that, and it works, but only if I'm root. When I started to learn Docker, I followed an example in a blog where, instead of directly using a Jenkins image, the author copied the Dockerfiles from the Jenkins image itself and its dependencies to explain the process. As part of that process, a jenkins user is created, and it is the one used when starting the container. My problem now is that I cannot give the jenkins user access to the mounted docker.sock, as it belongs to root and the group docker on the host. I tried adding a docker group in the Dockerfile, but I still get a permission-denied error from a Jenkins job when accessing docker.sock. If I inspect the mounted /var/run/docker.sock inside the container, I can see that docker.sock belongs to the group users instead of docker, so I don't know exactly what's going on when the directory is mounted. I haven't worked much with Linux, so my guess is that the docker group doesn't exist when the directory is mounted and a default group is used instead, but I may be completely wrong.
Another thing I still don't get: if I create a container specifically to be used as a Jenkins container, and nothing else is supposed to run there, what's the purpose of creating a dedicated jenkins user? Is there any reason why I cannot directly use the root user?
This is the Dockerfile I use. Thanks.
FROM centos:7
# Yum workaround to stalled mirror
RUN sed -i -e 's/enabled=1/enabled=0/g' /etc/yum/pluginconf.d/fastestmirror.conf
RUN rm -f /var/lib/rpm/__*
RUN rpm --rebuilddb -v -v
RUN yum clean all
# see https://bugs.debian.org/775775
# and https://github.com/docker-library/java/issues/19#issuecomment-70546872
ENV CA_CERTIFICATES_JAVA_VERSION 20140324
RUN yum -v install -y \
wget \
zip \
which \
openssh-client \
unzip \
java-1.8.0-openjdk-devel \
git \
&& yum clean all
#RUN /var/lib/dpkg/info/ca-certificates-java.postinst configure
# Install Tini
ENV TINI_VERSION 0.9.0
ENV TINI_SHA fa23d1e20732501c3bb8eeeca423c89ac80ed452
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static -o /bin/tini && chmod +x /bin/tini \
&& echo "$TINI_SHA /bin/tini" | sha1sum -c -
# SET Jenkins Environment Variables
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
ENV JENKINS_VERSION 2.22
ENV JENKINS_SHA 5b89b6967e7af8119c52c7e86223b47665417a22
ENV JENKINS_UC https://updates.jenkins-ci.org
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
# SET Java variables
ENV JAVA_HOME /usr/lib/jvm/java/jre
ENV PATH /usr/lib/jvm/java/bin:$PATH
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN useradd -d "$JENKINS_HOME" -u 1000 -m -s /bin/bash jenkins
#Not working. Folder not yet mounted?
#RUN DOCKER_GID=$(stat -c '%g' /var/run/docker.sock) && \
#Using gid from host
RUN groupadd -for -g 50 docker && \
usermod -aG docker jenkins
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
# Install Jenkins
RUN curl -fL http://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war -o /usr/share/jenkins/jenkins.war \
&& echo "$JENKINS_SHA /usr/share/jenkins/jenkins.war" | sha1sum -c -
ENV JAVA_OPTS="-Xmx8192m"
ENV JENKINS_OPTS="--logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"
# Prep Jenkins Directories
RUN chown -R jenkins "$JENKINS_HOME" /usr/share/jenkins/ref
RUN mkdir /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
RUN chown -R jenkins:jenkins /var/cache/jenkins
# Expose Ports for web and slave agents
EXPOSE 8080
EXPOSE 50000
# Copy in local config files
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
COPY jenkins.sh /usr/local/bin/jenkins.sh
COPY plugins.sh /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/jenkins.sh
# Install default plugins
COPY plugins.txt /tmp/plugins.txt
RUN /usr/local/bin/plugins.sh /tmp/plugins.txt
# Add ssh key
RUN eval "$(ssh-agent -s)"
RUN mkdir /usr/share/jenkins/ref/.ssh && \
chmod 700 /usr/share/jenkins/ref/.ssh && \
ssh-keyscan github.com > /usr/share/jenkins/ref/.ssh/known_hosts
COPY id_rsa /usr/share/jenkins/ref/.ssh/id_rsa
COPY id_rsa /usr/share/jenkins/ref/.ssh/id_rsa.pub
COPY hudson.tasks.Maven.xml /usr/share/jenkins/ref/hudson.tasks.Maven.xml
RUN chown -R jenkins:jenkins /usr/share/jenkins/ref && \
chmod 600 /usr/share/jenkins/ref/.ssh/id_rsa && \
chmod 600 /usr/share/jenkins/ref/.ssh/id_rsa.pub && \
chmod 600 /usr/share/jenkins/ref/hudson.tasks.Maven.xml
COPY id_rsa /root/.ssh/id_rsa
COPY id_rsa /root/.ssh/id_rsa.pub
# ssh keys for root. To use root as the user
RUN chmod 600 /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa.pub && \
ssh-keyscan github.com > /root/.ssh/known_hosts
# Switch to the jenkins user
USER jenkins
# Tini as the entry point to manage zombie processes
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
Apparently the issue was the gid: for some reason I thought the gid of the docker group on the host was 50, but it was actually 100. When I changed it to 100, the Jenkins job started to work.
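So the fix in the Dockerfile above is just the matching gid (sketch):
# gid matched to the host's docker group
RUN groupadd -for -g 100 docker && \
usermod -aG docker jenkins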
I still don't know why docker.sock shows as belonging to the group users instead of docker inside the container, though. If I run cat /etc/group in the container I see
root:x:0:
...
users:x:100:
...
jenkins:x:1000:
docker:x:100:jenkins
and on the host
root:x:0:
lp:x:7:lp
nogroup:x:65534:
staff:x:50:docker
docker:x:100:docker
dockremap:x:101:dockremap
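That listing explains the naming confusion: file ownership is stored as a numeric gid, and tools like ls resolve it against the container's /etc/group, picking the first matching entry. Inside the container both users and docker map to gid 100, so the socket is shown as belonging to users. For example:
stat -c '%g %G' /var/run/docker.sock    # prints the numeric gid and the first group name matching it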

How to create a Jenkins job and/or user from a dockerfile?

I am trying to set up a customised Jenkins 2 server from a Dockerfile.
I use the official image and I want to be able to add things that I need like custom jobs and an admin user.
This is my dockerfile so far:
FROM openjdk:8-jdk
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
ENV TINI_VERSION 0.9.0
ENV TINI_SHA fa23d1e20732501c3bb8eeeca423c89ac80ed452
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static -o /bin/tini && chmod +x /bin/tini \
&& echo "$TINI_SHA /bin/tini" | sha1sum -c -
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.19.2}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=32b8bd1a86d6d4a91889bd38fb665db4090db081
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
# could use ADD but this one does not check Last-Modified header neither does it allow to control checksum
# see https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \
&& echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha1sum -c -
ENV JENKINS_UC https://updates.jenkins.io
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
# for main web interface:
EXPOSE 8080
# will be used by attached slave agents:
EXPOSE 50000
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
USER ${user}
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.txt /usr/share/jenkins/plugins.txt
COPY plugins.sh /usr/local/bin/plugins.sh
COPY install-plugins.sh /usr/local/bin/install-plugins.sh
# Add the command line tools
COPY jenkins-cli.jar "$JENKINS_HOME"
# Create jobs
ARG job_name_1="my_super_job"
#ARG job_name_2="my_ultra_job"
# create the jobs folder recursively
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/workspace/
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastFailedBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastStableBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastSuccessfulBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastUnstableBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastUnsuccessfulBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/legacyIds
#RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_2}
## add the custom configs to the container
COPY ${job_name_1}_config.xml "$JENKINS_HOME"/jobs/${job_name_1}/config.xml
USER root
#RUN chmod 600 "$JENKINS_HOME"/jobs/${job_name_1}/config.xml
RUN java -jar /var/jenkins_home/jenkins-cli.jar -s http://localhost:8080 create-job my_super_job < /var/jenkins_home/jobs/my_super_job/config.xml
#COPY ${job_name_2}_config.xml "$JENKINS_HOME"/jobs/${job_name_2}/config.xml
# --Install plugins--
# Notice: Deprecated method which however works with a 'plugins.txt' file
#USER root
#RUN chmod 600 /usr/share/jenkins/plugins.txt
#RUN chmod 600 /usr/local/bin/install-plugins.sh
#RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
# Notice: Recommended method with open case on Github [https://github.com/jenkinsci/docker/issues/348]
# Notice: Select whichever plugins you want
#RUN /usr/local/bin/install-plugins.sh \
#dashboard-view:2.9.10 \
#pipeline-stage-view:2.2 \
#parameterized-trigger:2.32 \
#bitbucket:1.1.5 \
#git:3.0.0 \
#github:1.22.4
# --Install plugins--
I have tried to create a job at build time by first launching a container, creating the job manually, saving the config.xml file, and then copying it into the image from the Dockerfile, replicating the file/folder structure Jenkins creates for a job.
But it is not working: the job does not appear in Jenkins.
I also tried to use jenkins-cli.jar, but as I understand it, there must be a live Jenkins server to connect to, which is not the case at build time.
Finally, I suppose creating an admin user at build time must be even more complicated than creating a job...
So, does anyone have any experience with this?
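One approach worth noting, as a hedged sketch: the official image copies everything under /usr/share/jenkins/ref/ into JENKINS_HOME on first start, and Jenkins runs every Groovy script in init.groovy.d at startup (the Dockerfile above already relies on this for tcp-slave-agent-port.groovy). Creating the admin user and the jobs from such scripts, rather than at build time, sidesteps the need for a live server during docker build. The script names below are illustrative:
# executed at startup, when a live Jenkins instance exists, instead of at build time
COPY security.groovy /usr/share/jenkins/ref/init.groovy.d/security.groovy
COPY create-jobs.groovy /usr/share/jenkins/ref/init.groovy.d/create-jobs.groovy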
