How to create different Jenkins 2 images, unlocked and with preloaded plugins?

I launch a new Jenkins2 container based on the official Jenkins image.
But it needs the initial setup: the randomly generated unlock string must be entered and the admin user/password must be set, then the plugins must be installed.
I want to be able to set all of this up from the Dockerfile.
I made a list of the plugins that I want installed during the build, but how do I handle the other two steps?
Basically, I want to be able to create different images uniquely configured and ready to be used via a container.

Plugins
Installation of plugins (as per the documentation):
# Dockerfile
USER root
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
# (...)
USER jenkins
If you wish to generate a plugins.txt based on your current manually configured Jenkins setup, run the following before building the image:
JENKINS_HOST=<user>:<passwd>@<hostname>:<port>
curl -sSL "http://$JENKINS_HOST/pluginManager/api/xml?depth=1&xpath=/*/*/shortName|/*/*/version&wrapper=plugins" | perl -pe 's/.*?<shortName>([\w-]+).*?<version>([^<]+)()(<\/\w+>)+/\1 \2\n/g'|sed 's/ /:/' > plugins.txt
# plugins.txt (example)
ace-editor:1.1
git-client:2.1.0
workflow-multibranch:2.9.2
script-security:1.24
durable-task:1.12
pam-auth:1.3
credentials:2.1.8
bitbucket:1.1.5
ssh-credentials:1.12
credentials-binding:1.10
mapdb-api:1.0.9.0
workflow-support:2.10
resource-disposer:0.3
workflow-basic-steps:2.3
email-ext:2.52
ws-cleanup:0.32
ssh-slaves:1.11
workflow-job:2.9
docker-commons:1.5
matrix-project:1.7.1
plain-credentials:1.3
workflow-scm-step:2.3
scm-api:1.3
matrix-auth:1.4
icon-shim:2.0.3
ldap:1.13
pipeline-build-step:2.3
subversion:2.7.1
ant:1.4
branch-api:1.11.1
pipeline-input-step:2.5
bouncycastle-api:2.16.0
workflow-cps:2.23
docker-slaves:1.0.5
cloudbees-folder:5.13
pipeline-stage-step:2.2
workflow-api:2.6
pipeline-stage-view:2.2
workflow-aggregator:2.4
github:1.22.4
token-macro:2.0
pipeline-graph-analysis:1.2
authentication-tokens:1.3
handlebars:1.1.1
gradle:1.25
git:3.0.0
external-monitor-job:1.6
structs:1.5
mercurial:1.57
antisamy-markup-formatter:1.5
jquery-detached:1.2.1
mailer:1.18
workflow-cps-global-lib:2.4
windows-slaves:1.2
workflow-step-api:2.5
docker-workflow:1.9
github-branch-source:1.10
pipeline-milestone-step:1.1
git-server:1.7
github-organization-folder:1.5
momentjs:1.1.1
build-timeout:1.17.1
github-api:1.79
workflow-durable-task-step:2.5
pipeline-rest-api:2.2
junit:1.19
display-url-api:0.5
timestamper:1.8.7
Disable Security & Admin user
This can be sorted by passing --env JAVA_OPTS="-Djenkins.install.runSetupWizard=false" to the docker run command.
Example:
docker run -d --name myjenkins -p 8080:8080 -p 50000:50000 --env JAVA_OPTS="-Djenkins.install.runSetupWizard=false" jenkins:latest
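If you want this baked into the image itself rather than passed at run time, the same option can be set in the Dockerfile with an ENV instruction. A minimal sketch, assuming the official image as the base (the tag is just an example):
# Dockerfile
FROM jenkins:latest
# Skip the setup wizard for every container started from this image (no unlock key, no wizard)
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
Note that with the wizard disabled Jenkins starts without the initial admin account, so you still need to configure users/security yourself, for example via a Groovy init script dropped into /usr/share/jenkins/ref/init.groovy.d/ in the image.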

Related

Cronjob Not Running via Crontab -e

I am trying to run a script daily that connects to my ESXi host, deletes all snapshots of all my VMs, and then creates new ones. I am attempting to do this by running the script within a Docker container using the VMware PowerCLI Docker image (https://hub.docker.com/r/vmware/powerclicore) on my Docker VM running Ubuntu.
I am able to successfully run this script by running the following command in terminal:
/usr/bin/docker run --rm -it --name=powerclicore --entrypoint="/usr/bin/pwsh" -v /home/<redacted>/config/powercli:/scripts vmware/powerclicore /scripts/VMSnapshot.ps1
However, after adding the above command to my cronfile via crontab -e, my job is not running.
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
0 0 * * * /usr/bin/docker run --rm -it --name=powerclicore --entrypoint="/usr/bin/pwsh" -v /home/<redacted>/config/powercli:/scripts vmware/powerclicore /scripts/VMSnapshot.ps1
Am I doing this wrong? A second pair of eyes would be much appreciated!
@KazikM. After some troubleshooting, I was finally able to get this to work using your recommendation. I created a Bash script that calls the Docker command I was trying to put into crontab, and then just call that Bash script from crontab: 0 0 * * * /bin/bash /home/<redacted>/config/powerclicore/VMSnapshot-bash.sh. Thanks again for your help!
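For reference, a minimal sketch of what such a wrapper script might contain (path and filename follow the comment above; note that cron runs without a TTY, which is why the interactive -it flags are dropped here):
#!/bin/bash
# VMSnapshot-bash.sh - wrapper so cron only has to call one command
# cron has no TTY attached, so the -i/-t flags from the interactive invocation are omitted
/usr/bin/docker run --rm --name=powerclicore --entrypoint="/usr/bin/pwsh" \
  -v /home/<redacted>/config/powercli:/scripts \
  vmware/powerclicore /scripts/VMSnapshot.ps1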

Use environment variables in wildfly datasource definition (file)

I want to repackage my WAR application as a self-contained Docker image - currently I am still deploying it as a WAR to WildFly 19.
Since I don't want the database password and/or URL to be part of the Docker image, I want them to be configurable from outside - as environment variables.
So my current Docker image includes a WildFly datasource definition as a -ds.xml file with env placeholders, since according to
https://blog.imixs.org/2017/03/17/use-environment-variables-wildfly-docker-container/
and other sources this should be possible.
My DS file is
<datasources xmlns="http://www.jboss.org/ironjacamar/schema">
    <datasource jndi-name="java:jboss/datasources/dbtDS" pool-name="benchmarkDS">
        <driver>dbt-datasource.ear_com.mysql.jdbc.Driver_5_1</driver>
        <connection-url>${DB_CONNECTION_URL,env.DB_CONNECTION_URL}</connection-url>
        <security>
            <user-name>${DB_USERNAME,env.DB_USERNAME}</user-name>
            <password>${DB_PASSWORD,env.DB_PASSWORD}</password>
        </security>
        <pool>[...]</pool>
    </datasource>
</datasources>
But starting the Docker container always results in the environment variables not being recognized:
11:00:38,790 WARN [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (JCA PoolFiller) IJ000610: Unable to fill pool: java:jboss/datasources/dbtDS: javax.resource.ResourceException: IJ031084: Unable to create connection
at org.jboss.ironjacamar.jdbcadapters@1.4.22.Final//org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:345)
[...]
Caused by: javax.resource.ResourceException: IJ031083: Wrong driver class [com.mysql.jdbc.Driver] for this connection URL []
at org.jboss.ironjacamar.jdbcadapters@1.4.22.Final//org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:323)
The last line says that DB_CONNECTION_URL seems to be empty - I tried several combinations, believe me.
Wrong driver class [com.mysql.jdbc.Driver] for this connection URL []
I'm starting my container with
docker run --name="dbt" --rm -it -p 8080:8080 -p 9990:9990 -e DB_CONNECTION_URL="jdbc:mysql://127.0.0.1:13306/dbt?serverTimezone=UTC" -e DB_USERNAME="dbt" -e DB_PASSWORD="_dbt" dbt
I even modified standalone.sh to print the environment, and DB_CONNECTION_URL IS there.
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /opt/jboss/wildfly
JAVA: /usr/lib/jvm/java/bin/java
DB_CONNECTION_URL: jdbc:mysql://127.0.0.1:13306/dbt?serverTimezone=UTC JAVA_OPTS: -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true --add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-exports=jdk.unsupported/sun.misc=ALL-UNNAMED --add-exports=jdk.unsupported/sun.reflect=ALL-UNNAMED
=========================================================================
11:00:34,362 INFO [org.jboss.modules] (main) JBoss Modules version 1.10.1.Final
11:00:34,854 INFO [org.jboss.msc] (main) JBoss MSC version 1.4.11.Final
11:00:34,863 INFO [org.jboss.threads] (main) JBoss Threads version 2.3.3.Final
[...]
So what am I doing wrong? How do I get WildFly to replace the placeholders in my DS file?
They seem to be processed - since they evaluate to empty. But they should contain something...
Any suggestions appreciated.
Current Dockerfile
[...] building step above [...]
FROM jboss/wildfly:20.0.1.Final
USER root
RUN yum -y install zip wget && yum clean all
RUN sed -i 's/echo " JAVA_OPTS/echo " DB_CONNECTION_URL: $DB_CONNECTION_URL JAVA_OPTS/g' /opt/jboss/wildfly/bin/standalone.sh && \
cat /opt/jboss/wildfly/bin/standalone.sh
RUN sed -i 's/<spec-descriptor-property-replacement>false<\/spec-descriptor-property-replacement>/<spec-descriptor-property-replacement>true<\/spec-descriptor-property-replacement><jboss-descriptor-property-replacement>true<\/jboss-descriptor-property-replacement><annotation-property-replacement>true<\/annotation-property-replacement>/g' /opt/jboss/wildfly/standalone/configuration/standalone.xml
USER jboss
COPY --from=0 /_build/dbt-datasource.ear /opt/jboss/wildfly/standalone/deployments/
ADD target/dbt.war /opt/jboss/wildfly/standalone/deployments/
Answer to myself - perhaps good to know for others later:
Placeholders in -ds.xml files are NOT supported(!).
I added the same datasource definition to standalone.xml by patching it with sed, and now it works more or less out of the box without further modification.

Run Artifactory as Docker container response 404

I created docker container with this command:
docker run --name artifactory -d -p 8081:8081 \
-v /jfrog/artifactory:/var/opt/jfrog/artifactory \
-e EXTRA_JAVA_OPTIONS='-Xms128M -Xmx512M -Xss256k -XX:+UseG1GC' \
docker.bintray.io/jfrog/artifactory-oss:latest
and started artifactory, but the response I get is 404 - not found
If you access http://99.79.191.172:8081/artifactory you see it.
If you follow the Artifactory Docker install documentation, you'll see you also need to expose port 8082 for the new JFrog Router, which is now handling the traffic coming in to the UI (and other services as needed).
This new architecture is from Artifactory 7.x. By setting latest as the repository tag, you don't have full control of what version you are running...
So your command should look like
docker run --name artifactory -p 8081:8081 -d -p 8082:8082 \
-v /jfrog/artifactory:/var/opt/jfrog/artifactory \
docker.bintray.io/jfrog/artifactory-oss:latest
For controlling the configuration (like the Java options you want), it's recommended to use the Artifactory system.yaml configuration. This file is the best way to control all aspects of the Artifactory system configuration.
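For example, the extra Java options from the original docker run command would go under the shared section, roughly like this (a sketch; the key matches the commented extraJavaOpts default visible in the system.yaml further down this thread):
shared:
  extraJavaOpts: "-Xms128M -Xmx512M -Xss256k -XX:+UseG1GC"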
I start my instance with
sudo groupadd -g 1030 artifactory
sudo useradd -u 1030 -g artifactory artifactory
sudo chown artifactory:artifactory /daten/jfrog -R
docker run \
-d \
--name artifactory \
-v /daten/jfrog/artifactory:/var/opt/jfrog/artifactory \
--user "$(id -u artifactory):$(id -g artifactory)" \
--restart always \
-p 8084:8081 -p 9082:8082 releases-docker.jfrog.io/jfrog/artifactory-oss:latest
This is my /daten/jfrog/artifactory/etc/system.yaml (I changed nothing manually)
## #formatter:off
## JFROG ARTIFACTORY SYSTEM CONFIGURATION FILE
## HOW TO USE: comment-out any field and keep the correct yaml indentation by deleting only the leading '#' character.
configVersion: 1
## NOTE: JFROG_HOME is a place holder for the JFrog root directory containing the deployed product, the home directory for all JFrog products.
## Replace JFROG_HOME with the real path! For example, in RPM install, JFROG_HOME=/opt/jfrog
## NOTE: Sensitive information such as passwords and join key are encrypted on first read.
## NOTE: The provided commented key and value is the default.
## SHARED CONFIGURATIONS
## A shared section for keys across all services in this config
shared:
## Java 11 distribution to use
#javaHome: "JFROG_HOME/artifactory/app/third-party/java"
## Extra Java options to pass to the JVM. These values add to or override the defaults.
#extraJavaOpts: "-Xms512m -Xmx2g"
## Security Configuration
security:
## Join key value for joining the cluster (takes precedence over 'joinKeyFile')
#joinKey: "<Your joinKey>"
## Join key file location
#joinKeyFile: "<For example: JFROG_HOME/artifactory/var/etc/security/join.key>"
## Master key file location
## Generated by the product on first startup if not provided
#masterKeyFile: "<For example: JFROG_HOME/artifactory/var/etc/security/master.key>"
## Maximum time to wait for key files (master.key and join.key)
#bootstrapKeysReadTimeoutSecs: 120
## Node Settings
node:
## A unique id to identify this node.
## Default auto generated at startup.
#id: "art1"
## Default auto resolved by startup script
#ip:
## Sets this node as primary in HA installation
#primary: true
## Sets this node as part of HA installation
#haEnabled: true
## Database Configuration
database:
## One of mysql, oracle, mssql, postgresql, mariadb
## Default Embedded derby
## Example for postgresql
#type: postgresql
#driver: org.postgresql.Driver
#url: "jdbc:postgresql://<your db url, for example: localhost:5432>/artifactory"
#username: artifactory
#password: password
I see this in router-request.log
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43740","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":3608608,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.56803042Z","level":"info","msg":"","request_Uber-Trace-Id":"664d23ea1941d9b0:410817c2c69f2849:31b50a1adccb9846:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43734","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":4000683,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.567751867Z","level":"info","msg":"","request_Uber-Trace-Id":"23967a8743252dd8:436e2a5407b66e64:31cfc496ccc260fa:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43736","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":4021195,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.567751873Z","level":"info","msg":"","request_Uber-Trace-Id":"28300761ec7b6cd5:36588fa084ee7105:10fbdaadbc39b21e:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43622","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":3918873,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.567751891Z","level":"info","msg":"","request_Uber-Trace-Id":"6d57920d087f4d0f:26b9120411520de2:49b0e61895e17734:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43742","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":2552815,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.569112324Z","level":"info","msg":"","request_Uber-Trace-Id":"d4a7bb216cf31eb:5c783ae80b95778f:fd11882b03eb63f:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43730","DownstreamContentSize":45,"DownstreamStatus":200,"Duration":18106757,"RequestMethod":"POST","RequestPath":"/artifactory/api/auth/loginRelatedData","StartUTC":"2021-12-30T11:49:19.557661286Z","level":"info","msg":"","request_Uber-Trace-Id":"d4a7bb216cf31eb:640bf3bca741e43b:28f0abcfc40f203:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43726","DownstreamContentSize":169,"DownstreamStatus":200,"Duration":19111069,"RequestMethod":"GET","RequestPath":"/artifactory/api/crowd","StartUTC":"2021-12-30T11:49:19.557426794Z","level":"info","msg":"","request_Uber-Trace-Id":"664d23ea1941d9b0:417647e0e0fd0911:55e80b7f7ab0724e:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43724","DownstreamContentSize":496,"DownstreamStatus":200,"Duration":19308753,"RequestMethod":"GET","RequestPath":"/artifactory/api/securityconfig","StartUTC":"2021-12-30T11:49:19.557346739Z","level":"info","msg":"","request_Uber-Trace-Id":"6d57920d087f4d0f:7bdba564c07f8bc5:71b1b99e1e406d5f:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43728","DownstreamContentSize":2,"DownstreamStatus":200,"Duration":19140699,"RequestMethod":"GET","RequestPath":"/artifactory/api/saml/config","StartUTC":"2021-12-30T11:49:19.557516365Z","level":"info","msg":"","request_Uber-Trace-Id":"23967a8743252dd8:2f9035e56dd9f0c5:4315ec00a6b32eb4:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43732","DownstreamContentSize":148,"DownstreamStatus":200,"Duration":18907203,"RequestMethod":"GET","RequestPath":"/artifactory/api/httpsso","StartUTC":"2021-12-30T11:49:19.557786692Z","level":"info","msg":"","request_Uber-Trace-Id":"28300761ec7b6cd5:2767cf480f6ebd73:2c013715cb58b384:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
I had to change the port to 8084 (8081 is already occupied on my host), but I run into a 404 as well.
Does anyone know how to solve this?

Dockerize 'at' scheduler

I want to put the at daemon (atd) in a separate Docker container, to run it as an external, environment-independent scheduler service.
I can run atd with the following Dockerfile and docker-compose.yml:
$ cat Dockerfile
FROM alpine
RUN apk add --update at ssmtp mailx
CMD [ "atd", "-f" ]
$ cat docker-compose.yml
version: '2'
services:
  scheduler:
    build: .
    working_dir: /mnt/scripts
    volumes:
      - "${PWD}/scripts:/mnt/scripts"
But the problems are:
1) There is no built-in option to redirect atd logs to /proc/self/fd/1 so that they show up via the docker logs command. at just has a -m option, which sends mail to the user.
Is it possible to redirect at output from user mail to /proc/self/fd/1 (maybe some compile flags)? (See the sketch below for one possible workaround.)
2) Right now I add a new task via a command like docker-compose exec scheduler at -f test.sh now + 1 minute. Is that a good way? I think a better way would be to find the file where at stores its queue, mount that file as a volume, update it externally and just send docker restart after the file changes.
But I can't find where at stores its data on Alpine Linux (I only found /var/spool/atd/.SEQ, where at stores the id of the last job). Does anyone know where at stores its data?
I'd also be glad to hear any advice regarding dockerizing at.
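One possible workaround for the logging question (a sketch, not a built-in atd feature): since atd runs as PID 1 in this container, each job can redirect its output to the daemon's stdout/stderr, which is exactly what docker logs reads:
# submit a job whose output goes to the container log instead of local mail;
# /proc/1/fd/1 and /proc/1/fd/2 are PID 1's (atd's) stdout/stderr inside the container
docker-compose exec scheduler sh -c \
  'echo "/mnt/scripts/test.sh > /proc/1/fd/1 2> /proc/1/fd/2" | at now + 1 minute'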
UPD. I found where at stores its data on Alpine: it's the /var/spool/atd folder. When I create a task via the at command, it creates an executable file there with a name like a000040190a2ff and content like
#!/bin/sh
# atrun uid=0 gid=0
# mail root 1
umask 22
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin; export PATH
HOSTNAME=e605e8017167; export HOSTNAME
HOME=/root; export HOME
cd /mnt/scripts || {
echo 'Execution directory inaccessible' >&2
exit 1
}
#!/usr/bin/env sh
echo "Hello world"
UPD2: the difference between running at with and without the -m option is the third line of the generated script.
with -m option:
#!/bin/sh
# atrun uid=0 gid=0
# mail root 1
...
without -m :
#!/bin/sh
# atrun uid=0 gid=0
# mail root 0
...
According to the official man page:
The user will be mailed standard error and standard output from his
commands, if any. Mail will be sent using the command
/usr/sbin/sendmail
and
-m
Send mail to the user when the job has completed even if there was no
output.
I tried to schedule a simple Hello World script and found that no mail was sent:
# mail -u root
No mail for root
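One possible explanation (an assumption, not verified here): ssmtp is installed in the image but never configured, so sendmail has nowhere to deliver to. A minimal /etc/ssmtp/ssmtp.conf sketch, with placeholder values only:
# /etc/ssmtp/ssmtp.conf - every value below is a placeholder
root=alerts@example.com
mailhub=smtp.example.com:587
AuthUser=alerts@example.com
AuthPass=changeme
UseSTARTTLS=YES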

How to know if my program is completely started inside my docker with compose

In my CI chain I execute end-to-end tests after a "docker-compose up". Unfortunately my tests often fail because even if the containers are properly started, the programs contained in my containers are not.
Is there an elegant way to verify that my setup is completely started before running my tests ?
You could poll the required services to confirm they are responding before running the tests.
curl has inbuilt retry logic or it's fairly trivial to build retry logic around some other type of service test.
#!/bin/bash
await(){
  # poll ${url} with curl until it responds or ${seconds} (default 30) have elapsed
  local url=${1}
  local seconds=${2:-30}
  curl --max-time 5 --retry 60 --retry-delay 1 \
    --retry-max-time ${seconds} "${url}" \
    || exit 1
}
docker-compose up -d
await http://container_ms1:3000
await http://container_ms2:3000
run-ze-tests
The alternative to polling is an event-based system.
If all your services push notifications to an external service - scaeda gave the example of a log file, or you could use something like Amazon SNS - your services can emit a "started" event. Then you subscribe to those events and run whatever you need once everything has started.
Docker 1.12 did add the HEALTHCHECK build command. Maybe this is available via Docker Events?
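A minimal sketch of such a health check (the endpoint and port are placeholders for whatever your service exposes):
# Dockerfile
HEALTHCHECK --interval=5s --timeout=3s --retries=10 \
  CMD curl -f http://localhost:3000/ || exit 1
With that in place, docker inspect --format '{{.State.Health.Status}}' <container> reports starting/healthy/unhealthy, which a CI script can poll before kicking off the tests.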
If you have control over the docker engine in your CI setup you could execute docker logs [Container_Name] and read out the last line which could be emitted by your application.
RESULT=$(docker logs [Container_Name] 2>&1 | grep [Search_String])
logs output example:
Agent pid 13
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
parse specific line:
RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep Enter)
result:
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
