Plugins fail to install when building Jenkins in Docker

I have a Dockerfile for a custom Jenkins master like so:
FROM jenkins
MAINTAINER me
USER root
RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state
RUN apt-get update \
&& apt-get install -y sudo \
&& apt-get install -y vim \
&& rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
USER jenkins
# COPY plugins.txt /usr/share/jenkins/plugins.txt
# RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
ENV JAVA_OPTS="-Xmx8192m"
ENV JENKINS_OPTS="--handlerCountStartup=100 --handlerCountMax=300"
RUN /usr/local/bin/install-plugins.sh git:2.6.0
Everything works fine until the RUN /usr/local/bin/install-plugins.sh git:2.6.0 line. I get an error installing the plugins:
Creating initial locks...
Analyzing war...
Downloading plugins...
Downloading plugin: git from https://updates.jenkins.io/download/plugins/git/2.6.0/git.hpi
Downloading plugin: git-plugin from https://updates.jenkins.io/download/plugins/git-plugin/2.6.0/git-plugin.hpi
Failed to download plugin: git or git-plugin
WAR bundled plugins:
Installed plugins:
*:
Some plugins failed to download!
Not downloaded: git
The command '/bin/sh -c /usr/local/bin/install-plugins.sh git:2.6.0' returned a non-zero code: 1
Am I doing something wrong or is this an issue with Jenkins/Docker?

For those who are pulling the Jenkins image from Docker Hub, don't pull:
docker pull jenkins
or
docker pull jenkinsci/jenkins
Instead, pull the latest version using:
docker pull jenkins/jenkins
This is the latest one according to https://jenkins.io/blog/2018/12/10/the-official-Docker-image/
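For example, the question's Dockerfile could simply start from the current image instead (the exact tag is an assumption; lts or a pinned version both work):
FROM jenkins/jenkins:lts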

Your Dockerfile works for me, installs all plugins and builds the image successfully:
Analyzing war...
Downloading plugins...
Downloading plugin: git from https://updates.jenkins.io/download/plugins/git/2.6.0/git.hpi
> git depends on workflow-scm-step:1.14.2,mailer:1.17,matrix-project:1.7.1,ssh-credentials:1.12,parameterized-trigger:2.4;resolution:=optional,scm-api:1.2,token-macro:1.11;resolution:=optional,promoted-builds:2.27;resolution:=optional,credentials:2.1.4,git-client:1.21.0
Downloading plugin: workflow-scm-step from https://updates.jenkins.io/download/plugins/workflow-scm-step/latest/workflow-scm-step.hpi
...
Removing intermediate container 4f895c203944
Successfully built 31d58d1f586f
Try docker build --no-cache in case there's an issue with one of the layers in your image cache, or set up an automated build on Docker Hub and build it on Docker's servers.
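For example (the image tag is just a placeholder):
docker build --no-cache -t my-jenkins .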

I recall having problems installing with that script myself. Instead, I used the following:
RUN install-plugins.sh \
disable-failed-job \
disk-usage \
greenballs \
...
And hopefully it doesn't make a difference here, but I have my plugin install in the root portion of my Dockerfile, before dropping back to running commands as USER jenkins.
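A minimal sketch of that ordering, reusing the plugin names from above (the base image and plugin list are just examples):
FROM jenkins
USER root
# plugins are installed while still root
RUN install-plugins.sh \
    disable-failed-job \
    disk-usage \
    greenballs
USER jenkins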

Dockerfile
FROM jenkins/jenkins:latest
ENV CURL_OPTIONS -sSfLk
ENV JENKINS_OPTS --httpPort=-1

The curl timeouts for downloading plugins were too short in some cases; that was fixed in image 2.19.1, and the timeout is now configurable too, using CURL_CONNECTION_TIMEOUT and other options.
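For example (a sketch; the timeout value is an arbitrary assumption):
FROM jenkins/jenkins:latest
ENV CURL_OPTIONS -sSfLk
ENV CURL_CONNECTION_TIMEOUT 60
RUN /usr/local/bin/install-plugins.sh git:2.6.0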

I had the same problem on OS X.
In my case the problem was caused by a bad DNS configuration (obtained via DHCP). When I changed the DNS to Google's DNS, 8.8.8.8, it all worked perfectly.
I encountered error messages such as:
Failed to resolve host name "ftp.icm.edu.pl". Perhaps you need to configure HTTP proxy
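If you hit the same thing, one way to pin the Docker daemon's DNS is in its daemon.json (a sketch; on Docker for Mac the daemon configuration is edited through the Docker preferences rather than directly on disk):
{
  "dns": ["8.8.8.8"]
}
Then restart the Docker daemon.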

I had a very similar issue and the solution for me was to specify the proxy within the Dockerfile prior to the plugin install. Below is the relevant snippet of my Dockerfile:
FROM jenkins:latest
MAINTAINER Jose Estrada
USER root
ENV JAVA_OPTS="--handlerCountStartup=100 --handlerCountMax=300 --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war -Dhttps.proxyHost=proxy-wsa.esl.cisco.com -Dhttps.proxyPort=80"
ENV http_proxy <PROXY Settings>
ENV https_proxy <PROXY Settings>
RUN /usr/local/bin/install-plugins.sh cisco-spark-notifier:latest
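Alternatively (not part of the original answer, just a common variation), the proxy can be supplied at build time through Docker's predefined proxy build args instead of being baked into the image with ENV; the <proxy-host>:<port> values are placeholders for your own settings:
docker build \
  --build-arg http_proxy=http://<proxy-host>:<port> \
  --build-arg https_proxy=http://<proxy-host>:<port> \
  -t my-jenkins .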

This could be a DNS issue. Please restart the Docker daemon and try again (sudo service docker restart).

Related

Docker CMD and WORKDIR getting converted to RUN

I've created a Docker image with Artifactory and Terraform to be used by pods in a K8s cluster, but it won't persist: the pod gets deleted immediately after spinning up and wasn't able to execute the job it was assigned. Checking the pushed image in Artifactory, the WORKDIR and CMD have been converted to RUN. Is there anything I'm missing?
Here's the Dockerfile:
FROM alpine:3.16.2
LABEL maintainer="Platform Engineering"
# install dependencies
RUN apk add terraform
RUN apk add curl
RUN apk add tree
RUN curl -fL https://install-cli.jfrog.io | sh
RUN curl --location --output /usr/local/bin/release-cli "https://release-cli-downloads.s3.amazonaws.com/latest/release-cli-linux-amd64"
RUN chmod +x /usr/local/bin/release-cli
# check version of installed dependencies
RUN terraform -v
RUN jf -v
RUN release-cli -v
# target ci workspace under /tmp directory
WORKDIR /tmp/ci-workspace
CMD ["/bin/sh"]
Here's what the layers look like in Artifactory:
I tried rebuilding on Windows and on another machine, and installing one binary at a time; nothing worked.

Using docker to create CI server agents

I'm trying to set up a local GoCD CI server using Docker for both the base server and the agents. I can get everything running fine, but issues spring up when I try to make sure the agent containers have everything installed in them that I need to build my projects.
I want to preface this with: I'm aware that I might not be using these technologies correctly, but I don't know much better at the moment. If there are better ways of doing things, I'd love to learn.
To start, I'm using the official GoCD docker image and that works just fine.
Creating a blank agent also works just fine.
However, one of my projects requires node, yarn and webpack to build (good ol' React site).
Of course a standard agent container has nothing but the agent installed on it, so I've had a shot at using a Dockerfile to install all the tech I need to build my projects.
FROM gocd/gocd-agent-ubuntu-18.04:v19.11.0
SHELL ["/bin/bash", "-c"]
USER root
RUN apt-get update
RUN apt-get install -y git curl wget build-essential ca-certificates libssl-dev htop openjdk-8-jre python python-pip
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y yarn
# This user is created in the base agent image
USER go
ENV NVM_DIR /home/go/.nvm
ENV NODE_VERSION 10.17.0
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.1/install.sh | bash \
&& . $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default \
&& npm install -g webpack webpack-cli
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/v$NODE_VERSION/bin:$PATH
This is the current version of this file, but I've been through many, many iterations of frustration where a globally installed npm package is never on the path and thus not conveniently available.
The docker build works fine; it's just that in this iteration of the Dockerfile, webpack is not found when the agent tries running a build.
My question is:
Is a Dockerfile the right place to do things like install yarn, node, webpack etc... ?
If so, how can I ensure everything I install through npm is actually available?
If not, what are the current best practices about this?
Any help, thoughts and anecdotes are fully welcomed and appreciated!
Cheers~!
You should separate gocd-server and gocd-agent into separate containers.
Pull images:
docker pull gocd/gocd-server:v18.10.0
docker pull gocd/gocd-agent-alpine-3.8:v18.10.0
Build and run them, and check that everything is OK. Then connect to a bash shell in the agent container:
docker exec -it gocd-agent bash
Install the binaries using the Alpine package manager:
apk add --no-cache nodejs yarn
Then log out and update the container image. Now you have an image with the needed packages. Also read this article.
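A minimal sketch of that last step, saving the modified container as a new image with docker commit (the new image name and tag are assumptions):
docker commit gocd-agent gocd-agent-node:latest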
You have two options with GoCD agents.
The first one is that the agent uses Docker and creates other containers for whatever the pipeline needs. With this option you can have a lot of agents, and the rules or definitions live in the pipeline; the agent only executes.
The second one is an agent with every kind of program you need already installed. I use this one. For this case, you use a Dockerfile with everything and generate the image for all the agents.
For example, I have an agent with gcloud, kubectl, sonar scanner and jmeter, which tests with sonar before the deploy, then deploys to GCP, and as a last step tests with jmeter after the deploy.
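For the question's node/yarn case, a minimal sketch of that kind of pre-baked agent image, reusing the Alpine agent tag from the previous answer (the package list is just an example, and the go user is the one created by the base agent images):
FROM gocd/gocd-agent-alpine-3.8:v18.10.0
USER root
# install the build tooling the pipelines need
RUN apk add --no-cache nodejs yarn
# drop back to the agent user
USER go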

Install Jenkins plugin from GitHub in Dockerfile

I have a simple Dockerfile where I install Jenkins and some plugins:
FROM jenkins/jenkins:2.169-alpine
USER root
RUN apk update \
&& apk add --no-cache curl docker jq tzdata \
&& rm -rf /var/cache/apk/*
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/plugins.txt
There is now a Jenkins plugin with a custom patch that I need to include. There's already a PR open for it, but it has been sitting unmerged for months and I can't wait any more, so I'd like to add a step to install the plugin from a branch of my GitHub repo.
I found out that once jenkins-cli.jar is available (so, not at build time), one can install a plugin in hpi format by doing:
java -jar /var/jenkins_home/war/WEB-INF/jenkins-cli.jar \
-auth user:password \
-s http://localhost:8080 install-plugin file://<HPI_PATH>
but it cannot work at build time.
If it's not possible in the Dockerfile, is there an alternative?
First build the plugin .hpi locally and then use COPY or ADD in the Dockerfile to add the plugin to the Jenkins Docker image at build time.
For example, add a plugin straight from a URL:
ADD https://updates.jenkins-ci.org/download/plugins/sonar/2.8.1/sonar.hpi /var/jenkins_home/plugins/
Or install the sonar plugin from a locally built hpi file:
COPY sonar.hpi /var/jenkins_home/plugins/
After digging through PRs, I found the solution here: https://github.com/jenkinsci/docker/pull/799
This is not an install from GitHub, but it will work out.
So you just need to add the following at the end of your Dockerfile (as root, not the jenkins user):
RUN /usr/local/bin/install-plugins.sh plugin-name:plugin-version:hpi-url
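For example (the plugin name, version and URL below are placeholders for your own fork's release artifact, not real values):
RUN /usr/local/bin/install-plugins.sh my-plugin:1.2.3:https://github.com/my-org/my-plugin/releases/download/1.2.3/my-plugin.hpi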

How to set up GitLab CE CI to use Docker images for runners

I've now tried for several days to get a runner working in a Docker container. I have a Debian system with GitLab, gitlab-runner and Docker installed. I want to use Docker containers for my runners, because shell executors install everything on my CI machine...
What I have done so far: I installed Docker as described in the GitLab CE docs and ran this command:
gitlab-runner register -n \
--url DOMAIN \
--registration-token TOKEN \
--executor docker \
--description "docker-builder" \
--docker-image "gliderlabs/alpine" \
--docker-privileged
Then I created a test repo to check whether it works, with this .gitlab-ci.yml:
variables:
  # GIT_STRATEGY: fetch # re-uses the project workspace
  GIT_CHECKOUT: "false" # don't checkout the working copy to a revision related to the CI pipeline
  GIT_DEPTH: "3"
cache:
  paths:
    - node_modules/
stages:
  - deploy
before_script:
  - apt-get update
  - apt-get install -y -qq sshpass
  - ls -la
# ======================= Jobs =======================
# Temporarily disable jobs by adding a . (dot) before the job name
ftp-upload:
  stage: deploy
  # environment: Production
  except:
    - testing
  script:
    - rm ./package-lock.json
    - npm install
    - ls -la
    - sshpass -V
    - export SSHPASS=$PASSWORD
    - sshpass -e scp -o stricthostkeychecking=no -r . $USERNAME@$HOST:/Test
  only:
    - master
# ===================== ./Jobs ======================
but I get an error in the GitLab CI console:
Running with gitlab-runner 11.1.0 (081978aa)
on docker-builder 5ce3c211
Using Docker executor with image gliderlabs/alpine ...
Pulling docker image gliderlabs/alpine ...
Using docker image sha256:74a78e860d7b39aa694197a70d4467019b611b80c21d886fcd1bfc04d2e767d4 for gliderlabs/alpine ...
Running on runner-5ce3c211-project-3-concurrent-0 via srvvgit001...
Cloning repository for master with git depth set to 3...
Cloning into '/builds/additive/test'...
Skipping Git checkout
Skipping Git submodules setup
Checking cache for default...
Successfully extracted cache
/bin/sh: eval: line 64: apt-get: not found
$ apt-get update
ERROR: Job failed: exit code 127
I don't know much about these Docker containers, but they seem good for reuse without modifying my CI system. It looks like it is pulling another Alpine image/container here, but haven't I told the GitLab runner to use an existing one?
Hopefully there is someone who can explain to me how this works... I have really tried everything Google gave me.
The Docker image you are using is an Alpine image, which is a minimal Linux distribution.
Alpine Linux does not use apt for package management but apk.
The problem is in your .gitlab-ci.yml's before_script section, where you are trying to run apt.
To solve your issue, replace the use of apt with apk:
before_script:
  - apk update
  - apk add sshpass
  ...
Read more about the Alpine Linux package management here.

Invoke Ansible playbook in Jenkins

I have a Jenkins build and I am trying to invoke an Ansible playbook for an S3 upload. When I execute a post-build script invoking the playbook, I end up with the error below.
Cannot run program "ansible-playbook" (in directory "/var/jenkins_home/workspace/mybuild"): error=2, No such file or directory
The screenshot below shows the Ansible post-build script configuration.
FYI: there is a file (ansibledemo.yml) in my build folder. I tried giving the absolute path (/var/jenkins_home/workspace/mybuild/ansibledemo.yml). Still no go.
When I try running ansible-playbook myplaybook.yml directly in the Jenkins image (terminal), I end up with bash: ansible-playbook: command not found.
When I tried installing Ansible on my Jenkins server, I couldn't execute any installation commands. Please see the screenshot below.
Ansible is not installed on your Jenkins machine; first you need to install Ansible on the Jenkins machine:
On Ubuntu/Debian:
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
On CentOS/RedHat:
sudo yum install epel-release
sudo yum install ansible
After that you will be able to run ansible-playbook.
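Once installed, the post-build step from the question should be able to invoke the playbook directly, e.g. (using the playbook path mentioned in the question):
ansible-playbook /var/jenkins_home/workspace/mybuild/ansibledemo.yml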
As an alternative, you can try installing it with pip. Please see the steps below:
$ virtualenv venv
$ source venv/bin/activate
$ pip install ansible-container[docker,openshift]
You can see more options to install in docs: https://docs.ansible.com/ansible-container/installation.html
But it is always a good option to keep a separate VM or container (e.g. an "ansible-controller") and use it as a Jenkins slave, so that you don't need Ansible on the Jenkins master itself, and Jenkins stays stable without much load.
Download package information from the configured sources.
# apt update
Install ansible
# apt install ansible
That's it.
If you run the official Jenkins container (based on Debian), then the repo containing Ansible is already configured and you don't need apt-add-repository. But you could get apt-add-repository by installing software-properties-common, if you need it later.
dpkg -S apt-add-repository shows that this command belongs to the software-properties-common package.
The error appears because container authors always try to make the image as light as possible and remove the package information.
You don't need sudo, because you are root in the container by default. You become another user only if you switch to one intentionally.
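For example, docker exec lets you pick the user explicitly when opening a shell in a running container (the container name here is a placeholder):
docker exec -u root -it jenkins bash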
Please add the information that you are working in a container to your question.
