Modifying docker image to work with gitlab-ci

I have a gitlab pages site that is built inside a docker container. I found a base image that contains 95% of what I want. With my old CI config, I was installing extra packages before the build step. I want to create a new image with these packages installed, and use that image instead. I was able to build and run this image locally, but it no longer runs in GitLab CI. I'm not sure why.
Git repo: https://gitlab.com/hybras/hybras.gitlab.io
CI Config:
image: klakegg/hugo:asciidoctor-ci # old
image: registry.gitlab.com/hybras/hybras.gitlab.io # new

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  SSHPASS: $SSHPASS

pages:
  script:
    - gem install asciidoctor-html5s # remove these package installs in the new version
    - apk add openssh sshpass # remove these package installs in the new version
    - hugo
    - sshpass -p $SSHPASS scp -r public/* $HOST_AND_DIR
  artifacts:
    paths:
      - public
  only:
    - master
Successful Job: Installs the packages, builds the site, scp's it to my mirror
Dockerfile:
FROM klakegg/hugo:asciidoctor
RUN gem install asciidoctor-html5s --no-document \
    && apk --no-cache add openssh sshpass
CMD hugo
Failed CI Run: Error: unknown command "sh" for "hugo"

As can be seen in your pipeline, there is an error:
Error: unknown command "sh" for "hugo"
It means that hugo is not installed on the image you are using. To solve this problem, you can use the official Hugo docker image, or install hugo on the image you are already using.
1. Use the Hugo image
Add the following lines to the stage in which you want to use hugo:
<pipeline_name>:
  image: klakegg/hugo
2. Install Hugo on the existing image
Check out this page to find out how to install Hugo on the different distributions your base image may use.
For example, you can install Hugo on Ubuntu using the following command:
<pipeline_name>:
  before_script:
    - sudo apt-get install hugo
3. Install Hugo in your Dockerfile
RUN apk add hugo # Or something like this!
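Putting option 3 together, a minimal Dockerfile sketch for an Alpine-based image might look like this (illustrative only: hugo ships in Alpine's community repository, so the packaged version may lag behind the latest release):
FROM alpine:3.12
# hugo from the Alpine community repository; version may lag upstream
RUN apk add --no-cache hugo
CMD ["hugo"]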

Related

How to cache deployment image in gitlab-ci?

I'm creating a gitlab-ci deployment stage that requires some libraries that don't exist in my image. In this example, I'm adding ssh (in the real world, I want to add many more libs):
image: adoptopenjdk/maven-openjdk11
...
deploy:
  stage: deploy
  script:
    - which ssh || (apt-get update -y && apt-get install -y ssh)
    - chmod 600 ${SSH_PRIVATE_KEY}
  ...
Question: how can I tell the GitLab runner to cache the image that I'm building in the deploy stage, and reuse it for all future deployment runs? Because as written, the library installation takes place for each and every deployment, even if nothing changed between runs.
GitLab can only cache files/directories, but because of the way apt works, there is no easy way to tell it to cache installs you've done this way. You also cannot "cache" the image.
There are two options I see:
Create or use a docker image that already includes your dependencies.
FROM adoptopenjdk/maven-openjdk11
RUN apt update && apt install -y foo bar baz
Then build and push the image to Docker Hub, and change the image: in the yaml:
image: membersound/maven-openjdk11-with-deps:latest
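The build-and-push step mentioned above might look like the following (assuming you own the membersound namespace on Docker Hub and are already logged in via docker login):
docker build -t membersound/maven-openjdk11-with-deps:latest .
docker push membersound/maven-openjdk11-with-deps:latest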
OR simply choose an image that already has all the dependencies you want! There are many useful docker images out there with useful tools installed. For example octopusdeploy/worker-tools comes with many runtimes and tools installed (java, python, AWS CLI, kubectl, and much more).
Attempt to cache the deb packages and install from them (beware, this is ugly).
Commit a bash script like the following to a file such as install-deps.sh:
#!/usr/bin/env bash
PACKAGES="wget jq foo bar baz"
if [ ! -d "./.deb_packages" ]; then
  # download the packages without installing, then keep a copy for the CI cache
  mkdir -p ./.deb_packages
  apt update && apt --download-only install -y ${PACKAGES}
  cp /var/cache/apt/archives/*.deb ./.deb_packages
fi
apt install -y ./.deb_packages/*.deb
This should cause the .deb files to be cached in the directory ./.deb_packages. You can then configure GitLab to cache that directory so you can reuse it later.
my_job:
  before_script:
    - bash ./install-deps.sh
  script:
    - ...
  cache:
    paths:
      - ./.deb_packages

How to set up GitLab CE CI to use docker images for runners

I've now tried for several days to get a runner working in a docker container. I have a Debian system with GitLab, gitlab-runner, and docker installed. I want to use docker as a container for my runners, because shell executors install everything on my CI machine...
What I have done so far: I installed docker as described in the GitLab CE docs and ran this command:
gitlab-runner register -n \
--url DOMAIN \
--registration-token TOKEN \
--executor docker \
--description "docker-builder" \
--docker-image "gliderlabs/alpine" \
--docker-privileged
Then I created a test repo to check whether it is working, with this .gitlab-ci.yml:
variables:
  # GIT_STRATEGY: fetch # re-uses the project workspace
  GIT_CHECKOUT: "false" # don't checkout the working copy to a revision related to the CI pipeline
  GIT_DEPTH: "3"

cache:
  paths:
    - node_modules/

stages:
  - deploy

before_script:
  - apt-get update
  - apt-get install -y -qq sshpass
  - ls -la

# ======================= Jobs =======================
# Temporarily disable jobs by adding a . (dot) before the job name

ftp-upload:
  stage: deploy
  # environment: Production
  except:
    - testing
  script:
    - rm ./package-lock.json
    - npm install
    - ls -la
    - sshpass -V
    - export SSHPASS=$PASSWORD
    - sshpass -e scp -o stricthostkeychecking=no -r . $USERNAME@$HOST:/Test
  only:
    - master
# ===================== ./Jobs ======================
but I get an error in the GitLab CI console:
Running with gitlab-runner 11.1.0 (081978aa)
on docker-builder 5ce3c211
Using Docker executor with image gliderlabs/alpine ...
Pulling docker image gliderlabs/alpine ...
Using docker image sha256:74a78e860d7b39aa694197a70d4467019b611b80c21d886fcd1bfc04d2e767d4 for gliderlabs/alpine ...
Running on runner-5ce3c211-project-3-concurrent-0 via srvvgit001...
Cloning repository for master with git depth set to 3...
Cloning into '/builds/additive/test'...
Skipping Git checkout
Skipping Git submodules setup
Checking cache for default...
Successfully extracted cache
$ apt-get update
/bin/sh: eval: line 64: apt-get: not found
ERROR: Job failed: exit code 127
I don't know much about these docker containers, but they seem good for reuse without modifying my CI system. It looks like it is pulling another alpine image/container here, but didn't I tell the GitLab runner to use an existing one?
Hopefully there is someone who can explain to me how this works... I have really tried everything Google gave me.
The Docker image you are using is an Alpine image, which is a minimal Linux distribution.
Alpine Linux does not use apt for package management but apk.
The problem is in your .gitlab-ci.yml's before_script section, where you are trying to run apt.
To solve your issue, replace the use of apt with apk:
before_script:
  - apk update
  - apk add sshpass
...
Read more about Alpine Linux package management here.
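If the same before_script has to work on both Debian- and Alpine-based images, one crude but workable pattern, in the spirit of the which ssh || trick from the earlier question, is to try one package manager and fall back to the other:
before_script:
  # try apk (Alpine) first, fall back to apt-get (Debian/Ubuntu)
  - apk add --no-cache sshpass || (apt-get update && apt-get install -y sshpass)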

How to use Dockerfile in Gitlab CI

Using gitlab-ci for my node/react app, I'm trying to use phusion/passenger-nodejs as the base docker image.
I can specify this easily in .gitlab-ci.yml:
image: phusion/passenger-nodejs:latest

variables:
  HOME: /root

cache:
  paths:
    - node_modules/

stages:
  - build
  - test
  - deploy

set_environment:
  stage: build
  script:
    - npm install
  tags:
    - docker

test_node:
  stage: test
  script:
    - npm install
    - npm test
  tags:
    - docker
However, Phusion Passenger expects you to make configuration changes (e.g. python support, using their special init process, etc.) in the Dockerfile.
#FROM phusion/passenger-ruby24:<VERSION>
#FROM phusion/passenger-jruby91:<VERSION>
FROM phusion/passenger-nodejs:<VERSION>
#FROM phusion/passenger-customizable:<VERSION>
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# If you're using the 'customizable' variant, you need to explicitly opt-in
# for features.
#
# N.B. these images are based on https://github.com/phusion/baseimage-docker,
# so anything it provides is also automatically on board in the images below
# (e.g. older versions of Ruby, Node, Python).
#
# Uncomment the features you want:
#
# Ruby support
#RUN /pd_build/ruby-2.0.*.sh
#RUN /pd_build/ruby-2.1.*.sh
#RUN /pd_build/ruby-2.2.*.sh
#RUN /pd_build/ruby-2.3.*.sh
#RUN /pd_build/ruby-2.4.*.sh
#RUN /pd_build/jruby-9.1.*.sh
# Python support.
RUN /pd_build/python.sh
# Node.js and Meteor standalone support.
# (not needed if you already have the above Ruby support)
RUN /pd_build/nodejs.sh
# ...put your own build instructions here...
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Is there a way to use a Dockerfile with gitlab-ci? Is there a good workaround other than apt-get install and adding shell scripts?
Yes, create a second GitLab repository in which you place your Dockerfile. There, add a .gitlab-ci.yml file with a script command that builds your modified image and pushes it to your private registry or the GitLab embedded Docker registry, e.g.:
script:
  - docker build -t myregistry:5000/mymodified .
  - docker push myregistry:5000/mymodified
Inside your other GitLab repository, change the image: line accordingly:
image: myregistry:5000/mymodified
Information on the GitLab embedded Docker registry can be found here.
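A sketch of what the image repository's .gitlab-ci.yml could look like when pushing to the GitLab embedded registry (this relies on GitLab's predefined CI_REGISTRY* variables and the docker:dind service; adjust to your setup):
build-image:
  image: docker:latest
  services:
    - docker:dind
  script:
    # log in with the job's ephemeral registry credentials
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"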

Gitlab.com runners: How do I install and run software from an external repos?

I'm pretty new to Gitlab.com's CI and to docker.
I have a simple python pelican static blog that builds with a simple .gitlab-ci.yml
image: python:2.7-alpine

pages:
  script:
    - pip install -r requirements.txt
    - pelican -s publishconf.py
  artifacts:
    paths:
      - public
So I see that it specifies a python docker image, uses pip to install various python packages, then runs pelican, all within that image.
Now my issue is that I want to run my own version of pelican. I modified my requirements.txt file to point at my own branch of pelican, but this fails:
beautifulsoup4
markdown
smartypants
typogrify
git+https://github.com/jerryasher/pelican.git#hidden-cats
pelican-fontawesome
pelican-gist
pelican-jsfiddle
pelican-neighbors
Now when it builds, Gitlab's Runner tells me:
Running with gitlab-ci-multi-runner 1.9.0 (82714ae)
Using Docker executor with image python:2.7-alpine ...
Pulling docker image python:2.7-alpine ...
Running on runner-e11ae361-project-1654117-concurrent-0 via runner-e11ae361-machine-1484613050-ce975c76-digital-ocean-4gb...
Cloning repository...
Cloning into '/builds/jerrya/ashercodes'...
Checking out 532f8b38 as master...
$ pip install -r requirements.txt
Collecting git+https://github.com/jerryasher/pelican.git#hidden-cats (from -r requirements.txt (line 5))
Cloning https://github.com/jerryasher/pelican.git (to hidden-cats) to /tmp/pip-72xxqt-build
Error [Errno 2] No such file or directory while executing command git clone -q https://github.com/jerryasher/pelican.git /tmp/pip-72xxqt-build
Cannot find command 'git'
ERROR: Build failed: exit code 1
Okay, git doesn't seem to be present. Indeed, prior to the above attempt, I had added a line to the .gitlab-ci.yml script telling it to use git to clone that repo locally, and that failed too, because ... no git.
(The docker image I am using, python:2.7-alpine, also seems to have no apt-get.)
Do I need to build my own docker image containing git and python and anything else that I require, or is there some "usual" way to have a Gitlab.com runner pull in an external program from either a git repo, or some typical linux package repository?
And if I can't do this, is that in this case the fault of the runner, or the fault of the docker image?
You can just install git (and any other package) if you need it. Your own image will be faster but it's not needed.
pages:
  script:
    - apk --update add git openssh
    - pip install -r requirements.txt
    ...
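If you do want the faster custom image instead, a minimal Dockerfile sketch could be (build it, push it to a registry, and point image: at it):
FROM python:2.7-alpine
# bake git (and openssh) into the image so pip can clone from GitHub
RUN apk --update add git openssh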

When building Jenkins in Docker plugins fail to install

I have a Dockerfile for a custom Jenkins master like so:
FROM jenkins
MAINTAINER me
USER root

RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state

RUN apt-get update \
    && apt-get install -y sudo \
    && apt-get install -y vim \
    && rm -rf /var/lib/apt/lists/*

RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers

USER jenkins

# COPY plugins.txt /usr/share/jenkins/plugins.txt
# RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt

ENV JAVA_OPTS="-Xmx8192m"
ENV JENKINS_OPTS="--handlerCountStartup=100 --handlerCountMax=300"

RUN /usr/local/bin/install-plugins.sh git:2.6.0
Everything works fine until the RUN /usr/local/bin/install-plugins.sh git:2.6.0 line. I get an error installing the plugins:
Creating initial locks...
Analyzing war...
Downloading plugins...
Downloading plugin: git from https://updates.jenkins.io/download/plugins/git/2.6.0/git.hpi
Downloading plugin: git-plugin from https://updates.jenkins.io/download/plugins/git-plugin/2.6.0/git-plugin.hpi
Failed to download plugin: git or git-plugin
WAR bundled plugins:
Installed plugins:
*:
Some plugins failed to download!
Not downloaded: git
The command '/bin/sh -c /usr/local/bin/install-plugins.sh git:2.6.0' returned a non-zero code: 1
Am I doing something wrong or is this an issue with Jenkins/Docker?
For those who are pulling the jenkins image from Docker Hub, don't pull:
docker pull jenkins
or
docker pull jenkinsci/jenkins
Rather, pull the latest version using:
docker pull jenkins/jenkins
This is the latest one according to https://jenkins.io/blog/2018/12/10/the-official-Docker-image/
Your Dockerfile works for me; it installs all plugins and builds the image successfully:
Analyzing war...
Downloading plugins...
Downloading plugin: git from https://updates.jenkins.io/download/plugins/git/2.6.0/git.hpi
> git depends on workflow-scm-step:1.14.2,mailer:1.17,matrix-project:1.7.1,ssh-credentials:1.12,parameterized-trigger:2.4;resolution:=optional,scm-api:1.2,token-macro:1.11;resolution:=optional,promoted-builds:2.27;resolution:=optional,credentials:2.1.4,git-client:1.21.0
Downloading plugin: workflow-scm-step from https://updates.jenkins.io/download/plugins/workflow-scm-step/latest/workflow-scm-step.hpi
...
Removing intermediate container 4f895c203944
Successfully built 31d58d1f586f
Try docker build --no-cache in case there's an issue with one of the layers in your image cache, or set up an automated build on Docker Hub and build it on Docker's servers.
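For example (the tag my-jenkins is just a placeholder):
# rebuild every layer from scratch, ignoring the local image cache
docker build --no-cache -t my-jenkins .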
I recall having problems installing with that script myself. Instead, I used the following:
RUN install-plugins.sh \
disable-failed-job \
disk-usage \
greenballs \
...
And hopefully it doesn't make a difference for this, but I have my plugin install inside the root portion of my Dockerfile, before dropping back to running commands as USER jenkins.
Dockerfile
FROM jenkins/jenkins:latest
ENV CURL_OPTIONS -sSfLk
ENV JENKINS_OPTS --httpPort=-1
The curl timeouts for downloading plugins were insufficient in some cases. That was just fixed in image 2.19.1, and the timeout is now configurable too, using CURL_CONNECTION_TIMEOUT and other options.
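On 2.19.1 or later you could then, for example, raise the timeout in the Dockerfile (value in seconds; 60 is an arbitrary choice):
ENV CURL_CONNECTION_TIMEOUT 60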
I had the same problem on OS X.
In my case the problem was caused by a bad DNS configuration (obtained via DHCP). When I changed the DNS to Google's DNS 8.8.8.8, it all worked perfectly.
I encountered error messages such as:
Failed to resolve host name "ftp.icm.edu.pl". Perhaps you need to configure HTTP proxy
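On a Linux host, one way to pin Docker's DNS is the daemon configuration file (a sketch; Docker for Mac exposes the same setting through its preferences UI instead), followed by a daemon restart (sudo service docker restart):
# /etc/docker/daemon.json
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}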
I had a very similar issue, and the solution for me was to specify the proxy within the Dockerfile prior to the plugin install. Below is a snippet of my Dockerfile:
FROM jenkins:latest
MAINTAINER Jose Estrada
USER root
ENV JAVA_OPTS="--handlerCountStartup=100 --handlerCountMax=300 --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war -Dhttps.proxyHost=proxy-wsa.esl.cisco.com -Dhttps.proxyPort=80"
ENV http_proxy <PROXY Settings>
ENV https_proxy <PROXY Settings>
RUN /usr/local/bin/install-plugins.sh cisco-spark-notifier:latest
This could be a DNS issue. Please restart the docker daemon and try again (sudo service docker restart).
