How to cache deployment image in gitlab-ci? - docker

I'm creating a gitlab-ci deployment stage that requires some more libraries than exist in my image. In this example, I'm adding ssh (in the real world, I want to add many more libs):
image: adoptopenjdk/maven-openjdk11
...
deploy:
  stage: deploy
  script:
    - which ssh || (apt-get update -y && apt-get install -y ssh)
    - chmod 600 ${SSH_PRIVATE_KEY}
  ...
Question: how can I tell gitlab runner to cache the image that I'm building in the deploy stage, and reuse it for all deployment runs in future? Because as written, the library installation takes place for each and every deployment, even if nothing changed between runs.

GitLab can only cache files/directories, but because of the way apt works, there is no easy way to tell it to cache installs you've done this way. You also cannot "cache" the image.
There are two options I see:
Create or use a docker image that already includes your dependencies.
FROM adoptopenjdk/maven-openjdk11
RUN apt update && apt install -y foo bar baz
Then build and push the image to Docker Hub, and change the image: in the yaml:
image: membersound/maven-openjdk11-with-deps:latest
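For the build-and-push step mentioned above, a minimal sketch (assuming a Docker Hub account; membersound/maven-openjdk11-with-deps is the repository name used in the yaml):
docker build -t membersound/maven-openjdk11-with-deps:latest .
docker push membersound/maven-openjdk11-with-deps:latest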
OR simply choose an image that already has all the dependencies you want! There are many docker images out there with useful tools preinstalled. For example octopusdeploy/worker-tools comes with many runtimes and tools installed (java, python, AWS CLI, kubectl, and much more).
Attempt to cache the deb packages and install from them (beware, this is ugly).
Commit a bash script like the following to a file such as install-deps.sh:
#!/usr/bin/env bash
# Download the packages once and keep the .deb files in the repo workspace so GitLab can cache them.
PACKAGES="wget jq foo bar baz"
if [ ! -d "./.deb_packages" ]; then
  mkdir -p ./.deb_packages
  apt update && apt install --download-only -y ${PACKAGES}
  cp /var/cache/apt/archives/*.deb ./.deb_packages
fi
apt install -y ./.deb_packages/*.deb
This should cause the Debian package files to be cached in the directory ./.deb_packages. You can then configure GitLab to cache that directory so the packages can be reused later:
my_job:
  before_script:
    - bash ./install-deps.sh
  script:
    - ...
  cache:
    paths:
      - ./.deb_packages
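If your GitLab version supports cache:key:files, you can additionally key the cache on the script itself, so the cached .deb files are refreshed whenever the package list changes (a sketch):
my_job:
  cache:
    key:
      files:
        - install-deps.sh
    paths:
      - ./.deb_packages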

Related

Modifying docker image to work with gitlab-ci

I have a gitlab pages site that is built inside a docker container. I found a base image that contains 95% of what I want. With my old CI, I was installing extra packages before the build step. I want to create a new image with these packages installed, and use that image. I was able to build and run this image, but it no longer runs in GitLab CI. I'm not sure why.
Git repo: https://gitlab.com/hybras/hybras.gitlab.io
CI Config:
image: klakegg/hugo:asciidoctor-ci # old
image: registry.gitlab.com/hybras/hybras.gitlab.io # new
variables:
  GIT_SUBMODULE_STRATEGY: recursive
  SSHPASS: $SSHPASS
pages:
  script:
    - gem install asciidoctor-html5s # remove these package installs in the new version
    - apk add openssh sshpass # remove these package installs in the new version
    - hugo
    - sshpass -p $SSHPASS scp -r public/* $HOST_AND_DIR
  artifacts:
    paths:
      - public
  only:
    - master
Successful Job: Installs the packages, builds the site, scp's it to my mirror
Dockerfile:
FROM klakegg/hugo:asciidoctor
RUN gem install asciidoctor-html5s --no-document \
&& apk --no-cache add openssh sshpass
CMD hugo
Failed CI Run: Error: unknown command "sh" for "hugo"
As can be seen in your pipeline, there is an error:
Error: unknown command "sh" for "hugo"
It means that hugo is not installed on the image you have been using. To solve this problem, you can either use the official Hugo docker image or install hugo on the image you are using.
1. Using Hugo image:
Add the following lines to the stage in which you want to use hugo:
<pipeline_name>:
  image: klakegg/hugo
2. Install Hugo on existing image
Check out this page to find out how to install Hugo on the different distributions your base image might use.
For example, you can install Hugo on Ubuntu using the following command:
<pipeline_name>:
  before_script:
    - sudo apt-get install hugo
3. Install Hugo in your Dockerfile
RUN apk add hugo # Or something like this!

How can I run programs with OpenCL on AMD inside a gitlab-ci docker executor

I have a self-hosted GitLab for (still) private projects and a dedicated physical node for testing with an AMD GPU. On this node there is already a gitlab-ci runner with the docker executor.
Is there a way to execute programs with OpenCL, with access to the AMD GPU, within the docker containers created by the gitlab-ci runner?
All I have found so far was Nvidia- and CUDA-related info on this problem (for example this: How can I get use cuda inside a gitlab-ci docker executor), but nothing useful for the OpenCL and AMD case.
I found the solution myself in the meantime. It was easier than expected.
The docker image for the gitlab-ci pipeline only needs the AMD GPU driver from the AMD website (https://www.amd.com/en/support).
Example-Dockerfile to build the docker images:
FROM ubuntu:18.04
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y gcc g++ opencl-headers ocl-icd-opencl-dev curl apt-utils unzip tar xz-utils wget clinfo
RUN cd /tmp && \
    curl --referer https://drivers.amd.com/drivers/linux -O https://drivers.amd.com/drivers/linux/amdgpu-pro-20.30-1109583-ubuntu-18.04.tar.xz && \
    tar -Jxvf amdgpu-pro-20.30-1109583-ubuntu-18.04.tar.xz && \
    cd amdgpu-pro-20.30-1109583-ubuntu-18.04/ && \
    ./amdgpu-install -y --headless --opencl=legacy
Depending on the GPU and Linux version you use, you may need a different file than the one in this example. It's also possible that this exact file no longer exists on the website and you have to pick the newest one.
Besides this, only a small modification of the gitlab-runner config (/etc/gitlab-runner/config.toml) is necessary.
Add devices = ["/dev/dri"] in the docker runner section:
[[runners]]
  ...
  [runners.docker]
    ...
    devices = ["/dev/dri"]
Then restart the gitlab runner with gitlab-runner restart.
After this it's possible to execute OpenCL code inside the gitlab-ci docker runner.
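For reference, a minimal .gitlab-ci.yml sketch that uses such an image (the image path is a placeholder for wherever you push the image built from the Dockerfile above) and checks that the GPU is visible with clinfo:
opencl_test:
  image: registry.example.com/my-group/opencl-amd:latest  # placeholder image path
  script:
    - clinfo                  # should list the AMD platform and device
    - ./run-opencl-tests.sh   # placeholder for your own OpenCL build/test script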

Using docker to create CI server agents

I'm trying to set up a local GoCD CI server using docker for both the base server and agents. I can get everything running fine, but issues spring up when I try make sure the agent containers have everything installed in them that I need to build my projects.
I want to preface this with I'm aware that I might not be using these technologies correctly, but I don't know much better atm. If there are better ways of doing things, I'd love to learn.
To start, I'm using the official GoCD docker image and that works just fine.
Creating a blank agent also works just fine.
However, one of my projects requires node, yarn and webpack to be built (good ol' react site).
Of course a standard agent container has nothing but the agent installed on it, so I've had a shot at using a Dockerfile to install all the tech I need to build my projects.
FROM gocd/gocd-agent-ubuntu-18.04:v19.11.0
SHELL ["/bin/bash", "-c"]
USER root
RUN apt-get update
RUN apt-get install -y git curl wget build-essential ca-certificates libssl-dev htop openjdk-8-jre python python-pip
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y yarn
# This user is created in the base agent image
USER go
ENV NVM_DIR /home/go/.nvm
ENV NODE_VERSION 10.17.0
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.1/install.sh | bash \
&& . $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default \
&& npm install -g webpack webpack-cli
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/v$NODE_VERSION/bin:$PATH
This is the current version of this file, but I've been through many, many iterations of frustration where a globally installed npm package is never on the path and thus not conveniently available.
The docker build works fine; it's just that in this iteration of the Dockerfile, webpack is not found when the agent tries to run a build.
My question is:
Is a Dockerfile the right place to do things like install yarn, node, webpack etc... ?
If so, how can I ensure everything I install through npm is actually available?
If not, what are the current best practices about this?
Any help, thoughts and anecdotes are fully welcomed and appreciated!
Cheers~!
You should run gocd-server and gocd-agent in separate containers.
Pull the images:
docker pull gocd/gocd-server:v18.10.0
docker pull gocd/gocd-agent-alpine-3.8:v18.10.0
Build and run them, and check that everything is OK. Then open a bash shell in the agent container:
docker exec -it gocd-agent bash
Install the binaries using the alpine package manager.
apk add --no-cache nodejs yarn
Then log out and update the container image. Now you have an image with the needed packages. Also read this article.
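The answer above doesn't name the exact command for persisting that change; one way (a sketch, assuming the agent container is named gocd-agent and my-registry is a placeholder) is docker commit, which snapshots the modified container into a new image:
docker commit gocd-agent my-registry/gocd-agent-node:latest
docker push my-registry/gocd-agent-node:latest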
You have two options with GoCD agents.
The first one is that the agent uses docker and creates other containers for whatever the pipeline needs. With this option you can have a lot of agents, and the rules or definitions live in the pipeline; the agent only executes.
The second one is an agent with every program you need already installed. I use this one. For this case, you use a Dockerfile with everything in it and generate the image for all the agents, as in the sketch below.
For example, I have an agent with gcloud, kubectl, sonar scanner and jmeter, which tests with sonar before the deploy, then deploys to GCP, and as a last step tests with jmeter after the deploy.
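A sketch of that second approach for the agent image from the question (not the answerer's exact image; the NodeSource setup script and the Node 10.x version are illustrative choices). Installing Node.js system-wide keeps globally installed npm packages on the default PATH, with no nvm involved:
FROM gocd/gocd-agent-ubuntu-18.04:v19.11.0
USER root
# Install Node.js and yarn system-wide so global npm packages land on the default PATH
RUN apt-get update && apt-get install -y curl gnupg ca-certificates \
    && curl -fsSL https://deb.nodesource.com/setup_10.x | bash - \
    && apt-get install -y nodejs \
    && npm install -g yarn webpack webpack-cli
# Switch back to the user created in the base agent image
USER go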

Missing installed dependencies when docker image is used

Here is my Dockerfile
FROM node:10
RUN apt-get -qq update && apt-get -qq -y install bzip2
RUN yarn global add @bluebase/cli && bluebase plugins:add @bluebase/cli-expo && bluebase plugins:add @bluebase/cli-web
RUN bluebase plugins
When the docker file is built it installs all dependencies, and the last command RUN bluebase plugins outputs the list of plugins installed. But when this image is pushed and used in github actions, bluebase is available globally but no plugins are installed. What am I doing wrong?
Github Workflow
name: Development CI
on:
  push:
    # Sequence of patterns matched against refs/heads
    branches:
      - '*'       # Push events on all branches
      - '*/*'
      - '!master' # Exclude master
      - '!next'   # Exclude next
      - '!alpha'  # Exclude alpha
      - '!beta'   # Exclude beta
jobs:
  web-deploy:
    container:
      image: hashimsohail/bluebase-image
    name: Deploy Web
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Check BlueBase
        run: bluebase # Outputs list of commands available with bluebase
      - name: Check BlueBase Plugins
        run: bluebase plugins # Outputs no plugins installed
This was a tricky problem! Here is the solution that worked for me. I'll try and explain why below.
jobs:
  web-deploy:
    container:
      image: hashimsohail/bluebase-image
    name: Deploy Web
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Check BlueBase
        run: bluebase
      - name: Check BlueBase Plugins
        run: HOME=/root bluebase plugins
      - name: Check web plugin
        run: HOME=/root bluebase web:build --help
Background
Firstly the Docker image. The command bluebase plugins:add seems to be very dependent on the $HOME environment variable. Your Docker image is built as the root user, so $HOME is /root. The bluebase plugins:add command installs plugin dependencies at $HOME/.cache/@bluebase, so they end up at /root/.cache/@bluebase.
Now the jobs.<id>.container feature. When your container is run, some rather complicated Docker networking and volume mounting takes place. One of those mounts is -v "/home/runner/work/_temp/_github_home":"/github/home". This mounts local files from the host, including a copy of your checked-out repository, into the container. Then it changes $HOME to point to /github/home.
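A quick way to observe this from inside a job is a throwaway diagnostic step (a sketch, not part of the original workflow) added under steps::
- name: Inspect HOME
  run: |
    echo "HOME is $HOME"
    ls -la /github/home
    ls -la /root/.cache || true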
Problem
The reason bluebase plugins doesn't work is because it depends on $HOME pointing to /root but now GitHub Actions has changed it to /github/home.
Solutions
A solution I tried was to install the plugins at /github/home instead of /root in the Docker image.
FROM node:10
RUN apt-get -qq update && apt-get -qq -y install bzip2
RUN mkdir -p /github/home
ENV HOME /github/home
RUN yarn global add @bluebase/cli && bluebase plugins:add @bluebase/cli-expo && bluebase plugins:add @bluebase/cli-web
RUN bluebase plugins
The problem with this is that the volume mount that GitHub Actions creates overwrites the /github/home directory. So then I tried a few tricks like symlinks or moving the .cache/@bluebase directory around to avoid it being clobbered by the mount. None of those worked.
So the only solution seemed to be changing $HOME back to /root. This should NOT be done permanently in the workflow because GitHub Actions depends on HOME=/github/home to work correctly. So the solution is to set it temporarily for each command.
HOME=/root bluebase web:build --help
Takeaway
The main takeaway from this is that any tooling pre-built in a container that relies on $HOME pointing to a specific location may not work correctly when used in the jobs.<container_id>.container syntax.
I do not think the issue is with the image; it's easy to confirm on a local image, and you will see that the plugin is available in the Docker image.
Just try to run
docker build -t plugintest .
#then run the image on local system to verify plugin
docker run -it --rm --entrypoint "/bin/sh" plugintest -c "bluebase plugins"
Seems like the issue is with your YML config file.
image: hashimsohail/bluebase-image
name: Deploy Web
runs-on: ubuntu-latest
This line runs-on: ubuntu-latest does not make sense; I think it should be
runs-on: hashimsohail/bluebase-image.

Separate Dockerfile for dev and prod

I am new to Docker, so please do not blame me :)
Is there a way to create two different Dockerfiles that inherit from one?
For example, we have two environments: develop and production. Their base is the same:
FROM gcc
# it's just an example which shows the same base packets for both environment
RUN apt install lib-boost
For "develop" I have to install some utilities like gdb, valgrind etc.
For "production" I have to build an application. It thought to use "multi stage builds", but it runs steps in Dockerfile consistently. How I should do if I do not want to build an application in "develop"?
The first build the base image:
build -t base_image .
And then for each Dockerfile use it?
# for develop
FROM base_image
RUN apt install gdb
# for prod
FROM base_image
RUN make
Here is an example I'm currently using.
Base image Dockerfile:
FROM python:3.6-slim as base
RUN apt update
RUN apt install --no-install-recommends -y git-core build-essential \
&& apt autoclean
# ...
Prod image Dockerfile:
FROM your-registry/base:0.0.0 as prod
# your code
# ...
Hope it'll be helpful for you.
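Following the same pattern, the develop image could extend the same base with the debugging tools from the question (a sketch; the registry path and tag simply mirror the prod example above):
FROM your-registry/base:0.0.0 as dev
# develop-only tooling; the production build steps never appear in this image
RUN apt update \
    && apt install --no-install-recommends -y gdb valgrind \
    && apt autoclean
# ...
Each environment then gets its own small Dockerfile, and the shared layers come from the base image.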
