Using Docker to create CI server agents

I'm trying to set up a local GoCD CI server using Docker for both the base server and the agents. I can get everything running fine, but issues spring up when I try to make sure the agent containers have everything installed in them that I need to build my projects.
I want to preface this by saying I'm aware I might not be using these technologies correctly, but I don't know much better at the moment. If there are better ways of doing things, I'd love to learn.
To start, I'm using the official GoCD Docker image and that works just fine.
Creating a blank agent also works just fine.
However, one of my projects requires node, yarn and webpack to be built (good ol' React site).
Of course, a standard agent container has nothing but the agent installed on it, so I've had a go at using a Dockerfile to install all the tech I need to build my projects.
FROM gocd/gocd-agent-ubuntu-18.04:v19.11.0
SHELL ["/bin/bash", "-c"]
USER root
RUN apt-get update
RUN apt-get install -y git curl wget build-essential ca-certificates libssl-dev htop openjdk-8-jre python python-pip
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y yarn
# This user is created in the base agent image
USER go
ENV NVM_DIR /home/go/.nvm
ENV NODE_VERSION 10.17.0
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.1/install.sh | bash \
&& . $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default \
&& npm install -g webpack webpack-cli
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/v$NODE_VERSION/bin:$PATH
This is the current version of the file, but I've been through many, many frustrating iterations where a globally installed npm package is never on the PATH and thus not conveniently available.
The docker build works fine; it's just that in this iteration of the Dockerfile, webpack is not found when the agent tries running a build.
My question is:
Is a Dockerfile the right place to do things like install yarn, node, webpack etc... ?
If so, how can I ensure everything I install through npm is actually available?
If not, what are the current best practices about this?
Any help, thoughts and anecdotes are fully welcomed and appreciated!
Cheers~!

You should separate gocd-server and gocd-agent into separate containers.
Pull the images:
docker pull gocd/gocd-server:v18.10.0
docker pull gocd/gocd-agent-alpine-3.8:v18.10.0
Build and run them, and check that everything works. Then open a bash shell in the agent container:
docker exec -it gocd-agent bash
Install the binaries using the alpine package manager.
apk add --no-cache nodejs yarn
Then log out and update the container image. Now you have an image with the needed packages. Also read this article.
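The "update the container image" step is done with docker commit (a sketch; the container name gocd-agent and the target image name are placeholders for whatever you used):

```shell
# Install the tools inside the running agent container...
docker exec -it gocd-agent apk add --no-cache nodejs yarn

# ...then snapshot the container's filesystem as a new image
docker commit gocd-agent my-gocd-agent-node:v18.10.0
```

Note that a Dockerfile (as in the question) is generally preferred over docker commit, because the resulting image is reproducible from source.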

You have two options with GoCD agents.
The first is that the agent uses Docker to create other containers for whatever the pipeline needs. You can have a lot of agents with this option, and the rules and definitions live in the pipeline; the agent only executes.
The second is an agent with every kind of program you need installed. I use this one. In this case, you use a Dockerfile with everything and generate the image for all the agents.
For example, I have an agent with gcloud, kubectl, sonar scanner and jmeter, which tests with sonar before the deploy, then deploys to GCP, and as a last step tests with jmeter after the deploy.
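The second option is essentially what the question's Dockerfile does: bake every tool into one agent image. A minimal sketch (the tool list and versions here are illustrative, not a recommendation):

```dockerfile
FROM gocd/gocd-agent-ubuntu-18.04:v19.11.0
USER root
# Install every build tool the pipelines need into the one agent image
RUN apt-get update \
 && apt-get install -y git curl unzip openjdk-8-jre \
 && rm -rf /var/lib/apt/lists/*
USER go
```

Every agent started from this image can then run any pipeline that needs these tools.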

Related

How to cache deployment image in gitlab-ci?

I'm creating a gitlab-ci deployment stage that requires some more libraries than exist in my image. In this example I'm adding ssh (in the real world, I want to add many more libs):
image: adoptopenjdk/maven-openjdk11
...
deploy:
  stage: deploy
  script:
    - which ssh || (apt-get update -y && apt-get install -y ssh)
    - chmod 600 ${SSH_PRIVATE_KEY}
...
Question: how can I tell the gitlab runner to cache the image that I'm building in the deploy stage and reuse it for all future deployment runs? Because as written, the library installation takes place for each and every deployment, even if nothing changed between runs.
GitLab can only cache files/directories, but because of the way apt works, there is no easy way to tell it to cache installs you've done this way. You also cannot "cache" the image.
There are two options I see:
Create or use a docker image that already includes your dependencies.
FROM adoptopenjdk/maven-openjdk11
RUN apt update && apt install -y foo bar baz
Then build and push the image to Docker Hub, and change the image: in the yaml:
image: membersound/maven-openjdk11-with-deps:latest
OR simply choose an image that already has all the dependencies you want! There are many Docker images out there with useful tools installed. For example, octopusdeploy/worker-tools comes with many runtimes and tools installed (Java, Python, the AWS CLI, kubectl, and much more).
Attempt to cache the deb packages and install from them (beware, this is ugly).
Commit a bash script like the following to a file such as install-deps.sh:
#!/usr/bin/env bash
PACKAGES="wget jq foo bar baz"
if [ ! -d "./.deb_packages" ]; then
  mkdir -p ./.deb_packages
  apt update && apt install --download-only -y ${PACKAGES}
  cp /var/cache/apt/archives/*.deb ./.deb_packages
fi
apt install -y ./.deb_packages/*.deb
This should cause the debian files to be cached in the directory ./.deb_packages. You can then configure gitlab to cache them so you can use them later.
my_job:
  before_script:
    - ./install-deps.sh
  script:
    - ...
  cache:
    paths:
      - ./.deb_packages

How can I run programs with OpenCL on AMD inside a gitlab-ci docker executor

I have a self-hosted GitLab for (still) private projects and a dedicated physical node with an AMD GPU for testing. On this node there is already a gitlab-ci runner with the docker executor.
Is there a way to execute programs with OpenCL and access to the AMD GPU within the docker containers that are created by the gitlab-ci runner?
All I've found until now was Nvidia- and CUDA-related info on this problem (for example How can I get use cuda inside a gitlab-ci docker executor), but nothing useful for the OpenCL/AMD case.
Found the solution by myself in the meantime. It was easier than expected.
The docker image for the gitlab-ci pipeline only needs the AMD GPU driver from the AMD website (https://www.amd.com/en/support).
Example Dockerfile to build the docker image:
FROM ubuntu:18.04
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y gcc g++ opencl-headers ocl-icd-opencl-dev curl apt-utils unzip tar curl xz-utils wget clinfo
RUN cd /tmp &&\
curl --referer https://drivers.amd.com/drivers/linux -O https://drivers.amd.com/drivers/linux/amdgpu-pro-20.30-1109583-ubuntu-18.04.tar.xz &&\
tar -Jxvf amdgpu-pro-20.30-1109583-ubuntu-18.04.tar.xz &&\
cd amdgpu-pro-20.30-1109583-ubuntu-18.04/ &&\
./amdgpu-install -y --headless --opencl=legacy
Depending on your GPU and Linux version you may need a different file than the one in this example. It's also possible that the file no longer exists on the website and you have to look up the newest one.
Besides this, only a small modification to the gitlab-runner config (/etc/gitlab-runner/config.toml) is necessary.
Add devices = ["/dev/dri"] in the [runners.docker] section:
[[runners]]
...
[runners.docker]
...
devices = ["/dev/dri"]
And restart the gitlab runner with gitlab-runner restart.
After this it's possible to execute OpenCL code inside the gitlab-ci docker runner.
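A quick way to verify the setup is a pipeline job that just runs clinfo using the image built above (the image name here is a placeholder for wherever you pushed it):

```yaml
opencl-check:
  image: registry.example.com/amdgpu-opencl:18.04
  script:
    - clinfo   # should list the AMD platform and device if /dev/dri is passed through
```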

Back-off restarting failed container openshift kubernetes

I have a Dockerfile running the Kong API gateway to deploy on OpenShift. It builds okay, but when I check the pods I get Back-off restarting failed container. Here is my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y apt-transport-https curl lsb-core
RUN echo "deb https://kong.bintray.com/kong-deb `lsb_release -sc` main" | tee -a /etc/apt/sources.list
RUN curl -o bintray.key https://bintray.com/user/downloadSubjectPublicKey?username=bintray
RUN apt-key add bintray.key
RUN apt-get update && apt-get install -y kong
COPY kong.conf /etc/kong/
RUN kong migrations bootstrap [-c /etc/kong/kong.conf]
EXPOSE 8000 8443 8001 8444
ENTRYPOINT ["kong", "start", "[-c /etc/kong/kong.conf]"]
Where am I going wrong? Please help me. Thanks in advance.
In order to make Kong start correctly, you need to execute these commands when you have an active Postgres connection:
kong migrations bootstrap && kong migrations up
Also, note that the format of the current Dockerfile is not valid. If you would like to pass options within the ENTRYPOINT, you can write it like this:
ENTRYPOINT ["kong", "start","-c", "/etc/kong/kong.conf"]
Also, you need to remove this line:
RUN kong migrations bootstrap [-c /etc/kong/kong.conf]
Note that the format of the above line is not valid, as RUN expects a normal shell command, so using [] in this case is not correct.
As you deploy to OpenShift, there are several ways to achieve what you need.
You can make use of initContainers, which allow you to execute the required commands before the actual service comes up.
You can check the official Helm chart for Kong to see how it works, or use Helm to install Kong itself.
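For reference, the initContainers approach might look roughly like this in a Deployment spec (the image tag, database host and env values are placeholders; the official Helm chart handles this properly):

```yaml
spec:
  initContainers:
    - name: kong-migrations
      image: kong:latest
      command: ["kong", "migrations", "bootstrap"]
      env:
        - name: KONG_DATABASE
          value: postgres
        - name: KONG_PG_HOST
          value: postgres.default.svc
  containers:
    - name: kong
      image: kong:latest
```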

Why doesn't dockerized CentOS recognize pip?

I want to create a container with Python and a few packages on top of CentOS. I've tried to run several commands inside a raw centos container and everything worked fine; I installed everything I wanted. Then I created a Dockerfile with the same commands executed via RUN, and I'm getting /bin/sh: pip: command not found. What could be wrong? I mean the situation in general: why can everything be executed on the command line but not with RUN? I've tried both variants:
RUN command
RUN command
RUN pip install ...
and
RUN command\
&& command\
&& pip install ...
Commands that I execute:
FROM centos
RUN yum install -y centos-release-scl\
&& yum install -y rh-python36\
&& scl enable rh-python36 bash\
&& pip install django
UPD: Using the full path to pip helped. What's wrong?
You need to install pip first using
yum install python-pip
or if you need python3 (from epel)
yum install python36-pip
When not sure, ask yum:
yum whatprovides /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : #System
Matched from:
Filename : /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : updates
Matched from:
Filename : /usr/bin/pip
python2-pip-18.0-4.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : fedora
Matched from:
Filename : /usr/bin/pip
This output is from Fedora 29, but you should get a similar result on CentOS/RHEL.
UPDATE
From comment
But when I execute the same commands from docker run -ti centos everything is fine. What's the problem?
Maybe your PATH is broken somehow? Can you try full path to pip?
As it has already been mentioned by @rkosegi, it must be a PATH issue. The following seems to work:
FROM centos
ENV PATH /opt/rh/rh-python36/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN yum install -y centos-release-scl
RUN yum install -y rh-python36
RUN scl enable rh-python36 bash
RUN pip install django
I "found" the above PATH by starting a centos container and typing the commands one-by-one (since you've mentioned that it is working).
There is a nice explanation on this, in the slides of BMitch which can be found here: sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#24
Q: Why doesn't RUN work?
Why am I getting ./build.sh is not found?
RUN cd /app/src
RUN ./build.sh
The only part saved from a RUN is the filesystem (as a new layer).
Environment variables, launched daemons, and the shell state are all discarded with the temporary container when pid 1 exits.
Solution: merge multiple lines with &&:
RUN cd /app/src && ./build.sh
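The "fresh shell per RUN" behaviour is easy to reproduce outside Docker: a subshell models one RUN instruction, since its state is thrown away when it exits (plain bash, nothing Docker-specific):

```shell
#!/usr/bin/env bash
start_dir="$(pwd)"

# Two separate RUN instructions: each gets its own shell,
# so the cd in the first one is discarded before the second runs.
( cd /tmp )
after_split="$(pwd)"             # still the original directory

# One RUN with &&: a single shell, so the cd takes effect.
after_merged="$(cd /tmp && pwd)"

echo "split:  $after_split"
echo "merged: $after_merged"
```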
I know this was asked a while ago, but I just had this issue when building a Docker image, and wasn't able to find a good answer quickly, so I'll leave it here for posterity.
Adding the scl enable command wouldn't work for me in my Dockerfile, so I found that you can enable scl packages without the scl command by running:
source /opt/rh/<package-name>/enable.
If I remember correctly, you won't be able to do:
RUN source /opt/rh/<package-name>/enable
RUN pip install <package>
Because each RUN command creates a different layer, and shell sessions aren't preserved, so I just ran the commands together like this:
RUN source /opt/rh/rh-python36/enable && pip install <package>
I think the scl command has issues running in Dockerfiles because scl enable <package> bash will open a new shell inside your current one, rather than adding the package to the path in your current shell.
Edit:
Found that you can add packages to your current shell by running:
source scl_source enable <package>
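In a Dockerfile that might look like this (a sketch, using rh-python36 as in the question; it assumes the image's /bin/sh understands source, which holds on CentOS where /bin/sh is bash):

```dockerfile
FROM centos:7
RUN yum install -y centos-release-scl && yum install -y rh-python36
# scl_source puts the collection on PATH in the *current* shell,
# so it works within a single RUN instruction
RUN source scl_source enable rh-python36 && pip install django
```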

Docker usage with Odoo 10.0

I need to know how to set up Docker to implement a container that could help me run an Odoo 10.0 ERP environment in it.
I'm looking for references or setup guides, and I don't mind if you can paste the CLI below. I'm currently developing on Ubuntu.
Thanks in advance!
@NaNDuzIRa This is quite simple. I suggest that when you want to learn how to do something, even if you need it very fast, you look into the man page of the tool you are trying to use to package your application. In this case, it is Docker.
Create a file named Dockerfile or dockerfile.
Now that you know the OS flavor you want to use, include that at the beginning of the Dockerfile.
Then you can add how you want to install your application in the OS.
Finally, include the installation steps for Odoo, for which I have added a link at the bottom of this post.
#OS of the image, Latest Ubuntu
FROM ubuntu:latest
#Privilege raised to install the application or package as a root user
USER root
#Some packages that will be used for the installation
RUN apt update && apt -y install wget
#installing Odoo
RUN wget -O - https://nightly.odoo.com/odoo.key | apt-key add -
RUN echo "deb http://nightly.odoo.com/10.0/nightly/deb/ ./" >> /etc/apt/sources.list.d/odoo.list
RUN apt-get -y update && apt-get -y install odoo
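With that Dockerfile in place, building and running the container would be something like this (the image name and port mapping are illustrative; 8069 is Odoo's default HTTP port):

```shell
docker build -t odoo10 .
docker run -d --name odoo -p 8069:8069 odoo10
```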
References
Docker
Dockerfile
Odoo
