GitLab: running stages in a Docker image

I am new to GitLab and I am trying to set up a CI pipeline where I want to run my Terraform scripts from inside a Docker image, just to make sure I have all the necessary base tooling already installed and built.
Currently, the CI pipeline I came up with after some investigation of the official docs looks something like this:
stages:
  - terraform:check

.base-terraform:
  image:
    name: "hashicorp/terraform"
    entrypoint: [""]
  before_script:
    - export GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS}
    - terraform version

tf-fmt:
  stage: terraform:check
  extends: .base-terraform
  script:
    - make fmt
  needs: []
With the above YAML file, it downloads the latest version of Terraform (>= 0.15), which I don't want, and I also need other dependencies such as make. So I was wondering whether there is a way I can build my own custom image and use it as the extends base in my stages, so that all the necessary dependencies are available for the CI/CD to run.
My Dockerfile looks like this:
FROM python:3.7-slim
RUN apt-get update && apt-get install -y curl git-core curl build-essential openssl libssl-dev unzip wget
RUN wget -O terraform.zip https://releases.hashicorp.com/terraform/0.13.0/terraform_0.13.0_linux_amd64.zip && \
unzip terraform.zip && \
mv terraform /usr/local/bin/
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl && \
chmod +x ./kubectl && \
mv ./kubectl /usr/local/bin/kubectl
WORKDIR /src
COPY . .
Am I thinking in the right direction? Or is there a much simpler way in GitLab? I have accomplished similar tasks with Buildkite, which was the CI tool I used in my past experience.
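A minimal sketch of that idea, assuming the custom image built from the Dockerfile above is pushed to the project's GitLab container registry under a made-up name (ci-terraform) and that the runner is set up for docker-in-docker:

stages:
  - build-image
  - terraform:check

build-ci-image:
  stage: build-image
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    # publish the custom tool image to the project's registry
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/ci-terraform:latest" .
    - docker push "$CI_REGISTRY_IMAGE/ci-terraform:latest"

.base-terraform:
  # jobs extending this run inside the custom image, with terraform, make and kubectl preinstalled
  image: "$CI_REGISTRY_IMAGE/ci-terraform:latest"
  before_script:
    - export GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS}
    - terraform version

tf-fmt:
  stage: terraform:check
  extends: .base-terraform
  script:
    - make fmt
  needs: ["build-ci-image"]

Whether the docker-in-docker part works as shown depends on how your runner is configured (privileged mode or a mounted Docker socket); the essential point is only that the Terraform jobs pull a prebuilt image instead of hashicorp/terraform.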

Related

Dockerfile - Cannot copy files that are NOT located in the same directory as the Dockerfile

My structure is as follows:
Code
    package.json
    all the other directories and files of my project
Dockerfile
.gitlab-ci.yml
Read Me
When I try to create a Docker image using the Dockerfile, the image doesn't work: it always says that the Code folder does not exist. The error does not show up during the build, but it does when I run the image.
Here is my Dockerfile:
FROM node:18.1.0-buster-slim
RUN apt-get update && apt-get -y upgrade
WORKDIR /code
EXPOSE 8443
COPY /code/package.json /code/package.json // I believe the error is here
RUN npm install
COPY /code/. /code // I believe the error is here
# Java Config
RUN echo 'deb http://ftp.debian.org/debian stretch-backports main' | tee /etc/apt/sources.list.d/stretch-backports.list
RUN apt-get update && apt-get -y upgrade && \
apt-get install -y openjdk-11-jre-headless && \
apt-get clean;
CMD ["node","app.js"]
I have also tried to put the Dockerfile inside the Code folder, which is more appropriate, but I did not know how to reference that in gitlab-ci.yml. The only solution that worked for me was extracting all the files and folders inside Code and putting them in the root folder, but that does not fit my current structure.
This is also my YAML config, in case you would like to have a look:
stages:
  - build-docker-image

docker-build:
  # Use the official docker image.
  image: docker:20.10.16
  stage: build-docker-image
  only:
    - dev
  tags:
    - builder
  services:
    - name: docker:20.10.16-dind
  before_script:
    - something here...
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag=""
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = 'latest'"
      else
        tag=":$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"
You try to copy /code/package.json with an absolute path. You should use a relative path for the source. I like using relative paths for the destination as well, like this
FROM node:18.1.0-buster-slim
RUN apt-get update && apt-get -y upgrade
WORKDIR /code
EXPOSE 8443
COPY code/package.json .
RUN npm install
COPY code/. .
# Java Config
RUN echo 'deb http://ftp.debian.org/debian stretch-backports main' | tee /etc/apt/sources.list.d/stretch-backports.list
RUN apt-get update && apt-get -y upgrade && \
apt-get install -y openjdk-11-jre-headless && \
apt-get clean;
CMD ["node","app.js"]

Building a Docker image with Qt5 compiled with MinGW works in a container run from the "docker:latest" image, but fails in GitLab CI

I want to prepare a Docker image with Qt5 built with MinGW. Part of the process is building Qt 5.14.0 with MinGW, and that is the part where it fails.
Building on my machine.
There weren't any problems when I pulled the docker:latest image on my PC, ran a container from it, and built my image in that container. It worked fine.
Building in the GitLab CI pipeline.
When I pushed the Dockerfile to GitLab, where it is built in a container from the same docker:latest image, it fails to build Qt with the following error message:
Could not find qmake spec ''.
Error processing project file: /root/src/qt-everywhere-src-5.14.0
CI script:
stages:
  - deploy

variables:
  CONTAINER_NAME: "qt5-mingw"
  PORT: "5000"

image: docker:latest

build-snapshot:
  stage: deploy
  tags:
    - docker
    - colo
  environment:
    name: snapshot
    url: https://somedomain.com/artifactory/#/artifacts/qt5-mingw
  before_script:
    - docker login -u ${ARTIFACT_USER} -p ${ARTIFACT_PASS} somedomain.com:${PORT}
  script:
    - docker build -f Dockerfile -t ${CONTAINER_NAME} .
    - export target_version=$(docker inspect --format='{{index .Config.Labels "com.domain.version" }}' ${CONTAINER_NAME})
    - docker tag ${CONTAINER_NAME} dsl.domain.com:${PORT}/${CONTAINER_NAME}:${target_version}
    - docker tag dsl.domain.com:${PORT}/${CONTAINER_NAME}:${target_version} dsl.domain.com:${PORT}/${CONTAINER_NAME}:latest
    - docker push dsl.domain.com:${PORT}/${CONTAINER_NAME}:${target_version}
    - docker push dsl.domain.com:${PORT}/${CONTAINER_NAME}:latest
  after_script:
    - docker logout dsl.domain.com:${PORT}
    - docker rmi ${CONTAINER_NAME}
  except:
    - master
    - tags
The Dockerfile:
FROM debian:buster-slim
########################
# Install what we need
########################
# Custom Directory
ENV CUSTOM_DIRECTORY YES
ENV WDEVBUILD /temp/build
ENV WDEVSOURCE /temp/src
ENV WDEVPREFIX /opt/windev
# Custom Version
ENV CUSTOM_VERSION NO
ENV QT_SERIES 5.14
ENV QT_BUILD 0
ENV LIBJPEGTURBO_VERSION 2.0.3
ENV LIBRESSL_VERSION 3.0.2
ENV OPENSSL_VERSION 1.1.1c
ENV UPX_VERSION 3.95
# SSL Choice
ENV USE_OPENSSL YES
# Exclude Static Qt
ENV BUILD_QT32_STATIC NO
ENV BUILD_QT64_STATIC NO
# Copy directory with qt_build script
COPY rootfs /
# install tools
RUN apt-get update \
&& apt-get install -y bash \
cmake \
coreutils \
g++ \
git \
gzip \
libucl1 \
libucl-dev \
make \
nasm \
ninja-build \
perl \
python \
qtchooser \
tar \
wget \
xz-utils \
zlib1g \
zlib1g-dev \
&& apt-get install -y binutils-mingw-w64-x86-64 \
mingw-w64-x86-64-dev \
g++-mingw-w64-x86-64 \
gcc-mingw-w64-x86-64 \
binutils-mingw-w64-i686 \
mingw-w64-i686-dev \
g++-mingw-w64-i686 \
gcc-mingw-w64-i686 \
&& rm -rf /temp \
&& rm -rf /var/lib/apt/lists/*
# Build Qt with MinGW - this is the step where it fails.
RUN /opt/windev/bin/qt_build

LABEL com.domain.version="1.0.0"
LABEL vendor="Someone"
LABEL com.domain.release-date="2020-01-21"
Debugging process so far:
The version of the docker:latest is the same in both cases.
The version of MinGW is the same in both cases.
I tried also with Qt 5.12.6 and the result is the same.
I have found it. I think the answer is here.
The package libseccomp2 is 2.3.1 on the CI runner machine and 2.4.1 on my PC, but Qt versions after 5.10 use a system call that was added in 2.3.3, which is why Qt can be built on my PC but not on the runner.
Remark: it doesn't matter that it is built in a container run from the docker:latest image, because the Docker daemon is mounted when the container is started, so apparently it continues to use some features of the host and the Docker work is not completely containerized.

Dockerfile - Hide --build-args from showing up at build time

I have the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y \
git \
make \
python-pip \
python2.7 \
python2.7-dev \
ssh \
&& apt-get autoremove \
&& apt-get clean
ARG password
ARG username
ENV password $password
ENV username $username
RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
I use the following commands to build the image from this Dockerfile:
docker build -t myimage:v1 --build-arg password="somepassword" --build-arg username="someuser" .
However, in the build log the username and password that I pass as --build-arg are visible.
Step 8/8 : RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
---> Running in 650d9423b549
Collecting git+http://someuser:somepassword@org.bitbucket.com/scm/do/repo.git
How to hide them? Or is there a different way of passing the credentials in the Dockerfile?
Update
You know, I was focusing on the wrong part of your question. You shouldn't be using a username and password at all. You should be using access keys, which permit read-only access to private repositories.
Once you've created an ssh key and added the public component to your repository, you can then drop the private key into your image:
RUN mkdir -m 700 -p /root/.ssh
COPY my_access_key /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
And now you can use that key when installing your Python project:
RUN pip install git+ssh://git@bitbucket.org/you/yourproject.repo
(Original answer follows)
You would generally not bake credentials into an image like this. In addition to the problem you've already discovered, it makes your image less useful because you would need to rebuild it every time your credentials changed, or if more than one person wanted to be able to use it.
Credentials are more generally provided at runtime via one of various mechanisms:
Environment variables: you can place your credentials in a file, e.g.:
USERNAME=myname
PASSWORD=secret
And then include that on the docker run command line:
docker run --env-file myenvfile.env ...
The USERNAME and PASSWORD environment variables will be available to processes in your container.
Bind mounts: you can place your credentials in a file, and then expose that file inside your container as a bind mount using the -v option to docker run:
docker run -v /path/to/myfile:/path/inside/container ...
This would expose the file as /path/inside/container inside your container.
Docker secrets: If you're running Docker in swarm mode, you can expose your credentials as docker secrets.
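As a rough illustration only (the file names myenvfile.env, credentials.json, and repo_password.txt are made up), the same three mechanisms look like this in a Compose file:

version: "3.7"
services:
  app:
    image: myimage:v1
    env_file:
      - myenvfile.env                 # environment variables (USERNAME, PASSWORD)
    volumes:
      - ./credentials.json:/run/config/credentials.json:ro   # bind-mounted credential file
    secrets:
      - repo_password                 # exposed inside the container at /run/secrets/repo_password
secrets:
  repo_password:
    file: ./repo_password.txt         # in swarm mode you could reference an external secret instead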
It's worse than that: they're in docker history in perpetuity.
I've done two things here in the past that work:
You can configure pip to use local packages, or to download dependencies ahead of time into "wheel" files. Outside of Docker you can download the package from the private repository, giving the credentials there, and then you can COPY in the resulting .whl file.
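# on the host, outside of Docker: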
pip install wheel
pip wheel --wheel-dir ./wheels git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
docker build .
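# then, in the Dockerfile: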
COPY ./wheels/ ./wheels/
RUN pip install wheels/*.whl
The second is to use a multi-stage Dockerfile where the first stage does all of the installation, and the second doesn't need the credentials. This might look something like
FROM ubuntu:16.04 AS build
RUN apt-get update && ...
...
RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install \
python2.7
COPY --from=build /usr/lib/python2.7/site-packages/ /usr/lib/python2.7/site-packages/
COPY ...
CMD ["./app.py"]
It's worth double-checking in the second case that nothing has gotten leaked into your final image, because the ARG values are still available to the second stage.
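If the image happens to be built from a CI job (the broader topic of this page), a related option, shown here only as a sketch, is to keep the credentials out of the repository by storing them as masked GitLab CI/CD variables (the names BITBUCKET_USER and BITBUCKET_PASS below are assumptions) and passing them as build args, assuming a runner already set up for Docker builds:

build-image:
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    # credentials come from masked CI/CD variables, not from the Dockerfile or the repo
    - docker build --build-arg username="$BITBUCKET_USER" --build-arg password="$BITBUCKET_PASS" -t myimage:v1 .

Note that this only keeps the values out of source control; they still show up in the docker history of the stage that used them, so the multi-stage approach above is still what keeps them out of the final image.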
For me, I created a bash file called set-up-cred.sh.
Inside set-up-cred.sh
echo $CRED > cred.txt;
Then, in Dockerfile,
RUN bash set-up-cred.sh;
...
RUN rm cred.txt;
This is for keeping the credential values from being echoed in the build output.

How to combine Dockerfiles in GitLab CI?

I have this gitlab-ci.yml to build my Spring Boot app:
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS clean compile
  only:
    - /^release.*/

test:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test
    - "cat target/site/coverage/jacoco-ut/index.html"
  only:
    - /^release.*/
Now I need to run another job in the test stage: integration tests. My app runs the integration tests on headless Chrome with an in-memory database; all I need to do on Windows is run: mvn integration-test
I've found a Dockerfile that has headless Chrome ready, so I need to combine the maven:latest image with this image: https://hub.docker.com/r/justinribeiro/chrome-headless/
How can I do that?
You can write a new Dockerfile by choosing maven:latest as the base image (that means all of the Maven image's dependencies are there). You can refer to this link on how to write a Dockerfile.
Since the base image of maven:latest is a Debian image, and the image that provides headless Chrome is also a Debian image, all the OS commands are the same. So you can write a Dockerfile like the following, where the base image is maven:latest and the rest is the same as the chrome-headless Dockerfile.
FROM maven:latest
LABEL name="chrome-headless" \
maintainer="Justin Ribeiro <justin#justinribeiro.com>" \
version="2.0" \
description="Google Chrome Headless in a container"
# Install deps + add Chrome Stable + purge all the things
RUN apt-get update && apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg \
--no-install-recommends \
&& curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list \
&& apt-get update && apt-get install -y \
google-chrome-beta \
fontconfig \
fonts-ipafont-gothic \
fonts-wqy-zenhei \
fonts-thai-tlwg \
fonts-kacst \
fonts-symbola \
fonts-noto \
ttf-freefont \
--no-install-recommends \
&& apt-get purge --auto-remove -y curl gnupg \
&& rm -rf /var/lib/apt/lists/*
# Add Chrome as a user
RUN groupadd -r chrome && useradd -r -g chrome -G audio,video chrome \
&& mkdir -p /home/chrome && chown -R chrome:chrome /home/chrome \
&& mkdir -p /opt/google/chrome-beta && chown -R chrome:chrome /opt/google/chrome-beta
# Run Chrome non-privileged
USER chrome
# Expose port 9222
EXPOSE 9222
# Autorun chrome headless with no GPU
ENTRYPOINT [ "google-chrome" ]
CMD [ "--headless", "--disable-gpu", "--remote-debugging-address=0.0.0.0", "--remote-debugging-port=9222" ]
I have checked this and it works fine. Once you have written the Dockerfile, you can build it using docker build . from the same directory as the Dockerfile. Then you can either push it to Docker Hub or your own registry where your GitLab runner can access the image. Make sure you tag the image with a tag of your preference; as an example, let's say the tag is 1.0.0 and you are pushing to your local repository as {your-docker-repo}/maven-with-chrome-headless:1.0.0.
Then use that tag in your gitlab-ci.yml file as image: {your-docker-repo}/maven-with-chrome-headless:1.0.0
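As a usage sketch (the job name and stage are assumptions based on the pipeline in the question), the integration-test job could then be:

integration-test:
  stage: test
  image: "{your-docker-repo}/maven-with-chrome-headless:1.0.0"   # replace with your registry path
  script:
    - mvn $MAVEN_CLI_OPTS integration-test
  only:
    - /^release.*/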
You do not "combine" Docker containers. You put different services into different containers and run them all together. Look at Kubernetes (it now has generic support in GitLab) or choose a simpler solution like docker-compose or Docker Swarm.
For integration tests we use docker-compose.
Anyway, if you use docker-compose, you will probably run into the situation where you need so-called docker-in-docker. It depends on the type of executor you use to run your GitLab jobs: if you use the shell executor, everything will be fine; if you use the docker executor, you will have to set it up properly, because you can't call Docker from Docker without additional manual setup.
If using several containers is not your choice and you definitely want to put everything in one container, the recommended way is to use a supervisor to launch processes inside the container. One of the options is supervisord: http://supervisord.org/
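If you do go the separate-containers route, note that GitLab CI itself can attach the headless Chrome image to the existing maven job as a service, which avoids building a combined image at all. This is only a sketch (not part of the answers above), and it assumes the tests can be pointed at Chrome's remote-debugging port through the service alias:

integration-test:
  stage: test
  image: maven:latest
  services:
    - name: justinribeiro/chrome-headless
      alias: chrome   # the job can reach Chrome at http://chrome:9222
  script:
    - mvn $MAVEN_CLI_OPTS integration-test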

Hex Cannot Be Found on Dockerized Phoenix App when running on Drone

I currently have a setup where I deploy my dockerized Phoenix application to run tests on a self-hosted Drone server. The issue is that no matter which Dockerfile I use (currently alpine-elixir-phoenix, or a base Elixir image) that installs Hex and rebar like below:
# Install Hex+Rebar
RUN mix local.hex --force && \
mix local.rebar --force
I receive the following error upon booting in Drone:
Could not find Hex, which is needed to build dependency :phoenix
I have found that by using an older version, alpine-elixir-phoenix:2.0, this issue does not come up, which leads me to believe it may have something to do with Hex/Elixir having been updated since then. Additionally, if I run the commands to install Hex and rebar inside the Drone container once it is instantiated, there is no issue. I ran whoami on the instantiated Drone container and the user is root, if that makes a difference. Also, if I run the container locally and run mix hex.info, it correctly states that Hex is installed; on the Drone-instantiated container, however, this fails.
Example .drone.yml:
pipeline:
  backend_test:
    image: bitwalker/alpine-elixir-phoenix
    commands:
      - cd api
      - apk update
      - apk add postgresql-client
      - MIX_ENV=test mix local.hex --force
      - MIX_ENV=test mix local.rebar --force
      - MIX_ENV=test mix deps.get
      - MIX_ENV=test mix ecto.create
      - MIX_ENV=test mix ecto.migrate
      - mix test
Example Dockerfile used (bitwalker/alpine-elixir-phoenix): https://github.com/bitwalker/alpine-elixir-phoenix/blob/master/Dockerfile
The same installation of local.hex and local.rebar occurs in that Dockerfile on lines 29 and 30. However, upon instantiation of the container, Hex is not found and therefore must be installed again in the commands.
Furthermore, I encountered this problem again, but with make and g++ not being installed on Alpine. I may be doing something incorrect, but I cannot see where.
testbuild_env Dockerfile
FROM bitwalker/alpine-erlang:19.2.1b
ENV HOME=/opt/app/ TERM=xterm
# Install Elixir and basic build dependencies
RUN \
echo "#edge http://nl.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
apk update && \
apk --no-cache --update add \
git make g++ curl \
elixir#edge=1.4.2-r0 && \
rm -rf /var/cache/apk/*
# Install Hex+Rebar
RUN mix local.hex --force && \
mix local.rebar --force
ENV DOCKER_BUCKET test.docker.com
ENV DOCKER_VERSION 17.05.0-ce-rc1
ENV DOCKER_SHA256 4561742c2174c01ffd0679621b66d29f8a504240d79aa714f6c58348979d02c6
RUN set -x \
&& curl -fSL "https://${DOCKER_BUCKET}/builds/Linux/x86_64/docker-${DOCKER_VERSION}.tgz" -o docker.tgz \
&& echo "${DOCKER_SHA256} *docker.tgz" | sha256sum -c - \
&& tar -xzvf docker.tgz \
&& mv docker/* /usr/local/bin/ \
&& rmdir docker \
&& rm docker.tgz \
&& docker -v
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["sh"]
with the following .drone.yml
build:
  image: test_buildenv
  commands:
    - cd api
    - apk add make
    - apk add g++
    - MIX_ENV=test mix local.hex --force
    - MIX_ENV=test mix local.rebar --force
    - docker login --username USERNAME --password PASSWORD
    - mix docker.build   # creates a release file after running a dockerfile.build image
    - mix docker.release # creates a minimalist image to run the release file that was just created
    - mix docker.publish # pushes the newly created image to Docker Hub
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
The problem is that Drone builds its own isolated working environment as an extra layer on top of your docker image, so the ENV settings in your Dockerfile are not available. You need to independently tell Drone the environment info so it knows where hex is installed.
I managed to get this working by setting MIX_HOME in the .drone.yml file:
Dockerfile:
FROM bitwalker/alpine-elixir:1.8.1
RUN mix local.hex --force
.drone.yml:
pipeline:
  build:
    image: # built image of the above Dockerfile
    environment:
      MIX_HOME: /opt/app/.mix
    commands:
      - mix deps.get
