How to combine Dockerfiles in GitLab CI?

I have this gitlab-ci.yml to build my SpringBoot app:
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS clean compile
  only:
    - /^release.*/

test:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test
    - "cat target/site/coverage/jacoco-ut/index.html"
  only:
    - /^release.*/
Now I need to run another job in the test stage: integration tests. My app runs its integration tests on headless Chrome with an in-memory database; all I need to do locally (on Windows) is run: mvn integration-test
I've found a Dockerfile that has headless Chrome ready, so I need to combine the maven:latest image with this image: https://hub.docker.com/r/justinribeiro/chrome-headless/
How can I do that?

You can write a new Dockerfile using maven:latest as the base image (that means all of the Maven image's dependencies are present); see the Dockerfile reference documentation for how to write one.
Since the base image of maven:latest is Debian, and the chrome-headless Dockerfile is also Debian-based, all the OS commands are the same. So you can write a Dockerfile like the following, where the base image is maven:latest and the rest is the same as the original chrome-headless Dockerfile.
FROM maven:latest

LABEL name="chrome-headless" \
      maintainer="Justin Ribeiro <justin@justinribeiro.com>" \
      version="2.0" \
      description="Google Chrome Headless in a container"

# Install deps + add Chrome Stable + purge all the things
RUN apt-get update && apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    --no-install-recommends \
    && curl -sSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && echo "deb https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list \
    && apt-get update && apt-get install -y \
    google-chrome-beta \
    fontconfig \
    fonts-ipafont-gothic \
    fonts-wqy-zenhei \
    fonts-thai-tlwg \
    fonts-kacst \
    fonts-symbola \
    fonts-noto \
    ttf-freefont \
    --no-install-recommends \
    && apt-get purge --auto-remove -y curl gnupg \
    && rm -rf /var/lib/apt/lists/*

# Add Chrome as a user
RUN groupadd -r chrome && useradd -r -g chrome -G audio,video chrome \
    && mkdir -p /home/chrome && chown -R chrome:chrome /home/chrome \
    && mkdir -p /opt/google/chrome-beta && chown -R chrome:chrome /opt/google/chrome-beta

# Run Chrome non-privileged
USER chrome

# Expose port 9222
EXPOSE 9222

# Autorun Chrome headless with no GPU
ENTRYPOINT [ "google-chrome" ]
CMD [ "--headless", "--disable-gpu", "--remote-debugging-address=0.0.0.0", "--remote-debugging-port=9222" ]
I have checked this and it's working fine. Once you have written the Dockerfile, you can build it using docker build . from the directory containing the Dockerfile. Then you can either push it to Docker Hub or to your own registry where your GitLab runner can access the image. Make sure you tag the image appropriately; for example, if you are pushing to your own repository, the tag could be {your-docker-repo}/maven-with-chrome-headless:1.0.0
Then use that tag in your gitlab-ci.yml file as image: {your-docker-repo}/maven-with-chrome-headless:1.0.0
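Put together, the build-and-push steps might look like the following sketch (the registry host is a placeholder you would replace with your own; the docker commands are shown commented since they need a running daemon and credentials):

```shell
# Hypothetical registry host and image coordinates -- replace with your own
REGISTRY="registry.example.com:5000"
IMAGE="maven-with-chrome-headless"
VERSION="1.0.0"

# Compose the fully qualified tag the runner will pull
TAG="${REGISTRY}/${IMAGE}:${VERSION}"
echo "$TAG"

# Build from the directory containing the Dockerfile, then push:
# docker build -t "$TAG" .
# docker push "$TAG"
```

The same `$TAG` string is what goes into the `image:` key of your gitlab-ci.yml.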

You do not "combine" Docker containers. You put different services into different containers and run them all together. Look at Kubernetes (GitLab now has generic support for it) or choose a simpler solution like docker-compose or Docker Swarm.
For integration tests we use docker-compose.
Anyway, if you use docker-compose, you will probably run into the situation where you need so-called docker-in-docker. It depends on the type of executor you use to run your GitLab jobs. If you use the shell executor, everything will be fine. If you use the docker executor, you will have to set it up properly, because you can't call Docker from inside Docker without additional manual setup.
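With the docker executor, the usual pattern is to attach the docker:dind service and point the job at its daemon. A minimal gitlab-ci.yml sketch (the job name and the non-TLS port 2375 are illustrative; newer dind images default to TLS, which needs extra configuration):

```yaml
integration-test:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    # non-TLS daemon address exposed by the dind service (illustrative)
    DOCKER_HOST: tcp://docker:2375
  script:
    - docker-compose up --abort-on-container-exit
```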
If using several containers is not an option and you definitely want to put everything in one container, the recommended way is to use a process supervisor to launch the processes inside the container. One of the options is supervisord: http://supervisord.org/
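For example, a minimal supervisord.conf might look like this (the program names, commands, and paths are purely illustrative):

```ini
[supervisord]
; run in the foreground so the container stays alive
nodaemon=true

[program:chrome]
command=google-chrome --headless --disable-gpu --remote-debugging-port=9222

[program:app]
; hypothetical application process
command=java -jar /app/app.jar
```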

Related

How to run docker commands inside AKS-based VSTS agents?

We were able to successfully add the deployment to the Azure DevOps agent pool and could execute the pipeline on those agents by following the Microsoft docs.
I used the Dockerfile below to install the software inside the container.
FROM ubuntu:18.04

# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes

RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates \
    curl \
    jq \
    git \
    iputils-ping \
    libcurl4 \
    libicu60 \
    libunwind8 \
    netcat \
    libssl1.0 \
    maven \
    python \
    python3 \
    docker \
    && rm -rf /var/lib/apt/lists/*

RUN curl -LsS https://aka.ms/InstallAzureCLIDeb | bash \
    && rm -rf /var/lib/apt/lists/*

# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64

WORKDIR /azp

COPY ./vstsagent/ .
COPY ./start.sh .
RUN chmod +x start.sh

ENTRYPOINT ["./start.sh"]
But now I am confused about the following points:
How do I set the Maven and Java home directories, along with Maven's custom settings.xml and the Node and Gradle custom properties files, inside these AKS-based agents?
Even though I included Docker in the software to install within the container, it seems Docker is not getting installed. So how can we run Docker-related tasks in our pipelines, like the "build image" and "push image" tasks, within these AKS-based build agents?

Gitlab running stages in docker image

I am new to GitLab and I am trying to set up a CI pipeline where I run Terraform scripts from inside a Docker image, just to make sure I have all the necessary base images already installed and built.
Currently, the CI pipeline I came up with after some investigation of the official docs looks something like this:
stages:
  - terraform:check

.base-terraform:
  image:
    name: "hashicorp/terraform"
    entrypoint: [""]
  before_script:
    - export GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS}
    - terraform version

tf-fmt:
  stage: terraform:check
  extends: .base-terraform
  script:
    - make fmt
  needs: []
With the above YAML file, it downloads the latest version of Terraform (>= 0.15), which I don't want, and I also need other dependencies like make. So I was wondering whether there is a way to build my own custom image and use that via extends in my stages, so that all the necessary dependencies are available for CI/CD.
My Dockerfile looks like:
FROM python:3.7-slim

RUN apt-get update && apt-get install -y curl git-core build-essential openssl libssl-dev unzip wget

RUN wget -O terraform.zip https://releases.hashicorp.com/terraform/0.13.0/terraform_0.13.0_linux_amd64.zip && \
    unzip terraform.zip && \
    mv terraform /usr/local/bin/

RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl && \
    chmod +x ./kubectl && \
    mv ./kubectl /usr/local/bin/kubectl

WORKDIR /src
COPY . .
Am I thinking in the right direction? Or is there a much simpler way in GitLab? I have accomplished such tasks with Buildkite, which I used as my CI tool in the past.
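One note on the version problem described above: you can avoid the unwanted latest Terraform without any custom image by pinning the tag of the official image (this assumes a hashicorp/terraform:0.13.0 tag is published on Docker Hub, matching the version the Dockerfile downloads):

```yaml
.base-terraform:
  image:
    name: "hashicorp/terraform:0.13.0"
    entrypoint: [""]
```

This does not provide make or kubectl, though, so a custom image remains the right direction for the extra tooling.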

Building of a Docker image with Qt5 compiled with MinGW works in a container run from "docker:latest" image, but fails in GitLab CI

I want to prepare a Docker image with Qt5 and MinGW. Part of the process is building Qt 5.14.0 with MinGW, and that is the part where it fails.
Building on my machine:
There weren't any problems when I pulled the docker:latest image on my PC, ran a container from it, and built my image in this container. It worked fine.
Building in the GitLab CI pipeline:
When I push the Dockerfile to GitLab, where it is built in a container from the same docker:latest image, it fails to build Qt with the following error message:
Could not find qmake spec ''.
Error processing project file: /root/src/qt-everywhere-src-5.14.0
Screenshot of the failure
CI script:
stages:
  - deploy

variables:
  CONTAINER_NAME: "qt5-mingw"
  PORT: "5000"

image: docker:latest

build-snapshot:
  stage: deploy
  tags:
    - docker
    - colo
  environment:
    name: snapshot
    url: https://somedomain.com/artifactory/#/artifacts/qt5-mingw
  before_script:
    - docker login -u ${ARTIFACT_USER} -p ${ARTIFACT_PASS} somedomain.com:${PORT}
  script:
    - docker build -f Dockerfile -t ${CONTAINER_NAME} .
    - export target_version=$(docker inspect --format='{{index .Config.Labels "com.domain.version" }}' ${CONTAINER_NAME})
    - docker tag ${CONTAINER_NAME} dsl.domain.com:${PORT}/${CONTAINER_NAME}:${target_version}
    - docker tag dsl.domain.com:${PORT}/${CONTAINER_NAME}:${target_version} dsl.domain.com:${PORT}/${CONTAINER_NAME}:latest
    - docker push dsl.domain.com:${PORT}/${CONTAINER_NAME}:${target_version}
    - docker push dsl.domain.com:${PORT}/${CONTAINER_NAME}:latest
  after_script:
    - docker logout dsl.domain.com:${PORT}
    - docker rmi ${CONTAINER_NAME}
  except:
    - master
    - tags
The Dockerfile:
FROM debian:buster-slim

########################
# Install what we need
########################

# Custom Directory
ENV CUSTOM_DIRECTORY YES
ENV WDEVBUILD /temp/build
ENV WDEVSOURCE /temp/src
ENV WDEVPREFIX /opt/windev

# Custom Version
ENV CUSTOM_VERSION NO
ENV QT_SERIES 5.14
ENV QT_BUILD 0
ENV LIBJPEGTURBO_VERSION 2.0.3
ENV LIBRESSL_VERSION 3.0.2
ENV OPENSSL_VERSION 1.1.1c
ENV UPX_VERSION 3.95

# SSL Choice
ENV USE_OPENSSL YES

# Exclude Static Qt
ENV BUILD_QT32_STATIC NO
ENV BUILD_QT64_STATIC NO

# Copy directory with qt_build script
COPY rootfs /

# Install tools
RUN apt-get update \
    && apt-get install -y bash \
    cmake \
    coreutils \
    g++ \
    git \
    gzip \
    libucl1 \
    libucl-dev \
    make \
    nasm \
    ninja-build \
    perl \
    python \
    qtchooser \
    tar \
    wget \
    xz-utils \
    zlib1g \
    zlib1g-dev \
    && apt-get install -y binutils-mingw-w64-x86-64 \
    mingw-w64-x86-64-dev \
    g++-mingw-w64-x86-64 \
    gcc-mingw-w64-x86-64 \
    binutils-mingw-w64-i686 \
    mingw-w64-i686-dev \
    g++-mingw-w64-i686 \
    gcc-mingw-w64-i686 \
    && rm -rf /temp \
    && rm -rf /var/lib/apt/lists/*

# Build Qt with MinGW (the step where it fails)
RUN /opt/windev/bin/qt_build

LABEL com.domain.version="1.0.0"
LABEL vendor="Someone"
LABEL com.domain.release-date="2020-01-21"
Debugging process so far:
The version of the docker:latest is the same in both cases.
The version of MinGW is the same in both cases.
I tried also with Qt 5.12.6 and the result is the same.
I have found it. I think the answer is here.
The package libseccomp2 is 2.3.1 on the CI runner machine and 2.4.1 on my PC. But Qt versions after 5.10 use a system call that was added in libseccomp 2.3.3, which is why Qt can be built on my PC but not on the runner.
Remark: It doesn't matter that it is built in a container run from the docker:latest image, because the Docker daemon is mounted when the container is started, so apparently it continues to use some features of the host and the Docker work is not completely containerized.
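As a quick check on any host, you can compare the installed libseccomp2 version against the 2.3.3 minimum using sort -V. A sketch (the installed value is hard-coded here for illustration; on a Debian-based host you would obtain it with dpkg-query -W -f='${Version}' libseccomp2):

```shell
required="2.3.3"
installed="2.3.1"   # hard-coded example; query the real value with dpkg-query

# sort -V orders version strings numerically; if the smallest of the two is
# the installed version (and the versions differ), the host is too old for
# Qt >= 5.10
if [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$installed" ] \
   && [ "$installed" != "$required" ]; then
  result="too old"
else
  result="ok"
fi
echo "$result"
```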

Unable to build a docker image in a Bitbucket pipeline

When I try to build an image for my application, an image that relies upon BuildKit, I receive an error: failed to dial gRPC: unable to upgrade to h2c, received 403
I can build standard Docker images, but if the build relies on BuildKit, I get errors.
Specifically, the command that fails is:
docker build --ssh default --no-cache -t worker $BITBUCKET_CLONE_DIR/worker
My bitbucket-pipelines.yml is as follows; the first two docker build commands work and the images are generated, however the third, which relies on BuildKit, does not.
image: docker:stable

pipelines:
  default:
    - step:
        name: build
        size: 2x
        script:
          - docker build -t alpine-base $BITBUCKET_CLONE_DIR/supporting/alpine-base
          - docker build -t composer-xv:latest $BITBUCKET_CLONE_DIR/supporting/composer-xv
          - apk add openssh-client
          - eval `ssh-agent`
          - export DOCKER_BUILDKIT=1
          - docker build --ssh default --no-cache -t worker $BITBUCKET_CLONE_DIR/worker
          - docker images
        services:
          - docker
        caches:
          - docker
My Dockerfile is as follows:
# syntax=docker/dockerfile:1.0.0-experimental
FROM composer:1.7 as phpdep
COPY application/database/ database/
COPY application/composer.json composer.json
COPY application/composer.lock composer.lock

# Install PHP dependencies in 'vendor'
RUN --mount=type=ssh composer install \
    --ignore-platform-reqs \
    --no-dev \
    --no-interaction \
    --no-plugins \
    --no-scripts \
    --prefer-dist

#
# Final image build stage
#
FROM alpine-base:latest as final
ADD application /app/application
COPY --from=phpdep /app/vendor/ /app/application/vendor/
ADD entrypoint.sh /entrypoint.sh
RUN \
    apk update && \
    apk upgrade && \
    apk add \
        php7 php7-mysqli php7-mcrypt php7-gd \
        php7-curl php7-xml php7-bcmath php7-mbstring \
        php7-zip php7-bz2 ca-certificates php7-openssl php7-zlib \
        php7-bcmath php7-dom php7-json php7-phar php7-pdo_mysql php7-ctype \
        php7-session php7-fileinfo php7-xmlwriter php7-tokenizer php7-soap \
        php7-simplexml && \
    cd /app/application && \
    cp .env.example .env && \
    chown nobody:nobody /app/application/.env && \
    sed -i 's/;openssl.capath=/openssl.capath=\/etc\/ssl\/certs/' /etc/php7/php.ini && \
    sed -i 's/memory_limit = 128M/memory_limit = 1024M/' /etc/php7/php.ini && \
    apk del --purge curl wget && \
    mkdir -p /var/log/workers && \
    mkdir -p /run/php && \
    echo "export PS1='WORKER \h:\w\$ '" >> /etc/profile
COPY files/logrotate.d/ /etc/logrotate.d/
CMD ["/entrypoint.sh"]
Bitbucket Pipelines doesn't support DOCKER_BUILDKIT, it seems; see: https://jira.atlassian.com/browse/BCLOUD-17590?focusedCommentId=3019597&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-3019597 . They say they are waiting for https://github.com/moby/buildkit/pull/2723 to be fixed...
You could try again as, since July 2022, you have:
Announcing support for Docker BuildKit in Bitbucket Pipelines
(Jayant Gawali, Atlassian Team)
We are happy to announce that one of the top voted features for Bitbucket Pipelines, Docker BuildKit is now available. You can now build Docker images with the BuildKit utility.
With BuildKit you can take advantage of the various features it provides like:
Performance: BuildKit uses parallelism and caching internally to build images faster.
Secrets: Mount secrets and build images safely.
Cache: Mount caches to save re-downloading all external dependencies every time.
SSH: Mount SSH Keys to build images.
Configuring your bitbucket-pipelines.yaml
BuildKit is now available with the Docker Daemon service.
It is not enabled by default and can be enabled by setting the environment variable DOCKER_BUILDKIT=1 in the pipelines configuration.
pipelines:
  default:
    - step:
        script:
          - export DOCKER_BUILDKIT=1
          - docker build --secret id=mysecret,src=mysecret.txt .
        services:
          - docker
To learn more about how to set it up, please refer to the support documentation; for information on Docker BuildKit, visit the Docker Docs page "Build images with BuildKit".
Please note:
Use multi-stage builds to utilise parallelism.
Caching is not shared across different builds and it’s limited to the build running on the same docker node where the build runs.
With BuildKit, secrets can be mounted securely as shown above.
For restrictions and limitations please refer to the restrictions section of our support documentation.
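For reference, the Dockerfile side of the secret example above reads the mounted secret with BuildKit's --mount=type=secret; by default a secret with a given id is exposed at /run/secrets/<id> during that RUN step only (the alpine base and the cat command are just illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
# Reads the secret passed with: docker build --secret id=mysecret,src=mysecret.txt .
RUN --mount=type=secret,id=mysecret \
    cat /run/secrets/mysecret
```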

Cannot execute ansible playbook via docker container

I'm executing a pipeline on Jenkins that runs inside a Docker container. This pipeline calls another docker-compose file that executes an Ansible playbook. The service that executes the playbook is called agent, and is defined as follows:
agent:
  image: pjestrada/ansible
  links:
    - db
  environment:
    PROBE_HOST: "db"
    PROBE_PORT: "3306"
  command: ["probe.yml"]
This is the image it uses:
FROM ubuntu:trusty
MAINTAINER Pablo Estrada <pjestradac@gmail.com>

# Prevent dpkg errors
ENV TERM=xterm-256color

RUN sed -i "s/http:\/\/archive./http:\/\/nz.archive./g" /etc/apt/sources.list

# Install Ansible
RUN apt-get update -qy && \
    apt-get install -qy software-properties-common && \
    apt-add-repository -y ppa:ansible/ansible && \
    apt-get update -qy && \
    apt-get install -qy ansible

# Copy baked-in playbooks
COPY ansible /ansible

# Add volume for Ansible playbooks
VOLUME /ansible
WORKDIR /ansible
RUN chmod +x /

# Entrypoint
ENTRYPOINT ["ansible-playbook"]
CMD ["site.yml"]
My local machine is Ubuntu 16.04, and when I run docker-compose up agent the playbook is executed successfully. However, when I'm inside the Jenkins container I get this error on the same command call:
Attaching to todobackend9dev_agent_1
agent_1  | ERROR! the playbook: site.yml does not appear to be a file
These are the image and compose files for my Jenkins container:
FROM jenkins:1.642.1
MAINTAINER Pablo Estrada <pjestradac@gmail.com>

# Suppress apt installation warnings
ENV DEBIAN_FRONTEND=noninteractive

# Change to root user
USER root

# Used to set the docker group ID
# Set to 497 by default, which is the group ID used by AWS Linux ECS instances
ARG DOCKER_GID=497

# Create Docker group with GID
# Set default value of 497 if DOCKER_GID set to blank string by Docker Compose
RUN groupadd -g ${DOCKER_GID:-497} docker

# Used to control Docker and Docker Compose versions installed
# NOTE: As of February 2016, AWS Linux ECS only supports Docker 1.9.1
ARG DOCKER_ENGINE=1.10.2
ARG DOCKER_COMPOSE=1.6.2

# Install base packages
RUN apt-get update -y && \
    apt-get install apt-transport-https curl python-dev python-setuptools gcc make libssl-dev -y && \
    easy_install pip

# Install Docker Engine
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D && \
    echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | tee /etc/apt/sources.list.d/docker.list && \
    apt-get update -y && \
    apt-get purge lxc-docker* -y && \
    apt-get install docker-engine=${DOCKER_ENGINE:-1.10.2}-0~trusty -y && \
    usermod -aG docker jenkins && \
    usermod -aG users jenkins

# Install Docker Compose
RUN pip install docker-compose==${DOCKER_COMPOSE:-1.6.2} && \
    pip install ansible boto boto3

# Change to jenkins user
USER jenkins

# Add Jenkins plugins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Compose File:
version: '2'

volumes:
  jenkins_home:
    external: true

services:
  jenkins:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_ENGINE: ${DOCKER_ENGINE}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
I added a volume in order to access the Docker socket from my Jenkins container. However, for some reason I'm not able to access the site.yml file I need for the playbook, even though the file is available outside the container.
Can anyone help me solve this issue?
How sure are you about that volume mount point and your paths?
- jenkins_home:/var/jenkins_home
Have you tried debugging via echo? If it can't find site.yml, then paths are the most likely cause. You can use Jenkins replay on a job to iterate quickly and modify parts of the Jenkins code. That will let you run things like:
sh "pwd; ls -la"
I recommend adding the equivalent within your Docker container so you can check the paths. My guess is that the workspace isn't where you think it is, and you'll want to run Docker with:
-v ${env.WORKSPACE}:/jenkins-workspace
and then within the container:
pushd /jenkins-workspace
