GitLab pipeline error when building a Quarkus project and dockerizing it - docker

I wrote a simple pipeline in GitLab to build a Quarkus project, dockerize it, and push the final image to a registry.
Here it is:
image: maven:latest

stages:
  - build-package-dockerize
  - deploy

before_script:
  - apt-get update -qq
  - apt-get install -y -qq build-essential libz-dev zlib1g-dev

build, package, dockerize:
  stage: build-package-dockerize
  script:
    - mvn clean package -DskipTests -Dquarkus.profile=dev -Dquarkus.container-image.build=true -Dquarkus.container-image.push=true -Dquarkus.container-image.group=pss -Dquarkus.container-image.tag=$CI_BUILD_REF -Dquarkus.container-image.registry=$DOCKER_REGISTRY_AZURE_URL -Dquarkus.container-image.username=$DOCKER_REGISTRY_AZURE_USERNAME -Dquarkus.container-image.password=$DOCKER_REGISTRY_AZURE_PASSWORD
  only:
    - DEV
When the pipeline runs, it fails with an error while starting the Ryuk (Testcontainers) container:
[INFO] Container testcontainers/ryuk:0.3.3 is starting: 2611772cb72f4f2437ee1c405243d7519dfe787d8a0f343b292e8b2db4aa4869
[ERROR] Could not start container
java.lang.IllegalStateException: Container is removed
[ERROR] There are no stdout/stderr logs available for the failed container
[WARNING] [io.quarkus.deployment.IsDockerWorking] No docker binary found or general error: java.lang.RuntimeException: Input/Output error while executing command.
Any help?
Thanks

I tried this many times with Docker, but it was a mess (because of Docker-in-Docker). I believe you have the same problem, since your error says "No docker binary found".
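(For context, the Docker-in-Docker route in GitLab CI needs a privileged runner plus the docker:dind service. A rough, untested sketch of what that would look like is below; the my-app name and image tags are just placeholders, and it assumes the application jar was already built in a previous job.)

build-and-push-dind:
  image: docker:latest
  services:
    - docker:dind
  # Assumes the runner allows privileged containers and that `mvn package`
  # already ran in an earlier job (artifacts passed along).
  script:
    - docker login -u $DOCKER_REGISTRY_AZURE_USERNAME -p $DOCKER_REGISTRY_AZURE_PASSWORD $DOCKER_REGISTRY_AZURE_URL
    - docker build -f src/main/docker/Dockerfile.jvm -t $DOCKER_REGISTRY_AZURE_URL/pss/my-app:$CI_BUILD_REF .
    - docker push $DOCKER_REGISTRY_AZURE_URL/pss/my-app:$CI_BUILD_REF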
My suggestion would be an easier route: try Jib. Quarkus supports it out of the box and it's much easier to use; see my example:
image: maven:latest

stages:
  - build-package-dockerize
  - deploy

before_script:
  - apt-get update -qq
  - apt-get install -y -qq build-essential libz-dev zlib1g-dev

build-package-dockerize:
  stage: build-package-dockerize
  script:
    # quarkus.container-image.builder=jib instructs Quarkus to use Jib.
    # Don't forget to add the token to the repo (as a CI variable), otherwise you'll get HTTP 401 when pushing.
    - mvn clean package
      -DskipTests
      -Dquarkus.profile=dev
      -Dquarkus.container-image.builder=jib
      -Dquarkus.jib.to.auth.username=gitlab-ci-token
      -Dquarkus.jib.to.auth.password=${CI_TOKEN_PASSWORD}
      -Dquarkus.container-image.build=true
      -Dquarkus.container-image.push=true
      -Dquarkus.container-image.group=pss
      -Dquarkus.container-image.tag=$CI_BUILD_REF
      -Dquarkus.container-image.registry=$DOCKER_REGISTRY_AZURE_URL
      -Dquarkus.container-image.username=$DOCKER_REGISTRY_AZURE_USERNAME
      -Dquarkus.container-image.password=$DOCKER_REGISTRY_AZURE_PASSWORD
  only:
    - DEV
I'm using it right now for multiple projects, and it's working perfectly.
For reference, you can find all the Jib parameters in the Quarkus Jib documentation.
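If you'd rather not pass everything on the command line, the same container-image settings can, as far as I know, also live in application.properties; the values below simply mirror the -D flags above (secrets should still come from CI variables rather than being committed):

# src/main/resources/application.properties -- mirrors the -D flags above
quarkus.container-image.builder=jib
quarkus.container-image.build=true
quarkus.container-image.push=true
quarkus.container-image.group=pss
quarkus.container-image.registry=${DOCKER_REGISTRY_AZURE_URL}
quarkus.container-image.username=${DOCKER_REGISTRY_AZURE_USERNAME}
quarkus.container-image.password=${DOCKER_REGISTRY_AZURE_PASSWORD}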

Related

Failed to load platform plugin "xcb" while launching a PyQt5 app on Ubuntu in a Docker container

I am trying to run a GUI using PyQt5 in a Docker container. Everything works fine, but when I actually run the container with the docker-compose up command I get an error that says:
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.
Can someone help me fix this?
Note: I have already tried these solutions and none of them worked for me:
Solution 1
Solution 2
Solution 3
This is my Dockerfile:
FROM ubuntu:latest
# Preparing work environment
ADD server.py .
ADD test.py .
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install python3
RUN apt-get -y install python3-pip
# Preparing work environment
RUN apt-get -y install python3-pyqt5
This is the docker-compose part:
test:
  container_name: test
  image: image
  command: python3 test.py
  ports:
    - 4000:4000/tcp
  networks:
    pNetwork1:
      ipv4_address: 10.1.0.3
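One common workaround for the "could not connect to display" part is to share the host's X server with the container. This is only a rough sketch, not from the question; it assumes a Linux host running X11 and that local connections have been allowed (e.g. with xhost +local:):

# docker-compose sketch: pass the host display into the container
test:
  container_name: test
  image: image
  command: python3 test.py
  environment:
    - DISPLAY=${DISPLAY}               # forward the host display name
  volumes:
    - /tmp/.X11-unix:/tmp/.X11-unix    # share the X11 socket with the container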

How to cache deployment image in gitlab-ci?

I'm creating a gitlab-ci deployment stage that requires more libraries than exist in my image. In this example, I'm adding ssh (in the real world, I want to add many more libs):
image: adoptopenjdk/maven-openjdk11
...

deploy:
  stage: deploy
  script:
    - which ssh || (apt-get update -y && apt-get install -y ssh)
    - chmod 600 ${SSH_PRIVATE_KEY}
    ...
Question: how can I tell the GitLab runner to cache the image that I'm building in the deploy stage, and reuse it for all deployment runs in the future? Because as written, the library installation takes place for each and every deployment, even if nothing changed between runs.
GitLab can only cache files/directories, but because of the way apt works, there is no easy way to tell it to cache installs you've done this way. You also cannot "cache" the image.
There are two options I see:
Create or use a docker image that already includes your dependencies.
FROM adoptopenjdk/maven-openjdk11
RUN apt update && apt install -y foo bar baz
Then build/push the image to Docker Hub, and change the image: in the yaml:
image: membersound/maven-openjdk11-with-deps:latest
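A minimal job to build and push that dependency image from the same repository might look roughly like this. It is just a sketch; it uses GitLab's own container registry and the predefined CI_REGISTRY* variables instead of Docker Hub, and the job and tag names are examples:

build-deps-image:
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE/maven-openjdk11-with-deps:latest .
    - docker push $CI_REGISTRY_IMAGE/maven-openjdk11-with-deps:latest
  only:
    changes:
      - Dockerfile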
OR simply choose an image that already has all the dependencies you want! There are many useful docker images out there with useful tools installed. For example octopusdeploy/worker-tools comes with many runtimes and tools installed (java, python, AWS CLI, kubectl, and much more).
Attempt to cache the .deb packages and install from them (beware, this is ugly).
Commit a bash script such as the following to a file like install-deps.sh:
#!/usr/bin/env bash
PACKAGES="wget jq foo bar baz"
if [ ! -d "./.deb_packages" ]; then
  mkdir -p ./.deb_packages
  apt update && apt --download-only install -y ${PACKAGES}
  cp /var/cache/apt/archives/*.deb ./.deb_packages
fi
apt install -y ./.deb_packages/*.deb
This should cause the .deb files to be cached in the directory ./.deb_packages. You can then configure GitLab to cache them so you can use them later.
my_job:
  before_script:
    - bash ./install-deps.sh
  script:
    - ...
  cache:
    paths:
      - ./.deb_packages

apt not found when I use apt in gitlab ci before_script

I use GitLab CI to build a Docker image and I want to install Python. This is my gitlab-ci.yml:
image: docker:stable

stages:
  - test
  - build

before_script:
  - apt install -y python-dev python pip

test1:
  stage: test
  script:
    ...
    - pytest

build:
  stage: build
  script:
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
but I got a failed job:
/bin/sh: eval: line : apt: not found
ERROR: Job failed: exit code 127
I also tried apt-get install but the result is the same.
How do I install Python?
This is actually not a problem but a consequence of the package manager. The image: docker:stable you are using (like many other images, e.g. tomcat or django ones) is based on Alpine Linux, which is minimal in size.
image: docker:stable

stages:
  - test
  - build

before_script:
  - apk add python python-dev py-pip

test1:
  stage: test
  script:
    ...
    - pytest

build:
  stage: build
  script:
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
apk is the Alpine Linux package manager.
The image that you are using, docker:stable, is based on Alpine Linux, which uses apk as its package manager. Installation with apk looks like this: apk add python
The error you see is because apt doesn't exist in the Alpine-based Docker image.
This line solved the problem for me:
apk update && apk add python
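Note that on current Alpine releases the python package has been split into python3 and py3-pip, so on a newer docker:* image the line would more likely need to be (my assumption, not part of the original answers):

before_script:
  - apk add --no-cache python3 py3-pip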

How to set up GitLab CE CI to use Docker images for runners

I've now tried for several days to get a runner working in a Docker container. I have a system running Debian with GitLab, gitlab-runner and docker installed. I want to use Docker as the executor for my runners, because shell executors install everything on my CI machine...
What I have done so far: I installed docker as described in the GitLab CE docs and ran this command:
gitlab-runner register -n \
--url DOMAIN \
--registration-token TOKEN \
--executor docker \
--description "docker-builder" \
--docker-image "gliderlabs/alpine" \
--docker-privileged
Then I created a test repo to check whether it is working, with this .gitlab-ci.yml:
variables:
  # GIT_STRATEGY: fetch # re-uses the project workspace
  GIT_CHECKOUT: "false" # don't checkout the working copy to a revision related to the CI pipeline
  GIT_DEPTH: "3"

cache:
  paths:
    - node_modules/

stages:
  - deploy

before_script:
  - apt-get update
  - apt-get install -y -qq sshpass
  - ls -la

# ======================= Jobs =======================
# Temporarily disable jobs by adding a . (dot) before the job name

ftp-upload:
  stage: deploy
  # environment: Production
  except:
    - testing
  script:
    - rm ./package-lock.json
    - npm install
    - ls -la
    - sshpass -V
    - export SSHPASS=$PASSWORD
    - sshpass -e scp -o stricthostkeychecking=no -r . $USERNAME@$HOST:/Test
  only:
    - master
# ===================== ./Jobs ======================
but I get an error in the GitLab CI console:
Running with gitlab-runner 11.1.0 (081978aa)
on docker-builder 5ce3c211
Using Docker executor with image gliderlabs/alpine ...
Pulling docker image gliderlabs/alpine ...
Using docker image sha256:74a78e860d7b39aa694197a70d4467019b611b80c21d886fcd1bfc04d2e767d4 for gliderlabs/alpine ...
Running on runner-5ce3c211-project-3-concurrent-0 via srvvgit001...
Cloning repository for master with git depth set to 3...
Cloning into '/builds/additive/test'...
Skipping Git checkout
Skipping Git submodules setup
Checking cache for default...
Successfully extracted cache
/bin/sh: eval: line 64: apt-get: not found
$ apt-get update
ERROR: Job failed: exit code 127
I don't know much about those Docker containers, but they seem good for reuse without modifying my CI machine. It looks like it is pulling another Alpine image/container, but haven't I told the GitLab runner to use an existing one?
Hopefully there is someone who can explain to me how this works... I have really tried everything Google gave me.
The Docker image you are using is an Alpine image, which is a minimal Linux distribution.
Alpine Linux does not use apt for package management, but apk.
The problem is in your .gitlab-ci.yml's before_script section, where you are trying to run apt-get.
To solve your issue, replace the use of apt-get with apk:
before_script:
  - apk update
  - apk add sshpass
  ...
Read more about the Alpine Linux package management here.
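An alternative (not part of the original answer): since this particular job only needs npm and sshpass, you could instead point it at a Debian-based Node image, so the existing apt-get lines work unchanged; for example:

ftp-upload:
  image: node:16-bullseye   # Debian-based Node image, so apt-get is available (tag is just an example)
  stage: deploy
  before_script:
    - apt-get update
    - apt-get install -y -qq sshpass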

How do I run an ARM-based Docker container in Circle CI?

I have a docker image that contains all ARM binaries, except for a statically-linked x86 QEMU executable. It was specifically designed for doing ARM builds while on x86 hardware.
The base image is show0k/miniconda-armv7. Since I don't use Conda, but do need Python, I then build atop it with this Dockerfile:
FROM show0k/miniconda-armv7
MAINTAINER savanni@cloudcity.io
RUN [ "cross-build-start" ]
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install python3 python3-pip python3-venv ssh git iputils-ping
RUN [ "cross-build-end" ]
I can start this image perfectly on my machine and even run the build commands.
But when I go to Circle, my container either gets hung up in a queue after "Spin up Environment", or it frequently fails with this error message:
Unexpected preparation error: Error response from daemon: Container d366de1282a32a79bca5265a8a97f573c8949f2838be231abcd234e5694d8d0b is not running (where the container ID is different every time)
This is my Circle configuration file:
---
version: 2
jobs:
  build:
    docker:
      - image: savannidgerinel/arm-python:latest
    working_directory: ~/repo
    steps:
      - run:
          name: test the image
          command: /bin/uname -a
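As a side note (not from the question): CircleCI has since added native Arm resource classes on the machine executor, which sidesteps QEMU entirely. A rough sketch of that route, assuming a 2.1 config:

version: 2.1
jobs:
  build:
    machine:
      image: ubuntu-2004:current   # example machine image
    resource_class: arm.medium     # native Arm runner
    steps:
      - checkout
      - run: uname -a              # should report aarch64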
