Azure pipeline: unable to start cross-built docker image

I have pushed a linux/arm64 image to a Docker registry and I am trying to use this image with the container tag in an Azure pipeline. The image seems to be pulled correctly, but it cannot be executed because the virtual machine is ubuntu-20.04, which is a different architecture (linux/amd64). On my local computer, before executing this Docker image I simply need to run the following command:
docker run --privileged --rm tonistiigi/binfmt:qemu-v6.2.0 --install all
However, I can't seem to find a way to run an emulator before the container tries to execute in the Azure job.
Here is the Azure pipeline that I am trying to run:
resources:
  containers:
  - container: build_container_arm64
    image: my_arm_image
    endpoint: my_endpoint
jobs:
- job:
  pool:
    vmImage: ubuntu-20.04
  timeoutInMinutes: 240
  container: build_container_arm64
  steps:
  - bash: |
      echo "Hello world"
I am wondering if there is a way that I could install or run an emulator before the container tries to execute.
Thanks
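One possible workaround sketch (untested, and not an official approach): skip the container resource for this job and drive Docker directly from script steps, so the QEMU handlers can be registered on the amd64 host before the arm64 image runs. The image name is the one from the question; registry authentication for it is not handled here.
jobs:
- job: run_arm64_image
  pool:
    vmImage: ubuntu-20.04
  timeoutInMinutes: 240
  steps:
  # Register QEMU binfmt handlers on the amd64 host (same command as used locally)
  - script: docker run --privileged --rm tonistiigi/binfmt:qemu-v6.2.0 --install all
    displayName: Install QEMU emulators
  # Run the arm64 image explicitly instead of letting the job start it as a container resource
  - script: docker run --rm my_arm_image echo "Hello world"
    displayName: Run command inside arm64 image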

Related

start docker container from within self hosted bitbucket pipeline (dind)

I work on a Spring Boot based project and use a local machine as a test environment, where I deploy it as a Docker container.
I am in the middle of creating a Bitbucket pipeline that automates everything from building to deploying. For this pipeline I make use of a self-hosted runner (Docker) that runs on the same machine and Docker instance where I plan to deploy my project.
I managed to successfully build the project (mvn and docker) and push the Docker image to my GCP container registry.
My final deployment step (docker run xxx, see the yml script below) was also successful, but since the step itself runs in a container, the command was not executed against the top-level (host) Docker.
As far as I understand, the runner itself has access to the host Docker because docker.sock is mounted, but for each step another container is created which does not have access to docker.sock, right? So basically I need to know how to give the steps access to this file, unless there's a better solution (see the illustrative socket-mount command after the pipeline definition below).
Here is the shortened pipeline definition:
image: maven:3.8.7-openjdk-18
definitions:
  services:
    docker:
      image: docker:dind
pipelines:
  default:
    # build only for feature branches or so
  branches:
    test:
      # build, docker and upload steps
      - step:
          name: Deploy
          deployment: test
          image: google/cloud-sdk:alpine
          runs-on:
            - 'self.hosted'
            - 'linux'
          caches:
            - docker
          script:
            - IMAGE_NAME=$BITBUCKET_REPO_SLUG
            - VERSION="${BITBUCKET_BUILD_NUMBER}"
            - DOCKER_IMAGE="${DOCKER_REGISTRY}/${IMAGE_NAME}:${VERSION}"
            # Authenticating with the service account key file
            - echo $GCLOUD_API_KEYFILE > ./gcloud-api-key.json
            - gcloud auth activate-service-account --key-file gcloud-api-key.json
            - gcloud config set project $GCLOUD_PROJECT
            # Login with docker and stop old container (if exists) and run new one
            - cat ./gcloud-api-key.json | docker login -u _json_key --password-stdin https://eu.gcr.io
            - docker ps -q --filter "name=${IMAGE_NAME}" | xargs -r docker stop
            - docker run -d -p 82:8080 -p 5005:5005 --name ${IMAGE_NAME} --rm ${DOCKER_IMAGE}
          services:
            - docker
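For reference, the socket-mount pattern referred to above looks like this when run by hand; this is only an illustration of the concept, and docker:cli is just an example client image:
# any container that has the docker CLI and the host's /var/run/docker.sock mounted
# talks to the host daemon rather than to a nested one
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps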

Jenkins job unstable after adding Ansible playbook command

I successfully connected my Jenkins server to my Ansible server/control node, and it built a stable, successful job after I made a change to my GitHub repo as well. But then I created an Ansible playbook to build and tag a Docker image and push it to Docker Hub, added 'ansible-playbook regapp.yml' to the Ansible configuration on my Jenkins server, and it gives this error:
For context, this is the playbook I created:
- hosts: ansible
  tasks:
    - name: create docker image
      command: docker build -t regapp:v2 .
      args:
        chdir: /opt/docker
    - name: create tag to push image onto dockerhub
      command: docker tag regapp:v2 codestein/regapp:v2
    - name: push docker image
      command: docker push codestein/regapp:v2
Things I've tried:
Resetting my Docker installation: I did rm -rf /var/lib/docker and then systemctl restart docker.
Changed webapp/target/webapp.war in the Jenkins config to '**/*.war' and vice versa.
Gave ansadmin:ansadmin ownership/access of the /opt/docker directory.
Not sure how else to fix the issue. I'd appreciate any suggestions.
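For what it's worth, the same build/tag/push sequence can also be expressed with the community.docker collection instead of raw command tasks. This is only a sketch, not a diagnosis of the error above; it assumes the collection, the Docker SDK for Python, and Docker Hub credentials are already in place on the target host:
# assumes: ansible-galaxy collection install community.docker,
# the Docker SDK for Python on the host, and an existing docker login for the push
- hosts: ansible
  tasks:
    - name: build, tag and push regapp image
      community.docker.docker_image:
        name: codestein/regapp
        tag: v2
        source: build
        build:
          path: /opt/docker
        push: true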

Is it possible to run a Docker Compose command before a job executes in GitLab CI?

I am new to GitLab CI; it seems GitLab CI is Docker everywhere.
I am trying to run a MariaDB container before running the tests. In GitHub Actions this is very easy: just run docker-compose up -d before my mvn command.
When it came to GitLab CI, I tried to use the following job to achieve the same purpose.
test:
  stage: test
  image: maven:3.6.3-openjdk-16
  services:
    - name: docker
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
      - .m2/repository
  script: |
    docker-compose up -d
    sleep 10
    mvn clean verify sonar:sonar
But this does not work: docker-compose is not found.
You can make use of docker:dind and run the docker commands inside another Docker container.
But there is a limitation when running docker-compose by default. It is recommended to build a custom image on top of DinD and push it to the GitLab image registry, so that it can be used across your jobs.
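A rough sketch of that wiring (the image name is hypothetical; it would need Maven, the Docker CLI and docker-compose baked in, which is what the custom-image suggestion above is about):
test:
  stage: test
  # hypothetical custom image with maven + docker CLI + docker-compose preinstalled
  image: registry.gitlab.com/my-group/maven-docker-compose:latest
  services:
    - docker:dind
  variables:
    # point the docker CLI in the job container at the dind service (no TLS)
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker-compose up -d
    - sleep 10
    - mvn clean verify sonar:sonar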

gitlab-runner locally - No such command sh

I have gitlab-runner installed locally.
km@Karls-MBP ~ $ gitlab-runner --version
Version: 10.4.0
Git revision: 857480b6
Git branch: 10-4-stable
GO version: go1.8.5
Built: Mon, 22 Jan 2018 09:47:12 +0000
OS/Arch: darwin/amd64
Docker:
km@Karls-MBP ~ $ docker --version
Docker version 17.12.0-ce, build c97c6d6
.gitlab-ci.yml:
image: docker/compose:1.19.0
before_script:
  - echo wtf
test:
  script:
    - echo test
Results:
km@Karls-MBP ~ $ sudo gitlab-runner exec docker --docker-privileged test
WARNING: Since GitLab Runner 10.0 this command is marked as DEPRECATED and will be removed in one of upcoming releases
WARNING: You most probably have uncommitted changes.
WARNING: These changes will not be tested.
Running with gitlab-runner 10.4.0 (857480b6)
on ()
Using Docker executor with image docker/compose:1.19.0 ...
Using docker image sha256:be4b46f2adbc8534c7f6738279ebedd6106969695f5e596079e89e815d375d9c for predefined container...
Pulling docker image docker/compose:1.19.0 ...
Using docker image docker/compose:1.19.0 ID=sha256:e06b58ce9de2ea3f11634e022ec814984601ea3a5180440c2c28d9217b713b30 for build container...
Running on runner--project-0-concurrent-0 via x.x.x...
Cloning repository...
Cloning into '/builds/project-0'...
done.
Checking out b5a262c9 as km/ref...
Skipping Git submodules setup
No such command: sh
Commands:
build Build or rebuild services
bundle Generate a Docker bundle from the Compose file
config Validate and view the Compose file
create Create services
down Stop and remove containers, networks, images, and volumes
events Receive real time events from containers
exec Execute a command in a running container
help Get help on a command
images List images
kill Kill containers
logs View output from containers
pause Pause services
port Print the public port for a port binding
ps List containers
pull Pull service images
push Push service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker-Compose version information
I don't really know what the issue is.
It seems that the docker/compose image is configured with docker-compose as its entrypoint.
You can override the default entrypoint of the docker/compose image in your .gitlab-ci.yml file:
image:
  name: docker/compose:1.19.0
  entrypoint: [""]
before_script:
  - echo wtf
test:
  script:
    - echo test
The docker/compose image has the command docker-compose as its entrypoint (until version 1.24.x), which enables a usage similar to this (assuming a compatible volume mount):
docker run --rm -t docker/compose -f some-dir/compose-file.yml up
Unfortunately that same feature makes it incompatible with usage within GitLab CI’s Docker Runner. Theoretically you could have a construct like this:
job-name:
  image: docker/compose:1.24.1
  script:
    - up
    - --build
    - --force-recreate
But the GitLab Docker Runner assumes the entrypoint is /bin/bash - or at least something that functions like it (many Docker images thoughtfully use a shell script with exec "$@" as its final line for the entrypoint) - and from the array elements that you specify for the script, it creates its own temporary shell script on the fly. That script starts with statements like set -e and set -o pipefail and is used in a statement like sh temporary-script.sh as the container command. That's what causes the unexpected error message you got.
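A minimal example of that kind of entrypoint wrapper, purely for illustration of the pattern just described:
#!/bin/sh
# do whatever image-specific setup is needed, then hand control to the
# command the runtime passes in -- the exec "$@" pattern mentioned above
set -e
exec "$@"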
This behaviour was recently documented more clearly:
The Docker executor doesn’t overwrite the ENTRYPOINT of a Docker image.
That means that if your image defines the ENTRYPOINT and doesn’t allow to run scripts with CMD, the image will not work with the Docker executor.
Overriding the entrypoint with [""] will allow usage of docker/compose (before version 1.25.x) with the Docker Runner, but the script that GitLab creates on the fly is not going to run as process 1, and because of that the container will not stop at the end of the script. Example:
job-name:
  image:
    name: docker/compose
    entrypoint: [""]
  script:
    - docker-compose
    - up
    - --build
    - --force-recreate
At the time of writing, the latest version of docker/compose is 1.25.0-rc2. Your mileage may vary, but it suffices for my purposes and entirely resolves both problems.

How do I deploy from GitLab CI to a Google Container Engine instance using Docker?

I am trying to set up automated deployment using a GitLab CI runner to deploy our 4-container app via docker-compose. I can pull the container images down using docker pull commands, but I'm stuck on how to connect to the Google Compute Engine instance in order to run the full docker-compose script.
Typically, from my local machine, I run something like:
eval $(docker-machine env <machine-instance>)
docker-compose up -d
But my .gitlab-ci.yml script doesn't have docker-machine available.
Do I have to install docker-machine via the script section in my .gitlab-ci.yml file?
How do I provision the instance without creating a new one every time? Normally, from my local host, I would run docker-machine create ... once and then just use the eval command above to reconnect to the instance. But how would this work with CI?
Here's a sample of my .gitlab-ci.yml:
deploy staging:
  image: docker:latest
  services:
    - docker:dind
  environment: staging
  stage: deploy
  before_script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN my-registry.githost.io
  script:
    - docker pull my-registry.githost.io/group/project1:develop
    - docker pull my-registry.githost.io/group/project2:develop
    - docker pull my-registry.githost.io/group/project3:develop
    - docker pull my-registry.githost.io/group/project4:develop
    - docker-machine ls
Not sure what you need docker-machine for in this case. You might want to get rid of it.
But to go back to your question, the Docker image you're using comes with neither docker-machine nor docker-compose:
https://github.com/docker-library/docker/blob/36e2107fb879d5d5c3dbb5d8d93aeef0a2d45ac8/1.12/Dockerfile
So you will need to create a new image (or find an existing one) that comes with those two installed.
So in the .gitlab-ci.yml, instead of image: docker:latest, it's going to be something like image: mydocker
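Alternatively (sketch only; versions and download URLs are examples, not verified against your setup), the two tools can be installed at the start of the job instead of baking a custom image:
deploy staging:
  image: docker:latest
  services:
    - docker:dind
  before_script:
    # install docker-compose and docker-machine into the Alpine-based docker image
    - apk add --no-cache curl
    - curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64 -o /usr/local/bin/docker-compose
    - curl -L https://github.com/docker/machine/releases/download/v0.16.2/docker-machine-Linux-x86_64 -o /usr/local/bin/docker-machine
    - chmod +x /usr/local/bin/docker-compose /usr/local/bin/docker-machine
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN my-registry.githost.io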
You may have to install docker-machine in the GitLab CI runner to use it with GCE:
https://docs.docker.com/machine/install-machine/
https://docs.docker.com/machine/drivers/gce/
