I have this .gitlab-ci.yml file to automate the Docker image build; it's basically the one from the template:
docker-build:
  image: my_image_build_with_docker_inside_inprivate_repo
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag=""
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = 'latest'"
      else
        tag=":$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"
  # Run this job in a branch where a Dockerfile exists
  rules:
    - if: $CI_COMMIT_BRANCH
      exists:
        - Dockerfile
# $CI_REGISTRY_IMAGE = my_image_build_with_docker_inside_inprivate_repo
When I run it, I get this error:
$ docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGIST
/bin/bash: line 132: docker: command not found
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
With RUN apt-get install -y docker-compose I get everything needed for Docker installed in the Debian image, but I still get the 'docker: command not found' error above. What other steps are needed to run the Docker daemon from a custom image?
Update: pushing images with Docker installed is not allowed from the private repository either. It looks like I have to use something called kaniko. Any good resources for this?
You can try installing Docker in the Dockerfile used to build your custom image.
You can follow the steps defined in the official docs (https://docs.docker.com/engine/install/debian/), which would look something like this in your Dockerfile:
# Install prerequisites for adding Docker's apt repository
RUN apt-get update && apt-get install -y \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
# Add Docker's official GPG key (no sudo needed; RUN already executes as root)
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
RUN echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
    $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update -y
RUN apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
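Regarding the kaniko update in the question: below is a minimal sketch of a kaniko-based build job, following the pattern from GitLab's kaniko documentation (the job name, stage, and destination tag are placeholders to adapt):
build-kaniko:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # kaniko reads registry credentials from this config file instead of docker login
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf '%s:%s' "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    # build and push in one step, with no Docker daemon and no dind service
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}"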
I have a rootless Docker host, Jenkins on Docker, and a FastAPI app inside a container as well.
Jenkins Dockerfile:
FROM jenkins/jenkins:lts-jdk11
USER root
RUN apt-get update && \
    apt-get -y install apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
        $(lsb_release -cs) \
        stable" && \
    apt-get update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
This is the docker run command:
docker run -d --name jenkins-docker --restart=on-failure -v jenkins_home:/var/jenkins_home -v /run/user/1000/docker.sock:/var/run/docker.sock -p 8080:8080 -p 5000:5000 jenkins-docker-image
Where -v /run/user/1000/docker.sock:/var/run/docker.sock is used so jenkins-docker can use the host's docker engine.
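A quick sanity check for the socket mount (assuming the docker CLI is installed in the image, which the Dockerfile above does) is to run a Docker command from inside the container:
# the Server section should show the host's rootless engine via the mounted socket
docker exec -it jenkins-docker docker version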
Then, for the tests I have a docker compose file:
services:
  app:
    volumes:
      - /home/jap/.local/share/docker/volumes/jenkins_home/_data/workspace/vlep-pipeline_main/test-result:/usr/src
    depends_on:
      - testdb
    ...
  testdb:
    image: postgres:14-alpine
    ...
volumes:
  test-result:
Here I am using the volume created on the host when I ran the jenkins-docker image. After running the Jenkins 'test' stage, I can see that a report.xml file was created inside both the host and jenkins-docker volumes.
Inside jenkins-docker
root@89b37f219be1:/var/jenkins_home/workspace/vlep-pipeline_main/test-result# ls
report.xml
Inside host
jap@jap:~/.local/share/docker/volumes/jenkins_home/_data/workspace/vlep-pipeline_main/test-result $ ls
report.xml
I then have the following steps in my Jenkinsfile:
steps {
    sh 'docker compose -p testing -f docker/testing.yml up -d'
    junit "/var/jenkins_home/workspace/vlep-pipeline_main/test-result/report.xml"
}
I also tried using the host path for the junit step, but either way I get this in the Jenkins logs:
Recording test results
No test report files were found. Configuration error?
What am I doing wrong?
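Two things worth checking (suggestions only, since no answer is recorded here): docker compose up -d detaches immediately, so report.xml may not exist yet when junit runs, and the junit step matches Ant-style patterns relative to the workspace rather than absolute paths. A sketch of the adjusted steps, assuming the app service is the one writing the report:
steps {
    // wait for the test container to exit instead of detaching with -d
    sh 'docker compose -p testing -f docker/testing.yml up --exit-code-from app'
    // junit patterns are resolved relative to the workspace
    junit 'test-result/report.xml'
}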
Our Dockerfile has the following lines:
# Installing google cloud SDK for gsutil
RUN apt-get update && \
    echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && \
    apt-get update -y && \
    apt-get install google-cloud-sdk -y
When we launch a docker container locally from this image, and docker exec -it containerID bash into the container, we get:
airflow@containerID:~$ gsutil --version
gsutil version: 4.65
When we launch a docker container on our GCP compute engine from this image, and docker exec -it containerID bash into the container, we get:
airflow@containerID:~$ gsutil --version
bash: gsutil: command not found
I thought the whole point of Docker and Dockerfiles was to avoid this exact issue of something working locally but not in production... We're at a loss for how to even debug this.
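One place to start debugging (a suggestion, not from the thread): confirm that both environments actually run the same image, and check whether the binary exists but is simply not on the user's PATH:
# compare the image ID each container was started from; they should match
docker inspect --format '{{.Image}}' containerID
# check whether gsutil is present in the container at all, or just not on PATH
docker exec -it containerID bash -c 'command -v gsutil || ls -l /usr/bin/gsutil'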
As I understand it, a GitHub Action can also be implemented as a (Linux) Docker image that performs a certain task.
For example, the GitHub Action Azure/static-web-apps-deploy@v0.0.1-preview uses the Docker image mcr.microsoft.com/appsvc/staticappsclient:stable to deploy the project to an Azure Static Web App.
Is there a way to call/execute/run this Docker image in a YAML DevOps Pipeline?
Container jobs in Azure DevOps pipelines are probably what you are looking for.
When you specify a container in your pipeline, the agent will first fetch and start the container; each step of the job then runs inside it. So you can set the Docker image as the container. See the examples below.
To run a job in a container:
pool:
  vmImage: 'ubuntu-18.04'
container: mcr.microsoft.com/appsvc/staticappsclient:stable
steps:
- script: printenv
To run a certain task in a container:
resources:
  containers:
  - container: staticappsclient
    image: mcr.microsoft.com/appsvc/staticappsclient:stable
steps:
- task: SampleTask@1
  target: host
- task: AnotherTask@1
  target: staticappsclient # this task will run in the container
Update:
To run a sudo command inside a container that does not have sudo preinstalled, follow the steps below:
1. Name the container by defining the --name parameter in options: "--name ci-container -v /usr/bin/docker:/tmp/docker:ro"
2. Add a script task at the top of your YAML pipeline to install sudo:
- script: |
    /tmp/docker exec -t -u 0 ci-container \
    sh -c "apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::="--force-confold" -y install sudo"
3. Run the scripts to install dotnet 3.1 in the following script task. See the full YAML example below:
resources:
  containers:
  - container: staticappsclient
    image: mcr.microsoft.com/appsvc/staticappsclient:stable
    options: "--name ci-container -v /usr/bin/docker:/tmp/docker:ro"
container: staticappsclient
steps:
- script: |
    /tmp/docker exec -t -u 0 ci-container \
    sh -c "apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::="--force-confold" -y install sudo"
- script: |
    wget -O - https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.asc.gpg
    sudo mv microsoft.asc.gpg /etc/apt/trusted.gpg.d/
    wget https://packages.microsoft.com/config/debian/9/prod.list
    sudo mv prod.list /etc/apt/sources.list.d/microsoft-prod.list
    sudo chown root:root /etc/apt/trusted.gpg.d/microsoft.asc.gpg
    sudo chown root:root /etc/apt/sources.list.d/microsoft-prod.list
    sudo apt-get install -y apt-transport-https && \
    sudo apt-get update && \
    sudo apt-get install -y dotnet-sdk-3.1
See this thread for more information.
I'm trying to build an image for the ppc64le platform via Docker Buildx and BuildKit on our enterprise Travis CI instance.
.travis.yml:
os: linux
dist: bionic
language: shell
branches:
  only:
    - master
before_install:
  - set -e
  # Configure environment so changes are picked up when the Docker daemon is restarted after upgrading
  - echo '{"experimental":true}' | sudo tee /etc/docker/daemon.json
  - export DOCKER_CLI_EXPERIMENTAL=enabled
  - sudo rm -rf /var/lib/apt/lists/*
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) edge"
  - sudo apt-get update
  - sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce
  - mkdir -vp ~/.docker/cli-plugins/
  - curl --silent -L "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-amd64" > ~/.docker/cli-plugins/docker-buildx
  - chmod a+x ~/.docker/cli-plugins/docker-buildx
jobs:
  include:
    - stage: build and push docker image
      script:
        - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
        - sudo docker buildx build --platform linux/ppc64le --tag myimage:ppc64le -f src/main/docker/Dockerfile.ppc64 --push .
Build will fail on error:
$ sudo docker buildx build --platform linux/ppc64le --tag myimage:ppc64le -f src/main/docker/Dockerfile.ppc64 --push .
unknown flag: --platform
See 'docker --help'.
Usage: docker [OPTIONS] COMMAND
It looks like the Buildx extension is not enabled, even though docker info shows that experimental_cli is enabled.
Any ideas about how to enable buildx on Travis?
I'm having trouble myself, but the Docker blog post on multi-arch builds with Travis states that you need to install the buildx plugin: https://www.docker.com/blog/multi-arch-build-what-about-travis/
From what I can see, these are the relevant lines for before_install:
- mkdir -vp ~/.docker/cli-plugins/
- curl --silent -L "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-amd64" > ~/.docker/cli-plugins/docker-buildx
- chmod a+x ~/.docker/cli-plugins/docker-buildx
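One more thing worth checking (an observation on the question's config, not part of the original answer): the failing command runs under sudo, so root's docker CLI sees neither the plugin installed under ~/.docker/cli-plugins for the travis user nor the exported DOCKER_CLI_EXPERIMENTAL variable. Either install the plugin for root as well, or drop sudo from the build:
# in before_install: make the plugin visible to root's CLI, too
- sudo mkdir -p /root/.docker/cli-plugins/
- sudo cp ~/.docker/cli-plugins/docker-buildx /root/.docker/cli-plugins/
# or, in the job script, run without sudo so the user's plugin and environment apply
- docker buildx build --platform linux/ppc64le --tag myimage:ppc64le -f src/main/docker/Dockerfile.ppc64 --push .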
I just tried to build my test image for a Jenkins course and got this issue:
+ docker build -t nginx_lamp_app .
/var/jenkins_home/jobs/docker-test/workspace@tmp/durable-d84b5e6a/script.sh: 2: /var/jenkins_home/jobs/docker-test/workspace@tmp/durable-d84b5e6a/script.sh: docker: not found
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
But I've already configured the Docker socket in the docker-compose file for Jenkins, like this:
version: "2"
services:
jenkins:
image: "jenkins/jenkins:lts"
ports:
- "8080:8080"
restart: "always"
volumes:
- "/var/jenkins_home:/var/jenkins_home"
- "/var/run/docker.sock:/var/run/docker.sock"
But when I attach to the container, I also get "docker: not found" when I type the "docker" command...
I've also changed the socket's permissions to 777.
What can be wrong?
Thanks!
You are trying to achieve a Docker-in-Docker kind of setup. Mounting just the Docker socket will not make it work as you expect: you need to install the docker binary into the image as well. You can do this either by extending your Jenkins image/Dockerfile, or by creating (docker commit) a new image after installing the docker binary into a container, and then using that image for your CI/CD. Try to integrate the RUN statement below into the extended Dockerfile or the container to be committed (it should work on Ubuntu-based images):
RUN apt-get update && \
    apt-get -y install apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
        $(lsb_release -cs) \
        stable" && \
    apt-get update && \
    apt-get -y install docker-ce
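With that in place, the compose file from the question can build and run the extended image instead of the stock jenkins/jenkins:lts; a minimal sketch (assuming the extended Dockerfile sits in the build context):
version: "2"
services:
  jenkins:
    build: .  # Dockerfile extending jenkins/jenkins:lts with the RUN above
    ports:
      - "8080:8080"
    restart: "always"
    volumes:
      - "/var/jenkins_home:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"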
Ref - https://github.com/jpetazzo/dind
PS - It isn't really recommended (http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/)
Adding to that, you shouldn't mount the host's docker binary inside the container:
⚠️ Former versions of this post advised to bind-mount the docker
binary from the host to the container. This is not reliable anymore,
because the Docker Engine is no longer distributed as (almost) static
libraries.