I made a simple Dockerfile:
FROM openjdk
EXPOSE 8080
and built an image using:
docker build -t test .
I installed and configured a Docker GitLab CI runner, and now I would like to use this runner with my test image. So I wrote the following .gitlab-ci.yml file:
image: test

run:
  script:
    - echo "Hello world!"
But to my disappointment, the local test image that I can use on my machine was not found.
Running with gitlab-ci-multi-runner 9.4.2 (6d06f2e)
on martin-docker-rawip (70747a61)
Using Docker executor with image test ...
Using docker image sha256:fa91c6ea64ce4b9b44672c6e56eed8312d0ec2afc80730cbee7754bc448ea22b for predefined container...
Pulling docker image test ...
ERROR: Job failed: Error response from daemon: repository test not found: does not exist or no pull access
I do not even know what is going on anymore. How can I make the runner aware of this image that I made?
I had the same question, and I found the answer here: https://forum.gitlab.com/t/runner-cant-use-local-docker-images/5507/6
Add the following to /etc/gitlab-runner/config.toml:
[runners.docker]
# more config for the runner here...
pull_policy = "if-not-present"
More info here: https://docs.gitlab.com/runner/executors/docker.html#how-pull-policies-work
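For context, a minimal sketch of where this setting lives in a full runner entry (the name, URL, and token below are placeholders, not values from this setup):
concurrent = 1

[[runners]]
  name = "my-docker-runner"
  url = "https://gitlab.example.com/"
  token = "TOKEN"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    pull_policy = "if-not-present"
If the change does not seem to apply, restarting the runner service (gitlab-runner restart) forces a config reload.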
My Dockerfile:
FROM node:latest
RUN apt-get update -y && apt-get install openssh-client rsync -y
On the runner I build the image:
docker build -t node_rsync .
The .gitlab-ci.yml in the project using this runner:
image: node_rsync

job:
  stage: deploy
  before_script:
    # now in the custom docker image
    #- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - mkdir -p ~/.ssh
    - eval $(ssh-agent -s)
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - ssh-add <(tr '#' '\n' <<< "$STAGING_PRIVATE_KEY" | base64 --decode)
    # now in the custom docker image
    #- apt-get install -y rsync
  script:
    - rsync -rav -e ssh --exclude='.git/' --exclude='.gitlab-ci.yml' --delete-excluded ./ $STAGING_USER@$STAGING_SERVER:./deploy/
  only:
    - master
  tags:
    - ssh
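Since the pull policy is if-not-present, the job can only use the image if it already exists on the runner host. A quick sanity check (assuming shell access to that host):
docker images node_rsync
If this prints no rows, the runner will fall back to pulling the image, and fail just like in the question above.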
I'm new to building Docker images in GitLab CI and keep getting an "error during connect" error. I set up my GitLab pipeline to build a Docker image and push it to AWS.
Dockerfile
FROM python:3-alpine
RUN apk add --update git bash curl unzip zip openssl make
ENV TERRAFORM_VERSION="0.12.28"
RUN curl https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip > terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /bin && \
rm -f terraform_${TERRAFORM_VERSION}_linux_amd64.zip
RUN pip install awscli boto3
ENTRYPOINT ["terraform"]
.gitlab-ci.yml
variables:
  DOCKER_REGISTRY: *.dkr.ecr.eu-west-2.amazonaws.com
  AWS_DEFAULT_REGION: eu-west-2
  APP_NAME: mytestbuild
  DOCKER_HOST: tcp://thedockerhost:2375/

# publish script
publish:
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID
When I push the file up to GitLab and the script begins to run, it fails with this error:
error during connect: Post
"http://thedockerhost:2375/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=854124157125.dkr.ecr.eu-west-2.amazonaws.com%2Fmytestbuild%3A20&target=&ulimits=null&version=1":
dial tcp: lookup thedockerhost on 172.20.0.10:53: no such host
I've tried a few things to sort it out; most of the advice I found relates to using the docker:latest image, but I also found that amazon/aws-cli should work. None of it has worked so far, and I'd appreciate the help.
I have a GitLab repository and am trying to add a CI/CD pipeline to it.
Here is my .yml file:
stages:
  - development-db-migrations
  - development

step-development-db-migrations:
  stage: development-db-migrations
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  before_script:
    - apt-get update -y
    - apt-get upgrade -y
    - apt-get dist-upgrade -y
    - apt-get -y autoremove
    - apt-get clean
    - apt-get -y install zip
    - dotnet tool install --global dotnet-ef
    - export PATH="$PATH:/root/.dotnet/tools"
    - sed -i "s/DB_CONNECTION/$DB_CONNECTION_DEV/g" src/COROI.Web.Host/appsettings.json
  script:
    - echo db migrations started
    - cd src/COROI.EntityFrameworkCore
    - dotnet ef database update
  environment: development
  tags:
    # - CoroiAdmin
  only:
    - main

step-deploy-development:
  stage: development
  image: docker:stable
  services:
    - docker:18.09.7-dind
  before_script:
    - export DOCKER_HOST="tcp://localhost:2375"
    - docker info
    - export DYNAMIC_ENV_VAR=DEVELOPMENT
    - apk update
    - apk upgrade
    - apk add util-linux pciutils usbutils coreutils binutils findutils grep
    - apk add python3 python3-dev python3 py3-pip
    - pip install awscli
  script:
    - echo setting up env $DYNAMIC_ENV_VAR
    - $(aws ecr get-login --no-include-email --region eu-west-2)
    - docker build --build-arg ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT_DEV} --build-arg DB_CONNECTION=${DB_CONNECTION_DEV} --build-arg CORS_ORIGINS=${CORS_ORIGINS_DEV} --build-arg SERVER_ROOT_ADDRESS=${SERVER_ROOT_ADDRESS_DEV} -f src/COROI.Web.Host/Dockerfile -t $ECR_DEV_REPOSITORY_URL:$CI_COMMIT_SHA .
    - docker push $ECR_DEV_REPOSITORY_URL:$CI_COMMIT_SHA
    - cd deployment
    - sed -i -e "s/TAG/$CI_COMMIT_SHA/g" ecs_task_dev.json
    - aws ecs register-task-definition --region $ECS_REGION --cli-input-json file://ecs_task_dev.json >> temp.json
    - REV=`grep '"revision"' temp.json | awk '{print $2}'`
    - aws ecs update-service --cluster $ECS_DEV_CLUSTER --service $ECS_DEV_SERVICE --task-definition $ECS_DEV_TASK --region $ECS_REGION
  environment: development
  tags:
    # - CoroiAdmin
  only:
    - main
At the step
step-deploy-development:
I got this error:
ERROR: Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
after running:
- export DOCKER_HOST="tcp://localhost:2375"
- docker info
Where is my problem and how can I fix it?
By default, the Docker client connects to the local Docker daemon via a Unix socket.
In the deployment job there is this entry, which sets the Docker host env variable before building the image:
before_script:
  - export DOCKER_HOST="tcp://localhost:2375"
To point at a remote Docker host, there are env variables that tell the Docker client which Docker daemon to connect to. The main one is DOCKER_HOST: when it is defined, the Docker client connects to the daemon it specifies instead of the local socket.
Read this guide https://linuxhandbook.com/docker-remote-access/ for further info.
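Applied to the pipeline above, a sketch of the fix (assuming the runner uses the Docker executor): with the docker:dind service, the daemon is reachable under the service's hostname docker, the default alias GitLab gives that service, not localhost, so point DOCKER_HOST at it:
before_script:
  # point the client at the dind service container, not localhost
  - export DOCKER_HOST="tcp://docker:2375"
  - docker info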
I run automated tests on GitLab CI with a GitLab runner. Everything works well except the reports: after the tests run, the JUnit reports are not updated and always show the same passed and failed tests, even though the console output shows a different number of passed tests.
GitLab script:
stages:
  - build
  - test

docker-build-master:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker build ./AutomaticTests --pull -t "dockerImage"
    - docker image tag dockerImage xxx/dockerImage:0.0.1
    - docker push "xxx/dockerImage:0.0.1"

test:
  image: docker:latest
  services:
    - docker:dind
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker run "xxx/dockerImage:0.0.1"
  artifacts:
    when: always
    paths:
      - AutomaticTests/bin/Release/artifacts/test-result.xml
    reports:
      junit:
        - AutomaticTests/bin/Release/artifacts/test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /AutomaticTests
WORKDIR /AutomaticTests
RUN apt-get update -y
RUN apt-get install -y unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /AutomaticTests
RUN chmod 777 /AutomaticTests
CMD dotnet vstest /Parallel AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=..\artifacts\test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
I had a similar issue when using docker-in-docker for my GitLab pipeline. You run your tests inside your container; therefore, the test results are stored inside your "container-under-test". However, the gitlab-ci paths reference not the "container-under-test" but the outer container of your docker-in-docker environment.
You could try to copy the test results from the image directly to your outside container via something like this:
mkdir reports
docker cp $(docker create --rm DOCKER_IMAGE):/ABSOLUTE/FILEPATH/IN/DOCKER/CONTAINER reports/.
So, in your case, this would be something like the following (untested!):
...
test:
  image: docker:latest
  services:
    - docker:dind
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - mkdir reports
    - docker cp $(docker create --rm xxx/dockerImage:0.0.1):/AutomaticTests/bin/Release/artifacts/test-result.xml reports/.
  artifacts:
    when: always
    reports:
      junit:
        - reports/test-result.xml
...
Also, see this post for further explanation of the docker cp command: https://stackoverflow.com/a/59055906/6603778
Keep in mind, that docker cp requires an absolute path to the file you want to copy from your container.
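If the intermediate container created by docker create lingers on the runner, an explicit variant with cleanup is (a sketch, untested in this pipeline):
- id=$(docker create xxx/dockerImage:0.0.1)   # create a stopped container from the image
- docker cp "$id":/AutomaticTests/bin/Release/artifacts/test-result.xml reports/
- docker rm "$id"                             # remove the stopped container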
I've now tried for several days to get a runner working in a Docker container. I have a Debian system with GitLab, gitlab-runner, and docker installed. I want to use Docker as the container for my runners, because the shell executor installs everything directly on my CI machine...
What I have done so far: I installed docker as described in the GitLab CE docs and ran this command:
gitlab-runner register -n \
  --url DOMAIN \
  --registration-token TOKEN \
  --executor docker \
  --description "docker-builder" \
  --docker-image "gliderlabs/alpine" \
  --docker-privileged
Then I created a test repo to check whether it works, with this .gitlab-ci.yml:
variables:
  # GIT_STRATEGY: fetch # re-uses the project workspace
  GIT_CHECKOUT: "false" # don't checkout the working copy to a revision related to the CI pipeline
  GIT_DEPTH: "3"

cache:
  paths:
    - node_modules/

stages:
  - deploy

before_script:
  - apt-get update
  - apt-get install -y -qq sshpass
  - ls -la

# ======================= Jobs =======================
# Temporarily disable jobs by adding a . (dot) before the job name
ftp-upload:
  stage: deploy
  # environment: Production
  except:
    - testing
  script:
    - rm ./package-lock.json
    - npm install
    - ls -la
    - sshpass -V
    - export SSHPASS=$PASSWORD
    - sshpass -e scp -o stricthostkeychecking=no -r . $USERNAME@$HOST:/Test
  only:
    - master
# ===================== ./Jobs ======================
But I get an error in the GitLab CI console:
Running with gitlab-runner 11.1.0 (081978aa)
on docker-builder 5ce3c211
Using Docker executor with image gliderlabs/alpine ...
Pulling docker image gliderlabs/alpine ...
Using docker image sha256:74a78e860d7b39aa694197a70d4467019b611b80c21d886fcd1bfc04d2e767d4 for gliderlabs/alpine ...
Running on runner-5ce3c211-project-3-concurrent-0 via srvvgit001...
Cloning repository for master with git depth set to 3...
Cloning into '/builds/additive/test'...
Skipping Git checkout
Skipping Git submodules setup
Checking cache for default...
Successfully extracted cache
/bin/sh: eval: line 64: apt-get: not found
$ apt-get update
ERROR: Job failed: exit code 127
I don't know much about these Docker containers, but they seem good for reuse without modifying my CI system. It looks like it is pulling another Alpine image/container, but haven't I told the GitLab runner to use an existing one?
Hopefully someone can explain to me how this works... I have really tried everything Google gave me.
The Docker image you are using is an Alpine image, which is a minimal Linux distribution.
Alpine Linux does not use apt for package management but apk.
The problem is in your .gitlab-ci.yml's before_script section, where you are trying to run apt.
To solve your issue, replace the use of apt with apk:
before_script:
  - apk update
  - apk add sshpass
  ...
Read more about Alpine Linux package management here.
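As a usage note, a common Alpine idiom combines both steps with the --no-cache flag, which fetches the package index on the fly and keeps the image small:
before_script:
  - apk add --no-cache sshpass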
I want to use GitLab runners to deploy a successfully built Docker image, but I am not sure how to use the deploy stage in .gitlab-ci.yml to do this. The build log shows the database is properly created in the Docker image during the build process.
I use Docker locally on a Mac (OSX 10.11.6) to build my Docker container. GitLab is running remotely. I registered a specific local runner to handle the build. When I push changes to my project, GitLab CI runs the build script to create a test database. What happens to the image after it's built? There is no Docker image for the completed build listed on my local machine. The gitlab-runner-prebuilt-x86_64 image is a barebones Linux image that isn't connected with the build.
https://docs.gitlab.com/ce/ci/docker/using_docker_build.html
http://container-solutions.com/running-docker-in-jenkins-in-docker/
> gitlab-ci-multi-runner list
Listing configured runners    ConfigFile=/Users/username/.gitlab-runner/config.toml
local-docker-executor         Executor=docker Token=[token] URL=http://gitlab.url/ci

> docker images
REPOSITORY                      TAG       IMAGE ID   CREATED        SIZE
gitlab-runner-prebuilt-x86_64   f6fdece   [id1]      25 hours ago   50.87 MB
php7                            latest    [id2]      26 hours ago   881.8 MB
ubuntu                          latest    [id3]      13 days ago    126.6 MB
docker                          latest    [id4]      2 weeks ago    104.9 MB
.gitlab-ci.yml:
image: php7:latest

# build_image:
#   script:
#     - docker build -t php7 .

# Define commands that run before each job's script
# before_script:
#   - docker info

# Define build stages
# First, all jobs of build are executed in parallel.
# If all jobs of build succeed, the test jobs are executed in parallel.
# If all jobs of test succeed, the deploy jobs are executed in parallel.
# If all jobs of deploy succeed, the commit is marked as success.
# If any of the previous jobs fails, the commit is marked as failed and no jobs of further stages are executed.
stages:
  - build
  - test
  - deploy

variables:
  db_name: db_test
  db_schema: "db_test_schema.sql"

build_job1:
  stage: build
  script:
    - service mysql start
    - echo "create database $db_name" | mysql -u root
    - mysql -u root $db_name < $db_schema
    - mysql -u root -e "show databases; use $db_name; show tables;"
    #- echo "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('root');" | mysql -u root
    #- echo "run unit test command here"
  # Defines a list of tags which are used to select a runner
  tags:
    - docker

deploy_job1:
  stage: deploy
  # this script is run inside the docker container
  script:
    - whoami
    - pwd
    - ls -la
    - ls /
    # Usage: docker push [OPTIONS] NAME[:TAG]
    # Push an image or a repository to a registry
    - docker push deploy:latest
  # gitlab runners will look for and run jobs with these tags
  tags:
    - docker
config.toml:
concurrent = 1
check_interval = 0

[[runners]]
  name = "local-docker-executor"
  url = "http://gitlab.url/ci"
  token = "[token]"
  executor = "docker"
  builds_dir = "/Users/username/DOCKER_BUILD_DIR"
  [runners.docker]
    tls_verify = false
    image = "ubuntu:latest"
    privileged = false
    disable_cache = false
    volumes = ["/cache"]
  [runners.cache]
Dockerfile:
FROM ubuntu:latest
#https://github.com/sameersbn/docker-mysql/blob/master/Dockerfile
ENV DEBIAN_FRONTEND noninteractive
ENV MYSQL_USER mysql
ENV MYSQL_DATA_DIR /var/lib/mysql
ENV MYSQL_RUN_DIR /run/mysqld
ENV MYSQL_LOG_DIR /var/log/mysql
ENV DB_NAME "db_test"
ENV DB_IMPORT "db_test_schema.sql"
# RUN apt-get update && \
# apt-get -y install sudo
# RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
# USER docker
# CMD /bin/bash
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
# \
# && rm -rf ${MYSQL_DATA_DIR} \
# && rm -rf /var/lib/apt/lists/*
ADD ${DB_IMPORT} /tmp/${DB_IMPORT}
# #RUN /usr/bin/sudo service mysql start \
# RUN service mysql start \
# && mysql -u root -e "CREATE DATABASE $DB_NAME" \
# && mysql -u root $DB_NAME < /tmp/$DB_IMPORT
RUN locale-gen en_US.UTF-8 \
&& export LANG=en_US.UTF-8 \
&& apt-get update \
&& apt-get -y install apache2 libapache2-mod-php7.0 php7.0 php7.0-cli php-xdebug php7.0-mbstring php7.0-mysql php-memcached php-pear php7.0-dev php7.0-json vim git-core libssl-dev libsslcommon2-dev openssl libssl-dev \
&& a2enmod headers
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
RUN ln -sf /dev/stdout /var/log/apache2/access.log && \
ln -sf /dev/stderr /var/log/apache2/error.log
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
#VOLUME [ "/var/www/html" ]
WORKDIR /var/www/html
EXPOSE 80 3306
#ENTRYPOINT [ "/usr/sbin/apache2" ]
#CMD ["-D", "FOREGROUND"]
#ENTRYPOINT ["/bin/bash"]
You're not building any Docker image on CI.
You are using the php7 image from DockerHub to execute all jobs. This includes the job deploy_job1, which is trying to use the docker binary to push an image (deploy:latest) that is not inside that container. Additionally, I think the docker binary is not even included in the php7 image.
I guess that you want to push the image that you built locally on your Mac, right? In that case, you need to use another runner, whose executor should be shell. In that scenario, you will have 2 runners: one using docker to run the build_job1 job, and another one to push the locally built image. But there is a better solution than building the Docker image manually: have GitLab CI build it.
So, modifying your .gitlab-ci.yml (removing your comments, adding mine for explanation):
# Removed global image definition
stages:
  - build
  - test
  - deploy

variables:
  db_name: db_test
  db_schema: "db_test_schema.sql"

build_job1:
  stage: build
  # Use image ONLY in docker runner
  image: php7:latest
  script:
    - service mysql start
    - echo "create database $db_name" | mysql -u root
    - mysql -u root $db_name < $db_schema
    - mysql -u root -e "show databases; use $db_name; show tables;"
  # Run on runner with docker executor, this is ok
  tags:
    - docker

deploy_job1:
  stage: deploy
  script:
    # Build the docker image first, and then push it
    - docker build -t deploy:latest .
    - docker push deploy:latest
  # Run on runner with shell executor, set proper tag
  tags:
    - docker_builder
When you register the new runner, set the executor to shell and the tag to docker_builder. I'm assuming that you have the Docker engine installed on your Mac.
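For reference, registering that second runner could look roughly like this (a sketch mirroring the register command shown in an earlier question above; URL and token are placeholders):
gitlab-runner register -n \
  --url http://gitlab.url/ \
  --registration-token TOKEN \
  --executor shell \
  --description "shell-docker-builder" \
  --tag-list "docker_builder"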
On the other hand, this example makes no sense, at least to me. The build stage does nothing persistent, as the container is ephemeral. I guess you should do that in the Dockerfile instead.
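For example, based on the lines already commented out in your Dockerfile, moving the database setup into the image build would look roughly like this (untested; starting a service inside a RUN step is fragile, since each RUN executes in its own layer):
RUN service mysql start \
    && mysql -u root -e "CREATE DATABASE $DB_NAME" \
    && mysql -u root $DB_NAME < /tmp/$DB_IMPORT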