Gitlab docker runner problem with path. gitlab-ci.yml - docker

I'm trying to run a pipeline in GitLab using a gitlab-ci.yml file and a runner that can run docker images, but I get an error because the runner cannot find the right path to the Dockerfile.
This is my yml file:
image: docker:latest
services:
  - docker:dind
stages:
  - build
  - test
  - release
variables:
  TEST_IMAGE: 193.206.43.98:5555/apfeed/apserver:$CI_COMMIT_REF_NAME
  RELEASE_IMAGE: 193.206.43.98:5555/ap:latest
before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
build:
  stage: build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE
test:
  stage: test
  services:
    - mongo:bionic
  script:
    - docker pull $TEST_IMAGE
    - docker run $TEST_IMAGE npm test
release:
  stage: release
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
  only:
And this is the error I get
$ docker build --pull -t $TEST_IMAGE .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/gitlab-runner/builds/WsYiLtmC/0/al/apfeed/Dockerfile: no such file or directory
ERROR: Job failed: exit status 1
I tried several different ways of writing the path in the TEST_IMAGE line, but none of them seems to work.

You must have a Dockerfile in the project root directory,
OR
you can pass the relative path to your Dockerfile if it lives in a subdirectory of the project repo,
e.g. docker build --pull -t $TEST_IMAGE -f ./some-dir/Dockerfile .
where some-dir is the directory inside your project repo in which the Dockerfile is located.
The project repo is cloned into CI_PROJECT_DIR before each job is executed; CI_PROJECT_DIR is the directory where the .gitlab-ci.yml lives, and the job scripts run from that directory as well.
https://docs.gitlab.com/ee/ci/variables/README.html
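For example, a build job for a Dockerfile kept in a subdirectory might look like the sketch below (the docker/ directory name is only a placeholder for wherever your Dockerfile actually lives):
build:
  stage: build
  script:
    # -f points at the Dockerfile relative to $CI_PROJECT_DIR,
    # while "." keeps the repository root as the build context
    - docker build --pull -t $TEST_IMAGE -f docker/Dockerfile .
    - docker push $TEST_IMAGE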

Related

no such host error while doing docker build from gitlab CI

I am trying to build a CI pipeline to build and publish my application's docker image; however, during the build I am getting the following error.
.gitlab-ci.yml:
image: "docker:dind"
before_script:
- apk add --update python3 py3-pip
- pip3 install -r requirements.txt
- python3 --version
...
docker-build:
stage: Docker
script:
- docker build -t "$CI_REGISTRY_IMAGE" .
- docker ps
However, this gets me the following error:
$ docker build -t "$CI_REGISTRY_IMAGE" .
error during connect: Post "http://docker:2375/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=registry.gitlab.com%2Fmaven123%2Frest-api&target=&ulimits=null&version=1": dial tcp: lookup docker on 169.254.169.xxx:53: no such host
Any idea what the issue is here?
You are missing the docker:dind service.
The image you should use for the job is the normal docker:latest image.
image: docker
services:
  - "docker:dind"
variables: # not strictly needed, depending on runner configuration
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_TLS_CERTDIR: ""
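Put together, a minimal version of the job might look roughly like this (a sketch based on the snippet above; the Python-related before_script and the stage list elided in the question are assumed, not quoted):
image: docker:latest
services:
  - "docker:dind"
variables:
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_TLS_CERTDIR: ""
stages:
  - Docker
docker-build:
  stage: Docker
  script:
    - docker build -t "$CI_REGISTRY_IMAGE" .
    - docker ps
With the dind service attached and DOCKER_HOST pointing at it, the docker CLI in the job can reach a daemon, so the "no such host" lookup error should disappear.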

docker: command not found in gitlab-ci

Background
In my gitlab-ci file I am trying to build a docker image; however, even though I have docker:dind as a service, it is failing.
.gitlab-ci
---
stages:
  - build
  - docker
build:
  stage: build
  image: fl4m3ph03n1x/my-app:1.0
  variables:
    MIX_ENV: prod
  script:
    - mix deps.get
    - mix deps.compile
    - mix compile
  artifacts:
    paths:
      - .hex/
      - _build/
      - deps/
      - mix.lock
build_image:
  stage: docker
  image: fl4m3ph03n1x/my-app:1.0
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: tcp://docker:2375/
  services:
    - docker:dind
  script:
    - echo ${CI_JOB_TOKEN} | docker login --password-stdin -u ${CI_REGISTRY_USER} ${CI_REGISTRY}
    - docker build . -t ${CI_REGISTRY_IMAGE}:latest
    - docker push ${CI_REGISTRY_IMAGE}:latest
The problematic stage is docker.
As you can see I am trying to:
log in to docker
build an image for GitLab's registry
push that image
Error
However, I am getting the following error:
$ echo ${CI_JOB_TOKEN} | docker login --password-stdin -u ${CI_REGISTRY_USER} ${CI_REGISTRY}
/bin/bash: line 110: docker: command not found
Which is confusing, because docker:dind is supposed to actually prevent this from happening:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#enable-registry-mirror-for-dockerdind-service
Question
So clearly I am missing something here. What am I doing wrong?
EDIT
This is my Dockerfile
FROM elixir:1.10
# Install Hex + Rebar
RUN mix do local.hex --force, local.rebar --force
COPY . /
WORKDIR /
ENV MIX_ENV=prod
RUN mix do deps.get --only $MIX_ENV, deps.compile
RUN mix release
EXPOSE 8080
ENV PORT=8080
ENV SHELL=/bin/bash
CMD ["_build/prod/rel/my_app/bin/my_app", "start"]
The image keyword specifies the image in which the job's script runs. To build your image, you want the script to run inside a docker image, i.e. one that provides the docker CLI, not inside your application image.
The image keyword is the name of the Docker image the Docker executor runs to perform the CI tasks.
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-image-and-services-from-gitlab-ciyml
After all, isn't your application image CI_REGISTRY_IMAGE in this setup? You don't want to build the image from inside itself.
- docker build . -t ${CI_REGISTRY_IMAGE}:latest
- docker push ${CI_REGISTRY_IMAGE}:latest
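Concretely, a corrected build_image job would run inside a docker image instead of the application image; a sketch follows (everything except the image line is taken from the question):
build_image:
  stage: docker
  image: docker:latest   # an image that actually provides the docker CLI
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: tcp://docker:2375/
  services:
    - docker:dind
  script:
    - echo ${CI_JOB_TOKEN} | docker login --password-stdin -u ${CI_REGISTRY_USER} ${CI_REGISTRY}
    - docker build . -t ${CI_REGISTRY_IMAGE}:latest
    - docker push ${CI_REGISTRY_IMAGE}:latest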

Caching Maven dependencies in Gitlab-CI between different stages and images

I have built the following CI configuration in GitLab in order to execute unit tests and integration tests.
stages:
  - build
  - test
variables:
  MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"
cache:
  key: "$CI_BUILD_REF"
  paths:
    - .m2/repository/
unit-tests:
  image: maven:latest
  stage: test
  script:
    - cd source_code
    - mvn test -P test
intergration-tests:
  image: docker
  stage: test
  services:
    - docker:dind
  script:
    - apk add --no-cache docker-compose
    - docker-compose up -d
    - docker exec -t account_service_container sh "integration_tests.sh"
    - docker-compose down --rmi all
The point is that when I use the maven image across stages, I am able to cache Maven dependencies in the .m2 repository. However, for the integration tests I use a different image, and the docker:dind service creates an isolated set of containers via docker-compose, so there is no access to the previously defined and cached Maven repository. Is there any solution to this? Should I create a customized image that includes all fetched and required Maven dependencies, keep it on Docker Hub, and then use that image in each stage and in docker-compose?
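One way to explore the custom-image idea mentioned above is a small Dockerfile that pre-fetches the dependencies so they are baked into an image layer; this is only a sketch (the base-image tag and paths are illustrative, not taken from the question):
# Pre-warm the local Maven repository inside the image
FROM maven:3.8-openjdk-11
WORKDIR /build
# Copy only the POM so the dependency layer stays cached until the POM changes
COPY pom.xml .
RUN mvn -B dependency:go-offline
An image built from this could then be pushed to a registry and referenced both as the job image and inside docker-compose, so every stage starts with the dependencies already present.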

gitlab CI + Docker + .NET Core, I don't understand why only an older .net version is installed

I have a gitlab CI file that is building projects like this:
image: 'docker/compose:1.25.1-rc1'
services:
  - 'docker:dind'
variables:
  GIT_SUBMODULE_STRATEGY: recursive
stages:
  - build
  - deploy
buildCode:
  stage: build
  except:
    - deploy
  script:
    - docker build -t dataserver -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest -f dockerfile .
deployCode:
  stage: deploy
  only:
    - deploy
  script:
    - docker build -t dataserver -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest -f dockerfile .
    - docker login registry.gitlab.com -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD}
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest
    - docker network create network && echo 'creating network'
    - docker-compose -f docker-compose.yml pull
    - docker-compose -f docker-compose.yml rm -f -s
    - docker-compose -f docker-compose.yml up -d
The idea is to use docker/compose:1.25.1-rc1 to provide a docker-compose environment and build the files.
The Dockerfile itself uses this image for the build stage:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 as build
and then uses this image for runtime:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 as final
so .NET 3.1 should be installed.
However, when I run the app, I get an error (I can't do a text capture, so this is a screenshot) which indicates that .NET 3.1 is not installed, and I can't figure out the problem.
If I compile the app for 3.0 with the same CI setup, it runs.
Try forcing a pull of the base images before building yours.
It seems your aspnet:3.1 image was still the 3.1.0-preview version when you first pulled it.
The 3.1 tag always points to the latest 3.1.x version: before the release it was the preview, now it is 3.1.0, and in the future it will be 3.1.x.
If you have already pulled an image with the 3.1 tag, your build will reuse that local image, which may not match the current 3.1 in the remote repository. If you pull it again, the digest is checked and the image is updated if needed.
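One way to force fresh base images is to add --pull to the build (and, optionally, explicit docker pull commands); a sketch against the job above, with image names taken from the Dockerfile quoted in the question:
script:
  # explicit pulls are optional: --pull below already refreshes the FROM images
  - docker pull mcr.microsoft.com/dotnet/core/sdk:3.1
  - docker pull mcr.microsoft.com/dotnet/core/aspnet:3.1
  - docker build --pull -t dataserver -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest -f dockerfile .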

How to use files created during build stage with dockerfile in the same stage?

I'm practicing with Gitlab CI to understand how to build an application and then use that within a Docker image. For now, my repo consists simply of helloworld.txt, dockerfile, and gitlab-ci.yml.
PROBLEM: During the build stage, I use a shell executor to run "zip helloworld.zip helloworld.txt". Then I run "docker build -t myproject/myapp .", where I expect "COPY helloworld.zip /" to work, but it seems that the zip file I created is not available in the docker build context. Am I not saving the helloworld.zip file to the right location, or is it something else? My long-term intent is to write a Python application, compile it into a single executable during the build stage, and copy it into a Docker container.
#cat helloworld.txt
hello world
#cat dockerfile
FROM centos:7
COPY helloworld.zip /
CMD ["/bin/bash"]
#cat gitlab-ci.yml
stages:
  - build
  - test
  - release
  - deploy
variables:
  IMAGE_TEST_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  IMAGE_RELEASE_NAME: $CI_REGISTRY_IMAGE:latest
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin
build:
  stage: build
  script:
    - echo "compile the program"
    - zip zipfile.zip helloworld.txt
    - docker build --pull -t $IMAGE_TEST_NAME .
    - docker push $IMAGE_TEST_NAME
test:
  stage: test
  script:
    - docker pull $IMAGE_TEST_NAME
    - docker run $IMAGE_TEST_NAME yum install unzip -y && unzip /helloworld.zip && cat /helloworld.txt
release:
  stage: release
  script:
    - docker pull $IMAGE_TEST_NAME
    - docker tag $IMAGE_TEST_NAME $IMAGE_RELEASE_NAME
    - docker push $IMAGE_RELEASE_NAME
  only:
    - master
deploy:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - master
  when: manual
I expect that within the same stage (in this case build), I can run a program such as zip and then COPY that zip file into a given directory within a newly built docker image during the docker build process.
EDIT
After learning that I can't do this, I've created two different stages: build_app and build_container. Also, knowing that artifacts from previous stages are used by default in following stages, I didn't add an artifacts section to the first stage or a dependencies section to the next one. This is the gitlab-ci.yml below, and it is still producing the same error.
stages:
  - build_app
  - build_container
  - test
  - release
  - deploy
# you can delete this line if you're not using Docker
#image: centos:latest
variables:
  IMAGE_TEST_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  IMAGE_RELEASE_NAME: $CI_REGISTRY_IMAGE:latest
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin
build_app:
  stage: build_app
  script:
    - echo "compile the program"
    - zip zipfile.zip helloworld.txt
build_container:
  stage: build_container
  script:
    - docker build --pull -t $IMAGE_TEST_NAME .
    - docker push $IMAGE_TEST_NAME
test:
  stage: test
  script:
    - docker pull $IMAGE_TEST_NAME
    - docker run $IMAGE_TEST_NAME yum install unzip -y && unzip /helloworld.zip && cat /helloworld.txt
release:
  stage: release
  script:
    - docker pull $IMAGE_TEST_NAME
    - docker tag $IMAGE_TEST_NAME $IMAGE_RELEASE_NAME
    - docker push $IMAGE_RELEASE_NAME
  only:
    - master
deploy:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - master
  when: manual
Job Status:
Build App: Passed
Build Container: Failed
Running with gitlab-runner 11.6.1 (8d829975)
on gitrunner-shell trtHcQTS
Using Shell executor...
Running on gitrunner.example.com...
Fetching changes...
Removing zipfile.zip
HEAD is now at e0a0a95 Update .gitlab-ci.yml
Checking out e0a0a952 as newFeature...
Skipping Git submodules setup
$ echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin
WARNING! Your password will be stored unencrypted in /home/gitlab-runner/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker build --pull -t $IMAGE_TEST_NAME .
Sending build context to Docker daemon 112.1kB
Step 1/3 : FROM centos:7
7: Pulling from library/centos
Digest: sha256:184e5f35598e333bfa7de10d8fb1cebb5ee4df5bc0f970bf2b1e7c7345136426
Status: Image is up to date for centos:7
---> 1e1148e4cc2c
Step 2/3 : COPY helloworld.zip /
COPY failed: stat /var/lib/docker/tmp/docker-builder312764301/helloworld.zip: no such file or directory
ERROR: Job failed: exit status 1
This is not possible. GitLab CI's job model assumes that jobs of the same stage are independent.
See the manual for the dependencies keyword in gitlab-ci.yml:
This feature [...] allows you to define the artifacts to pass between different jobs.
Note that artifacts from all previous stages are passed by default.
[...] You can only define jobs from stages that are executed before the current one.
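Building on the quoted documentation, one possible arrangement (a sketch; file and job names follow the question) is to declare the zip as an artifact in build_app so that build_container receives it before running docker build:
build_app:
  stage: build_app
  script:
    - zip zipfile.zip helloworld.txt
  artifacts:
    paths:
      - zipfile.zip
build_container:
  stage: build_container
  script:
    # zipfile.zip is restored into the working directory from the artifact,
    # so it is part of the docker build context here
    - docker build --pull -t $IMAGE_TEST_NAME .
    - docker push $IMAGE_TEST_NAME
Note also that the build script creates zipfile.zip while the Dockerfile copies helloworld.zip; the two names have to match for the COPY to succeed.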
