I have the following .gitlab-ci.yml file:
image: docker

services:
  - docker:dind

stages:
  - test
  - build
  - deploy

test:
  stage: test
  before_script:
    - apk add --update -y python-pip
    - pip install docker-compose
  script:
    - echo "Testing the app"
    - docker-compose run app sh -c "python manage.py test && flake8"

build:
  stage: build
  only:
    - develop
    - production
    - feature/deploy-debug-gitlab
  before_script:
    - apk add --update -y python-pip
    - pip install docker-compose
  script:
    - echo "Building the app"
    - docker-compose build

deploy:
  stage: deploy
  only:
    - master
    - develop
    - feature/deploy
    - feature/deploy-debug-gitlab
  before_script:
    - apk add --update -y python-pip
    - pip install docker-compose
  script:
    - echo "Deploying the app"
    - docker-compose up -d
  environment: production
  when: manual
When the Gitlab runner executes it, I get the following error:
$ apk add --update -y python-pip
bash: line 82: apk: command not found
ERROR: Job failed: exit status 1
How am I supposed to install apk? Or what image other than docker should I be using to run this gitlab-ci.yml file?
Well, it turns out I had two different runners: one marked as "shell executor" (Ubuntu) and the other marked as "docker executor".
This error was thrown only when the Ubuntu runner picked up the job, since Ubuntu doesn't come with apk.
I disabled the Ubuntu runner, and that solved the problem.
The alternative is to run the installation in a top-level before_script, as in this issue:
image: docker:latest

services:
  - docker:dind

before_script:
  - apk add --update py-pip
I have a testing project in GitLab which uses Python, Robot Framework, Chrome and Selenium. I am using the following gitlab-ci.yml:
image: python:3.9

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  paths:
    - .cache/pip
    - venv/

stages:
  - test_API1
  - test_API2

before_script:
  - python -V  # Print out python version for debugging
  - wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
  - echo 'deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main' | tee /etc/apt/sources.list.d/google-chrome.list
  - apt update -y
  - apt install -y google-chrome-stable
  - pip install virtualenv
  - virtualenv venv
  - source venv/bin/activate
  - pip install -r requirements.txt

API1:
  stage: test_API1
  when: manual
  script:
    - python3 -m robot.run -d Results1 Test/test_API1.robot

API2:
  stage: test_API2
  script:
    - python3 -m robot.run -d Results2 Test/test_API2.robot
I specifically want the two test stages to be separate, and in doing so I have to go through the before_script steps twice, which slows down the overall pipeline.
Is there a better and faster way to have Python, Robot Framework, Chrome and Selenium installed in one stage and then reused in the test stages? Can someone suggest any existing Docker images that would work for this scenario?
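One common approach is to bake the dependencies into a custom image once, push it to a registry, and use it as the `image:` for both jobs, so the long before_script disappears entirely. A hypothetical sketch (the registry path and tag are placeholders), with the image built and pushed outside the pipeline or in a separate job:

```dockerfile
# Dockerfile for a prebuilt test image, mirroring the before_script above
FROM python:3.9
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
 && echo 'deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main' > /etc/apt/sources.list.d/google-chrome.list \
 && apt-get update && apt-get install -y google-chrome-stable
COPY requirements.txt .
RUN pip install -r requirements.txt
```

The pipeline then shrinks to the test runs themselves:

```yaml
image: registry.gitlab.com/your-group/your-repo/robot-tests:latest  # hypothetical path

stages:
  - test_API1
  - test_API2

API1:
  stage: test_API1
  when: manual
  script:
    - python3 -m robot.run -d Results1 Test/test_API1.robot

API2:
  stage: test_API2
  script:
    - python3 -m robot.run -d Results2 Test/test_API2.robot
```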
I've got a test job called "e2e" in my project, but when it reaches the point of executing the tests it throws:
Cypress could not verify that this server is running:
> http://nginx
We are verifying this server because it has been configured as your `baseUrl`.
gitlab-ci.yml:
image: docker:stable

services:
  - docker:19.03.5-dind

stages:
  - build
  - test

before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1

compile:
  stage: build
  script:
    - apk add --no-cache py-pip python2-dev python3-dev libffi-dev openssl-dev gcc libc-dev make npm
    - pip install docker-compose
    - npm install randomstring --save-dev
    - docker-compose up -d --build
    - docker-compose exec -T users python manage.py recreate_db
    - docker-compose exec -T users python manage.py test
    - docker-compose exec -T client npm test -- --coverage --watchAll --watchAll=false

e2e:
  stage: test
  image: cypress/base:10
  script:
    - npm install
    - npm run runHeadless
I specified http://nginx as the baseUrl in cypress.json because that's how it's supposed to work in production. I changed it to http://127.0.0.1 for development, but that didn't help.
I suspect it's because I'm not specifying a network to use in the .gitlab-ci.yml file.
Am I correct?
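The suspicion is plausible: the containers started by docker-compose live on a compose network inside the dind daemon, while the e2e job runs in a separate cypress/base:10 container that is not attached to that network, so the hostname nginx never resolves. One possible sketch of a fix is to run Cypress as a container on that same network; note that the network name (here "app_default": compose derives it from the project directory name) and the cypress/included tag are assumptions you would need to adjust:

```yaml
e2e:
  stage: test
  script:
    - apk add --no-cache py-pip && pip install docker-compose
    - docker-compose up -d --build
    # cypress/included bundles the Cypress runner; attaching it to the
    # compose network makes service names like "nginx" resolvable.
    - docker run --network app_default -v "$PWD":/e2e -w /e2e cypress/included:4.12.1 --config baseUrl=http://nginx
```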
I created a Docker image that runs automated tests, and I run it from a GitLab script. Everything works except the report file: I cannot get the report file out of the container and into the repository. The docker cp command is not working. My GitLab script and Dockerfile:
Gitlab script:
stages:
  - test

test:
  image: docker:latest
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    # See https://github.com/docker-library/docker/pull/166
    DOCKER_TLS_CERTDIR: ""
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker run --name authContainer "rrr/image:0.0.1"
  after_script:
    - docker cp authContainer:/artifacts $CI_PROJECT_DIR/artifacts/
  artifacts:
    when: always
    paths:
      - $CI_PROJECT_DIR/artifacts/test-result.xml
    reports:
      junit:
        - $CI_PROJECT_DIR/artifacts/test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /Spinelle.AutomaticTests
WORKDIR /Spinelle.AutomaticTests
RUN apt-get update -y
RUN apt-get install -y unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /Spinelle.AutomaticTests
RUN chmod 777 /Spinelle.AutomaticTests
CMD dotnet vstest /Parallel Spinelle.AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=/artifacts/test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
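If docker cp runs but the XML never ends up where `artifacts:` expects it, check the source-path semantics: docker cp follows the same convention as cp -r, where `dir` copies the directory itself into the target, while `dir/.` copies only its contents. A local sketch of the difference:

```shell
# Demonstrate the "dir" vs "dir/." copy semantics that `docker cp` also follows.
mkdir -p src dest_a dest_b
echo '<test-run/>' > src/test-result.xml

cp -r src  dest_a/   # copies the directory itself: dest_a/src/test-result.xml
cp -r src/. dest_b/  # copies only the contents:    dest_b/test-result.xml

ls dest_a/src dest_b
```

So `docker cp authContainer:/artifacts/. $CI_PROJECT_DIR/artifacts/` (after a `mkdir -p`) would put test-result.xml exactly at the path the artifacts section lists, rather than nested one level deeper.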
Here's my .gitlab-ci.yml:
stages:
  - containerize
  - compile

build_image:
  image: docker
  stage: containerize
  script:
    - docker build -t compiler_image_v0 .

compile:
  image: compiler_image_v0
  stage: compile
  script:
    - make
  artifacts:
    when: on_success
    paths:
      - output/
    expire_in: 1 day
The build_image job runs correctly, and the created image is listed by the docker images command on the machine with the runners. But the second job fails with the error:
ERROR: Job failed: Error response from daemon: pull access denied for compiler_image_v0, repository does not exist or may require 'docker login' (executor_docker.go:168:1s)
What's going on?
This is my Dockerfile
FROM ubuntu:18.04
WORKDIR /app
# Ubuntu packages
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install apt-utils subversion g++ make cmake unzip
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install libgtk2.*common libpango-1* libasound2* xserver-xorg
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install cpio
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install bash
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install autoconf automake perl m4
# Intel Fortran compiler
RUN mkdir /intel
COPY parallel_studio_xe_2018_3_pro_for_docker.zip /intel
RUN cd /intel && unzip /intel/parallel_studio_xe_2018_3_pro_for_docker.zip
RUN cd /intel/parallel_studio_xe_2018_3_pro_for_docker && ./install.sh --silent=custom_silent.cfg
RUN rm -rf /intel
The compile stage tries to pull the image compiler_image_v0. That image exists only temporarily, inside the Docker daemon of the containerize stage. Your GitLab repository has a container registry: you can push the built image there in the containerize stage and then pull it in the compile stage. Furthermore, you should use the fully qualified name of your private GitLab registry image; otherwise Docker Hub is assumed by default.
You can change your .gitlab-ci.yml to build under the full name and push it:
stages:
  - containerize
  - compile

build_image:
  image: docker
  stage: containerize
  script:
    # log in and build under the fully qualified registry name so the push works
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t registry.gitlab.com/group-name/repo-name:compiler_image_v0 .
    - docker push registry.gitlab.com/group-name/repo-name:compiler_image_v0

compile:
  image: registry.gitlab.com/group-name/repo-name:compiler_image_v0
  stage: compile
  script:
    - make
  artifacts:
    when: on_success
    paths:
      - output/
    expire_in: 1 day
This would overwrite the image on each build. But you could add some versioning.
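For the versioning, GitLab's predefined variables can serve as tags. A sketch using the commit SHA (the group/repo path is a placeholder, and this assumes a GitLab version that provides $CI_COMMIT_SHORT_SHA):

```yaml
build_image:
  image: docker
  stage: containerize
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    # each pipeline produces a distinct tag instead of clobbering the last one
    - docker build -t registry.gitlab.com/group-name/repo-name:$CI_COMMIT_SHORT_SHA .
    - docker push registry.gitlab.com/group-name/repo-name:$CI_COMMIT_SHORT_SHA
```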
I am having difficulties enabling Docker for the build job. This is what the GitLab CI config file looks like:
image: docker:latest

services:
  - docker:dind

stages:
  - build

build:
  image: java:8
  stage: build
  script:
    - docker info
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com/...
    - sbt server/docker:publish
And here is the output from job:
gitlab-ci-multi-runner 1.3.2 (0323456)
Using Docker executor with image java:8 ...
Pulling docker image docker:dind ...
Starting service docker:dind ...
Waiting for services to be up and running...
Pulling docker image java:8 ...
Running on runner-30dcea4b-project-1408237-concurrent-0 via runner-30dcea4b-machine-1470340415-c2bbfc45-digital-ocean-4gb...
Cloning repository...
Cloning into '/builds/.../...'...
Checking out 9ba87ff0 as master...
$ docker info
/bin/bash: line 42: docker: command not found
ERROR: Build failed: exit code 1
Any clues why docker is not found?
After few days of struggling, I came up with following setup:
image: gitlab/dind

stages:
  - test
  - build

before_script:
  - echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections
  - apt-get update
  - apt-get install -y curl
  - apt-get install -y software-properties-common python-software-properties
  - add-apt-repository -y ppa:webupd8team/java
  - apt-get update
  - apt-get install -y oracle-java8-installer
  - rm -rf /var/lib/apt/lists/*
  - rm -rf /var/cache/oracle-jdk8-installer
  - apt-get update -yqq
  - apt-get install apt-transport-https -yqq
  - echo "deb http://dl.bintray.com/sbt/debian /" | tee -a /etc/apt/sources.list.d/sbt.list
  - apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823
  - apt-get update -yqq
  - apt-get install sbt -yqq
  - sbt sbt-version

test:
  stage: test
  script:
    - sbt scalastyle && sbt test:scalastyle
    - sbt clean coverage test coverageReport

build:
  stage: build
  script:
    - docker info
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com/...
    - sbt server/docker:publish
It has docker (note the gitlab/dind image), Java and sbt. Now I can push to the GitLab registry from the sbt docker plugin.
The docker info command was running inside a java:8-based container, which does not have the docker client installed or available in it.
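In other words, the `docker: command not found` part alone could also be fixed by dropping the `image: java:8` override on the build job, so it runs in the top-level docker:latest image (which has the client); a minimal sketch, with the caveat that sbt would then still need to be installed in before_script, as the gitlab/dind setup above does:

```yaml
image: docker:latest

services:
  - docker:dind

build:
  stage: build
  script:
    - docker info   # works now: the docker client is in the job image
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com/...
    - sbt server/docker:publish   # sbt itself must still be installed beforehand
```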