How to specify IP to use in gitlab-ci file? - docker

I've got a test job called "e2e" in my project, but when it reaches the point of executing the tests it throws:
Cypress could not verify that this server is running:
> http://nginx
We are verifying this server because it has been configured as your `baseUrl`.
gitlab-ci.yml:
image: docker:stable

services:
  - docker:19.03.5-dind

stages:
  - build
  - test

before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1

compile:
  stage: build
  script:
    - apk add --no-cache py-pip python2-dev python3-dev libffi-dev openssl-dev gcc libc-dev make npm
    - pip install docker-compose
    - npm install randomstring --save-dev
    - docker-compose up -d --build
    - docker-compose exec -T users python manage.py recreate_db
    - docker-compose exec -T users python manage.py test
    - docker-compose exec -T client npm test -- --coverage --watchAll --watchAll=false

e2e:
  stage: test
  image: cypress/base:10
  script:
    - npm install
    - npm run runHeadless
I specified http://nginx as the baseUrl in cypress.json because that's how it's supposed to
work in production. I changed it to http://127.0.0.1 for development, but that didn't help.
I suspect it's because I'm not specifying the network to use in the .gitlab-ci.yml file.
Am I correct?

Related

Gitlab CI Split Docker Build Into Multiple Stages

I have a React/Django app that's dockerized. There are two stages to the GitLab CI process: Build_Node and Build_Image. Build_Node just builds the React app and stores it as an artifact. Build_Image runs docker build to create the actual Docker image, and it relies on the node step because it copies the built files into the image.
However, the image build takes a long time whenever package dependencies have changed (apt or pip), because it has to reinstall everything.
Is there a way to split the Docker build job into multiple parts, so that I can, say, install the apt and pip packages in the Dockerfile while build_node is still running, and then finish the Docker build once that stage is done?
gitlab-ci.yml:
stages:
  - Build Node Frontend
  - Build Docker Image

services:
  - docker:18.03-dind

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""

build_node:
  stage: Build Node Frontend
  only:
    - staging
    - production
  image: node:14.8.0
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  artifacts:
    paths:
      - http
  cache:
    key: "node_modules"
    paths:
      - frontend/node_modules
  script:
    - cd frontend
    - yarn install --network-timeout 100000
    - CI=false yarn build
    - mv build ../http

build_image:
  stage: Build Docker Image
  only:
    - staging
    - production
  image: docker
  script:
    #- sleep 10000
    - tar -cvf app.tar api/ discordbot/ helpers/ http/
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    #- docker pull $CI_REGISTRY_IMAGE:latest
    #- docker build --network=host --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker build --network=host --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
Dockerfile:
FROM python:3.7-slim
# Add user
ARG APP_USER=abc
RUN groupadd -r ${APP_USER} && useradd --no-log-init -r -g ${APP_USER} ${APP_USER}
WORKDIR /app
ENV PYTHONUNBUFFERED=1
EXPOSE 80
EXPOSE 8080
ADD requirements.txt /app/
RUN set -ex \
    && BUILD_DEPS=" \
       gcc \
       " \
    && RUN_DEPS=" \
       ffmpeg \
       postgresql-client \
       nginx \
       dumb-init \
       " \
    && apt-get update && apt-get install -y $BUILD_DEPS \
    && pip install --no-cache-dir --default-timeout=100000 -r /app/requirements.txt \
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $BUILD_DEPS \
    && apt-get update && apt-get install -y --no-install-recommends $RUN_DEPS \
    && rm -rf /var/lib/apt/lists/*
# Set uWSGI settings
ENV UWSGI_WSGI_FILE=/app/api/api/wsgi.py UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy PYTHONUNBUFFERED=1 UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
ENV PYTHONPATH=$PYTHONPATH:/app/api:/app
ENV DB_PORT=5432 DB_NAME=shittywizard DB_USER=shittywizard DB_HOST=localhost
ADD nginx.conf /etc/nginx/nginx.conf
# Set entrypoint
ADD entrypoint.sh /
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["dumb-init", "--", "/entrypoint.sh"]
ADD app.tar /app/
RUN python /app/api/manage.py collectstatic --noinput
Sure! Check out the gitlab docs on stages and on building docker images with gitlab-ci.
If you have multiple pipeline steps defined within a stage they will run in parallel. For example, the following pipeline would build the node and image artifacts in parallel and then build the final image using both artifacts.
stages:
  - build
  - bundle

build-node:
  stage: build
  script:
    - # steps to build node and push to artifact registry

build-base-image:
  stage: build
  script:
    - # steps to build image and push to artifact registry

bundle-node-in-image:
  stage: bundle
  script:
    - # pull image artifact
    - # download node artifact
    - # build image on top of base image with node artifacts embedded
Note that all the pushing and pulling and starting and stopping might not save you time depending on your image size relative to build time, but this will do what you're asking for.
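A slightly more concrete sketch of that layout, reusing the job names above and assuming the GitLab container registry; the Dockerfile.base file and the base image tag are illustrative, not part of the original post:

stages:
  - build
  - bundle

build-node:
  stage: build
  image: node:14.8.0
  script:
    - cd frontend
    - yarn install --network-timeout 100000
    - CI=false yarn build
    - mv build ../http
  artifacts:
    paths:
      - http

build-base-image:
  stage: build
  image: docker
  script:
    # hypothetical Dockerfile.base containing only the slow apt/pip layers
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -f Dockerfile.base --tag $CI_REGISTRY_IMAGE/base:latest .
    - docker push $CI_REGISTRY_IMAGE/base:latest

bundle-node-in-image:
  stage: bundle
  image: docker
  script:
    # the final Dockerfile would start FROM $CI_REGISTRY_IMAGE/base:latest
    # and only copy the node artifacts (the http/ directory) on top
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

Because build-node and build-base-image are in the same stage, they run in parallel, and the bundle stage only has to apply the cheap "copy artifacts in" layer.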

Postgres database, Error: NetworkError when attempting to fetch resource

I am trying to build a Docker image but I have some problems. Here is my docker-compose.yml:
version: '3.7'

services:
  web:
    container_name: web
    build:
      context: .
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/web/
    ports:
      - 8000:8000
      - 3000:3000
      - 35729:35729
    stdin_open: true
    depends_on:
      - db
  db:
    restart: always
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASS=pass
      - POSTGRES_DB=mydb
      - POSTGRES_PORT=5432
      - POSTGRES_HOST=localhost
      - POSTGRES_HOST_AUTH_METHOD=trust
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/

volumes:
  postgres_data:
And here is my Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/web
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# install nodejs
RUN apk add --update nodejs nodejs-npm
RUN apk add zlib-dev jpeg-dev gcc musl-dev
# copy project
COPY . .
RUN python -m pip install -U --force-reinstall pip
RUN python -m pip install Pillow
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN pip install Pillow
# run entrypoint.sh
ENTRYPOINT ["sh", "./entrypoint.sh"]
And finally my entrypoint.sh:
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done
    echo "PostgreSQL started"
fi

exec "$@"
When I run:
docker-compose up -d --build
It works perfectly. Then I type:
docker-compose exec web npm start --prefix ./front/
It looks OK, but when I open http://localhost:3000/ in my browser
I get this kind of message: Error: NetworkError when attempting to fetch resource.
I think the front end is fine, but it cannot communicate with the back end and therefore the database.
Could you help me please?
Thank you very much!
As far as I can see in the docker-compose.yml file, you did not define the environment variables for Postgres in the web container. Please define the environment variables below:
DATABASE
SQL_HOST
SQL_PORT
Then bring the containers down and back up again; hopefully that will help you.
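For example, a minimal sketch of the web service with those variables set (assuming the DATABASE/SQL_HOST/SQL_PORT convention from your entrypoint.sh and the db service name from your compose file; other keys stay unchanged):

services:
  web:
    # ... build, command, volumes, ports as before ...
    environment:
      - DATABASE=postgres   # tells entrypoint.sh to wait for the database
      - SQL_HOST=db         # the Postgres container is reachable by its service name, not localhost
      - SQL_PORT=5432
    depends_on:
      - db

Depending on how your Django settings read the database configuration, the web service may also need the matching credentials (user, password, database name) pointing at host db rather than localhost.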

How to get a report file from Docker and add it to the GitLab repository

I created a Docker image that runs automated tests, and I run it from a GitLab CI script. Everything works except the report file: I cannot get the report file out of the container and into the repository. The docker cp command is not working. My GitLab script and Dockerfile:
Gitlab script:
stages:
  - test

test:
  image: docker:latest
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    # See https://github.com/docker-library/docker/pull/166
    DOCKER_TLS_CERTDIR: ""
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker run --name authContainer "rrr/image:0.0.1"
  after_script:
    - docker cp authContainer:/artifacts $CI_PROJECT_DIR/artifacts/
  artifacts:
    when: always
    paths:
      - $CI_PROJECT_DIR/artifacts/test-result.xml
    reports:
      junit:
        - $CI_PROJECT_DIR/artifacts/test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /Spinelle.AutomaticTests
WORKDIR /Spinelle.AutomaticTests
RUN apt-get update -y
RUN apt install unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /Spinelle.AutomaticTests
RUN chmod 777 /Spinelle.AutomaticTests
CMD dotnet vstest /Parallel Spinelle.AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=/artifacts/test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"

.gitlab-ci.yml error: "apk: command not found"

I have the following .gitlab-ci.yml file:
image: docker

services:
  - docker:dind

stages:
  - test
  - build
  - deploy

test:
  stage: test
  before_script:
    - apk add --update -y python-pip
    - pip install docker-compose
  script:
    - echo "Testing the app"
    - docker-compose run app sh -c "python manage.py test && flake8"

build:
  stage: build
  only:
    - develop
    - production
    - feature/deploy-debug-gitlab
  before_script:
    - apk add --update -y python-pip
    - pip install docker-compose
  script:
    - echo "Building the app"
    - docker-compose build

deploy:
  stage: deploy
  only:
    - master
    - develop
    - feature/deploy
    - feature/deploy-debug-gitlab
  before_script:
    - apk add --update -y python-pip
    - pip install docker-compose
  script:
    - echo "Deploying the app"
    - docker-compose up -d
  environment: production
  when: manual
When the Gitlab runner executes it, I get the following error:
$ apk add --update -y python-pip
bash: line 82: apk: command not found
ERROR: Job failed: exit status 1
How am I supposed to install apk? Or what image other than docker should I be using to run this gitlab-ci.yml file?
Well, it turns out I had two different runners: one marked as "shell executor" (Ubuntu) and the other marked as "docker executor".
This error was thrown only when the Ubuntu runner picked up the job, since Ubuntu doesn't come with apk.
I disabled the Ubuntu runner and that solved the problem.
An alternative is to do the installation in a global before_script above the jobs, as in this issue:
image: docker:latest

services:
  - docker:dind

before_script:
  - apk add --update python-pip
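Another option, if you want to keep both runners registered, is to pin the Docker-based jobs to the Docker runner with tags. A sketch, assuming the Docker runner was registered with a tag such as docker (the tag name is an assumption, not from the original post):

test:
  stage: test
  tags:
    - docker          # only runners tagged "docker" will pick up this job
  before_script:
    - apk add --update -y python-pip
    - pip install docker-compose
  script:
    - docker-compose run app sh -c "python manage.py test && flake8"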

Gitlab: build docker container, then use it for compilation

Here's my .gitlab-ci.yml:
stages:
  - containerize
  - compile

build_image:
  image: docker
  stage: containerize
  script:
    - docker build -t compiler_image_v0 .

compile:
  image: compiler_image_v0
  stage: compile
  script:
    - make
  artifacts:
    when: on_success
    paths:
      - output/
    expire_in: 1 day
The build_image job runs correctly; the created image is listed by the docker images command on the machine with the runners. But the second job fails with the error:
ERROR: Job failed: Error response from daemon: pull access denied for compiler_image_v0, repository does not exist or may require 'docker login' (executor_docker.go:168:1s)
What's going on?
This is my Dockerfile:
FROM ubuntu:18.04
WORKDIR /app
# Ubuntu packages
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install apt-utils subversion g++ make cmake unzip
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install libgtk2.*common libpango-1* libasound2* xserver-xorg
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install cpio
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install bash
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install autoconf automake perl m4
# Intel Fortran compiler
RUN mkdir /intel
COPY parallel_studio_xe_2018_3_pro_for_docker.zip /intel
RUN cd /intel && unzip /intel/parallel_studio_xe_2018_3_pro_for_docker.zip
RUN cd /intel/parallel_studio_xe_2018_3_pro_for_docker && ./install.sh --silent=custom_silent.cfg
RUN rm -rf /intel
The compile stage tries to pull the image compiler_image_v0. That image exists only temporarily, inside the Docker-in-Docker container of the containerize stage. Your GitLab repository has a container registry, so you can push the built image in the containerize stage and then pull it in the compile stage. Furthermore, you should use the full name of the image in your private GitLab registry; otherwise Docker Hub is used by default.
You can change your .gitlab-ci.yml to add the push command and use a fully qualified image name:
stages:
  - containerize
  - compile

build_image:
  image: docker
  stage: containerize
  script:
    - docker build -t registry.gitlab.com/group-name/repo-name:compiler_image_v0 .
    - docker push registry.gitlab.com/group-name/repo-name:compiler_image_v0

compile:
  image: registry.gitlab.com/group-name/repo-name:compiler_image_v0
  stage: compile
  script:
    - make
  artifacts:
    when: on_success
    paths:
      - output/
    expire_in: 1 day
This would overwrite the image on each build, but you could add some versioning, for example by also tagging each build with the commit SHA (see the sketch below).
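A sketch of that versioning, assuming the predefined CI_REGISTRY* variables and a docker login before the push (the compiler- tag prefix is illustrative):

build_image:
  image: docker
  stage: containerize
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # tag with both an immutable per-commit tag and a moving "current" tag
    - docker build -t $CI_REGISTRY_IMAGE:compiler-$CI_COMMIT_SHORT_SHA -t $CI_REGISTRY_IMAGE:compiler_image_v0 .
    - docker push $CI_REGISTRY_IMAGE:compiler-$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:compiler_image_v0

The compile job could then reference either the moving compiler_image_v0 tag or the tag of a specific commit.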
