Gitlab-ci service command not running - docker

I'm trying to spin up my backend image to use for e2e testing. When building the backend I make an image which can be used as a service in the frontend. My .gitlab-ci.yml partly looks like this:
build-test:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build --build-arg BUILD_ARGUMENT_ENV=test --build-arg BUILD_ARGUMENT_DEBUG_ENABLED=true -t registry.gitlab.com/<company>/<repo>/test:latest .
    - docker push registry.gitlab.com/<company>/<repo>/test:latest
  only:
    changes:
      - Dockerfile
      - docker-compose.yml
      - .gitlab-ci.yml
      - docker/*
      - .env.citest
  when: manual
  tags:
    - test
The job in the front-end looks like this:
e2e-testing:
  image: cypress/browsers:node16.14.2-slim-chrome100-ff99-edge
  services:
    - name: registry.gitlab.com/<company>/<repo>/test:latest
      alias: <company>-backend
      command: [
        "php artisan db:seed"
      ]
    - name: postgis/postgis:12-3.3
      alias: postgres_citest
    - name: redis:5.0.9
      alias: redis_citest
  variables:
    POSTGRES_DB: <company>_citest
    POSTGRES_USER: <company>
    POSTGRES_PASSWORD: secret
    REDIS_PORT: 6379
  stage: testing
  before_script:
    - yarn
    - yarn run dev &
  script:
    - yarn run e2e:record --parallel --env IS_CI_RUNNING=true
  artifacts:
    when: always
    paths:
      - cypress/videos/**/*.mp4
      - cypress/screenshots/**/*.png
    expire_in: 1 day
  tags:
    - test
The command php artisan db:seed isn't performed and I get the following error:
exec: php artisan db:seed: not found
The other services are used by the backend, so I figured they need to be spun up as well. How do I get the command working?
Edit: I've tried testing it locally and it does work if I go to the right folder (cd ../base), so I added this to the command, but it makes no difference. It still says it can't find the command.
Edit 2: I overlooked an error, and it seems the container started by the GitLab runner is not running. Still don't know how to solve this:
Error response from daemon: Cannot link to a non running container
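For reference, a service command in GitLab CI is passed to the container in exec form, so every argument has to be its own array element: ["php artisan db:seed"] is looked up as a single executable literally named "php artisan db:seed", which matches the "not found" error above. A hedged sketch of the service entry with the command split into separate elements (note this replaces the image's default command, so if the container is also supposed to keep serving the API, the entrypoint or command still has to start the server; adjust to whatever the image actually runs):
services:
  - name: registry.gitlab.com/<company>/<repo>/test:latest
    alias: <company>-backend
    # Exec form: each argument is its own element. A single string
    # "php artisan db:seed" is treated as one binary name and fails.
    command: ["php", "artisan", "db:seed"]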

Related

Travis CI won't build though I push my app to GitHub

I want Travis CI to build and test my app when I push it to GitHub. Travis CI is supposed to integrate with GitHub, but it didn't work.
Here is my docker-compose.yml:
version: '3'
volumes:
  db-data:
services:
  web:
    build: .
    ports:
      - '3000:3000'
    volumes:
      - '.:/product-register'
    environment:
      - 'DATABASE_PASSWORD=postgres'
    tty: true
    stdin_open: true
    depends_on:
      - db
    links:
      - db
  db:
    image: postgres
    volumes:
      - 'db-data:/var/lib/postgresql/data'
    environment:
      - 'POSTGRES_HOST_AUTH_METHOD=trust'
      - 'POSTGRES_USER=postgres'
      - 'POSTGRES_PASSWORD=postgres'
And here is my .travis.yml:
sudo: required
services: docker
before_install:
  - docker login -u polymetisoutis -p 5fb47200-dd19-4772-a9ad-c98913ef1cb9
  - docker-compose up --build -d
script:
  - docker-compose exec --env 'RAILS_ENV=test' web rails db:create
  - docker-compose exec --env 'RAILS_ENV=test' web rails db:migrate
  - docker-compose exec --env 'RAILS_ENV=test' web rails test
The repository I pushed to GitHub is here: https://github.com/PolymetisOutis/product-register
After running
git push origin master
I think Travis CI should build the app for testing on the travis-ci.com page. But Travis CI didn't do anything.
Why? Does anyone have an idea or a clue about this?
Your Travis CI build request page [1] always shows why a build wasn't triggered. I can see that, at the time you triggered some requests, your account wasn't confirmed and your build requests were rejected; however, it now seems you are able to trigger builds.
[1] https://app.travis-ci.com/github/PolymetisOutis/product-register/requests

Connecting to a service from a Gitlab CI job

I am attempting to run e2e tests in GitLab CI that use a React frontend, a Java Spring backend and PostgreSQL.
The relevant pieces of the .gitlab-ci.yml config are as follows:
variables:
  IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  FF_NETWORK_PER_BUILD: 1
docker-backend-build:
  image: docker:latest
  services:
    - docker:dind
  stage: package
  dependencies:
    - backend-build
  script:
    - docker build -t registry.gitlab.com/repo-name .
    - docker tag registry.gitlab.com/repo-name $IMAGE_NAME
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push $IMAGE_NAME
end-to-end-test:
  stage: integration-test
  image: node:latest
  services:
    - name: postgres:9.6
    - name: $IMAGE_NAME
      alias: backend
  variables:
    DB_USERNAME: postgres
    DB_PASSWORD: postgres
    JDBC_CONNECTION_STRING: 'jdbc:postgresql://postgres:5432/database?stringtype=unspecified'
  dependencies:
    - frontend-build
  script:
    - cd frontend
    - yarn start:ci & ./node_modules/wait-on/bin/wait-on http://backend:9070/api/health http://localhost:3000
    - yarn run cy:run
  artifacts:
    when: always
    paths:
      - frontend/cypress/videos/*.mp4
      - frontend/cypress/screenshots/**/*.png
    expire_in: 1 day
The Dockerfile for the backend is as follows:
FROM tomcat:latest
ADD backend/target/server.war /usr/local/tomcat/webapps/
RUN sed -i 's/port="8080"/port="9070"/' /usr/local/tomcat/conf/server.xml
EXPOSE 9070
CMD ["catalina.sh", "run"]
The server.war is created in an earlier stage of the CI pipeline.
The server.war is set to listen on port 9070, and the Dockerfile successfully changes the Tomcat port to 9070 as well. The Tomcat instance is able to connect to the postgres instance via postgres:5432 because of the FF_NETWORK_PER_BUILD flag, but for some reason the script hangs forever on the wait-on http://backend:9070/api/health command. It cannot connect to backend:9070 even though the server is up and running (and the health endpoint exists). The server doesn't receive any indication that anything is trying to connect to it.
What could I be doing wrong? I also tried connecting to http://localhost:9070/api/health, but that didn't work either.
The answer for me was simply changing the Dockerfile as follows:
- ADD backend/target/server.war /usr/local/tomcat/webapps/
+ ADD backend/target/server.war /usr/local/tomcat/webapps/ROOT.war
because without that, the application was actually served at http://backend:9070/server/api/health. Silly me.

gitlab ci error could not translate host name "postgres" to address: Name does not resolve

I use GitLab CI in my Rails app. It ran correctly until yesterday,
but now it does not pass due to:
rake aborted!
PG::ConnectionBad: could not translate host name "postgres" to address: Name does not resolve
/usr/local/bundle/gems/pg-1.1.4/lib/pg.rb:56:in `initialize'
/usr/local/bundle/gems/pg-1.1.4/lib/pg.rb:56:in `new'
/usr/local/bundle/gems/pg-1.1.4/lib/pg.rb:56:in `connect'
....
Tasks: TOP => db:schema:load => db:check_protected_environments
.gitlab-ci.yml:
rspec:
  stage: test
  services:
    - postgres:10
  variables:
    DATABASE_URL: "postgresql://postgres:postgres@postgres:5432/$POSTGRES_DB"
    POSTGRES_DB: db_test
    RAILS_ENV: test
  before_script:
    - ruby -v
  script:
    - cp config/application.sample.yml config/application.yml
    - cp config/database.sample.yml config/database.yml
    - bundle exec rake db:schema:load
    - bundle exec rspec spec
It seems it cannot find the postgres service, or for some reason the database service is not running correctly. I guess some internals have changed in GitLab CI.
EDIT: This was an intentional change to the images; you now must set a password or configure authentication further:
If you know that you want to be insecure (i.e. anyone can connect without a password from anywhere), then POSTGRES_HOST_AUTH_METHOD=trust is how you opt in to that.
This seems to have been introduced when the docker images were upgraded to the new releases.
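For example, a rough sketch of opting in from the CI job itself (the variable names are the standard ones from the official postgres image; either set a password or, if you accept passwordless access, enable trust authentication):
services:
  - postgres:10
variables:
  POSTGRES_DB: db_test
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  # or, explicitly insecure:
  # POSTGRES_HOST_AUTH_METHOD: trust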
You can pull the 10.11 image instead to avoid this problem for the time being:
services:
  - postgres:10.11
Not sure why this is happening, but we are experiencing the same since the last docker image update. I have found this to also be the case going from 12.1 to 12.2.
The postgres image has two required environment variables, POSTGRES_USER and POSTGRES_PASSWORD; if you do not provide them, the container will not run.
See the gitlab-ci documentation about services.
I also had the same problem, but in my case I had started a new Docker container inside the script block:
test:
  stage: test
  image: ${GOOGLE_CLOUD_IMAGE}
  only:
    - merge_requests
  extends: .build-backend
  services:
    - docker:19.03.5-dind
    - postgres:11.1-alpine
  script:
    - docker build -t ${IMAGE_NAME}:${CI_COMMIT_SHA} -f docker/Dockerfile.prod .
    - docker run -t ${IMAGE_NAME}:${CI_COMMIT_SHA} sh -c 'poetry run python manage.py test apps'
The problem was that the postgres service alias is created in the job's network, not inside the new Docker container, so I started postgres from the script instead and connected both containers to a local network:
test:
  stage: test
  image: ${GOOGLE_CLOUD_IMAGE}
  only:
    - merge_requests
  extends: .build-backend
  services:
    - docker:19.03.5-dind
  script:
    - docker network create my-net
    - docker run -d --network my-net --name postgres postgres:11.1-alpine
    - docker build -t ${IMAGE_NAME}:${CI_COMMIT_SHA} -f docker/Dockerfile.prod .
    - docker run -t --network my-net ${IMAGE_NAME}:${CI_COMMIT_SHA} sh -c 'poetry run python manage.py test apps'

How to set proxy in docker-in-docker (dind) in gitlab CI

I am trying to set up a job with gitlab CI to build a docker image from a dockerfile, but I am behind a proxy.
My .gitlab-ci.yml is as follows:
image: docker:stable
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  HTTP_PROXY: $http_proxy
  HTTPS_PROXY: $http_proxy
  http_proxy: $http_proxy
  https_proxy: $http_proxy
services:
  - docker:dind
before_script:
  - wget -O - www.google.com # just to test
  - docker search node # just to test
  - docker info # just to test
build:
  stage: build
  script:
    - docker build -t my-docker-image .
wget works, meaning that the proxy setup is, in theory, correct.
But the commands docker search, docker info and docker build do not work, apparently because of a proxy issue.
An excerpt from the job output:
$ docker search node
Warning: failed to get default registry endpoint from daemon (Error response from daemon:
[and here comes a huge raw HTML output including the following message: "504 - server did not respond to proxy"]
It appears docker does not read the environment variables to set up the proxy.
Note: I am indeed using a runner in --privileged mode, as the documentation instructs.
How do I fix this?
If you want to be able to use docker-in-docker (dind) in GitLab CI behind a proxy, you will also need to set the no_proxy variable in your .gitlab-ci.yml file: NO_PROXY must include the host "docker".
This is the .gitlab-ci.yml that works with my dind setup:
image: docker:19.03.12
variables:
  DOCKER_TLS_CERTDIR: "/certs"
  HTTPS_PROXY: "http://my_proxy:3128"
  HTTP_PROXY: "http://my_proxy:3128"
  NO_PROXY: "docker"
services:
  - docker:19.03.12-dind
before_script:
  - docker info
build:
  stage: build
  script:
    - docker run hello-world
Good luck!
Oddly, the solution was to use a special dind (docker-in-docker) image provided by GitLab instead, and it works without setting up services or anything else. The .gitlab-ci.yml that worked was as follows:
image: gitlab/dind:latest
before_script:
  - wget -O - www.google.com
  - docker search node
  - docker info
build:
  stage: build
  script:
    - docker build -t my-docker-image .
Don't forget that the gitlab-runner must be registered with the --privileged flag.
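(On an already registered runner, the equivalent is privileged = true under [runners.docker] in config.toml; a minimal sketch, with the remaining keys depending on your installation:)
[[runners]]
  executor = "docker"
  [runners.docker]
    privileged = true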
I was unable to get docker-in-docker (dind) working behind our corporate proxy.
In particular, even when following the instructions here, a docker build command would still fail when executing FROM <some_image>, as it was not able to download the image.
I had far more success using kaniko, which appears to be GitLab's current recommendation for doing Docker builds.
A simple build script for a .NET Core project then looks like:
build:
  stage: build
  image: $BUILD_IMAGE
  script:
    - dotnet build
    - dotnet publish Console --output publish
  artifacts:
    # Upload all build artifacts to make them available for the deploy stage.
    when: always
    paths:
      - "publish/*"
    expire_in: 1 week
kaniko:
  stage: dockerise
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Construct a docker-file
    - echo "FROM $RUNTIME_IMAGE" > Dockerfile
    - echo "WORKDIR /app" >> Dockerfile
    - echo "COPY /publish ." >> Dockerfile
    - echo "CMD [\"dotnet\", \"Console.dll\"]" >> Dockerfile
    # Authenticate against the Gitlab Docker repository.
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # Run kaniko
    - /kaniko/executor --context . --dockerfile Dockerfile --destination $CI_REGISTRY_IMAGE:$VersionSuffix

Docker Compose based Gitlab CI - Pipe error

The problem
I have made a project with Docker Compose. It works well on localhost. I want to use this base to test and analyze code with GitLab Runner. I solved a lot of problems, like installing docker-compose, building and running selected containers, and running commands in containers. The first job ran and succeeded (!!!), but the following jobs failed before before_script:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
...
Error response from daemon: Conflict.
...
Error response from daemon: Conflict.
I don't understand why. What am I doing wrong? I repeat: the first job of the pipeline runs well with a "success" message! Every other job of the pipeline fails.
Full output:
Running with gitlab-ci-multi-runner 9.4.0 (ef0b1a6)
on XXX Runner (fdc0d656)
Using Docker executor with image docker:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:5096e5a0cba00693905879b09e24a487dc244b56e8e15349fd5b71b432c6ec9f for docker service...
ERROR: Preparation failed: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Will be retried in 3s ...
Using Docker executor with image docker:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:5096e5a0cba00693905879b09e24a487dc244b56e8e15349fd5b71b432c6ec9f for docker service...
ERROR: Preparation failed: Error response from daemon: Conflict. The container name "/runner-fdc0d656-project-35-concurrent-0-docker" is already in use by container "80918876ffe53e33ce1f069e6e545f03a15469af6596852457f11dbc7a6c5b58". You have to remove (or rename) that container to be able to reuse that name.
Will be retried in 3s ...
Using Docker executor with image docker:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image docker:dind ID=sha256:5096e5a0cba00693905879b09e24a487dc244b56e8e15349fd5b71b432c6ec9f for docker service...
ERROR: Preparation failed: Error response from daemon: Conflict. The container name "/runner-fdc0d656-project-35-concurrent-0-docker" is already in use by container "80918876ffe53e33ce1f069e6e545f03a15469af6596852457f11dbc7a6c5b58". You have to remove (or rename) that container to be able to reuse that name.
Will be retried in 3s ...
ERROR: Job failed (system failure): Error response from daemon: Conflict. The container name "/runner-fdc0d656-project-35-concurrent-0-docker" is already in use by container "80918876ffe53e33ce1f069e6e545f03a15469af6596852457f11dbc7a6c5b58". You have to remove (or rename) that container to be able to reuse that name.
Files
.gitlab-ci.yml
# Select image from https://hub.docker.com/r/_/php/
image: docker:latest
# Services
services:
  - docker:dind
stages:
  - build
  - test
  - deploy
cache:
  key: ${CI_BUILD_REF_NAME}
  untracked: true
  paths:
    - vendor
    - var
variables:
  DOCKER_CMD: docker exec --user user bin
  COMPOSE_HTTP_TIMEOUT: 300
before_script:
  - apk add --no-cache py-pip bash
  - pip install docker-compose
  - touch ~/.gitignore
  - bin/docker-init.sh
  - cp app/config/parameters.gitlab-ci.yml app/config/parameters.yml
  - cp app/config/nodejs_parameters.yml.dist app/config/nodejs_paramteres.yml
  - chmod -R 777 app/cache app/logs var
  # Load only binary and mysql
  - docker-compose up -d binary mysql
build:
  stage: build
  script:
    - ${DOCKER_CMD} composer install -n
    - ${DOCKER_CMD} php app/console doctrine:database:create --env=test --if-not-exists
    - ${DOCKER_CMD} php app/console doctrine:migrations:migrate --env=test
codeSniffer:
  stage: test
  script:
    - ${DOCKER_CMD} bin/php-cs-fixer fix --dry-run --config-file=.php_cs
database:
  stage: test
  script:
    - ${DOCKER_CMD} php app/console doctrine:mapping:info --env=test
    - ${DOCKER_CMD} php app/console doctrine:schema:validate --env=test
    - ${DOCKER_CMD} php app/console doctrine:fixtures:load --env=test
unittest:
  stage: test
  script:
    - ${DOCKER_CMD} bin/phpunit -c app --debug
deploy_demo:
  stage: deploy
  script:
    - echo "Deploy to staging server"
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - develop
deploy_prod:
  stage: deploy
  script:
    - echo "Deploy to production server"
  environment:
    name: production
    url: https://example.com
  when: manual
  only:
    - master
docker-compose.yml
version: "2"
services:
web:
image: nginx:latest
ports:
- "${HTTP_PORT}:80"
depends_on:
- mysql
- elasticsearch
- binary
links:
- binary:php
volumes:
- ".:/var/www"
- "./app/config/docker/vhost.conf:/etc/nginx/conf.d/site.conf"
- "${BASE_LOG_DIR}/nginx:/var/log/nginx"
mysql:
image: mysql:5.6
environment:
MYSQL_USER: test
MYSQL_PASSWORD: test
MYSQL_ROOT_PASSWORD: test
ports:
- "${MYSQL_PORT}:3306"
volumes:
- "${BASE_LOG_DIR}/mysql:/var/log/mysql"
- "${BASE_MYSQL_DATA_DIR}:/var/lib/mysql"
- "./app/config/docker/mysql.cnf:/etc/mysql/conf.d/mysql.cnf"
elasticsearch:
image: elasticsearch:1.7.6
ports:
- "${ELASTICSEARCH_PORT}:9200"
volumes:
- "${BASE_ELASTICSEARCH_DATA_DIR}:/usr/share/elasticsearch/data"
binary:
image: fchris82/kunstmaan-test
container_name: bin
volumes:
- ".:/var/www"
- "${BASE_LOG_DIR}/php:/var/log/php"
- "~/.ssh:/home/user/.ssh"
tty: true
environment:
LOCAL_USER_ID: ${LOCAL_USER_ID}
config.toml
[[runners]]
  name = "XXX Runner"
  url = "https://gitlab.xxx.xx/"
  token = "xxxxxxxxxxx"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:latest"
    privileged = true
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
  [runners.cache]
OK, I found the problem: I had broken the configuration. If you use the dind service in .gitlab-ci.yml, then don't use the /var/run/docker.sock volume in the config.toml file; or vice versa, if you use the "socket" method, don't use the dind service.
More information: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html
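In other words, pick exactly one approach. A minimal sketch of the "socket" variant of the runner configuration, derived from the config.toml above (keep the docker.sock volume here and drop the docker:dind service from .gitlab-ci.yml):
[[runners]]
  name = "XXX Runner"
  url = "https://gitlab.xxx.xx/"
  token = "xxxxxxxxxxx"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    # The host daemon is shared through the socket, so .gitlab-ci.yml
    # must not also declare the docker:dind service.
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]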
