I'm getting this error output from the gitlab-runner I have installed locally:
error during connect: Get "http://docker:2375/v1.24/containers/json?all=1&filters=%7B%22label%22%3A%7B%22com.docker.compose.project%3Dproject-0%22%3Atrue%7D%7D&limit=0": dial tcp: lookup docker on 192.168.1.254:53: no such host
Test failed: users users-lint client
ERROR: Job failed: exit code 1
FATAL: exit code 1
My config.toml file:
concurrent = 1
check_interval = 0

[[runners]]
  name = "tdd"
  url = "https://gitlab.com/"
  token = "redacted_XXXXXXXX"
  executor = "docker"
  builds_dir = "~/tdd"
  [runners.docker]
    tls_verify = false
    disable_entrypoint_overwrite = false
    image = "docker:stable"
    privileged = true
    oom_kill_disable = false
    disable_cache = false
    cache_dir = ""
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
Taking a look at my .gitlab-ci.yml file may help:
image: docker:stable

# services:
#   - docker:dind

variables:
  DOCKER_DRIVER: overlay
  DOCKER_HOST: tcp://docker:2375
  POSTGRES_DB: users_dev
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  POSTGRES_HOST: postgres

stages:
  - build

before_script:
  - apk update
  - apk add --no-cache --update python3-dev py3-pip curl
  - curl -L https://github.com/docker/compose/releases/download/v2.3.3/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - mv docker-compose /usr/local/bin
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1
  - export SECRET_KEY=hakuna000matata

build:
  stage: build
  script:
    - sh test.sh
I commented out services because it works OK in the CI environment, but threads on the web suggest it's better to keep it off when running the build locally.
Any ideas what might help? I'm on Ubuntu.
I have registered gitlab-runner with the following command:
gitlab-runner register --non-interactive \
  --url ${URL} \
  --registration-token ${REGISTRATION_TOKEN} \
  --description ${RUNNER_NAME} \
  --tag-list ${TAGS} \
  --executor "docker" \
  --docker-image="docker:stable" \
  --docker-pull-policy if-not-present \
  --locked=false \
  --docker-privileged=true \
  --docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
  --docker-volumes "/cache"
Here is my runner config:
[[runners]]
  name = "XXX"
  url = "XXX"
  id = 19981753
  token = "XXX"
  token_obtained_at = 2022-12-24T11:43:10Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
    pull_policy = ["if-not-present"]
    shm_size = 0
Then I try to run CI with docker-compose, but there is an error: docker-compose: not found.
Here is the relevant part of my .gitlab-ci.yml:
before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

stages:
  - build
  - staging
  - release
  - deploy

build:
  stage: build
  script:
    - echo "IMAGE_APP_TAG=$STAGE_IMAGE_APP_TAG" >> .env
    - docker-compose build
    - docker-compose push
  only:
    - dev
    - main
Which image should I use for the docker executor to run docker-compose? Should I change .gitlab-ci.yml or the gitlab-runner config.toml?
In the build job, just add the docker/compose image:
build:
  image: docker/compose
Or you can use any other image and install Docker Compose yourself.
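For example, a job that installs the Compose binary itself might look roughly like this (a sketch, not a drop-in config; v2.3.3 is an example version and the URL follows the usual GitHub release layout, so pin whichever version you need):

```yaml
build:
  image: docker:stable
  before_script:
    - apk add --no-cache curl
    # Download a pinned docker-compose binary and make it executable
    - curl -fsSL "https://github.com/docker/compose/releases/download/v2.3.3/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
  script:
    - docker-compose build
    - docker-compose push
```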
I've just installed Gitlab-runner locally on my Ubuntu machine so I can debug my pipeline without using shared runners.
I'm getting this error output:
$ docker-compose up -d --build
Couldn't connect to Docker daemon at http://docker:2375 - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
ERROR: Failed to cleanup volumes
ERROR: Job failed: exit code 1
FATAL: exit code 1
When I run docker --version I get:
Docker version 20.10.12, build e91ed57
and when I run sudo systemctl status docker I get:
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2022-01-01 20:26:25 GMT; 37min ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 1404 (dockerd)
      Tasks: 20
     Memory: 112.0M
     CGroup: /system.slice/docker.service
             └─1404 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
so it is installed and running, hence the error output is confusing.
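One thing worth checking in a case like this: the docker CLI inside the job may be pointed somewhere other than the host daemon. A small debugging sketch you could drop into the job's script:

```shell
# The Docker CLI connects to $DOCKER_HOST if set, otherwise to the local socket.
effective_host="${DOCKER_HOST:-unix:///var/run/docker.sock}"
echo "Docker CLI will connect to: ${effective_host}"

# If the default socket is expected, verify it is actually mounted in the job container.
if [ -S /var/run/docker.sock ]; then
  echo "host socket is present"
else
  echo "no host socket mounted - only a dind service or TCP daemon can work"
fi
```

If DOCKER_HOST points at tcp://docker:2375 but no dind service is linked, the "is it running?" error is expected even though the host daemon is fine.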
Here's my pipeline:
image: docker:stable

services:
  - docker:dind

stages:
  - build
  - test

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules

before_script:
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1

job:
  stage: build
  script:
    - apk add --update --no-cache gcc g++ make python2 python2-dev py-pip python3-dev docker-compose
    - docker-compose up -d --build
    - docker logs testdriven_e2e:latest -f
  after_script:
    - docker-compose down
I start the runner by executing gitlab-runner exec docker --docker-privileged job.
Any suggestion as to why the runner is complaining about Docker not running ?
Update: based on suggestions in this thread https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1986, I tried the following:
image: docker:stable

variables:
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

stages:
  - build
  - test

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules

before_script:
  - docker info
  - export REACT_APP_USERS_SERVICE_URL=http://127.0.0.1

job:
  stage: build
  script:
    - apk add --update --no-cache gcc g++ make python2 python2-dev py-pip python3-dev curl
    - curl -L "https://github.com/docker/compose/releases/download/v2.2.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    - docker-compose up -d --build
    - docker logs testdriven_e2e:latest -f
  after_script:
    - docker-compose down
config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "testdriven"
  url = "https://gitlab.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
      insecure = false
  [runners.docker]
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    cache_dir = "cache"
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    shm_size = 0
error output:
$ docker info
Client:
Debug Mode: false
Server:
ERROR: Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
errors pretty printing info
ERROR: Failed to cleanup volumes
ERROR: Job failed: exit code 1
FATAL: exit code 1
Strangely, what worked for me was pinning the dind version like this:
services:
  - docker:18.09-dind
Which port is Docker using on your system? It seems to be running on a non-default port. Try adding this to your .gitlab-ci.yml file, but change the 2375 port:
variables:
  DOCKER_HOST: "tcp://docker:2375"
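More generally, there are two common ways to wire Docker into a job, and they should not be mixed. A rough sketch (not a drop-in config; adjust image tags and ports to your setup):

```yaml
# Variant 1: Docker-in-Docker - the docker CLI talks to the dind service over TCP.
services:
  - docker:dind
variables:
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_TLS_CERTDIR: ""  # dind 19.03+ enables TLS by default; empty disables it for plain TCP

# Variant 2: host socket - the runner mounts /var/run/docker.sock in config.toml,
# and the job must NOT set DOCKER_HOST (the CLI then defaults to the socket).
```

With the socket variant, containers started by docker-compose run on the host daemon, so published ports are reachable via the host, not via a "docker" hostname.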
I'm trying to run GitLab CI on my own server. I registered gitlab-runner on a separate machine with privileges:
sudo gitlab-runner register -n \
  --url https://git.myServer.com/ \
  --registration-token TOKEN \
  --executor docker \
  --description "Docker runner" \
  --docker-image "myImage:version" \
  --docker-privileged
Then I created a simple .gitlab-ci.yml configuration
stages:
  - build

default:
  image: myImage:version

build-os:
  stage: build
  script: ./build
My build script compiles some C++ files and triggers some CMake files. However, one of those CMake files fails when trying to execute the configure_file command:
CMake Error at CMakeLists.txt:80 (configure_file):
Operation not permitted
I think it's a privileges problem with my gitlab-runner, but I registered it with sudo privileges.
Any idea of what I'm missing? thank you!
edit:
Here's my config.toml file
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Description"
  url = "https://git.myServer.com/"
  token = "TOKEN"
  executor = "docker"
  environment = [
    "DOCKER_AUTH_CONFIG={config}",
    "GIT_STRATEGY=clone",
  ]
  clone_url = "https://git.myServer.com"
  builds_dir = "/home/gitlab-runner/build"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "myImage:version"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = [
      "/tmp/.X11-unix:/tmp/.X11-unix",
      "/dev:/dev",
      "/run/user/1000/gdm/Xauthority:/home/gitlab-runner/.Xauthority",
    ]
    memory = "8g"
    memory_swap = "8g"
    ulimit = ["core=0", "memlock=-1", "rtprio=99"]
    shm_size = 0
    pull_policy = ["if-not-present"]
    network_mode = "host"
I have also tried changing the user from gitlab-runner to my host user following this but it didn't work.
This is the line which makes my build fail.
I entered the container from the runner machine while the CI was running and noticed that the repository was cloned as root, but the build directories were created under a user. The configure_file command is trying to modify a file from the repository, so a user is trying to modify a file created as root (when cloned). I didn't manage to make the gitlab-runner software clone the repository as that user. Instead, my workaround was to change the permissions of the folder before building. My .gitlab-ci.yml looks like this now:
stages:
  - build

default:
  image: myImage:version

build-os:
  stage: build
  script:
    - cd ../ && sudo chown -R user:sudo my-repo/ && cd my-repo/
    - ./build
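To confirm the ownership mismatch described above before applying the chown, a quick check like this can be added to the job (a sketch; `stat -c '%U'` prints the owning user with GNU/busybox coreutils):

```shell
# Compare the owner of the checked-out repository with the user running the build.
# A mismatch (repo owned by root, build running as an unprivileged user) is what
# makes CMake's configure_file fail with "Operation not permitted".
repo_owner=$(stat -c '%U' .)
build_user=$(whoami)
echo "repo owned by: ${repo_owner}, building as: ${build_user}"
if [ "${repo_owner}" != "${build_user}" ]; then
  echo "ownership mismatch detected"
fi
```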
I have an application that runs in docker-compose (for acceptance testing). The acceptance tests work locally, but they require the host (or IP) of the webservice container running in docker-compose in order to send requests to it. This works fine locally, but I cannot find the IP of the container when it is running on a GitLab CI server. I've tried the following solutions (all of which work locally, but none of which work in GitLab CI) to find the URL of the container:
- use "docker" as the host. This works for an application running in docker, but not docker-compose
- use docker inspect to find the IP of the container (docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' reading-comprehension)
- assign a static IP to the container using a network in docker-compose.yml (latest attempt)
The gitlab ci file can be found here:
https://gitlab.com/connorbutch/reading-comprehension/-/blob/9-list-all-assessments/.gitlab-ci.yml
image: connorbutch/gradle-and-java-11:alpha

variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false"
  DOCKER_HOST: "tcp://docker:2375"
  DOCKER_DRIVER: "overlay2"

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle

services:
  - docker:stable-dind

stages:
  - build
  - docker_build
  - acceptance_test

unit_test:
  stage: build
  script: ./gradlew check
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: pull
    paths:
      - build
      - .gradle

build:
  stage: build
  script:
    - ./gradlew clean quarkusBuild
    - ./gradlew clean build -Dquarkus.package.type=native -Dquarkus.native.container-build=true
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: push
    paths:
      - build
      - .gradle
  artifacts:
    paths:
      - reading-comprehension-server-quarkus-impl/build/

docker_build:
  stage: docker_build
  script:
    - cd reading-comprehension-server-quarkus-impl
    - docker build -f infrastructure/Dockerfile -t registry.gitlab.com/connorbutch/reading-comprehension:$CI_COMMIT_SHORT_SHA .
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker push registry.gitlab.com/connorbutch/reading-comprehension:$CI_COMMIT_SHORT_SHA

acceptance_test:
  stage: acceptance_test
  only:
    - merge_requests
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - cd reading-comprehension-server-quarkus-impl/infrastructure
    - export IMAGE_TAG=$CI_COMMIT_SHORT_SHA
    - docker-compose up -d & ../../wait-for-it-2.sh
    - cd ../..
    - ./gradlew -DBASE_URL='192.168.0.8' acceptanceTest
  artifacts:
    paths:
      - reading-comprehension/reading-comprehension-server-quarkus-impl/build/
The docker-compose file can be found here:
https://gitlab.com/connorbutch/reading-comprehension/-/blob/9-list-all-assessments/reading-comprehension-server-quarkus-impl/infrastructure/docker-compose.yml
Find the output of one of the failed jobs here:
https://gitlab.com/connorbutch/reading-comprehension/-/jobs/734771859
# This file is NOT ever intended for use in production. docker-compose is a great tool
# for running a database with our application for acceptance testing.
version: '3.3'

networks:
  network:
    ipam:
      driver: default
      config:
        - subnet: 192.168.0.0/24

services:
  db:
    image: mysql:5.7.10
    container_name: "db"
    restart: always
    environment:
      MYSQL_DATABASE: "rc"
      MYSQL_USER: "user"
      MYSQL_PASSWORD: "password"
      MYSQL_ROOT_PASSWORD: "password"
      MYSQL_ROOT_HOST: "%"
    networks:
      network:
        ipv4_address: 192.168.0.4
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - db:/var/lib/mysql

  reading-comprehension-ws:
    image: "registry.gitlab.com/connorbutch/reading-comprehension:${IMAGE_TAG}"
    container_name: "reading-comprehension"
    restart: on-failure
    environment:
      WAIT_HOSTS: "db:3306"
      DB_USER: "user"
      DB_PASSWORD: "password"
      DB_JDBC_URL: "jdbc:mysql://192.168.0.4:3306/rc"
    networks:
      network:
        ipv4_address: 192.168.0.8
    ports:
      - 8080:8080
    expose:
      - 8080

volumes:
  db:
Does anyone have any idea on how to access the ip of the container running in docker-compose on gitlab ci server? Any suggestions are welcome.
Thanks,
Connor
This is a little bit tricky; just a few days ago I had a similar problem, but with a VPN from CI to the client. :)
EDIT: Solution for on-premise gitlab instances
Create a custom network for the gitlab runners:
docker network create --subnet=172.16.0.0/28 \
  --opt com.docker.network.bridge.name=gitlab-runners \
  --opt com.docker.network.bridge.enable_icc=true \
  --opt com.docker.network.bridge.enable_ip_masquerade=true \
  --opt com.docker.network.bridge.host_binding_ipv4=0.0.0.0 \
  --opt com.docker.network.driver.mtu=9001 gitlab-runners
Attach the new network to the runners:
# /etc/gitlab-runner/config.toml
[[runners]]
  ...
  [runners.docker]
    ...
    network_mode = "gitlab-runners"
Restart the runners.
And finally, in .gitlab-ci.yml:
start-vpn:
  stage: prepare-deploy
  image: docker:stable
  cache: {}
  variables:
    GIT_STRATEGY: none
  script:
    - >
      docker run -it -d --rm
      --name vpn-branch-$CI_COMMIT_REF_NAME
      --privileged
      --net gitlab-runners
      -e VPNADDR=$VPN_SERVER
      -e VPNUSER=$VPN_USER
      -e VPNPASS=$VPN_PASSWORD
      auchandirect/forticlient || true && sleep 2
    - >
      docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
      vpn-branch-$CI_COMMIT_REF_NAME > vpn_container_ip
  artifacts:
    paths:
      - vpn_container_ip
And in the next step you can use something like:
before_script:
  - ip route add 10.230.163.0/24 via $(cat vpn_container_ip) # prod/dev
  - ip route add 10.230.164.0/24 via $(cat vpn_container_ip) # test
EDIT: Solution for gitlab.com
Based on this gitlab issue answer, port mapping in DinD is a bit different from a non-DinD gitlab-runner, and for exposed ports you should use the hostname 'docker'.
Example:
services:
  - docker:stable-dind

variables:
  DOCKER_HOST: "tcp://docker:2375"

stages:
  - test

test env:
  image: tmaier/docker-compose:latest
  stage: test
  script:
    # containous/whoami with exposed port 80:80
    - docker-compose up -d
    - apk --no-cache add curl
    - curl docker:80 # <-------
    - docker-compose down
I'm using docker and not docker-compose, and the solution above doesn't work for me.
I am using my own image, based on node, in which I install docker & buildx like this:
ARG NODE_VER=lts-alpine
FROM node:${NODE_VER}

ARG BUILDX_VERSION=0.5.1
ARG DOCKER_VERSION=20.10.6
ARG BUILDX_ARCH=linux-arm-v7

RUN apk --no-cache add curl

# Install docker
RUN curl -SL "https://download.docker.com/linux/static/stable/armhf/docker-${DOCKER_VERSION}.tgz" | \
    tar -xz --strip-components 1 --directory /usr/local/bin/
COPY docker/modprobe.sh /usr/local/bin/modprobe

# Replace node entrypoint by docker one /!\
COPY docker/docker-entrypoint.sh /usr/local/bin/
ENV DOCKER_TLS_CERTDIR=/certs
RUN mkdir /certs /certs/client && chmod 1777 /certs /certs/client

# Download buildx
RUN mkdir -p /usr/lib/docker/cli-plugins \
    && curl -L \
       --output /usr/lib/docker/cli-plugins/docker-buildx \
       "https://github.com/docker/buildx/releases/download/v${BUILDX_VERSION}/buildx-v${BUILDX_VERSION}.${BUILDX_ARCH}"
RUN chmod a+x /usr/lib/docker/cli-plugins/docker-buildx

RUN mkdir -p /etc/docker && echo '{"experimental": true}' > /usr/lib/docker/config.json
My gitlab-ci.yml contains:
image: myimageabove

variables:
  DOCKER_DRIVER: overlay2
  PLATFORMS: linux/arm/v7
  IMAGE_NAME: ${CI_PROJECT_NAME}
  TAG: ${CI_COMMIT_BRANCH}-latest
  REGISTRY: registry.gitlab.com
  REGISTRY_ROOT: mygroup
  WEBSOCKETD_VER: 0.4.1
  # DOCKER_GATEWAY_HOST: 172.17.0.1
  DOCKER_GATEWAY_HOST: docker

services:
  - docker:dind

before_script:
  - docker info

build:
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" ${REGISTRY}
    - docker buildx create --use
    - docker buildx build --platform $PLATFORMS --tag "${REGISTRY}/${REGISTRY_ROOT}/${IMAGE_NAME}:${TAG}" --push .

test:
  stage: test
  variables:
    WSD_DIR: /tmp/websocketd
    WSD_FILE: /tmp/websocketd/websocketd
  cache:
    key: websocketd
    paths:
      - ${WSD_DIR}
  before_script:
    # download websocketd and put in cache if needed
    - if [ ! -f ${WSD_FILE} ]; then
        mkdir -p ${WSD_DIR};
        curl -o ${WSD_FILE}.zip -L "https://github.com/joewalnes/websocketd/releases/download/v${WEBSOCKETD_VER}/websocketd-${WEBSOCKETD_VER}-linux_arm.zip";
        unzip -o ${WSD_FILE}.zip websocketd -d ${WSD_DIR};
        chmod 755 ${WSD_FILE};
      fi;
    - mkdir /home/pier
    - cp -R ./test/resources/* /home/pier
    # get websocketd from cache
    - cp ${WSD_FILE} /home/pier/Admin/websocketd
    # setup envt variables
    - JWT_KEY=$(cat /home/pier/Services/Secrets/WEBSOCKETD_KEY)
    # - DOCKER_GATEWAY_HOST=$(/sbin/ip route|awk '/default/ { print $3 }')
    # - DOCKER_GATEWAY_HOST=$(hostname)
    - ENVT="-e BASE_URL=/ -e JWT_KEY=$JWT_KEY -e WEBSOCKETD_KEY=$JWT_KEY -e WEBSOCKET_URL=ws://${DOCKER_GATEWAY_HOST:-host.docker.internal}:8088 -e SERVICES_DIR=/home/pier/Services"
    - VOLUMES='-v /tmp:/config -v /home/pier/Services:/services -v /etc/wireguard:/etc/wireguard'
  script:
    # start websocketd
    - /home/pier/start.sh &
    # start docker pier admin
    - docker run -p 4000:4000 ${ENVT} ${VOLUMES} ${REGISTRY}/${REGISTRY_ROOT}/${IMAGE_NAME}:${TAG}
    # run postman tests
    - newman run https://api.getpostman.com/collections/${POSTMAN_COLLECTION_UID}?apikey=${POSTMAN_API_KEY}

deploy:
  stage: deploy
  script:
    # just push to docker hub
    - docker login -u "$DOCKERHUB_REGISTRY_USER" -p "$DOCKERHUB_REGISTRY_PASSWORD" ${DOCKERHUB}
    - docker buildx build --platform $PLATFORMS --tag "${DOCKERHUB}/mygroup/${IMAGE}:${TAG}" --push .
When I run this, the build job works alright, and the test job's before_script works, but when the script starts I get the following trace:
# <= this starts the websocketd server locally on port 8088 =>
$ /home/pier/start.sh &
# <= this starts the image I just built which should connect to the above websocketd server =>
$ docker run -p 4000:4000 ${ENVT} ${VOLUMES} ${REGISTRY}/${REGISTRY_ROOT}/${IMAGE_NAME}:${TAG}
# <= trace of the websocketd server start with url ws://runner-hbghjvzp-project-22314059-concurrent-0:8088/ =>
Tue, 11 May 2021 12:08:13 +0000 | INFO | server | | Serving using application : ./websocket-server.py
Tue, 11 May 2021 12:08:13 +0000 | INFO | server | | Starting WebSocket server : ws://runner-hbghjvzp-project-22314059-concurrent-0:8088/
# <= trace of the image start saying it tries to connect to the websocketd server =>
Websocket connecting to ws://docker:8088 ...
Listen on port 4000
# <= trace with ENOTFOUND on "docker" address =>
websocket connection failed: Error: getaddrinfo ENOTFOUND docker
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'docker'
}
/pier/storage/websocket-client.js:52
throw err;
^
Error: getaddrinfo ENOTFOUND docker
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'docker'
}
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
I tried other approaches, which failed similarly:
websocket connection failed: Error: getaddrinfo ENOTFOUND host.docker.internal
websocket connection failed: Error: connect ETIMEDOUT 172.17.0.1:8088 # <= same error when trying $(/sbin/ip route|awk '/default/ { print $3 }') =>
websocket connection failed: Error: getaddrinfo ENOTFOUND runner-meuessxe-project-22314059-concurrent-0 # using $(hostname)
I'm out of new ideas...
Would greatly appreciate any help on that one.
I am trying to cache Ruby gems locally so my Docker jobs run faster.
I found this 8-month-old post, Configure cache on GitLab runner, which says caching locally isn't possible. Is that still true, or am I just doing it wrong?
My gitlab-ci.yml:
stages:
  - test

test:unit:
  stage: test
  image: ruby:2.5.8
  cache:
    key: gems
    untracked: true
    paths:
      - vendor/ruby
  services:
    - mysql:5.7
  variables:
    MYSQL_DB: inter_space_test
    MYSQL_USER: root
    MYSQL_ROOT_PASSWORD: root
    MYSQL_PASSWORD: ''
    MYSQL_HOST: mysql
    RAILS_ENV: test
  script:
    - bundle config set path 'vendor/ruby'
    - cp config/database.yml_ci config/database.yml
    - apt-get update && apt-get install -y nodejs
    - gem install bundler --no-document
    - bundle install -j $(nproc) --path vendor/ruby
    - ls -lah vendor/ruby/
    - bundle exec rake db:setup
    - bundle exec rake db:migrate
    - bundle exec rails test -d
My config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "main"
  url = "https://gitlab.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.5.8"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    cache_dir = "/vendor/ruby"
    volumes = ["/cache", "/vendor/ruby", "/var/cache/apt"]
    shm_size = 300000
I fixed it. These two lines did it:
- export BUNDLE_PATH="/vendor/ruby"
- bundle config set path '/vendor/ruby'
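Putting the fix in context: the bundle path has to match the volume the runner mounts ("/vendor/ruby" in the config.toml above) so the installed gems survive between jobs. A trimmed sketch of the job:

```yaml
test:unit:
  image: ruby:2.5.8
  script:
    # Point Bundler at the directory mounted as a runner volume, so gems persist
    # between jobs on this runner instead of being reinstalled every time.
    - export BUNDLE_PATH="/vendor/ruby"
    - bundle config set path '/vendor/ruby'
    - bundle install -j $(nproc)
```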