I have a locally hosted GitLab CI setup that I am running via Docker Compose. I am trying to push a basic app through the pipeline, but I keep getting the error below in the runner. My .gitlab-ci.yml file is in a repo with a Kotlin project. When I run the same .gitlab-ci.yml file in a blank repo (i.e., no Kotlin project, just the .gitlab-ci.yml file), it works. Any idea why I'm getting this error and the pipeline is failing?
GitLab CI File
image: alpine

stages:
  - build
  - test

build:
  stage: build
  script:
    - mkdir build
    - touch build/info.txt
  artifacts:
    paths:
      - ./build/libs/

test:
  stage: test
  script:
    - echo "Testing"
    - test -f "build/info.txt"
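As an aside, the artifacts path in this file does not match what the build script creates: the script writes build/info.txt, but only ./build/libs/ is archived, so a fresh checkout in the test job would not contain the file. A minimal consistent sketch, assuming the intent is to hand build/info.txt to the test stage:

build:
  stage: build
  script:
    - mkdir build
    - touch build/info.txt
  artifacts:
    paths:
      - build/   # archive the whole build directory so info.txt reaches the test job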
Docker Compose File
version: "3.7"
services:
  web:
    image: 'gitlab/gitlab-ce:latest'
    restart: always
    hostname: 'XXX'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://XXX'
    ports:
      - '80:80'
      - '443:443'
      - '223:22'
    volumes:
      - '/Users/XXX/dockvol/srv/gitlab/config:/etc/gitlab'
      - '/Users/XXX/dockvol/srv/gitlab/logs:/var/log/gitlab'
      - '/Users/XXX/dockvol/srv/gitlab/data:/var/opt/gitlab'
  runner:
    image: 'gitlab/gitlab-runner:latest'
    restart: always
    user: root
    privileged: true
    volumes:
      - '/Users/XXX/dockvol/srv/gitlab-runner/config:/etc/gitlab-runner'
      - '/var/run/docker.sock:/var/run/docker.sock'
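For completeness, a runner for this Compose setup would typically be registered once against the GitLab service, roughly as in the sketch below (the URL and token are placeholders for your instance's values):

# one-off registration; writes /etc/gitlab-runner/config.toml into the mounted volume
docker-compose exec runner gitlab-runner register \
  --non-interactive \
  --url 'https://XXX' \
  --registration-token '<YOUR_TOKEN>' \
  --executor docker \
  --docker-image alpine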
GitLab Runner Logs
Running with gitlab-runner 12.10.1 (ce065b93)
  on first u7d9d-Gt
Preparing the "docker" executor
Using Docker executor with image registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image:v0.2.2 ...
Starting service docker:19.03.8-dind ...
Pulling docker image docker:19.03.8-dind ...
Using docker image sha256:c814ba3a41a3de0a9a23b7d0bb36f64257b12aef5103b4ce1d5f1bfc3033aad3 for docker:19.03.8-dind ...
Waiting for services to be up and running...

*** WARNING: Service runner-u7d9d-gt-project-2-concurrent-0-2742755dfb40c120-docker-0 probably didn't start properly.
Health check error:
service "runner-u7d9d-gt-project-2-concurrent-0-2742755dfb40c120-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2020-05-11T10:56:16.757478300Z time="2020-05-11T10:56:16.753561500Z" level=info msg="Starting up"
2020-05-11T10:56:16.757519200Z time="2020-05-11T10:56:16.754810900Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2020-05-11T10:56:16.757539400Z time="2020-05-11T10:56:16.754999600Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting --tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
2020-05-11T10:56:16.759713500Z time="2020-05-11T10:56:16.759610700Z" level=info msg="libcontainerd: started new containerd process" pid=24
2020-05-11T10:56:16.759987700Z time="2020-05-11T10:56:16.759877800Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2020-05-11T10:56:16.760232100Z time="2020-05-11T10:56:16.760052300Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2020-05-11T10:56:16.760440300Z time="2020-05-11T10:56:16.760323100Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
2020-05-11T10:56:16.760697900Z time="2020-05-11T10:56:16.760562700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2020-05-11T10:56:16.802604300Z time="2020-05-11T10:56:16.802375600Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=v1.2.13
2020-05-11T10:56:16.802887300Z time="2020-05-11T10:56:16.802666400Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
2020-05-11T10:56:16.802911600Z time="2020-05-11T10:56:16.802756700Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
2020-05-11T10:56:16.803104600Z time="2020-05-11T10:56:16.802954000Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
2020-05-11T10:56:16.803127900Z time="2020-05-11T10:56:16.802996000Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
2020-05-11T10:56:16.808895200Z time="2020-05-11T10:56:16.808690300Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
2020-05-11T10:56:16.808920800Z time="2020-05-11T10:56:16.808735700Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
2020-05-11T10:56:16.808938400Z time="2020-05-11T10:56:16.808831800Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
2020-05-11T10:56:16.809111500Z time="2020-05-11T10:56:16.808985800Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
2020-05-11T10:56:16.809360200Z time="2020-05-11T10:56:16.809185500Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
2020-05-11T10:56:16.809517400Z time="2020-05-11T10:56:16.809286000Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
2020-05-11T10:56:16.809541700Z time="2020-05-11T10:56:16.809360200Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
2020-05-11T10:56:16.809561500Z time="2020-05-11T10:56:16.809381000Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
2020-05-11T10:56:16.809576500Z time="2020-05-11T10:56:16.809405200Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
2020-05-11T10:56:16.815691100Z time="2020-05-11T10:56:16.815570700Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
2020-05-11T10:56:16.815717500Z time="2020-05-11T10:56:16.815635400Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
2020-05-11T10:56:16.815792400Z time="2020-05-11T10:56:16.815691100Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
2020-05-11T10:56:16.815812800Z time="2020-05-11T10:56:16.815711600Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
2020-05-11T10:56:16.815832200Z time="2020-05-11T10:56:16.815731400Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
2020-05-11T10:56:16.815959900Z time="2020-05-11T10:56:16.815758300Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
2020-05-11T10:56:16.815979600Z time="2020-05-11T10:56:16.815786300Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
2020-05-11T10:56:16.816031600Z time="2020-05-11T10:56:16.815812800Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
2020-05-11T10:56:16.816050500Z time="2020-05-11T10:56:16.815832200Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
2020-05-11T10:56:16.816069200Z time="2020-05-11T10:56:16.815852500Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
2020-05-11T10:56:16.816256700Z time="2020-05-11T10:56:16.816012200Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
2020-05-11T10:56:16.816295100Z time="2020-05-11T10:56:16.816107400Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
2020-05-11T10:56:16.816670700Z time="2020-05-11T10:56:16.816517200Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
2020-05-11T10:56:16.816689200Z time="2020-05-11T10:56:16.816565100Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
2020-05-11T10:56:16.816905500Z time="2020-05-11T10:56:16.816601200Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
2020-05-11T10:56:16.816927300Z time="2020-05-11T10:56:16.816644400Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
2020-05-11T10:56:16.816946500Z time="2020-05-11T10:56:16.816664600Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
2020-05-11T10:56:16.816970000Z time="2020-05-11T10:56:16.816683100Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
2020-05-11T10:56:16.816988200Z time="2020-05-11T10:56:16.816706000Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
2020-05-11T10:56:16.817007500Z time="2020-05-11T10:56:16.816725600Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
2020-05-11T10:56:16.817027300Z time="2020-05-11T10:56:16.816748100Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
2020-05-11T10:56:16.817051600Z time="2020-05-11T10:56:16.816770600Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
2020-05-11T10:56:16.817069300Z time="2020-05-11T10:56:16.816826200Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
2020-05-11T10:56:16.817164600Z time="2020-05-11T10:56:16.817013400Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
2020-05-11T10:56:16.817184800Z time="2020-05-11T10:56:16.817051600Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
2020-05-11T10:56:16.817204200Z time="2020-05-11T10:56:16.817069300Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
2020-05-11T10:56:16.817512500Z time="2020-05-11T10:56:16.817088000Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
2020-05-11T10:56:16.817535100Z time="2020-05-11T10:56:16.817246500Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
2020-05-11T10:56:16.817554300Z time="2020-05-11T10:56:16.817388600Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
2020-05-11T10:56:16.817887700Z time="2020-05-11T10:56:16.817745100Z" level=info msg="containerd successfully booted in 0.015996s"
2020-05-11T10:56:16.832721600Z time="2020-05-11T10:56:16.831736800Z" level=info msg="Setting the storage driver from the $DOCKER_DRIVER environment variable (overlay2)"
2020-05-11T10:56:16.832749800Z time="2020-05-11T10:56:16.831998200Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2020-05-11T10:56:16.832767100Z time="2020-05-11T10:56:16.832027100Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2020-05-11T10:56:16.832787000Z time="2020-05-11T10:56:16.832051500Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
2020-05-11T10:56:16.832814000Z time="2020-05-11T10:56:16.832071300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2020-05-11T10:56:16.835365700Z time="2020-05-11T10:56:16.834371800Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2020-05-11T10:56:16.835384000Z time="2020-05-11T10:56:16.834434500Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2020-05-11T10:56:16.835404400Z time="2020-05-11T10:56:16.834464500Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
2020-05-11T10:56:16.835460300Z time="2020-05-11T10:56:16.834487500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2020-05-11T10:56:16.872802700Z time="2020-05-11T10:56:16.870967500Z" level=info msg="Loading containers: start."
2020-05-11T10:56:16.892366800Z time="2020-05-11T10:56:16.891473000Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: ip: can't find device 'bridge'\nip: can't find device 'br_netfilter'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n, error: exit status 1"
2020-05-11T10:56:17.032576600Z time="2020-05-11T10:56:17.032377200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
2020-05-11T10:56:17.091567300Z time="2020-05-11T10:56:17.091375400Z" level=info msg="Loading containers: done."
2020-05-11T10:56:17.113255800Z time="2020-05-11T10:56:17.113013400Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
2020-05-11T10:56:17.113701300Z time="2020-05-11T10:56:17.113556300Z" level=info msg="Daemon has completed initialization"
2020-05-11T10:56:17.179131600Z time="2020-05-11T10:56:17.178944800Z" level=info msg="API listen on [::]:2375"
2020-05-11T10:56:17.179529600Z time="2020-05-11T10:56:17.179155300Z" level=info msg="API listen on /var/run/docker.sock"
*********
Pulling docker image registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image:v0.2.2 ...
Using docker image sha256:a9a470e7a925ecfd27cfbb60e98c0915f02a3eb8a81f15fb6b11af1baca21e63 for registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image:v0.2.2 ...
Preparing environment
Running on runner-u7d9d-gt-project-2-concurrent-0 via 242fc900f561...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/XXX/starter-project-kotlin/.git/
From http://XXX/XXX/starter-project-kotlin
 * [new ref]         refs/pipelines/9 -> refs/pipelines/9
   fa35e89..260c063  master     -> origin/master
Checking out 260c0632 as master...
Removing Dockerfile
Skipping Git submodules setup
Restoring cache
Downloading artifacts
Running before_script and script
$ if [[ -z "$CI_COMMIT_TAG" ]]; then # collapsed multi-line command
$ /build/build.sh
Building Heroku-based application using gliderlabs/herokuish docker image...
invalid reference format
invalid reference format
invalid argument "/master:260c0632aca32f789a54acdb976cde17e0113f62" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
Running after_script
Uploading artifacts for failed job
ERROR: Job failed: exit code 1
It seems the build tag "/master:260c0632aca32f789a54acdb976cde17e0113f62" is in the wrong format for Docker.
A tag name must be valid ASCII and may contain lowercase and uppercase letters, digits, underscores, periods and dashes; it may not start with a period or a dash, and it may contain at most 128 characters (see the Docker tag reference). Note the leading "/": the repository part in front of "master" is empty, which suggests the variable that should supply it was never set.
Does the "$CI_COMMIT_TAG" environment variable on GitLab CI hold content in that format? There appear to be some problems with that build.sh script.
Some related issues:
https://gitlab.com/gitlab-org/gitlab-foss/-/issues/64530
Docker build failed: tag invalid reference format (Gitlab CI)
In that case the problem was eventually solved as follows:
If anyone is having this issue in combination with Heroku-based applications (e.g. in GitLab AutoDevOps), you might need to activate the GitLab container registry on your GitLab installation and in your project.
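In the Compose setup from the question, activating the registry would roughly mean extending GITLAB_OMNIBUS_CONFIG and exposing the registry port, e.g. (a sketch; the :5050 registry URL and port mapping are assumptions, not values from the original post):

web:
  image: 'gitlab/gitlab-ce:latest'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://XXX'
      registry_external_url 'https://XXX:5050'   # enables the built-in container registry
  ports:
    - '80:80'
    - '443:443'
    - '5050:5050'   # hypothetical registry port mapping
    - '223:22'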
Related
I am struggling to resolve the following issue:
Cannot connect to the Docker daemon at tcp://localhost:2375/. Is the docker daemon running?
I am using our company's GitLab EE instance, which comes with a bunch of shared group runners. However, I would like to be able to use my own runners, especially since I will be able to employ the GPU for some machine learning tasks. I have the following .gitlab-ci.yml:
run_tests:
  image: python:3.9-slim-buster
  before_script:
    - apt-get update
    - apt-get install make
  script:
    - python --version
    - pip --version
    - make test

build_image:
  image: docker:20.10.23
  services:
    - docker:20.10.23-dind
  variables:
    DOCKER_TLS_CRETDIR: "/certs"
    DOCKER_HOST: tcp://localhost:2375/
  before_script:
    - echo "User $REGISTRY_USER"
    - echo "Token $ACCESS_TOKEN"
    - echo "Host $REGISTRY_HOST_ALL"
    - echo "$ACCESS_TOKEN" | docker login --username $REGISTRY_USER --password-stdin $REGISTRY_HOST_ALL
  script:
    - docker build --tag $REGISTRY_HOST_ALL/<PATH_TO_USER>/python-demoapp .
    - docker push $REGISTRY_HOST_ALL/<PATH_TO_USER>/python-demoapp
The application is currently a demo and it's used in the following tutorial. Note that <PATH_TO_USER> in the above URLs is just a placeholder (I cannot reveal the original one since it contains internal information) and points at my account space, where the project python-demoapp is located. With untagged jobs enabled, I am hoping to have the following workflow:
1. Push application code change
2. GitLab pipeline triggered
   2.1 Execute tests
   2.2 Build image
   2.3 Push image to container repository
3. Re-use the image with the application inside (e.g. run locally)
I have set up the variables accordingly to contain my username, an access token (generated in GitLab) and the registry host. All of these are correct, and I am able to execute everything up to the docker build ... section.
Now as for the runner, I followed the instructions provided in GitLab to set it up. I chose to create a VM (QEMU+KVM+libvirt) with a standard minimal installation of Debian 11, with everything set to default (including the NAT network, which appears to be working, since I can access the Internet through it), where the runner currently resides. I am doing this in order to save the setup and later transfer it onto a server, running multiple, slightly modified instances of the VM (e.g. with GPU passthrough for an Nvidia CUDA Docker/Podman setup).
Besides the runner (the binary was downloaded from our GitLab instance), I installed Docker CE (in the future to be replaced with Podman due to licensing and pricing) following the official instructions. The Docker engine runs as a systemd service (docker.service, docker.socket), i.e. I need sudo to interact with it. The runner has its own user (also part of the sudo group), as the official documentation instructs.
The GitLab runner's configuration file gitlab-runner-config.toml contains the following information:
concurrent = 1
check_interval = 0
shutdown_timeout = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Test runner (Debian 11 VM, Docker CE, personal computer)"
  url = "<COMPANY_GITLAB_INSTANCE_URL>"
  id = <RUNNER_ID>
  token = "<ACCESS_TOKEN>"
  token_obtained_at = 2023-01-24T09:18:33Z
  token_expires_at = 2023-02-01T00:00:00Z
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "python:3.9-slim-buster"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    cache_dir = "/cache"
    volumes = ["/cache", "/certs/client", "/var/run/docker.sock"]
    shm_size = 0
The configuration file was generated by running
sudo gitlab-runner register --url <COMPANY_GITLAB_INSTANCE_URL> --registration-token <ACCESS_TOKEN>
I added the extra cache volumes beside /cache and the cache_dir, and changed privileged to true (based on my research). All of this is based on various posts (including Docker's own issue tracker) from people having the same issue.
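For comparison, GitLab's documented Docker-in-Docker setup pairs privileged = true with the client certificate volume and does not mount the host's Docker socket; a sketch of the [runners.docker] section under that scheme:

[runners.docker]
  tls_verify = false
  image = "docker:20.10.23"
  privileged = true
  volumes = ["/certs/client", "/cache"]

The job then talks to the service over TLS with DOCKER_HOST: tcp://docker:2376, DOCKER_TLS_CERTDIR: "/certs" and DOCKER_TLS_VERIFY: "1", rather than tcp://localhost:2375/. Note also that the variable in the posted .gitlab-ci.yml is spelled DOCKER_TLS_CRETDIR, so even the intended certificate directory never reaches the dind service.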
I have made sure that dockerd is listening on the respective port:
$ sudo ss -nltp
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=601,fd=3))
LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=601,fd=4))
LISTEN 0 4096 *:2375 *:* users:(("dockerd",pid=618,fd=9))
In addition, I have added export DOCKER_HOST=tcp://0.0.0.0:2375 to the .bashrc of every user out there (except root; perhaps that's the problem?), including the gitlab-runner user.
The Dockerfile within the repository contains the following:
FROM python:3.9-slim-buster
RUN apt-get update && apt-get install -y make
The CI/CD pipeline log for this job is (trimmed down) as follows:
Running with gitlab-runner 15.8.0 (12335144)
on Test runner (Debian 11 VM, Docker CE, personal computer) <IDENTIFIER>, system ID: <SYSTEM_ID>
Preparing the "docker" executor 02:34
Using Docker executor with image docker:20.10.23 ...
Starting service docker:20.10.23-dind ...
Pulling docker image docker:20.10.23-dind ...
Using docker image sha256:70ae571e74c1d711d3d5bf6f47eaaf6a51dd260fe0036c7d6894c008e7d24297 for docker:20.10.23-dind with digest docker@sha256:85a1b877d0f59fd6c7eebaff67436e26f460347a79229cf054dbbe8d5ae9f936 ...
Waiting for services to be up and running (timeout 30 seconds)...
*** WARNING: Service runner-dbms-tss-project-42787-concurrent-0-b0bbcfd1a821fc06-docker-0 probably didn't start properly.
Health check error:
service "runner-dbms-tss-project-42787-concurrent-0-b0bbcfd1a821fc06-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2023-01-26T10:09:30.933962365Z Certificate request self-signature ok
2023-01-26T10:09:30.933981575Z subject=CN = docker:dind server
2023-01-26T10:09:30.943472545Z /certs/server/cert.pem: OK
2023-01-26T10:09:32.607191653Z Certificate request self-signature ok
2023-01-26T10:09:32.607205915Z subject=CN = docker:dind client
2023-01-26T10:09:32.616426179Z /certs/client/cert.pem: OK
2023-01-26T10:09:32.705354066Z time="2023-01-26T10:09:32.705227099Z" level=info msg="Starting up"
2023-01-26T10:09:32.706355355Z time="2023-01-26T10:09:32.706298649Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2023-01-26T10:09:32.707357671Z time="2023-01-26T10:09:32.707318325Z" level=info msg="libcontainerd: started new containerd process" pid=72
2023-01-26T10:09:32.707460567Z time="2023-01-26T10:09:32.707425103Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2023-01-26T10:09:32.707466043Z time="2023-01-26T10:09:32.707433214Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2023-01-26T10:09:32.707468621Z time="2023-01-26T10:09:32.707445818Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
2023-01-26T10:09:32.707491420Z time="2023-01-26T10:09:32.707459517Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2023-01-26T10:09:32.768123834Z time="2023-01-26T10:09:32Z" level=warning msg="containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header"
2023-01-26T10:09:32.768761837Z time="2023-01-26T10:09:32.768714616Z" level=info msg="starting containerd" revision=5b842e528e99d4d4c1686467debf2bd4b88ecd86 version=v1.6.15
2023-01-26T10:09:32.775684382Z time="2023-01-26T10:09:32.775633270Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
2023-01-26T10:09:32.775764839Z time="2023-01-26T10:09:32.775729470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
2023-01-26T10:09:32.779824244Z time="2023-01-26T10:09:32.779733556Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"ip: can't find device 'aufs'\\nmodprobe: can't change directory to '/lib/modules': No such file or directory\\n\"): skip plugin" type=io.containerd.snapshotter.v1
2023-01-26T10:09:32.779836825Z time="2023-01-26T10:09:32.779790644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
2023-01-26T10:09:32.779932891Z time="2023-01-26T10:09:32.779904447Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
2023-01-26T10:09:32.779944348Z time="2023-01-26T10:09:32.779929392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
2023-01-26T10:09:32.779958443Z time="2023-01-26T10:09:32.779940747Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
2023-01-26T10:09:32.779963141Z time="2023-01-26T10:09:32.779951447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
2023-01-26T10:09:32.780022382Z time="2023-01-26T10:09:32.780000266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
2023-01-26T10:09:32.780134525Z time="2023-01-26T10:09:32.780107812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
2023-01-26T10:09:32.780499276Z time="2023-01-26T10:09:32.780466045Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
2023-01-26T10:09:32.780507315Z time="2023-01-26T10:09:32.780489797Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
2023-01-26T10:09:32.780548237Z time="2023-01-26T10:09:32.780529316Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
2023-01-26T10:09:32.780552144Z time="2023-01-26T10:09:32.780544232Z" level=info msg="metadata content store policy set" policy=shared
2023-01-26T10:09:32.795982271Z time="2023-01-26T10:09:32.795854170Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
2023-01-26T10:09:32.795991535Z time="2023-01-26T10:09:32.795882407Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
2023-01-26T10:09:32.795993243Z time="2023-01-26T10:09:32.795894367Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
2023-01-26T10:09:32.795994639Z time="2023-01-26T10:09:32.795932065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
2023-01-26T10:09:32.795996061Z time="2023-01-26T10:09:32.795949931Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
2023-01-26T10:09:32.795997456Z time="2023-01-26T10:09:32.795963627Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
2023-01-26T10:09:32.796001074Z time="2023-01-26T10:09:32.795983562Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
2023-01-26T10:09:32.796219139Z time="2023-01-26T10:09:32.796194319Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
2023-01-26T10:09:32.796231068Z time="2023-01-26T10:09:32.796216520Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
2023-01-26T10:09:32.796240878Z time="2023-01-26T10:09:32.796228403Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
2023-01-26T10:09:32.796254974Z time="2023-01-26T10:09:32.796239993Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
2023-01-26T10:09:32.796261567Z time="2023-01-26T10:09:32.796252251Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
2023-01-26T10:09:32.796385360Z time="2023-01-26T10:09:32.796360610Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
2023-01-26T10:09:32.796451372Z time="2023-01-26T10:09:32.796435082Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
2023-01-26T10:09:32.797042788Z time="2023-01-26T10:09:32.796984264Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
2023-01-26T10:09:32.797093357Z time="2023-01-26T10:09:32.797073997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
2023-01-26T10:09:32.797100437Z time="2023-01-26T10:09:32.797091084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
2023-01-26T10:09:32.797148696Z time="2023-01-26T10:09:32.797138286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
2023-01-26T10:09:32.797164876Z time="2023-01-26T10:09:32.797153186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
2023-01-26T10:09:32.797176732Z time="2023-01-26T10:09:32.797165488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
2023-01-26T10:09:32.797187328Z time="2023-01-26T10:09:32.797176464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
2023-01-26T10:09:32.797208889Z time="2023-01-26T10:09:32.797196407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
2023-01-26T10:09:32.797220812Z time="2023-01-26T10:09:32.797209290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
2023-01-26T10:09:32.797232031Z time="2023-01-26T10:09:32.797221051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
2023-01-26T10:09:32.797242686Z time="2023-01-26T10:09:32.797231676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
2023-01-26T10:09:32.797254415Z time="2023-01-26T10:09:32.797243815Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
2023-01-26T10:09:32.797484534Z time="2023-01-26T10:09:32.797456547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
2023-01-26T10:09:32.797500729Z time="2023-01-26T10:09:32.797487444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
2023-01-26T10:09:32.797524336Z time="2023-01-26T10:09:32.797502098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
2023-01-26T10:09:32.797535447Z time="2023-01-26T10:09:32.797526933Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
2023-01-26T10:09:32.797562995Z time="2023-01-26T10:09:32.797539848Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
2023-01-26T10:09:32.797570791Z time="2023-01-26T10:09:32.797558864Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
2023-01-26T10:09:32.797589770Z time="2023-01-26T10:09:32.797579849Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
2023-01-26T10:09:32.797766243Z time="2023-01-26T10:09:32.797741256Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
2023-01-26T10:09:32.797805542Z time="2023-01-26T10:09:32.797792483Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
2023-01-26T10:09:32.797836935Z time="2023-01-26T10:09:32.797820296Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
2023-01-26T10:09:32.797854712Z time="2023-01-26T10:09:32.797842891Z" level=info msg="containerd successfully booted in 0.029983s"
2023-01-26T10:09:32.802286356Z time="2023-01-26T10:09:32.802232926Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2023-01-26T10:09:32.802291484Z time="2023-01-26T10:09:32.802269035Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2023-01-26T10:09:32.802322916Z time="2023-01-26T10:09:32.802306355Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
2023-01-26T10:09:32.802369464Z time="2023-01-26T10:09:32.802323232Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2023-01-26T10:09:32.803417318Z time="2023-01-26T10:09:32.803366010Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2023-01-26T10:09:32.803424723Z time="2023-01-26T10:09:32.803376046Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2023-01-26T10:09:32.803426453Z time="2023-01-26T10:09:32.803384392Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
2023-01-26T10:09:32.803428210Z time="2023-01-26T10:09:32.803389450Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2023-01-26T10:09:32.837720263Z time="2023-01-26T10:09:32.837658881Z" level=info msg="Loading containers: start."
2023-01-26T10:09:32.886897024Z time="2023-01-26T10:09:32.886828923Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
2023-01-26T10:09:32.920867085Z time="2023-01-26T10:09:32.920800006Z" level=info msg="Loading containers: done."
2023-01-26T10:09:32.944768798Z time="2023-01-26T10:09:32.944696558Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
2023-01-26T10:09:32.944804324Z time="2023-01-26T10:09:32.944774928Z" level=info msg="Daemon has completed initialization"
2023-01-26T10:09:32.973804146Z time="2023-01-26T10:09:32.973688991Z" level=info msg="API listen on /var/run/docker.sock"
2023-01-26T10:09:32.976059008Z time="2023-01-26T10:09:32.975992051Z" level=info msg="API listen on [::]:2376"
*********
Pulling docker image docker:20.10.23 ...
Using docker image sha256:25deb61ef2709b05249ad4e66f949fd572fb43d67805d5ea66fe3f86766b5cef for docker:20.10.23 with digest docker@sha256:2655039c6abfc8a1d75978c5258fccd5c5cedf880b6cfc72077f076d0672c70a ...
Preparing environment 00:00
Running on runner-dbms-tss-project-42787-concurrent-0 via debian...
Getting source from Git repository 00:02
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /builds/<PATH_TO_USER>/python-demoapp/.git/
Checking out 93e494ea as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:01
Using docker image sha256:25deb61ef2709b05249ad4e66f949fd572fb43d67805d5ea66fe3f86766b5cef for docker:20.10.23 with digest docker@sha256:2655039c6abfc8a1d75978c5258fccd5c5cedf880b6cfc72077f076d0672c70a ...
$ echo "User $REGISTRY_USER"
User [MASKED]
$ echo "Token $ACCESS_TOKEN"
Token [MASKED]
$ echo "Host $REGISTRY_HOST_ALL"
Host ..............
$ echo "$ACCESS_TOKEN" | docker login --username $REGISTRY_USER --password-stdin $REGISTRY_HOST_ALL
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker build --tag $REGISTRY_HOST_ALL/<PATH_TO_USER>/python-demoapp .
Cannot connect to the Docker daemon at tcp://localhost:2375/. Is the docker daemon running?
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 1
From my understanding I need two images here:
- The Python-capable one: here the official Python image from Docker Hub, which is used to run the tests and also serves as the base for the image that is pushed to the container registry.
- The Docker dind one: this is the Docker-in-Docker setup, which allows building a Docker image inside a running Docker container.
The second one is way above my head, and it is the (for me) obvious culprit behind my headaches.
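One detail worth spelling out: with the Docker executor, the dind daemon runs in a separate service container, which the job reaches under the service's network alias docker, not under localhost. A sketch of the build job under the simpler non-TLS scheme (assumes a privileged runner; setting DOCKER_TLS_CERTDIR empty disables TLS so dind listens on 2375):

build_image:
  image: docker:20.10.23
  services:
    - docker:20.10.23-dind
  variables:
    DOCKER_HOST: tcp://docker:2375   # service alias, not localhost
    DOCKER_TLS_CERTDIR: ""           # disable TLS inside the dind service
  script:
    - docker info                    # quick connectivity check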
Perhaps important additional information: my computer is outside our company's network. The GitLab instance is accessible externally through user authentication (username + password for the web UI, access tokens and SSH keys otherwise).
Do I need two separate runners? I have seen a lot of examples where people use a single runner for multiple jobs, including testing and image building (even packaging), so I don't believe I do. I am not really a Docker expert, as you can probably tell. :D If more information is required, please let me know in the comments below, especially if I am overdoing it and there is a much easier way to accomplish what I am trying to do.
DISCUSSION
Health check error regarding Docker volume
I can see the following error in the log posted above:
Health check error:
service "runner-dbms-tss-project-42787-concurrent-0-b0bbcfd1a821fc06-docker-0-wait-for-service" timeout
The name looked familiar, so I went back and checked some old commands I had executed; apparently this is a Docker volume. However, on my host
$ docker volume ls
DRIVER    VOLUME NAME
local     runner-...415a70
local     runner-...66cea8
neither volume has that name. So I am guessing that this is a volume created by Docker-in-Docker.
Adding hosts to the JSON configuration file for the Docker daemon
I added the following configuration at /etc/systemd/system/docker.service.d/90-docker.conf:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --config-file /etc/docker/daemon.json
with daemon.json containing the following:
{
  "hosts": [
    "tcp://0.0.0.0:2375",
    "unix:///var/run/docker.sock"
  ]
}
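The empty ExecStart= line clears the stock command (which passes -H flags that would clash with hosts in daemon.json); after editing both files, the unit has to be reloaded, roughly like this:

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo ss -nltp | grep 2375              # confirm the TCP listener is back
docker -H tcp://127.0.0.1:2375 info    # confirm the API answers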
Now I am noticing an additional error in the job's log:
failed to load listeners: can't create unix socket /var/run/docker.sock: is a directory
On my host I checked, and the path is an actual socket file (verified by running the file command on it). This means that the issue is again inside the Docker container that is part of the DinD setup. I have read online that Docker apparently creates the path automatically, and that it ends up as a directory for some reason.
In addition, the error mentioned in the original question has now changed to
unable to resolve docker endpoint: Invalid bind address format: http://localhost:2375/
even though I cannot find any http://localhost:2375 entry anywhere on my host, which again points to something going wrong inside the DinD setup.
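That last message is the Docker client rejecting the URL scheme rather than a connection failure: DOCKER_HOST accepts tcp://, unix://, ssh:// or fd:// addresses, not http://. Wherever the value comes from, checking and correcting it in a job or shell would look roughly like:

echo "$DOCKER_HOST"                   # shows the offending http://localhost:2375/
export DOCKER_HOST=tcp://docker:2375  # tcp:// scheme; 'docker' service alias assumed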
First post here, so apologies for any errors.
I have a Docker environment that exhibits a really strange problem.
It used to work flawlessly when I was on 18.09.2, but then I needed to upgrade the Docker version because some containers required it, due to a change in the API version (IIRC).
I upgraded to 20.10.2 (without a reboot) and everything seemed to be OK: containers started and I could use them.
After some time I had a power failure that forced a reboot, and since then I have the problem.
At boot, docker commands result in:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Thus I searched the logs (/var/log/docker.log) and found:
time="2021-08-30T16:40:11.702266553+02:00" level=info msg="Starting up"
time="2021-08-30T16:40:11.715505120+02:00" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
time="2021-08-30T16:40:11.728188524+02:00" level=info msg="libcontainerd: started new containerd process" pid=9883
time="2021-08-30T16:40:11.728497763+02:00" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2021-08-30T16:40:11.728564781+02:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2021-08-30T16:40:11.728723243+02:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
time="2021-08-30T16:40:11.728841483+02:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2021-08-30T16:40:11.813209337+02:00" level=info msg="starting containerd" revision=269548fa27e0089a8b8278fc4fc781d7f65a939b version=1.4.3
time="2021-08-30T16:40:11.928783093+02:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2021-08-30T16:40:11.929009055+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.936721860+02:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Modu
le aufs not found in directory /lib/modules/5.4.65-v7l-sarpi4\\n\"): skip plugin" type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.936880396+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.937437133+02:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4)
must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.937510744+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.937618391+02:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2021-08-30T16:40:11.937684465+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.937796094+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.938041796+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.938477682+02:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a z
fs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.938549200+02:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2021-08-30T16:40:11.938622793+02:00" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2021-08-30T16:40:11.938674255+02:00" level=info msg="metadata content store policy set" policy=shared
time="2021-08-30T16:40:11.938972068+02:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2021-08-30T16:40:11.939055994+02:00" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2021-08-30T16:40:11.939191530+02:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939374825+02:00" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939489232+02:00" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939557250+02:00" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939634268+02:00" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939699008+02:00" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939768008+02:00" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939834674+02:00" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939925785+02:00" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2021-08-30T16:40:11.940284968+02:00" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2021-08-30T16:40:12.729504178+02:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc
= \"transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout\". Reconnecting..." module=grpc
time="2021-08-30T16:40:15.081866772+02:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc
= \"transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout\". Reconnecting..." module=grpc
time="2021-08-30T16:40:18.723223037+02:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc
= \"transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout\". Reconnecting..." module=grpc
time="2021-08-30T16:40:23.950263284+02:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc
= \"transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout\". Reconnecting..." module=grpc
failed to start containerd: timeout waiting for containerd to start
I've banged my head against the wall and finally found that if I remove the
/var/run/docker/containerd
directory, I can start dockerd and containerd without any issue, but I obviously lose every running Docker instance and need to docker rm and docker start my containers again.
Do you have any idea why this happens?
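For reference, the workaround described above boils down to the sequence below (a sketch; the rc.docker init script path is an assumption for Slackware, and the step discards the running container state):

sudo /etc/rc.d/rc.docker stop            # stop dockerd (init script path assumed for Slackware)
sudo rm -rf /var/run/docker/containerd   # clear the stale containerd runtime state
sudo /etc/rc.d/rc.docker start           # dockerd and containerd now come up
docker ps -a                             # containers still exist but must be started again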
My environment:
root@casa:/var/adm/packages# cat /etc/slackware-version
Slackware 14.2+
root@casa:/var/adm/packages# uname -a
Linux casa.pigi.org 5.4.65-v7l-sarpi4 #3 SMP Mon Sep 21 10:13:26 BST 2020 armv7l BCM2711 GNU/Linux
root@casa:/var/adm/packages# docker info
Client:
 Context: default
 Debug Mode: false

Server:
 Containers: 5
  Running: 5
  Paused: 0
  Stopped: 0
 Images: 9
 Server Version: 20.10.2
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc version:
 init version: fec3683 (expected: de40ad0)
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.4.65-v7l-sarpi4
 Operating System: Slackware 14.2 arm (post 14.2 -current)
 OSType: linux
 Architecture: armv7l
 CPUs: 4
 Total Memory: 3.738GiB
 Name: casa.pigi.org
 ID: HF4Y:7TDZ:O5GV:HM7H:YCVS:CLKW:GNOM:6PSA:XRCQ:3BQU:TZ3P:URLD
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: pigi102
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
WARNING: No blkio weight support
WARNING: No blkio weight_device support
root@casa:/var/adm/packages# runc -v
runc version spec: 1.0.1-dev
root@casa:/var/adm/packages# /usr/bin/docker -v
Docker version 20.10.2, build 2291f61
root@casa:/var/adm/packages# containerd -v
containerd github.com/containerd/containerd 1.4.3 269548fa27e0089a8b8278fc4fc781d7f65a939b
docker-proxy-20201215_fa125a3
Thanks in advance.
Pigi_102
I did some more tests, and it seems that if I run containerd (with all the options and flags that dockerd passes to it) and wait long enough, it eventually starts, and from then on dockerd is able to start as well.
I managed to fix my problem by downgrading to Docker 19.03.15 and containerd 1.2.13.
With these versions everything works as expected.
Pigi
I am not sure I understand what is wrong with this .gitlab-ci.yml:
image: docker

services:
  - docker:dind

stages:
  - deploy

before_script:
  - docker info
  - apk update
  - apk upgrade
  - apk add python python-dev py-pip build-base libffi-dev openssl-dev
  - pip install docker-compose

deploy_sandbox:
  stage: deploy
  only:
    - master
  script:
    - docker-compose up -d --build --force-recreate
  environment: stage
The deploy_sandbox job succeeds, but I don't have any containers on this brand-new server. I tried reinstalling gitlab-runner and registering it in a different way, making sure that it runs with a stage, but I still get the same issue.
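One thing worth checking in this setup: docker-compose inside the job talks to whatever daemon DOCKER_HOST points at, and with the docker:dind service that daemon lives in a throwaway service container, so anything it starts vanishes when the job ends. If the goal is containers running on the server itself, one option (an assumption about the intent, not the only design) is to drop dind and mount the host socket in the runner:

[runners.docker]
  volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]

With that, docker-compose in the job drives the host daemon directly, and the deployed containers survive the job.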
Lead
There is a warning at the beginning of the log:
*** WARNING: Service runner-fa6cab46-project-14612439-concurrent-0-docker-0 probably didn't start properly.
Health check error:
service "runner-fa6cab46-project-14612439-concurrent-0-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
level=info msg="Starting up"
level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting --tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
level=info msg="libcontainerd: started new containerd process" pid=18
level=info msg="parsed scheme: \"unix\"" module=grpc
level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
level=info msg="containerd successfully booted in 0.042946s"
level=info msg="Setting the storage driver from the $DOCKER_DRIVER environment variable (overlay2)"
level=info msg="parsed scheme: \"unix\"" module=grpc
level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
level=info msg="parsed scheme: \"unix\"" module=grpc
level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
level=info msg="Loading containers: start."
level=warning msg="Running modprobe bridge br_netfilter failed with message: ip: can't find device 'bridge'\nbridge 167936 1 br_netfilter\nstp 16384 1 bridge\nllc 16384 2 bridge,stp\nip: can't find device 'br_netfilter'\nbr_netfilter 24576 0 \nbridge 167936 1 br_netfilter\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n, error: exit status 1"
level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
level=info msg="Loading containers: done."
level=info msg="Docker daemon" commit=a872fc2f86 graphdriver(s)=overlay2 version=19.03.3
level=info msg="Daemon has completed initialization"
level=info msg="API listen on [::]:2375"
level=info msg="API listen on /var/run/docker.sock"
*********
This line might be the reason: warning msg="Running modprobe bridge br_netfilter failed with message: ip: can't find device 'bridge'\nbridge 167936 1 br_netfilter\nstp. Googling it, I see that this error is related to the use of custom networks, but since that is not my case, I am not sure how to fix it, nor whether it is the reason for this strange behavior.
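If you want to rule those modprobe warnings out, one thing to try (an assumption on my part, not a confirmed fix) is giving the service containers access to the host's kernel modules through the runner's config.toml volumes, so modprobe can actually find /lib/modules:
[runners.docker]
  # Assumption: mounting the host's modules read-only silences the
  # "can't change directory to '/lib/modules'" modprobe warnings.
  volumes = ["/cache", "/lib/modules:/lib/modules:ro"]
That said, these warnings are usually harmless noise from the dind image, so the health-check timeout may well have a different cause.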
Env
Debian 10 (4.19.0-6-amd64)
gitlab-runner 12.3.0
Short story: I have a gitlab-runner in a docker-compose setup and I want to be able to use DinD, but I'm facing some difficulties...
I am trying to create a platform which contains:
a SonarQube instance
GitLab CE
GitLab Runner
a Docker registry
These services are started and managed by docker-compose.
I use GitLab CI to run tests, check coverage, and build a Docker image which is uploaded to the registry.
I have a single shared runner, which works for testing purposes.
Here is the config.toml:
concurrent = 1
check_interval = 0
[session_server]
  session_timeout = 1800
[[runners]]
  name = "21fbd75383fe"
  url = "http://gitlab/ci"
  token = "--"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:latest"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    network_mode = "oral_default"
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.custom]
    run_exec = ""
My problem:
I need to specify network_mode = "oral_default" in the runner in order to clone the repository.
But if I want to use DinD to build my image, I get this error:
Health check container logs:
2019-08-20T14:11:21.847061412Z FATAL: No HOST or PORT found
Service container logs:
2019-08-20T14:11:18.189776447Z Generating RSA private key, 4196 bit long modulus (2 primes)
2019-08-20T14:11:18.495587062Z .......................................++++
2019-08-20T14:11:19.261799191Z ...............................................................................................++++
2019-08-20T14:11:19.262353078Z e is 65537 (0x010001)
2019-08-20T14:11:19.288253880Z Generating RSA private key, 4196 bit long modulus (2 primes)
2019-08-20T14:11:19.735803254Z .......................................................++++
2019-08-20T14:11:20.998049980Z .........................................................................................................................................................++++
2019-08-20T14:11:20.998511667Z e is 65537 (0x010001)
2019-08-20T14:11:21.040579379Z Signature ok
2019-08-20T14:11:21.040598512Z subject=CN = docker:dind server
2019-08-20T14:11:21.040814852Z Getting CA Private Key
2019-08-20T14:11:21.071374613Z /certs/server/cert.pem: OK
2019-08-20T14:11:21.075263091Z Generating RSA private key, 4196 bit long modulus (2 primes)
2019-08-20T14:11:21.159644328Z .........++++
2019-08-20T14:11:22.011823318Z ..............................................................................................................++++
2019-08-20T14:11:22.012330364Z e is 65537 (0x010001)
2019-08-20T14:11:22.046700923Z Signature ok
2019-08-20T14:11:22.046735711Z subject=CN = docker:dind client
2019-08-20T14:11:22.046961229Z Getting CA Private Key
2019-08-20T14:11:22.067938238Z /certs/client/cert.pem: OK
2019-08-20T14:11:22.099482505Z time="2019-08-20T14:11:22.099370855Z" level=info msg="Starting up"
2019-08-20T14:11:22.100758237Z time="2019-08-20T14:11:22.100680440Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2019-08-20T14:11:22.101707958Z time="2019-08-20T14:11:22.101626009Z" level=info msg="libcontainerd: started new containerd process" pid=54
2019-08-20T14:11:22.101727175Z time="2019-08-20T14:11:22.101657983Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2019-08-20T14:11:22.101733998Z time="2019-08-20T14:11:22.101673740Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2019-08-20T14:11:22.101750834Z time="2019-08-20T14:11:22.101693854Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] }" module=grpc
2019-08-20T14:11:22.101758034Z time="2019-08-20T14:11:22.101710395Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2019-08-20T14:11:22.101883362Z time="2019-08-20T14:11:22.101777690Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0008181f0, CONNECTING" module=grpc
2019-08-20T14:11:22.119465945Z time="2019-08-20T14:11:22.119356782Z" level=info msg="starting containerd" revision=894b81a4b802e4eb2a91d1ce216b8817763c29fb version=v1.2.6
2019-08-20T14:11:22.119997814Z time="2019-08-20T14:11:22.119921726Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
2019-08-20T14:11:22.120066267Z time="2019-08-20T14:11:22.120010967Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
2019-08-20T14:11:22.120297760Z time="2019-08-20T14:11:22.120239139Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
2019-08-20T14:11:22.120305857Z time="2019-08-20T14:11:22.120253119Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
2019-08-20T14:11:22.124698054Z time="2019-08-20T14:11:22.124622589Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "ip: can't find device 'aufs'\naufs 241664 0 \nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
2019-08-20T14:11:22.124716529Z time="2019-08-20T14:11:22.124642302Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
2019-08-20T14:11:22.124759418Z time="2019-08-20T14:11:22.124715546Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
2019-08-20T14:11:22.124901964Z time="2019-08-20T14:11:22.124862487Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
2019-08-20T14:11:22.125128168Z time="2019-08-20T14:11:22.125083244Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
2019-08-20T14:11:22.125137429Z time="2019-08-20T14:11:22.125095730Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
2019-08-20T14:11:22.125191366Z time="2019-08-20T14:11:22.125143058Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "ip: can't find device 'aufs'\naufs 241664 0 \nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
2019-08-20T14:11:22.125200443Z time="2019-08-20T14:11:22.125154226Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
2019-08-20T14:11:22.125205718Z time="2019-08-20T14:11:22.125160660Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
2019-08-20T14:11:22.299853510Z time="2019-08-20T14:11:22.299730279Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
2019-08-20T14:11:22.299878846Z time="2019-08-20T14:11:22.299776167Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
2019-08-20T14:11:22.299887790Z time="2019-08-20T14:11:22.299812949Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
2019-08-20T14:11:22.299904150Z time="2019-08-20T14:11:22.299828135Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
2019-08-20T14:11:22.299913745Z time="2019-08-20T14:11:22.299842184Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
2019-08-20T14:11:22.299921184Z time="2019-08-20T14:11:22.299854806Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
2019-08-20T14:11:22.299927659Z time="2019-08-20T14:11:22.299869296Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
2019-08-20T14:11:22.299933467Z time="2019-08-20T14:11:22.299884994Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
2019-08-20T14:11:22.299938406Z time="2019-08-20T14:11:22.299904463Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
2019-08-20T14:11:22.299943250Z time="2019-08-20T14:11:22.299917532Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
2019-08-20T14:11:22.300179457Z time="2019-08-20T14:11:22.300128875Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
2019-08-20T14:11:22.300316944Z time="2019-08-20T14:11:22.300270682Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
2019-08-20T14:11:22.300745465Z time="2019-08-20T14:11:22.300693221Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
2019-08-20T14:11:22.300776133Z time="2019-08-20T14:11:22.300731401Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
2019-08-20T14:11:22.300819617Z time="2019-08-20T14:11:22.300782007Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
2019-08-20T14:11:22.300828421Z time="2019-08-20T14:11:22.300797250Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
2019-08-20T14:11:22.300840355Z time="2019-08-20T14:11:22.300809287Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
2019-08-20T14:11:22.300845835Z time="2019-08-20T14:11:22.300821506Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
2019-08-20T14:11:22.300863231Z time="2019-08-20T14:11:22.300835107Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
2019-08-20T14:11:22.300870180Z time="2019-08-20T14:11:22.300846235Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
2019-08-20T14:11:22.300899955Z time="2019-08-20T14:11:22.300858124Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
2019-08-20T14:11:22.300910656Z time="2019-08-20T14:11:22.300868856Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
2019-08-20T14:11:22.300924355Z time="2019-08-20T14:11:22.300885954Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
2019-08-20T14:11:22.301165214Z time="2019-08-20T14:11:22.301127593Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
2019-08-20T14:11:22.301173167Z time="2019-08-20T14:11:22.301148082Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
2019-08-20T14:11:22.301197447Z time="2019-08-20T14:11:22.301160478Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
2019-08-20T14:11:22.301208675Z time="2019-08-20T14:11:22.301172158Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
2019-08-20T14:11:22.301420074Z time="2019-08-20T14:11:22.301383826Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
2019-08-20T14:11:22.301510586Z time="2019-08-20T14:11:22.301457137Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
2019-08-20T14:11:22.301521798Z time="2019-08-20T14:11:22.301472502Z" level=info msg="containerd successfully booted in 0.182717s"
2019-08-20T14:11:22.306618029Z time="2019-08-20T14:11:22.306496623Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0008181f0, READY" module=grpc
2019-08-20T14:11:22.308604516Z time="2019-08-20T14:11:22.308507649Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2019-08-20T14:11:22.308624244Z time="2019-08-20T14:11:22.308531988Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2019-08-20T14:11:22.308630203Z time="2019-08-20T14:11:22.308550514Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] }" module=grpc
2019-08-20T14:11:22.308635654Z time="2019-08-20T14:11:22.308567856Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2019-08-20T14:11:22.308694129Z time="2019-08-20T14:11:22.308627145Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000707cd0, CONNECTING" module=grpc
2019-08-20T14:11:22.308731380Z time="2019-08-20T14:11:22.308648131Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
2019-08-20T14:11:22.308943521Z time="2019-08-20T14:11:22.308874942Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000707cd0, READY" module=grpc
2019-08-20T14:11:22.309450117Z time="2019-08-20T14:11:22.309385625Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2019-08-20T14:11:22.309462252Z time="2019-08-20T14:11:22.309404366Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2019-08-20T14:11:22.309467958Z time="2019-08-20T14:11:22.309419574Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] }" module=grpc
2019-08-20T14:11:22.309473276Z time="2019-08-20T14:11:22.309431644Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2019-08-20T14:11:22.309568429Z time="2019-08-20T14:11:22.309500963Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000144fa0, CONNECTING" module=grpc
2019-08-20T14:11:22.309585745Z time="2019-08-20T14:11:22.309506179Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
2019-08-20T14:11:22.309786808Z time="2019-08-20T14:11:22.309719559Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000144fa0, READY" module=grpc
2019-08-20T14:11:22.749050188Z time="2019-08-20T14:11:22.748856365Z" level=warning msg="Your kernel does not support swap memory limit"
2019-08-20T14:11:22.749090607Z time="2019-08-20T14:11:22.748905994Z" level=warning msg="Your kernel does not support cgroup rt period"
2019-08-20T14:11:22.749100435Z time="2019-08-20T14:11:22.748934597Z" level=warning msg="Your kernel does not support cgroup rt runtime"
2019-08-20T14:11:22.749424856Z time="2019-08-20T14:11:22.749289206Z" level=info msg="Loading containers: start."
2019-08-20T14:11:22.760083557Z time="2019-08-20T14:11:22.759917977Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: ip: can't find device 'bridge'\nbridge 151552 1 br_netfilter\nstp 16384 1 bridge\nllc 16384 2 bridge,stp\nip: can't find device 'br_netfilter'\nbr_netfilter 24576 0 \nbridge 151552 1 br_netfilter\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n, error: exit status 1"
2019-08-20T14:11:22.766459849Z time="2019-08-20T14:11:22.766314726Z" level=warning msg="Running modprobe nf_nat failed with message: `ip: can't find device 'nf_nat'\nnf_nat_masquerade_ipv4 16384 1 ipt_MASQUERADE\nnf_nat_ipv4 16384 1 iptable_nat\nnf_nat 32768 3 xt_nat,nf_nat_masquerade_ipv4,nf_nat_ipv4\nnf_conntrack 131072 9 ip_vs,xt_nat,ipt_MASQUERADE,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_nat_ipv4,xt_conntrack,nf_nat\nlibcrc32c 16384 3 ip_vs,nf_nat,nf_conntrack\nmodprobe: can't change directory to '/lib/modules': No such file or directory`, error: exit status 1"
2019-08-20T14:11:22.772066324Z time="2019-08-20T14:11:22.771952709Z" level=warning msg="Running modprobe xt_conntrack failed with message: `ip: can't find device 'xt_conntrack'\nxt_conntrack 16384 8 \nnf_conntrack 131072 9 ip_vs,xt_nat,ipt_MASQUERADE,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_nat_ipv4,xt_conntrack,nf_nat\nx_tables 40960 11 xt_statistic,ipt_REJECT,xt_comment,xt_mark,xt_nat,xt_tcpudp,ipt_MASQUERADE,xt_addrtype,iptable_filter,xt_conntrack,ip_tables\nmodprobe: can't change directory to '/lib/modules': No such file or directory`, error: exit status 1"
*********
Pulling docker image docker:latest ...
Using docker image sha256:9a38a85b1e4e7bb53b7c7cc45afff6ba7b1cdfe01b9738f36a3b4ad0cdb10b00 for docker:latest ...
Running on runner-sbVCrx6S-project-1-concurrent-0 via 0937d4b8d68a...
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/root/oral/.git/
From http://gitlab/root/oral
b1b2ac2..c0bb7ea master -> origin/master
Checking out c0bb7ea6 as master...
Skipping Git submodules setup
$ docker info
Client:
Debug Mode: false
Server:
ERROR: error during connect: Get http://docker:2375/v1.40/info: dial tcp: lookup docker on 127.0.0.11:53: no such host
Here is my .gitlab-ci.yml:
image: python:3.6-stretch
stages:
  - test
  - sonar
  - upload to registry
.test:
  stage: test
  cache:
    paths:
      - ~/.cache/
  artifacts:
    untracked: true
  script:
    - pip install -r requirement.txt
    - python -m pytest
    - python -m pytest --cov=src --cov-report=xml
.sonar:
  image: zaquestion/sonarqube-scanner
  dependencies:
    - test
  stage: sonar
  script:
    - sonar-scanner
upload to registry:
  image: docker:latest
  stage: upload to registry
  services:
    - docker:dind
  script:
    - docker info
    - docker build -t local_image_oral:latest .
    - docker push local_image_oral:latest
    - docker tag local_image_oral:latest registry:5000/local_image_oral:latest
    - docker push registry:5000/local_image_oral
I strongly suspect that the spawned container is in another network and so can't access the docker-compose network, which leads to this behaviour.
Can you help me?
Thank you.
@MitsiDev this can happen when using a newer Docker image, such as version 19.03. Although this issue is a bit old, the problem persists today.
I faced the issue recently, and it turns out there is a known "solution", or rather a workaround.
Refer to this Release Note if you want more details.
Reason:
As of version 19.03, docker:dind automatically generates TLS certificates and requires using them for communication, as described in Docker's official documentation.
Solution/workarounds:
According to the Release Note, there are two workarounds:
Explicitly turn off TLS.
Configure GitLab Runner to use TLS.
Turn off TLS
If you cannot or do not want to edit the config.toml, such as when running jobs on a GitLab shared runner:
.gitlab-ci.yml:
image: docker:19.03
variables:
  DOCKER_TLS_CERTDIR: ""
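For reference, a complete minimal job using this workaround could look like the sketch below; the service entry and the DOCKER_HOST value follow GitLab's documented plain-HTTP setup, while the job name and image tag are made up for illustration:
image: docker:19.03
services:
  - docker:19.03-dind
variables:
  # With TLS explicitly disabled, dind listens for plain HTTP on port 2375.
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""
build-image:
  script:
    - docker info
    - docker build -t my-image:latest .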
Configure TLS
If you are running jobs on a runner whose config.toml you have write access to (and you know what you are doing):
config.toml:
...
[[runners]]
  name = "My Docker Runner"
  url = "http://gitlab.com"
  token = ""
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    ## Changes here ##
    privileged = true ## may not be necessary, default is false
    volumes = ["/certs/client", "/cache"] ## adds volume "/certs/client"
    shm_size = 0
...
.gitlab-ci.yml:
image: docker:19.03
variables:
  # Create the certificates inside this directory for both the server
  # and client. The certificates used by the client will be created in
  # /certs/client so we only need to share this directory with the
  # volume mount in `config.toml`.
  DOCKER_TLS_CERTDIR: "/certs"
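As a usage sketch (the variables mirror GitLab's documented TLS setup; the job name is made up), the job can then point the Docker client at dind's TLS port and the shared client certificates:
image: docker:19.03
services:
  - docker:19.03-dind
variables:
  DOCKER_TLS_CERTDIR: "/certs"
  # dind serves TLS on port 2376; client certificates are generated in /certs/client.
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_VERIFY: 1
  DOCKER_CERT_PATH: "/certs/client"
check-docker:
  script:
    - docker info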
I have Docker installed on a CentOS system.
I am able to download Docker images using the docker pull command,
but when I run a container using docker run alpine, the server restarts.
This happens every time.
The relevant output from /var/log/messages | grep docker is shown after my configuration below.
Below is my configuration:
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 18.09.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: c4446665cb9c30056f4998ed953e6d4ff22c7c39
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-957.1.3.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 40
Total Memory: 251.7GiB
Name: CHMCISPRDOCKENG
ID: XFPC:SYGF:Q3P7:M32Z:VRTX:TFGZ:YA43:NYSY:UGVK:PC2M:HVAU:TIM2
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
some-registry
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: API is accessible on http://127.0.0.1:2375 without encryption.
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information:
https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Now when I run docker pull alpine, the image is downloaded successfully.
But when I run docker run -it alpine, I am logged out from the server and the server is restarted.
Below are the logs from cat /var/log/messages | grep docker:
Dec 17 19:28:12 CHMCISPRDOCKENG systemd: Started docker.service.
Dec 17 19:28:12 CHMCISPRDOCKENG audispd: node=CHMCISPRDOCKENG type=SERVICE_START msg=audit(1545055092.725:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.023820157+05:30" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting --tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.043715041+05:30" level=info msg="libcontainerd: started new containerd process" pid=15564
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.043788743+05:30" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.043810434+05:30" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.044206037+05:30" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}]" module=grpc
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.044227097+05:30" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.044283337+05:30" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420963f00, CONNECTING" module=grpc
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.726282962+05:30" level=info msg="starting containerd" revision=c4446665cb9c30056f4998ed953e6d4ff22c7c39 version=1.2.0
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.727202682+05:30" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.733393760+05:30" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.733593273+05:30" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.733607841+05:30" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.737142848+05:30" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found.\n": exit status 1"
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.737165057+05:30" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.740458140+05:30" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.747800098+05:30" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.748343654+05:30" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.748359281+05:30" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.748378883+05:30" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.748386577+05:30" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Dec 17 19:28:16 CHMCISPRDOCKENG dockerd: time="2018-12-17T19:28:16.748392893+05:30" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found.\n": exit status 1"
Any suggestions?
Found the answer: it was a storage driver problem.
The server's configuration did not support Docker's default overlay2 storage driver; changing it to devicemapper fixed things.
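In case it helps anyone else, here is a minimal sketch of that change, assuming the daemon reads its options from /etc/docker/daemon.json (the default location on CentOS 7):
{
  "storage-driver": "devicemapper"
}
After editing the file, restart the daemon (e.g. systemctl restart docker); docker info should then report Storage Driver: devicemapper.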