I'm trying to access a GitLab CI service from a container started with docker run, but it doesn't seem to work.
The GitLab docs actually have a nice section about this: https://docs.gitlab.com/ee/ci/services/#using-services-with-docker-run-docker-in-docker-side-by-side
However, even with a 1:1 copy of their code:
access-service:
  stage: build
  image: docker:19.03.1
  before_script:
    - echo "Overriding default before_script"
  services:
    - docker:dind # necessary for docker run
    - tutum/wordpress:latest
  variables:
    FF_NETWORK_PER_BUILD: "true" # activate container-to-container networking
  script: |
    docker run --rm --name curl \
      --volume "$(pwd)":"$(pwd)" \
      --workdir "$(pwd)" \
      --network=host \
      curlimages/curl:7.74.0 curl "http://tutum-wordpress"
I get an error:
Running with gitlab-runner 14.3.4 (77516d85)
on gitlab-aws-autoscaler 7ee750d2
feature flags: FF_NETWORK_PER_BUILD:true, FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR:true
Preparing the "docker+machine" executor 02:34
Using Docker executor with image docker:19.03.1 ...
WARNING: Container based cache volumes creation is disabled. Will not create volume for "/cache"
Starting service docker:dind ...
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image docker:dind ...
Using docker image sha256:1a42336ff683d7dadd320ea6fe9d93a5b101474346302d23f96c9b4546cb414d for docker:dind with digest docker@sha256:6f2ae4a5fd85ccf85cdd829057a34ace894d25d544e5e4d9f2e7109297fedf8d ...
Starting service tutum/wordpress:latest ...
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image tutum/wordpress:latest ...
Using docker image sha256:7e7f97a602ff0c3a30afaaac1e681c72003b4c8a76f8a90696f03e785bf36b90 for tutum/wordpress:latest with digest tutum/wordpress@sha256:2aa05fd3e8543b615fc07a628da066b48e6bf41cceeeb8f4b81e189de6eeda77 ...
Waiting for services to be up and running...
*** WARNING: Service runner-7ee750d2-project-2-concurrent-0-483783518ce3e922-docker-0 probably didn't start properly.
Health check error:
service "runner-7ee750d2-project-2-concurrent-0-483783518ce3e922-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2022-02-22T20:44:10.523612305Z Generating RSA private key, 4096 bit long modulus (2 primes)
2022-02-22T20:44:11.037778878Z ...................................................................................++++
2022-02-22T20:44:11.319540033Z ..................................++++
2022-02-22T20:44:11.320611978Z e is 65537 (0x010001)
2022-02-22T20:44:11.341349948Z Generating RSA private key, 4096 bit long modulus (2 primes)
2022-02-22T20:44:11.360835661Z .++++
2022-02-22T20:44:11.678902603Z ...................................................++++
2022-02-22T20:44:11.679451336Z e is 65537 (0x010001)
2022-02-22T20:44:11.719133216Z Signature ok
2022-02-22T20:44:11.719148571Z subject=CN = docker:dind server
2022-02-22T20:44:11.719151811Z Getting CA Private Key
2022-02-22T20:44:11.734914635Z /certs/server/cert.pem: OK
2022-02-22T20:44:11.738748856Z Generating RSA private key, 4096 bit long modulus (2 primes)
2022-02-22T20:44:11.993700065Z .........................................++++
2022-02-22T20:44:12.036121070Z .....++++
2022-02-22T20:44:12.036364885Z e is 65537 (0x010001)
2022-02-22T20:44:12.067743203Z Signature ok
2022-02-22T20:44:12.067755273Z subject=CN = docker:dind client
2022-02-22T20:44:12.067758449Z Getting CA Private Key
2022-02-22T20:44:12.081823033Z /certs/client/cert.pem: OK
2022-02-22T20:44:12.174949567Z time="2022-02-22T20:44:12.174783104Z" level=info msg="Starting up"
2022-02-22T20:44:12.177055953Z time="2022-02-22T20:44:12.176931675Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2022-02-22T20:44:12.177086275Z failed to load listeners: can't create unix socket /var/run/docker.sock: device or resource busy
*********
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image docker:19.03.1 ...
Using docker image sha256:0cecfefe921f22fc898f7a0055358380c8870ab6f05b01999367911714fe9d00 for docker:19.03.1 with digest docker@sha256:2dcf87c9893b05ab815880e3d223cd6976c388a6f6697de10e90523255259ca4 ...
Not using umask - FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR is set!
...
$ docker run --rm --name curl \ # collapsed multi-line command
Unable to find image 'curlimages/curl:7.74.0' locally
7.74.0: Pulling from curlimages/curl
aad63a933944: Pulling fs layer
...
3d4876cbff99: Pull complete
110e7f874674: Pull complete
Digest: sha256:a3e534fced74aeea171c4b59082f265d66914d09a71062739e5c871ed108a46e
Status: Downloaded newer image for curlimages/curl:7.74.0
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not resolve host: tutum-wordpress
Can anyone give me a pointer as to why this is not working? Does it have to do with the fact that the executor is docker+machine rather than docker?
Here's our config.toml:
[[runners]]
  name = "gitlab-aws-autoscaler"
  url = "https://code.example.com"
  token = "${TOKEN}"
  executor = "docker+machine"
  limit = ${LIMIT_MEDIUM_RUNNERS}
  [runners.docker]
    image = "example/gitlabrunner:2.10"
    privileged = true
    disable_cache = true
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache", "/builds:/builds"]
    wait_for_services_timeout = 120
  [runners.cache]
    Type = "s3"
    ServerAddress = "s3.amazonaws.com"
    AccessKey = "${KEY}"
    SecretKey = "${SECRET}"
    BucketName = "example-gitlab-runner-cache-virginia"
    BucketLocation = "us-east-1"
    Shared = true
  [runners.machine]
    IdleCount = 0
    IdleTime = 1800
    MaxBuilds = 100
    MachineDriver = "amazonec2"
    MachineName = "gitlab-docker-machine-%s"
    MachineOptions = [
      "amazonec2-instance-type=t2.medium",
      "amazonec2-access-key=${KEY}",
      "amazonec2-secret-key=${SECRET}",
      "amazonec2-root-size=100", # GB
      "amazonec2-region=us-east-1",
      "amazonec2-tags=runner-manager-name,gitlab-aws-autoscaler,gitlab,true,gitlab-runner-autoscale,true",
      "amazonec2-security-group=EC2-X-ci-runner",
      "amazonec2-vpc-id=vpc-XXX",
      "amazonec2-subnet-id=subnet-XXX",
      "amazonec2-zone=b",
      "amazonec2-use-private-address=true",
      "amazonec2-private-address-only=true"
    ]
Edit:
When trying to set the DOCKER_HOST variable as suggested in one answer, I get the following errors:
Running with gitlab-runner 14.3.4 (77516d85)
on gitlab-aws-autoscaler 7ee750d2
feature flags: FF_NETWORK_PER_BUILD:true, FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR:true
Preparing the "docker+machine" executor 02:42
Using Docker executor with image docker:19.03.1 ...
WARNING: Container based cache volumes creation is disabled. Will not create volume for "/cache"
Starting service docker:dind ...
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image docker:dind ...
Using docker image sha256:1a42336ff683d7dadd320ea6fe9d93a5b101474346302d23f96c9b4546cb414d for docker:dind with digest docker@sha256:6f2ae4a5fd85ccf85cdd829057a34ace894d25d544e5e4d9f2e7109297fedf8d ...
Starting service tutum/wordpress:latest ...
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image tutum/wordpress:latest ...
Using docker image sha256:7e7f97a602ff0c3a30afaaac1e681c72003b4c8a76f8a90696f03e785bf36b90 for tutum/wordpress:latest with digest tutum/wordpress@sha256:2aa05fd3e8543b615fc07a628da066b48e6bf41cceeeb8f4b81e189de6eeda77 ...
Waiting for services to be up and running...
*** WARNING: Service runner-7ee750d2-project-2-concurrent-0-a0ec4dc562ad3891-docker-0 probably didn't start properly.
Health check error:
service "runner-7ee750d2-project-2-concurrent-0-a0ec4dc562ad3891-docker-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2022-02-24T16:21:42.803216350Z time="2022-02-24T16:21:42.803077740Z" level=info msg="Starting up"
2022-02-24T16:21:42.804161387Z time="2022-02-24T16:21:42.804107933Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2022-02-24T16:21:42.804233443Z failed to load listeners: can't create unix socket /var/run/docker.sock: device or resource busy
*********
Authenticating with credentials from $DOCKER_AUTH_CONFIG
Pulling docker image docker:19.03.1 ...
Using docker image sha256:0cecfefe921f22fc898f7a0055358380c8870ab6f05b01999367911714fe9d00 for docker:19.03.1 with digest docker@sha256:2dcf87c9893b05ab815880e3d223cd6976c388a6f6697de10e90523255259ca4 ...
The issue here is that your job is not utilizing the docker:dind service. While your job is configured mostly correctly, your Docker GitLab runner defines the following volumes configuration:
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache", "/builds:/builds"]
Because /var/run/docker.sock is bind-mounted and no DOCKER_HOST environment variable is provided, your jobs default to the bind-mounted socket and talk to the daemon on the "metal" host directly, instead of connecting to the docker:dind container, which is what this services: setup requires.
You can run docker info in your job to confirm this.
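For instance, a minimal diagnostic job (a sketch, not part of the original pipeline; the job name is illustrative) that shows which daemon the job is actually talking to:

check-docker-host:
  stage: build
  image: docker:19.03.1
  services:
    - docker:dind
  script:
    # With the host socket bind-mounted and DOCKER_HOST unset, this reports
    # the "metal" host's daemon; with DOCKER_HOST=tcp://docker:2375 it
    # reports the docker:dind service instead.
    - echo "DOCKER_HOST=${DOCKER_HOST:-unset}"
    - docker info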
You should be able to fix this by setting the DOCKER_HOST environment variable in your job (normally, this is set for you when using gitlab.com runners, which is why it is omitted in their documentation).
access-service:
  variables:
    DOCKER_HOST: "tcp://docker:2375"
    DOCKER_TLS_CERTDIR: ""
  # ...
Note: DOCKER_TLS_CERTDIR is also set to an empty string here to disable TLS and ensure port 2375 is used. Enabling TLS is an available option and should be considered more secure.
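If you prefer to keep TLS enabled, a rough sketch of the usual variable set for docker:dind with TLS (port 2376 and the shared /certs directory; adjust to your setup, and note the runner typically also needs "/certs/client" in its volumes list so the job can see the client certificates):

access-service:
  variables:
    DOCKER_HOST: "tcp://docker:2376"
    DOCKER_TLS_CERTDIR: "/certs"  # dind generates its certificates here
    DOCKER_TLS_VERIFY: "1"
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
  # ...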
Related
I am using Testcontainers to start a PostgreSQL DB for my JUnit test. Locally everything works fine. This is my small test project:
https://gitlab.com/janning/tpj-testcontainer
I can run the test inside my IDE and on the CLI with ./gradlew test.
Now I want to run it in my GitLab CI pipeline, but without Docker-in-Docker (dind), so I need to mount the Docker socket, which is documented here: https://www.testcontainers.org/supported_docker_environment/continuous_integration/gitlab_ci/
So I configured my gitlab-runner like this:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "kt103-tpj"
  url = "https://gitlab.com"
  token = "XXXXXXXXXXXXXXXXX"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "registry.gitlab.com/janning/tpj-testcontainer/debian:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    extra_hosts = ["host.docker.internal:host-gateway"]
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
First, my gitlab-ci.yml builds a Docker image, as I need openjdk-17 and some docker commands.
variables:
  TESTCONTAINERS_HOST_OVERRIDE: "host.docker.internal"

stages:
  - dockerimage
  - test

dockerimage:
  image: docker:latest
  stage: dockerimage
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - tag=":latest"
    - docker build --pull -t "$CI_REGISTRY_IMAGE/debian${tag}" .
    - docker push "$CI_REGISTRY_IMAGE/debian${tag}"

test:
  stage: test
  image: $CI_REGISTRY_IMAGE/debian:latest
  script:
    - echo $DOCKER_HOST
    - echo $DOCKER_TLS_VERIFY
    - echo $DOCKER_CERT_PATH
    - ./gradlew test
  artifacts:
    when: always
    reports:
      junit: build/test-results/test/**/TEST-*.xml
The job "test" failed (complete log here)
Task :test HalloTest > test() FAILED
java.lang.IllegalStateException at RyukResourceReaper.java:129 1 test completed, 1 failed
Task :test FAILED
If I dig deeper into the JUnit result, I can see the following stack trace:
java.lang.IllegalStateException: Could not connect to Ryuk at host.docker.internal:49158
at org.testcontainers.utility.RyukResourceReaper.maybeStart(RyukResourceReaper.java:129)
at org.testcontainers.utility.RyukResourceReaper.init(RyukResourceReaper.java:42)
at org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:203)
at ...
I guess it has something to do with the Docker image I build, but I am stuck: I don't know what the problem is or how to debug it.
The Ryuk container is started on my host, which runs gitlab-runner, but it can't connect:
$ docker container logs -f ba62a173aafd
2022/06/24 09:42:51 Pinging Docker...
2022/06/24 09:42:51 Docker daemon is available!
2022/06/24 09:42:51 Starting on port 8080...
2022/06/24 09:42:51 Started!
panic: Timed out waiting for the first connection
goroutine 1 [running]:
main.main()
/go/src/github.com/testcontainers/moby-ryuk/main.go:50 +0x449
The problem was simply a firewall rule blocking traffic between the Docker containers. Since I could reach every Docker container from outside the host, it took me quite a while to realize this.
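For anyone hitting the same thing, a rough way to check on the runner host whether inter-container traffic is being dropped (a sketch assuming iptables and the default docker0 bridge; chain and interface names depend on your firewall setup):

# show what the firewall does with traffic crossing the default bridge
sudo iptables -L DOCKER-USER -n -v

# temporarily allow container-to-container traffic on docker0 to confirm
# the firewall is the culprit (adjust the interface to your environment)
sudo iptables -I DOCKER-USER -i docker0 -o docker0 -j ACCEPT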
I have a local GitLab setup and am trying to build a pipeline that runs a SAST scan using MobSF. When I try to pull the MobSF image in order to run it, I get the following error:
error during connect: Post http://docker:2375/v1.39/images/create?fromImage=opensecurity%2Fmobile-security-framework-mobsf&tag=latest: dial tcp: lookup docker on 8.8.8.8:53: no such host
The error comes up on any script line referencing a Docker command.
The whole output of the pipeline is:
Running with gitlab-runner 14.0.0 (3b6f852e)
on pipeline 5qvFbM4s
Preparing the "docker" executor 00:04
Preparing environment 00:01
Running on runner-5qvfbm4s-project-2-concurrent-0 via TheOneWhoKnocks...
Getting source from Git repository 00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/root/sast-dast-security-testing/.git/
Checking out e71038e1 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:01
Using docker image sha256:25a1e57c774167d28c44d88fa296f3e1122c6d79e99b98653c899b170393bbd6 for docker:18.09.7-dind with digest docker@sha256:a490c83561c1cef49b6fe12aba2c31f908391ec3efe4eb173225809c981e50c3 ...
$ export DOCKER_HOST=tcp://docker:2375
$ docker pull opensecurity/mobile-security-framework-mobsf
Using default tag: latest
error during connect: Post http://docker:2375/v1.39/images/create?fromImage=opensecurity%2Fmobile-security-framework-mobsf&tag=latest: dial tcp: lookup docker on 8.8.8.8:53: no such host
ERROR: Job failed: exit code 1
This is my .gitlab-ci.yaml:
stages:
  - build
  - mobsf

build:
  image: docker:18.09.7-dind
  stage: build
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker pull opensecurity/mobile-security-framework-mobsf
    - docker run -i --env-file ./env.list -p 8000:8000 opensecurity/mobile-security-framework-mobsf:latest

mobsf:
  image: owasp/glue:raw-latest
  stage: mobsf
  script:
    - ./scan.sh
    - docker run -it -v $(pwd):/app owasp/glue:raw-latest ruby bin/glue -t Dynamic -T /app/report.json --mapping-file mobsf --finding-file-path /app/android.json -z 2
And this is my runner's config.toml:
[[runners]]
  name = "pipeline"
  url = "http://192.168.179.129/"
  token = "XXXXX"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
Any help would be appreciated!
Google's public DNS servers won't resolve your local names: the lookup of "docker" is being sent to 8.8.8.8, which knows nothing about it:

error during connect: Post http://docker:2375/v1.39/images/create?fromImage=opensecurity%2Fmobile-security-framework-mobsf&tag=latest: dial tcp: lookup docker on 8.8.8.8:53: no such host

Try this answer; I was facing a similar issue when registering a local gitlab-runner against a local domain name (gitlab.local):

Docker cannot resolve dns on private network
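One way to address this kind of lookup failure is to point the runner's job containers at a resolver that knows your local names; a sketch of the relevant config.toml section (the resolver address below is a placeholder for your local DNS server):

[runners.docker]
  # use your local DNS server instead of falling back to 8.8.8.8
  dns = ["192.168.179.1"]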
I'm using GitLab runners with the Docker executor, and I'm trying to pull some Docker images for tests. To save network bandwidth, I've created a private Docker registry to "cache" the images.
My registry is linked to my GitLab runner (configured in config.toml, see https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnersdocker-section).
This works; my job container can query the registry:
$ wget http://registry:5000/v2/_catalog
--2019-02-15 10:40:54-- http://registry:5000/v2/_catalog
Resolving registry... 172.17.0.3
Connecting to registry|172.17.0.3|:5000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 20 [application/json]
Saving to: '_catalog'
0K 100% 1.17M=0s
2019-02-15 10:40:54 (1.17 MB/s) - '_catalog' saved [20/20]
but the dind service can't:

$ docker pull registry:5000/arminc/clair-db:latest
Error response from daemon: Get http://registry:5000/v2/: dial tcp: lookup registry on 192.168.9.254:53: no such host

My gitlab-ci configuration for this task:
scan:image:
  stage: scans
  image: docker:git
  services:
    - name: docker:dind
      command: ["--insecure-registry=registry:5000"]
  variables:
    DOCKER_DRIVER: overlay2
  allow_failure: true
  script:
    - chmod 777 ./docker/scan.sh
    - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD $DOCKER_REGISTRY
    - ./docker/scan.sh
  artifacts:
    paths: [gl-container-scanning-report.json]
  only:
    - master
You probably need to add a DNS entry for the registry to your DNS server, or an entry to the Docker host's hosts file:

192.168.xx.xxx registry
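If you'd rather not touch the DNS server, a possible alternative (a sketch; whether extra_hosts is picked up by service containers may depend on your runner version) is to declare the mapping in the runner's config.toml:

[runners.docker]
  # make "registry" resolvable inside containers started by the runner
  extra_hosts = ["registry:192.168.xx.xxx"]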
I'm going to set up a local registry by following https://docs.docker.com/registry/deploying/.
docker run -d -p 5000:5000 --restart=always --name reg ubuntu:16.04
When I try to run the following command:
$ docker push localhost:5000/my-ubuntu
I get an error:

Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Any idea?
Connection refused usually means that the service you are trying to connect to isn't actually up and running as it should be. There could be other reasons, as outlined in this question, but essentially, in your case, it simply means that the registry is not up yet.
Start the registry container properly before you do anything else: docker run -d -p 5000:5000 --restart=always --name registry registry:2 creates a local registry from the official Docker image (note that your command started a plain ubuntu:16.04 container, not the registry image).
Make sure that the registry container is up by running docker ps | grep registry, and then proceed further.
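Roughly, the sequence could look like this (a sketch reusing the image name from the question; it assumes you already have a local image tagged my-ubuntu):

# start the registry from the official image and verify it is listening
docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker ps | grep registry
curl http://localhost:5000/v2/_catalog   # should print {"repositories":[]}

# then tag and push
docker tag my-ubuntu localhost:5000/my-ubuntu
docker push localhost:5000/my-ubuntu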
Some more comments about Kubernetes (K8s) / Minikube and docker / image / registry / container:
If you are using Minikube and want to pull an image from 127.0.0.1:5000, you may run into the errors below:
Failed to pull image "127.0.0.1:5000/nginx_operator:latest": rpc error: code = Unknown desc = Error response from daemon: Get http://127.0.0.1:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Full logs:
$ kubectl describe pod/your_pod
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m29s default-scheduler Successfully assigned tj-blue-whale-05-system/tj-blue-whale-05-controller-manager-6c8f564575-kwxdv to minikube
Normal Pulled 2m25s kubelet Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0" already present on machine
Normal Created 2m24s kubelet Created container kube-rbac-proxy
Normal Started 2m23s kubelet Started container kube-rbac-proxy
Normal BackOff 62s (x5 over 2m22s) kubelet Back-off pulling image "127.0.0.1:5000/nginx_operator:latest"
Warning Failed 62s (x5 over 2m22s) kubelet Error: ImagePullBackOff
Normal Pulling 48s (x4 over 2m23s) kubelet Pulling image "127.0.0.1:5000/nginx_operator:latest"
Warning Failed 48s (x4 over 2m23s) kubelet Failed to pull image "127.0.0.1:5000/nginx_operator:latest": rpc error: code = Unknown desc = Error response from daemon: Get http://127.0.0.1:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Warning Failed 48s (x4 over 2m23s) kubelet Error: ErrImagePull
Possible root cause:
The registry must be set up inside Minikube, not on the host side, i.e.:
host: registry (127.0.0.1:5000)
minikube: no registry (so K8s could not find your image)
How to check?
Step1: check your Minikube container
$ docker ps -a
CONTAINER ID IMAGE ... STATUS PORTS NAMES
8c6f49491dd6 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 ... Up 15 hours 127.0.0.1:49156->22/tcp, 127.0.0.1:49155->2376/tcp, 127.0.0.1:49154->5000/tcp, 127.0.0.1:49153->8443/tcp minikube
# your Minikube is under running
# host:49154 <--> minikube:5000
# where:
# - port 49154 was allocated randomly by the docker service
# - port 22: for ssh
# - port 2376: for docker service
# - port 5000: for registry (image repository)
# - port 8443: for Kubernetes
Step2: login to your Minikube
$ minikube ssh
docker@minikube:~$ curl 127.0.0.1:5000
curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused
# setup
# =====
# You did not set up the registry.
# Let's try to set it up.
docker@minikube:~$ docker run --restart=always -d -p 5000:5000 --name registry registry:2
# test
# ====
# test the registry using the following commands
docker@minikube:~$ curl 127.0.0.1:5000
docker@minikube:~$ curl 127.0.0.1:5000/v2
Moved Permanently.
docker@minikube:~$ curl 127.0.0.1:5000/v2/_catalog
{"repositories":[]}
# it's successful
docker@minikube:~$ exit
logout
Step3: build your image, and push it into the registry of your Minikube
# Let's take nginx as an example. (You can build your own image)
$ docker pull nginx
# modify the repository (the source and the name)
$ docker tag nginx 127.0.0.1:49154/nginx_operator
# check the new repository (source and the name)
$ docker images | grep nginx
REPOSITORY TAG IMAGE ID CREATED SIZE
127.0.0.1:49154/nginx_operator latest ae2feff98a0c 3 weeks ago 133MB
# push the image into the registry of your Minikube
$ docker push 127.0.0.1:49154/nginx_operator
Step4: login to your Minikube again
$ minikube ssh
# check the registry
$ curl 127.0.0.1:5000/v2/_catalog
{"repositories":["nginx_operator"]}
# it's successful
# get the image info
$ curl 127.0.0.1:5000/v2/nginx_operator/manifests/latest
docker@minikube:~$ exit
logout
Customize exposed ports of Minikube
If you would like to use port 5000 on the host side instead of 49154 (which was allocated randomly by the Docker service), i.e.
host:5000 <--> minikube:5000
you need to recreate the Minikube instance with the --ports flag:
# delete the old minikube instance
$ minikube delete
# create a new one (with the docker driver)
$ minikube start --ports=5000:5000 --driver=docker
# or
$ minikube start --ports=127.0.0.1:5000:5000 --driver=docker
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5d1e5b61a3bf gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 "/usr/local/bin/entr…" About a minute ago Up About a minute 0.0.0.0:5000->5000/tcp, 127.0.0.1:49162->22/tcp, 127.0.0.1:49161->2376/tcp, 127.0.0.1:49160->5000/tcp, 127.0.0.1:49159->8443/tcp minikube
$ docker port minikube
22/tcp -> 127.0.0.1:49162
2376/tcp -> 127.0.0.1:49161
5000/tcp -> 127.0.0.1:49160
5000/tcp -> 0.0.0.0:5000
8443/tcp -> 127.0.0.1:49159
you can see: 0.0.0.0:5000->5000/tcp
Re-test your registry in the Minikube
# in the host side
$ docker pull nginx
$ docker tag nginx 127.0.0.1:5000/nginx_operator
$ docker ps -a
$ docker push 127.0.0.1:5000/nginx_operator
$ minikube ssh
docker@minikube:~$ curl 127.0.0.1:5000/v2/_catalog
{"repositories":["nginx_operator"]}
# Great!
I use an Ansible script to load and start the https://hub.docker.com/r/rastasheep/ubuntu-sshd/ container.
It starts fine, of course:
bash-4.4$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8bedbd3b7d88 rastasheep/ubuntu-sshd "/usr/sbin/sshd -D" 37 minutes ago Up 36 minutes 0.0.0.0:49154->22/tcp test
bash-4.4$
After Ansible failed to reach it via SSH, I tested manually from a shell; this also works:
bash-4.4$ ssh root@172.17.0.2
The authenticity of host '172.17.0.2 (172.17.0.2)' can't be established.
ECDSA key fingerprint is SHA256:YtTfuoRRR5qStSVA5UuznGamA/dvf+djbIT6Y48IYD0.
ECDSA key fingerprint is MD5:43:3f:41:e9:89:45:06:6f:f6:42:c4:6a:70:37:f8:1d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.
root@172.17.0.2's password:
root@8bedbd3b7d88:~# logout
Connection to 172.17.0.2 closed.
bash-4.4$
So the step that fails is reaching the container from the Ansible script and setting up access with ssh-copy-id.
The Ansible error message is:
Fatal: [172.17.0.2]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,password).\r\n", "unreachable": true}
---
- hosts: 127.0.0.1
  tasks:
    - name: start docker service
      service:
        name: docker
        state: started
    - name: load and start the container we wanna use
      docker_container:
        name: test
        image: rastasheep/ubuntu-sshd
        state: started
        ports:
          - "49154:22"
    - name: Wait maximum of 300 seconds for ports to be available
      wait_for:
        host: 0.0.0.0
        port: 49154
        state: started

- hosts: 172.17.0.2
  vars:
    passwordadmin: $6$pbE6yznA$AeFIdI.....K0
    passwordroot: $6$TMrxQUxT$I8.JIzR.....TV1
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
  tasks:
    - name: Build test container root user rsa ssh-key
      shell: docker exec test ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
So I cannot even run the step needed to set up SSH. How should this be done?
1st step (Ansible task): load the Docker container.
2nd step (Ansible task, only on 172.17.0.2): connect to it and set it up.
There will be a 3rd step to run the application on it after that.
The problem occurs only when starting the 2nd step.
OK, after many tries on a second container, my conclusion is that my procedure was bad. What I did to solve it (a sketch of the resulting layout follows below):
- built a directory tree separating ./, ./inventory and ./includes
- built one YAML file per host (local, docker, labo)
- built one main YAML file in ./
- built a new host file in ./inventory
- connected to the container with sshpass using the default password, then changed it
- added the host key to the authorized keys of a dedicated login user
- installed Python in the container (needed for Ansible to talk to the host; otherwise it produces seemingly random module errors or refused connections depending on the current action)
- set up the SSH login user in sudoers
Then I can run the docker.yaml actions, and only after that the labo.yaml actions.
Thanks for the help; now I'm able to build the missing tools.
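For context, a hypothetical sketch of the directory layout those steps imply (file names are illustrative, not the exact ones used):

.
├── main.yaml          # top-level playbook including the per-host playbooks
├── inventory/
│   └── hosts          # new host file: local, docker (172.17.0.2), labo
└── includes/
    ├── local.yaml     # start docker and the container on 127.0.0.1
    ├── docker.yaml    # bootstrap the container: password, keys, python, sudoers
    └── labo.yaml      # run the application afterwards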