Docker compose with specified host pulls from wrong registry

I am playing with Gitlab CI CD and docker. So far I have the following setup:
A server with gitlab-runner (docker executor)
A staging server with docker installed
A self-hosted GitLab instance
After building and pushing images to the container registry, I am trying to deploy the app on a staging server with the following steps:
- eval $(ssh-agent -s)
- echo "$DEPLOY_SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY_IMAGE
- docker-compose -H "ssh://$DEPLOY_USER@$DEPLOY_SERVER" down --remove-orphans || true
- docker-compose -H "ssh://$DEPLOY_USER@$DEPLOY_SERVER" pull
- docker-compose -H "ssh://$DEPLOY_USER@$DEPLOY_SERVER" up -d
It fails on the 4th step where, as far as I understand, it points to the wrong container registry:
error during connect: Get "http://docker.example.com/v1.24/containers/json?all=1&filters=%7B%22label%22%3A%7B%22com.docker.compose.project%3Drepo_name%22%3Atrue%7D%7D&limit=0": command [ssh -l deployer -- staging-server-ip docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=Host key verification failed.
Do I have to run docker login on a staging server as well, or what am I missing?

It turns out that there were several issues:
Make sure to use the correct SSH key.
I had to run Docker on the deployment server in rootless mode (not sure if that is strictly required).
Also, on the client machine from which we connect to the deployment server, I had to disable strict host key checking in /etc/ssh/ssh_config.
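For reference, a less invasive alternative to disabling strict host key checking globally is to pre-seed the staging server's host key in the CI job, before any docker-compose -H call (a minimal sketch, reusing the $DEPLOY_SERVER variable from the pipeline above):
- mkdir -p ~/.ssh
- ssh-keyscan "$DEPLOY_SERVER" >> ~/.ssh/known_hosts
Bear in mind that ssh-keyscan trusts whatever key the server presents at scan time, so pinning a known-good host key in a CI variable is safer still.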

Related

Local CI pipeline with GitLab using Docker

I installed GitLab via Docker locally, successfully, on port 9800 in its own network "cinet". Now I would like to set up a CI pipeline. For that I first have to install a gitlab-runner and then register it. The registration fails, however.
Starting gitlab like so
$ docker run --name=gitlab --volume="/srv/gitlab/config:/etc/gitlab" --volume="/srv/gitlab/logs:/var/log/gitlab" --volume="/srv/gitlab/data:/var/opt/gitlab" -p 9800:80 -p 22:22 -p 443:443 --network=cinet --restart=no --detach=true gitlab/gitlab-ce:latest
The GUI is reachable at http://localhost:9800. I created a Java Maven project with a .gitlab-ci.yml and pushed it into my local GitLab instance successfully. The CI pipeline is stuck, as expected, since there is no runner installed/registered yet.
Installing the runner
$ docker run -d --name gitlab-runner --restart always -v /srv/gitlab-runner/config:/etc/gitlab-runner -v /var/run/docker.sock:/var/run/docker.sock --network=cinet gitlab/gitlab-runner
Registering the runner. I first tried a shared runner and obtained the token from the CI GUI.
$ docker run --rm -t -i -v /srv/gitlab-runner/config:/etc/gitlab-runner --network=cinet gitlab/gitlab-runner register
Then I am prompted for host, token, description, and tags. Tags I left empty. For the host I tried:
http://localhost:9800/ - That results in the following error:
ERROR: Registering runner... failed runner=Lt7NVbJ_ status=couldn't execute POST against http://localhost:9800/api/v4/runners: Post http://localhost:9800/api/v4/runners: dial tcp 127.0.0.1:9800: connect: connection refused
PANIC: Failed to register this runner. Perhaps you are having network problems
http://172.19.0.2:9800/ - That results in the following error:
ERROR: Registering runner... failed runner=Lt7NVbJ_ status=couldn't execute POST against http://172.19.0.2:9800/api/v4/runners: Post http://172.19.0.2:9800/api/v4/runners: dial tcp 172.19.0.2:9800: connect: connection refused
PANIC: Failed to register this runner. Perhaps you are having network problems
Why is that failing? What do I need to do to make that run?
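A hedged guess, since the thread leaves this unanswered: inside the runner container, localhost refers to the runner container itself, and the -p 9800:80 mapping only exists on the host, so on the cinet network GitLab listens on port 80, not 9800. Registering via the GitLab container's name should therefore work:
$ docker run --rm -t -i -v /srv/gitlab-runner/config:/etc/gitlab-runner --network=cinet gitlab/gitlab-runner register
...entering http://gitlab/ (the container name, internal port 80) when prompted for the coordinator URL. The same reasoning explains why http://172.19.0.2:9800/ was refused; http://172.19.0.2/ would target the right port.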

Docker login to Gitlab Registry fails with "http: server gave HTTP response to HTTPS client"

I have 2 EC2s, one with Gitlab-ee installed, another with Docker installed and running Gitlab-Runner and a Registry container.
Gitlab-Runner is working and picks up the commit to Gitlab, shipping it to Docker for its build phase. However, during the build phase, when the Docker container attempts to log in to the Registry container, it errors with "http: server gave HTTP response to HTTPS client".
Docker Login code:
docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
Troubleshooting done:
If I SSH into the server, I can log in with sudo docker login localhost:5000
The same error occurs whether the registry is referenced via $CI_REGISTRY, localhost, or the DNS name
I ensured CI_REGISTRY is set in gitlab.rb
I saw some mentions online about needing the --insecure-registry flag on the docker.service Exec line, and I did that as well and got the same error.
This works if the docker installation is on the same server, but I'm trying to decouple the two applications from each other so they can be managed separately.
Software versions:
Docker Version: 19.03.6
Gitlab-ee Version: 12.8.1
Gitlab-Runner Version: 12.8.0
If anyone could help me on this, it would be greatly appreciated! I've been banging my head against it for 2 days.
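Not from the original thread, but the standard way to allow plain-HTTP registries is the insecure-registries key in /etc/docker/daemon.json on the machine whose Docker daemon performs the login (here, the EC2 running the runner), rather than the docker.service Exec line. A minimal sketch, with registry.example.com:5000 as a stand-in for the registry's real address:
{
  "insecure-registries": ["registry.example.com:5000"]
}
Restart the daemon afterwards (sudo systemctl restart docker). The entry has to match exactly the address passed to docker login, which would explain why localhost:5000 works on the registry box itself while $CI_REGISTRY does not.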

Starting Hasura GraphQL engine Docker image

I'm trying to get started with Hasura GraphQL engine running locally on OSX in Docker and connecting to an existing database but I am having trouble finding the container or the Hasura console.
Here's what I have:
docker -v
Docker version 19.03.5, build 633a0ea
docker-compose -v
docker-compose version 1.25.4, build 8d51620a
docker images
hasura/graphql-engine v1.0.0
hasura version
INFO hasura cli version=v1.0.0
Here's my start script (docker-run.sh) which sets up the port and environment variables for Hasura:
#!/bin/bash
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://someuser:somepassword@host.docker.internal:5432/somedb \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
hasura/graphql-engine:latest
Running ./docker-run.sh returns a 64-character hex string, which I assume to be the container ID, but I cannot see the container when I run docker ps, and nothing loads at http://localhost:8080/console.
What am I missing?
UPDATE 1
I can see the container when I run docker ps -a - it has a status of exited(1) (which means application error).
I can see in the logs:
{"path":"$","error":"pgcrypto extension is required, but the current user doesn’t have permission to create it. Please grant superuser permission, or setup the initial schema via https://docs.hasura.io/1.0/graphql/manual/deployment/postgres-permissions.html","code":"postgres-error"}
I have followed the instructions for setting up the initial schema but the result of running ./docker-run.sh has not changed.
UPDATE 2
I did not realise that the pgcrypto extension had to be installed on the specific database. Now that I have done so, the logs look healthy - although I am still unable to access the console when I run hasura console.
Here's my config.yaml:
endpoint: http:localhost:8080
...and the resulting error:
FATA[0001] version check: failed to get version from server: failed making version api call: Get http:localhost:8080/v1/version: http: no Host in request URL
Again, what am I missing?
UPDATE 3
Changed config.yaml...
endpoint: http://localhost:8080
Whoops (blush).
OK, it's working :)
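For anyone who stops at the pgcrypto error from UPDATE 1: the extension must be created in the specific database Hasura connects to, not merely be available somewhere in the cluster. A minimal sketch, using the somedb database from the script above and assuming a superuser connection:
psql -U postgres -h localhost -d somedb -c 'CREATE EXTENSION IF NOT EXISTS pgcrypto;'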

Docker - Connecting to localhost - connection refused

This is my Dockerfile:
FROM sonatype/nexus3:latest
COPY ./scripts/ /bin/scripts/
RUN curl -u admin:admin123 -X GET 'http://localhost:8081/service/rest/v1/repositories'
After running the build:
docker build -t test .
the output is:
(7) Failed connect to localhost:8081; Connection refused
Why? Is sending requests to localhost (the container being built) only possible after running it? Or should I add something to the Dockerfile?
Thanks for help :)
Why do you want to connect to the service while the image is being built?
At that point the service is not running yet; you'll need to start the container first.
Remove the curl and start the container first
docker run test
A Dockerfile is a way to create images; you then create containers from those images. Ports only serve traffic once a container is up and running. Hence, you can't curl the service while building the image.
Change Dockerfile to -
FROM sonatype/nexus3:latest
COPY ./scripts/ /bin/scripts/
Build image -
docker build -t test .
Create container from the image -
docker run -d --name nexus -p 8081:8081 test
Now see if your container is running & do a curl -
docker ps
curl -u admin:admin123 -X GET 'http://localhost:8081/service/rest/v1/repositories'
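One caveat worth adding to this answer: Nexus can take a minute or more to initialize after docker run returns, so the first curl may still be refused even though the container is up. A small retry loop (a sketch, same credentials as above) covers that window:
until curl -sf -u admin:admin123 'http://localhost:8081/service/rest/v1/repositories'; do
  echo 'waiting for Nexus...'
  sleep 5
done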

Unable to push image to a docker registry configured as proxy cache

I followed this guide to setup a Docker v2 Registry acting as a local proxy cache for Docker Hub images. My Docker daemon is configured with both --insecure-registry and --registry-mirror options pointing to the same registry instance.
When pulling images it works correctly, caching them to the local store.
The problem is that when I try to push an image to such local private registry, I get a weird UNSUPPORTED error. The registry log says:
time="2015-11-09T13:20:22Z" level=error msg="response completed with error" err.code=UNSUPPORTED err.message="The operation is unsupported." go.version=go1.4.3 http.request.host="my.registry.io:5000" http.request.id=b1faccb3-f592-4790-bbba-00ebb3a3bfc1 http.request.method=POST http.request.remoteaddr="192.168.0.4:57608" http.request.uri="/v2/mygroup/myimage/blobs/uploads/" http.request.useragent="docker/1.9.0 go/go1.4.2 git-commit/76d6bc9 kernel/3.16.0-4-amd64 os/linux arch/amd64" http.response.contenttype="application/json; charset=utf-8" http.response.duration=2.035918ms http.response.status=405 http.response.written=78 instance.id=79970ec3-c38e-4ebf-9e83-c3890668b122 vars.name="mygroup/myimage" version=v2.2.0
If I disable the proxy setting on the registry, the push works correctly. Am I missing something in the configuration, or is it just that a private registry cannot act as a proxy cache at the same time?
Just ran into this myself. Turns out pushing to a private registry configured as a proxy is not supported. See
https://docs.docker.com/registry/configuration/#proxy
"Pushing to a registry configured as a pull through cache is currently unsupported".
That is too bad. Now I will have to set up the local proxy cache as a separate registry.
@Konrad already linked to the explanation.
My solution requires the registry to persist its images on a docker volume, so that they stay available even when I kill & trash the container.
# run proxy registry persisting images on local host
docker stop registry
docker rm registry
docker run -d -p 5000:5000 \
-v ~/.docker/registry:/var/lib/registry \
--name registry \
registry:2
docker push localhost:5000/your-image:your-tag
# --> see successful push happening...
docker stop registry
docker rm registry
# re-run the registry as proxy, re-mounting the volume with the images
docker run -d -p 5000:5000 \
-e REGISTRY_PROXY_REMOTEURL=https://registry.example.net \
-e REGISTRY_PROXY_USERNAME="${REGISTRY_USER}" \
-e REGISTRY_PROXY_PASSWORD="${REGISTRY_PASSWORD}" \
-v ~/.docker/registry:/var/lib/registry \
--name registry \
registry:2
This fits my usual needs; I dunno if you can afford to throw away the container as I did (but theoretically you should; containers are supposed to be ephemeral).
Otherwise you'll have to docker save your-image:your-tag > your-image.tar, transfer it to the machine running your registry and then docker load -i your-image.tar. It's not ideal but should work.
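Spelled out, that fallback is only three commands (user and registry-host are placeholders for the machine running your registry):
docker save your-image:your-tag > your-image.tar
scp your-image.tar user@registry-host:
ssh user@registry-host docker load -i your-image.tar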
