How to retrieve SCDF apps metadata through Docker labels with a local containerd registry?

I'm working with SCDF 2.9.1 on microk8s in dev mode, and I'm not able to see my apps' metadata in the SCDF dashboard.
A docker inspect on the image shows the right labels.
The Docker image is pushed to the local containerd registry at localhost:32000.
I tried to add the registry to the SCDF server ConfigMap, but it is still not working. I have errors in the SCDF server logs:
ApplicationConfigurationMetadataResolver : Failed to retrieve
properties for resource Docker Resource
[docker:localhost:32000/mycomp/myapp:latest] because of
ConnectException: Connection refused (Connection refused)
What am I doing wrong? Thanks for your help.

In the end, I think the problem is simply that I didn't use a DNS name (or an IP) for the registry.
localhost inside the SCDF server pod refers to the pod itself, not the host, so obviously nothing is listening on port 32000 there.
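For reference, a minimal sketch of what a registry entry in the SCDF server's application.yaml (inside the server ConfigMap) can look like once the registry is addressed by a reachable node IP instead of localhost. The IP 192.168.1.10 and the name microk8s-registry are placeholders, not values from this thread:

spring:
  cloud:
    dataflow:
      container:
        registry-configurations:
          microk8s-registry:
            registry-host: 192.168.1.10:32000
            authorization-type: anonymous
            disable-ssl-verification: true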
Works fine with a Harbor registry: How to configure Harbor for SCDF?

Related

Docker login issue with Harbor exposed as NodePort service

I am trying to deploy Harbor on a k8s cluster without much effort or complexity, so I followed the Bitnami Harbor Helm chart and deployed a healthy, running Harbor instance exposed as a NodePort service. I know the standard is a LoadBalancer type of service, but as I don't have the setup required to provision a load balancer automatically (this is not a public cloud environment where the LB gets deployed automatically), I decided to stay away from that complexity.
Now, I can access the Harbor GUI just fine using the https://<node-ip>:<https-port> URL. However, despite several attempts I cannot connect to this Harbor instance from my local Docker Desktop. I have imported the CA into my machine's keychain, but as the certificate has a dummy domain name in it rather than the IP address, Docker doesn't trust that Harbor endpoint. So I created a local DNS record in my /etc/hosts file to link the domain name in Harbor's certificate to the node IP address of the cluster. With that arrangement, Docker seems happy with the certificate presented, but it doesn't acknowledge the port required to access the endpoint, so the subsequent internal calls for authentication against Harbor fail with the error given below.
I also tried to follow the advice in the Harbor documentation for connecting to Harbor over HTTP, but that configuration killed the Docker daemon and didn't even let it start.
~/.docker » docker login -u admin https://core.harbor.domain:30908
Password:
Error response from daemon: Get "https://core.harbor.domain:30908/v2/": Get "https://core.harbor.domain/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry": EOF
As you can see in the above error, the second Get URL does not have a port in it, which will not work.
So, can anyone help me configure Docker Desktop to connect to a Harbor service exposed as a NodePort? I use Docker Desktop on macOS.
I got a similar error when I used 'docker login core.harbor.domain:30003' from another host. The error looks like 'Error response from daemon: Get https://core.harbor.domain:30003/v2/: Get https://core.harbor.domain/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry: dial tcp 1...*:443: connect: connection refused'.
However, docker can log in to Harbor from the host on which the Helm chart installed it.
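One thing worth checking here (an assumption on my part, not something confirmed in this thread): Harbor builds its token-service URLs from its externalURL setting, and if that value doesn't include the NodePort, redirects like the one above lose the port. With the Bitnami chart that would mean setting, in the chart values:

externalURL: https://core.harbor.domain:30908

and then running a helm upgrade, so that Harbor advertises the port in the token endpoint.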

Connection refused while trying to push to Harbor registry

I am trying to use Harbor as a Docker registry on Ubuntu 20. I followed the official documentation, including the instructions there for connecting to Harbor via HTTP.
I am still getting this kind of error:
$ sudo docker push harbor_example:5000/ubuntu
Using default tag: latest
The push refers to repository [harbor_example:5000/ubuntu]
Get http://harbor_example:5000/v2/: dial tcp :5000: connect: connection refused
Did I miss something in the setup of Harbor and/or Docker?
Best regards
It turns out it works when you explicitly add port 80 in /etc/docker/daemon.json:
{
"insecure-registries" : ["harbor_example:80", "0.0.0.0"]
}
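For completeness (standard Docker practice, not part of the original answer): the daemon has to be restarted after editing daemon.json for the change to take effect.

sudo systemctl restart docker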

Concourse Can't Connect to Docker Repository

I'm new to Concourse and trying to set it up in my environment. I'm running Ubuntu 18.04 on VirtualBox 6.1.4 r136177 on a Windows machine. I managed to get the node running and the concourse worker set up, and I was able to access my Concourse dashboard successfully. The problem occurred when I was trying to run a simple hello-world pipeline as outlined on this page: https://concourse-ci.org/hello-world-example.html
The error says:
ERRO[0004] check failed: get remote image: Get https://index.docker.io/v2/: dial tcp: lookup index.docker.io on [::1]:53: read udp [::1]:55989->[::1]:53: read: connection refused
Googling for similar errors suggests that VirtualBox might not be able to connect to the Docker repository, so I installed Docker on my system and ran the following command:
sudo docker run hello-world
But this time Docker successfully pulled the image, so I don't think it is an issue with my VirtualBox. Has anyone experienced the same issue and found a solution?
UPDATES
The following question inspired me to build my own registry:
How to use a local docker image as resource in concourse-docker
I have configured my local Docker registry and verified that it works by pulling my image from it. So I configured a simple Concourse pipeline to use my registry by modifying the hello-world example:
---
jobs:
- name: job
  public: true
  plan:
  - task: simple-task
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: 127.0.0.1:5000/busybox
          tag: latest
          insecure_registries: [ "127.0.0.1:5000" ]
      run:
        path: echo
        args: ["Hello, world!"]
But then I ran into the following error:
resource script '/opt/resource/check []' failed: exit status 1
stderr:
failed to ping registry: 2 error(s) occurred:
* ping https: Get https://127.0.0.1:5000/v2: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
* ping http: Get http://127.0.0.1:5000/v2: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
That 127.0.0.1 is likely referring to the IP of the check container, not the machine where Concourse is running as a worker (unless you have houdini as the container strategy). Try getting the actual IP of the machine running docker and try that.
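Applied to the pipeline above, that would mean pointing the image_resource at an address the worker can actually reach; a sketch, with 192.168.1.50 as a placeholder for the Docker host's real IP:

      image_resource:
        type: docker-image
        source:
          repository: 192.168.1.50:5000/busybox
          tag: latest
          insecure_registries: [ "192.168.1.50:5000" ]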
I faced the same problem. In my case, the Concourse worker was installed on a QEMU VM inside Proxmox.
When starting a job with the fly -t tutorials trigger-job --job hello-world/hello-world-job --watch command (given in the tutorial), the worker answered ERRO[0030] checking origin busybox failed: initialize transport: Get "https://index.docker.io/v2/": dial tcp xx.xx.xx.xx:443: i/o timeout.
This means the worker can't reach any DNS server.
There are two ways to solve this problem.
First option: run everything through docker-compose. Its docker-compose.yml has a setting for the worker, CONCOURSE_GARDEN_DNS_PROXY_ENABLE: "true", and with it everything works fine. However, I tried specifying the same setting when running the worker directly inside the VM (without Docker), and that did not fix the problem.
Second option (without Docker):
Use these settings for your worker:
CONCOURSE_RUNTIME=containerd
CONCOURSE_CONTAINERD_EXTERNAL_IP=192.168.1.106
CONCOURSE_CONTAINERD_DNS_SERVER=192.168.1.1
CONCOURSE_CONTAINERD_ALLOW_HOST_ACCESS=true
CONCOURSE_CONTAINERD_DNS_PROXY_ENABLE=true
After setting these parameters my worker could see DNS server and can get access docker registry.
Replace 192.168.1.106 with your machine's address on your local network, and 192.168.1.1 with your DNS server.
These parameters are documented here; you can also get their descriptions with the concourse worker --help command.
Containerd Container Networking:
--containerd-external-ip= IP address to use to reach container's mapped ports. Autodetected if not specified. [$CONCOURSE_CONTAINERD_EXTERNAL_IP]
--containerd-dns-server= DNS server IP address to use instead of automatically determined servers. Can be specified multiple times. [$CONCOURSE_CONTAINERD_DNS_SERVER]
--containerd-restricted-network= Network ranges to which traffic from containers will be restricted. Can be specified multiple times. [$CONCOURSE_CONTAINERD_RESTRICTED_NETWORK]
--containerd-network-pool= Network range to use for dynamically allocated container subnets. (default: 10.80.0.0/16) [$CONCOURSE_CONTAINERD_NETWORK_POOL]
--containerd-mtu= MTU size for container network interfaces. Defaults to the MTU of the interface used for outbound access by the host. [$CONCOURSE_CONTAINERD_MTU]
--containerd-allow-host-access Allow containers to reach the host's network. This is turned off by default. [$CONCOURSE_CONTAINERD_ALLOW_HOST_ACCESS]
I had the same issue. I cloned this repo, https://github.com/concourse/concourse-docker, followed the directions in its README to generate the keys, and then used the docker-compose.yml file from the clone to spin up the containers.
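If it helps, that sequence boils down to something like the following (the keys/generate script name is taken from that repo's README at the time; check the README in case it has changed):

git clone https://github.com/concourse/concourse-docker.git
cd concourse-docker
./keys/generate
docker-compose up -d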

GCP Kubernetes cannot connect to RabbitMQ server

I have a Docker image on Google Cloud Platform that I would like to run. Part of this script attempts to connect to a RabbitMQ server located in the same subnet. This does not work.
I've taken the following steps to try and solve it:
I have tried connecting to both the internal and external IP address of the RabbitMQ server.
I have enabled VPC-native (alias IP).
I have checked that I can connect to the internet from my Docker image.
I have checked that my Docker image can connect to RabbitMQ when run locally.
I have checked that the server can reach the RabbitMQ server's internal IP address (by pinging it).
I think I probably have an incorrect setting in my kubernetes engine, but I've looked for quite some time and I cannot find it.
Does anybody know how to connect to a RabbitMQ server from a Kubernetes pod running in the Google Cloud Platform?
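One generic way to narrow this down (a diagnostic sketch, assuming RabbitMQ's default AMQP port 5672 and a placeholder address 10.128.0.5) is to test the TCP connection from a throwaway pod in the same cluster:

kubectl run rmq-test --rm -it --restart=Never --image=busybox -- telnet 10.128.0.5 5672

If the connection times out, the issue is likely in the network path (for GKE, check the VPC firewall rules between the cluster's node subnet and the RabbitMQ server) rather than in the image itself.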

Got AuthorizedOnly when pulling images behind corporate proxy

I've been trying to get Docker working behind a corporate proxy, following the document here:
https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
Basically adding:
[Service]
Environment="HTTP_PROXY=http://[username]:[password]@127.0.0.1:3128/"
under
/etc/systemd/system/docker.service.d/http-proxy.conf
Restart docker and all.
But when running "docker pull hello-world" or "sudo docker pull hello-world", I got this error:
centos7 ~]$ docker pull hello-world
Using default tag: latest
Trying to pull repository docker.io/library/hello-world ...
Pulling repository docker.io/library/hello-world
Error while pulling image: Get https://index.docker.io/v1/repositories/library/hello-world/images: AuthorizedOnly
I looked around the web, but couldn't find any "AuthorizedOnly" error reported before.
docker -v
Docker version 1.12.6, build 3e8e77d/1.12.6
Any hints/help appreciated.
Found the issue: it's not a problem with the Docker proxy configuration. It was the proxy itself that blocks hub.docker.com.
To resolve this particular problem, I used a different proxy with fewer restrictions.
Thanks all!
Double-check your enterprise proxy URL.
Usually, an enterprise proxy does not reside on localhost (127.0.0.1), but on a specific IP address.
Usually, HTTPS_PROXY needs to be set as well (to the same HTTP URL)
Usually, NO_PROXY needs to be set, at least to localhost, to avoid contacting the proxy for every remote query.
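Putting those three points together, a fuller http-proxy.conf might look like this (a sketch following the Docker documentation; proxy.example.com:3128 is a placeholder for your real enterprise proxy):

[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"

After editing, reload systemd and restart Docker:

sudo systemctl daemon-reload
sudo systemctl restart docker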
