I have a Kubernetes cluster with 7 worker nodes running behind a proxy. Deploying applications on the cluster and scaling them consumes too much internet bandwidth, so I decided to deploy a Docker Registry acting as a pull-through cache. But deployments are not pulling images from the registry. What is the issue here?
Docker daemon.json
{
  ...
  "registry-mirrors": [
    "https://myregistry",
    "https://myregistry:443"
  ]
}
Docker version
Client: Docker Engine - Community
Version: 20.10.5
API version: 1.40
Go version: go1.13.15
Git commit: 55c4c88
Built: Tue Mar 2 20:33:55 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 19.03.14
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 5eb3275d40
Built: Tue Dec 1 19:19:17 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.3
GitCommit: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc:
Version: 1.0.0-rc92
GitCommit: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
docker-init:
Version: 0.18.0
GitCommit: fec3683
Kubernetes version
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:14:05Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Docker registry configuration
version: 0.1
log:
fields:
service: registry
storage:
cache:
blobdescriptor: inmemory
filesystem:
rootdirectory: /data
http:
addr: :5000
headers:
X-Content-Type-Options: [nosniff]
health:
storagedriver:
enabled: true
interval: 10s
threshold: 3
proxy:
remoteurl: https://index.docker.io/v1/
To access the Docker Registry, your Kubernetes cluster needs to be able to make an outbound connection to the registry. If your proxy server is blocking this connection, then the deployments will not be able to retrieve the images from the registry. You may need to configure your proxy server to allow outbound connections to the registry.
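A quick way to check this is to query the registry's v2 API from one of the worker nodes, using the mirror hostname from the daemon.json above; a 200 or 401 response means the registry is reachable:
curl -v https://myregistry/v2/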
The API server is not able to reach the services running inside the Docker container. This could be due to a misconfiguration or a firewall blocking the connection. If the registry is deployed on another host with docker-compose, the necessary ports need to be reachable from the API server: port 2375, the default port of the Docker Remote API, and port 5000, the default port of the Docker Registry. Once those ports are open, you should be able to reach the registry on the Docker host from the API server.
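As a rough sketch of such a setup (the image tag, config path, and data directory are assumptions, chosen to match the registry configuration above), a docker-compose service could publish port 5000 like this:
version: "3"
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"    # default Docker Registry port
    volumes:
      - ./config.yml:/etc/docker/registry/config.yml    # the registry config shown above
      - /data:/data                                     # matches rootdirectory: /data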
Set the imagePullPolicy to Never, otherwise Kubernetes will try to download the image.
Refer to this document for more information.
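For illustration, a minimal pod manifest with that policy could look like this (the pod name and image are placeholders, not taken from the question):
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myregistry/app:latest   # placeholder image
    imagePullPolicy: Never         # kubelet will only use an image already present on the node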
According to the official Docker docs, Docker runs an embedded DNS server, which makes it possible for a container to look up other containers directly by container ID or name:
containers that use a custom network use Docker’s embedded DNS server, which forwards external DNS lookups to the DNS servers configured on the host.
But when I try to use nslookup directly in the container, the lookup fails, while wget still succeeds! What makes the difference?
Reproduce steps:
docker network create my-net
docker run -d --name web --network my-net httpd
docker run -it --rm --network my-net busybox
Then, inside busybox:
$ wget -q -O - web
<html>...some content...</html>
It works great! But nslookup fails:
$ nslookup web
Server: 127.0.0.11
Address: 127.0.0.11:53
Non-authoritative answer:
*** Can't find web: No answer
This is my Docker version:
$ docker version
Client: Docker Engine - Community
Version: 20.10.21
API version: 1.41
Go version: go1.19.2
Git commit: baeda1f82a
Built: Tue Oct 25 17:53:02 2022
OS/Arch: darwin/amd64
Context: colima
Experimental: true
Server:
Engine:
Version: 20.10.18
API version: 1.41 (minimum version 1.12)
Go version: go1.18.6
Git commit: e42327a6d3c55ceda3bd5475be7aae6036d02db3
Built: Sun Sep 11 07:10:00 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.6.8
GitCommit: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
runc:
Version: 1.1.4
GitCommit: 5fd4c4d144137e991c4acebb2146ab1483a97925
docker-init:
Version: 0.19.0
GitCommit:
While reproducing your issue I noticed that nslookup failed for any query (e.g., nslookup google.com also failed). Afterwards, I tried spinning up an ubuntu container on the same network, and there both wget and nslookup worked fine. I do not know the exact reason, but my guess is that the two tools use different resolver code paths: busybox's nslookup is its own applet that builds DNS queries itself, while wget resolves names through the C library, and the two behave differently against Docker's embedded DNS server.
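For reference, the ubuntu check looked roughly like this (package names are for Ubuntu; dnsutils provides nslookup, and wget is not in the base image):
docker run -it --rm --network my-net ubuntu
# inside the container:
apt-get update && apt-get install -y dnsutils wget
nslookup web
wget -q -O - http://web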
I have just set up a local VM running Nexus. I have configured a Docker repository on port 5000. I have a separate VM running Docker. I have configured the repository in /etc/docker/daemon.json like so:
{
"insecure-registries": ["192.168.0.5:5000", "nexus:5000"]
}
I then restarted the Docker service and ran the command:
docker login 192.168.0.5:5000
I am prompted for a username and password, and when I enter them it returns:
Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized: incorrect username or password
I have checked the Docker documentation and other online resources on how to log in to a local Docker repository, but I have exactly the same configuration and it always throws this error.
If I try to push my image, it attempts to contact the local repository, but it complains there are no credentials (as well it should):
[root@docker repo]$ docker tag repo 192.168.0.5:5000/repo
[root@docker repo]$ docker image push 192.168.0.5:5000/repo
Using default tag: latest
The push refers to repository [192.168.0.5:5000/repo]
7d5760c4aa8d: Preparing
3102e53269f4: Preparing
2f140462f3bc: Preparing
63c99163f472: Preparing
ccdbb80308cc: Preparing
no basic auth credentials
Am I missing something?
I am experiencing the same, but only with macOS "Docker Desktop" installations. On Windows it works as expected.
Client:
Cloud integration: 1.0.17
Version: 20.10.8
API version: 1.41
Go version: go1.16.6
Git commit: 3967b7d
Built: Fri Jul 30 19:55:20 2021
OS/Arch: darwin/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.8
API version: 1.41 (minimum version 1.12)
Go version: go1.16.6
Git commit: 75249d8
Built: Fri Jul 30 19:52:31 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.9
GitCommit: e25210fe30a0a703442421b0f60afac609f950a3
runc:
Version: 1.0.1
GitCommit: v1.0.1-0-g4144b63
docker-init:
Version: 0.19.0
GitCommit: de40ad0
As I am trying to log in to a local registry (using Artifactory), I've checked the reverse proxy's logs. The macOS client doesn't even try to reach the local registry.
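One thing worth checking on macOS: Docker Desktop runs the daemon inside a VM and does not read /etc/docker/daemon.json from the host, so the insecure-registries entry has to be added under Preferences → Docker Engine instead. A minimal sketch, mirroring the entries from the question:
{
  "insecure-registries": ["192.168.0.5:5000", "nexus:5000"]
}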
If I run a Docker container using the host network (--network host), any service running in the container should have its exposed ports directly reachable from the host, right?
I always thought so, until I ran a Docker container using the host network under Windows:
The ip a s eth0 shows that my container IP address is 192.168.65.3
The route | awk '/^default/ { print $2 }' gives 192.168.65.1
However, my host machine has an IP of 10.66.xx.xx
I.e., the container IP address and host IP are completely different, unlike what https://www.metricfire.com/blog/understanding-dockers-net-host-option/ says.
Anyway, if I'm running any services in the container, how do I expose their ports so that they can be directly accessed from the host? (I thought that with the host network (--network host) you no longer need to map ports from the container to the host.)
Thanks!
docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:23:10 2020
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:29:16 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
Host networking is not supported on Windows:
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
https://docs.docker.com/network/network-tutorial-host/
I would suggest trying the -p option to docker run, since that is supported on Windows.
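For example (image and ports chosen purely for illustration), publishing httpd's port 80 on host port 8080:
docker run -d -p 8080:80 httpd
# then, from the host:
curl http://localhost:8080/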
Alternatively, one forum user suggests using VirtualBox in bridged mode to install Linux, which can then use host networking. YMMV.
I'm trying to pull an image from a private gcloud registry onto a gcloud VM, using a service account for authentication. The VM and the registry are in the same project. No matter what I do I always get Error response from daemon: unauthorized.
XXX@sandbox:~$ gcloud auth configure-docker gcr.io
WARNING: Your config file at [/home/XXX/.docker/config.json] contains these credential helper entries:
{
"credHelpers": {
"gcr.io": "gcloud"
}
}
Adding credentials for: gcr.io
gcloud credential helpers already registered correctly.
XXX@sandbox:~$ sudo docker pull gcr.io/MY-PROJECT-ID/MY-IMAGE:latest
Error response from daemon: unauthorized: You don't have the needed permissions to perform
this operation, and you may have invalid credentials. To authenticate your request, follow
the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
The service account has the Storage Admin role for the gcr.io storage bucket.
The VM has storage access enabled as Read-Write.
The VM was stopped and restarted multiple times. Docker is up to date:
XXX@sandbox:~$ which docker
/usr/bin/docker
XXX@sandbox:~$ sudo docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b7f0
Built: Wed Mar 11 01:26:02 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b7f0
Built: Wed Mar 11 01:24:36 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
I can get it to work using a JSON key file, but not with the recommended gcloud auth configure-docker. I guess there is yet another undocumented switch or permission that I need to flip, but I just can't see it.
You can pass the account, or a service account to impersonate, to the command:
gcloud auth configure-docker --account=ACCOUNT
gcloud auth configure-docker --impersonate-service-account=SERVICE_ACCOUNT_EMAIL
When you run with sudo you change the environment: the credential helper is registered in your own user's Docker config, not root's, so the pull is not authenticated to gcr.io, hence the unauthorized error.
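A sketch of the workaround: since gcloud auth configure-docker registers the helper in the invoking user's ~/.docker/config.json, run docker as that user instead of root (the docker group step may vary by distro):
sudo usermod -aG docker $USER
# log out and back in so the group change takes effect, then:
gcloud auth configure-docker gcr.io
docker pull gcr.io/MY-PROJECT-ID/MY-IMAGE:latest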
I just installed a new CentOS server with Docker:
Client:
Version: 1.13.1
API version: 1.26
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Experimental: false
And I can use the command oc cluster up to launch an OpenShift server:
oc cluster up --host-data-dir /data --public-hostname master.ouatrahim.com --routing-suffix master.ouatrahim.com
which gives the output
Using nsenter mounter for OpenShift volumes
Using 127.0.0.1 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.
The server is accessible via web console at:
https://master.ouatrahim.com:8443
You are logged in as:
User: developer
Password: <any value>
To login as administrator:
oc login -u system:admin
And oc version gives the output
oc v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://127.0.0.1:8443
openshift v3.9.0+0e3d24c-14
kubernetes v1.9.1+a0ce1bc657
But when I try to access the web console via https://master.ouatrahim.com:8443/ I keep getting an HTTP redirect to 127.0.0.1:
https://127.0.0.1:8443/oauth/authorize?client_id=openshift-web-console&response_type=code&state=eyJ0aGVuIjoiLyIsIm5vbmNlIjoiMTUyNTk2NjcwODI1MS0xODg4MTcxMDEyMjU3OTQ1MjM0NjIwNzM5NTQ5ODE0ODk5OTYxMTIxMTI2NDI3ODg3Mjc5MjAwMTgwODI4NTg0MTkyODAxOTA2NTY5NjU2In0&redirect_uri=https%3A%2F%2F127.0.0.1%3A8443%2Fconsole%2Foauth
I hope someone can help me solve this.
You can bring up the cluster using your IP address like:
oc cluster up --public-hostname=192.168.122.154
This way you should be able to access it at https://master.ouatrahim.com:8443/, provided that hostname resolves to the same IP.
Use oc config view to check whether the server is https://127.0.0.1:8443. If it is, shut the cluster down with oc cluster down, update the server to your host IP (by editing /root/.kube/config with vi), then run oc cluster up --public-hostname=<your host IP>.
my config:
[root@localhost .kube]# cat config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://10.1.7.62:8443
  name: 10-1-7-62:8443
- cluster:
    certificate-authority-data: LStLQo=
    server: https://10.1.7.62:8443
export no_proxy=<your VM IP>. It should fix the issue.
It seems this variable is used when accessing OpenShift through the proxy, so even if you configure --public-hostname it does not work without it.
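A minimal sketch, assuming a hypothetical VM IP of 192.168.122.154:
export no_proxy=192.168.122.154
oc cluster up --public-hostname=192.168.122.154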
The steps below solved the issue for me:
1 - oc cluster down
2 - mv openshift.local.clusterup to /tmp, or rm -r openshift.local.clusterup
3 - oc cluster up --public-hostname= --routing-suffix=.xip.io
Then open the Web Console URL as "https://:8443/console/"
Ref: https://github.com/openshift/origin/issues/19699