Kubernetes Private Docker Registry Push Error - docker

So I have deployed a Kubernetes cluster and installed a private Docker registry. Here is my registry controller:
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: registry-master
  labels:
    name: registry-master
spec:
  replicas: 1
  selector:
    name: registry-master
  template:
    metadata:
      labels:
        name: registry-master
    spec:
      containers:
        - name: registry-master
          image: registry
          ports:
            - containerPort: 5000
          command: ["docker-registry"]
And the service:
---
apiVersion: v1
kind: Service
metadata:
  name: registry-master
  labels:
    name: registry-master
spec:
  ports:
    # the port that this service should serve on
    - port: 5000
      targetPort: 5000
  selector:
    name: registry-master
Now I SSHed into one of the Kubernetes nodes and built a Ruby app container:
cd /tmp
git clone https://github.com/RichardKnop/sinatra-redis-blog.git
cd sinatra-redis-blog
docker build -t ruby-redis-app .
When I try to tag it and push it to the registry:
docker tag ruby-redis-app registry-master/ruby-redis-app
docker push 10.100.129.115:5000/registry-master/ruby-redis-app
I am getting this error:
Error response from daemon: invalid registry endpoint https://10.100.129.115:5000/v0/: unable to ping registry endpoint https://10.100.129.115:5000/v0/
v2 ping attempt failed with error: Get https://10.100.129.115:5000/v2/: read tcp 10.100.129.115:5000: connection reset by peer
v1 ping attempt failed with error: Get https://10.100.129.115:5000/v1/_ping: read tcp 10.100.129.115:5000: connection reset by peer. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry 10.100.129.115:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/10.100.129.115:5000/ca.crt
Any idea how to solve it? I have been struggling with this for several hours.
Richard

If you're using HTTPS, you must have created a self-signed certificate (with your own CA) or you have a CA-signed certificate.
If so, you need to install this CA certificate on the machine you're calling FROM.
Put your CA cert in
/etc/ssl/certs
and run
update-ca-certificates
Sometimes I have had to put it also in
/usr/local/share/ca-certificates/
(in both cases your CA file extension should be .pem).
For Docker you may also need to put a file in
/etc/docker/certs.d/<your-registry-host:port>/ca.crt
and the file must be named ca.crt
(the same file as the .pem file, but named ca.crt).
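For example, a minimal sketch of the Docker-specific step, assuming the registry address from the question and a CA file named ca.pem (adjust paths to your setup):
# hypothetical file name; use wherever your CA .pem actually lives
sudo mkdir -p /etc/docker/certs.d/10.100.129.115:5000
sudo cp ca.pem /etc/docker/certs.d/10.100.129.115:5000/ca.crt
Docker reads the certs.d directory per request, so no daemon restart should be needed for this particular step.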

I saw a similar issue and it was related to my registry not supporting HTTPS. If your registry does not support HTTPS, you'll have to tell the Docker daemon it is an insecure registry:
echo 'DOCKER_OPTS="--insecure-registry 10.100.129.115:5000"' | sudo tee -a /etc/default/docker
And then restart your docker daemon.
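On newer, systemd-based Docker installs, /etc/default/docker may be ignored; the equivalent setting would live in /etc/docker/daemon.json:
{
  "insecure-registries": ["10.100.129.115:5000"]
}
followed by sudo systemctl restart docker.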

If you are using Ubuntu, add this line to your /etc/default/docker file:
DOCKER_OPTS="--insecure-registry xxx.xxx.xxx.xxx:5000"
where xxx.xxx.xxx.xxx is your private registry's IP.
Then restart the Docker daemon:
sudo service docker restart
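To verify the daemon picked the flag up after the restart, you can check:
docker info | grep -A 2 'Insecure Registries'
Your registry's IP and port should appear in the output.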

k3s image pull from private registries

I've been looking at different references on how to enable k3s (running on my pi) to pull docker images from a private registry on my home network (a server laptop on my network). Can someone please point me in the right direction? This is my approach:
Created the docker registry on my server (and making accessible via port 10000):
docker run -d -p 10000:5000 --restart=always --name local-docker-registry registry:2
This worked, and I was able to push and pull images to it from the "server pc". I haven't added authentication, TLS, etc. yet...
(viewing the images via docker plugin on VS Code).
Added the inbound firewall rule on my laptop server, and tested that the registry can be 'seen' from my pi (so this also works):
$ curl -ks http://<server IP>:10000/v2/_catalog
{"repositories":["tcpserialpassthrough"]}
Added the registry link to k3s (running on my pi) in the registries.yaml file, and restarted k3s and the pi:
$ cat /etc/rancher/k3s/registries.yaml
mirrors:
  pwlaptopregistry:
    endpoint:
      - "http://<host IP here>:10000"
Added the registry prefix to my image reference in the deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcpserialpassthrough
spec:
  selector:
    matchLabels:
      app: tcpserialpassthrough
  replicas: 1
  template:
    metadata:
      labels:
        app: tcpserialpassthrough
    spec:
      containers:
        - name: tcpserialpassthrough
          image: pwlaptopregistry/tcpserialpassthrough:vers1.3-arm
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8001
              hostPort: 8001
              protocol: TCP
          command: ["dotnet", "/app/TcpConnector.dll"]
However, when I check the deployment startup sequence, it's still not able to pull the image (and possibly also still referencing docker hub?):
kubectl get events -w
LAST SEEN TYPE REASON OBJECT MESSAGE
8m24s Normal SuccessfulCreate replicaset/tcpserialpassthrough-88fb974d9 Created pod: tcpserialpassthrough-88fb974d9-b88fc
8m23s Warning FailedScheduling pod/tcpserialpassthrough-88fb974d9-b88fc 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
8m23s Warning FailedScheduling pod/tcpserialpassthrough-88fb974d9-b88fc 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
8m21s Normal Scheduled pod/tcpserialpassthrough-88fb974d9-b88fc Successfully assigned default/tcpserialpassthrough-88fb974d9-b88fc to raspberrypi
6m52s Normal Pulling pod/tcpserialpassthrough-88fb974d9-b88fc Pulling image "pwlaptopregistry/tcpserialpassthrough:vers1.3-arm"
6m50s Warning Failed pod/tcpserialpassthrough-88fb974d9-b88fc Error: ErrImagePull
6m50s Warning Failed pod/tcpserialpassthrough-88fb974d9-b88fc Failed to pull image "pwlaptopregistry/tcpserialpassthrough:vers1.3-arm": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/pwlaptopregistry/tcpserialpassthrough:vers1.3-arm": failed to resolve reference "docker.io/pwlaptopregistry/tcpserialpassthrough:vers1.3-arm": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
6m3s Normal BackOff pod/tcpserialpassthrough-88fb974d9-b88fc Back-off pulling image "pwlaptopregistry/tcpserialpassthrough:vers1.3-arm"
3m15s Warning Failed pod/tcpserialpassthrough-88fb974d9-b88fc Error: ImagePullBackOff
I wondered if the issue was with authorization, and added basic auth following a YouTube guide, but the same issue persists.
Also noted that /etc/docker/daemon.json must be edited to allow unauthorized, non-TLS connections, via:
{
  "insecure-registries": [ "<host IP>:10000" ]
}
but it seemed that this needs to be done on the node side, whereas the nodes don't have the Docker CLI installed??
... this is so stupid; I have no idea why a domain name and port need to be specified as the "name" of the referred registry (the mirror name has to match the registry prefix of the image reference, otherwise the image is resolved against Docker Hub), but anyway this solved my issue (for reference):
$ cat /etc/rancher/k3s/registries.yaml
mirrors:
  "<host IP>:10000":
    endpoint:
      - "http://<host IP>:10000"
and restarting k3s:
systemctl restart k3s
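Before re-applying the deployment, you can confirm the mirror mapping works by pulling directly on the node with k3s's bundled crictl (image reference taken from the question):
sudo k3s crictl pull <host IP>:10000/tcpserialpassthrough:vers1.3-arm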
Then, in your deployment, refer to it in your image path as:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcpserialpassthrough
spec:
  selector:
    matchLabels:
      app: tcpserialpassthrough
  replicas: 1
  template:
    metadata:
      labels:
        app: tcpserialpassthrough
    spec:
      containers:
        - name: tcpserialpassthrough
          image: <host IP>:10000/tcpserialpassthrough:vers1.3-arm
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8001
              hostPort: 8001
              protocol: TCP
          command: ["dotnet", "/app/TcpConnector.dll"]
      imagePullSecrets:
        - name: mydockercredentials
referring to the registry's basic auth details saved as a secret:
$ kubectl create secret docker-registry mydockercredentials --docker-server <host IP>:10000 --docker-username <username> --docker-password <password>
You'll be able to verify the pull process via
$ kubectl get events -w
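As an alternative to listing imagePullSecrets in every manifest, the same secret can be attached to the namespace's default service account so that all pods in the namespace use it automatically:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "mydockercredentials"}]}'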

Deployment pod cannot connect ClusterIP service

I'm trying to expose my server IP by using an Ingress.
The server is an Express.js app. It listens at http://localhost:5000 locally when run without Docker.
Here are my Kubernetes config files:
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: hongbomiao/hongbomiao-server:latest
          ports:
            - containerPort: 5000
          env:
            - name: NODE_ENV
              value: development
server-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000
ingress-service.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: server-cluster-ip-service
                port:
                  number: 5000
I got my minikube IP by
➜ minikube ip
192.168.64.12
When I open 192.168.64.12 in my browser, I got 502 Bad Gateway.
I got some debug idea after reading https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#kubectl-apply. Here is what I have tried:
➜ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h34m
server-cluster-ip-service ClusterIP 10.102.5.161 <none> 5000/TCP 4h39m
➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
server-deployment-bc6777445-pj59f 1/1 Running 0 4h39m
➜ kubectl exec -it server-deployment-bc6777445-pj59f -- sh
/app # apk add --no-cache curl
...
/app # curl 10.102.5.161:5000
curl: (28) Failed to connect to 10.102.5.161 port 5000: Operation timed out
It seems my deployment pod has an issue connecting to the ClusterIP service now. Any help would be nice!
It turns out the issue is caused by my VPN.
I didn't change anything for the Kubernetes config in my question.
Also, letting the Express.js server explicitly listen at 0.0.0.0 is not necessary either.
(Note: @David Maze's comment under the question about 0.0.0.0 is still valuable.)
const app = express()
.use(bodyParser.json())
.use(express.static(path.join(__dirname, '../dist')))
app.listen(5000); // This just works. No need explicitly change to app.listen(5000, '0.0.0.0');
At the time of writing, I was in China. To get rid of the VPN while still using Kubernetes / minikube, I found a way and posted it at GitHub here.
After turning off the VPN with this workaround solution, everything just works.
Copying my solution for using minikube in China here:
Step 1 - Download the Aliyun version minikube
curl -Lo minikube https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.14.2/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
Note: You can find if there is a new version to replace v1.14.2 in the command above at https://github.com/AliyunContainerService/minikube/wiki#%E5%AE%89%E8%A3%85minikube
Step 2 - Start the minikube
minikube start --image-mirror-country cn \
--driver=hyperkit \
--iso-url=https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.15.0.iso \
--registry-mirror=https://xxxxxxxx.mirror.aliyuncs.com
Note 1: You can find the latest minikube version at https://github.com/kubernetes/minikube/blob/master/CHANGELOG.md, then replace v1.15.0 in the command above with a newer version.
However, Aliyun's minikube version is a little behind. To verify whether a new version exists, you can replace the version in the URL https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.15.0.iso with different newer versions, such as v1.15.1, and then open it in the browser.
Note 2: For the xxxxxxxx in the command above, you can find yours at
https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors
(You need to register an Aliyun account first.)
Note 3: You can pass more parameters to this Aliyun version minikube start, check at https://github.com/AliyunContainerService/minikube/wiki#%E5%90%AF%E5%8A%A8
In my case, I am using the hyperkit driver on macOS, together with Aliyun's iso-url and registry-mirror to speed things up.

Kubernetes / Docker - SSL certificates for web service use

I have a Python web service that collects data from frontend clients. Every few seconds, it creates a Pulsar producer on our topic and sends the collected data. I have also set up a dockerfile to build an image and am working on deploying it to our organization's Kubernetes cluster.
The Pulsar code relies on certificate and key .pem files for TLS authentication, which are loaded via file paths in the test code. However, if the .pem files are included in the built Docker image, it will result in an obvious compliance violation from the Twistlock scan on our Kubernetes instance.
I am pretty inexperienced with Docker, Kubernetes, and security with certificates in general. What would be the best way to store and load the .pem files for use with this web service?
You can mount certificates into the Pod with a Kubernetes secret.
First, you need to create the Kubernetes secret.
(Copy your certificate to a machine where kubectl is configured for your Kubernetes cluster, for example a file mykey.pem copied into the /opt/certs folder.)
kubectl create secret generic mykey-pem --from-file=/opt/certs/
Confirm it was created correctly:
kubectl describe secret mykey-pem
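If you want to confirm the file content survived intact, you can decode it back out (the data key matches the file name, with the dot escaped in the jsonpath expression):
kubectl get secret mykey-pem -o jsonpath='{.data.mykey\.pem}' | base64 --decode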
Mount your secret in your deployment (for example, an nginx deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - mountPath: "/etc/nginx/ssl"
              name: nginx-ssl
              readOnly: true
          ports:
            - containerPort: 80
      volumes:
        - name: nginx-ssl
          secret:
            secretName: mykey-pem
      restartPolicy: Always
After that, the .pem files will be available inside the container and you don't need to include them in the Docker image.
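To double-check from the running pod, list the mount path used in the example above:
kubectl exec deploy/nginx -- ls /etc/nginx/ssl
It should show mykey.pem, which your application can then load by file path exactly as in your test code.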

Certificate signed by unknown authority error in Jenkins pipeline with Kubernetes cluster deployment

When I try to deploy my Spring Boot microservice using Jenkins and Kubernetes, I am getting the following error:
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
My deployment.yaml file looks like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spacestudysecurityauthcontrol-deployment
  labels:
    app: spacestudysecurityauthcontrol-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spacestudysecurityauthcontrol-deployment
  template:
    metadata:
      labels:
        app: spacestudysecurityauthcontrol-deployment
      annotations:
        date: "+%H:%M:%S %d/%m/%y"
    spec:
      imagePullSecrets:
        - name: "regcred"
      containers:
        - name: spacestudysecurityauthcontrol-deployment-container
          image: spacestudymilletech010/spacestudysecurityauthcontrol:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8065
          readinessProbe:
            tcpSocket:
              port: 8065
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: 8065
            initialDelaySeconds: 15
            periodSeconds: 20
      nodeSelector:
        tenantName: tenant1
And my service looks like the following:
apiVersion: v1
kind: Service
metadata:
  name: spacestudysecurityauthcontrol-service
spec:
  type: NodePort
  ports:
    - port: 8065
      targetPort: 8065
      protocol: TCP
      name: http
      nodePort: 31026
  selector:
    app: spacestudysecurityauthcontrol-deployment
Why is this error happening and how can I correct my implementation?
This error generally means that the kubeconfig file used to authenticate to the Kubernetes API server contains a CA certificate that is not able to validate the server certificate presented by the API server. Double-check that you are using the kubeconfig file corresponding to the Kubernetes cluster you are trying to connect to.
This is nicely explained in Troubleshooting kubeadm TLS certificate errors:
Verify that the $HOME/.kube/config file contains a valid certificate, and regenerate a certificate if necessary. The certificates in a kubeconfig file are base64 encoded. The base64 --decode command can be used to decode the certificate and openssl x509 -text -noout can be used for viewing the certificate information.
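For example, assuming the CA is stored inline as certificate-authority-data (the common layout), a quick one-liner to inspect it:
grep certificate-authority-data $HOME/.kube/config | awk '{print $2}' | base64 --decode | openssl x509 -text -noout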
Unset the KUBECONFIG environment variable using:
unset KUBECONFIG
Or set it to the default KUBECONFIG location:
export KUBECONFIG=/etc/kubernetes/admin.conf
Another workaround is to overwrite the existing kubeconfig for the “admin” user:
mv $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
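Then confirm the new kubeconfig authenticates cleanly:
kubectl get nodes
If the CA now matches the API server's certificate, this returns the node list instead of the x509 error.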

How to fix "Failed to pull image" on microk8s

I'm trying to follow Docker's Get Started tutorials, but I get stuck when I have to work with Kubernetes. I'm using microk8s to create the clusters.
My Dockerfile:
FROM node:6.11.5
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
CMD [ "npm", "start" ]
My bb.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bb-demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      bb: web
  template:
    metadata:
      labels:
        bb: web
    spec:
      containers:
        - name: bb-site
          image: bulletinboard:1.0
---
apiVersion: v1
kind: Service
metadata:
  name: bb-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    bb: web
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30001
I create the image with
docker image build -t bulletinboard:1.0 .
And I create the pod and the service with:
microk8s.kubectl apply -f bb.yaml
The pod is created, but when I look at the state of my pods with
microk8s.kubectl get all
It says:
NAME READY STATUS RESTARTS AGE
pod/bb-demo-7ffb568776-6njfg 0/1 ImagePullBackOff 0 11m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/bb-entrypoint NodePort 10.152.183.2 <none> 8080:30001/TCP 11m
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 4d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/bb-demo 0/1 1 0 11m
NAME DESIRED CURRENT READY AGE
replicaset.apps/bb-demo-7ffb568776 1 1 0 11m
Also, when I look for it at the kubernetes dashboard it says:
Failed to pull image "bulletinboard:1.0": rpc error: code = Unknown desc = failed to resolve image "docker.io/library/bulletinboard:1.0": no available registry endpoint: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Q: Why do I get this error? I'm just following the tutorial without skipping anything.
I'm already logged in with Docker.
You need to push this locally built image to the Docker Hub registry. For that, you need to create a Docker Hub account if you do not have one already.
Once you do that, you need to log in to Docker Hub from your command line.
docker login
Tag your image so it goes to your Docker Hub repository.
docker tag bulletinboard:1.0 <your docker hub user>/bulletinboard:1.0
Push your image to Docker Hub
docker push <your docker hub user>/bulletinboard:1.0
Update the yaml file to reflect the new image repo on Docker Hub.
spec:
  containers:
    - name: bb-site
      image: <your docker hub user>/bulletinboard:1.0
Re-apply the yaml file:
microk8s.kubectl apply -f bb.yaml
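Then watch the rollout until the pod leaves ImagePullBackOff:
microk8s.kubectl get pods -w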
You can host a local registry server if you do not wish to use Docker Hub.
Start a local registry server:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Tag your image:
sudo docker tag bulletinboard:1.0 localhost:5000/bulletinboard
Push it to a local registry:
sudo docker push localhost:5000/bulletinboard
Change the yaml file:
spec:
  containers:
    - name: bb-site
      image: localhost:5000/bulletinboard
Start the deployment:
kubectl apply -f bb.yaml
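Since the cluster here is microk8s, note that it also ships its own registry add-on (listening on localhost:32000), which saves you running a separate registry container:
microk8s enable registry
docker tag bulletinboard:1.0 localhost:32000/bulletinboard:1.0
docker push localhost:32000/bulletinboard:1.0
and then use image: localhost:32000/bulletinboard:1.0 in the yaml.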
A suggested solution is to add imagePullPolicy: Never to your Deployment as per the answer here, but this didn't work for me, so I followed this guide since I was working in local development.
