Google Container Engine having problems pulling image from Container Registry

I'm trying to create a deployment on GKE (running 1.6.0) which looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: eu.gcr.io/<PROJECT>/<IMAGE>:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          value: production
        resources:
          requests:
            cpu: 100m
Creating this fails with the following error message:
Failed to pull image "eu.gcr.io/<PROJECT>/<IMAGE>:latest": rpc error: code = 2 desc = failed to register layer: rename /var/lib/docker/image/overlay/layerdb/tmp/layer-629814250 /var/lib/docker/image/overlay/layerdb/sha256/bd2793152ee77e9d503e981352ff16122b220968ce9df1cc3b49b9704d7dfe28: directory not empty
Error syncing pod, skipping: failed to "StartContainer" for "api" with ErrImagePull: "rpc error: code = 2 desc = failed to register layer: rename /var/lib/docker/image/overlay/layerdb/tmp/layer-629814250 /var/lib/docker/image/overlay/layerdb/sha256/bd2793152ee77e9d503e981352ff16122b220968ce9df1cc3b49b9704d7dfe28: directory not empty"
Other deployments that look almost identical but use a different image are working as expected. What is wrong with the image I'm trying to pull? And how can I debug/fix this?

This may be caused by a known docker bug where shutdown occurs before the content is synced to disk on layer creation. The fix is included in docker v1.13.
One suggested temporary workaround is to remove the empty files in the directory and re-pull the image.
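If you have SSH access to the affected node, a minimal sketch of that workaround might look like this (assuming the layer digest from the error above; the exact path varies with Docker's storage driver):
$ # on the node that reported the error: inspect the half-written layer directory
$ sudo ls /var/lib/docker/image/overlay/layerdb/sha256/bd2793152ee77e9d503e981352ff16122b220968ce9df1cc3b49b9704d7dfe28
$ # remove the leftover directory so the rename can succeed
$ sudo rm -rf /var/lib/docker/image/overlay/layerdb/sha256/bd2793152ee77e9d503e981352ff16122b220968ce9df1cc3b49b9704d7dfe28
$ # re-pull the image
$ sudo docker pull eu.gcr.io/<PROJECT>/<IMAGE>:latest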


download failed: x509: certificate signed by unknown authority

I just started learning Docker and Kubernetes and installed minikube and Docker on my Windows machine. I am able to pull the image using the docker pull command, but I get the error below with kubectl. Please help.
Warning Failed 18s (x2 over 53s) kubelet Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = error pulling image configuration: download failed after attempts=6: x509: certificate signed by unknown authority
This is my yml file.
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
spec:
  containers:
    name: nginx1
    image: nginx:alpine
    ports:
      containerPort: 80
      containerPort: 443
Thanks in advance.
Your yaml file looks messed up: containers and ports are lists, so each entry needs a - prefix and consistent indentation. Can you test whether this yaml file works for you? I have used a different image in the yaml file below:
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  namespace: test
spec:
  containers:
  - name: webserver
    image: nginx:latest
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "200m"
      limits:
        memory: "128Mi"
        cpu: "350m"

Unable to pull docker image from local registry for Kubernetes deployment

I have a K8s cluster on Linode and another VM for operations.
I've installed Docker & K8s on the operations VM to build images and do deployments to the cluster.
Note: I haven't installed minikube on this VM.
I'm able to build my image, but not able to pull it from the local registry into a k8s pod.
Below are the things I've already done & tried to solve the problem.
Created and pushed the docker image to the local registry.
Ran a docker container from the image successfully, but it doesn't get pulled in K8s.
Created the "regcred" secret and used it in the deployment yaml (see the sketch after this list).
Created and pushed the image with the VM's IP (10.128.234.123:5000/app-frontend) and used the same in the deployment image reference.
Changed the image pull policy to IfNotPresent.
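For reference, the "regcred" secret from step 3 is typically created along these lines (a sketch; the username and password values are placeholders for your registry's credentials):
$ kubectl create secret docker-registry regcred \
    --docker-server=10.128.234.123:5000 \
    --docker-username=<username> \
    --docker-password=<password>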
I get the following error in pod description:
Warning ErrImageNeverPull 11s (x4 over 13s) kubelet Container image "localhost:5000/app-frontend" is not present with pull policy of Never
Warning Failed 11s (x4 over 13s) kubelet Error: ErrImageNeverPull
Below is my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-frontend
  labels:
    app: app-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-frontend
  template:
    metadata:
      labels:
        app: app-frontend
    spec:
      containers:
      - name: app-frontend
        image: localhost:5000/docker-image
        imagePullPolicy: Never
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
Any help or guidance would be greatly appreciated.
In the Docs I see this:
With imagePullPolicy set to Never, the image is never pulled.
Try this instead:
imagePullPolicy: IfNotPresent
Also, your manifest references
image: localhost:5000/docker-image
but in point 4 you say you pushed the image with the VM's IP (10.128.234.123:5000/app-frontend); the image reference in the deployment must match the name you pushed.
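Putting both fixes together, the relevant part of the deployment would look roughly like this (a sketch, assuming the image was pushed as 10.128.234.123:5000/app-frontend as in point 4, and that the registry is reachable from the nodes):
    spec:
      containers:
      - name: app-frontend
        # reference the image exactly as it was pushed, not via localhost
        image: 10.128.234.123:5000/app-frontend
        # IfNotPresent pulls only when the image is missing on the node
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred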

Kubernetes pull image from private insecure registry fails

I have an unsecured private docker registry hosted on a vm server (vm1). I am trying to create a k8s deployment from an image pushed to this registry. Surprisingly, the docker pull command works fine, since I have configured /etc/docker/daemon.json with insecure-registries.
The detailed error through the kubectl describe command is as below. Any idea what could be going wrong?
Thanks.
Failed to pull image "vm1:5000/temp/leads:latest": rpc error: code = Unknown desc = failed to pull and unpack image "vm1:5000/temp/leads:latest": failed to resolve reference "vm1:5000/temp/leads:latest": failed to do request: Head "https://vm1:5000/v2/temp/leads/manifests/latest": http: server gave HTTP response to HTTPS client
The docker pull command is
docker pull vm1:5000/temp/leads:latest
The k8s manifest file is as follows
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: oleads
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: vm1:5000/temp/leads:latest
        resources:
          requests:
            memory: "64Mi"
            cpu: 0.5
          limits:
            memory: "512Mi"
            cpu: 0.5
        ports:
        - containerPort: 8980
        imagePullPolicy: Always
I realised that the Kubernetes distribution I am using, k3s, uses a different container runtime: containerd instead of Docker. That is why the /etc/docker/daemon.json setting had no effect on the cluster.
With k3s the config for using private registries is different. It is mentioned here.
The config I had to add in the /etc/rancher/k3s/registries.yaml file is:
mirrors:
  "vm1:5000":
    endpoint:
      - "http://vm1:5000"
Restarting the k3s service after adding this file resolved the issue, and k8s was able to pull the image from my private insecure docker registry.
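For completeness, the restart is a one-liner (assuming k3s runs as a systemd service):
$ sudo systemctl restart k3s
$ # optionally verify the pull directly through k3s's bundled crictl
$ sudo k3s crictl pull vm1:5000/temp/leads:latest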
We had the same issue; the solution was adding the insecure registry to the Docker daemon.
Do this on all nodes:
Create the file /etc/docker/daemon.json and add the insecure registry details:
{ "insecure-registries": ["vm1:5000"] }
Then restart Docker on all nodes.
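On a systemd-based node, that might look like the following sketch (note: this overwrites any existing daemon.json; merge the key in by hand if the file already exists):
$ echo '{ "insecure-registries": ["vm1:5000"] }' | sudo tee /etc/docker/daemon.json
$ sudo systemctl restart docker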

k3s image pull from private registries

I've been looking at different references on how to enable k3s (running on my Pi) to pull docker images from a private registry on my home network (a server laptop on my network). Can someone please point me in the right direction? This is my approach:
Created the docker registry on my server (making it accessible via port 10000):
docker run -d -p 10000:5000 --restart=always --name local-docker-registry registry:2
This worked, and I was able to push and pull images to it from the "server pc" (viewing the images via the Docker plugin in VS Code). I didn't add authentication, TLS, etc. yet...
Added the inbound firewall rule on my laptop server, and tested that the registry can be 'seen' from my pi (so this also works):
$ curl -ks http://<server IP>:10000/v2/_catalog
{"repositories":["tcpserialpassthrough"]}
Added the registry link to k3s (running on my Pi) in the registries.yaml file, and restarted k3s and the Pi:
$ cat /etc/rancher/k3s/registries.yaml
mirrors:
  pwlaptopregistry:
    endpoint:
      - "http://<host IP here>:10000"
Added the registry prefix to my image reference in the deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcpserialpassthrough
spec:
  selector:
    matchLabels:
      app: tcpserialpassthrough
  replicas: 1
  template:
    metadata:
      labels:
        app: tcpserialpassthrough
    spec:
      containers:
      - name: tcpserialpassthrough
        image: pwlaptopregistry/tcpserialpassthrough:vers1.3-arm
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 8001
          hostPort: 8001
          protocol: TCP
        command: ["dotnet", "/app/TcpConnector.dll"]
However, when I check the deployment startup sequence, it's still not able to pull the image (and possibly also still referencing docker hub?):
kubectl get events -w
LAST SEEN TYPE REASON OBJECT MESSAGE
8m24s Normal SuccessfulCreate replicaset/tcpserialpassthrough-88fb974d9 Created pod: tcpserialpassthrough-88fb974d9-b88fc
8m23s Warning FailedScheduling pod/tcpserialpassthrough-88fb974d9-b88fc 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
8m23s Warning FailedScheduling pod/tcpserialpassthrough-88fb974d9-b88fc 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
8m21s Normal Scheduled pod/tcpserialpassthrough-88fb974d9-b88fc Successfully assigned default/tcpserialpassthrough-88fb974d9-b88fc to raspberrypi
6m52s Normal Pulling pod/tcpserialpassthrough-88fb974d9-b88fc Pulling image "pwlaptopregistry/tcpserialpassthrough:vers1.3-arm"
6m50s Warning Failed pod/tcpserialpassthrough-88fb974d9-b88fc Error: ErrImagePull
6m50s Warning Failed pod/tcpserialpassthrough-88fb974d9-b88fc Failed to pull image "pwlaptopregistry/tcpserialpassthrough:vers1.3-arm": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/pwlaptopregistry/tcpserialpassthrough:vers1.3-arm": failed to resolve reference "docker.io/pwlaptopregistry/tcpserialpassthrough:vers1.3-arm": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
6m3s Normal BackOff pod/tcpserialpassthrough-88fb974d9-b88fc Back-off pulling image "pwlaptopregistry/tcpserialpassthrough:vers1.3-arm"
3m15s Warning Failed pod/tcpserialpassthrough-88fb974d9-b88fc Error: ImagePullBackOff
I wondered if the issue was with authorization, and added basic auth following a YouTube guide, but the same issue persists.
I also noted that /etc/docker/daemon.json must be edited to allow unauthorized, non-TLS connections, via:
{
  "insecure-registries": ["<host IP>:10000"]
}
but it seems this needs to be done on the node side, whereas the nodes don't have the docker cli installed??
... this is so stupid; I have no idea why a domain name and port need to be specified as the "name" of the referenced registry, but anyway this solved my issue (for reference):
$ cat /etc/rancher/k3s/registries.yaml
mirrors:
  "<host IP>:10000":
    endpoint:
      - "http://<host IP>:10000"
and restarting k3s:
systemctl restart k3s
Then, in your deployment, refer to it in your image path as:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcpserialpassthrough
spec:
  selector:
    matchLabels:
      app: tcpserialpassthrough
  replicas: 1
  template:
    metadata:
      labels:
        app: tcpserialpassthrough
    spec:
      containers:
      - name: tcpserialpassthrough
        image: <host IP>:10000/tcpserialpassthrough:vers1.3-arm
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 8001
          hostPort: 8001
          protocol: TCP
        command: ["dotnet", "/app/TcpConnector.dll"]
      imagePullSecrets:
      - name: mydockercredentials
where mydockercredentials refers to the registry's basic auth details saved as a secret:
$ kubectl create secret docker-registry mydockercredentials --docker-server=<host IP>:10000 --docker-username=<username> --docker-password=<password>
You'll be able to verify the pull process via
$ kubectl get events -w

Unable to connect to Kubernetes - Invalid Configuration

I'm here for hours every day, reading and learning, but this is my first question, so bear with me.
I'm simply trying to get my Kubernetes cluster to start up.
Below is my skaffold.yaml file in the root of the project:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
  - image: omesadev/auth
    context: auth
    docker:
      dockerfile: Dockerfile
    sync:
      manual:
      - src: 'src/**/*.ts'
        dest: .
Below is my auth-depl.yaml file in the infra/k8s/ directory:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: omesadev/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 3000
    targetPort: 3000
Below is the error message I'm receiving in the cli:
exiting dev mode because first deploy failed: unable to connect to Kubernetes: getting client config for Kubernetes client: error creating REST client config for kubeContext "": invalid configuration: [unable to read client-cert C:\Users\omesa\.minikube\profiles\minikube\client.crt for minikube due to open C:\Users\omesa\.minikube\profiles\minikube\client.crt: The system cannot find the path specified., unable to read client-key C:\Users\omesa\.minikube\profiles\minikube\client.key for minikube due to open C:\Users\omesa\.minikube\profiles\minikube\client.key: The system cannot find the path specified., unable to read certificate-authority C:\Users\omesa\.minikube\ca.crt for minikube due to open C:\Users\omesa\.minikube\ca.crt: The system cannot find the file specified.
I've tried to install kubernetes, minikube, and kubectl. I've added them to the path and removed them a few times in different ways because I thought my configuration or usage could have been incorrect.
Then I read that if I'm using the Docker GUI, Kubernetes should be running within it, so I checked the settings in the Docker GUI to ensure Kubernetes was running through Docker, and it is.
I have Hyper-V set up. I've used it in the past successfully with Docker and with Virtualbox, so I know my Hyper-V is not the issue.
I've also attached an image of my file directory, but I'm pretty sure everything is good to go here too.
Thanks in advance!
Enable Kubernetes!
The reason you are getting this error is that Kubernetes is not enabled.
Posting @Jim's solution from the comments as community wiki for better visibility:
The problem was, I had two different contexts inside of my kubectl config and the project I was trying to launch was using the wrong cluster/context. I don't know how the minikube cluster and context were created, but I deleted them and set the new context to docker-desktop with "kubectl config use-context docker-desktop".
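In command form, the fix from the quote boils down to (a sketch):
$ kubectl config get-contexts                  # list contexts and spot the broken one
$ kubectl config delete-context minikube       # remove the stale minikube context
$ kubectl config use-context docker-desktop    # switch to Docker Desktop's cluster
$ kubectl config current-context               # confirm the switch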
Helpful links:
Organizing Cluster Access Using kubeconfig Files
Configure Access to Multiple Clusters
