I'm trying to follow Docker's Getting Started tutorials, but I get stuck when you have to work with Kubernetes. I'm using MicroK8s to create the cluster.
My Dockerfile:
FROM node:6.11.5
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
CMD [ "npm", "start" ]
My bb.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bb-demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      bb: web
  template:
    metadata:
      labels:
        bb: web
    spec:
      containers:
        - name: bb-site
          image: bulletinboard:1.0
---
apiVersion: v1
kind: Service
metadata:
  name: bb-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    bb: web
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30001
I create the image with
docker image build -t bulletinboard:1.0 .
And I create the pod and the service with:
microk8s.kubectl apply -f bb.yaml
The pod is created, but when I check the state of my pods with
microk8s.kubectl get all
It says:
NAME READY STATUS RESTARTS AGE
pod/bb-demo-7ffb568776-6njfg 0/1 ImagePullBackOff 0 11m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/bb-entrypoint NodePort 10.152.183.2 <none> 8080:30001/TCP 11m
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 4d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/bb-demo 0/1 1 0 11m
NAME DESIRED CURRENT READY AGE
replicaset.apps/bb-demo-7ffb568776 1 1 0 11m
Also, when I look at it in the Kubernetes dashboard, it says:
Failed to pull image "bulletinboard:1.0": rpc error: code = Unknown desc = failed to resolve image "docker.io/library/bulletinboard:1.0": no available registry endpoint: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Q: Why do I get this error? I'm just following the tutorial without skipping anything.
I'm already logged in with Docker.
You need to push this locally built image to the Docker Hub registry. For that, you need to create a Docker Hub account if you do not have one already.
Once you do that, you need to log in to Docker Hub from your command line.
docker login
Tag your image so it goes to your Docker Hub repository.
docker tag bulletinboard:1.0 <your docker hub user>/bulletinboard:1.0
Push your image to Docker Hub
docker push <your docker hub user>/bulletinboard:1.0
Update the yaml file to reflect the new image repo on Docker Hub.
spec:
  containers:
    - name: bb-site
      image: <your docker hub user>/bulletinboard:1.0
Re-apply the YAML file:
microk8s.kubectl apply -f bb.yaml
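After re-applying, you can confirm the pull succeeds; a quick check (the pod name suffix will differ):
microk8s.kubectl get pods
microk8s.kubectl describe pod <pod name>
The events in the describe output should now show a successful pull instead of ImagePullBackOff.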
Alternatively, you can host a local registry server if you do not wish to use Docker Hub.
Start a local registry server:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Tag your image:
sudo docker tag bulletinboard:1.0 localhost:5000/bulletinboard
Push it to a local registry:
sudo docker push localhost:5000/bulletinboard
Change the yaml file:
spec:
  containers:
    - name: bb-site
      image: localhost:5000/bulletinboard
Start the deployment:
kubectl apply -f bb.yaml
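To confirm the image actually landed in the local registry before deploying, you can query the registry's standard v2 catalog endpoint:
curl http://localhost:5000/v2/_catalog
It should return something like {"repositories":["bulletinboard"]}.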
A suggested solution is to add imagePullPolicy: Never to your Deployment, as per the answer here, but that didn't work for me, so I followed this guide instead, since I was working in local development.
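For reference, another way to make a locally built image visible to MicroK8s without any registry is to export it from Docker and import it into MicroK8s's containerd; a minimal sketch (the tarball name is arbitrary):
docker save bulletinboard:1.0 > bulletinboard.tar
microk8s ctr image import bulletinboard.tar
After the import, bulletinboard:1.0 resolves locally, though you may still need imagePullPolicy: Never or IfNotPresent on the container so the kubelet does not try Docker Hub first.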
Related
I've created my own image, just called v2, but when I do kubectl get pods it keeps erroring out with: Failed to pull image "v2": rpc error: code = Unknown desc = Error response from daemon: pull access denied for v2, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
I'm using Minikube, by the way.
This is my deployment file, also called v2.yaml
apiVersion: v1
kind: Service
metadata:
  name: v2
spec:
  selector:
    name: v2
  ports:
    - port: 8080
      targetPort: 80
---
# ... Deployment YAML definition
apiVersion: apps/v1
kind: Deployment
metadata:
  name: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: v2
  template:
    metadata:
      labels:
        name: v2
    spec:
      containers:
        - name: v2
          image: v2
          ports:
            - containerPort: 80
          imagePullPolicy: IfNotPresent
---
# ... Ingress YAML definition
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: v2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /second
            pathType: Prefix
            backend:
              service:
                name: v2
                port:
                  number: 8080
Any help gratefully received.
My suspicion is that you have built your container image against your local Docker daemon rather than Minikube's. Hence, because your imagePullPolicy is set to IfNotPresent, the node will try pulling it from Docker Hub (the default container registry).
You can run minikube ssh to open a shell and then run docker image ls to verify the image is not present in the minikube Docker daemon.
The solution here is to first run the following command from your local shell (i.e. not the one in minikube):
$ eval $(minikube -p minikube docker-env)
It sets up your current shell to use Minikube's Docker daemon. After that, rebuild your image in the same shell. Now, when Minikube tries pulling the image, it should find it and bring up the pod successfully.
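For example, the full sequence might look like this (the image name is taken from the deployment above):
$ eval $(minikube -p minikube docker-env)
$ docker build -t v2 .
$ kubectl rollout restart deployment v2
The rollout restart simply forces the pod to be recreated so it picks up the freshly built image.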
As the error message indicates (v2, repository does not exist), this is because of image: v2. There is no image available on Docker Hub named v2. If it is in your repository on Docker Hub, then reference it in the form <reponame>/v2.
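If you would rather pull from Docker Hub, the tag-and-push flow is a short sketch like this (<reponame> stands for your Docker Hub username):
docker tag v2 <reponame>/v2
docker push <reponame>/v2
Then set image: <reponame>/v2 in the deployment.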
A colleague created a K8s cluster for me. I can run services in that cluster without any problem. However, I cannot run services that depend on an image from Amazon ECR, which I really do not understand. Probably, I made a small mistake in my deployment file and thus caused this problem.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
Here is my service file:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello
spec:
  type: NodePort
  ports:
    - port: 5000
      nodePort: 30002
      protocol: TCP
  selector:
    app: hello
On the master node, I ran the following to make sure Kubernetes knows about the deployment and the service:
kubectl create -f dep.yml
kubectl create -f service.yml
I used the K8s extension in VS Code to check the logs of my pods.
This is the error I get:
Error from server (BadRequest): container "hello" in pod
"hello-deployment-xxxx-49pbs" is waiting to start: trying and failing
to pull image.
Apparently, pulling is the issue. This does not happen when using a public image from Docker Hub. Logically, this would be a permissions issue, but it looks like it is not. I get no error message when running this command on the master node:
docker pull xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
This command just pulls my image.
I am confused now. I can pull my image with docker pull on the master node, but K8s fails to pull it. Am I missing something in my deployment file? Some property that says "repositoryIsPrivateButDoNotComplain"? I just do not get it.
How to fix this so K8s can easily use my image from Amazon ECR?
You should create and use secrets for the ECR authorization.
This is what you need to do.
Create a secret for the Kubernetes cluster: execute the shell script below from a machine that can access the AWS account in which the ECR registry is hosted. Change the placeholders to match your setup. Make sure the machine on which you execute the script has the AWS CLI installed and AWS credentials configured. If you are using a Windows machine, execute the script in a Cygwin or Git Bash console.
#!/bin/bash
ACCOUNT=<AWS_ACCOUNT_ID>
REGION=<REGION>
SECRET_NAME=<SECRET_NAME>
EMAIL=<SOME_DUMMY_EMAIL>
TOKEN=`/usr/local/bin/aws ecr --region=$REGION --profile <AWS_PROFILE> get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`
kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
--docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
Change the deployment and add an imagePullSecrets section, which your pods will use when downloading the image from ECR.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
      imagePullSecrets:
        - name: SECRET_NAME
Create the pods and service.
Even if this succeeds, the secret will still expire after 12 hours. To overcome that, set up a cron job that recreates the secret on the Kubernetes cluster periodically, using the same script given above.
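A minimal sketch of such a cron entry, assuming the script above is saved as /opt/refresh-ecr-secret.sh (a hypothetical path), refreshing every 8 hours to stay within the 12-hour token lifetime:
0 */8 * * * /opt/refresh-ecr-secret.sh >> /var/log/refresh-ecr-secret.log 2>&1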
For the 12-hour problem: if you are using Kubernetes 1.20, configure and use the kubelet image credential provider:
https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/
You need to enable the alpha feature gate KubeletCredentialProviders in your kubelet.
If you are using a lower Kubernetes version and this feature is not available, then use https://medium.com/@damitj07/how-to-configure-and-use-aws-ecr-with-kubernetes-rancher2-0-6144c626d42c
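For reference, a minimal sketch of what such a kubelet CredentialProviderConfig might look like (field names are per the alpha API; verify them against your exact Kubernetes version before relying on this):
apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1
The file is passed to the kubelet via --image-credential-provider-config, with --image-credential-provider-bin-dir pointing at the directory containing the provider binary.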
I've been looking at different references on how to enable k3s (running on my Pi) to pull Docker images from a private registry on my home network (a server laptop on my network). Can someone please point my head in the right direction? This is my approach:
Created the Docker registry on my server (and made it accessible via port 10000):
docker run -d -p 10000:5000 --restart=always --name local-docker-registry registry:2
This worked, and I was able to push and pull images to it from the server PC. I didn't add authentication, TLS, etc. yet...
(I viewed the images via the Docker plugin in VS Code.)
Added the inbound firewall rule on my laptop server, and tested that the registry can be 'seen' from my pi (so this also works):
$ curl -ks http://<server IP>:10000/v2/_catalog
{"repositories":["tcpserialpassthrough"]}
Added the registry link to k3s (running on my Pi) in the registries.yaml file, and restarted k3s and the Pi:
$ cat /etc/rancher/k3s/registries.yaml
mirrors:
  pwlaptopregistry:
    endpoint:
      - "http://<host IP here>:10000"
Then I put the registry prefix on the image reference in my deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcpserialpassthrough
spec:
  selector:
    matchLabels:
      app: tcpserialpassthrough
  replicas: 1
  template:
    metadata:
      labels:
        app: tcpserialpassthrough
    spec:
      containers:
        - name: tcpserialpassthrough
          image: pwlaptopregistry/tcpserialpassthrough:vers1.3-arm
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8001
              hostPort: 8001
              protocol: TCP
          command: ["dotnet", "/app/TcpConnector.dll"]
However, when I check the deployment startup sequence, it's still not able to pull the image (and is possibly still referencing Docker Hub?):
kubectl get events -w
LAST SEEN TYPE REASON OBJECT MESSAGE
8m24s Normal SuccessfulCreate replicaset/tcpserialpassthrough-88fb974d9 Created pod: tcpserialpassthrough-88fb974d9-b88fc
8m23s Warning FailedScheduling pod/tcpserialpassthrough-88fb974d9-b88fc 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
8m23s Warning FailedScheduling pod/tcpserialpassthrough-88fb974d9-b88fc 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
8m21s Normal Scheduled pod/tcpserialpassthrough-88fb974d9-b88fc Successfully assigned default/tcpserialpassthrough-88fb974d9-b88fc to raspberrypi
6m52s Normal Pulling pod/tcpserialpassthrough-88fb974d9-b88fc Pulling image "pwlaptopregistry/tcpserialpassthrough:vers1.3-arm"
6m50s Warning Failed pod/tcpserialpassthrough-88fb974d9-b88fc Error: ErrImagePull
6m50s Warning Failed pod/tcpserialpassthrough-88fb974d9-b88fc Failed to pull image "pwlaptopregistry/tcpserialpassthrough:vers1.3-arm": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/pwlaptopregistry/tcpserialpassthrough:vers1.3-arm": failed to resolve reference "docker.io/pwlaptopregistry/tcpserialpassthrough:vers1.3-arm": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
6m3s Normal BackOff pod/tcpserialpassthrough-88fb974d9-b88fc Back-off pulling image "pwlaptopregistry/tcpserialpassthrough:vers1.3-arm"
3m15s Warning Failed pod/tcpserialpassthrough-88fb974d9-b88fc Error: ImagePullBackOff
I wondered if the issue was with authorization, so I added basic auth following this YouTube guide, but the same issue persists.
I also noted that /etc/docker/daemon.json must be edited to allow unauthorized, non-TLS connections, via:
{
  "insecure-registries": [ "<host IP>:10000" ]
}
but it seemed that this needs to be done on the node side, whereas the nodes don't have the Docker CLI installed?
... this is so stupid; I have no idea why a domain name and port need to be specified as the "name" of the referenced registry, but anyway, this solved my issue (for reference):
$ cat /etc/rancher/k3s/registries.yaml
mirrors:
  "<host IP>:10000":
    endpoint:
      - "http://<host IP>:10000"
and restarting k3s:
systemctl restart k3s
Then, in your deployment, refer to it in the image path as:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcpserialpassthrough
spec:
  selector:
    matchLabels:
      app: tcpserialpassthrough
  replicas: 1
  template:
    metadata:
      labels:
        app: tcpserialpassthrough
    spec:
      containers:
        - name: tcpserialpassthrough
          image: <host IP>:10000/tcpserialpassthrough:vers1.3-arm
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8001
              hostPort: 8001
              protocol: TCP
          command: ["dotnet", "/app/TcpConnector.dll"]
      imagePullSecrets:
        - name: mydockercredentials
This refers to the registry's basic auth details, saved as a secret:
$ kubectl create secret docker-registry mydockercredentials --docker-server="<host IP>:10000" --docker-username=<username> --docker-password=<password>
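You can verify the secret was created with the expected type before the deployment references it:
$ kubectl get secret mydockercredentials -o yaml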
You'll be able to verify the pull process via
$ kubectl get events -w
I'm learning Kubernetes and want to set up a Docker registry to run within my cluster, deploy any custom code to this private registry, then have my nodes pull images from this private registry to create pods. I've described my setup in this StackOverflow question
Originally I was caught up trying to figure out SSL certificates, but for now I've postponed that and I'm trying to work with an insecure registry. To that end I've created the following pod to run my registry (I know it's a pod and not a replica set or deployment -- this is only for experimental purposes and I'll make it cleaner once it's working):
apiVersion: v1
kind: Pod
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  containers:
    - name: docker-registry
      image: registry:2
      ports:
        - containerPort: 80
          hostPort: 80
      env:
        - name: REGISTRY_HTTP_ADDR
          value: 0.0.0.0:80
I then created the following NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-external
  labels:
    app: docker-registry
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 32000
  selector:
    app: docker-registry
I have a load balancer set up in front of my Kubernetes cluster which I configured to route traffic on port 80 to port 32000. So I can hit this registry at http://example.com
I then updated my local /etc/docker/daemon.json as follows:
{
  "insecure-registries": ["example.com"]
}
With this I was able to push an image to my registry successfully:
> docker pull ubuntu
> docker tag ubuntu example.com/my-ubuntu
> docker push example.com/my-ubuntu
The push refers to repository [example.com/my-ubuntu]
cc9d18e90faa: Pushed
0c2689e3f920: Pushed
47dde53750b4: Pushed
latest: digest: sha256:1d7b639619bdca2d008eca2d5293e3c43ff84cbee597ff76de3b7a7de3e84956 size: 943
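As a sanity check, the registry's standard v2 catalog endpoint should now list the pushed image:
> curl http://example.com/v2/_catalog
{"repositories":["my-ubuntu"]}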
Now I want to try and pull this image when creating a pod. So I created the following ClusterIP service to make my registry accessible within my cluster:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-internal
  labels:
    app: docker-registry
spec:
  type: ClusterIP
  ports:
    - targetPort: 80
      port: 80
  selector:
    app: docker-registry
Then I created a secret:
apiVersion: v1
kind: Secret
metadata:
  name: local-docker
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ewoJImluc2VjdXJlLXJlZ2lzdHJpZXMiOiBbImRvY2tlci1yZWdpc3RyeS1pbnRlcm5hbCJdCn0K
The base64 bit decodes to:
{
  "insecure-registries": ["docker-registry-internal"]
}
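For reference, the encoded string can be reproduced by piping the JSON through base64 (the exact output depends on whitespace in the JSON):
> echo '{ "insecure-registries": ["docker-registry-internal"] }' | base64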
Finally, I created the following pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-docker
  labels:
    name: test
spec:
  imagePullSecrets:
    - name: local-docker
  containers:
    - name: test
      image: docker-registry-internal/my-ubuntu
When I tried to create this pod (kubectl create -f test-pod.yml) and looked at my cluster, this is what I saw:
> kubectl get pods
NAME READY STATUS RESTARTS AGE
test-docker 0/1 ErrImagePull 0 4s
docker-registry 1/1 Running 0 34m
> kubectl describe pod test-docker
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m33s default-scheduler Successfully assigned default/test-docker to pool-uqa-dev-3sli8
Normal Pulling 3m22s (x2 over 3m32s) kubelet Pulling image "docker-registry-internal/my-ubuntu"
Warning Failed 3m22s (x2 over 3m32s) kubelet Failed to pull image "docker-registry-internal/my-ubuntu": rpc error: code = Unknown desc = Error response from daemon: pull access denied for docker-registry-internal/my-ubuntu, repository does not exist or may require 'docker login'
Warning Failed 3m22s (x2 over 3m32s) kubelet Error: ErrImagePull
Normal SandboxChanged 3m19s (x7 over 3m32s) kubelet Pod sandbox changed, it will be killed and re-created.
Normal BackOff 3m18s (x6 over 3m30s) kubelet Back-off pulling image "docker-registry-internal/my-ubuntu"
Warning Failed 3m18s (x6 over 3m30s) kubelet Error: ImagePullBackOff
It's clearly failing to find the host "docker-registry-internal", despite the ClusterIP service.
I tried inspecting a pod from the inside using a trick I found online:
> kubectl run -i --tty --rm debug --image=ubuntu --restart=Never -- bash
If you don't see a command prompt, try pressing enter.
root@debug:/# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.1.67 debug
It doesn't seem like ClusterIP services are being added to the /etc/hosts file, so I'm not sure how services are supposed to find one another?
I tried watching several Kubernetes tutorials on general service communication (e.g. an app pod communicating with a redis pod) and every time all they did was supply the service name as a host and it magically connected. I'm not sure if I'm missing something. Bear in mind I'm brand new to Kubernetes so the internals are still mystical to me.
I have an Express app running in Docker at http://127.0.0.1:3000/.
I am trying to access it via a MicroK8s service at http://127.0.0.1:30002/, but I'm getting "not found". Here is my deployment file for Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-node
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      simple: node
  template:
    metadata:
      labels:
        simple: node
    spec:
      containers:
        - name: simple-node
          image: simple-node:1.0
---
apiVersion: v1
kind: Service
metadata:
  name: simple-node
  namespace: default
spec:
  type: NodePort
  selector:
    simple: node
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30002
Here are my services and deployment in MicroK8s.
You have to tag the image and push it to a Docker registry. MicroK8s ships with a built-in registry, and the containerd daemon used by MicroK8s is configured to trust this insecure registry at localhost:32000. To upload images, we have to tag simple-node with the registry prefix before pushing it.
You can install the registry with:
microk8s enable registry
Execute the commands below (aaabbccc stands for the image ID produced by the build):
$ docker build .
$ docker tag aaabbccc localhost:32000/simple-node:1.0
$ docker push localhost:32000/simple-node:1.0
Then reference the image as localhost:32000/simple-node:1.0 in your deployment.
Pushing to this insecure registry may fail in some versions of Docker unless the daemon is explicitly configured to trust it. To address this, edit /etc/docker/daemon.json and add:
{
  "insecure-registries" : ["localhost:32000"]
}
Then restart Docker to load the new configuration:
$ sudo systemctl restart docker
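You can then confirm the registry is reachable and the image landed there before applying the deployment:
$ curl http://localhost:32000/v2/_catalog
It should return something like {"repositories":["simple-node"]}.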
Take a look: registry-built-in.