How to get docker image from Harbor registry in Kubernetes cluster? - docker

I installed Harbor on one host (192.168.33.10) and a Kubernetes cluster on other hosts.
I pushed Docker images from a client to the Harbor host successfully. On the Kubernetes master host, I can also pull that image from the Harbor host successfully:
$ docker pull 192.168.33.10/hello-world/hello-world
Using default tag: latest
latest: Pulling from hello-world/hello-world
3d19aeb159d4: Pull complete
Digest: sha256:d9f41d096c0e1881e7a24756db9b7315d91c8d4bf1537f6eb10c36edeedde59f
Status: Downloaded newer image for 192.168.33.10/hello-world/hello-world:latest
But then I created a Kubernetes deployment YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: 192.168.33.10/hello-world/hello-world
        name: hello-world
        imagePullPolicy: Always
Then I ran kubectl create -f deployment.yaml.
The Kubernetes dashboard showed:
Failed to pull image "192.168.33.10/hello-world/hello-world": rpc error: code = 2 desc = Error response from daemon: {"message":"Get https://192.168.33.10/v2/: dial tcp 192.168.33.10:443: getsockopt: connection refused"}
Error syncing pod
I already set insecure-registries in /etc/docker/daemon.json:
{ "insecure-registries":["192.168.33.10"] }
How can I pull that image from Kubernetes?
Edit
I am using Kubernetes on a Rancher server cluster. Even after I set the Harbor server's IP, username, and password, it still can't access the registry.
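One thing worth checking: the insecure-registries setting only takes effect on the node where it is set, and only after a Docker daemon restart, so in a cluster every node that might schedule the pod needs it, not just the master. If Harbor also requires a login, the pod needs a pull secret. A minimal sketch, assuming the registry IP from the question and a hypothetical secret name harbor-secret (the credentials are placeholders):

```shell
# On EVERY Kubernetes node (not just the master), allow the insecure
# registry, then restart Docker so the setting takes effect:
echo '{ "insecure-registries": ["192.168.33.10"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

# If Harbor requires a login, create a docker-registry pull secret
# (username/password here are placeholders for your Harbor account):
kubectl create secret docker-registry harbor-secret \
  --docker-server=192.168.33.10 \
  --docker-username=admin \
  --docker-password=<your-harbor-password>
```

The deployment's pod spec would then reference the secret under spec.template.spec.imagePullSecrets so the kubelet can authenticate when pulling.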

Related

Kubernetes commands are not running inside the Jenkins container

I have deployed a Jenkins container inside the k8s cluster successfully, but I can't execute kubectl get pod -A, even though kubectl is also installed inside the Jenkins container. How can I run k8s commands inside the Jenkins container so that I can set up CI/CD for k8s?
Below are the errors I get when I run a k8s command inside the Jenkins container, which is running inside a k8s pod.
You need to create a cluster role binding for the jenkins service account.
Create a file named sa.yaml with the content below:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins
Create the role binding with:
kubectl apply -f sa.yaml
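To confirm the binding took effect, kubectl can check permissions while impersonating the service account; this assumes the jenkins name and namespace from the sa.yaml above:

```shell
# Should answer "yes" once the cluster-admin binding is in place:
kubectl auth can-i get pods --all-namespaces \
  --as=system:serviceaccount:jenkins:jenkins
```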

Unable to connect to redis cluster on kubernetes from my golang application deployed within the same cluster

I deployed a redis cluster on Kubernetes with bitnami helm charts (https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster).
I can successfully connect to the Redis cluster from within the Kubernetes cluster by running the below commands:
kubectl run my-redis-release-client --rm -it --image docker.io/bitnami/redis:4.0.11-debian-9 -- bash
redis-cli -h redis-cluster-0.redis-cluster-headless.redis
But I am unable to connect to redis cluster from my golang application deployed within the same cluster.
The redis connection string uri I used on my golang application is "redis://redis-cluster-0.redis-cluster-headless.redis:6379". This is following the "redis-pod-name.redis-service-name.namespace" convention.
NOTE: I want to be able to access the redis cluster from only within the Kubernetes cluster. I don’t want to grant external access. Please help...
A headless service is for when you don't need load-balancing and a single Service IP; it is not the mechanism for restricting access to within the Kubernetes cluster.
You can create a Service to expose Redis. Below is an example of the ClusterIP type, which only lets you connect from within the cluster, not from outside it.
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
The Redis pod or deployment needs to have the matching label app: redis.
Your Golang app can then connect to it at redis.default.svc.cluster.local:6379.
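As a quick sanity check before changing the Go code, the same throwaway client pod from the question can be pointed at the new Service (the image tag is copied from the question; the Service name and namespace assume the manifest above):

```shell
# Run a one-off redis-cli against the new ClusterIP Service;
# a reachable server replies PONG:
kubectl run redis-check --rm -it \
  --image docker.io/bitnami/redis:4.0.11-debian-9 -- \
  redis-cli -h redis.default.svc.cluster.local -p 6379 ping
```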

How to curl container deployed on Kubernetes?

I built my Docker image and uploaded it to Amazon ECS (image repository).
I've written a deployment.yaml file and ran kubectl apply -f deployment.yaml.
Worth noting I've used kops to deploy the K8s cluster to AWS EC2
I can see the containers are running in Kubernetes pods using the Kubernetes Dashboard. Also kubectl get pods -o wide also shows me the pods.
The image I deployed is a simple API that exposes one route. My problem is that I have no idea how to query the container I just deployed.
Dockerfile of deployed image:
FROM node:lts
COPY package*.json tsconfig.json ./
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/index.js"]
deployment.yaml (kubectl apply -f deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
name: vuekcal
spec:
selector:
matchLabels:
app: vuekcal
replicas: 2
template:
metadata:
labels:
app: vuekcal
spec:
containers:
- name: search
image: [my-repo-id].dkr.ecr.eu-central-1.amazonaws.com/k8s-search
ports:
- containerPort: 3000
What I tried:
Run kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
vuekcal-6956589484-7f2kx 1/1 Running 0 16m 100.96.2.6 ip-172-20-54-21.eu-central-1.compute.internal <none> <none>
vuekcal-6956589484-f4pqf 1/1 Running 0 16m 100.96.1.7 ip-172-20-59-29.eu-central-1.compute.internal <none> <none>
If I take an IP address from the IP column and try to curl it, nothing happens. I assume this is because those IPs are internal to the cluster.
I also tried finding the K8s node that is running my pod and curling that node's public IP address.
Same thing: no response.
Everything works fine if I run the container locally with docker run k8s-search.
I have no idea what to do here. How do I query the image that deployment.yaml sets up inside a Kubernetes node?
To access the pod from outside the cluster, you need to create either a NodePort or a LoadBalancer type Service.
kubectl expose deployment vuekcal --type=NodePort --name=example-service
Then access it via curl http://<public-node-ip>:<node-port>
Make sure you have run the kubectl expose command above.
Public node IP
To get the public node IP, run the following command:
kubectl get nodes -o wide
and look at the EXTERNAL-IP column. This is the public IP of the node that is running your container, and it is where you should connect. For example, the external IP of your node could be 133.71.33.7. Remember this IP.
NodePort
It's different than the containerPort in your deployment.yaml.
To find the NodePort, run this command:
kubectl describe service example-service
Replace example-service with whatever you wrote in --name= when running kubectl expose deployment ... (first command in this post)
After you run the command, look for the NodePort line in the output. That is the port you should use when connecting.
Putting it together
curl http://133.71.33.7:31110
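The two lookups can also be scripted so the IP and port never have to be copied by hand; this sketch assumes the example-service name from above and a node that reports an ExternalIP:

```shell
# Look up the first node's external IP and the service's NodePort,
# then curl the API through them:
NODE_IP=$(kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
NODE_PORT=$(kubectl get service example-service \
  -o jsonpath='{.spec.ports[0].nodePort}')
curl "http://$NODE_IP:$NODE_PORT"
```

On a kops-provisioned AWS cluster, the nodes' security group must also allow inbound traffic on the NodePort range, or the curl will time out even when everything inside the cluster is correct.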

Can't expose port on local Kubernetes cluster (via Docker Desktop)

I am using the local Kubernetes cluster from Docker Desktop on Windows 10. No virtual machines, no minikubes.
I need to expose a port on my localhost for some service.
For example, I take kubernetes-bootcamp image from the official tutorial:
docker pull jocatalin/kubernetes-bootcamp:v2
Put it in the local registry:
docker tag jocatalin/kubernetes-bootcamp:v2 localhost:5000/kubernetes-bootcamp
docker push localhost:5000/kubernetes-bootcamp
Then create a deployment with this image:
kubectl create deployment kubernetes-bootcamp --image=localhost:5000/kubernetes-bootcamp
Then let's expose a port for accessing our deployment:
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
kubectl get services
kubernetes-bootcamp NodePort 10.102.167.98 <none> 8080:32645/TCP 8s
So we found out that the exposed port for our deployment is 32645. Let's try to request it:
curl localhost:32645
Failed to connect to localhost port 32645: Connection refused
Nothing works.
But if I try port-forward everything is working:
kubectl port-forward deployment/kubernetes-bootcamp 7000:8080
Forwarding from 127.0.0.1:7000 -> 8080
Forwarding from [::1]:7000 -> 8080
Handling connection for 7000
Another console:
curl localhost:7000
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-7b5598d7b5-8qf9j | v=2
What am I doing wrong? I have found out several posts like mine, but none of them help me.
Try running this command:
kubectl get svc | grep kubernetes-bootcamp
After this, expose the pod to your network with:
kubectl expose pod <podname> --type=NodePort
After that, you can check the URL with, for example:
minikube service <service-name> --url
So I found the root of the problem: the local Kubernetes cluster was somehow working improperly.
How I solved the problem:
Remove C:\ProgramData\DockerDesktop\pki
Recreate all pods, services, and deployments
Now the same script I used before works great.

Minikube not pull image from local docker container registry

I am trying to deploy containers to local Kubernetes. So far I have installed the Docker daemon, minikube, and the minikube dashboard, and they are all working fine. I have also set up a local container registry on port 5000 and pushed 2 images of my application to it. I can see them in the browser at http://localhost:5000/v2/_catalog.
Now, when I try to bring up a pod using minikube:
kubectl apply -f ./docker-compose-k.yml --record
I get an error on the dashboard like this:
Failed to pull image "localhost:5000/coremvc2": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Here is my compose file:-
apiVersion: apps/v1
kind: Deployment
metadata:
  name: core23
  labels:
    app: codemvc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coremvc
  template:
    metadata:
      labels:
        app: coremvc
    spec:
      containers:
      - name: coremvc
        image: localhost:5000/coremvc2
        ports:
        - containerPort: 80
        imagePullPolicy: Always
I don't know why these images are not pulled, as the Docker daemon and Kubernetes are both on the same machine. I have also tried this with a Docker Hub image and it works fine, but I want to do this using local images.
Please give me a hint or any guidance.
Thank you,
Based on the comment, you started minikube with minikube start (without specifying the driver).
That means minikube is running inside a VirtualBox VM. To make your use case work, you have two choices:
The hard way: set up the connection between your VM and your host and use your host IP.
The easy way: connect to your VM using minikube ssh and install your registry there. Then your deployment should work with your VM's IP.
If you don't want to use VirtualBox, you should read the documentation about the other drivers and how to use them.
Hope this helps!
The issue is that you set up the Docker registry on your host machine, while minikube runs in a virtualized environment (in your case, VirtualBox).
That is why you receive the "connection refused" error when it attempts to fetch the image on port 5000: there is no process inside minikube listening on 5000, because your registry is deployed outside of minikube.
As Marc said, there are two ways to fix it, and I have reproduced both. The hard way will get you to:
Failed to pull image "10.150.0.4:5000/my-alpine": rpc error: code = Unknown desc = Error response from daemon: Get https://10.150.0.4:5000/v2/: http: server gave HTTP response to HTTPS client
So you'll have to set up a secure Docker registry according to the Docker documentation.
The easy way is to set it up on top of your minikube:
my#linux-vm2:~$ minikube ssh
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
...
Status: Downloaded newer image for registry:2
$ docker pull nginx:latest
...
Status: Downloaded newer image for nginx:latest
$ docker tag nginx:latest localhost:5000/my-nginx
$ docker push localhost:5000/my-nginx
$ logout
my#linux-vm2:~$ kubectl apply -f ./docker-compose-k.yml --record
deployment.apps/core23 created
my#linux-vm2:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
core23-b44b794cb-vmhwl 1/1 Running 0 4s
my#linux-vm2:~$ kubectl describe pods
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/core23-b44b794cb-vmhwl to minikube
Normal Pulling 9s kubelet, minikube Pulling image "localhost:5000/my-nginx"
Normal Pulled 9s kubelet, minikube Successfully pulled image "localhost:5000/my-nginx"
Normal Created 9s kubelet, minikube Created container coremvc
Normal Started 9s kubelet, minikube Started container coremvc
Please note that I've been using nginx instead of coremvc2 in this example (but still steps are the same).
To sum it up, it is possible to achieve the result you need in a few different ways. Please try and let us know how it went. Cheers :)
localhost:5000 is pointing to the pod itself.
If the Docker registry is running on the same host where minikube is running:
Get the IP address of the host (e.g. using ifconfig). Say, it is 10.0.2.15.
Tag the image:
docker tag coremvc2:latest 10.0.2.15:5000/coremvc2:latest
Push the tagged image to the local registry:
docker push 10.0.2.15:5000/coremvc2:latest
In the Deployment, change:
image: localhost:5000/coremvc2
to
image: 10.0.2.15:5000/coremvc2:latest
EDIT: If getting "...http: server gave HTTP response to HTTPS client" error, you could configure the local Docker registry as insecure registry for the local Docker daemon by editing /etc/docker/daemon.json (create it if it doesn’t exist):
{
... any other settings or remove this line ...
"insecure-registries": ["10.0.2.15:5000"]
}
...then restart docker:
sudo service docker restart
Not sure how you run the local Docker registry, but this is one way:
docker run -d -p 5000:5000 --name registry registry:2
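Whichever way the registry is run, it can be checked independently of Kubernetes before debugging the cluster side; this assumes the 10.0.2.15 host address used in the examples above:

```shell
# Lists the repositories the registry knows about; the pushed image
# should appear here before Kubernetes can be expected to pull it:
curl http://10.0.2.15:5000/v2/_catalog
```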