I have deployed a Jenkins container inside the k8s cluster successfully, but I can't execute "kubectl get pod -A" even though kubectl is also installed inside the Jenkins container. How can I run k8s commands inside the Jenkins container so I can set up CI/CD for k8s?
Below are the errors I get when I run a k8s command inside the Jenkins container, which is running inside a k8s pod.
You need to create the cluster role binding for the service account jenkins.
Create a file with the name sa.yaml and add the below content to it:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: jenkins
Create the role binding using the command:
kubectl apply -f sa.yaml
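To verify the binding works, you can impersonate the service account with kubectl auth can-i. This assumes the Jenkins pod actually runs under the jenkins service account in the jenkins namespace, as the binding above expects, and that you run it with cluster-admin rights:

kubectl auth can-i get pods --all-namespaces --as=system:serviceaccount:jenkins:jenkins

If this prints yes, kubectl get pod -A should now work from inside the Jenkins container. Note that cluster-admin grants full cluster access; for production you may want to bind a narrower ClusterRole instead.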
I'm running minikube using
minikube start --driver=docker
Then I use the following sample commands to create and expose a service:
kubectl create deployment hello-minikube1 --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-minikube1 --type=NodePort --port=8080
Problem
The command minikube service hello-minikube1 --url doesn't return a service URL. Using <minikube ip>:<nodePort> also doesn't work - connections just get stuck.
I tried creating pods with different images and still can't access the service externally.
I deployed a redis cluster on Kubernetes with bitnami helm charts (https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster).
I can successfully connect to the Redis cluster from within the Kubernetes cluster by running the below commands:
kubectl run my-redis-release-client --rm -it --image docker.io/bitnami/redis:4.0.11-debian-9 -- bash
redis-cli -h redis-cluster-0.redis-cluster-headless.redis
But I am unable to connect to the redis cluster from my golang application deployed within the same cluster.
The redis connection string uri I used on my golang application is "redis://redis-cluster-0.redis-cluster-headless.redis:6379". This is following the "redis-pod-name.redis-service-name.namespace" convention.
NOTE: I want to be able to access the redis cluster from only within the Kubernetes cluster. I don’t want to grant external access. Please help...
A headless service is for when you don't need load-balancing or a single Service IP. A headless service is not the right tool for accessing the redis cluster from within the Kubernetes cluster.
You can create a Service to expose redis. Below is an example of the ClusterIP type, which only lets you connect to it from within the cluster and not from outside.
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis
The redis pod or deployment needs to have the matching label app: redis.
Then you can connect to it from your Golang app using redis.default.svc.cluster.local:6379.
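To sanity-check the Service before pointing the Golang app at it, you can run a throwaway redis-cli pod. This is just a sketch reusing the bitnami image from the question, and it assumes the Service name redis and namespace default from the manifest above:

kubectl run redis-test --rm -it --image docker.io/bitnami/redis:4.0.11-debian-9 -- redis-cli -h redis.default.svc.cluster.local -p 6379 ping

A PONG reply means DNS resolution and routing through the Service both work, so any remaining failure is in the application's connection string.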
In short, I use Google Compute Engine (external IP: 34.73.89.55; all ports and protocols are opened), then I install Docker, minikube, and kubectl. Then:
minikube start --driver=docker
minikube tunnel
kubectl create deployment hello-minikube1 --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-minikube1 --type=LoadBalancer --port=8080
kubectl get svc
and I get:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube1 LoadBalancer 10.110.130.109 10.110.130.109 8080:31993/TCP 9m22s
My question is, why does the EXTERNAL-IP not match the host's external IP, 34.73.89.55? How can I access this service remotely via the host's external IP (e.g. from home via a browser)?
PS: I would like to keep using Google Compute Engine.
EDIT:
I also tried:
sudo minikube start --driver=none
sudo kubectl create deployment hello-minikube1 --image=k8s.gcr.io/echoserver:1.4
sudo kubectl expose deployment hello-minikube1 --type=NodePort --port=8080
wget 127.0.0.1:8080
=> does not work
By default minikube expects to run in a separate VM. This can be changed by explicitly specifying a driver.
Why did the EXTERNAL-IP not match the host's external IP?
Because minikube uses a tunnel which creates a route to services deployed with type LoadBalancer and sets their Ingress to their ClusterIP. For a detailed example, see the minikube documentation.
How can I access this service remotely via the host's external IP?
I see two options here:
More recommended: Set --driver=none
Minikube also supports a --driver=none option that runs the
Kubernetes components on the host and not in a VM. Using this driver
requires Docker and a Linux environment but not a hypervisor.
Less ideal: use port forwarding (either with iptables or a proxy); see the sketch just below.
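As a sketch of the port-forwarding option (one possible approach, not the only one): kubectl port-forward binds to 127.0.0.1 by default, so --address 0.0.0.0 is what makes the service reachable on the host's external IP:

kubectl port-forward --address 0.0.0.0 service/hello-minikube1 8080:8080

With that running on the VM, http://34.73.89.55:8080 should reach the service, provided the GCE firewall allows traffic on port 8080.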
Also remember that minikube was created for testing purposes on localhost. Keep that in mind while using it.
EDIT:
When going for --driver=none you can:
Use NodePort type instead of LoadBalancer.
Continue using LoadBalancer with a modified Service by adding:
spec:
  externalIPs:
    - <host_address>
For example:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: hello-minikube1
  name: hello-minikube1
spec:
  externalIPs:
    - <host_address>
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: hello-minikube1
  type: LoadBalancer
status:
  loadBalancer: {}
The above was tested and resulted in EXTERNAL IP = HOST IP.
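To double-check on your side (keeping the <host_address> placeholder from the manifest above; substitute your VM's external IP):

kubectl get svc hello-minikube1
curl http://<host_address>:8080

The EXTERNAL-IP column should now show the host address, and the curl from outside should reach the echoserver, assuming the GCE firewall allows port 8080.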
Please let me know if that helped.
I built my Docker image and uploaded it to Amazon ECR (image repository).
I've written a deployment.yaml file and ran kubectl apply -f deployment.yaml.
Worth noting: I've used kops to deploy the K8s cluster to AWS EC2.
I can see the containers are running in Kubernetes pods using the Kubernetes Dashboard. Also kubectl get pods -o wide also shows me the pods.
The image I deployed is a simple API that exposes one route. My problem is that I have no idea how to query the container I just deployed.
Dockerfile of deployed image:
FROM node:lts
COPY package*.json tsconfig.json ./
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/index.js"]
deployment.yaml (kubectl apply -f deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vuekcal
spec:
  selector:
    matchLabels:
      app: vuekcal
  replicas: 2
  template:
    metadata:
      labels:
        app: vuekcal
    spec:
      containers:
        - name: search
          image: [my-repo-id].dkr.ecr.eu-central-1.amazonaws.com/k8s-search
          ports:
            - containerPort: 3000
What I tried:
Run kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
vuekcal-6956589484-7f2kx 1/1 Running 0 16m 100.96.2.6 ip-172-20-54-21.eu-central-1.compute.internal <none> <none>
vuekcal-6956589484-f4pqf 1/1 Running 0 16m 100.96.1.7 ip-172-20-59-29.eu-central-1.compute.internal <none> <none>
If I get an IP address from the IP column and try to curl it, nothing happens.
I assume this is because those IPs are local.
Finding the K8s node that is running my pod and trying to curl that node's public IP address.
And same thing: no response.
Everything is fine if I run the container locally with docker run k8s-search.
I have no idea what to do here. How do I query the image that deployment.yaml sets up inside a Kubernetes node?
To access the pod from outside the cluster you need to create either a NodePort or a LoadBalancer type Service.
kubectl expose deployment vuekcal --type=NodePort --name=example-service
Then access it via curl http://<public-node-ip>:<node-port>
Make sure you ran the kubectl expose command above!
Public node IP
To get the public node IP, run the following command:
kubectl get nodes -o wide
and look at the "EXTERNAL-IP" column. This is the public IP of the node that is running your container. This is where you should try to connect. For example, the external IP of your node could be 133.71.33.7. Remember this IP.
NodePort
It's different from the containerPort in your deployment.yaml.
To find the NodePort, run this command:
kubectl describe service example-service
Replace example-service with whatever you wrote in --name= when running kubectl expose deployment ... (first command in this post)
After you run the command, look for the NodePort line in the output, for example: NodePort: <unset> 31110/TCP.
This is the port you should use when connecting.
Putting it together:
curl http://133.71.33.7:31110
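For completeness, the whole flow in one place, using the illustrative names and values from this answer (example-service, 133.71.33.7, 31110):

kubectl expose deployment vuekcal --type=NodePort --name=example-service
kubectl get nodes -o wide                  # EXTERNAL-IP column, e.g. 133.71.33.7
kubectl describe service example-service   # NodePort line, e.g. 31110/TCP
curl http://133.71.33.7:31110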
Installed Harbor on one host (192.168.33.10).
Installed a Kubernetes cluster on other hosts.
Pushed Docker images to the Harbor host from a client successfully. On the Kubernetes master host, I can also pull that image from the Harbor host successfully:
$ docker pull 192.168.33.10/hello-world/hello-world
Using default tag: latest
latest: Pulling from hello-world/hello-world
3d19aeb159d4: Pull complete
Digest: sha256:d9f41d096c0e1881e7a24756db9b7315d91c8d4bf1537f6eb10c36edeedde59f
Status: Downloaded newer image for 192.168.33.10/hello-world/hello-world:latest
Then I created a Kubernetes deployment YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - image: 192.168.33.10/hello-world/hello-world
          name: hello-world
          imagePullPolicy: Always
Then I ran kubectl create -f deployment.yaml.
The Kubernetes dashboard showed:
Failed to pull image "192.168.33.10/hello-world/hello-world": rpc error: code = 2 desc = Error response from daemon: {"message":"Get https://192.168.33.10/v2/: dial tcp 192.168.33.10:443: getsockopt: connection refused"}
Error syncing pod
I already set insecure-registries in /etc/docker/daemon.json:
{ "insecure-registries":["192.168.33.10"] }
How can I get Kubernetes to pull from that registry?
Edit
I am using Kubernetes on a Rancher server cluster. Even after I set the Harbor server's IP, username, and password, it still can't access the registry.