I am trying to build CI/CD locally with Jenkins and minikube.
I run minikube on my machine (the host) with the Docker driver, and I run Jenkins in a container too.
Both are on the same Docker network.
To run kubectl commands inside a Jenkins pipeline, I need to access the minikube cluster from the container that is running Jenkins.
I've tried using the container name as the host, but it didn't work.
I'm out of ideas; can someone help me?
I ran into the same issue: I could not access $(minikube ip) from an external Docker container, while access from the host machine was fine.
Running the Docker container with the --network host option solved the issue.
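A minimal sketch of that workaround, assuming the official Jenkins image (your image and container name may differ):
# Re-run the Jenkins container in the host's network namespace,
# so minikube's API server is reachable exactly as it is from the host
docker run -d --name jenkins --network host jenkins/jenkins:lts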
Running kubectl commands from a pod (container) is possible and simple to achieve, although it's more practical, and recommended, to use the Kubernetes API instead.
Either way, you are required to give the right permissions to your pods so they can authenticate and make k8s API calls (kubectl is just an application that talks to your cluster through the API).
Here is a good example by mster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-101
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-101
  template:
    metadata:
      labels:
        app: k8s-101
    spec:
      serviceAccountName: k8s-101-role
      containers:
        - name: k8s-101
          imagePullPolicy: Always
          image: yourrepo/yourcontainer
          ports:
            - name: app
              containerPort: 3000
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-101-role
subjects:
  - kind: ServiceAccount
    name: k8s-101-role
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-101-role
Here we are granting cluster-admin rights to the Deployment's Pods. Consider this a bad example: it's dangerous, because it exposes your whole cluster.
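A safer alternative is a namespaced Role limited to what your pipeline actually needs. A minimal sketch, assuming read-only access to Pods in the default namespace is enough (the Role name is illustrative):
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-101-read-pods
  namespace: default
rules:
  # Read-only access to Pods in this namespace
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-101-read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: k8s-101-role
    namespace: default
roleRef:
  kind: Role
  name: k8s-101-read-pods
  apiGroup: rbac.authorization.k8s.io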
Next you have to prepare your containers to have kubectl built in:
Download kubectl and install it inside the container, or
Build your application image with kubectl copied into it.
Voila! kubectl provides a rich CLI for managing your Kubernetes cluster.
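For example, a minimal sketch of the download step, using the official dl.k8s.io release URL (the pinned version here is an assumption; match your cluster's version):
# Fetch a static kubectl binary and put it on the PATH (run during image build)
curl -LO "https://dl.k8s.io/release/v1.27.0/bin/linux/amd64/kubectl"
chmod +x kubectl
mv kubectl /usr/local/bin/kubectl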
If you prefer to talk directly to the API, you don't need to do anything else. Just go to the documentation to understand how to make calls, and also check Access Clusters Using the Kubernetes API.
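If you go the API route, a rough sketch of such a call from inside a pod, using the ServiceAccount token that Kubernetes mounts automatically (the paths are the standard in-cluster defaults):
# Run from inside a pod whose ServiceAccount can list pods
APISERVER=https://kubernetes.default.svc
SA=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat ${SA}/token)
# List pods in the default namespace through the REST API
curl --cacert ${SA}/ca.crt \
  --header "Authorization: Bearer ${TOKEN}" \
  ${APISERVER}/api/v1/namespaces/default/pods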
Related
I have a local kubernetes cluster up and running using k3s. It works like a charm so far.
On it I'm running a custom Docker registry from which I want to pull images for other deployments.
The registry is exposed to the host by means of a NodePort service. Internally it has port 5000, externally it's on port 31320.
I can push docker images to the registry from the host by tagging them as myhostname:31320/myimage:latest. This works great too.
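For reference, the push from the host looks roughly like this (hostname and port as above):
docker tag myimage:latest myhostname:31320/myimage:latest
docker push myhostname:31320/myimage:latest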
Now I want to use this image in a basic Job deployment, using the whole tag myhostname:31320/myimage:latest as the container image entry, like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
spec:
  template:
    metadata:
      name: hello-world-pod
    spec:
      containers:
        - name: hello-world
          image: myhostname:31320/myimage:latest
      restartPolicy: Never
Unfortunately, I keep getting a 400 BadRequest error stating: image can't be pulled. If I try using the internal service name of the registry and the internal port instead, like in private-registry:5000/myimage:latest, I'm getting the same error.
I suppose I cannot use private-registry:5000/myimage:latest because that's just not the tag of the image. I cannot push the image to private-registry:5000/myimage:latest because the host private-registry is only known inside the cluster and the port 5000 is not exposed to the host.
So... I'm stuck. What can I do about this? How do I push images from the host to the registry and allow them to be pulled from inside the cluster?
Kubernetes has rich documentation on how to configure multiple registries so that deployments/pods can pull from public or even private registries. To do so you can create an image pull secret k8s resource (docs), either by running this command:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword>
or by deploying this resource in your cluster:
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: awesomeapps
data:
  # Make sure you base64-encode the whole file, e.g.:
  # base64 -w 0 registry.json
  .dockerconfigjson: <registry.json>
type: kubernetes.io/dockerconfigjson
registry.json example
{
  "auths": {
    "your.private.registry.example.com": {
      "username": "janedoe",
      "password": "xxxxxxxxxxx",
      "email": "jdoe@example.com",
      "auth": "c3R...zE2"
    }
  }
}
And now you can simply attach this imagePullSecret to your Job's pod template:
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
spec:
  template:
    metadata:
      name: hello-world-pod
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: hello-world
          image: myhostname:31320/myimage:latest
      restartPolicy: Never
PS
You might also consider adding your registry to the Docker daemon as an insecure registry if you encounter other issues (sketch below).
You can check this SO question.
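A hedged sketch of that daemon config, reusing the hostname/port from the question (edit /etc/docker/daemon.json on the host, then restart the Docker daemon):
{
  "insecure-registries": ["myhostname:31320"]
}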
A colleague created a K8s cluster for me. I can run services in that cluster without any problem. However, I cannot run services that depend on an image from Amazon ECR, which I really do not understand. Probably, I made a small mistake in my deployment file and thus caused this problem.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
Here is my service file:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello
spec:
  type: NodePort
  ports:
    - port: 5000
      nodePort: 30002
      protocol: TCP
  selector:
    app: hello
On the master node, I have run this to ensure Kubernetes knows about the deployment and the service:
kubectl create -f dep.yml
kubectl create -f service.yml
I used the K8s extension in vscode to check the logs of my pods.
This is the error I get:
Error from server (BadRequest): container "hello" in pod
"hello-deployment-xxxx-49pbs" is waiting to start: trying and failing
to pull image.
Apparently, pulling is an issue. This does not happen when using a public image from the public Docker Hub. Logically, this would be a rights issue, but it looks like it is not. I get no error message when running this command on the master node:
docker pull xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
This command just pulls my image.
I am confused now. I can pull my image with docker pull on the master node, but K8s fails doing the pull. Am I missing something in my deployment file? Some property that says: "repositoryIsPrivateButDoNotComplain"? I just do not get it.
How to fix this so K8s can easily use my image from Amazon ECR?
You should create and use secrets for the ECR authorization.
This is what you need to do.
Create a secret for the Kubernetes cluster: execute the shell script below from a machine that can access the AWS account in which the ECR registry is hosted. Change the placeholders to match your setup. Make sure the machine on which you execute this script has the AWS CLI installed and AWS credentials configured. If you are using a Windows machine, execute this script in a Cygwin or Git Bash console.
#!/bin/bash
ACCOUNT=<AWS_ACCOUNT_ID>
REGION=<REGION>
SECRET_NAME=<SECRETE_NAME>
EMAIL=<SOME_DUMMY_EMAIL>
TOKEN=`/usr/local/bin/aws ecr --region=$REGION --profile <AWS_PROFILE> get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`
kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
--docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
Change the deployment and add an imagePullSecrets section, which your pods will use while downloading the image from ECR.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
      imagePullSecrets:
        - name: SECRET_NAME
Create the pods and service.
If it succeeds, the secret will still expire in 12 hours. To overcome that, set up a cron job that recreates the secret on the Kubernetes cluster periodically; for that, use the same script given above.
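A hedged crontab sketch (the script path, log path, and schedule are assumptions):
# Refresh the ECR pull secret every 8 hours, well within the 12-hour expiry
0 */8 * * * /opt/scripts/recreate-ecr-secret.sh >> /var/log/ecr-secret.log 2>&1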
For the 12-hour problem: if you are using Kubernetes 1.20, please configure and use the kubelet image credential provider:
https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/
You need to enable the alpha feature gate KubeletCredentialProviders in your kubelet.
If you are using a lower Kubernetes version and this feature is not available, then use https://medium.com/@damitj07/how-to-configure-and-use-aws-ecr-with-kubernetes-rancher2-0-6144c626d42c
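For the 1.20 path, a hedged sketch of what that setup might look like per the docs linked above (file paths, bin dir, and cache duration are assumptions):
# Kubelet flags (e.g. in a systemd drop-in):
#   --feature-gates=KubeletCredentialProviders=true
#   --image-credential-provider-config=/etc/kubernetes/credential-provider-config.yaml
#   --image-credential-provider-bin-dir=/usr/local/bin
# /etc/kubernetes/credential-provider-config.yaml:
apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1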
I'm running my Kubernetes cluster using Docker, not a Minikube cluster (which requires a lot of memory). However, after applying the required files, I can't get an external URL (like I used to have when I used Minikube) to open the app in my Chrome browser.
Consider the followings:
The Pod:
apiVersion: v1
kind: Pod
metadata:
  name: webapp-release-0-5
  labels:
    app: webapp
    release: "0-5"
spec:
  containers:
    - name: webapp
      image: richardchesterwood/k8s-fleetman-webapp-angular:release0-5
And its Service:
apiVersion: v1
kind: Service
metadata:
  name: fleetman-webapp
spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    app: webapp
    release: "0-5"
  ports:
    - name: http
      port: 80
      nodePort: 30080
  type: NodePort
After applying both from the command line (Windows 10 CLI):
>>> kubectl get services
fleetman-webapp NodePort 10.96.227.189 <none> 80:30080/TCP 5m57s
>>> kubectl get po
webapp 1/1 Running 0 12m
webapp-release-0-5 1/1 Running 0 12m
However, I don't have an external URL for this Pod to put in my browser to check out the app, like I used to have in Minikube.
How can I generate such a URL?
It's particularly hard to make this work in a bespoke way. I would suggest using kind, which creates Kubernetes nodes as Docker containers. You can then access a NodePort service via http://<NODEIP>:<NODEPORT>; to get the node IP, use kubectl get nodes.
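A short sketch of that lookup, assuming the fleetman-webapp Service from the question:
# Node IPs are shown in the INTERNAL-IP column
kubectl get nodes --output=wide
# Confirm the NodePort (30080 in the manifest above)
kubectl get service fleetman-webapp
# Then browse http://<NODEIP>:30080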
I created a YAML file to create a RabbitMQ Kubernetes cluster. I can see the pods, but when I run kubectl get deployment, nothing shows up there, and I can't access the RabbitMQ UI page.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbit
  name: rabbit
spec:
  ports:
    - port: 5672
      protocol: TCP
      name: mqtt
    - port: 15672
      protocol: TCP
      name: ui
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbit
spec:
  serviceName: rabbit
  replicas: 3
  selector:
    matchLabels:
      app: rabbit
  template:
    metadata:
      labels:
        app: rabbit
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq
      nodeSelector:
        rabbitmq: "clustered"
@arghya-sadhu's answer is correct.
NB I'm unfamiliar with RabbitMQ, but you may need to use a different image (see the 'Management Plugin' images, e.g. rabbitmq:3-management) to include the UI.
See below for more details.
You should be able to hack your way to the UI on one (!) of the Pods via:
PORT=8888
kubectl port-forward pod/rabbit-0 --namespace=${NAMESPACE} ${PORT}:15672
And then browse localhost:${PORT} (if 8888 is unavailable, try another).
I suspect (!) this won't work unless you use the image with the management plugin.
Plus
The Service needs to select the StatefulSet's Pods
Within the Service spec you should add perhaps:
selector:
  app: rabbit
Presumably (!?) you are using a private repo (because you have imagePullSecrets).
If you don't and wish to use DockerHub, you may remove the imagePullSecrets section.
It's useful to document (!) container ports albeit not mandatory:
In the StatefulSet
ports:
  - containerPort: 5672
  - containerPort: 15672
Debug
NAMESPACE="default" # Or ...
Ensure the StatefulSet is created:
kubectl get statefulset/rabbit --namespace=${NAMESPACE}
Check the Pods:
kubectl get pods --selector=app=rabbit --namespace=${NAMESPACE}
You can check that the Pods are bound to a (!) Service:
kubectl describe endpoints/rabbit --namespace=${NAMESPACE}
NB You should see 3 addresses (one per Pod)
Get the NodePort with either:
kubectl get service/rabbit --namespace=${NAMESPACE} --output=json
kubectl describe service/rabbit --namespace=${NAMESPACE}
You will need to use the NodePort to access both the MQTT endpoint and the UI.
StatefulSets and Deployments are different Kubernetes resources. You have created a StatefulSet; that's why you don't see a Deployment. If you run kubectl get statefulset you should see it. Both StatefulSets and Deployments ultimately create pods, so you should be able to see the RabbitMQ pods if you do kubectl get pods.
Since you have created a NodePort service, you should be able to access it via http://<nodeip>:<nodeport>, where nodeip is the IP of any worker node in your Kubernetes cluster.
You can find out the NodePort (a number between 30000 and 32767) with
kubectl describe services rabbit
Here is the doc on accessing a Nodeport service from outside the cluster.
I have created a Kubernetes cluster and deployed Jenkins with the following file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-ci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: jenkins-ci
    spec:
      containers:
        - name: jenkins-ci
          image: jenkins:2.32.2
          ports:
            - containerPort: 8080
and the service with:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-cli-lb
spec:
  type: NodePort
  ports:
    # the port that this service should serve on
    - port: 8080
      nodePort: 30000
  # label keys and values that must match in order to receive traffic for this service
  selector:
    run: jenkins-ci
Now I can access the Jenkins UI in my browser without any problems. My issue: I've run into a situation in which I need to restart the Jenkins service manually. How can I do that?
Just run kubectl delete pods -l run=jenkins-ci. This will delete all pods with this label (your Jenkins containers).
Since they are managed by a Deployment, it will re-create the containers. Network routing will be adjusted automatically (again because of the label selector).
See https://kubernetes.io/docs/reference/kubectl/cheatsheet/
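A quick usage sketch, with a watch to confirm the replacement pod comes up:
# Delete the Jenkins pod(s); the Deployment re-creates them
kubectl delete pods -l run=jenkins-ci
# Watch the replacement come up
kubectl get pods -l run=jenkins-ci --watch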
You can use the command below to enter the pod's container:
$ kubectl exec -it <jenkins-pod-name> -- /bin/bash
Then run the Jenkins service restart command inside the pod.
For more details please refer to: how to restart service inside pod in kubernetes cluster.