kubectl deploy from within kubernetes container - jenkins

How do you deploy from within a Kubernetes container, using CI/CD?
Scenario:
I am building within Kubernetes using Kaniko.
Now, how do I run kubectl within Kubernetes? I do have the right serviceAccount for it. The first problem is to have a container ready for executing kubectl.
Note the container command: /bin/cat
I found this, but it gives errors:
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-deploy
spec:
  containers:
    - name: kubectl
      image: bitnami/kubectl:latest
      imagePullPolicy: Always
      command:
        - /bin/cat
      tty: true
Errors:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 78s default-scheduler Successfully assigned default/kubectl-deploy to master
Normal Pulled 76s kubelet Successfully pulled image "bitnami/kubectl:latest" in 874.059036ms
Normal Pulled 74s kubelet Successfully pulled image "bitnami/kubectl:latest" in 860.59161ms
Normal Pulled 60s kubelet Successfully pulled image "bitnami/kubectl:latest" in 859.31958ms
Normal Pulling 33s (x4 over 77s) kubelet Pulling image "bitnami/kubectl:latest"
Normal Created 32s (x4 over 76s) kubelet Created container kubectl
Normal Started 32s (x4 over 76s) kubelet Started container kubectl
Normal Pulled 32s kubelet Successfully pulled image "bitnami/kubectl:latest" in 849.398179ms
Warning BackOff 7s (x7 over 73s) kubelet Back-off restarting failed container

When you run a Pod in Kubernetes, by default it expects it to be a long-running service. But in your case, you run a one-off command that terminates immediately. To run one-off commands in Kubernetes, it is easiest to run them as Kubernetes Jobs.
First problem is to have a container ready for executing kubectl.
Since you are using Tekton, have a look at the "deploy task" from Tekton Hub; it is configured with an image that includes kubectl.
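For illustration, here is a minimal sketch of such a Job, using the bitnami/kubectl image from the question; the service account name (deployer), the target deployment and the image tag are placeholders rather than anything from the original setup:

apiVersion: batch/v1
kind: Job
metadata:
  name: kubectl-deploy
spec:
  backoffLimit: 1
  template:
    spec:
      # service account with permission to update the target deployment (placeholder name)
      serviceAccountName: deployer
      # a one-off command is expected to exit, so it must not be restarted
      restartPolicy: Never
      containers:
        - name: kubectl
          image: bitnami/kubectl:latest
          command:
            - kubectl
            - set
            - image
            - deployment/my-app
            - my-app=registry.example.com/my-app:1.2.3

With restartPolicy: Never the one-off kubectl command is allowed to exit and the Job simply completes, whereas in a plain Pod with the default restartPolicy: Always the exit is what produces the "Back-off restarting failed container" events shown above.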

Related

Pull docker image from gitlab repository

I am trying to pull an image locally from a GitLab repository.
The YAML file looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: tester
      image: registry.gitlab.com/<my-project>/<components>
      imagePullPolicy: Always
      securityContext:
        privileged: true
  imagePullSecrets:
    - name: my-token
---
apiVersion: v1
data:
  .dockerconfigjson: <my-key>
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: my-token
  labels:
    app: tester
Then I execute: kubectl apply -f pullImage.yaml
The kubectl describe pod private-reg returns:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m1s default-scheduler Successfully assigned default/private-reg to antonis-dell
Normal Pulled 6m46s kubelet Successfully pulled image "registry.gitlab.com/<my-project>/<components>" in 1m14.136699854s
Normal Pulled 6m43s kubelet Successfully pulled image "registry.gitlab.com/<my-project>/<components>" in 1.808412857s
Normal Pulled 6m27s kubelet Successfully pulled image "registry.gitlab.com/<my-project>/<components>" in 3.046153429s
Normal Pulled 5m56s kubelet Successfully pulled image "registry.gitlab.com/<my-project>/<components>" in 4.143342874s
Normal Created 5m56s (x4 over 6m46s) kubelet Created container ches
Normal Started 5m56s (x4 over 6m46s) kubelet Started container ches
Normal Pulling 5m16s (x5 over 8m1s) kubelet Pulling image "registry.gitlab.com/<my-project>/<components>"
Normal Pulled 5m13s kubelet Successfully pulled image "registry.gitlab.com/<my-project>/<components>" in 2.783360345s
Warning BackOff 2m54s (x19 over 6m42s) kubelet Back-off restarting failed container
However I cannot find the image locally.
The docker image ls returns:
REPOSITORY TAG IMAGE ID CREATED SIZE
moby/buildkit buildx-stable-1 440639846006 6 days ago 142MB
registry 2 1fd8e1b0bb7e 12 months ago 26.2MB
I expect that the image registry.gitlab.com/<my-project>/<components> would be there.
Am I missing something here?

Docker Image Deployment In K8's Pod not happening

Docker image:
docker images | grep -i "gcc"
gcc-docker latest 84c4359e6fc9 21 minutes ago 1.37GB
docker run -it gcc-docker:latest
hello,world
Kubernetes pod deployed:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/hello-world to master-node
Normal Pulling 4s kubelet, master-node Pulling image "gcc-docker:latest"
Warning Failed 0s kubelet, master-node Failed to pull image "gcc-docker:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcc-docker, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed 0s kubelet, master-node Error: ErrImagePull
Normal BackOff 0s kubelet, master-node Back-off pulling image "gcc-docker:latest"
Warning Failed 0s kubelet, master-node Error: ImagePullBackOff
The YAML file used to deploy the pod:
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    type: hello-world
spec:
  containers:
    - name: hello-world
      image: gcc-docker:latest
      command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 60']
      ports:
        - containerPort: 80
I tried pulling gcc-docker and got the same error. You may have this image present on your system already, but it is not on Docker Hub.
If you know the repository for this image, try to use it; for authentication, create a secret of the docker-registry type and use it as an imagePullSecret.
Also, one more thing: you are running the container on the master node, and I assume it's minikube or some local setup.
Minikube uses a dedicated VM to run Kubernetes, which is not the same as the machine on which you have installed minikube.
So images available on your laptop will not be available to minikube.
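If it really is minikube, here is a rough sketch of two common ways to make a locally built image visible to the cluster (the image name comes from the question; minikube image load assumes a reasonably recent minikube):

# Option 1: build the image against minikube's own Docker daemon
eval $(minikube docker-env)
docker build -t gcc-docker:latest .

# Option 2: load an already-built local image into the minikube node
minikube image load gcc-docker:latest

In both cases the pod spec should use imagePullPolicy: IfNotPresent (or Never) so the kubelet uses the local gcc-docker:latest instead of trying to pull it from Docker Hub.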

kubernetes unable to pull image docker private registry

I tried to deploy a Deployment in Kubernetes which pulls a Docker image from a private registry (I don't know who did this setup), but when Kubernetes pulls the image I get the following error:
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 85s default-scheduler Successfully assigned default/trusted-enc-assettag1-deployment-8467b74958-6fbp7 to k8s-node
Normal BackOff 24s (x2 over 61s) kubelet, k8s-node Back-off pulling image "10.105.168.81:5000/simplehttpserverenc:enc_v1"
Warning Failed 24s (x2 over 61s) kubelet, k8s-node Error: ImagePullBackOff
Normal Pulling 12s (x3 over 82s) kubelet, k8s-node Pulling image "10.105.168.81:5000/simplehttpserverenc:enc_v1"
Warning Failed 0s (x3 over 62s) kubelet, k8s-node Failed to pull image "10.105.168.81:5000/simplehttpserverenc:enc_v1": rpc error: code = Unknown desc = Error response from daemon: Get https://10.105.168.81:5000/v2/: net/http: TLS handshake timeout
Warning Failed 0s (x3 over 62s) kubelet, k8s-node Error: ErrImagePull
[root@k8s-master ~]# docker pull 10.105.168.81:5000/simplehttpserverenc:enc_v1
I get ImagePullBackOff and a "net/http: TLS handshake timeout" error.
Initially this "net/http: TLS handshake timeout" error was observed with docker pull as well. I referred to some answers and
configured the certificate (/etc/docker/certs.d/<registry-host:port>/ca.crt) and
the proxy (/etc/systemd/system/docker.service.d/proxy.conf),
and after that I was able to perform a docker pull of the private image.
[root@k8s-master ~]# docker pull 10.105.168.81:5000/simplehttpserverenc:enc_v1
enc_v1: Pulling from simplehttpserverenc
54fec2fa59d0: Pull complete
cd3f35d84cab: Pull complete
a0afc8e92ef0: Pull complete
9691f23efdb7: Pull complete
6512e60b314b: Pull complete
a8ac6632d329: Pull complete
68f4c4e0aa8c: Pull complete
Digest: sha256:0358708cd11e96f6cf6f22b29d46a8eec50d7107597b866e1616a73a198fe797
Status: Downloaded newer image for 10.105.168.81:5000/simplehttpserverenc:enc_v1
10.105.168.81:5000/simplehttpserverenc:enc_v1
[root@k8s-master ~]#
But I am still unable to perform this docker pull through Kubernetes. How do I solve this?
If you use Docker as the container engine in your k8s, AFAIK it behaves the same way, because the image pull is performed by the container engine and each engine has its own configuration for the certificates. How about pulling the same image on the worker node of your k8s? Is it possible to pull it there without errors?
As your dockerconfigjson is not working properly, try this method:
kubectl create secret docker-registry regcred --docker-server=10.105.168.81:5000 --docker-username=<your-name> --docker-password=<your-pword>
And in the Kubernetes manifest:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: 10.105.168.81:5000/simplehttpserverenc:enc_v1
  imagePullSecrets:
    - name: regcred
I have encountered this many times when I forgot to configure these secrets. Also, if you have any other namespace, you will have to generate the secret for each of these namespaces separately, passing -n <your-ns> to the above kubectl create secret command, as shown below.
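For example, the same command scoped to a hypothetical namespace called my-ns (the namespace name and credentials are placeholders):

kubectl create secret docker-registry regcred \
  --docker-server=10.105.168.81:5000 \
  --docker-username=<your-name> \
  --docker-password=<your-pword> \
  -n my-ns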
Edit: As you cannot pull the image from the worker node, make sure you copied the docker registry ca.crt to /etc/docker/certs.d/<registry-host:port>/ca.crt on that node,
and then try docker pull again.
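A rough sketch of what that could look like on the worker node, assuming the registry certificate is available locally as ca.crt; the directory name must match the registry host and port:

# on the worker node (k8s-node)
sudo mkdir -p /etc/docker/certs.d/10.105.168.81:5000
sudo cp ca.crt /etc/docker/certs.d/10.105.168.81:5000/ca.crt
# verify that the node itself can now pull from the registry
docker pull 10.105.168.81:5000/simplehttpserverenc:enc_v1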

EKS Docker Image Pull CrashLoopBackOff

I'm trying to deploy a Docker image from ECR to my EKS. When attempting to deploy my docker image to a pod, I get the following events from a CrashLoopBackOff:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 62s default-scheduler Successfully assigned default/mlflow-tracking-server to <EC2 IP>.internal
Normal SuccessfulAttachVolume 60s attachdetach-controller AttachVolume.Attach succeeded for volume "<PVC>"
Normal Pulling 56s kubelet, <IP>.ec2.internal Pulling image "<ECR Image UI>"
Normal Pulled 56s kubelet, <IP>.ec2.internal Successfully pulled image "<ECR Image UI>"
Normal Created 7s (x4 over 56s) kubelet, <IP>.ec2.internal Created container mlflow-tracking-server
Normal Pulled 7s (x3 over 54s) kubelet, <IP>.ec2.internal Container image "<ECR Image UI>" already present on machine
Normal Started 6s (x4 over 56s) kubelet, <IP>.ec2.internal Started container mlflow-tracking-server
Warning BackOff 4s (x5 over 52s) kubelet, <IP>.ec2.internal Back-off restarting failed container
I don't understand why it keeps looping like this and failing. Would anyone know why this is happening?
CrashLoopBackOff can be related to these possible reasons:
- the application inside your pod is not starting due to an error;
- the image your pod is based on is not present in the registry, or the node where your pod has been scheduled cannot pull from the registry;
- some parameters of the pod have not been configured correctly.
In your case it seems to be an application error inside the container.
Try to view the logs with:
kubectl logs <your_pod> -n <namespace>
For more info on how to troubleshoot this kind of error refer to:
https://pillsfromtheweb.blogspot.com/2020/05/troubleshooting-kubernetes.html
The process inside the container is crashing. It could be due to the entrypoint of the Docker base image.
You can try something like this to check the logs of the container:
kubectl logs -f <pod_name>
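If the container has already restarted, the logs of the previous (crashed) instance are usually the interesting ones; a sketch using the pod name from the question:

kubectl logs mlflow-tracking-server --previous   # logs of the last crashed container
kubectl describe pod mlflow-tracking-server      # check Last State, exit code and recent events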

Kubernetes can't pull images from docker hub repository

Hello guys, hope you are well!
I need my master machine to order the slave to pull the image from my Docker Hub repo, and I get the error below. It doesn't let the slave pull from the repo, but when I go to the slave and pull manually, it pulls.
This is from the Kubernetes master:
The first lines are a describe of pod my-app-6c99bd7b9c-dqd6l, which is running now because I pulled the image manually from Docker Hub, but I want Kubernetes to do it.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/my-app2-74969ddd4f-l6d6l to kubeslave.machine.pt
Normal SandboxChanged <invalid> kubelet, kubeslave.machine.pt Pod sandbox changed, it will be killed and re-created.
Warning Failed <invalid> (x3 over <invalid>) kubelet, kubeslave.machine.pt Failed to pull image "bedjase/repository/my-java-app:my-java-app": rpc error: code = Unknown desc = Error response from daemon: pull access denied for bedjase/repository/my-java-app, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed <invalid> (x3 over <invalid>) kubelet, kubeslave.machine.pt Error: ErrImagePull
Normal BackOff <invalid> (x7 over <invalid>) kubelet, kubeslave.machine.pt Back-off pulling image "bedjase/repository/my-java-app:my-java-app"
Warning Failed <invalid> (x7 over <invalid>) kubelet, kubeslave.machine.pt Error: ImagePullBackOff
Normal Pulling <invalid> (x4 over <invalid>) kubelet, kubeslave.machine.pt Pulling image "bedjase/repository/my-java-app:my-java-app"
[root@kubernetes ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-app-6c99bd7b9c-dqd6l 1/1 Running 0 14m
my-app2-74969ddd4f-l6d6l 0/1 ImagePullBackOff 0 2m20s
nginx-86c57db685-bxkpl 1/1 Running 0 8h
This is from the slave:
[root@kubeslave docker]# docker pull bedjase/repository:my-java-app
my-java-app: Pulling from bedjase/repository
50e431f79093: Already exists
dd8c6d374ea5: Already exists
c85513200d84: Already exists
55769680e827: Already exists
e27ce2095ec2: Already exists
5943eea6cb7c: Already exists
3ed8ceae72a6: Already exists
7ba151cdc926: Already exists
Digest: sha256:c765d09bdda42a4ab682b00f572fdfc4bbcec0b297e9f7716b3e3dbd756ba4f8
Status: Downloaded newer image for bedjase/repository:my-java-app
docker.io/bedjase/repository:my-java-app
I already logged in to the Docker Hub repo on both master and slave, and it succeeded.
Both have /etc/hosts OK, and the nodes are connected and ready:
[root@kubernetes ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes.machine.pt Ready master 26h v1.17.4
kubeslave.machine.pt Ready <none> 26h v1.17.4
Am I missing some point here?
For private images you must create a secret with your Docker Hub username and password so that Kubernetes is able to pull the image.
The command below creates a secret named regcred with your Docker Hub credentials; replace the fields <your-name>, <your-password> and <your-email>:
kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<your-name> --docker-password=<your-password> --docker-email=<your-email>
After that, you need to state in your pod/deployment spec that you want to use these credentials to pull your private image, adding imagePullSecrets with the secret created above; see this example:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: regcred
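A quick way to double-check that the secret exists and that the pod picked it up, using standard kubectl (nothing specific to this setup):

kubectl get secret regcred --output=yaml   # the .dockerconfigjson field should be populated
kubectl describe pod private-reg           # Events should now show a successful image pull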
References:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret
Just to add to the other answers,
1) Create a secret with the following command:
Create a secret for pulling docker images
2) Create your pod that uses this secret as described here:
use the secret in pod
A detailed script to create the secret and another script to patch all the service accounts can be found in my answer here:
How to pull image from dockerhub in kubernetes?
Patching all the service accounts will allow all your k8s namespaces to pull any image from Docker Hub without changing the k8s deployment manifests; a sketch of such a patch is shown below.
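As a rough illustration of that approach, assuming the secret is named regcred, patching the default service account of a namespace could look like this:

kubectl patch serviceaccount default \
  -n default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Pods created in that namespace without an explicit imagePullSecrets entry will then use regcred automatically.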
