I created a sample Node.js project on GitHub and built a Docker image for it. I uploaded the Docker image as a package in the same repository; this is a public repo. I then created a Kubernetes config YAML file that uses this image as the pod's image. Following is the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  selector:
    matchLabels:
      component: node-server
  template:
    metadata:
      labels:
        component: node-server
    spec:
      containers:
        - name: node-server
          image: docker.pkg.github.com/lethalbrains/intense_omega/io_service:latest
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: dockerconfigjson-github-com
---
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  selector:
    component: node-server
  ports:
    - port: 3000
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /api/
            backend:
              serviceName: server-cluster-ip-service
              servicePort: 3000
After I apply this file with kubectl and check the pod's details, I get an ImagePullBackOff error.
I even tried the option of using a dockerconfigjson secret with a GitHub Personal Access Token, but still got the same result.
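For reference, the secret itself was created roughly along these lines (a sketch; the username and token are placeholders):

# Sketch: pull secret for the GitHub Packages Docker registry,
# authenticated with a GitHub Personal Access Token (placeholder values).
kubectl create secret docker-registry dockerconfigjson-github-com \
  --docker-server=docker.pkg.github.com \
  --docker-username=<GITHUB_USERNAME> \
  --docker-password=<GITHUB_PERSONAL_ACCESS_TOKEN>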
Edit:
Added the error message from the pod's describe output.
This seems to be an issue with the GitHub registry, which is being discussed here.
What I can recommend is to push the image to Docker Hub, or to create a private registry, which you can read about at Using a private Docker Registry with Kubernetes.
There seems to be a workaround, but I have not tested it.
It's published by @sudomaxime and available here:
Here's a nasty little workaround for those who:
Don't mind losing blue/green deploys until this is resolved
Don't mind 10-15 secs app start-up time
Use docker swarm / docker stack deploys
Use CI scripts for deployment
In your CI scripts call:
$ docker stack rm {{ your_stack_name }}
$ until [ -z $(docker stack ps {{ your_stack_name }} -q) ]; do sleep 1; done
$ docker stack deploy --with-registry-auth -c docker-compose.yml {{ your_stack_name }}
Basically, you ask the Docker scheduler to stop all the services under the {{ your_stack_name }} stack. A quirk of Docker Swarm is that docker stack rm returns immediately, even if some services are not yet fully shut down, which may cause networking errors when you try to deploy again. That's why we use the small inline loop until [ -z $(docker stack ps {{ your_stack_name }} -q) ]; do sleep 1; done to wait for it to finish properly.
Hope it saves a few folks some headaches. I guess a similar temporary fix will help you out.
This is quite a frustrating issue; for our apps that MUST use blue/green deploys, we bought a private repo to fix the problem.
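If you do move the image to Docker Hub (or another private registry), the pull secret referenced by imagePullSecrets is created the same way as above; a minimal sketch, with a secret name of your choosing and placeholder credentials:

# Sketch: create a Docker Hub pull secret (name and credentials are placeholders),
# then reference it from imagePullSecrets in the Deployment.
kubectl create secret docker-registry dockerhub-credentials \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<DOCKERHUB_USERNAME> \
  --docker-password=<DOCKERHUB_TOKEN>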
I have a pod running Linux that I have let others use. Now I need to save the changes they made. Since I sometimes need to delete/restart the pod, those changes are reverted and a new pod gets created. So I want to save the pod's container as a Docker image and use that image to create a pod.
I tried kubectl debug node/pool-89899hhdyhd-bygy -it --image=ubuntu and then installing docker and dockerd inside, but they don't have root permission to perform operations. I installed crictl, which did list the containers, but it has no option to save them.
I also created a privileged Docker image, created a pod from it, then used kubectl exec --stdin --tty app-7ff786bc77-d5dhg -- /bin/sh and tried to list the running containers, but none were listed. Below is the deployment I used for the privileged Docker container:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: app
  labels:
    app: backend-app
    backend-app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-app
      task: app
  template:
    metadata:
      labels:
        app: backend-app
        task: app
    spec:
      nodeSelector:
        kubernetes.io/hostname: pool-58i9au7bq-mgs6d
      volumes:
        - name: task-pv-storage
          hostPath:
            path: /run/docker.sock
            type: Socket
      containers:
        - name: app
          image: registry.digitalocean.com/my_registry/docker_app@sha256:b95016bd9653631277455466b2f60f5dc027f0963633881b5d9b9e2304c57098
          ports:
            - containerPort: 80
          volumeMounts:
            - name: task-pv-storage
              mountPath: /var/run/docker.sock
Is there any way I can achieve this, i.e. take the pod's container and save it as a Docker image? I am using DigitalOcean to run my Kubernetes apps, and I do not have SSH access to the nodes.
This is not a feature of Kubernetes or CRI. Docker does support snapshotting a running container to an image (docker commit); however, Kubernetes no longer supports Docker as its container runtime.
Thank you all for your help and suggestions. I found a way to achieve it using the tool nerdctl - https://github.com/containerd/nerdctl.
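Roughly, the steps looked like this (a sketch; the container ID and target image name are placeholders, and the commands need access to the node's containerd socket, e.g. from a privileged pod that mounts it):

# Sketch: commit a running Kubernetes container to an image with nerdctl.
# Kubernetes-managed containers live in containerd's "k8s.io" namespace.
nerdctl --namespace k8s.io ps    # find the container ID
nerdctl --namespace k8s.io commit <container-id> registry.digitalocean.com/my_registry/saved-app:v1
nerdctl --namespace k8s.io push registry.digitalocean.com/my_registry/saved-app:v1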
I was able to publish a Docker image using the Jenkins pipeline, but I cannot pull the Docker image from Nexus. I used Kaniko to build the image.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-app
  name: test-app
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      hostNetwork: false
      containers:
        - name: test-app
          image: ip_adress/demo:0.1.0
          imagePullPolicy: Always
          resources:
            limits: {}
      imagePullSecrets:
        - name: registrypullsecret
service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-app
  name: test-app-service
  namespace: jenkins
spec:
  ports:
    - nodePort: 32225
      port: 8081
      protocol: TCP
      targetPort: 8081
  selector:
    app: test-app
  type: NodePort
Jenkins pipeline main script
stage('Build Image') {
  container('kaniko') {
    script {
      sh '''
        /kaniko/executor --dockerfile `pwd`/Dockerfile --context `pwd` --destination="$ip_adress:8082/demo:0.1.0" --insecure --skip-tls-verify
      '''
    }
  }
}
stage('Kubernetes Deployment') {
  container('kubectl') {
    withKubeConfig([credentialsId: 'kube-config', namespace: 'jenkins']) {
      sh 'kubectl get pods'
      sh 'kubectl apply -f deployment.yml'
      sh 'kubectl apply -f service.yml'
    }
  }
}
I've created a Dockerfile for a Spring Boot Java application. I've pushed the image to Nexus using the Jenkins pipeline, but I can't deploy it.
kubectl get pod -n jenkins
test-app-... 0/1 ImagePullBackOff
kubectl describe pod test-app-.....
Error from server (NotFound): pods "test-app-.." not found
docker pull $ip_adress:8081/repository/docker-releases/demo:0.1.0
Error response from daemon: Get "https://$ip_adress/v2/": http: server gave HTTP response to HTTPS client
(ip_adress is a private IP address.)
How can I make the pull work over HTTP?
First of all, tell the container runtime on your nodes to trust the plain-HTTP registry. If the nodes run Docker, edit /etc/docker/daemon.json and add your registry ip:port like this: { "insecure-registries": ["172.16.4.93:5000"] }. If they run containerd, the equivalent setting goes in /etc/containerd/config.toml.
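For containerd the setting is TOML rather than JSON; a sketch of one way to do it on each node (the registry address reuses the example above, and the exact keys depend on your containerd version, so verify against its docs):

# Sketch: allow plain-HTTP pulls from a private registry on a containerd node.
cat <<'EOF' | sudo tee -a /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."172.16.4.93:5000"]
  endpoint = ["http://172.16.4.93:5000"]
EOF
sudo systemctl restart containerd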
If there is still a problem, add your Nexus registry credentials to the Kubernetes YAML as an image pull secret, as described here:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
If we want to use a private registry to pull images in Kubernetes, we need to configure the registry endpoint and credentials as a secret and use it in the pod/deployment configuration.
Note: The secret must be in the same namespace as the Pod.
Refer to this official Kubernetes document for more details about configuring a private registry.
In your case you are using the secret registrypullsecret, so check one more time whether it is configured properly. If not, try following the documentation mentioned above.
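If it needs to be recreated, something along these lines should work (a sketch; the Nexus host/port and credentials are placeholders, and the secret must live in the jenkins namespace used by the Deployment):

# Sketch: inspect the existing secret, then recreate it for the Nexus Docker registry.
kubectl -n jenkins get secret registrypullsecret -o yaml
kubectl -n jenkins create secret docker-registry registrypullsecret \
  --docker-server=<NEXUS_HOST>:<DOCKER_PORT> \
  --docker-username=<NEXUS_USER> \
  --docker-password=<NEXUS_PASSWORD>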
A colleague created a K8s cluster for me. I can run services in that cluster without any problem. However, I cannot run services that depend on an image from Amazon ECR, which I really do not understand. Probably, I made a small mistake in my deployment file and thus caused this problem.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
Here is my service file:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello
spec:
  type: NodePort
  ports:
    - port: 5000
      nodePort: 30002
      protocol: TCP
  selector:
    app: hello
On the master node, I ran the following to make sure Kubernetes knows about the deployment and the service:
kubectl create -f dep.yml
kubectl create -f service.yml
I used the K8s extension in vscode to check the logs of my pods.
This is the error I get:
Error from server (BadRequest): container "hello" in pod
"hello-deployment-xxxx-49pbs" is waiting to start: trying and failing
to pull image.
Apparently, pulling is the issue. This does not happen when I use a public image from Docker Hub. Logically, this would be a permissions issue, but it looks like it is not: I get no error message when running this command on the master node:
docker pull xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
This command just pulls my image.
I am confused now. I can pull my image with docker pull on the master node, but Kubernetes fails to do the pull. Am I missing something in my deployment file? Some property that says "repositoryIsPrivateButDoNotComplain"? I just do not get it.
How can I fix this so Kubernetes can use my image from Amazon ECR?
You should create and use a secret for ECR authorization.
This is what you need to do.
Create a secret for the Kubernetes cluster by executing the shell script below from a machine that can access the AWS account in which the ECR registry is hosted. Change the placeholders as per your setup. Make sure the machine on which you execute this script has the AWS CLI installed and AWS credentials configured. If you are using a Windows machine, execute the script in a Cygwin or Git Bash console.
#!/bin/bash
ACCOUNT=<AWS_ACCOUNT_ID>
REGION=<REGION>
SECRET_NAME=<SECRET_NAME>
EMAIL=<SOME_DUMMY_EMAIL>
TOKEN=`/usr/local/bin/aws ecr --region=$REGION --profile <AWS_PROFILE> get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`
kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
--docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
Change the deployment and add an imagePullSecrets section that your pods will use when pulling the image from ECR.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
      imagePullSecrets:
        - name: SECRET_NAME
Create the pods and service.
Even if this succeeds, the secret will still expire in 12 hours. To overcome that, set up a cron job that recreates the secret on the Kubernetes cluster periodically, using the same script given above.
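For example, a crontab entry like the following refreshes the secret every 8 hours, comfortably inside the 12-hour token lifetime (the script path is a placeholder for wherever you saved the script above):

# Sketch: refresh the ECR pull secret every 8 hours (script path is an assumption).
0 */8 * * * /path/to/refresh-ecr-secret.sh >> /var/log/ecr-secret-refresh.log 2>&1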
For the complete picture of how this works under the hood, please refer to the diagram below.
For the 12-hour problem: if you are using Kubernetes 1.20, configure and use the kubelet image credential provider:
https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/
You need to enable the alpha feature gate KubeletCredentialProviders on your kubelet.
If you are using a lower Kubernetes version and this feature is not available, then use https://medium.com/@damitj07/how-to-configure-and-use-aws-ecr-with-kubernetes-rancher2-0-6144c626d42c
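A rough sketch of the kubelet side of that setup (the flag names come from the Kubernetes docs linked above; the file paths and the use of KUBELET_EXTRA_ARGS are assumptions to adapt to how kubelet is launched on your nodes):

# Sketch: extra kubelet flags enabling the image credential provider (alpha in 1.20).
# Paths are assumptions; the provider config file format is described in the linked docs.
KUBELET_EXTRA_ARGS="--feature-gates=KubeletCredentialProviders=true \
  --image-credential-provider-config=/etc/kubernetes/credential-provider-config.yaml \
  --image-credential-provider-bin-dir=/usr/local/bin/credential-providers"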
Here's an image of my Kubernetes services.
todo-front-2 is a working instance of my app, which I deployed from the command line:
kubectl run todo-front --image=todo-front:v7 --image-pull-policy=Never
kubectl expose deployment todo-front --type=NodePort --port=3000
And it's working great. Now I want to move on and use a todo-front.yaml file to deploy and expose my service. The todo-front service is my current attempt at that. My deployment file looks like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: todo-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todo-front
  template:
    metadata:
      labels:
        app: todo-front
    spec:
      containers:
        - name: todo-front
          image: todo-front:v7
          env:
            - name: REACT_APP_API_ROOT
              value: "http://localhost:12000"
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: todo-front
spec:
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    app: todo-front
I deploy it using:
kubectl apply -f deployment/todo-front.yaml
Here is the output
But when I run
minikube service todo-front
It redirects me to a URL saying "Site can't be reached".
I can't figure out what I'm doing wrong. The ports should be OK, and my cluster should be OK, since I can get it working using only the command line without external YAML files. Both deployments also use the same Docker image. I have also tried changing every port that is currently "3000" to something different, in case they clash with the existing deployment todo-front-2, with no luck.
Here is also a screenshot of pods and their status:
Anyone with more experience with Kubernetes and Docker care to take a look? Thank you!
You can run the commands below to generate the YAML files without applying them to the cluster, and then compare them with the YAML you created manually to see where they differ (see the diff sketch after the commands). Also, instead of writing the YAML by hand, you can simply apply the generated files.
kubectl run todo-front --image=todo-front:v7 --image-pull-policy=Never --dry-run -o yaml > todo-front-deployment.yaml
kubectl expose deployment todo-front --type=NodePort --port=3000 --dry-run -o yaml > todo-front-service.yaml
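A plain diff then makes any mismatch obvious, e.g. (the generated file name is from the first command above; note that the hand-written file also contains the Service, so only the Deployment part is expected to match):

# Sketch: compare the generated Deployment with the hand-written manifest.
diff todo-front-deployment.yaml deployment/todo-front.yaml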
I've built a Docker image locally:
docker build -t backend -f backend.docker .
Now I want to create a deployment with it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
spec:
  selector:
    matchLabels:
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
        - name: backend
          image: backend
          imagePullPolicy: IfNotPresent # This should be the default anyway
          ports:
            - containerPort: 80
kubectl apply -f file_provided_above.yaml works, but then my pods have the following statuses:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-deployment-66cff7d4c6-gwbzf 0/1 ImagePullBackOff 0 18s
Before that, the status was ErrImagePull. So my question is: how do I tell it to use local Docker images? Somewhere on the internet I read that I need to build images using microk8s.docker, but that command seems to have been removed.
I found docs on how to use the private registry: https://microk8s.io/docs/working
First it needs to be enabled:
microk8s.enable registry
Then the image is tagged and pushed to the registry:
docker tag backend localhost:32000/backend
docker push localhost:32000/backend
And then, in the config above, image: backend needs to be replaced with image: localhost:32000/backend (see the consolidated sketch below).
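Putting it all together, the whole flow looks roughly like this (the deployment and container names are taken from the manifest above; on some setups kubectl is invoked as microk8s.kubectl):

# Sketch: push the locally built image into MicroK8s' built-in registry
# and point the existing Deployment at it.
microk8s.enable registry
docker tag backend localhost:32000/backend
docker push localhost:32000/backend
kubectl set image deployment/backend-deployment backend=localhost:32000/backend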