Image pulling issue in Kubernetes deployment from Docker Hub registry - docker

I am currently trying to implement a CI/CD pipeline using Docker, Kubernetes, and Jenkins. When I created the Kubernetes deployment YAML file for the pipeline, I did not include a time stamp; I was only relying on imagePullPolicy with the :latest tag in the YAML file. I already had one discussion here about pulling :latest; the following is the link to that discussion:
Docker image not pulling latest from dockerhub.com registry
After this discussion, I included the time stamp in my deployment YAML like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-kube-deployment
  labels:
    app: test-kube-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-kube-deployment
  template:
    metadata:
      labels:
        app: test-kube-deployment
      annotations:
        date: "+%H:%M:%S %d/%m/%y"
    spec:
      imagePullSecrets:
        - name: "regcred"
      containers:
        - name: test-kube-deployment-container
          image: spacestudymilletech010/spacestudykubernetes:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8085
              protocol: TCP
Here I modified my script to include the time stamp by adding the following to the template:
annotations:
  date: "+%H:%M:%S %d/%m/%y"
My service file is the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - port: 8085
      targetPort: 8085
      protocol: TCP
      name: http
  selector:
    app: test-kube-deployment
My Jenkinsfile contains the following:
stage('imagebuild')
{
    steps
    {
        sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes:latest /var/lib/jenkins/workspace/jpipeline/pipeline'
        sh 'docker login --username=<my-username> --password=<my-password>'
        sh 'docker push spacestudymilletech010/spacestudykubernetes:latest'
    }
}
stage('Test Deployment')
{
    steps
    {
        sh 'kubectl apply -f deployment/testdeployment.yaml'
        sh 'kubectl apply -f deployment/testservice.yaml'
    }
}
But the deployment is still not pulling the latest image from the Docker Hub registry. How can I modify these scripts to resolve the latest-pull problem?

The default pull policy is IfNotPresent, which causes the kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:
set the imagePullPolicy of the container to Always.
omit the imagePullPolicy and use :latest as the tag for the image to use.
omit the imagePullPolicy and the tag for the image to use.
enable the AlwaysPullImages admission controller.
Basically, either use :latest as the tag or set imagePullPolicy: Always.
Try it and let me know how it goes!
Referenced from here
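One thing worth noting: the date annotation in the question is the literal string "+%H:%M:%S %d/%m/%y", so re-applying the manifest never changes the pod template and no new rollout is triggered. A minimal sketch of two ways to force a rollout from the CI script (the rollout restart form assumes kubectl 1.15+):
# Substitute the actual timestamp at deploy time so the pod template changes:
kubectl patch deployment test-kube-deployment -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +'%H:%M:%S %d/%m/%y')\"}}}}}"
# Or simply restart the rollout; with imagePullPolicy: Always the nodes re-pull :latest.
kubectl rollout restart deployment/test-kube-deployment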

There are many articles and docs that explain how to properly build and publish a Docker image using Jenkins.
You should first read Using Docker with Pipeline which shows you an example with environment variable ${env.BUILD_ID}
node {
    checkout scm
    docker.withRegistry('https://registry.example.com', 'credentials-id') {
        def customImage = docker.build("my-image:${env.BUILD_ID}")
        /* Push the container to the custom Registry */
        customImage.push()
    }
}
Or to put it as a stage:
stage('Push image') {
    docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
        app.push("${env.BUILD_NUMBER}")
        app.push("latest")
    }
}
I really do recommend reading Building your first Docker image with Jenkins 2: Guide for developers, which I think will answer many if not all of your questions.
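Once each build is pushed with a unique tag, the deployment stage can move the container to that tag instead of re-applying :latest. A hedged sketch, reusing the deployment and container names from the question and assuming $BUILD_NUMBER matches the tag pushed above:
# Point the container at the uniquely tagged image; this changes the pod
# template, so Kubernetes performs a rolling update and pulls the new tag.
kubectl set image deployment/test-kube-deployment \
  test-kube-deployment-container=spacestudymilletech010/spacestudykubernetes:"${BUILD_NUMBER}"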

Related

How to upload and download docker images using nexus registry/repository?

I was able to publish a Docker image to Nexus using the Jenkins pipeline, but I cannot pull the Docker image from Nexus. I used kaniko to build the image.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-app
  name: test-app
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      hostNetwork: false
      containers:
        - name: test-app
          image: ip_adress/demo:0.1.0
          imagePullPolicy: Always
          resources:
            limits: {}
      imagePullSecrets:
        - name: registrypullsecret
service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-app
  name: test-app-service
  namespace: jenkins
spec:
  ports:
    - nodePort: 32225
      port: 8081
      protocol: TCP
      targetPort: 8081
  selector:
    app: test-app
  type: NodePort
Jenkins pipeline main script
stage('Build Image') {
    container('kaniko') {
        script {
            sh '''
            /kaniko/executor --dockerfile `pwd`/Dockerfile --context `pwd` --destination="$ip_adress:8082/demo:0.1.0" --insecure --skip-tls-verify
            '''
        }
    }
}
stage('Kubernetes Deployment') {
    container('kubectl') {
        withKubeConfig([credentialsId: 'kube-config', namespace: 'jenkins']) {
            sh 'kubectl get pods'
            sh 'kubectl apply -f deployment.yml'
            sh 'kubectl apply -f service.yml'
        }
    }
}
I've created a Dockerfile for a Spring Boot Java application. I've pushed the image to Nexus using the Jenkins pipeline, but I can't deploy it.
kubectl get pod -n jenkins
test-app-... 0/1 ImagePullBackOff

kubectl describe pod test-app-.....
Error from server (NotFound): pods "test-app-.." not found

docker pull $ip_adress:8081/repository/docker-releases/demo:0.1.0
Error response from daemon: Get "https://$ip_adress/v2/": http: server gave HTTP response to HTTPS client

($ip_adress is a private IP address.) How can I make the pull go over HTTP?
First of all, mark your registry ip:port as insecure on each node. With Docker as the container runtime, that means adding { "insecure-registries": ["172.16.4.93:5000"] } to /etc/docker/daemon.json; with containerd, the equivalent setting lives in the registry section of /etc/containerd/config.toml.
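A minimal sketch for the Docker-runtime case, assuming the registry listens on 172.16.4.93:5000; merge this with any existing daemon.json settings and restart the daemon so the change takes effect:
# Mark the Nexus registry as insecure (plain HTTP) for the Docker engine.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "insecure-registries": ["172.16.4.93:5000"]
}
EOF
sudo systemctl restart docker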
If there is still a problem, add your Nexus registry credentials to your Kubernetes YAML as described in the link below:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
If we want to pull images from a private registry in Kubernetes, we need to configure the registry endpoint and credentials as a Secret and use it in the pod deployment configuration.
Note: the Secret must be in the same namespace as the Pod.
Refer to this official Kubernetes document for more details about configuring a private registry.
In your case you are using the secret registrypullsecret, so check one more time whether that secret is configured properly. If not, try following the documentation mentioned above.
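For example, a sketch that recreates the pull secret with the secret name and namespace from the question; the server address, username, and password are placeholders to replace with your Nexus details:
kubectl create secret docker-registry registrypullsecret \
  --docker-server="$ip_adress:8082" \
  --docker-username=<nexus-username> \
  --docker-password=<nexus-password> \
  --namespace=jenkins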

How to make a deployment file for a kubernetes service that depends on images from Amazon ECR?

A colleague created a K8s cluster for me. I can run services in that cluster without any problem. However, I cannot run services that depend on an image from Amazon ECR, which I really do not understand. Probably, I made a small mistake in my deployment file and thus caused this problem.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
Here is my service file:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello
spec:
  type: NodePort
  ports:
    - port: 5000
      nodePort: 30002
      protocol: TCP
  selector:
    app: hello
On the master node, I have run these commands to ensure Kubernetes knows about the deployment and the service:
kubectl create -f dep.yml
kubectl create -f service.yml
I used the K8s extension in vscode to check the logs of my pods.
This is the error I get:
Error from server (BadRequest): container "hello" in pod
"hello-deployment-xxxx-49pbs" is waiting to start: trying and failing
to pull image.
Apparently, pulling is an issue. This is not happening when using a public image from the public Docker Hub. Logically, this would be a permissions issue, but it looks like it is not: I get no error message when running this command on the master node:
docker pull xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
This command just pulls my image.
I am confused now. I can pull my image with docker pull on the master node, but K8s fails doing the pull. Am I missing something in my deployment file? Some property that says "repositoryIsPrivateButDoNotComplain"? I just do not get it.
How to fix this so K8s can easily use my image from Amazon ECR?
You should create and use secrets for the ECR authorization.
This is what you need to do.
Create a secret for the Kubernetes cluster by executing the shell script below from a machine that can access the AWS account in which the ECR registry is hosted. Change the placeholders to match your setup, and ensure the machine on which you execute this shell script has the AWS CLI installed and AWS credentials configured. If you are using a Windows machine, execute this script in a Cygwin or Git Bash console.
#!/bin/bash
ACCOUNT=<AWS_ACCOUNT_ID>
REGION=<REGION>
SECRET_NAME=<SECRET_NAME>
EMAIL=<SOME_DUMMY_EMAIL>
TOKEN=$(/usr/local/bin/aws ecr get-authorization-token --region "$REGION" --profile <AWS_PROFILE> \
  --output text --query 'authorizationData[].authorizationToken' | base64 -d | cut -d: -f2)

kubectl delete secret --ignore-not-found "$SECRET_NAME"
kubectl create secret docker-registry "$SECRET_NAME" \
  --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
  --docker-username=AWS \
  --docker-password="${TOKEN}" \
  --docker-email="${EMAIL}"
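A quick sanity check that the secret was created with the right type (using the placeholder name from the script above):
kubectl get secret <SECRET_NAME> --output jsonpath='{.type}'
# should print: kubernetes.io/dockerconfigjson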
Change the deployment and add a section for the secret, which your pods will use while downloading the image from ECR.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
      imagePullSecrets:
        - name: SECRET_NAME
Create the pods and service.
Even if this succeeds, the secret will still expire after 12 hours. To overcome that, set up a cron job that recreates the secret on the Kubernetes cluster periodically, using the same script given above.
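For example, a hypothetical crontab entry that reruns the script above every 8 hours, comfortably inside the 12-hour token lifetime (the script path and log file are assumptions):
# Refresh the ECR pull secret every 8 hours.
0 */8 * * * /path/to/refresh-ecr-secret.sh >> /var/log/ecr-secret.log 2>&1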
For the 12-hour problem: if you are using Kubernetes 1.20, configure and use the kubelet image credential provider:
https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/
You need to enable the alpha feature gate KubeletCredentialProviders in your kubelet.
If you are using a lower Kubernetes version where this feature is not available, then use https://medium.com/@damitj07/how-to-configure-and-use-aws-ecr-with-kubernetes-rancher2-0-6144c626d42c
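As a sketch, the kubelet flags involved look like this; the config and binary directory paths are assumptions for illustration, to be adjusted to your node layout:
# Enable the alpha credential provider and point the kubelet at its config
# and at the directory holding the provider binaries (e.g. the ECR helper).
kubelet --feature-gates=KubeletCredentialProviders=true \
  --image-credential-provider-config=/etc/kubernetes/credential-provider-config.yaml \
  --image-credential-provider-bin-dir=/usr/local/bin/credential-providers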

Unable to pull public docker image packages from GitHub through Kubernetes

I created a sample Node.js project in GitHub and created a Docker image for it. I uploaded the Docker image as a package in the same repository. This is a public repo. I created a Kubernetes config YAML file with this image as the pod's image. The following is the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  selector:
    matchLabels:
      component: node-server
  template:
    metadata:
      labels:
        component: node-server
    spec:
      containers:
        - name: node-server
          image: docker.pkg.github.com/lethalbrains/intense_omega/io_service:latest
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: dockerconfigjson-github-com
---
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  selector:
    component: node-server
  ports:
    - port: 3000
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /api/
            backend:
              serviceName: server-cluster-ip-service
              servicePort: 3000
After I apply this file using kubectl and check the pod details, I get an ImagePullBackOff error.
I even tried the option of using a dockerconfigjson secret with a GitHub Personal Access Token, but still the same result.
Edit:
Added error message from pods describe
This seems to be an issue with the GitHub registry, which is being discussed here.
What I can recommend is to push the image to Docker Hub instead, or to create a private repo, which you can read about at Using a private Docker Registry with Kubernetes.
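For the Docker Hub route, a hedged sketch of retagging the existing package and pushing it; the lethalbrains/io_service Docker Hub repository is a hypothetical name, and note that GitHub Packages requires a login even for public images:
docker login docker.pkg.github.com -u <github-username> -p <personal-access-token>
docker pull docker.pkg.github.com/lethalbrains/intense_omega/io_service:latest
docker tag docker.pkg.github.com/lethalbrains/intense_omega/io_service:latest \
  lethalbrains/io_service:latest
docker push lethalbrains/io_service:latest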
There seems to be a workaround, but I have not tested it.
It was published by @sudomaxime and is available here:
Here's a nasty little workaround for those who:
Don't mind losing blue/green deploys until this is resolved
Don't mind 10-15 secs app start-up time
Use docker swarm / docker stack deploys
Use CI scripts for deployment
In your CI scripts call:
$ docker stack rm {{ your_stack_name }}
$ until [ -z $(docker stack ps {{ your_stack_name }} -q) ]; do sleep 1; done
$ docker stack deploy --with-registry-auth -c docker-compose.yml {{ your_stack_name }}
Basically, you ask the Docker scheduler to stop all the services under the {{ your_stack_name }} orchestrator. A little quirk of docker swarm is that docker stack rm will return immediately even if some services are not properly closed, which may cause networking errors when you try to deploy again. That's why we use the small inline script until [ -z $(docker stack ps {{ your_stack_name }} -q) ]; do sleep 1; done to wait for the proper return.
Hopefully it saves a few folks some headaches. I guess a similar temporary fix will help you out.
This is quite a frustrating issue; for our apps that MUST use blue/green deploys, we bought a private repo to fix the problem.

Openshift Job container image from internal registry

I have this Kubernetes Job instance:
apiVersion: batch/v1
kind: Job
metadata:
  name: job
spec:
  template:
    spec:
      containers:
        - name: job
          image: 172.30.34.145:5000/myproj/app:latest
          command: ["/bin/sh", "-c", "$(COMMAND)"]
      serviceAccount: default
      serviceAccountName: default
      restartPolicy: Never
How can I write the image name so it always pulls from within my own namespace?
I'd like to set it like this:
image: app:latest
But it fails, saying it's unable to pull the image.
To pull from a registry other than Docker Hub, you need to specify the host:port part in the image name. As far as I am aware, at this point there is no option to change the location of the default registry in the Docker daemon.
If you are very fixed on the idea, you could fiddle with DNS so it resolves to your image registry instead of Docker's, but that would cut you off from Docker Hub completely.

How to update a Kubernetes deployment on Google Container Engine?

I've followed a few guides, and I've got CI set up with Google Container Engine and Google Container Registry. The problem is my updates aren't being applied to the deployment.
So this is my deployment.yml which contains a Kubernetes Service and Deployment:
apiVersion: v1
kind: Service
metadata:
  name: my_app
  labels:
    app: my_app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
  selector:
    app: my_app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my_app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my_app
    spec:
      containers:
        - name: node
          image: gcr.io/me/my_app:latest
          ports:
            - containerPort: 3000
          resources:
            requests:
              memory: 100
        - name: phantom
          image: docker.io/wernight/phantomjs:2.1.1
          command: ["phantomjs", "--webdriver=8910", "--web-security=no", "--load-images=false", "--local-to-remote-url-access=yes"]
          ports:
            - containerPort: 8910
          resources:
            requests:
              memory: 1000
As part of my CI process I run a script which updates the image in the Google Cloud registry, then runs kubectl apply -f /deploy/deployment.yml. Both tasks succeed, and I'm notified that the Deployment and Service have been updated:
2016-09-28T14:37:26.375Z googleclouddeployment service "my_app" configured
2016-09-28T14:37:27.370Z googleclouddeployment deployment "my_app" configured
Since I've included the :latest tag on my image, I thought the image would be downloaded each time the deployment is updated. According to the docs, a RollingUpdate should also be the default strategy.
However, when I run my CI script which updates the deployment - the updated image isn't downloaded and the changes aren't applied. What am I missing? I'm assuming that since nothing is changing in deployment.yml, no update is being applied. How do I get Kubernetes to download my updated image and use a RollingUpdate to deploy it?
You can force an update of a deployment by changing any field, such as a label. So in my case, I just added this at the end of my CI script:
kubectl patch deployment fb-video-extraction -p \
"{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
We have recently published a technical overview of how the approach we call GitOps can be implemented in GKE.
All you need to do is configure the GCR builder to pick up code changes from GitHub and run builds; you then install the Weave Cloud agent in your cluster and connect it to a repo where the YAML files are stored, and the agent will take care of updating the repo with new images and applying the changes to the cluster.
For a more high-level overview, see also:
The GitOps Pipeline
Deploy Applications & Manage Releases
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.
