How to upload and download docker images using nexus registry/repository? - docker

I was able to publish a Docker image using the Jenkins pipeline, but I cannot pull the image from Nexus. I used kaniko to build the image.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-app
  name: test-app
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      hostNetwork: false
      containers:
        - name: test-app
          image: ip_adress/demo:0.1.0
          imagePullPolicy: Always
          resources:
            limits: {}
      imagePullSecrets:
        - name: registrypullsecret
service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-app
  name: test-app-service
  namespace: jenkins
spec:
  ports:
    - nodePort: 32225
      port: 8081
      protocol: TCP
      targetPort: 8081
  selector:
    app: test-app
  type: NodePort
Jenkins pipeline main script
stage('Build Image') {
  container('kaniko') {
    script {
      sh '''
        /kaniko/executor --dockerfile `pwd`/Dockerfile --context `pwd` --destination="$ip_adress:8082/demo:0.1.0" --insecure --skip-tls-verify
      '''
    }
  }
}
stage('Kubernetes Deployment') {
  container('kubectl') {
    withKubeConfig([credentialsId: 'kube-config', namespace: 'jenkins']) {
      sh 'kubectl get pods'
      sh 'kubectl apply -f deployment.yml'
      sh 'kubectl apply -f service.yml'
    }
  }
}
I've created a Dockerfile for a Spring Boot Java application. I've pushed the image to Nexus using the Jenkins pipeline, but I can't deploy it.
kubectl get pod -n jenkins
test-app-... 0/1 ImagePullBackOff
kubectl describe pod test-app-.....
Error from server (NotFound): pods "test-app-.." not found
docker pull $ip_adress:8081/repository/docker-releases/demo:0.1.0
Error response from daemon: Get "https://$ip_adress/v2/": http: server gave HTTP response to HTTPS client
($ip_adress is a private IP address.)
How can I get the image pull to go over HTTP instead of HTTPS?

First of all, mark the registry as insecure on the nodes. If your nodes pull with Docker, add your registry ip:port to /etc/docker/daemon.json like this: { "insecure-registries": ["172.16.4.93:5000"] } and restart the Docker daemon; if they use containerd, the equivalent setting lives in /etc/containerd/config.toml (in TOML form, not JSON).
If there is still a problem, add your Nexus registry credentials to your Kubernetes YAML as an image pull secret, as described here:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
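For the first step above, on a node that pulls images with Docker, a minimal sketch of the daemon-side change might look like this (172.16.4.93:5000 is just the example address from above; substitute your Nexus host and the Docker connector port you push to):

# write /etc/docker/daemon.json, marking the Nexus Docker connector as an insecure (HTTP) registry
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "insecure-registries": ["172.16.4.93:5000"]
}
EOF
# restart the daemon so the setting takes effect
sudo systemctl restart docker

On containerd-based nodes the same intent is expressed in the registry configuration section of /etc/containerd/config.toml followed by systemctl restart containerd; the exact TOML layout depends on the containerd version.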

If we want to pull images from a private registry in Kubernetes, we need to configure the registry endpoint and credentials as a secret and use it in the pod/deployment configuration.
Note: the secret must be in the same namespace as the Pod.
Refer to the official Kubernetes documentation for more details about configuring a private registry.
In your case you are using the secret registrypullsecret, so check one more time whether it is configured properly. If not, recreate it following the documentation mentioned above, for example as sketched below.
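A minimal sketch of recreating that secret against the Nexus Docker connector (the host, port, and credentials below are placeholders, not values from the question; use whatever your docker login succeeds with):

kubectl create secret docker-registry registrypullsecret \
  --docker-server=<nexus-host>:<docker-connector-port> \
  --docker-username=<nexus-user> \
  --docker-password=<nexus-password> \
  --namespace=jenkins
# confirm it exists in the same namespace as the pod
kubectl get secret registrypullsecret -n jenkins

Also double-check that the image: field in deployment.yml uses the same <nexus-host>:<docker-connector-port>/demo:0.1.0 reference that the image was pushed with.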

Related

How to make a deployment file for a kubernetes service that depends on images from Amazon ECR?

A colleague created a K8s cluster for me. I can run services in that cluster without any problem. However, I cannot run services that depend on an image from Amazon ECR, which I really do not understand. Probably, I made a small mistake in my deployment file and thus caused this problem.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
Here is my service file:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello
spec:
  type: NodePort
  ports:
    - port: 5000
      nodePort: 30002
      protocol: TCP
  selector:
    app: hello
On the master node, I have run this to ensure kubernetes knows about the deployment and the service.
kubectl create -f dep.yml
kubectl create -f service.yml
I used the K8s extension in vscode to check the logs of my pods.
This is the error I get:
Error from server (BadRequest): container "hello" in pod
"hello-deployment-xxxx-49pbs" is waiting to start: trying and failing
to pull image.
Apparently, pulling is the issue. This does not happen when using a public image from Docker Hub. Logically this would be a permissions issue, but it looks like it is not: I get no error message when running this command on the master node:
docker pull xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
This command just pulls my image.
I am confused now. I can pull my image with docker pull on the master node, but K8s fails to do the pull. Am I missing something in my deployment file? Some property that says: "repositoryIsPrivateButDoNotComplain"? I just do not get it.
How to fix this so K8s can easily use my image from Amazon ECR?
You should create and use secrets for the ECR authorization.
This is what you need to do.
Create a secret for the Kubernetes cluster by executing the shell script below from a machine that can access the AWS account in which the ECR registry is hosted. Change the placeholders as per your setup. Make sure the machine on which you execute this script has the AWS CLI installed and AWS credentials configured. If you are using a Windows machine, execute the script in a Cygwin or Git Bash console.
#!/bin/bash
ACCOUNT=<AWS_ACCOUNT_ID>
REGION=<REGION>
SECRET_NAME=<SECRETE_NAME>
EMAIL=<SOME_DUMMY_EMAIL>
TOKEN=`/usr/local/bin/aws ecr --region=$REGION --profile <AWS_PROFILE> get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`
kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
--docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
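After the script runs, it can be worth confirming that the secret actually landed in the cluster (SECRET_NAME is the same placeholder used in the script above):

# the secret should exist and be of type kubernetes.io/dockerconfigjson
kubectl get secret $SECRET_NAME -o jsonpath='{.type}'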
Change the deployment and add an imagePullSecrets section that your pods will use while downloading the image from ECR.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
      imagePullSecrets:
        - name: SECRET_NAME
Create the pods and service.
If it succeeds, the secret will still expire after 12 hours; to overcome that, set up a cron job that recreates the secret on the Kubernetes cluster periodically, reusing the same script given above (see the sketch below).
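A minimal sketch of such a cron entry, assuming the script above has been saved as /opt/scripts/refresh-ecr-secret.sh (a hypothetical path) on a host that has kubectl and the AWS CLI configured:

# refresh the ECR pull secret every 8 hours (the authorization token expires after 12)
(crontab -l 2>/dev/null; echo '0 */8 * * * /bin/bash /opt/scripts/refresh-ecr-secret.sh >> /var/log/ecr-secret-refresh.log 2>&1') | crontab -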
For the 12-hour problem: if you are using Kubernetes 1.20, configure and use the kubelet image credential provider:
https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/
You need to enable the alpha feature gate KubeletCredentialProviders in your kubelet.
If you are using a lower Kubernetes version where this feature is not available, then use https://medium.com/@damitj07/how-to-configure-and-use-aws-ecr-with-kubernetes-rancher2-0-6144c626d42c
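Roughly, the kubelet is pointed at a small credential-provider config file plus the ecr-credential-provider binary. The following is only an illustrative sketch of what that configuration looked like around the 1.20 alpha; field names, API versions, and flags have changed in later releases, so check the documentation for your exact version:

# /etc/kubernetes/credential-provider-config.yaml (illustrative)
apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1
# kubelet flags (illustrative):
#   --feature-gates=KubeletCredentialProviders=true
#   --image-credential-provider-config=/etc/kubernetes/credential-provider-config.yaml
#   --image-credential-provider-bin-dir=/usr/local/bin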

Unable to pull public docker image packages from GitHub through Kubernetes

I created a sample Node.js project in GitHub and created a Docker image for it. I uploaded the Docker image as a package in the same repository. This is a public repo. I created a Kubernetes config YAML file with this image as the pod's image. Following is the YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  selector:
    matchLabels:
      component: node-server
  template:
    metadata:
      labels:
        component: node-server
    spec:
      containers:
        - name: node-server
          image: docker.pkg.github.com/lethalbrains/intense_omega/io_service:latest
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: dockerconfigjson-github-com
---
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  selector:
    component: node-server
  ports:
    - port: 3000
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /api/
            backend:
              serviceName: server-cluster-ip-service
              servicePort: 3000
After I apply this file using kubectl and check the pod's details, I get an ImagePullBackOff error.
I even tried the option of using a dockerconfigjson secret with a GitHub Personal Access Token, but still got the same result.
Edit:
Added error message from pods describe
This seems to be an issue with the GitHub registry which is being discussed here.
What I can recommend is to push the image to Docker Hub, or to create a private repo, which you can read about at Using a private Docker Registry with Kubernetes.
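For example, if you move the image to Docker Hub as a private repository, the pull secret for it would be created along these lines (the secret name regcred and the credentials are placeholders):

kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<docker-hub-username> \
  --docker-password=<docker-hub-password-or-access-token> \
  --docker-email=<email>

and then referenced from the deployment's imagePullSecrets just like dockerconfigjson-github-com is now.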
There seems to be a workaround, but I have not tested it.
It was published by @sudomaxime and is available here:
Here's a nasty little workaround for those who:
Don't mind losing blue/green deploys until this is resolved
Don't mind 10-15 secs app start-up time
Use docker swarm / docker stack deploys
Use CI scripts for deployment
In your CI scripts call:
$ docker stack rm {{ your_stack_name }}
$ until [ -z $(docker stack ps {{ your_stack_name }} -q) ]; do sleep 1; done
$ docker stack deploy --with-registry-auth -c docker-compose.yml {{ your_stack_name }}
Basically you ask the Docker scheduler to stop all the services under the {{ your_stack_name }} orchestrator. A quirk of docker swarm is that docker stack rm will return immediately even if some services are not properly closed, which may cause networking errors when you try to deploy again. That's why we use a small inline script, until [ -z $(docker stack ps {{ your_stack_name }} -q) ]; do sleep 1; done, to wait for the proper return.
Hope it saves a few folks some headaches. I guess a similar temporary fix will help you out.
This is quite a frustrating issue; for our apps that MUST use blue/green deploys we bought a private repo to fix the problem.

Image Pulling issue in Kubernetes deployment from Dockerhub registry

I am currently trying to implement a CI/CD pipeline using Docker, Kubernetes, and Jenkins. When I created the Kubernetes deployment YAML file for the pipeline, I did not include a timestamp; I was only relying on imagePullPolicy with the :latest tag in the YAML file. I already had one discussion here about pulling :latest; the following is the link to that discussion:
Docker image not pulling latest from dockerhub.com registry
After this discussion, I included the timestamp in my deployment YAML like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-kube-deployment
  labels:
    app: test-kube-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-kube-deployment
  template:
    metadata:
      labels:
        app: test-kube-deployment
      annotations:
        date: "+%H:%M:%S %d/%m/%y"
    spec:
      imagePullSecrets:
        - name: "regcred"
      containers:
        - name: test-kube-deployment-container
          image: spacestudymilletech010/spacestudykubernetes:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8085
              protocol: TCP
Here I modified the deployment to include the timestamp by adding the following in the template:
annotations:
  date: "+%H:%M:%S %d/%m/%y"
My service file is like the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - port: 8085
      targetPort: 8085
      protocol: TCP
      name: http
  selector:
    app: test-kube-deployment
My Jenkinsfile contains the following:
stage('imagebuild') {
  steps {
    sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes:latest /var/lib/jenkins/workspace/jpipeline/pipeline'
    sh 'docker login --username=<my-username> --password=<my-password>'
    sh 'docker push spacestudymilletech010/spacestudykubernetes:latest'
  }
}
stage('Test Deployment') {
  steps {
    sh 'kubectl apply -f deployment/testdeployment.yaml'
    sh 'kubectl apply -f deployment/testservice.yaml'
  }
}
But the deployment is still not pulling the latest image from the Docker Hub registry. How can I modify these scripts to resolve the latest-pull problem?
The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:
set the imagePullPolicy of the container to Always.
omit the imagePullPolicy and use :latest as the tag for the image to use.
omit the imagePullPolicy and the tag for the image to use.
enable the AlwaysPullImages admission controller.
Basically, either use :latest or use imagePullPolicy: Always.
Try it and let me know how it goes!
Referenced from here
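For reference, a minimal container fragment for the first two options, reusing the names from the question's own deployment:

containers:
  - name: test-kube-deployment-container
    image: spacestudymilletech010/spacestudykubernetes:latest
    # either keep this line to force a pull on every pod start ...
    imagePullPolicy: Always
    # ... or omit it entirely; with the :latest tag the policy already defaults to Always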
There are many articles and docs that explain how to properly build and publish a Docker image using Jenkins.
You should first read Using Docker with Pipeline, which shows an example using the environment variable ${env.BUILD_ID}:
node {
  checkout scm
  docker.withRegistry('https://registry.example.com', 'credentials-id') {
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    /* Push the container to the custom Registry */
    customImage.push()
  }
}
Or to put it as a stage:
stage('Push image') {
  docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
    app.push("${env.BUILD_NUMBER}")
    app.push("latest")
  }
}
I really do recommend reading Building your first Docker image with Jenkins 2: Guide for developers, which I think will answer many if not all of your questions.
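If you do tag images with the build number, the deploy stage also has to point the Deployment at that new tag, since re-applying an unchanged manifest triggers no rollout. A minimal sketch of such a step (deployment and container names taken from the question's own manifests, BUILD_NUMBER being Jenkins' built-in variable):

# roll the Deployment to the freshly pushed, uniquely tagged image;
# changing the image field is what actually triggers new pods
kubectl set image deployment/test-kube-deployment \
  test-kube-deployment-container=spacestudymilletech010/spacestudykubernetes:${BUILD_NUMBER}
kubectl rollout status deployment/test-kube-deployment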

Kubernetes Continuous deployment stage in Gitlab Online fails

I am working on setting up a cloud DevOps deployment pipeline using GitLab CI online, Kubernetes, and Docker. I am following the example posts Continuous delivery of a spring boot application with Gitlab CI and kubernetes and Kubectl delete/create secret forbidden (Google Cloud Platform).
Find below my .gitlab-ci.yml file's source
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
stages:
  - build
  - package
  - deploy
maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
docker-build:
  stage: package
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t registry.gitlab.com/username/mta-hosting-optimizer .
    - docker push registry.gitlab.com/username/mta-hosting-optimizer
k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west1-c
    - gcloud config set project mta-hosting-optimizer
    - gcloud config unset container/use_client_certificate
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials mta-hosting-optimizer
    - kubectl create -f admin.yaml --validate=false
    - kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=username --docker-password=$REGISTRY_PASSWD --docker-email=email@email.com
    - kubectl apply -f deployment.yml
Deployment fails at the line below
- kubectl create -f admin.yaml --validate=false
The error message displayed upon this failure is as follows:
error: error converting YAML to JSON: yaml: mapping values are not allowed in this context
ERROR: Job failed: exit code 1
The admin.yaml file's source is as follows:
apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard labels: k8s-app: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kube-system
The Maven build and Docker build/package stages work fine. This is the only stage that fails. I would appreciate everyone's help in resolving this issue.
Thank you very much.
You have a YAML validation error. This means that your YAML isn't formatted correctly.
You most likely wanted to format your admin.yaml file this way:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
Also: as Matthew L Daniel already pointed out, you shouldn't disable validation of the YAML files.
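One way to catch this kind of mistake before the pipeline runs is a local dry run against the manifest (a quick sketch; on older kubectl versions the flag is plain --dry-run):

# fails fast with the same "mapping values are not allowed" error if the YAML is malformed
kubectl apply --dry-run=client -f admin.yaml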

how to restart jenkins service inside pod in kubernetes cluster

I have created a Kubernetes cluster and deployed Jenkins with the following file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-ci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: jenkins-ci
    spec:
      containers:
        - name: jenkins-ci
          image: jenkins:2.32.2
          ports:
            - containerPort: 8080
and the service with:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-cli-lb
spec:
  type: NodePort
  ports:
    # the port that this service should serve on
    - port: 8080
      nodePort: 30000
  # label keys and values that must match in order to receive traffic for this service
  selector:
    run: jenkins-ci
Now I can access the Jenkins UI in my browser without any problems. My issue: I have run into a situation in which I need to restart the Jenkins service manually. How can I do that?
Just run kubectl delete pods -l run=jenkins-ci. This will delete all pods with this label (your Jenkins containers).
Since they are managed by a Deployment, it will re-create the containers. Network routing will be adjusted automatically (again because of the label selector).
See https://kubernetes.io/docs/reference/kubectl/cheatsheet/
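For example (the rollout command requires kubectl and Kubernetes 1.15 or newer; on older clusters, deleting the pods as above is the way to go):

# delete the Jenkins pod; the Deployment controller immediately creates a replacement
kubectl delete pods -l run=jenkins-ci
# or, on newer clusters, trigger a rolling restart of the Deployment
kubectl rollout restart deployment jenkins-ci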
You can use the command below to enter the pod's container:
$ kubectl exec -it <jenkins-pod-name> -- /bin/bash
Then run the Jenkins service restart command inside the container.
For more details please refer to: how to restart service inside pod in kubernetes cluster.
