I have this Kubernetes Job instance:
apiVersion: batch/v1
kind: Job
metadata:
  name: job
spec:
  template:
    spec:
      containers:
        - name: job
          image: 172.30.34.145:5000/myproj/app:latest
          command: ["/bin/sh", "-c", "$(COMMAND)"]
      serviceAccount: default
      serviceAccountName: default
      restartPolicy: Never
How can I write the image name so that it always pulls from within my own namespace?
I'd like to set it like this:
image: app:latest
But that fails, saying it's unable to pull the image.
To pull from a repository other than Docker Hub you need to specify the host:port part in the image name. As far as I am aware, there is currently no option to change the location of the default registry in the Docker daemon.
If you are set on the idea, you could fiddle with DNS so that it resolves to your image registry instead of Docker's, but that would cut you off from Docker Hub completely.
Related
I have a local kubernetes cluster up and running using k3s. It works like a charm so far.
On it I'm running a custom Docker registry from which I want to pull images for other deployments.
The registry is exposed to the host by means of a NodePort service. Internally it has port 5000, externally it's on port 31320.
I can push docker images to the registry from the host by tagging them as myhostname:31320/myimage:latest. This works great too.
Now I want to use this image in a basic Job deployment. I'm using the whole tag myhostname:31320/myimage:latest as container image entry like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
spec:
  template:
    metadata:
      name: hello-world-pod
    spec:
      containers:
        - name: hello-world
          image: myhostname:31320/myimage:latest
      restartPolicy: Never
Unfortunately, I keep getting a 400 BadRequest error stating: image can't be pulled. If I try using the internal service name of the registry and the internal port instead, like in private-registry:5000/myimage:latest, I'm getting the same error.
I suppose I cannot use private-registry:5000/myimage:latest because that's just not the tag of the image. I cannot push the image to private-registry:5000/myimage:latest because the host private-registry is only known inside the cluster and the port 5000 is not exposed to the host.
So... I'm stuck. What am I going to do about this? How do I get to push images from the host to the registry and allow them to be pulled from inside the cluster?
Kubernetes has rich documentation on how to configure access to multiple registries so that deployments/pods can pull from public or even private registries. To do so, you can create an image pull secret resource (docs). You can either create it by running this command:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword>
or by deploying this resource in your cluster:
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: awesomeapps
data:
  # Make sure you convert the whole file to base64!
  # cat registry.json | base64 -w 0
  .dockerconfigjson: <registry.json>
type: kubernetes.io/dockerconfigjson
registry.json example
{
  "auths": {
    "your.private.registry.example.com": {
      "username": "janedoe",
      "password": "xxxxxxxxxxx",
      "email": "jdoe@example.com",
      "auth": "c3R...zE2"
    }
  }
}
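To fill the .dockerconfigjson field above, the whole registry.json file must be base64-encoded (note that base64 -d decodes; encoding is the default). A minimal round-trip check, using a throwaway copy of the file with the placeholder credentials from the example:

```shell
# Write a sample registry.json (placeholder credentials, not real ones)
cat > /tmp/registry.json <<'EOF'
{"auths":{"your.private.registry.example.com":{"username":"janedoe","password":"xxxxxxxxxxx"}}}
EOF

# Encode for the .dockerconfigjson field; -w 0 (GNU base64) disables line wrapping
ENCODED=$(base64 -w 0 < /tmp/registry.json)
echo "$ENCODED"

# Sanity check: decoding must reproduce the original file exactly
echo "$ENCODED" | base64 -d | diff -q - /tmp/registry.json && echo "round-trip OK"
```

The same encoded string is what kubectl create secret docker-registry generates for you behind the scenes.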
And now you can simply attach this imagePullSecrets entry to your deployment:
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
spec:
  template:
    metadata:
      name: hello-world-pod
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: hello-world
          image: myhostname:31320/myimage:latest
      restartPolicy: Never
PS
You might also consider adding your registry to the Docker daemon as an insecure registry if you encounter other issues.
you can check this SO question
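As a sketch of what that looks like, the insecure-registries setting goes into the Docker daemon config. The host:port below is taken from the k3s example above; this writes to a temp copy, whereas on a real host you would edit /etc/docker/daemon.json and restart dockerd:

```shell
# Sketch: mark the private registry as insecure in the daemon config.
# Using /tmp here so the real daemon config is untouched.
cat > /tmp/daemon.json <<'EOF'
{
  "insecure-registries": ["myhostname:31320"]
}
EOF
cat /tmp/daemon.json
```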
I built a Docker image locally. Its name is myapp.
I deploy it with myjob.yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: myapp
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: myapp
              image: myapp
Use kind as a local k8s cluster environment. Load this image:
kind load docker-image myapp
Deploy app:
kubectl apply -f myjob.yaml
Checking the pods' logs, it can't find the image myapp.
Is it necessary to run a container registry locally to serve images?
Providing an answer based on @David Maze's comment.
There's a note in the kind documentation that specifying image: myapp with an implicit ...:latest tag will cause the cluster to try to pull the image again, so you either need a per-build tag (preferred) or to explicitly specify imagePullPolicy: Never
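One way to follow the preferred option is a per-build tag. The snippet below only prints the commands it would run (a dry run); the image and cronjob names come from the question, while the timestamp tag scheme is an assumption:

```shell
# Per-build tag instead of the implicit :latest, so the kind cluster
# never tries to re-pull the image from a remote registry
TAG=$(date +%Y%m%d-%H%M%S)
IMAGE="myapp:${TAG}"

# Printed as a dry run; on a real setup you would execute these directly
echo "docker build -t ${IMAGE} ."
echo "kind load docker-image ${IMAGE}"
echo "kubectl set image cronjob/myapp myapp=${IMAGE}"
```

Alternatively, keeping image: myapp and adding imagePullPolicy: Never to the container spec also works, at the cost of having to remember to re-load and restart on every rebuild.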
When my yaml is something like this:
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
Where is the nginx image coming from? For example, in the GKE Kubernetes world, if I were referencing an image from a registry it would normally be something like this:
image: gcr.io/foo/nginx
but in this case it's just an image name:
image: nginx
So I'm just trying to understand where the source registry is when this is deployed on a K8s cluster. It seems to pull down OK, but I want to know how I can figure out where it's supposed to come from.
It's coming from Docker Hub (https://hub.docker.com/) when only the image name is specified in the manifest file. Example:
...
containers:
  - name: nginx
    image: nginx
...
For nginx, it comes from the official nginx repository (https://hub.docker.com/_/nginx).
OK, I was able to specifically confirm from my node based on the container runtime:
If your nodepool is running docker, you may run docker system info and look for the image being pulled from mirror.gcr.io.
If your nodepool is running containerd, you may run sudo crictl info and look for the same as above.
I am currently trying to implement a CI/CD pipeline using Docker, Kubernetes and Jenkins. When I created the pipeline's Kubernetes deployment YAML file, I did not include a time stamp; I was only using the :latest tag in the YAML file. Regarding the latest-pull issue I already had a discussion here; the following is the link to that discussion:
Docker image not pulling latest from dockerhub.com registry
After this discussion, I included the time stamp in my deployment YAML like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-kube-deployment
  labels:
    app: test-kube-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-kube-deployment
  template:
    metadata:
      labels:
        app: test-kube-deployment
      annotations:
        date: "+%H:%M:%S %d/%m/%y"
    spec:
      imagePullSecrets:
        - name: "regcred"
      containers:
        - name: test-kube-deployment-container
          image: spacestudymilletech010/spacestudykubernetes:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8085
              protocol: TCP
Here I modified my script to include the time stamp by adding the following to the template:
annotations:
  date: "+%H:%M:%S %d/%m/%y"
My service file is like the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - port: 8085
      targetPort: 8085
      protocol: TCP
      name: http
  selector:
    app: test-kube-deployment
My Jenkinsfile contains the following:
stage('imagebuild') {
  steps {
    sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes:latest /var/lib/jenkins/workspace/jpipeline/pipeline'
    sh 'docker login --username=<my-username> --password=<my-password>'
    sh 'docker push spacestudymilletech010/spacestudykubernetes:latest'
  }
}
stage('Test Deployment') {
  steps {
    sh 'kubectl apply -f deployment/testdeployment.yaml'
    sh 'kubectl apply -f deployment/testservice.yaml'
  }
}
But the deployment is still not pulling the latest image from the Docker Hub registry. How can I modify these scripts to resolve the latest-pull problem?
The default pull policy is IfNotPresent, which causes the kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:
set the imagePullPolicy of the container to Always.
omit the imagePullPolicy and use :latest as the tag for the image to use.
omit the imagePullPolicy and the tag for the image to use.
enable the AlwaysPullImages admission controller.
Basically, either use :latest as the tag or use imagePullPolicy: Always.
Try it and let me know how it goes!
Referenced from here
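One caveat worth adding (not from the referenced answer): imagePullPolicy: Always only takes effect when a pod starts, and kubectl apply with an unchanged spec creates no new pods, so nothing gets pulled. Printed as a dry run, one way to force fresh pods (and therefore a fresh pull) for the deployment from the question:

```shell
# With an unchanged Deployment spec, `kubectl apply` is a no-op, so even
# imagePullPolicy: Always pulls nothing. Restarting the rollout replaces
# the pods, and each new pod triggers a pull of :latest.
CMD="kubectl rollout restart deployment/test-kube-deployment"
echo "$CMD"   # printed as a sketch; on a live cluster you would run it directly
```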
There are many articles and docs that explain how to properly build and publish a Docker image using Jenkins.
You should first read Using Docker with Pipeline, which shows an example with the environment variable ${env.BUILD_ID}:
node {
  checkout scm

  docker.withRegistry('https://registry.example.com', 'credentials-id') {
    def customImage = docker.build("my-image:${env.BUILD_ID}")

    /* Push the container to the custom Registry */
    customImage.push()
  }
}
Or to put it as a stage:
stage('Push image') {
  docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
    app.push("${env.BUILD_NUMBER}")
    app.push("latest")
  }
}
I really do recommend reading Building your first Docker image with Jenkins 2: Guide for developers, which I think will answer many if not all of your questions.
In my Dockerfile I copy the code in like this:
COPY src/ /var/www/html/
But somehow my code changes don't appear like they used to with plain Docker. They don't appear unless I remove the Pods. How do I sync it?
I am using minikube.
webserver.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: php-apache
          image: learningk8s_website
          imagePullPolicy: Never
          ports:
            - containerPort: 80
When your container spec says:
image: learningk8s_website
imagePullPolicy: Never
The second time you kubectl apply it, Kubernetes determines that it's exactly the same as the Deployment spec you already have and does nothing. Even if it did generate new Pods, the server is highly likely to notice that it already has an image learningk8s_website:latest and won't pull a new one; indeed, you're explicitly telling Kubernetes not to.
The usual practice here is to include some unique identifier in the image name, such as a date stamp or commit hash.
IMAGE=$REGISTRY/name/learningk8s_website:$(git rev-parse --short HEAD)
docker build -t "$IMAGE" .
docker push "$IMAGE"
You then need to make the corresponding change in the Deployment spec and kubectl apply it. This will cause Kubernetes to notice that there is some change in the pod spec, create new pods with the new image, and destroy the old pods (in that order). You may find a templating engine like Helm to be useful to make it easier to inject this value into the YAML.
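A minimal sketch of that templating step, done here on a throwaway copy of the relevant part of the manifest (the sed pattern assumes the image line looks exactly like the one in webserver.yaml above):

```shell
# Throwaway copy of the relevant fragment of webserver.yaml
cat > /tmp/webserver.yaml <<'EOF'
      containers:
        - name: php-apache
          image: learningk8s_website
          imagePullPolicy: Never
EOF

# Unique identifier: the current commit if available, otherwise a timestamp
TAG=$(git rev-parse --short HEAD 2>/dev/null || date +%s)

# Inject the tag; on a real setup this output would be fed to `kubectl apply -f -`
sed "s|image: learningk8s_website$|image: learningk8s_website:${TAG}|" /tmp/webserver.yaml
```

A templating engine like Helm replaces the sed step with a values parameter, but the mechanism is the same: the pod spec changes, so Kubernetes rolls out new pods.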