Is it possible to use a local image in a pod's YAML in Kubernetes? - docker

Is it possible to set a local image in a Kubernetes pod YAML file?
This is my pod YAML file, and the question is whether I can use a local image in the containers section (locally, I have all the files for my API project: Dockerfile, etc.).
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: api-service
spec:
  selector:
    matchLabels:
      api-name: api-service
  replicas: 2
  template:
    metadata:
      labels:
        api-name: api-service
    spec:
      containers:
      - name: api-service
        image: #HERE

By local you mean it doesn't pull from Docker Hub or any public registry. Yes, it's possible if you run a single-node Kubernetes cluster: you utilize the Docker cache of the node where your kubelet is running.
First, you need to set imagePullPolicy: IfNotPresent. Then, when you build your image, you need to point your Docker client at the Docker instance your Kubernetes cluster is using.
I do this mostly with minikube, so the dev iteration is faster without pushing to my registry.
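A minimal sketch of that minikube workflow (the image name my-api:dev is a placeholder):
# Point the local Docker client at minikube's Docker daemon
eval $(minikube docker-env)
# Build the image straight into the cluster node's Docker cache
docker build -t my-api:dev .
The pod spec can then reference image: my-api:dev with imagePullPolicy: IfNotPresent, and no registry pull is needed.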

Related

Unable to pull docker image from local registry for Kubernetes deployment

I have a K8s cluster on Linode and another VM for operations.
I've installed Docker and K8s on the operations VM to build images and deploy to the cluster.
Note: I haven't installed minikube on this VM.
I'm able to build my image, but the K8s pod is not able to pull it from the local registry.
Below are the things I've already done and tried in order to solve the problem:
1. Created and pushed the Docker image to the local registry.
2. Ran a Docker container from the image, but it doesn't get pulled in K8s.
3. Created a "regcred" secret and used it in the deployment YAML.
4. Created and pushed the image with the VM's IP (10.128.234.123:5000/app-frontend) and used the same in the deployment image reference.
5. Changed the image pull policy to IfNotPresent.
I get the following error in pod description:
Warning ErrImageNeverPull 11s (x4 over 13s) kubelet Container image "localhost:5000/app-frontend" is not present with pull policy of Never
Warning Failed 11s (x4 over 13s) kubelet Error: ErrImageNeverPull
Below is my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-frontend
  labels:
    app: app-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-frontend
  template:
    metadata:
      labels:
        app: app-frontend
    spec:
      containers:
      - name: app-frontend
        image: localhost:5000/docker-image
        imagePullPolicy: Never
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
Any help or guidance would be greatly appreciated.
In the docs I see this:
With imagePullPolicy set to Never, the image is never pulled.
Try this instead:
imagePullPolicy: IfNotPresent
Also, the deployment references
image: localhost:5000/docker-image
but in point 4 you push the image with the VM's IP, so the two references don't match.
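A sketch of what a consistent container spec might look like, assuming the registry is reachable from the cluster nodes at the VM's IP and the image was tagged app-frontend as in point 4:
containers:
- name: app-frontend
  image: 10.128.234.123:5000/app-frontend
  imagePullPolicy: IfNotPresent
  ports:
  - containerPort: 80
The key point is that the image name in the deployment must match the name the image was pushed under; localhost:5000 on a cluster node is not the same host as localhost:5000 on the operations VM.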

How to deploy container to local kubernetes environment such as kind?

I built a Docker image locally. Its name is myapp.
I deploy it with myjob.yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: myapp
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: myapp
            image: myapp
I use kind as the local K8s cluster environment. I load the image:
kind load docker-image myapp
and deploy the app:
kubectl apply -f myjob.yaml
Checking the pod's events, it cannot find the image myapp.
Is it necessary to run a local container registry to serve images?
Providing an answer based on David Maze's comment.
There's a note in the kind documentation that specifying image: myapp with an implicit ...:latest tag will cause the cluster to try to pull the image again, so you either need a per-build tag (preferred) or to explicitly specify imagePullPolicy: Never
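A minimal sketch of the per-build-tag approach (the tag scheme here, the short git commit hash, is just one example):
# Tag each build uniquely
TAG=$(git rev-parse --short HEAD)
docker build -t myapp:$TAG .
# Load that exact tag into the kind cluster nodes
kind load docker-image myapp:$TAG
Then reference the same tag in myjob.yaml, e.g. image: myapp:ab12cd3, so the kubelet finds the preloaded image instead of trying to pull :latest.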

How to configure kubernetes (microk8s) to use local docker images?

I've built a Docker image locally:
docker build -t backend -f backend.docker .
Now I want to create deployment with it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
spec:
  selector:
    matchLabels:
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
      - name: backend
        image: backend
        imagePullPolicy: IfNotPresent # explicit, since an untagged image would default to Always
        ports:
        - containerPort: 80
kubectl apply -f file_provided_above.yaml works, but then I have the following pod statuses:
$ kubectl get pods
NAME                                  READY   STATUS             RESTARTS   AGE
backend-deployment-66cff7d4c6-gwbzf   0/1     ImagePullBackOff   0          18s
Before that it was ErrImagePull. So my question is: how do I tell it to use local Docker images? Somewhere on the internet I read that I need to build images using microk8s.docker, but that command seems to have been removed.
I found the docs on how to use a private registry: https://microk8s.io/docs/working
First it needs to be enabled:
microk8s.enable registry
Then the image is tagged and pushed to the registry:
docker tag backend localhost:32000/backend
docker push localhost:32000/backend
And then, in the config above, image: backend needs to be replaced with image: localhost:32000/backend.
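For clarity, a sketch of the updated container section after that change:
containers:
- name: backend
  image: localhost:32000/backend
  imagePullPolicy: IfNotPresent
  ports:
  - containerPort: 80
localhost:32000 works from the nodes because the MicroK8s registry add-on is exposed on port 32000 of the node itself.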

Kubernetes: The code change does not appear, is there a way to sync?

In my Dockerfile I copy the code in like this:
COPY src/ /var/www/html/
but somehow my code changes don't appear the way they used to with plain Docker. Unless I delete the Pods, the changes do not show up. How do I sync them?
I am using minikube.
webserver.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: php-apache
        image: learningk8s_website
        imagePullPolicy: Never
        ports:
        - containerPort: 80
When your container spec says:
image: learningk8s_website
imagePullPolicy: Never
The second time you kubectl apply it, Kubernetes determines that it's exactly the same as the Deployment spec you already have and does nothing. Even if it did generate new Pods, the server is highly likely to notice that it already has an image learningk8s_website:latest and won't pull a new one; indeed, you're explicitly telling Kubernetes not to.
The usual practice here is to include some unique identifier in the image name, such as a date stamp or commit hash.
IMAGE=$REGISTRY/name/learningk8s_website:$(git rev-parse --short HEAD)
docker build -t "$IMAGE" .
docker push "$IMAGE"
You then need to make the corresponding change in the Deployment spec and kubectl apply it. This will cause Kubernetes to notice that there is some change in the pod spec, create new pods with the new image, and destroy the old pods (in that order). You may find a templating engine like Helm to be useful to make it easier to inject this value into the YAML.
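One lightweight way to make that corresponding change, using the deployment and container names from the manifest above:
kubectl set image deployment/webserver php-apache="$IMAGE"
This updates the pod template in place and triggers the same rolling replacement as editing the YAML and running kubectl apply.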

Kubernetes Workflow

I have been using kubernetes for a while now.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0+2831379", GitCommit:"283137936a498aed572ee22af6774b6fb6e9fd94", GitTreeState:"not a git tree", BuildDate:"2016-07-05T15:40:25Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean", BuildDate:"", GoVersion:"", Compiler:"", Platform:""}
I usually set up an Ingress, a Service and a ReplicationController for each project.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: portifolio
  name: portifolio-ingress
spec:
  rules:
  - host: www.cescoferraro.xyz
    http:
      paths:
      - path: /
        backend:
          serviceName: portifolio
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: portifolio
  name: portifolio
  labels:
    name: portifolio
spec:
  selector:
    name: portifolio
  ports:
  - name: web
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: ReplicationController
metadata:
  namespace: portifolio
  name: portifolio
  labels:
    name: portifolio
spec:
  replicas: 1
  selector:
    name: portifolio
  template:
    metadata:
      namespace: portifolio
      labels:
        name: portifolio
    spec:
      containers:
      - image: cescoferraro/portifolio:latest
        imagePullPolicy: Always
        name: portifolio
        env:
        - name: KUBERNETES
          value: "true"
        - name: BRANCH
          value: "production"
My "problem" is that for deploying my app I usually do:
kubectl delete -f kubernetes.yaml
kubectl create -f kubernetes.yaml
I wish I could use a single command to deploy, whether my app is up or down. Rolling updates do not work when I use the same image (I think it's a bug in my Kubernetes server version), but they also do not work when the app has never been deployed at all.
I have read about Deployments; I wonder how they would help me.
Goals
1. Deploy the app if it is brand new.
2. Replace existing pods with new ones using a new image from the Docker registry.
I don't think keeping all resources inside one single manifest helps you with what you want to achieve, since your Service, Ingress and ReplicationController are not likely to change simultaneously.
If all you want to do is roll out new pods, I would recommend you to replace your ReplicationController with a Deployment. Manifests have almost the exact same syntax so it's easy to migrate from standard RCs, and you could perform a server-side rolling update with a single kubectl replace -f manifest.yml.
Please note that even with a Deployment resource you can't trigger a redeployment if nothing has changed in your manifest; kubectl replace would just do nothing. You could therefore, for example, increment or change a tag inside your manifest to force the deployment when needed (e.g. revision: 003).
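As a concrete sketch of such a forced change (the annotation name revision is arbitrary), you could bump a value inside the pod template, since any pod-template change triggers a rollout:
spec:
  template:
    metadata:
      annotations:
        revision: "003"  # increment on every deploy to force new pods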
As already written in the previous answer, it is recommended to use a Deployment instead of a ReplicationController for this.
Using imagePullPolicy: Always will only ensure that Kubernetes does a docker pull before starting new Pods. It does not force recreation of Pods when nothing in the Deployment resource changes.
I would suggest adding two things to your solution:
1. Add a label to the Deployment with CURRENT_DATE as a placeholder value.
2. Add a simple shell script to your project that replaces the placeholder with the current timestamp and then uses kubectl to apply the resources.
Example Bash script
#!/usr/bin/env bash
# date +%s avoids spaces and slashes, which are invalid in label values and would break the sed expression
sed "s/CURRENT_DATE/$(date +%s)/" kubernetes.yaml | kubectl apply -f -
Then use this script for redeployment instead of calling kubectl by yourself.
This is only meant as a very simple example. When it comes to creating/applying/patching resources in Kubernetes, things tend to get more and more complicated over time. If this happens, consider using a more advanced templating solution, e.g. Python with Jinja2.
You could use a deployment for this. Create it the first time, and after that you only need to do kubectl set image deploy/my-app app=user/image:tag --record and you're good to go.
Doing that, you can also do cool things like kubectl rollout undo deploy/my-app or get history and status.
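A sketch of that workflow end to end, assuming the Deployment is named my-app and its container is named app as in the command above:
# First-time creation
kubectl apply -f deployment.yaml
# Roll out a new image
kubectl set image deploy/my-app app=user/image:v2 --record
# Inspect or roll back the rollout
kubectl rollout status deploy/my-app
kubectl rollout history deploy/my-app
kubectl rollout undo deploy/my-app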
You might consider using Argo.
Argo is an open-source workflow engine for Kubernetes. It lets you define complex microservices-based application deployments using YAML in a source repo, and it automatically re-deploys the app on YAML changes (e.g. on every commit to the production branch).
