How to initialize systemd services in a Kubernetes pod?

I have an image based on centos/systemd. When I run "exec /usr/sbin/init" in the launcher file of the container and create the container with Docker, the systemd services come up.
But when I create a container from the same image in Kubernetes with the same launcher file, the systemd services do not come up. How do I run /usr/sbin/init in Kubernetes so that the systemd services start during container creation?

To solve this issue you can use a Kubernetes init container, which runs to completion before the main container starts and can start the necessary services.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  initContainers:
  - name: check-system-ready
    image: busybox
    command: ['sh', '-c', 'your systemd setup command here']
  containers:
  - your container spec
Sharing the official Kubernetes init container docs: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/

Related

Local Kubernetes: docker pull from local image fails

My story:
1. I created a Spring Boot project with a Dockerfile inside.
2. I successfully built the Docker image LOCALLY with the above Dockerfile.
3. I have minikube running a local Kubernetes cluster.
4. However, when I try to apply the k8s.yaml, it tells me there is no such Docker image. Apparently my cluster searches the public Docker Hub, so what can I do?
Below is my Dockerfile
FROM openjdk:17-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
Below is my k8s.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pkslow-springboot-deployment
spec:
  selector:
    matchLabels:
      app: springboot
  replicas: 2
  template:
    metadata:
      labels:
        app: springboot
    spec:
      containers:
      - name: springboot
        image: cicdstudy/apptodocker:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: springboot
  name: pkslow-springboot-service
spec:
  ports:
  - port: 8080
    name: springboot-service
    protocol: TCP
    targetPort: 8080
    nodePort: 30080
  selector:
    app: springboot
  type: NodePort
Kubernetes has no centralized, built-in container image registry.
Depending on the container runtime on your cluster nodes, it will typically try Docker Hub first when pulling images.
Since Docker Hub now restricts anonymous pulls, it is worth creating an account for development purposes. You get one private repository and unlimited public repositories, which means anything you push to a public repository can be accessed by anybody.
If intellectual property is not a big concern, you can keep using that free account for development. For production, you should switch to a service/robot account.
Create an Account on DockerHub https://id.docker.com/login/
Log in to your Docker Hub account on the machine where you are building your container image (you will be prompted for your password):
docker login --username=yourhubusername
Build, re-tag and push your image once more (from the folder where the Dockerfile resides):
docker build -t mysuperimage:v1 .
docker tag mysuperimage:v1 yourhubusername/mysuperimage:v1
docker push yourhubusername/mysuperimage:v1
Create a secret for image registry credentials
kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username= --docker-password= --docker-email=
Create a service account for deployment
kubectl create serviceaccount yoursupersa
Attach secret to the service account named "yoursupersa"
kubectl patch serviceaccount yoursupersa -p '{"imagePullSecrets": [{"name": "regcred"}]}'
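You can verify the secret is attached (a quick check):
kubectl get serviceaccount yoursupersa -o=jsonpath='{.imagePullSecrets[0].name}'
# should print: regcred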
Now create your application as a Deployment resource in Kubernetes:
kubectl create deployment mysuperapp --image=yourhubusername/mysuperimage:v1 --port=8080
Then patch your deployment with the service account that has the registry credentials attached (this triggers a re-deployment):
kubectl patch deployment mysuperapp -p '{"spec":{"template":{"spec":{"serviceAccountName":"yoursupersa"}}}}'
The last step is to expose your service:
kubectl expose deployment/mysuperapp
Then everything is awesome! :)
If you just want to be able to pull images from your local computer with minikube, you can use eval $(minikube docker-env). This makes all docker commands in that shell use the Docker daemon inside your minikube cluster, so images you build land directly in the cluster's image cache and a pull will find them there instead of on Docker Hub.
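A minimal sketch of that workflow (the image name myimage:v1 is illustrative):
# point this shell's docker CLI at minikube's Docker daemon
eval $(minikube docker-env)
# build directly into the cluster's image cache
docker build -t myimage:v1 .
# reference it with a pull policy that skips the remote registry
kubectl run myapp --image=myimage:v1 --image-pull-policy=Never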

How to refer local docker images loaded from tar file in Kubernetes deployment?

I am trying to create a Kubernetes deployment from local Docker images, with imagePullPolicy set to Never so that Kubernetes picks up the local image imported from a tar file.
Environment
SingleNodeMaster # one node deployment
But Kubernetes keeps trying to fetch the image from the private repository even though it is present locally.
Any pointers on how to debug and resolve this so that Kubernetes picks the image up from the local Docker image cache? Thank you.
Steps performed
docker load -i images.tar
docker images # displays images from myprivatehub.com/nginx/nginx-custom:v1.1.8
kubectl create -f local-test.yaml with imagePullPolicy as Never
Error
Pulling pod/nginx-custom-6499765dbc-2fts2 Pulling image "myprivatehub.com/nginx/nginx-custom:v1.1.8"
Failed pod/nginx-custom-6499765dbc-2fts2 Error: ErrImagePull
Failed pod/nginx-custom-6499765dbc-2fts2 Failed to pull image "myprivatehub.com/nginx/nginx-custom:v1.1.8": rpc error: code = Unknown desc = failed to pull and unpack image "myprivatehub.com/nginx/nginx-custom:v1.1.8": failed to resolve reference "myprivatehub.com/nginx/nginx-custom:v1.1.8": failed to do request: Head "https://myprivatehub.com/v2/nginx/nginx-custom/manifests/v1.1.8": dial tcp: lookup myprivatehub.com: no such host
docker pull <imagename>
Error response from daemon: Get https://myprivatehub.com/v2/: dial tcp: lookup myprivatehub.com on 172.31.0.2:53: no such host
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-custom
  namespace: default
spec:
  selector:
    matchLabels:
      run: nginx-custom
  replicas: 5
  template:
    metadata:
      labels:
        run: nginx-custom
    spec:
      containers:
      - image: myprivatehub.com/nginx/nginx-custom:v1.1.8
        imagePullPolicy: Never
        name: nginx-custom
        ports:
        - containerPort: 80
This happens when the container runtime is not Docker. I was using containerd; after switching the container runtime to Docker, it started working.
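Alternatively, if you want to stay on containerd, you can import the tar directly into containerd's Kubernetes image namespace instead of the Docker image cache (a sketch, assuming the ctr CLI that ships with containerd and the image name from the question):
# import the image into the namespace the kubelet actually uses
sudo ctr -n k8s.io images import images.tar
# confirm it is visible to the runtime
sudo ctr -n k8s.io images ls | grep nginx-custom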
Another approach that achieves a similar result is to use a Docker Registry (see the Docker Registry documentation: https://docs.docker.com/registry/).
We can create a Docker registry on the machine where Kubernetes and Docker are running. One of the easiest ways to do this is as follows:
Create a local private Docker registry. If the registry:2 image is not present locally, this will download and run it:
sudo docker run -d -p 5000:5000 --restart=always --name registry registry:2
Build the image, or load it from a tar as required. For this example, I am building it so I can add it to the local registry:
sudo docker build -t coolapp:v1 .
Once the build is done, tag the image so that it includes the registry's host and port:
sudo docker tag coolapp:v1 localhost:5000/coolapp:v1
Push the new tag to the local private registry
sudo docker push localhost:5000/coolapp:v1
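To confirm the push landed, you can query the registry's HTTP API (a quick check, assuming the registry is listening on localhost:5000):
curl http://localhost:5000/v2/_catalog
# expected output, something like: {"repositories":["coolapp"]}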
Now in the Kubernetes YAML, we can specify the deployment as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycoolapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mycoolapp
  template:
    metadata:
      labels:
        app: mycoolapp
    spec:
      containers:
      - name: mycoolapp
        image: localhost:5000/coolapp:v1
        ports:
        - containerPort: 3000
and we apply the YAML
sudo kubectl apply -f deployment.yaml
Once this is done, we can see that Kubernetes pulls the image from the local private registry and runs it.

Kubernetes: supply parameters for docker

I want to run Docker containers using the real-time scheduler. Is it possible to pass parameters in a pod/deployment file so that Kubernetes runs my containers as follows?
docker run -it --cpu-rt-runtime=950000 \
--ulimit rtprio=99 \
--cap-add=sys_nice \
debian:jessie
Unfortunately, not all Docker command-line features have equivalent options in Kubernetes YAML.
While the sys_nice capability can be set via securityContext in the YAML, --cpu-rt-runtime=950000 cannot.
In the Kubernetes API Pod documentation you can find all the configuration that can be passed to containers,
under PodSecurityContext v1 core.
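For the capability part, a minimal sketch of a pod spec (assuming the debian:jessie image from the question; SYS_NICE corresponds to --cap-add=sys_nice):
apiVersion: v1
kind: Pod
metadata:
  name: rt-pod
spec:
  containers:
  - name: rt-container
    image: debian:jessie
    command: ["sleep", "infinity"]
    securityContext:
      capabilities:
        add: ["SYS_NICE"]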
Another thing: I tried to run a container with the specs you provided, but ran into an error:
docker: Error response from daemon: Your kernel does not support
cgroup cpu real-time runtime. See 'docker run --help'
This is directly related to the kernel configuration option CONFIG_RT_GROUP_SCHED, which is missing from your kernel image. Without it, cpu-rt-runtime cannot be set for a container.
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
You can use ConfigMaps to declare variables, mount the ConfigMap into environment variables, and then pass those environment variables to the container's args.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
Create the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Create the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config
  restartPolicy: Never
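To actually pass those values into the container's arguments (the closest analogue to docker run arguments), Kubernetes expands $(VAR_NAME) references in command and args. A minimal sketch reusing the ConfigMap above:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-args
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/echo", "$(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: SPECIAL_LEVEL
    - name: SPECIAL_TYPE_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: SPECIAL_TYPE
  restartPolicy: Never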
Not all of these options are available in Kubernetes, but you can approximate some resource constraints using Limit Ranges.
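For reference, a minimal LimitRange sketch that gives containers in a namespace a default CPU request and limit (the names and values here are illustrative):
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - type: Container
    default:
      cpu: "1"
    defaultRequest:
      cpu: "500m"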

Kubernetes: how to run application in the container with root privileges

I set up Kubernetes with the master and node on the same hardware (Ubuntu 18) using this tutorial.
Kubernetes 1.15.3
docker 19.03.2
The container I created runs emulation software that needs root privileges and write access to the /proc/sys/kernel directory. When Kubernetes starts the container, the service script /etc/init.d/myservicescript reports an error indicating that it can't write to /proc/sys/kernel/xxx. The container runs Ubuntu 14.
I tried setting "runAsUser: 0" in the pod's YAML file.
I tried setting "USER 0" in the Dockerfile.
Neither works. Any suggestions on how to get this working?
Changing the user inside the container does not give you any privileges on the host. In order to get elevated privileges, you must set privileged: true in the security context.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "999"
    securityContext:
      privileged: true
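If all the workload needs is to write specific kernel parameters under /proc/sys, a narrower alternative may be the pod-level sysctls field rather than full privileged mode (a sketch; kernel.msgmax is illustrative, and most kernel.* sysctls are treated as unsafe, so they must be allowed via the kubelet's --allowed-unsafe-sysctls flag):
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-pod
spec:
  securityContext:
    sysctls:
    - name: kernel.msgmax
      value: "65536"
  containers:
  - name: app
    image: busybox
    args:
    - sleep
    - "999"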

Docker for Windows Kubernetes pod gets ImagePullBackOff after creating a new deployment

I have successfully built Docker images and ran them in a Docker swarm. When I attempt to build an image and run it with Docker Desktop's Kubernetes cluster:
docker build -t myimage -f myDockerFile .
(the above successfully creates an image in the docker local registry)
kubectl run myapp --image=myimage:latest
(as far as I understand, this is the same as using the kubectl create deployment command)
The above command successfully creates a deployment, but when it makes a pod, the pod status always shows:
NAME                                    READY   STATUS             RESTARTS   AGE
myapp-<a random alphanumeric string>    0/1     ImagePullBackOff   0          <age>
I am not sure why it is having trouble pulling the image - does it maybe not know where the docker local images are?
I just had the exact same problem. Boils down to the imagePullPolicy:
PC:~$ kubectl explain deployment.spec.template.spec.containers.imagePullPolicy
KIND: Deployment
VERSION: extensions/v1beta1
FIELD: imagePullPolicy <string>
DESCRIPTION:
Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
More info:
https://kubernetes.io/docs/concepts/containers/images#updating-images
Specifically, the part that says: Defaults to Always if :latest tag is specified.
That means you created a local image, but because you used the :latest tag, Kubernetes will try to find it in whatever remote registry is configured (Docker Hub by default) rather than using your local image. Simply change your command to:
kubectl run myapp --image=myimage:latest --image-pull-policy Never
or
kubectl run myapp --image=myimage:latest --image-pull-policy IfNotPresent
I had this same ImagePullBackOff error while running a pod deployment from a YAML file, also on Docker Desktop.
For anyone else who finds this via Google (like I did), the imagePullPolicy that Lucas mentions above can also be set in the deployment YAML file. See spec.template.spec.containers.imagePullPolicy in the YAML snippet below (3 lines from the bottom).
I added that and my app deployed successfully into my local kube cluster, using the kubectl deploy command: kubectl apply -f .\Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-deployment
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: node-web-app:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
You didn't specify where myimage:latest is hosted, but essentially ImagePullBackOff means the image cannot be pulled because either:
You don't have networking set up in your Docker VM that can reach your Docker registry (Docker Hub?)
myimage:latest doesn't exist in your registry or is misspelled.
myimage:latest requires credentials (you are pulling from a private registry); see the Kubernetes docs on pulling from a private registry for configuring container credentials in a Pod, and the sketch below.
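For the private-registry case, a minimal sketch of wiring credentials into a pod (assuming a docker-registry secret named regcred has already been created, as shown in an earlier answer):
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: myimage:latest
  imagePullSecrets:
  - name: regcred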
