How to import a config file into a container from k8s - docker

I have a Dockerfile which I've written for a React application. This app takes a .json config file that it uses at runtime. The file doesn't contain any secrets.
So I've built the image without the config file, and now I'm unsure how to supply the JSON file when I run the container.
I'm looking at deploying this in production using a CI/CD process which would entail:
gitlab (actions) building the image
pushing this to a docker repository
Kubernetes picking this up and running/starting the container
I think it's at the last point that I want to add the JSON configuration.
My question is: how do I add the config file to the application when k8s starts it up?
If I understand correctly, k8s doesn't have any local storage I could create a volume from and copy the file into. Can I point docker run at a separate git repo where I hold the config files?

You should take a look at ConfigMaps.
From the k8s documentation on ConfigMaps:
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
In your case, you want to consume it as a volume so that it appears as a file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-app
data:
  config.json: | # your file name
    <file-content>
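For example (the JSON keys below are purely illustrative, not taken from the question), the block scalar keeps the file content verbatim:
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-app
data:
  config.json: |
    {
      "apiUrl": "https://api.example.com",
      "showBetaFeatures": true
    }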
A configmap can be created manually (as above) or generated from a file using:
Directly in the cluster: kubectl create configmap <name> --from-file <path-to-file>.
In a yaml file: kubectl create configmap <name> --from-file <path-to-file> --dry-run=client -o yaml > <file-name>.yaml.
Once you have your ConfigMap, modify your deployment/pod to add a volume and a volume mount.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <your-name>
spec:
  ...
  template:
    metadata:
      ...
    spec:
      ...
      containers:
        - name: <container-name>
          ...
          volumeMounts:
            - mountPath: '<path>/config.json'
              name: config-volume
              readOnly: true
              subPath: config.json
      volumes:
        - name: config-volume
          configMap:
            name: <name-of-configmap>
To deploy to your cluster, you can use plain yaml, or I suggest you take a look at Kustomize or Helm charts.
They are both popular systems for deploying applications. If you go with Kustomize, its configmap generator feature fits your case.
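As a rough sketch of the Kustomize route (the file names here are assumptions, not from the question), a kustomization.yaml can generate the ConfigMap straight from the JSON file kept next to your manifests:
# kustomization.yaml -- minimal sketch; resource and file names are assumptions
resources:
  - deployment.yaml
configMapGenerator:
  - name: your-app        # referenced by the deployment's config-volume
    files:
      - config.json       # the runtime config checked into the repo
Running kubectl apply -k . (or kubectl kustomize . to preview) renders the ConfigMap together with the other manifests; the generated name gets a content-hash suffix, so pods roll automatically when config.json changes.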
Good luck :)

Related

add configMap to deployment config on openshift 4

I've recently started using OpenShift 4 and I'm a bit lost.
I have a running pod and I created a ConfigMap for it, but I can't find a way to connect the two.
I've been told to add the ConfigMap to the deployment config of the pod at a specific path.
I tried editing the pod's yaml file to add the file as a volume but got an error when I tried to save the changes.
Does anyone have an idea how I can add the ConfigMap file so I can access it at a specific path in the pod?
An example of adding a configmap as a volume to a pod is explained in the official kubernetes documentation.
Below is the sample:
volumeMounts:
  - name: config-volume
    mountPath: /etc/config
volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: special-config
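Since most fields of a running pod are immutable, editing its yaml directly will usually be rejected; the volume has to go into the pod template of the Deployment/DeploymentConfig instead. A minimal sketch of where those stanzas sit (all names and the image are placeholders; the pod template looks the same inside an OpenShift DeploymentConfig):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: image-registry.example.com/my-app:latest   # placeholder image
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config          # path where the ConfigMap files appear
      volumes:
        - name: config-volume
          configMap:
            name: special-config              # your ConfigMap's name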

How to attach a volume to a kubernetes pod container like in docker?

I am new to Kubernetes but familiar with docker.
Docker Use Case
Usually, when I want to persist data I just create a named volume and attach it to the container, and even when I stop it and then start another one with the same image I can see the data persisting.
So this is what I used to do in docker:
docker volume create nginx-storage
docker run -it --rm -v nginx-storage:/usr/share/nginx/html -p 80:80 nginx:1.14.2
then I:
Create a new html file in /usr/share/nginx/html
Stop container
Run the same docker run command again (will create another container with same volume)
The html file still exists (which means the data persisted in that volume)
Kubernetes Use Case
Usually, when I work with Kubernetes volumes I specify a PVC (PersistentVolumeClaim) and PV (PersistentVolume) using hostPath which will bind mount directory or a file from the host machine to the container.
What I want to do is reproduce the same behavior described in the previous example (Docker Use Case), so how can I do that? Is the process of creating volumes in Kubernetes different from Docker? If possible, a YAML file would help me understand.
To a first approximation, you can't (portably) do this. Build your content into the image instead.
There are two big practical problems, especially if you're running a production-oriented system on a cloud-hosted Kubernetes:
If you look at the list of PersistentVolume types, very few of them can be used in ReadWriteMany mode. It's very easy to get, say, an AWSElasticBlockStore volume that can only be used on one node at a time, and something like this will probably be the default cluster setup. That means you'll have trouble running multiple pod replicas serving the same (static) data.
Once you do get a volume, it's very hard to edit its contents. Consider the aforementioned EBS volume: you can't edit it without being logged into the node on which it's mounted, which means finding the node, convincing your security team that you can have root access over your entire cluster, enabling remote logins, and then editing the file. That's not something that's actually possible in most non-developer Kubernetes setups.
The thing you should do instead is build your static content into a custom image. An image registry of some sort is all but required to run Kubernetes and you can push this static content server into the same registry as your application code.
FROM nginx:1.14.2
COPY . /usr/share/nginx/html
# Base image has a working CMD, no need to repeat it
Then in your deployment spec, set image: registry.example.com/nginx-frontend:20220209 or whatever you've chosen to name this build of this image, and do not use volumes at all. You'd deploy this the same way you deploy other parts of your application; you could use Helm or Kustomize to simplify the update process.
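A minimal sketch of such a Deployment (reusing the image name from above; the labels are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-frontend
spec:
  replicas: 3                      # safe to scale out: no shared writable volume involved
  selector:
    matchLabels:
      app: nginx-frontend
  template:
    metadata:
      labels:
        app: nginx-frontend
    spec:
      containers:
        - name: nginx
          image: registry.example.com/nginx-frontend:20220209
          ports:
            - containerPort: 80
          # no volumeMounts: the static content is baked into the image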
Correspondingly, in the plain-Docker case, I'd avoid volumes here. You don't discuss how files get into the nginx-storage named volume; if you're using imperative commands like docker cp or debugging tools like docker exec, those approaches are hard to script and are intrinsically local to the system they're running on. It's not easy to copy a Docker volume from one place to another. Images, though, can be pushed and pulled through a registry.
I managed to do that by creating a PVC only. This is how I did it (with an Nginx image):
nginx-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
nginx-deployment.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: nginx-data
      volumes:
        - name: nginx-data
          persistentVolumeClaim:
            claimName: nginx-data
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      nodePort: 30080
  type: NodePort
Once I ran kubectl apply on the PVC and then on the deployment, going to localhost:30080 showed a 404 Not Found page. That means all the data in /usr/share/nginx/html was deleted when the container started, because a directory from the k8s cluster node is bind mounted into the container as a volume:
/usr/share/nginx/html <-- dir in volume
/var/lib/k8s-pvs/nginx2-data/pvc-9ba811b0-e6b6-4564-b6c9-4a32d04b974f <-- dir from node (was automatically created)
I then added a new file into that container's html dir as a new index.html file and deleted the container; a new container was created by the pod, and checking localhost:30080 served the newly created home page.
I also tried deleting the deployment and reapplying it (without deleting the PVC), checked localhost:30080, and everything still persists.
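For reference, one way to drop that test file into the running container is a one-off kubectl exec against the deployment (the page content here is arbitrary):
kubectl exec deploy/nginx -- sh -c 'echo "<h1>hello from the PVC</h1>" > /usr/share/nginx/html/index.html'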
An alternative solution, kubernetes.io/docs/tasks/configure-pod-container/…, was suggested in the comments by larsks.

Apply a specific deployment file when running an image on Minikube

On Minikube, using kubectl, I run an image created by Docker with the following command:
kubectl run my-service --image=my-service-image:latest --port=8080 --image-pull-policy Never
But on Minikube, a different configuration has to be applied to the application. I prepared some environment variables in a deployment file and want to apply them to the images on Minikube. Is there a way to tell kubectl to run those images using a given deployment file, or some other way to provide the images with those values?
I tried the apply verb of kubectl, for example, but it tries to create the pod instead of applying the configuration to it.
In Minikube/Kubernetes you need to set the environment variables in the yaml file of your pod/deployment.
Here is an example of how you can configure environment variables in a pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Here you can find more information about environment variables.
In this case, if you want to change any value, you need to delete the pod and apply it again. But if you use a Deployment, all modifications can be done with the kubectl apply command.
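A rough sketch of the same container wrapped in a Deployment (the name and labels are just placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: envar-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: envar-demo
  template:
    metadata:
      labels:
        app: envar-demo
    spec:
      containers:
        - name: envar-demo-container
          image: gcr.io/google-samples/node-hello:1.0
          ports:
            - containerPort: 8080
          env:
            - name: DEMO_GREETING
              value: "Hello from the environment"
Edit the env values in the file and re-run kubectl apply -f <file>.yaml; the Deployment rolls out new pods with the updated variables instead of you deleting anything by hand.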

Kubernetes: Specify a tarball docker image to run pod

I have saved a docker image as a tar file locally using the command,
docker save -o ./dockerImage:version.tar docker.io/image:latest-1.0
How do I specify this file in my pod.yaml so the pod uses this tarball to start the container, instead of pulling (or using an already pulled) image?
Current pod.yaml file:
apiVersion: myApp/v1
kind: myKind
metadata:
  name: myPod2
spec:
  baseImage: docker.io/image
  version: latest-1.0
I want something similar to this:
apiVersion: myApp/v1
kind: myKind
metadata:
  name: myPod2
spec:
  baseImage: localDockerImage.tar:latest-1.0
  version: latest-1.0
There's no direct way to achieve that in Kubernetes.
See the discussions here: https://github.com/kubernetes/kubernetes/issues/1668
They finally closed that issue for the following reasons:
Given that there are a number of ways to do this (your own cluster startup scripts, run a daemonset to side load your custom images, create VM images with images pre-loaded, run a cluster-local docker registry), and the fact that there have been no substantial updates in over two years, I'm going to close this as obsolete.
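One of those workarounds, side loading the image onto the node(s), can be sketched roughly like this, assuming a single-node or Minikube-style cluster and a plain pod spec (your custom kind may handle images differently):
# load the tarball into the node's container runtime
docker load -i ./dockerImage:version.tar
# on Minikube you can instead do: minikube image load ./dockerImage:version.tar
Then reference the image by its normal name (docker.io/image:latest-1.0) in the spec and set imagePullPolicy: Never (or IfNotPresent) so Kubernetes uses the locally loaded copy instead of pulling.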

how to inspect the content of persistent volume by kubernetes on azure cloud service

I have packaged the software into a container. I need to deploy the container to a cluster using Azure Container Service. The software writes its output to a directory, /src/data/, and I want to access the content of that whole directory.
After searching, I have two possible solutions:
Use Blob Storage on Azure, but after searching I can't find a workable method.
Use a Persistent Volume, but all the official Azure documentation and pages I found are about the Persistent Volume itself, not about how to inspect its contents.
I need to access and manage my output directory on the Azure cluster. In other words, I need a savior.
As I've explained here and here, in general, if you can interact with the cluster using kubectl, you can create a pod/container, mount the PVC inside, and use the container's tools to, e.g., ls the contents. If you need more advanced editing tools, replace the container image busybox with a custom one.
Create the inspector pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
    - image: busybox
      name: pvc-inspector
      command: ["tail"]
      args: ["-f", "/dev/null"]
      volumeMounts:
        - mountPath: /pvc
          name: pvc-mount
  volumes:
    - name: pvc-mount
      persistentVolumeClaim:
        claimName: YOUR_CLAIM_NAME_HERE
EOF
Inspect the contents
kubectl exec -it pvc-inspector -- sh
$ ls /pvc
Clean Up
kubectl delete pod pvc-inspector
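If you want the data on your own machine rather than just listed, you can also copy it out of the inspector pod with kubectl cp (the local target directory is arbitrary):
kubectl cp pvc-inspector:/pvc ./pvc-contents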
