Pass file to a Docker container in a Kubernetes Pod

I'm a beginner in Kubernetes. What I would like to achieve is:
Pass a user's SSH private/public key to the Pod and then to the Docker container (there's a shell script that will be using this key).
So I would like to know if it's possible to do that with kubectl apply?
My pod.yaml looks like:
apiVersion: v1
kind: Pod
metadata:
  generateName: testing
  labels:
    type: testing
  namespace: ns-test
  name: testing-config
spec:
  restartPolicy: OnFailure
  hostNetwork: true
  containers:
  - name: mycontainer
    image: ".../mycontainer:latest"

You have to store the private/public key in a Kubernetes Secret object:
apiVersion: v1
kind: Secret
metadata:
  name: mysshkey
  namespace: ns-test
data:
  id_rsa: {{ value }}
  id_rsa.pub: {{ value }}
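The values under data: must be base64-encoded (not the raw key text). A minimal sketch of producing them from existing key files, assuming they live in ~/.ssh:
base64 -w0 ~/.ssh/id_rsa        # value for id_rsa
base64 -w0 ~/.ssh/id_rsa.pub    # value for id_rsa.pub
(-w0 disables line wrapping in GNU base64; macOS base64 does not wrap by default.)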
And now you can mount this secret as a file in your container:
containers:
- image: "my-image:latest"
  name: my-app
  ...
  volumeMounts:
  - mountPath: "/var/my-app"
    name: ssh-key
    readOnly: true
volumes:
- name: ssh-key
  secret:
    secretName: mysshkey
The Kubernetes documentation also provides a chapter on Using Secrets as files from a Pod.
It's not tested, but I hope it works.

First, you create a secret with your keys:
kubectl create secret generic mysecret-keys --from-file=privatekey=</path/to/the/key/file/on/your/host> --from-file=publickey=</path/to/the/key/file/on/your/host>
Then you refer to the key files using the secret in your pod:
apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  ...
  containers:
  - name: mycontainer
    image: ".../mycontainer:latest"
    volumeMounts:
    - name: mysecret-keys
      mountPath: /path/in/the/container  # <-- privatekey & publickey will be mounted as files in this directory where your shell script can access them
  volumes:
  - name: mysecret-keys
    secret:
      secretName: mysecret-keys  # <-- mount the secret resource you created above
You can check the secret with kubectl get secret mysecret-keys --output yaml. You can check the pod and its mounting with kubectl describe pod testing-config.
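One caveat: ssh refuses keys that are world-readable, and secret volumes are mounted with file mode 0644 by default. A hedged sketch of tightening the permissions via defaultMode on the volume, using the same names as the example above:
volumes:
- name: mysecret-keys
  secret:
    secretName: mysecret-keys
    defaultMode: 0400
The shell script inside the container can then use the key directly, e.g. ssh -i /path/in/the/container/privatekey user@host.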

Related

Knative service deployment fails with reason RevisionMissing

I have deployed a service on Knative. I iterated on the service code/Docker image and am trying to redeploy it at the same address. I proceeded as follows:
Pushed the new Docker image to our private Docker repo
Updated the service YAML file to point to the new Docker image (see YAML below)
Deleted the service with the command: kubectl -n myspacename delete -f myservicename.yaml
Recreated the service with the command: kubectl -n myspacename apply -f myservicename.yaml
During the deployment, the service shows READY = Unknown and REASON = RevisionMissing, and after a while READY = False and REASON = ProgressDeadlineExceeded. When looking at the logs of the pod with the command kubectl -n myspacename logs revision.serving.knative.dev/myservicename-00001, I get the message:
no kind "Revision" is registered for version "serving.knative.dev/v1" in scheme "pkg/scheme/scheme.go:28"
Here is the YAML file of the service:
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myservicename
  namespace: myspacename
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: concurrency
        autoscaling.knative.dev/target: '1'
        autoscaling.knative.dev/minScale: '0'
        autoscaling.knative.dev/maxScale: '5'
        autoscaling.knative.dev/scaleDownDelay: 60s
        autoscaling.knative.dev/window: 600s
    spec:
      tolerations:
      - key: nvidia.com/gpu
        operator: Exists
        effect: NoSchedule
      volumes:
      - name: nfs-volume
        persistentVolumeClaim:
          claimName: myspacename-models-pvc
      imagePullSecrets:
      - name: myrobotaccount-pull-secret
      containers:
      - name: myservicename
        image: quay.company.com/project/myservicename:0.4.0
        ports:
        - containerPort: 5000
          name: user-port
          protocol: TCP
        resources:
          limits:
            cpu: "4"
            memory: 36Gi
            nvidia.com/gpu: 1
          requests:
            cpu: "2"
            memory: 32Gi
        volumeMounts:
        - name: nfs-volume
          mountPath: /tmp/static/
        securityContext:
          privileged: true
        env:
        - name: CLOUD_STORAGE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: myservicename-cloud-storage-password
              key: key
        envFrom:
        - configMapRef:
            name: myservicename-config
The protocol I followed above is correct; the problem was a bug in the code of the Docker image that Knative is serving. I was able to troubleshoot the issue by looking at the logs of the pods as follows:
First, run the following command to get the pod name: kubectl -n myspacename get pods. Example pod name: myservicename-00001-deployment-56595b764f-dl7x6
Then get the logs of the pod with the following command: kubectl -n myspacename logs myservicename-00001-deployment-56595b764f-dl7x6
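If the pod never comes up at all, the revision status itself often explains why. A sketch of checking it, assuming Knative Serving's CRDs are installed:
kubectl -n myspacename get revisions
kubectl -n myspacename describe revision myservicename-00001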

How to use a local Docker image in Kubernetes via kubectl

I created a customized Docker image and stored it on my local system. Now I want to use that Docker image via kubectl.
Docker image:
docker build -t backend:v1 .
Then the Kubernetes file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
      - env:
        - name: mail_auth_pass
        - name: mail_auth_user
        - name: mail_from
        - name: mail_greeting
        - name: mail_service
        - name: mail_sign
        - name: mongodb_url
          value: mongodb://mongodb.mongodb.svc.cluster.local/console
        - name: server_host
          value: "0.0.0.0"
        - name: server_port
          value: "3000"
        - name: server_sessionSecret
          value: "1234"
          image: backend
          imagePullPolicy: Never
        name: backend
        resources: {}
      restartPolicy: Always
status: {}
Command to run kubectl: kubectl create -f backend-deployment.yaml
Getting error:
error: error validating "backend-deployment.yaml": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "image" in io.k8s.api.core.v1.EnvVar, ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "imagePullPolicy" in io.k8s.api.core.v1.EnvVar]; if you choose to ignore these errors, turn validation off with --validate=false
Local Registry
Set up the local registry first using this command:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Image Tag
Given a Dockerfile, the image can be built and tagged this way:
docker build -t localhost:5000/my-image .
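If the cluster is meant to pull from this registry (rather than use a node-local image), the tagged image also needs to be pushed there:
docker push localhost:5000/my-image
(With imagePullPolicy: Never, described next, the push is not strictly needed, since the kubelet only looks at images already on the node.)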
Image Pull Policy
The field imagePullPolicy should then be set to Never: the kubelet will not contact any registry and will use the image already present on the node.
Given this sample pod template:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: app
    image: localhost:5000/my-image
    imagePullPolicy: Never
Deploy Pod
The pod can be deployed using:
kubectl create -f pod.yml
Hope this comes in handy :)
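On a single-node cluster such as minikube there is also a registry-free variant: build the image inside the cluster's own Docker daemon so that imagePullPolicy: Never finds it. A sketch, using the backend:v1 tag from the question:
eval $(minikube docker-env)    # point the docker CLI at minikube's daemon
docker build -t backend:v1 .   # the image is now visible to the kubelet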
As the error specifies, unknown field "image" and unknown field "imagePullPolicy" mean there is an indentation error in your Kubernetes deployment file: image and imagePullPolicy are nested under env, so the validator reads them as part of the last environment variable. Make these changes in your YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: backend
        imagePullPolicy: Never
        env:
        - name: mail_auth_pass
        - name: mail_auth_user
        - name: mail_from
        - name: mail_greeting
        - name: mail_service
        - name: mail_sign
        - name: mongodb_url
          value: mongodb://mongodb.mongodb.svc.cluster.local/console
        - name: server_host
          value: "0.0.0.0"
        - name: server_port
          value: "3000"
        - name: server_sessionSecret
          value: "1234"
        resources: {}
      restartPolicy: Always
status: {}
Validate your Kubernetes YAML file online using https://kubeyaml.com/
Or with kubectl apply --validate=true --dry-run=true -f deployment.yaml
Hope this helps.

Kubernetes pull from multiple private Docker registries

To use a Docker image from a private Docker repo, Kubernetes recommends creating a secret of type 'docker-registry' and referencing it in your deployment.
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
Then, in your Helm chart or Kubernetes deployment file, use imagePullSecrets:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: foo
        image: foo.example.com
This works, but requires that all containers be sourced from the same registry.
How would you pull 2 containers from 2 registries (e.g. when using a sidecar that is stored separately from the primary container)?
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: foo
        image: foo.example.com
        imagePullSecrets:
        - name: foo-secret
      - name: bar
        image: bar.example.com
        imagePullSecrets:
        - name: bar-secret
I've tried creating 2 secrets, foo-secret and bar-secret, and referencing each appropriately, but I find it fails to pull both containers.
You have to include imagePullSecrets: directly at the pod level, but you can have multiple secrets there:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      imagePullSecrets:
      - name: foo-secret
      - name: bar-secret
      containers:
      - name: foo
        image: foo.example.com/foo-image
      - name: bar
        image: bar.example.com/bar-image
The Kubernetes documentation on this notes:
If you need access to multiple registries, you can create one secret for each registry. Kubelet will merge any imagePullSecrets into a single virtual .docker/config.json when pulling images for your Pods.
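For completeness, the two secrets themselves would be created once per registry, e.g. with the server names from the example above:
kubectl create secret docker-registry foo-secret --docker-server=foo.example.com --docker-username=<your-name> --docker-password=<your-pword>
kubectl create secret docker-registry bar-secret --docker-server=bar.example.com --docker-username=<your-name> --docker-password=<your-pword>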

Run private docker image on minikube k8s

I want to run a private Docker image on my minikube k8s, but the pod is never able to pull my image from Docker. How can I pull a private image in k8s and use it?
This is my YAML for the pod:
apiVersion: v1
kind: Pod
metadata:
  name: privaterepo
spec:
  containers:
  - name: private-reg-container
    image: raveena1/test
  imagePullSecrets:
  - name: regsecret
The log is:
container "private-reg-container" in pod "privaterepo" is waiting to start: trying and failing to pull image
You need to create a secret & use it in your YAML/JSON deployment file.
Create the secret (this one is for the Docker registry; you can change the registry server URL):
$ kubectl create secret docker-registry regsecret --docker-server=https://index.docker.io/v1/ --docker-username=$USERNM --docker-password=$PASSWD --docker-email=vivekyad4v@gmail.com
deployment.yaml (use regsecret)-
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: local-simple-python
spec:
  replicas: 2
  selector:
    matchLabels:
      app: local-simple-python
  template:
    metadata:
      labels:
        app: local-simple-python
    spec:
      containers:
      - name: python
        image: vivekyad4v/local-simple-python:latest
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regsecret
Deploy -
$ kubectl create -f deployment.yaml
Your pods should now be able to fetch Docker images from the private registry.
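To double-check the secret itself, it can be inspected and its embedded Docker config decoded:
kubectl get secret regsecret --output=yaml
kubectl get secret regsecret --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode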
You can find more info on -
https://github.com/vivekyad4v/kubernetes/tree/master/kubernetes-for-beginners
Official doc - https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

How to set secret data in Kubernetes Secrets via YAML?

I am using Kubernetes to deploy a Rails app to Google Container Engine.
Following the Kubernetes secrets document: http://kubernetes.io/v1.1/docs/user-guide/secrets.html
I created a web controller file:
# web-controller.yml
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - name: web
        image: gcr.io/my-project-id/myapp:v1
        ports:
        - containerPort: 3000
          name: http-server
        env:
          secret:
          - secretName: mysecret
And created a secret file:
# secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  RAILS_ENV: production
When I run:
kubectl create -f web-controller.yml
It showed:
error: could not read an encoded object from web-controller.yml: unable to load "web-controller.yml": json: cannot unmarshal object into Go value of type []v1.EnvVar
error: no objects passed to create
Maybe the YAML format is wrong in the web-controller.yml file. Then how should it be written?
secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  RAILS_ENV: production
stringData is the easy-mode version of what you're after. One thing, though: you'll see the cleartext original YAML used to create the secret in the annotation (so with the method above you'll have a human-readable secret in your annotation, and with the method below the base64'd secret), unless you follow up with the erase-annotation command like so:
kubectl apply -f secret.yml
kubectl annotate secret mysecret kubectl.kubernetes.io/last-applied-configuration-
(the - at the end is what says to erase it)
kubectl get secret mysecret -n=api -o yaml
(to confirm)
Alternatively you'd do:
Bash# echo -n production | base64
cHJvZHVjdGlvbg==
(Note the -n: without it, echo appends a trailing newline that gets encoded into the secret value.)
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  RAILS_ENV: cHJvZHVjdGlvbg==
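Newer Kubernetes versions do support consuming a Secret key as an environment variable (see the secretKeyRef example at the end of this thread); a sketch for the mysecret above:
env:
- name: RAILS_ENV
  valueFrom:
    secretKeyRef:
      name: mysecret
      key: RAILS_ENV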
You need to base64-encode the value, and your key must be a valid DNS label; that is, replace RAILS_ENV with, for example, rails-env. See also this end-to-end example I put together here for more details and concrete steps.
We do not currently support secrets exposed as env vars.
Let's say we are adding image pull secrets in a deployment. Follow these steps:
kubectl create secret docker-registry secret-name --docker-server=<registry-server-url> --docker-username=<Username> --docker-password=<password> --docker-email=<your-email>
Now refer to this in the deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: test-app
        image: <Image-name-private>
      imagePullSecrets:
      - name: secret-name
OR
Let's say you have some API key for accessing the application:
kubectl create secret generic secret-name --from-literal api-key="<your_api-key>"
Now refer to this in the deployment like this:
env:
- name: API_KEY
  valueFrom:
    secretKeyRef:
      name: secret-name
      key: api-key
