I want to run Docker containers using the real-time scheduler. Is it possible to pass parameters in a pod/deployment file so that Kubernetes runs my containers as follows?
docker run -it --cpu-rt-runtime=950000 \
--ulimit rtprio=99 \
--cap-add=sys_nice \
debian:jessie
Unfortunately, not all Docker command line features have equivalent options in Kubernetes YAML.
While the sys_nice capability can be set using securityContext in the YAML, --cpu-rt-runtime=950000 cannot.
In the K8s API Pod documentation you can find all the configuration that can be passed into a container, under PodSecurityContext v1 core.
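For reference, a minimal sketch of the capability part in a Pod spec (the pod name here is just a placeholder); the real-time runtime budget itself has no corresponding Kubernetes field:
apiVersion: v1
kind: Pod
metadata:
  name: rt-demo                # placeholder name
spec:
  containers:
  - name: app
    image: debian:jessie
    securityContext:
      capabilities:
        add: ["SYS_NICE"]      # equivalent of docker run --cap-add=sys_nice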
Another thing: I've tried to run a container with the specs that you provided, but I ran into an error:
docker: Error response from daemon: Your kernel does not support
cgroup cpu real-time runtime. See 'docker run --help'
This is directly related to the kernel configuration option CONFIG_RT_GROUP_SCHED, which is missing from your kernel image. Without it, cpu-rt-runtime cannot be set for a container.
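A quick way to check for that option, assuming your distribution installs the kernel config under /boot (some expose it at /proc/config.gz instead):
grep CONFIG_RT_GROUP_SCHED /boot/config-$(uname -r)
# CONFIG_RT_GROUP_SCHED=y                -> --cpu-rt-runtime can work
# "# CONFIG_RT_GROUP_SCHED is not set"   -> reproduces the error above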
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
You can use ConfigMaps to declare variables, expose them to the container as environment variables, and then pass those environment variables to the container's args.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
Create a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Create a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config
  restartPolicy: Never
Not all of these options are available in K8s, but you can work around some of them using Limit Ranges, as explained in the Kubernetes Limit Ranges documentation.
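For illustration only, a minimal LimitRange sketch (the name and values are made up) that constrains container CPU in a namespace:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range          # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: "250m"                # default CPU request applied to containers
    default:
      cpu: "500m"                # default CPU limit applied to containers
    max:
      cpu: "1"                   # maximum CPU a single container may use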
I have a docker image that I run with specific runtime arguments. When I install a helm chart to deploy the kubernetes pod with this image, the output from the pod is different from when I use 'docker run.' I found that I should use command and args parameters in the values.yml file and the templates/deployment directory but I'm still not getting the desired output.
I've tried different variations from these links but no luck:
How to pass docker run flags via kubernetes pod
How to pass dynamic arguments to a helm chart that runs a job
How to pass arguments to Docker container in Kubernetes or OpenShift through command line?
Here's the docker run command:
docker run -it --rm --network=host --ulimit rtprio=0 --cap-add=sys_nice --ipc=private --sysctl fs.mqueue.msg_max="10000" image_name:tag
Please try something like this (note that capabilities can only be set in the container-level securityContext):
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  hostNetwork: true
  containers:
  - name: main
    image: image_name:tag
    securityContext:
      capabilities:
        add: ["SYS_NICE"]
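The --sysctl part of that docker run can be approximated with pod-level sysctls, assuming the flag was meant to be fs.mqueue.msg_max; note that this namespaced sysctl is not in the safe set, so the kubelet must allow it via --allowed-unsafe-sysctls. A fragment to merge into the Pod spec above:
spec:
  securityContext:               # pod-level securityContext, separate from the container-level one above
    sysctls:
    - name: fs.mqueue.msg_max
      value: "10000"             # mirrors --sysctl fs.mqueue.msg_max=10000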
I have an entrypoint defined as ENTRYPOINT ["/bin/sh"] in the Dockerfile. The Docker image mostly contains shell scripts, but later on I added a Golang binary, which I want to run from a Kubernetes Job. Because the entrypoint is /bin/sh, I get the error No such file or directory when I try to run the compiled and installed Go binary through args in the YAML of the Kubernetes Job's deployment descriptor.
Is there a well-defined way to achieve this?
As stated in the documentation, you can override Docker's entrypoint by using the command section of the pod's definition in deployment.yaml; example from the docs:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
  restartPolicy: OnFailure
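Applied to the Job from the question, a minimal sketch might look like the following; the image name and the /usr/local/bin/mytool path are placeholders for wherever the Go binary was installed:
apiVersion: batch/v1
kind: Job
metadata:
  name: go-binary-job                      # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: go-binary
        image: your-image:tag              # placeholder image
        command: ["/usr/local/bin/mytool"] # overrides ENTRYPOINT ["/bin/sh"]
        args: ["--some-flag", "value"]     # hypothetical arguments to the binary
      restartPolicy: Never
  backoffLimit: 1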
I have a Docker image and I am using the following command to run it.
docker run -it -p 1976:1976 --name demo demo.docker.cloud.com/demo/runtime:latest
I want to run the same in Kubernetes. This is my current yaml file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: demo.docker.cloud.com/demo/runtime:latest
        ports:
        - containerPort: 1976
        imagePullPolicy: Never
This YAML file covers everything except the "-it" flag. I am not able to find its Kubernetes equivalent. Please help me out with this. Thanks
I assume you are trying to attach a shell to your running container. Following the guide at https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/, you would need the following commands for the configuration above:
Create the deployment: kubectl apply -f ./demo-deployment.yaml
Verify the Pod is running: kubectl get pods -l app=demo
Get a shell to the running container: kubectl exec -it deploy/demo-deployment -- /bin/bash
Looking at the Container definition in the API reference, the equivalent options are stdin: true and tty: true.
(None of the applications I work on have ever needed this; the documentation for stdin: talks about "reads from stdin in the container" and the typical sort of server-type processes you'd run in a Deployment don't read from stdin at all.)
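A sketch of how those fields fit into the container spec of the Deployment above (only the relevant additions are shown):
      containers:
      - name: demo
        image: demo.docker.cloud.com/demo/runtime:latest
        stdin: true              # equivalent of docker run -i
        tty: true                # equivalent of docker run -t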
kubectl run is the closest match to docker run for the requested scenario.
Some examples from the Kubernetes documentation and their purpose:
kubectl run -i --tty busybox --image=busybox -- sh                    # Run pod as interactive shell
kubectl run nginx --image=nginx -n mynamespace                        # Run pod nginx in a specific namespace
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml   # Run pod nginx and write its spec into a file called pod.yaml
I have an image based on centos/systemd. When I put "exec /usr/sbin/init" in the launcher file of the container and create the container using Docker, the systemd services come up.
But when I create a container from the same image in Kubernetes with the same launcher file, the systemd services do not come up. How can I run /usr/sbin/init in Kubernetes so that the systemd services come up during container creation?
To solve this issue you can use a Kubernetes init container, which runs before the main container is created and can start the necessary services.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  initContainers:
  - name: check-system-ready
    image: busybox
    command: ['sh', '-c', 'your syntax for systemd']
  containers:
  - your container spec
Here is the official Kubernetes init container documentation: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/
I have packed the software into a container. I need to deploy the container to a cluster via Azure Container Service. The software writes its output to the directory /src/data/, and I want to access the contents of the whole directory.
After searching, I have two options:
use Blob Storage on Azure, but after further searching I can't find a workable method;
use a Persistent Volume, but all the official Azure documentation and pages I found are about the Persistent Volume itself, not about how to inspect it.
I need to access and manage my output directory on the Azure cluster. In other words, I need a savior.
As I've explained here and here, in general, if you can interact with the cluster using kubectl, you can create a pod/container, mount the PVC inside, and use the container's tools to, e.g., ls the contents. If you need more advanced editing tools, replace the container image busybox with a custom one.
Create the inspector pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
  - image: busybox
    name: pvc-inspector
    command: ["tail"]
    args: ["-f", "/dev/null"]
    volumeMounts:
    - mountPath: /pvc
      name: pvc-mount
  volumes:
  - name: pvc-mount
    persistentVolumeClaim:
      claimName: YOUR_CLAIM_NAME_HERE
EOF
Inspect the contents
kubectl exec -it pvc-inspector -- sh
$ ls /pvc
Clean Up
kubectl delete pod pvc-inspector