I have a Docker image and I am using the following command to run it.
docker run -it -p 1976:1976 --name demo demo.docker.cloud.com/demo/runtime:latest
I want to run the same in Kubernetes. This is my current YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: demo.docker.cloud.com/demo/runtime:latest
        ports:
        - containerPort: 1976
        imagePullPolicy: Never
This YAML file covers everything except the "-it" flag. I am not able to find its Kubernetes equivalent. Please help me out with this. Thanks.
I assume you are trying to connect a shell to your running container. Following the guide at https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/, you would need the following commands to apply your configuration above:
Create the deployment: kubectl apply -f ./demo-deployment.yaml
Verify the pod is running: kubectl get pods -l app=demo
Get a shell to the running container: kubectl exec -it deploy/demo-deployment -- /bin/bash
Looking at the Container definition in the API reference, the equivalent options are stdin: true and tty: true.
(None of the applications I work on have ever needed this; the documentation for stdin: talks about "reads from stdin in the container" and the typical sort of server-type processes you'd run in a Deployment don't read from stdin at all.)
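For reference, adding those two fields to the container in the Deployment above would look roughly like this (a sketch; whether it is useful depends on the application actually reading from stdin):

      containers:
      - name: demo
        image: demo.docker.cloud.com/demo/runtime:latest
        ports:
        - containerPort: 1976
        imagePullPolicy: Never
        stdin: true   # allocate a stdin buffer, roughly docker run -i
        tty: true     # allocate a TTY, roughly docker run -t

You could then attach to the running pod with kubectl attach -it <pod-name>.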
kubectl run is the closest match to docker run for the requested scenario.
Some examples from the Kubernetes documentation and their purposes:
kubectl run -i --tty busybox --image=busybox -- sh                    # Run pod as interactive shell
kubectl run nginx --image=nginx -n mynamespace                        # Run pod nginx in a specific namespace
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml   # Run pod nginx and write its spec into a file called pod.yaml
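Applied to the original docker run command, a rough equivalent might look like this (a sketch; it assumes the image's default entrypoint is what you want to run interactively):

kubectl run demo -i --tty --rm \
  --image=demo.docker.cloud.com/demo/runtime:latest \
  --port=1976 \
  --restart=Never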
I want to delete a specific file in the following container from a CronJob. The problem is that when I run exec I get an error. How can I exec into a distroless container (k8s v1.22.5) and delete the file from a CronJob? Which options do we have?
This is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: distro
  labels:
    app: distro
spec:
  replicas: 1
  selector:
    matchLabels:
      app: distro
  template:
    metadata:
      labels:
        app: distro
    spec:
      containers:
      - name: edistro
        image: timberio/vector:0.21.X-distroless-libc
        ports:
        - containerPort: 80
What I tried is:
kubectl exec -i -t -n apits aor-agent-zz -c tor "--" sh -c "clear; (bash || ash || sh)"
The error is:
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec
I also tried it like the following:
kubectl debug -it distro-d49b456cf-t85cm --image=ubuntu --target=edistro --share-processes -n default
And got this error:
Targeting container "edistro". If you don't see processes from this container it may be because the container runtime doesn't support this feature. Defaulting debug container name to debugger-fvfxs. error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").
My guess (I'm not sure) is that our container runtime doesn't support this. Which options do we have?
The answer below doesn't solve the issue. I need a way to access the distroless pod from outside and delete a specific file there. How can I do this?
The point of using distroless images is to have a minimal amount of tools/software packaged in the image. This means the removal of unnecessary tools, such as a shell, from the image.
You may be able to work around this with the following, though it depends on your objective:
kubectl debug -it <POD_TO_DEBUG> --image=<helper-image> --target=<CONTAINER_TO_DEBUG> --share-processes
Eg:
kubectl debug -it distro-less-pod --image=ubuntu --target=edistro --share-processes
Not a great option but it is the only option I can think of.
If you are able to enter the nodes where the pods are running and you have permissions to execute commands (most likely as root) in there, you can try nsenter or any other way to enter the container mount namespace directly.
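A rough sketch of that node-level approach follows. It assumes a containerd-based node with crictl available and root access; the container name and the file path are placeholders. Because a distroless image has no shell or rm binary of its own, the sketch operates on the container's root filesystem from the host via /proc/<pid>/root instead of exec'ing inside it:

# Find the container and its host PID on the node where the pod runs
CONTAINER_ID=$(crictl ps --name edistro -q)
PID=$(crictl inspect --output go-template --template '{{.info.pid}}' "$CONTAINER_ID")

# Delete the file through the container's root filesystem as seen from the host
rm "/proc/$PID/root/path/to/the/file"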
I have a Docker image that I run with specific runtime arguments. When I install a Helm chart to deploy a Kubernetes pod with this image, the output from the pod is different from when I use docker run. I found that I should use the command and args parameters in the values.yml file and the templates/deployment directory, but I'm still not getting the desired output.
I've tried different variations from these links but no luck:
How to pass docker run flags via kubernetes pod
How to pass dynamic arguments to a helm chart that runs a job
How to pass arguments to Docker container in Kubernetes or OpenShift through command line?
Here's the docker run command:
docker run -it --rm --network=host --ulimit rtprio=0 --cap-add=sys_nice --ipc=private --sysctl fs.msqueue.msg_max="10000" image_name:tag
Please try something like this:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  hostNetwork: true
  containers:
  - name: main
    image: image_name:tag
    securityContext:
      capabilities:
        add: ["SYS_NICE"]
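The --sysctl flag from the original command has a rough counterpart in the pod-level securityContext.sysctls field. A sketch, assuming the intended sysctl is fs.mqueue.msg_max; note that fs.mqueue.* is treated as an unsafe sysctl by default and usually has to be allowed on the kubelet via --allowed-unsafe-sysctls before such a pod will be admitted:

apiVersion: v1
kind: Pod
metadata:
  name: example-sysctl   # hypothetical name
spec:
  hostNetwork: true
  securityContext:
    sysctls:
    - name: fs.mqueue.msg_max
      value: "10000"
  containers:
  - name: main
    image: image_name:tag
    securityContext:
      capabilities:
        add: ["SYS_NICE"]

There is no direct pod-spec equivalent for --ulimit rtprio; ulimits are generally inherited from the container runtime's defaults on the node.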
We want to deploy using ArgoCD from our Jenkinsfile (which is slightly not how this is intended to be done, but close enough), and after doing some experiments we want to try using the official container with the CLI, so we have added this snippet to our pipeline Kubernetes YAML:
- name: argocdcli
  image: argoproj/argocli
  command:
    - argo
  args:
    - version
  tty: true
Unfortunately the usual way to keep these containers alive is to invoke cat in the container, which isn't there, so it fails miserably. In fact the only command in the image is argo, which has no way to sleep indefinitely. (We are going to report this upstream so it gets fixed, but while we wait for that....)
My question therefore is: is there a way to tell Kubernetes that we know this pod cannot keep itself up on its own, and therefore not to tear it down immediately?
Unfortunately it's not possible since, as you stated, argo is the only command available in this image.
It can be confirmed here:
####################################################################################################
# argocli
####################################################################################################
FROM scratch as argocli
COPY --from=argo-build /go/src/github.com/argoproj/argo/dist/argo-linux-amd64 /bin/argo
ENTRYPOINT [ "argo" ]
As we can see from this output, running argo is all this container does:
$ kubectl run -i --tty --image argoproj/argocli argoproj1 --restart=Never
argo is the command line interface to Argo
Usage:
argo [flags]
argo [command]
...
You can optionally create your own image based on it and include sleep, so it will be possible to keep it running, as in this example:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
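A hypothetical way to do that for the Argo CLI image is a multi-stage build that copies a busybox binary in alongside the argo binary. This is only a sketch: it assumes the chosen busybox variant is statically linked (the musl variant typically is), and tags and paths may need adjusting:

# Borrow a (statically linked) busybox so the otherwise empty image has a sleep command
FROM busybox:musl AS tools

FROM argoproj/argocli
COPY --from=tools /bin/busybox /bin/busybox
# Keep the container alive; argo remains available at /bin/argo for kubectl exec
ENTRYPOINT ["/bin/busybox", "sleep", "3600"]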
I want to run Docker containers using the real-time scheduler. Is it possible to pass parameters in a pod/deployment file to Kubernetes to run my containers as follows?
docker run -it --cpu-rt-runtime=950000 \
--ulimit rtprio=99 \
--cap-add=sys_nice \
debian:jessie
Unfortunately not all Docker command line features have equivalent options in Kubernetes YAML.
While the sys_nice capability can be set using securityContext in the YAML, --cpu-rt-runtime=950000 cannot.
In the K8s API Pod documentation you can find all the configuration that can be passed to a container under SecurityContext v1 core.
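For example, a minimal pod spec setting that capability might look like this (a sketch; the pod name and command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: rt-test   # hypothetical name
spec:
  containers:
  - name: main
    image: debian:jessie
    command: ["sleep", "infinity"]
    securityContext:
      capabilities:
        add: ["SYS_NICE"]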
Another thing: I tried to run the container itself with the specs that you provided, but I ran into an error:
docker: Error response from daemon: Your kernel does not support
cgroup cpu real-time runtime. See 'docker run --help'
This is directly related to a kernel configuration option named CONFIG_RT_GROUP_SCHED that is missing from your kernel image. Without it, --cpu-rt-runtime cannot be set for a container.
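One way to check whether the node's kernel was built with it (a sketch; the config file location varies by distribution, and /proc/config.gz only exists when CONFIG_IKCONFIG_PROC is enabled):

# Check the installed kernel config, or the in-kernel copy if exposed
grep CONFIG_RT_GROUP_SCHED /boot/config-$(uname -r)
zgrep CONFIG_RT_GROUP_SCHED /proc/config.gz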
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
You can use a ConfigMap to declare variables, then expose the ConfigMap as environment variables and pass those environment variables to the container's args.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
Create the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Create the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config
  restartPolicy: Never
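To actually pass those values as container arguments rather than just printing the environment, the $(VAR_NAME) substitution syntax for command/args can be used. A sketch using env with configMapKeyRef, following the pattern in the Kubernetes ConfigMap documentation (the pod name and echo command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-args   # hypothetical name
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/echo" ]
    args: [ "$(SPECIAL_LEVEL_KEY)", "$(SPECIAL_TYPE_KEY)" ]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: SPECIAL_LEVEL
    - name: SPECIAL_TYPE_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: SPECIAL_TYPE
  restartPolicy: Never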
Not all of these options are available in K8s but you can find a workaround using Limit Ranges. This is explained here.
Dockerfile
FROM ubuntu
MAINTAINER user#gmail.com
RUN apt-get update
RUN apt-get install -y openjdk-8-jdk
ADD build/libs/micro-service-gradle-0.0.1-SNAPSHOT.jar /var/local/
ENTRYPOINT exec java $JAVA_OPTS \
-jar /var/local/micro-service-gradle-0.0.1-SNAPSHOT.jar
EXPOSE 8080
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro-service-gradle
  labels:
    app: micro-service-gradle
spec:
  replicas: 1
  selector:
    matchLabels:
      app: micro-service-gradle
  template:
    metadata:
      labels:
        app: micro-service-gradle
    spec:
      containers:
      - name: micro-service-gradle
        image: micro-service-gradle:latest
        ports:
        - containerPort: 8080
I am deploying a Spring Boot application in Kubernetes. The pod is not getting created. When I check kubectl get pods, it says CrashLoopBackOff.
NAME READY STATUS RESTARTS AGE
micro-service-gradle-fc97c97b-8hwhg 0/1 CrashLoopBackOff 6 6m23s
I tried to check the logs for the same container. The logs are empty:
kubectl logs -p micro-service-gradle-fc97c97b-8hwhg
I created the container manually using docker run. There are no issues with the image and the container works fine.
How do I verify the logs to find out why the pod is in a crash status?
You need to use
kubectl describe pod micro-service-gradle-fc97c97b-8hwhg
to get the relevant events and status information. This should guide you to your problem.
I ran into a similar issue. When I run
kubectl describe pod <podname>
and read the events, I saw that although the image was pulled, the message output was 'restarting failed container'.
The pod was crashing because it was not performing any task. To keep it running, I added a sleep command based on a similar example in the docs:
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
This SO answer also mentions that running an infinite loop could solve the problem:
https://stackoverflow.com/a/55610769/7128032
Your deployment resource looks OK. As you are able to create the container manually using docker run, the problem is likely with the connection to the image registry. Set up an image pull secret and you should be able to create the pod.
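A minimal sketch of that, assuming a private registry and placeholder credentials (adjust the server, user, and secret name to your environment):

kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<user> \
  --docker-password=<password>

Then reference it from the pod template in the deployment:

    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: micro-service-gradle
        image: micro-service-gradle:latest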
I faced a similar issue. Verify that your container is able to run continuously; you have to run the process in the foreground to keep the container running.
The possible reasons for such an error are:
the application inside your pod is not starting due to an error;
the image your pod is based on is not present in the registry, or the node where your pod has been scheduled cannot pull it from the registry;
some parameters of the pod have not been configured correctly.
You can view what is happening by checking the events:
kubectl get events
or checking pod status:
kubectl describe po mypod-390jo50wn3-sp40r
Full explanation here: https://pillsfromtheweb.blogspot.com/2020/05/troubleshooting-kubernetes.html
I had the same problem, but using this worked for me:
image: Image:latest
command: [ "sleep" ]
args: [ "infinity" ]