I run my Rails application in production on a Kubernetes cluster: one node for the Rails process and one node for Sidekiq (for cron jobs).
When I enqueue delayed jobs from the Rails application, they never run, because no Sidekiq process is running on the Rails node. What should I do?
In the Dockerfile:
ENTRYPOINT ["./entrypoints/docker-entrypoint.sh"]
In docker-entrypoint.sh:
#!/bin/sh
set -e
if [ -f tmp/pids/server.pid ]; then
  rm tmp/pids/server.pid
fi
bundle exec rails s -b 0.0.0.0 -e production
Can I run multiple processes on one cluster, or call out to another node for the jobs?
You have your entrypoint script hard-wired to only run the Rails server. A better approach is to separate this setup from the actual command to run. If a Dockerfile has both an ENTRYPOINT and a CMD, the CMD is passed as arguments to the ENTRYPOINT, and you can combine this with the shell exec "$@" construct to replace the entrypoint script with the main container process.
In a Ruby context, you probably need to run most things under Bundler, and I'd fold that into the final line.
#!/bin/sh
# docker-entrypoint.sh
...
exec bundle exec "$@"
In the Dockerfile, you'd specify both the ENTRYPOINT as you have it now and the default CMD to run.
ENTRYPOINT ["./entrypoints/docker-entrypoint.sh"]
CMD ["rails", "s", "-b", "0.0.0.0", "-e", "production"]
Now, the benefit of doing this is that you can replace the CMD when you run the container. In plain Docker, you'd pass the command after the image name in docker run:
docker run ... image-name sidekiq
The entrypoint wrapper would still run and clean up the Rails pid file, then run sidekiq under Bundler instead of the Rails server.
Bringing this up to Kubernetes, you would have two separate Deployments, one for the Rails server and one for the Sidekiq worker. Somewhat confusingly, Kubernetes uses different names for the two parts of the command; command: overrides Dockerfile ENTRYPOINT, and args: overrides CMD. So for this setup you need to specify sidekiq as the args:, and leave the command: alone.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  template:
    spec:
      containers:
        - name: app
          image: image-name
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  template:
    spec:
      containers:
        - name: worker
          image: image-name
          args:
            - sidekiq
Only the main application needs a matching Service (if you're using the Istio service mesh, it has different requirements) and you need to make sure the spec: { template: { metadata: { labels: } } } can disambiguate the two sets of Pods. Either or both parts can independently have non-default replicas: and a matching HorizontalPodAutoscaler.
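As a rough sketch of that label/Service wiring (the label key and values here are placeholders of my own, and I'm assuming the Rails server listens on its default port 3000):
# Labels distinguish the Rails Pods from the Sidekiq Pods, and the Service selects only the Rails Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      component: rails
  template:
    metadata:
      labels:
        component: rails
    spec:
      containers:
        - name: app
          image: image-name
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    component: rails          # routes traffic only to the Rails Pods, not the Sidekiq ones
  ports:
    - port: 80
      targetPort: 3000        # assuming the Rails server listens on port 3000
The worker Deployment would carry a different label value (say component: sidekiq) and simply not have a Service pointing at it.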
I want to delete a specific file in the following container from a CronJob. The problem is that when I run exec I get an error. How can I exec into a distroless container (k8s v1.22.5) and delete the file from a CronJob? Which options do we have?
This is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: distro
  labels:
    app: distro
spec:
  replicas: 1
  selector:
    matchLabels:
      app: distro
  template:
    metadata:
      labels:
        app: distro
    spec:
      containers:
        - name: edistro
          image: timberio/vector:0.21.X-distroless-libc
          ports:
            - containerPort: 80
What I tried is:
kubectl exec -i -t -n apits aor-agent-zz -c tor "--" sh -c "clear; (bash || ash || sh)"
The error is:
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec
I also tried the following:
kubectl debug -it distro-d49b456cf-t85cm --image=ubuntu --target=edistro --share-processes -n default
And got this error:
Targeting container "edistro". If you don't see processes from this container it may be because the container runtime doesn't support this feature. Defaulting debug container name to debugger-fvfxs. error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").
My guess (I'm not sure) is that our container runtime doesn't support this. Which options do we have?
The answer below doesn't solve the issue. I need a way to get into the distroless pod from outside and delete a specific file there. How can I do this?
The point of using distroless images is to package a minimal amount of tools/software in the image. This means unnecessary tools such as a shell are removed from the image.
Depending on your objective, you may be able to work around this with:
kubectl debug -it <POD_TO_DEBUG> --image=<helper-image> --target=<CONTAINER_TO_DEBUG> --share-processes
Eg:
kubectl debug -it distro-less-pod --image=ubuntu --target=edistro --share-processes
Not a great option but it is the only option I can think of.
If you are able to enter the nodes where the pods are running and you have permissions to execute commands (most likely as root) in there, you can try nsenter or any other way to enter the container mount namespace directly.
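A rough sketch of that node-level approach (assuming you can get a root shell on the node and that it runs a Docker runtime; the container name and file path below are placeholders):
# On the node that runs the Pod. All names and paths here are placeholders.
# 1. Find the container's PID on the host (with containerd you'd use crictl instead of docker).
PID=$(docker inspect -f '{{.State.Pid}}' <container-id-or-name>)
# 2. The container's root filesystem is visible to the host under /proc/<PID>/root,
#    so the file can be removed without needing any shell inside the container.
sudo rm "/proc/${PID}/root/path/to/the/file"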
I am currently executing an .exe file using Windows Task Scheduler. I would like to run this .exe file in a Docker container. While I can see how to run the .exe file in Docker, I am not sure how to schedule the run the way it was done with Windows Task Scheduler.
Please advise on how to schedule and run the .exe file in Docker.
Note: Helm is what I use for deployment, so I cannot use a docker-compose.yaml file.
Thanks,
As @David Maze rightly pointed out, you might be interested in CronJobs.
We can find in the CronJob documentation:
CronJobs are useful for creating periodic and recurring tasks, like running backups or sending emails. CronJobs can also schedule individual tasks for a specific time, such as scheduling a Job for when your cluster is likely to be idle.
You can use a CronJob to run Jobs on a time-based schedule; it's similar to cron tasks on a Linux or UNIX system.
I'll create a simple example from scratch to illustrate how it works.
You typically create a container image of your application and push it to a registry before referring to it in a CronJob.
I'm not sure if you have a docker image already built, so I'll create one too.
Suppose I have a job.py Python script and want to "package" it as a docker image:
$ cat job.py
print("Starting job...")
for i in range(1, 6):
    print(i)
print("Done")
Docker can build images automatically by reading the instructions from a Dockerfile.
I have a single Python script, so I will use the python:3 image as the base image:
$ cat Dockerfile
FROM python:3
WORKDIR /usr/src/app
COPY job.py .
CMD [ "python", "./job.py" ]
After creating a Dockerfile, we can use the docker build command to build the Docker image, and docker push to push this image to the Docker Hub registry or to a self-hosted one.
NOTE: I'm using Docker Hub in this example.
### docker build -t <hub-user>/<repo-name>[:<tag>]
$ docker build -t zyrafywchodzadoszafy/cronjob:latest .
...
Successfully built cc46cde8fcdd
Successfully tagged zyrafywchodzadoszafy/cronjob:latest
### docker push <hub-user>/<repo-name>:<tag>
$ docker push zyrafywchodzadoszafy/cronjob:latest
The push refers to repository [docker.io/zyrafywchodzadoszafy/cronjob]
adabca8949d9: Pushed
a1e07bb90a13: Pushed
...
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
zyrafywchodzadoszafy/cronjob latest cc46cde8fcdd 14 minutes ago 885MB
We can quickly make sure that everything is working by running this docker image:
$ docker run -it --rm zyrafywchodzadoszafy/cronjob:latest
Starting job...
1
2
3
4
5
Done
Now it's time to create a CronJob:
$ cat cronjob.yml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-test
spec:
  jobTemplate:
    metadata:
      name: cronjob-test
    spec:
      template:
        metadata:
        spec:
          containers:
            - image: zyrafywchodzadoszafy/cronjob:latest
              name: cronjob-test
          restartPolicy: OnFailure
  schedule: '*/1 * * * *'
$ kubectl apply -f cronjob.yml
cronjob.batch/cronjob-test created
$ kubectl get cronjob cronjob-test
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob-test */1 * * * * False 1 10s 36s
In the example above, cronjob-test CronJob runs the job.py Python script every minute.
Finally, to see if it works as expected, let's take a look at the Pods spawned by the cronjob-test CronJob:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
cronjob-test-1618581120-vmqtc 0/1 Completed 0 2m37s
cronjob-test-1618581180-nqqsd 0/1 Completed 0 97s
cronjob-test-1618581240-vhrhm 0/1 Completed 0 37s
$ kubectl logs -f cronjob-test-1618581120-vmqtc
Starting job...
1
2
3
4
5
Done
Much more information on specific configuration options can be found in the CronJob documentation.
I have this Dockerfile:
FROM rabbitmq:3.7.12-management
CMD . /files/envinfo && echo $RABBITMQ_DEFAULT_USER && rabbitmq-server
In envinfo I have this content:
export RABBITMQ_DEFAULT_USER='anothername'
When the container starts up, the echo of RABBITMQ_DEFAULT_USER really does print anothername. But when the service starts, it doesn't see it.
If I instead set the environment variable from the Kubernetes file, it works as it should.
You can see the rabbitmq image I extend here:
https://github.com/docker-library/rabbitmq/blob/35b41e318d9d9272126f681be74bcbfd9712d71b/3.8/ubuntu/Dockerfile
I have another process that fetches the file and puts it in /files/envinfo to make it available to this Docker image when it starts, so I can't use environment settings from Kubernetes.
Looking forward to hearing some suggestions =)
I agree with @code-gorilla: use Kubernetes environment variables. But another way to do it is to source the environment variables before the entrypoint runs. Note that the exec form of ENTRYPOINT doesn't go through a shell, so the sourcing has to be wrapped in one:
ENTRYPOINT ["/bin/sh", "-c", ". /files/envinfo && exec docker-entrypoint.sh rabbitmq-server"]
Overriding CMD only changes the arguments passed to the ENTRYPOINT; that's probably why it doesn't work for you.
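If you'd rather keep the image's normal CMD handling instead of hard-coding rabbitmq-server, another option is a small wrapper script as the entrypoint; a sketch, assuming the env file sits at /files/envinfo and the stock docker-entrypoint.sh from the upstream image is on the PATH:
#!/bin/sh
# entrypoint-wrapper.sh (hypothetical name): source the env file, then hand
# control to the image's original entrypoint with whatever CMD was supplied.
set -e
. /files/envinfo
exec docker-entrypoint.sh "$@"
In the Dockerfile you would then COPY this script in, make it executable, and set ENTRYPOINT ["/entrypoint-wrapper.sh"] with CMD ["rabbitmq-server"].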
You can try to debug it further:
1) Connect to the container and check whether the env variable is set within it:
docker exec -it <container-id> /bin/bash
Then in your container:
echo $RABBITMQ_DEFAULT_USER
2) Use the Kubernetes environment variable configuration instead of script execution in CMD.
See the docs.
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Note: I suspect that an environment variable set within the CMD command is not available in all shells of the container, e.g. when you open a new bash session within it. This is what the Kubernetes config takes care of.
I have a Node.js application that I am trying to deploy to Kubernetes. To run this normally on my machine without Kubernetes, I would run npm install and npm run build and then serve the "dist" folder. Normally I would install npm's serve using "npm install -g serve" and then run "serve -s dist". This works fine. But now, to deploy to Kubernetes for production, how can I create my image? I mean, what should the Dockerfile for this look like?
Note: I don't want to use nginx, Apache or any other kind of web server. I want to do this using node/npm's serve for serving the dist folder. Please help.
Dockerfile (what I have tried):
FROM node:8
WORKDIR /usr/src/app
COPY /dist
RUN npm install -g serve
serve -s dist
I am not sure if this Dockerfile is right, so I need guidance on how to correctly create an image to serve the dist folder produced by npm run build. Please help.
You can find plenty of tutorials about integrating web applications into a Kubernetes cluster and exposing them to visitors.
An application containerized for Docker has to be packaged into an image from a Dockerfile (or built with the Docker Compose tool) so that all of the application's dependencies are included. When the image is ready, it can be stored on public Docker Hub or in an isolated private registry; the Kubernetes container runtime then pulls this image and creates the appropriate workloads (Pods) within the cluster according to the declared resource model.
I would recommend the following scenario:
Build a Docker image from your initial Dockerfile (I've made some corrections):
FROM node:8
WORKDIR /usr/src/app
COPY dist/ ./dist/
RUN npm install -g serve
$ sudo docker image build <PATH>
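As a side note, if you also want the image to run on its own with a plain docker run (without the command: override in the Pod manifest further below), you could give it a default command; a minimal sketch, assuming serve's -l flag is used to choose the port:
# Optional default command so `docker run <image>` serves the dist folder directly.
EXPOSE 5000
CMD ["serve", "-s", "dist", "-l", "5000"]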
Create a tag for the source image:
$ sudo docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
Push the image to Docker Hub or a private registry:
$ sudo docker push [OPTIONS] NAME[:TAG]
Create the relevant Kubernetes workload (Pod) and apply it to the Kubernetes cluster, starting the Node server inside the container listening on port 5000:
apiVersion: v1
kind: Pod
metadata:
  name: nodetest
  labels:
    node: test
spec:
  containers:
    - name: node-test
      image: TARGET_IMAGE[:TAG]
      ports:
        - containerPort: 5000
      command: [ "/bin/bash", "-ce", "serve -s dist" ]
If you want to expose the application to clients outside the cluster, look at a NodePort service:
$ kubectl expose po nodetest --port=5000 --target-port=5000 --type=NodePort
Update_1:
The application service will then be reachable on the host machine on a specific port; you can retrieve this port value with:
kubectl get svc nodetest -o jsonpath='{.spec.ports[0].nodePort}'
Update_2:
To expose the NodePort service on a desired port, apply the following manifest, which assigns port 30000:
apiVersion: v1
kind: Service
metadata:
  labels:
    node: test
  name: nodetest
spec:
  ports:
    - nodePort: 30000
      port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    node: test
  type: NodePort
I have a Docker image and I am using the following command to run it.
docker run -it -p 1976:1976 --name demo demo.docker.cloud.com/demo/runtime:latest
I want to run the same in Kubernetes. This is my current yaml file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: demo.docker.cloud.com/demo/runtime:latest
          ports:
            - containerPort: 1976
          imagePullPolicy: Never
This YAML file covers everything except the "-it" flag. I am not able to find its Kubernetes equivalent. Please help me out with this. Thanks
I assume you are trying to connect a shell to your running container. Following the guide at https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/, you would need the following commands to apply your above configuration:
Create the pod: kubectl apply -f ./demo-deployment.yaml
Verify the Container is running: kubectl get pod demo-deployment
Get a shell to the running Container: kubectl exec -it demo-deployment -- /bin/bash
Looking at the Container definition in the API reference, the equivalent options are stdin: true and tty: true.
(None of the applications I work on have ever needed this; the documentation for stdin: talks about "reads from stdin in the container" and the typical sort of server-type processes you'd run in a Deployment don't read from stdin at all.)
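For illustration, a minimal sketch of where those fields would go in the Deployment above (only the container entry is shown):
# Equivalent of docker run -it, added to the container spec.
containers:
  - name: demo
    image: demo.docker.cloud.com/demo/runtime:latest
    stdin: true    # -i: keep stdin open
    tty: true      # -t: allocate a TTY
    ports:
      - containerPort: 1976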
kubectl run is the closest match to docker run for the requested scenario.
Some examples from the Kubernetes documentation and their purpose:
kubectl run -i --tty busybox --image=busybox -- sh                    # Run pod as interactive shell
kubectl run nginx --image=nginx -n mynamespace                        # Run pod nginx in a specific namespace
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml   # Run pod nginx and write its spec into a file called pod.yaml