I know that it is possible to execute multiple commands in Kubernetes; I've seen Multiple commands in kubernetes. But what I want to know is how to execute multiple commands simultaneously.
command: ["/bin/sh","-c"]
args: ["command one; command two"]
Here both command one and command two need to execute in parallel, since command one starts a server instance and command two starts another server.
In my Docker environment I specify one command, then exec into the container and start the other command. But in a k8s deployment that won't be possible. What should I do in this situation?
I will be using a Helm chart, so if there is any trick related to Helm charts, I can use that as well.
Fully agree with @David Maze and @masseyb.
Writing this answer as community wiki just to index it for future researchers.
You are not able to execute multiple commands simultaneously this way. You should create a few similar but separate deployments and use a different command in each.
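As a rough illustration of the separate-deployments approach, assuming a placeholder image name (my-image) and using the two placeholder commands from the question, the manifests might look roughly like this:

# Rough sketch of the separate-deployments approach; my-image and the two
# commands are placeholders, not part of the original question.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server-one
  template:
    metadata:
      labels:
        app: server-one
    spec:
      containers:
        - name: server-one
          image: my-image:latest          # placeholder image
          command: ["/bin/sh", "-c"]
          args: ["command one"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server-two
  template:
    metadata:
      labels:
        app: server-two
    spec:
      containers:
        - name: server-two
          image: my-image:latest          # placeholder image
          command: ["/bin/sh", "-c"]
          args: ["command two"]

With a Helm chart this could presumably be a single deployment template rendered twice with different values (one entry per command).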
Related
I'm trying to build an Oozie workflow to execute a Python script every day; the script needs specific libraries to run.
At the moment I have created a Python virtual environment (using venv) on one node of my cluster (which consists of 11 nodes).
Through Oozie I saw that it is possible to run the script using an SSH Action, specifying the node containing the virtual environment. Alternatively it is possible to use a Shell Action to run the Python script, but this requires creating the virtual environment, with the same library dependencies, on the node where the shell will be executed (any of the cluster nodes).
I would like to avoid sharing keys or configuring all the cluster nodes to make this possible. Looking in the docs I found this section about launching applications using Docker containers, but in my cluster's Hadoop version (3.0.0) this feature is experimental and not complete. I suppose that if you can launch Docker containers from a shell, you should be able to launch them from Oozie.
So my question is: has anyone tried to do this? Is it a hack to use Docker this way?
I came across this question, but as of 2019/09/30 there are no specific answers.
UPDATE: I tried it, and it works (you can find more info in my answer to this question). I'm still wondering whether it's the correct way to do it.
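For reference, the kind of wrapper a Shell Action would invoke is just a script that delegates to Docker. A rough sketch, assuming Docker is available on the node that executes the action; the image name and script path are placeholders:

#!/bin/bash
# Sketch of a wrapper script an Oozie Shell Action could invoke.
# Assumes Docker is installed on the executing node; the image name
# (my-python-env) and the script path are placeholders, not from the question.
set -e
# Mount the action's working directory so files shipped with the workflow
# are visible inside the container.
docker run --rm -v "$(pwd)":/workdir my-python-env:latest \
  python /workdir/my_script.py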
I am creating an automation framework using Selenium. My entry point in execution is to create containers of different DB types, load them with database dumps, and then start the tests.
I have one simple, and possibly foolish, question.
If I create a docker-compose file which creates the above-mentioned containers, we generally run the docker-compose up command to bring them up.
But can I control the docker-compose/Dockerfile while the execution is going on? For example:
The test starts from TestNG -> before scripts execute to run the docker-compose file and create the containers.
How can I control that?
Thanks in advance
I can think of the following options:
1- Use Ansible to deploy for you; you can write a playbook with the steps.
Advantages: scaling, it will manage everything for you, and you can add notifications; but it requires managing Ansible itself and learning it.
2- Use a shell script that will start things in order: start the containers (or however you want the order to be), then start TestNG. A cheap and dirty solution (rough sketch below).
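A minimal sketch of option 2, assuming the compose file from the question and a Maven-run TestNG suite (the file names and the mvn invocation are assumptions):

#!/bin/bash
# Sketch of option 2: bring the DB containers up, run the tests, tear down.
# File names and the Maven/TestNG invocation are assumptions.
set -e
# start the database containers in the background
docker-compose -f docker-compose.yml up -d
# (optionally wait here until the databases accept connections, e.g. with a retry loop)
# run the TestNG suite
mvn test -Dsurefire.suiteXmlFiles=testng.xml
# clean up afterwards
docker-compose -f docker-compose.yml down

Alternatively, the same docker-compose commands could be launched from a TestNG @BeforeSuite method (for example via ProcessBuilder), so the framework itself controls the containers instead of an outer script.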
I have tests that I run locally using a docker-compose environment.
I would like to implement these tests as part of our CI using Jenkins with Kubernetes on Google Cloud (following this setup).
I have been unsuccessful because docker-in-docker does not work.
It seems that right now there is no solution for this use-case. I have found other questions related to this issue: here and here.
I am looking for solutions that will let me run docker-compose. I have found solutions for running docker, but not for running docker-compose.
I am hoping someone else has had this use-case and found a solution.
Edit: Let me clarify my use-case:
1. When I detect a valid trigger (i.e. a push to the repo) I need to start a new job.
2. I need to set up an environment with multiple dockers/instances (docker-compose).
3. The instances in this environment need access to code from git (mount volumes / create new images with the data).
4. I need to run tests in this environment.
5. I need to then retrieve results from these instances (JUnit test results for Jenkins to parse).
The problems I am having are with 2 and 3.
For 2 there is a problem running this in parallel (more than one job), since the Docker context is shared (docker-in-docker issues). If this runs on more than one node then I get clashes because of shared resources (ports, for example). My workaround is to limit it to one running instance and queue the rest (not ideal for CI).
For 3 there is a problem mounting volumes, since the Docker context is shared (docker-in-docker issues). I cannot mount the code that I check out in the job because it is not present on the host that is responsible for running the Docker instances that I trigger. My workaround is to build a new image from my template, copy the code into the new image, and then use that for the test (this works, but means I need to use docker cp tricks to get data back out, which is also not ideal).
I think the better way is to use pure Kubernetes resources and run the tests directly on Kubernetes, not via docker-compose.
You can convert your docker-compose files into Kubernetes resources using the kompose utility.
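For example (the compose file name is a placeholder for whatever yours is called):

# Convert the existing compose file into Kubernetes manifests
kompose convert -f docker-compose.yml
# ...then apply the generated manifests, e.g. (generated file names depend on your services)
kubectl apply -f web-deployment.yaml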
You will probably need to adapt the conversion result, or maybe you should convert your docker-compose objects into Kubernetes objects manually. Possibly, you can just use Jobs with multiple containers instead of a combination of Deployments + Services.
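A hedged sketch of what such a Job could look like, with placeholder image names and commands (none of this comes from the question):

# Sketch of a Job with two containers (app under test + test runner);
# all image names and commands are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: integration-tests
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: app
          image: my-app:latest        # placeholder
        - name: tests
          image: my-tests:latest      # placeholder
          command: ["/bin/sh", "-c", "./run-tests.sh"]

Note that a Job's pod only completes once every container has exited, so with this layout the app container needs some way to stop once the tests finish.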
Anyway, I definitely recommend using Kubernetes abstractions instead of running tools like docker-compose inside Kubernetes.
Moreover, you will still be able to run tests locally by using Minikube to spawn a small all-in-one cluster right on your PC.
I have a Node.JS based application consisting of three services. One is a web application, and two are internal APIs. The web application needs to talk to the APIs to do its work, but I do not want to hard-code the IP address and ports of the other services into the codebase.
In my local environment I am using the nifty envify Node.JS module to fix this. Basically, I can pretend that I have access to environment variables while I'm writing the code, and then use the envify CLI tool to convert those variables to hard-coded strings in the final browserified file.
I would like to containerize this solution and deploy it to Kubernetes. This is where I run into issues...
I've defined a couple of ARG variables in my Docker image template. These get turned into environment variables via RUN export FOO=${FOO}, and after running npm run-script build I have the container I need. OK, so I can run:
docker build . -t residentmario/my_foo_app:latest --build-arg FOO=localhost:9000 --build-arg BAR=localhost:3000
And then push that up to the registry with docker push.
My qualm with this approach is that I've only succeeded in punting the hard-coded variables from the codebase to the container image. What I really want is to define the paths at pod initialization time. Is this possible?
Edit: Here are two solutions.
PostStart
Kubernetes comes with a lifecycle hook called PostStart. This is described briefly in "Container Lifecycle Hooks".
This hook fires as soon as the container reaches ContainerCreated status, i.e. the container image is done being pulled and the container is fully initialized. You can then use the hook to jump into the container and run arbitrary commands.
In our case, I can create a PostStart event that, when triggered, rebuilds the application with the correct paths.
Unless you created a Docker image that doesn't actually run anything (which seems wrong to me, but let me know if this is considered an OK practice), this does require some duplicate work: stopping the application, rerunning the build process, and starting the application up again.
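For reference, the hook is declared on the container spec. A sketch, where the rebuild script name is a placeholder:

# Sketch of a postStart hook; ./scripts/rebuild.sh is a placeholder name.
containers:
  - name: my-foo-app
    image: residentmario/my_foo_app:latest
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "./scripts/rebuild.sh"]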
Command
Per the comment below, this event doesn't necessarily fire at the right time. Here's another way to do it that's guaranteed to work (and hence, superior).
A useful Docker container ends with some variant on a CMD serving the application. You can overwrite this run command in Kubernetes, as explained in the "Define a Command and Arguments for a Container" section of the documentation.
So I added a command to the pod definition that ran a shell script that (1) rebuilt the application using the correct paths, provided as an environment variable to the pod and (2) started serving the application:
command: ["/bin/sh"]
args: ["./scripts/build.sh"]
Worked like a charm.
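Put together, the relevant part of the pod spec looks roughly like this (FOO and the script path come from the snippets above; the example value is a placeholder):

# Sketch: the path is injected as an environment variable and the build
# script consumes it before serving the application.
containers:
  - name: my-foo-app
    image: residentmario/my_foo_app:latest
    env:
      - name: FOO
        value: "foo-service:9000"   # placeholder; set per environment at deploy time
    command: ["/bin/sh"]
    args: ["./scripts/build.sh"]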
I'm trying to automate the following loop with Docker: spawn a container, do some work inside of it (more than one single command), get some data out of the container.
Something along the lines of:
for ( i = 0; i < 10; i++ )
    spawn a container
    wget revision-i
    do something with it and store results in results.txt
According to the documentation I should go with:
for ( ... )
    docker run <image> <long; list; of; instructions; separated; by; semicolon>
Unfortunately, this approach is neither attractive nor maintainable as the list of instructions grows in complexity.
Wrapping the instructions in a script as in docker run <image> /bin/bash script.sh doesn't work either since I want to spawn a new container for every iteration of the loop.
To sum up:
1. Is there any sensible way to run a complex series of commands as described above inside the same container?
2. Once some data are saved inside a container in, say, /home/results.txt, and the container returns, how do I get results.txt? The only way I can think of is to commit the container and tar the file out of the new image. Is there a more efficient way to do it?
Bonus: should I use vanilla LXC instead? I don't have any experience with it though so I'm not sure.
Thanks.
I eventually came up with a solution that works for me and greatly improved my Docker experience.
Long story short: I used a combination of Fabric and a container running sshd.
Details:
The idea is to spawn container(s) with sshd running using Fabric's local, and run commands on the containers using Fabric's run.
To give a (Python) example, you might have a Container class with the following (a consolidated sketch follows the list):
1) a method to locally spawn a new container with sshd up and running, e.g.
local('docker run -d -p 22 your/image /usr/sbin/sshd -D')
2) set the env parameters needed by Fabric to connect to the running container - check Fabric's tutorial for more on this
3) write your methods to run everything you want in the container exploiting Fabric's run, e.g.
run('uname -on')
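Pulling the pieces together, a rough consolidated sketch of the idea using Fabric 1.x; the image name, credentials and the port lookup are assumptions, not part of the original answer:

# Rough sketch of the sshd-container approach with Fabric 1.x.
# Image name, password and the port lookup are assumptions.
from fabric.api import env, local, run

# 1) spawn a container with sshd listening (container port 22 mapped to a random host port)
container_id = local("docker run -d -p 22 your/image /usr/sbin/sshd -D", capture=True)

# find out which host port Docker mapped to the container's port 22
port = local("docker port %s 22" % container_id, capture=True).split(":")[-1]

# 2) point Fabric at the running container
env.host_string = "root@localhost:%s" % port
env.password = "your-root-password"  # whatever the image's sshd accepts

# 3) run whatever you need inside the container
run("uname -on")
run("wget http://example.com/revision-1 -O /home/revision-1")

Fabric's get() could presumably be used the same way to copy a file such as /home/results.txt back out over that SSH connection.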
Oh, and if you like Ruby better you can achieve the same using Capistrano.
Thanks to @qkrijger (+1'd) for putting me on the right track :)
On question 2.
I don't know if this is the best way, but you could install SSH on your image and use that. For more information on this, you can check out this page from the documentation.
You posted 2 questions in one. Maybe you should put 2 in a different post; I will consider 1 here.
It is unclear to me whether you want to spawn a new container for every iteration (as you say first) or whether you want to "run a complex series of commands as described above inside the same container" (as you say later).
If you want to spawn multiple containers I would expect you to have a script on your machine handling that.
If you need to pass an argument to your container (like i): work on passing arguments is currently in progress. See https://github.com/dotcloud/docker/pull/1015 (and https://github.com/dotcloud/docker/pull/1015/files for the documentation change, which is not online yet).