I'm trying to run a bunch of docker-compose commands in parallel using GNU parallel. For some reason, though, it looks like parallel forces docker-compose into detached mode, so I can't access the container output anymore.
I was hoping parallel would keep docker-compose attached and print the output in order once each process finishes.
Here's my command:
echo 'tests/test_foo.py' | parallel -X docker-compose --project-name bar run --rm test py.test $*
Is there a way to force docker-compose to stay attached?
There is no detached mode for run; it should always print the log lines to the terminal. This works for me with a single line of input.
However, calculating the container name is racy, so running multiple instances like this may result in name conflict errors. You might need to pass --name as an option to run to set a unique name for each.
You might also consider using pytest-xdist to run the tests in parallel inside a single test container.
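A minimal sketch of that approach (assuming pytest-xdist is installed in the test image; the worker count of 4 is arbitrary):
docker-compose --project-name bar run --rm test py.test -n 4 tests/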
Further to dnephin's answer, you can name each container with parallel's positional arguments so there is no naming contention:
echo -e "1\ttest1\n2\ttest2\n3\ttest3" | parallel -X --colsep '\t' docker-compose run --name proj{1} web echo {2}
In this demo, a tab separates the two fields: the first is used for the container name and the second for the command.
You either need to use existing fields from tests/test_foo.py for the names, or add this extra information if what you have in that file is not unique.
I'm using Go to create a kind of custom client for Docker. It parses a YAML file, creates the containers with some hard-coded options and then creates terminal windows in order to be able to interact with the containers. However, I'm struggling with the last part, the creation of the terminal windows.
I want my program to use setuid so that users don't need sudo or membership in the docker group, since either of those would let them use the Docker CLI directly. Instead, I want them to manage Docker containers only through my program. To create the terminal windows, I was using the os/exec package to call the terminal emulator, which would create a tab for each container. For example, the executed command would be: xfce4-terminal -e "sh -c 'docker container attach container1; exec sh'" --tab -e "sh -c 'docker container attach container2; exec sh'" (the last part, exec sh, is added so the tab can still be used after the container stops)
This doesn't work because xfce4-terminal, like gnome-terminal or terminator, is a GTK+ app, and those don't allow setuid execution. I tried using cmd.SysProcAttr to set the real UID and GID while creating the terminal windows, but then the docker attach command fails because the user doesn't belong to the docker group. Finally, I tried using sudo, but this has the problem that, after stopping the container, the user can execute commands as root.
As stated on the GTK website, I believe the way to go would be to call the client.ContainerAttach function of the Docker SDK and pass the output to the non-setuid terminal through a pipe. But I don't know how to implement this, which is why I'm asking for your help.
I'd also be happy with a solution that doesn't use pipes, as long as it has the desired behaviour: one terminal window with N tabs, one per container (or N terminal windows; either works for me).
Thanks in advance!
I'm somewhat new to Docker. I would like to be able to use Docker to distribute a CLI program, but run the program normally once it has been installed. To be specific, after running docker build on the system, I need to be able to simply run my-program in the terminal, not docker run my-program. How can I do this?
I tried something with a Makefile which runs docker build -t my-program . and then writes a shell script to ~/.local/bin/ called my-program that runs docker run my-program, but this adds another container every time I run the script.
EDIT: I realize this is the expected behavior of docker run, but it does not work for my use case.
Any help is greatly appreciated!
If you want to keep your script, add the remove flag --rm to the docker run command. The remove flag removes the container automatically after the entry-point process has exited.
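For example, the generated script in ~/.local/bin/my-program could look like this (a sketch; exec replaces the wrapper shell and "$@" forwards all arguments to the image's entry point):
#!/bin/sh
exec docker run --rm my-program "$@"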
Additionally, I would personally prefer an alias for this. Simply add something like alias my-program="docker run --rm my-program" to your ~/.bashrc or ~/.zshrc file. This even has the advantage that all parameters after the alias (my-program param1 param2) are automatically forwarded to the entry point of your image without any additional effort.
As a novice user of a complicated CI system, trying out scripts, I am unsure whether my scripts are executed directly by my system's bash or from inside a docker container running on the same system. Hence the question: what command (environment variable query or whatever) could tell me whether I am in docker or not?
I guess you are trying to find out whether your script is run from within the context of a docker container or directly on the host machine that runs docker.
Another way of looking at this: you have a script which is running, and that script is actually a process. Any given process has an associated PID.
You might want to find out if this process is running within a docker container or directly within the host machine.
Let's say your process runs within a docker container; then we can conclude that a docker process is an ancestor of your process.
Running the top command lists all the processes on the machine. Then the command ps -axfo pid,uname,cmd gives a full tree listing of processes, showing parent-child relationships.
Let's say you have identified the parent process ID (e.g. 2871). Now you can run:
docker ps -q | xargs docker inspect -f '{{ .State.Pid }} {{ .Config.Hostname }}' | grep 2871
This identifies the container containing the process.
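The same pipeline can be wrapped in a small helper (a sketch; the function name is made up). It takes a host PID and prints the matching container's init PID and hostname, if any:
container_for_pid() {
  docker ps -q | xargs docker inspect -f '{{ .State.Pid }} {{ .Config.Hostname }}' | grep "$1"
}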
If we run pstree, we can see the process tree all the way up to the boot process.
Courtesy:
Finding out to which docker container a process belongs
How do I get the parent process ID of a given child process
Hope this helps
If you find yourself in a container, you must have executed a command to enter that container.
If you forgot where you are, type docker ps. If it fails, you are in a docker container.
Edit:
Obviously, this simple trick does not work when you run docker in docker.
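A one-line version of this check (note that it really tests whether the Docker CLI can reach a daemon, so the docker-in-docker caveat above applies):
docker ps >/dev/null 2>&1 && echo "probably on the host" || echo "probably inside a container"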
I have a docker container that has services running on multiple ports.
When I try to start one of these processes mid-way through my Dockerfile it causes the build process to stall indefinitely.
RUN /opt/webhook/webhook-linux-amd64/webhook -hooks /opt/webhook/hooks.json -verbose
So the program is running as it should but it never moves on.
I've tried adding & to the end of the command to tell bash to run the next step in parallel, but this causes the service to not be running in the final image. I also tried redirecting the output of the program to /dev/null.
How can I get around this?
You have a misconception here. The commands in a Dockerfile are executed at build time to create a docker image, not when a container is run from that image. One type of command in the Dockerfile is RUN, which lets you run an arbitrary shell command whose effects influence the image under creation.
Therefore, the build process waits until the command terminates.
It seems you want the service to start when a container is started from the image. To do so, use the CMD instruction instead. It tells Docker what is supposed to be executed when a container starts.
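For example, the Dockerfile from the question could end like this instead of using RUN (a sketch; only the paths shown in the question are assumed):
CMD ["/opt/webhook/webhook-linux-amd64/webhook", "-hooks", "/opt/webhook/hooks.json", "-verbose"]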
When I'm debugging my Dockerfile, I constantly need to run these two commands:
$ docker build -t myuser/myapp:mytag - < myapp.docker # to create the container
$ docker run -i -t myuser/myapp:mytag /bin/bash # to enter the container and see what's going on when something went wrong
("mytag" is usually something like "production", "testing", or "development". Not sure if I'm supposed to use tags this way)
But now the second command doesn't seem to work anymore: it's starting an old container. If I list all the containers with $ docker images, I see my tagged container in the 3rd place, and other untagged containers before it. If I use the ID of the 1st container it works fine, but it will be annoying to do it that way, I will have to search for its ID every time.
What am I doing wrong?
You just have to start and attach a container using:
docker start -i <container_id>
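To find that ID, list all containers first, including stopped ones:
docker ps -a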
It's important to be clear about containers vs. images. It sounds like your tagged image is 3rd in the list of images, and that you believe the first image, which only has an ID, really should be tagged but isn't. This probably means that there's a problem building the image. The docker build output is verbose by default and should show you the problem.
As an aside, I'm not entirely sure about your use case, but the idea of having different images for development, testing, and production is an anti-pattern. The entire point is to minimize differences between execution environments. In most cases you should use the same image but provide different environment variables to configure the application as desired for each environment. But again, I'm sure there are reasons to do this and you may have a legitimate one.
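For instance (APP_ENV is a made-up variable name; it would be whatever your application actually reads), the same image can be configured per environment:
docker run -e APP_ENV=testing myuser/myapp
docker run -e APP_ENV=production myuser/myapp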
Here's what I use:
run_most_recent_container() { docker exec -it "$(docker ps -a --no-trunc -q | head -n 1)" bash; }