Write to stdin of running docker container

Say I run a docker container as a daemon:
docker run -d foo
is there a way to write to the stdin of that container? Something like:
docker exec -i foo echo 'hi'
Last time I checked, the -i and -d flags were mutually exclusive when used with the docker run command.

According to another answer on ServerFault, you can use socat to pipe input to a docker container like this:
echo 'hi' | socat EXEC:"docker attach container0",pty STDIN
Note that the echo command includes a newline at the end of the output, so the line above actually sends hi\n. Use echo -n if you don't want a newline.
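For explicit control over the trailing newline, printf can stand in for echo (a variation on the same invocation, reusing the container0 name from above):
printf 'hi\n' | socat EXEC:"docker attach container0",pty STDIN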
Let's see how this looks with the example script from David's answer:
# Create a new empty directory
mkdir x
# Run a container, in the background, that copies its stdin
# to a file in that directory
docker run -itd --rm -v "$PWD/x:/x" --name cattainer busybox sh -c 'cat >/x/y'
# Send some strings in
echo 'hi' | socat EXEC:"docker attach cattainer",pty STDIN
echo 'still there?' | socat EXEC:"docker attach cattainer",pty STDIN
# Stop container (cleans up itself because of --rm)
docker stop cattainer
# See what we got out
cat x/y
# should output:
# hi
# still there?
You could also wrap it in a shell function:
docker_send() {
  container="$1"; shift
  echo "$@" | socat EXEC:"docker attach $container",pty STDIN
}
docker_send cattainer "Hello cat!"
docker_send cattainer -n "No newline here:" # flag -n is passed to echo
Trivia: I'm actually using this approach to control a Terraria server running in a docker container, because TerrariaServer.exe only accepts server commands (like save or exit) on stdin.
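With the wrapper above, that looks roughly like this (assuming the container is named terraria):
docker_send terraria 'save'
docker_send terraria 'exit'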

In principle you can docker attach to it. CTRL+C will stop the container (by sending SIGINT to the process); CTRL+P, CTRL+Q will detach from it and leave it running (if you started the container with docker run -it).
The one trick here is that docker attach expects to be running in a terminal of some sort; you can do something like run it under script to meet this requirement. Here's an example:
# Create a new empty directory
mkdir x
# Run a container, in the background, that copies its stdin
# to a file in that directory
docker run -itd -v "$PWD/x:/x" --name cat busybox sh -c 'cat >/x/y'
# Send a string in
echo foo | script -q /dev/null docker attach cat
# Note, EOF here stops the container, probably because /bin/cat
# exits normally
# Clean up
docker rm cat
# See what we got out
cat x/y
In practice, if the main way a program communicates is via text on its standard input and standard output, Docker isn't a great packaging mechanism for it. In higher-level environments like Docker Compose or Kubernetes, it becomes progressively harder to send content this way, and there's frequently an assumption that a container can run completely autonomously. Just invoking the program gets complicated quickly (as this question hints at). If you have something like, say, the create-react-app setup tool that asks a bunch of interactive questions then writes things to the host filesystem, it will be vastly easier to run it directly on the host and not in Docker.

Related

Symlink /dev/err in docker

For what purpose are such symlinks set up for logs in Docker? Why redirect a log file to the main process?
ln -sf /proc/1/fd/2 /app/storage/logs/apache.log
ln -sf /proc/1/fd/2 /dev/stderr
Many software packages are designed to write their logs out to files, and don't have an obvious option to send the logs to somewhere else. So the first thing that having this symlink does is let you configure the application to write logs to "a file", but actually have it show up on the container's stdout or stderr.
For a minimal example, you could try something like
docker run -d --name test busybox \
sh -c 'ln -s /proc/1/fd/1 /tmp/log.txt; echo "hello" > /tmp/log.txt'
docker wait test
docker logs test
docker rm test
In the temporary BusyBox container, we set up a symlink, and then write some text to the "log file"; since it goes to the main process's stdout, it shows up in docker logs.
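If everything works, the docker logs step should print the text that was "written to the file":
$ docker logs test
hello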
Another common reason to do this is to give the operator the opportunity to actually write to a file, if that's what they want. Let's consider this minimal image:
FROM busybox
RUN mkdir /logs \
 && ln -s /proc/1/fd/1 /logs/log.txt
CMD echo 'hello' > /logs/log.txt
This is the same as the previous command, but recast into image form:
$ docker build -t log-test .
$ docker run --rm log-test
hello
However, we also have the option of bind-mounting a host directory to receive those logs:
$ mkdir logs
$ docker run --rm -v "$PWD/logs:/logs" log-test
$ cat logs/log.txt
hello
The docker run -v bind-mount hides the /logs directory in the image, and therefore the symlink, so the echo command writes to an actual file, which is then visible on the host system.
I know in particular the standard HTTP-server containers are set up this way, sending the HTTP access log to stdout unless something else is configured as log storage, but it's not specific to this class of image.
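The official nginx image, for instance, does essentially this in its Dockerfile (paraphrased; the real file may differ in detail):
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log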

Execute local shell script using docker run interactive

Can I execute a local shell script within a docker container using docker run -it ?
Here is what I can do:
$ docker run -it 5ee0b7440be5
bash-4.2# echo "Hello"
Hello
bash-4.2# exit
exit
I have a shell script on my local machine
hello.sh:
echo "Hello"
I would like to execute the local shell script within the container and read the value returned:
$ docker run -it 5ee0b7440be5 # Some way of passing a reference to hello.sh to the container.
Hello
A specific design goal of Docker is that you can't. A container can't access the host filesystem at all, except to the extent that an administrator explicitly mounts parts of the filesystem into the container. (See @tentative's answer for a way to do this for your use case.)
In most cases this means you need to COPY all of the scripts and support tools into your image. You can create a container running any command you want, and one typical approach is to set the image's CMD to "the thing the container normally does" (like run a Web server) but to allow running the container with a different command (an admin task, a background worker, ...).
# Dockerfile
FROM alpine
...
COPY hello.sh /usr/local/bin
...
EXPOSE 80
CMD httpd -f -h /var/www
docker build -t my/image .
docker run -d -p 8000:80 --name web my/image
docker run --rm --name hello my/image \
hello.sh
In normal operation you should not need docker exec, though it's really useful for debugging. If you are in a situation where you're really stuck, need more diagnostic tools to understand how to reproduce a situation, and have no choice but to look inside the running container, you can also docker cp the script or tool into the container before you docker exec there. If you do this, remember that the image also needs to contain any dependencies for the tool (interpreters like Python or GNU Bash, C shared libraries), and that any docker cp'd files will be lost when the container exits.
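A minimal sketch of that debugging flow (diagnose.sh and the container name web are placeholders):
docker cp diagnose.sh web:/tmp/diagnose.sh
docker exec web sh /tmp/diagnose.sh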
You can use a bind-mount to mount a local file into the container and execute it. When you do that, however, be aware that the container process needs read and execute access to the folder or the specific script you want to run. Depending on your objective, using Docker for this purpose may not be the best idea.
See @David Maze's answer for reasons why. However, here's how you can do it:
Assuming you're on a Unix-based system and the hello.sh script is in your current directory, you can mount that single script into the container with -v "$(pwd)/hello.sh:/home/hello.sh".
This command mounts the file into your container and starts a shell in the folder where you mounted it:
docker run -it -v "$(pwd)/hello.sh:/home/hello.sh" --workdir /home ubuntu:20.04 /bin/sh
root@987eb876b:/home# ./hello.sh
Hello World!
This command will run that script directly and save the output into the variable output:
output=$(docker run --rm -v "$(pwd)/hello.sh:/home/hello.sh" ubuntu:20.04 /home/hello.sh)
echo $output
Hello World!
References for more information:
https://docs.docker.com/storage/bind-mounts/#start-a-container-with-a-bind-mount
https://docs.docker.com/storage/bind-mounts/#use-a-read-only-bind-mount

Docker-compose pass stdout from a service to stdin in another service

I'm not sure whether what I'm looking for is possible or not... I'm a newbie in the docker-compose world and I've read a lot of documentation and posts, but I wasn't able to find a solution.
I need to pass the stdout of a service defined in docker-compose to the stdin of another service. So the output of ServiceA will be the input of ServiceB.
Is it possible?
I see the stdin_open option, but I cannot understand how to use the stdout of the other service as input.
Any suggestion?
Thanks
You can't do this in Docker easily.
Container processes' stdin and stdout aren't usually used for much. Most often the stdout receives log messages that can get reviewed later, and containers actually communicate through network sockets. (A container would typically run Apache but not grep.)
Docker doesn't have a native cross-container pipe, beyond the networking setup. If you're docker running containers from the shell, you can use an ordinary pipe there:
sudo sh -c 'docker run image-a | docker run image-b'
If it's practical to run both processes in the same container, you can use a shell pipe as the main container command:
docker run image sh -c 'process_a | process_b'
A differently hacky approach is to use a tool like Netcat to bridge between "stdin" and a network port. For example, consider a "server":
#!/bin/sh
# server.sh
# (Note, this uses busybox nc syntax.)
# 'cat' stands in for any process that reads from stdin.
nc -l -p 12345 | cat > out.txt
And a matching "client":
#!/bin/sh
# client.sh
# 'cat in.txt' stands in for any process that writes to stdout.
cat in.txt | nc "$1" 12345
Build these into an image:
FROM busybox
COPY client.sh server.sh /bin/
EXPOSE 12345
WORKDIR /data
CMD ["server.sh"]
Now run both containers:
docker network create testnet
docker build -t testimg .
echo hello world > in.txt
docker run -d -v "$PWD:/data" --net testnet --name server testimg \
server.sh
docker run -d -v "$PWD:/data" --net testnet --name client testimg \
client.sh server
docker wait client
docker wait server
cat out.txt
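Since the question asks about Compose: the same pair of containers could be sketched as a docker-compose.yml roughly like the following, reusing the testimg image and scripts from above. Compose's default network makes the server service name resolvable; note that depends_on only orders startup, so in practice the client may still need a retry loop.
version: "3.8"
services:
  server:
    image: testimg
    command: server.sh
    volumes:
      - .:/data
  client:
    image: testimg
    command: client.sh server
    volumes:
      - .:/data
    depends_on:
      - server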
A more robust path would be to wrap the server process in a simple HTTP server that accepted an HTTP POST on some path and launched a subprocess to handle the request; then you'd have a single long-running server process instead of having to re-launch it for each request. The client would use a tool like curl or any other HTTP client.
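The client side of that could then be an ordinary HTTP call (hypothetical port and path):
curl -X POST --data-binary @in.txt http://server:8080/run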

Getting docker containers output from another docker container

I have an application that runs dockerized commands via docker run --rm .... Is it possible to dockerize this application?
E.g. I want the app RUNNER to run app RUNNABLE and read the stdout result.
(I need multiple instances of RUNNABLE in async calling fashion but that's RUNNER application business)
I know it is possible to just expose the docker socket (effectively root access) to the RUNNER application, but this doesn't feel right, especially given the rule that *nix applications shouldn't run as root.
Is there any other method for containers to communicate, rather than exposing the socket into the container? Am I getting the system design wrong?
Basically it's possible to let the containers communicate over host-mounted files. Take a look at the following example.
# create a test dir
mkdir -p docker_test/bin
cd docker_test
# an endlessly running script that writes its output to a file
vim bin/printDate.sh
chmod 700 bin/*.sh
# start docker container from debian image
# the host's /tmp is mounted at /opt/output in the container
# printDate.sh writes its output to container /opt/output/printDate.txt
docker run --name cont-1 \
-v /tmp:/opt/output -it -v "$(pwd)/bin:/opt/test" \
debian /opt/test/printDate.sh
# start a second container in another terminal and mount /tmp again
docker run --name cont-2 -v /tmp:/opt/output -it \
debian tail -f /opt/output/printDate.txt
# the second container prints the output of the program in cont-1
The endless script that produces container 1's output:
#!/bin/bash
while true; do
sleep 1
date >> /opt/output/printDate.txt
done

How to tell if a docker container run with -d has finished running its CMD

I want to make a simple bash script which runs one docker container with -d and then do something else if and only if the container has finished running its CMD. How can I do this while avoiding timing issues since the docker container can take a while to finish starting up?
My only thought was that the Dockerfile for the container will need to create some sort of state on the container itself when it's done and then the bash script can poll until the state file is there. Is there a better / standard way to do something like this?
Essentially I need a way for the host that ran a docker container with -d to be able to tell when it's ready.
Update
Made it work with the tailing logs method, but it seems a bit hacky:
docker run -d \
  --name sauceconnect \
  sauceconnect
# Tail logs until 'Sauce Connect is up'
docker logs -f sauceconnect | while read -r LINE
do
  echo "$LINE"
  if [[ "$LINE" == *"Sauce Connect is up"* ]]; then
    pkill -P $$ docker
  fi
done
You should be fine checking the logs via docker logs -f <container_name_or_ID>
-f : same as tail -f
For example, if the CMD, once finished, emits a log line like JOB ABC is successfully started, your script can watch for that line and run the remaining jobs after it sees it.
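A somewhat more standard variant of the same idea is to attach a Docker health check and poll the health status instead of the logs. This is only a sketch; the readiness test (here a hypothetical marker file /tmp/ready) depends entirely on what the container can report:
docker run -d --name sauceconnect \
  --health-cmd 'test -f /tmp/ready' \
  --health-interval 2s \
  sauceconnect
# poll until Docker reports the container healthy
until [ "$(docker inspect -f '{{.State.Health.Status}}' sauceconnect)" = "healthy" ]; do
  sleep 1
done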
