Pipeline which works perfectly well on host gets stuck in docker container - docker

I'm trying to execute the following command:
tail -n +1 -f /test/high_scan.log | awk '1,/shutting down/; /shutting down/ { exit }'
Inside a docker container created by:
docker run -it ubuntu:18.04 bash
The idea is to break the pipeline when awk detects the "shutting down" pattern.
It works perfectly well on the host but never terminates inside the container.
The file is the same one in both cases.
Any ideas?

awk in a fresh ubuntu:18.04 Docker container is mawk. On your host system, it's almost certainly gawk. If you install gawk in the container and use it instead, it will behave the same as on the host.
The particular problem is that mawk block-buffers its input: it waits for a full block of data before processing it, and since tail -f keeps the stream open, mawk never reaches the end of its buffer and never sees the pattern that should make it exit. As an alternative solution, you can pass -W interactive to mawk to make it process input line by line, if you'd rather keep using it.
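For example, either of these should terminate as expected inside the container (a minimal sketch; the log path and pattern are the ones from the question):
# option 1: install and use gawk instead of mawk
apt-get update && apt-get install -y gawk
tail -n +1 -f /test/high_scan.log | gawk '1,/shutting down/; /shutting down/ { exit }'
# option 2: keep mawk, but force line-buffered (interactive) mode
tail -n +1 -f /test/high_scan.log | mawk -W interactive '1,/shutting down/; /shutting down/ { exit }'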

Related

Docker container not exiting when disconnected from a login shell

I have a docker container running on the server side as a user's login shell so that anyone can ssh into the server and get access to some resource inside.
Say, I have a user called test and I want people to be able to SSH into test's account using some publicly available password. Here's what I have in /etc/passwd
test:x:1000:1000::/:/bin/test-shell
and in /bin/test-shell
#!/bin/bash
docker run -it --rm --network none python:3.10-alpine /bin/sh
Now, whenever someone runs ssh test@example.com, they are immediately dropped into a disposable docker container. So far so good.
The problem I have is, if the user doesn't exit the shell by either calling exit or pressing Ctrl-D but just closes their terminal window instead, the container is left running indefinitely and taking up limited server resources. I'm wondering if it is possible (and if so, how) to make sure the container is properly stopped (and therefore deleted) when a user disconnects.
I have seen Why does SIGHUP not work on busybox sh in an Alpine Docker container? and tried the approach of trapping both SIGHUP and SIGPIPE (running trap exit SIGHUP SIGPIPE inside the container), unfortunately nothing happens. I'm suspecting maybe the signals are received by the host shell instead of inside the shell inside the container, but I'm not sure how I can leverage that (if that's really what happens) considering I have no way to get the dynamically generated container name, and I can't name the container because I want every single ssh attempt to spawn a different container.
I think this answer may help you: https://unix.stackexchange.com/a/85429
For example you can try something like this:
#!/bin/bash
[[ "$PAM_USER" != "test" ]] && exit 0
SESSION_COUNT="$(w -h "$PAM_USER" | wc -l)"
if (( SESSION_COUNT == 0 )) && [[ "$PAM_TYPE" == "close_session" ]]; then
docker kill <containerId>
fi
You should use some unique tag for containerId; for example, you could derive the container name from the SSH session, so the close_session hook knows which container to kill.
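A minimal sketch of how the linked pam_exec approach could be wired up (the script path and the test-session naming filter are assumptions for illustration, not part of the original answer):
# on the host, add a close_session hook to the SSH PAM stack by appending to /etc/pam.d/sshd:
#   session optional pam_exec.so /usr/local/bin/cleanup-test-containers.sh
# /usr/local/bin/cleanup-test-containers.sh:
#!/bin/bash
# only act on the shared "test" account, and only once its last session closes
[[ "$PAM_USER" != "test" ]] && exit 0
SESSION_COUNT="$(w -h "$PAM_USER" | wc -l)"
if (( SESSION_COUNT == 0 )) && [[ "$PAM_TYPE" == "close_session" ]]; then
    # assumes the login shell started its container with --name "test-session-$$" or similar
    docker ps -q --filter "name=test-session-" | xargs -r docker kill
fi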
Maybe setting up a connection timeout with ClientAliveInterval and ClientAliveCountMax in /etc/ssh/sshd_config will help. By default it is not active.
https://linux.die.net/man/5/sshd_config
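For example (the values are only an illustration; tune them to how long an idle or dead connection should survive):
# /etc/ssh/sshd_config on the host
ClientAliveInterval 30      # probe the client every 30 seconds
ClientAliveCountMax 3       # close the session after 3 unanswered probes (~90s)
# then reload sshd, e.g.
systemctl reload sshd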
You could add this in your /bin/test-shell script:
#!/bin/bash
# Generate a random name for container
CONTAINER_NAME=${RANDOM}
# Register script to stop container with name
trap '/bin/stop-container "$CONTAINER_NAME"' EXIT
# Run container with name
docker run --name $CONTAINER_NAME -it --rm --network none python:3.10-alpine /bin/sh
# Unregister trap if normal exit
trap - EXIT
At the moment, I can't test it, but it could work, because as described in the trap manual:
The environment in which the shell executes a trap on EXIT shall be identical to the environment immediately after the last command executed before the trap on EXIT was taken.
Hope this helps you find the right direction.
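The /bin/stop-container helper referenced above isn't shown in the answer; a minimal sketch of what it could look like (an assumption for illustration):
#!/bin/bash
# /bin/stop-container <container-name>: force-remove the named container if it still exists
docker rm -f "$1" >/dev/null 2>&1 || true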

Can a process in Docker container run a command in the host? [duplicate]

How to control host from docker container?
For example, how to execute a bash script that has been copied to the host?
This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so credit goes to him.
In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.
I have to admit I didn't know what named pipes were when I read his solution. So I struggled to implement it (while it's actually very simple), but I did succeed.
So the point of my answer is just detailing the commands you need to run in order to get it working, but again, credit goes to him.
PART 1 - Testing the named pipe concept without docker
On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, and then run:
mkfifo /path/to/pipe/mypipe
The pipe is created.
Type
ls -l /path/to/pipe/mypipe
And check that the access rights start with "p", such as:
prw-r--r-- 1 root root 0 mypipe
Now run:
tail -f /path/to/pipe/mypipe
The terminal is now waiting for data to be sent into this pipe
Now open another terminal window.
And then run:
echo "hello world" > /path/to/pipe/mypipe
Check the first terminal (the one with tail -f), it should display "hello world"
PART 2 - Run commands through the pipe
On the host, instead of running tail -f, which just outputs whatever is sent as input, run this command, which will execute what it receives as commands:
eval "$(cat /path/to/pipe/mypipe)"
Then, from the other terminal, try running:
echo "ls -l" > /path/to/pipe/mypipe
Go back to the first terminal and you should see the result of the ls -l command.
PART 3 - Make it listen forever
You may have noticed that in the previous part, right after ls -l output is displayed, it stops listening for commands.
Instead of eval "$(cat /path/to/pipe/mypipe)", run:
while true; do eval "$(cat /path/to/pipe/mypipe)"; done
(you can nohup that)
Now you can send an unlimited number of commands one after the other; they will all be executed, not just the first one.
PART 4 - Make it work even when reboot happens
The only caveat is if the host has to reboot, the "while" loop will stop working.
To handle reboots, here is what I've done:
Put while true; do eval "$(cat /path/to/pipe/mypipe)"; done in a file called execpipe.sh with a #!/bin/bash header (a complete sketch of the script is shown after these steps)
Don't forget to chmod +x it
Add it to crontab by running
crontab -e
And then adding
@reboot /path/to/execpipe.sh
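Putting those steps together, execpipe.sh could look like this (a sketch using the example paths from above and the optional output redirection described below):
#!/bin/bash
# execpipe.sh: recreate the pipe if needed, then execute whatever is written into it, forever
PIPE=/path/to/pipe/mypipe
[ -p "$PIPE" ] || mkfifo "$PIPE"
while true; do
    eval "$(cat "$PIPE")" &> /somepath/output.txt
done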
At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed.
Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.
Another option is to modify the script to put the output in a file, such as:
while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done
Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.
PART 5 - Make it work with docker
If you are using both docker compose and dockerfile like I do, here is what I've done:
Let's assume you want to mount the mypipe's parent folder as /hostpipe in your container
Add this:
VOLUME /hostpipe
in your dockerfile in order to create a mount point
Then add this:
volumes:
  - /path/to/pipe:/hostpipe
in your docker compose file in order to mount /path/to/pipe as /hostpipe
Restart your docker containers.
PART 6 - Testing
Exec into your docker container:
docker exec -it <container> bash
Go into the mount folder and check you can see the pipe:
cd /hostpipe && ls -l
Now try running a command from within the container:
echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe
And it should work!
WARNING: If you have an OSX (Mac OS) host and a Linux container, it won't work (explanation here https://stackoverflow.com/a/43474708/10018801 and issue here https://github.com/docker/for-mac/issues/483 ) because the pipe implementation is not the same, so what you write into the pipe from Linux can be read only by a Linux and what you write into the pipe from Mac OS can be read only by a Mac OS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).
For instance, when I run my docker setup in DEV from my Mac OS computer, the named pipe as explained above does not work. But in staging and production, I have Linux host and Linux containers, and it works perfectly.
PART 7 - Example from Node.JS container
Here is how I send a command from my Node.JS container to the main host and retrieve the output:
const fs = require("fs")

const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"

console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)

console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.close()

console.log("waiting for output.txt...") // there are better ways to do this than setInterval
const timeout = 10000 // stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop)
        console.log("timed out")
    } else if (fs.existsSync(outputPath)) {
        // output.txt exists, so the host has finished running the command: read it
        clearInterval(myLoop)
        const data = fs.readFileSync(outputPath).toString()
        fs.unlinkSync(outputPath) // delete the output file
        console.log(data) // log the output of the command
    }
}, 300)
Use a named pipe.
On the host OS, create a script that loops and reads commands, then calls eval on them.
Have the docker container write to that named pipe.
To be able to access the pipe, you need to mount it via a volume.
This is similar to the SSH mechanism (or a similar socket-based method), but restricts you properly to the host device, which is probably better. Plus you don't have to be passing around authentication information.
My only warning is to be cautious about why you are doing this. It's totally something to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command to get some config data, as the proper way would be to pass that in as args/volume into docker. Also, be cautious about the fact that you are evaling, so just give the permission model a thought.
Some of the other answers, such as running a script under a volume, won't work generically, since the container won't have access to the full system resources, but they might be more appropriate depending on your usage.
The solution I use is to connect to the host over SSH and execute the command like this:
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
UPDATE
As this answer keeps getting upvotes, I would like to remind (and highly recommend) that the account used to invoke the script should have no permissions at all, except to execute that one script via sudo (this can be configured in the sudoers file).
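For example, a sudoers rule along these lines (the user name and script path are placeholders) restricts the account to that single script:
# /etc/sudoers.d/docker-bridge  (edit with visudo)
# allow the otherwise unprivileged "bridge" user to run exactly one script as root, and nothing else
bridge ALL=(root) NOPASSWD: /usr/local/bin/allowed-task.sh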
UPDATE: Named Pipes
The solution I suggested above was just the one I used while I was relatively new to Docker. Now, in 2021, take a look at the answers that talk about named pipes; that seems to be a better solution.
However, nobody there mentioned anything about security. The script that evaluates the commands sent through the pipe (the script that calls eval) should actually not eval the whole pipe output; it should handle specific cases and call the required commands according to the text sent. Otherwise, any command that can do anything can be sent through the pipe.
That REALLY depends on what you need that bash script to do!
For example, if the bash script just echoes some output, you could just do
docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
Another possibility is that you want the bash script to install some software, say a script to install docker-compose. You could do something like:
docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.
My laziness led me to find the easiest solution that wasn't published as an answer here.
It is based on the great article by luc juggery.
All you need to do in order to gain a full shell to your linux host from within your docker container is:
docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh
Explanation:
--privileged : grants additional permissions to the container; it allows the container to gain access to the devices of the host (/dev)
--pid=host : allows the container to use the process tree of the Docker host (the VM in which the Docker daemon is running)
nsenter utility: allows you to run a process in existing namespaces (the building blocks that provide isolation to containers)
nsenter (-t 1 -m -u -n -i sh) runs the process sh in the same isolation context as the process with PID 1.
The whole command then provides an interactive sh shell in the VM
This setup has major security implications and should be used with caution (if at all).
Write a simple python server listening on a port (say 8080), bind the port to the container with -p 8080:8080, and make an HTTP request to localhost:8080 to ask the python server to run shell scripts with popen. You can run a curl command or write code that makes the HTTP request, e.g. curl -d '{"foo":"bar"}' localhost:8080
#!/usr/bin/python
# Python 2 example
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class handles any incoming request from the browser
class myHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_len = int(self.headers.getheader('content-length'))
        post_body = self.rfile.read(content_len)
        self.send_response(200)
        self.end_headers()
        data = json.loads(post_body)
        # Use the post data to decide which shell command to run
        cmd = "your shell cmd"
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
        (output, err) = p.communicate()
        p_status = p.returncode
        print "Command output : ", output
        print "Command exit status/return code : ", p_status
        self.wfile.write(cmd + "\n")
        return

try:
    # Create a web server and define the handler to manage incoming requests
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER
    # Wait forever for incoming http requests
    server.serve_forever()
except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container (like the OP), you can share the docker server running on the host with the container by sharing its listening socket.
Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.
You can do this by adding the following volume args to your start command
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
or by sharing /var/run/docker.sock within your docker compose file like this:
version: '3'
services:
  ci:
    command: ...
    image: ...
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
When you run the docker start command within your docker container,
the docker server running on your host will see the request and provision the sibling container.
credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
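For example, assuming a docker CLI is installed inside the ci image (an assumption, it is not shown above), a command run inside that container talks straight to the host daemon and starts a sibling container on the host:
# run inside the container that has /var/run/docker.sock mounted
docker run --rm alpine:3.7 echo "this ran as a sibling container, provisioned by the host daemon"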
As Marcus reminds us, docker is basically process isolation. Starting with docker 1.8, you can copy files both ways between the host and the container; see the documentation for docker cp:
https://docs.docker.com/reference/commandline/cp/
Once a file is copied, you can run it locally
docker run --detach-keys="ctrl-p" -it -v /:/mnt/rootdir --name testing busybox
# chroot /mnt/rootdir
#
I have a simple approach.
Step 1: Mount /var/run/docker.sock:/var/run/docker.sock (So you will be able to execute docker commands inside your container)
Step 2: Execute the command below inside your container. The key part here is --network host, which makes the new container share the host's network namespace, so the commands run in the host's network context:
docker run -i --rm --network host -v /opt/test.sh:/test.sh alpine:3.7 \
    sh /test.sh
test.sh should contain whatever commands you need (ifconfig, netstat, etc.).
Now you will be able to get host context output.
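A sketch of what /opt/test.sh on the host might contain (example contents only; ifconfig and netstat are available as busybox applets in alpine:3.7):
#!/bin/sh
# report the host's network interfaces and listening sockets
ifconfig
netstat -tln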
You can use the pipe concept, but instead use a regular file on the host together with fswatch to execute a script on the host machine from a docker container. Like so (use at your own risk):
#! /bin/bash
touch .command_pipe
chmod +x .command_pipe
# Use fswatch to execute a command on the host machine and log result
fswatch -o --event Updated .command_pipe | \
xargs -n1 -I "{}" .command_pipe >> .command_pipe_log &
docker run -it --rm \
--name alpine \
-w /home/test \
-v $PWD/.command_pipe:/dev/command_pipe \
alpine:3.7 sh
rm -rf .command_pipe
kill %1
In this example, inside the container send commands to /dev/command_pipe, like so:
/home/test # echo 'docker network create test2.network.com' > /dev/command_pipe
On the host, you can check if the network was created:
$ docker network ls | grep test2
8e029ec83afe test2.network.com bridge local
In my scenario I just SSH into the host (via the host IP) from within a container, and then I can do anything I want on the host machine.
I found answers using named pipes awesome. But I was wondering if there is a way to get the output of the executed command.
The solution is to create two named pipes:
mkfifo /path/to/pipe/exec_in
mkfifo /path/to/pipe/exec_out
Then, the solution using a loop, as suggested by @Vincent, would become:
# on the host (run from inside /path/to/pipe, or use the absolute pipe paths)
while true; do eval "$(cat exec_in)" > exec_out; done
And then on the docker container, we can execute the command and get the output using:
# on the container
echo "ls -l" > /path/to/pipe/exec_in
cat /path/to/pipe/exec_out
If anyone is interested: my need was to use a failover IP on the host from the container, so I created this simple ruby method:
def fifo_exec(cmd)
  exec_in  = '/path/to/pipe/exec_in'
  exec_out = '/path/to/pipe/exec_out'
  %x[ echo #{cmd} > #{exec_in} ]
  %x[ cat #{exec_out} ]
end
# example
fifo_exec "curl https://ip4.seeip.org"
Depending on the situation, this could be a helpful resource.
This uses a job queue (Celery) that can be run on the host; commands/data can be passed to it through Redis (or RabbitMQ). In the example below, this happens in a Django application (which is commonly dockerized).
https://www.codingforentrepreneurs.com/blog/celery-redis-django/
To expand on user2915097's response:
The idea of isolation is to be able to restrict what an application/process/container (whatever your angle at this is) can do to the host system very clearly. Hence, being able to copy and execute a file would really break the whole concept.
Yes. But it's sometimes necessary.
No. That's not the case, or Docker is not the right thing to use. What you should do is declare a clear interface for what you want to do (e.g. updating a host config), and write a minimal client/server to do exactly that and nothing more. Generally, however, this doesn't seem to be very desirable. In many cases, you should simply rethink your approach and eliminate that need. Docker came into existence when basically everything was a service reachable over some protocol. I can't think of any proper use case where a Docker container should get the rights to execute arbitrary stuff on the host.

Why doesn't docker run support using --rm and -d at the same time?

Docker's documentation says that --rm and -d cannot be used together: https://docs.docker.com/engine/reference/run/#detached-d
Why? I seem to be misunderstanding what "detached" means; it seems entirely orthogonal to what --rm does. Why are they mutually exclusive?
By way of analogy, if I start a process in the background (e.g. start my-service), and the process exits, the process's resources are freed automatically (by init). It doesn't stick around, waiting for me to manually remove it. Why doesn't docker allow me to combine -d with --rm so that my container works in an analogous way?
I think that would address a very common use case. Seems that it would very nicely obviate the following work around: https://thraxil.org/users/anders/posts/2015/11/03/Docker-and-Upstart/
What am I missing???
Because --rm is implemented as a client-side option: when you specify --rm, the docker client waits around for the container to exit, and then removes it.
When you specify -d, the docker client exits. The container is running and is managed by the Docker server. There is no longer any client running to implement the --rm functionality.
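Before this changed (see the Docker 1.13 answer further down), the closest workaround was to reimplement that client-side behaviour yourself, roughly like this (a sketch, not an official recipe):
# start detached, then wait for the container to exit and clean it up in the background
cid=$(docker run -d my_image)
( docker wait "$cid" >/dev/null; docker rm "$cid" ) &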
As a way of answering, let's imagine I launch a container using -d and --rm, and this is allowed:
docker run -d --rm --name=my_app my_container
If my app works as expected, it will run, and when it comes time to die, it dies and quietly removes itself, meaning I can rerun this command with little hassle. This seems ideal, and your question was one I faced myself while setting up some docker automation for my project.
What if, however, something goes wrong: the process running in the container encounters a fatal error and crashes, causing the container to die. The problem is that an outside observer, be they human or monitoring software, will not be able to tell the difference between these two scenarios, except maybe by how long the container was alive.
In cases where -d is not used (running the command in a CLI, or via upstart/initd/systemd/other), the container's output goes to the terminal or to the init system's logs, and that output will remain even if the container was given --rm, allowing an error or crash to be noticed and resolved.
In cases where -d is used, the container's output is not bound to any stream or file, so --rm is not allowed; this ensures that evidence of an error/crash is left behind in the form of a dead container.
To wrap up/TL;DR: I believe this conscious choice was made by docker developers to prevent cases in which containers were completely unaccounted for with the trade-off being the need to add two more commands to your automation script.
As of Docker 1.13, you can now use --rm with detached (background) containers. The remove operation has been moved from the client to the server to enable this.
See the following pull request for more details: https://github.com/docker/docker/pull/20848
Here's an example of this new behavior:
$ docker run -d --rm --name sleep busybox sleep 10
be943302d668f6416b083b8b6fa74e254b8e4200d14f6d7743d48691db1a4b18
$ docker ps | grep sleep
be943302d668 busybox "sleep 10" 4 seconds ago Up 3 seconds sleep
$ sleep 10; docker ps | grep sleep

Correct way to detach from a container without stopping it

In Docker 1.1.2 (latest), what's the correct way to detach from a container without stopping it?
So for example, if I try:
docker run -i -t foo /bin/bash or
docker attach foo (for already running container)
both of which get me to a terminal in the container, how do I exit the container's terminal without stopping it?
exit and Ctrl+C both stop the container.
Type Ctrl+p then Ctrl+q. It will turn interactive mode into daemon mode, i.e. detach you from the container and leave it running.
See https://docs.docker.com/engine/reference/commandline/cli/#default-key-sequence-to-detach-from-containers:
Once attached to a container, users detach from it and leave it running using the CTRL-p CTRL-q key sequence. This detach key sequence is customizable using the detachKeys property. [...]
Update: As mentioned in the answers below, Ctrl+p, Ctrl+q will now turn interactive mode into daemon mode.
Well, Ctrl+C (or Ctrl+\) will get you out of the container, but it will kill the container because your main process is bash.
A little lesson about docker.
The container is not a real, fully functional OS. When you run a container, the process you launch takes PID 1 and assumes init power. So when that process is terminated, the daemon stops the container until a new process is launched (via docker start). (More explanation on the matter: http://phusion.github.io/baseimage-docker/#intro)
If you want a container that runs in detached mode all the time, I suggest you use
docker run -d foo
with an SSH server in the container (the easiest way is to follow the Dockerizing an SSH service tutorial: https://docs.docker.com/engine/examples/running_ssh_service/).
Or you can just relaunch your container via
docker start foo
(it will be detached by default)
I dug into this and all the answers above are partially right. It all depends on how the container is launched. It comes down to the following when the container was launched:
was a TTY allocated (-t)
was stdin left open (-i)
^P^Q does work, BUT only when -t and -i are used to launch the container:
[berto@g6]$ docker run -ti -d --name test python:3.6 /bin/bash -c 'while [ 1 ]; do sleep 30; done;'
b26e39632351192a9a1a00ea0c2f3e10729b6d3e22f8e0676d6519e15c08b518
[berto@g6]$ docker attach test
# here I typed ^P^Q
read escape sequence
# i'm back to my prompt
[berto@g6]$ docker kill test; docker rm -v test
test
test
Ctrl+C does work, BUT only when -t (without -i) is used to launch the container:
[berto@g6]$ docker run -t -d --name test python:3.6 /bin/bash -c 'while [ 1 ]; do sleep 30; done;'
018a228c96d6bf2e73cccaefcf656b02753905b9a859f32e60bdf343bcbe834d
[berto@g6]$ docker attach test
^C
[berto@g6]$
The third way to detach
There is a way to detach without killing the container, though; you need another shell. In summary, running pkill -9 -f 'docker.*attach' in another shell detaches you and leaves the container running:
[berto@g6]$ docker run -d --name test python:3.6 /bin/bash -c 'while [ 1 ]; do sleep 30; done;'
b26e39632351192a9a1a00ea0c2f3e10729b6d3e22f8e0676d6519e15c08b518
[berto@g6]$ docker attach test
# here I typed ^P^Q and it doesn't work
^P
# ctrl+c doesn't work either
^C
# can't background either
^Z
# go to another shell and run the `pkill` command above
# i'm back to my prompt
[berto@g6]$
Why? Because you're killing the process that connected you to the container, not the container itself.
If you do docker attach "container id", you get into the container.
To exit from the container without stopping it, you need to press Ctrl+P then Ctrl+Q.
I consider Ashwin's answer to be the most correct, my old answer is below.
I'd like to add another option here which is to run the container as follows
docker run -dti foo bash
You can then enter the container and run bash with
docker exec -ti ID_of_foo bash
No need to install sshd :)
Try Ctrl+P, Ctrl+Q to turn interactive mode into daemon mode.
If this does not work and you attached through docker attach, you can detach by killing the docker attach process.
A better way is to use the --sig-proxy parameter to avoid passing Ctrl+C to your container:
docker attach --sig-proxy=false [container-name]
The same option is available for the docker run command.
The default way to detach from an interactive container is Ctrl+P Ctrl+Q, but you can override it when running a new container or attaching to existing container using the --detach-keys flag.
You can use the --detach-keys option when you run docker attach to override the default Ctrl+P, Ctrl+Q sequence (which doesn't always work).
For example, when you run docker attach --detach-keys="ctrl-a" test and you press CTRL+A you will exit the container, without killing it.
Other examples:
docker attach --detach-keys="ctrl-a,x" test - press CTRL+A and then X to exit
docker attach --detach-keys="a,b,c" test - press A, then B, then C to exit
Extract from the official documentation:
If you want, you can configure an override of the Docker key sequence for detach. This is useful if the Docker default sequence conflicts with a key sequence you use for other applications. There are two ways to define your own detach key sequence: as a per-container override, or as a configuration property for your entire configuration.
To override the sequence for an individual container, use the --detach-keys="<sequence>" flag with the docker attach command. The format of the <sequence> is either a letter [a-Z], or the ctrl- combined with any of the following:
a-z (a single lowercase alpha character)
@ (at sign)
[ (left bracket)
\\ (two backward slashes)
_ (underscore)
^ (caret)
These a, ctrl-a, X, or ctrl-\\ values are all examples of valid key sequences. To configure a different configuration default key sequence for all containers, see Configuration file section.
Note: This works since docker version 1.10+ (at the time of this answer, the current version is 18.03)
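For the global variant, the per-user CLI configuration file supports a detachKeys property; a minimal sketch (assuming ~/.docker/config.json doesn't already contain other settings you would need to merge):
# set a custom default detach sequence for all containers
cat > ~/.docker/config.json <<'EOF'
{
  "detachKeys": "ctrl-a,x"
}
EOF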
If you just want to see the output of the process running in the container, you can do a simple docker container logs -f <container id>.
The -f flag makes it so that the output of the container is followed and updated in real-time. Very useful for debugging or monitoring.
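For example (the container name is just an illustration):
# start something detached, then follow its output without ever attaching
docker run -d --name pinger busybox ping 127.0.0.1
docker container logs -f pinger
# Ctrl+C here stops only the log stream, not the container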
In a Docker container, at least one process must be running; only then does the container keep running the image (ubuntu, httpd, etc., whatever it is) in the background without exiting.
For example, with the ubuntu docker image:
To create a new container in detached mode (running in the background with at least one process):
docker run -d -i -t f63181f19b2f /bin/bash
This creates a new container from the image (ubuntu) with ID f63181f19b2f. The container runs in detached mode (in the background); a small tty bash shell process keeps running in the background, so the container will keep running until that bash shell process is killed.
To attach to the running background container, use
docker attach b1a0873a8647
If you want to detach from the container without exiting it (without killing the bash shell):
by default, you can use Ctrl+p, Ctrl+q. It will drop you out of the container without exiting it (the container keeps running in the background, i.e. the bash shell is not killed).
You can pass a custom detach sequence when attaching to the container:
docker attach --detach-keys="ctrl-s" b1a0873a8647
This time the ctrl-p,q escape sequence won't work; instead, ctrl-s will detach you from the container. You can pass any key combination, e.g. ctrl-*.
You can simply kill the docker CLI process by sending it SIGKILL. If you started the container with
docker run -it some/container
you can get its PID:
ps -aux | grep docker
user 1234 0.3 0.6 1357948 54684 pts/2 Sl+ 15:09 0:00 docker run -it some/container
let's say it's 1234; you can "detach" it with
kill -9 1234
It's somewhat of a hack but it works!
To avoid being attached to the container's output in the first place, run in detached mode using the -d flag:
docker run -d <your_command>
If you are already stuck, you can open a new window/tab in your terminal and close the first one. It won't stop the running job.
In case you are using Docker on Windows, you may use the combination Ctrl+D.
Old post, but you can also just exit and then start the container again. The issue is that if you are on a Windows machine, Ctrl+P may be tied to print; exiting and then starting the container again should not hurt anything.

Resources