docker save: read arguments from stdin

In lieu of a docker-compose save command I've resorted to using sed to read the YAML file and piping this into docker save.
I'm seeing some strange behavior with docker reading from stdin. For example, I have a command that finds all the images in a docker-compose YAML file and outputs them to stdout:
sed -nr 's/image: "(.*)"/\1/p' docker-compose.yml | uniq | xargs -d '\n' | cat
Will output:
mysql redis python
However, if I try to pipe this into docker save, I get the following error:
sed -nr 's/image: "(.*)"/\1/p' docker-compose.yml | uniq | xargs -d '\n' |
docker save | gzip -c > images.tar.gz
"docker save" requires at least 1 argument(s).
See 'docker save --help'.
Usage: docker save [OPTIONS] IMAGE [IMAGE...]
Save one or more images to a tar archive (streamed to STDOUT by default)
How do I get docker to read from stdin for its arguments?

To complete this with a proper answer, from William Pursell in the comments:
The same way you get any command to take arguments from stdin: sed ... | xargs docker save
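Applied to the original pipeline, that looks like the sketch below. A throwaway compose file is created first (an assumption, so the extraction can be checked end to end), and `echo` stands in for the real `docker save ... | gzip -c > images.tar.gz` as a dry run:

```shell
# Sample compose file (assumption: images are double-quoted,
# matching the question's sed pattern).
cat > docker-compose.yml <<'YAML'
services:
  db:
    image: "mysql"
  cache:
    image: "redis"
YAML

# Extract image names, dedupe, and hand them to the command as
# arguments via xargs. Drop the leading `echo` to actually save.
sed -nr 's/.*image: "(.*)"/\1/p' docker-compose.yml | sort -u \
  | xargs echo docker save
# prints: docker save mysql redis
```

`sort -u` is used instead of bare `uniq` because `uniq` only removes adjacent duplicates.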

Related

how to pass output of a command to another in shell script

I am trying to ssh into a server, and then into a Docker container on it, to run a service. However, I am not able to store the container ID in a variable so I can use it to enter the container.
#!/bin/bash
ssh test_server << EOF
ls
sudo docker ps | grep 'tests_service_image' | colrm 13 # This command works
containerId=$(sudo docker ps | grep 'tests_service_image' | colrm 13) # This doesn't
sudo docker exec -i "$containerId" bash # Throws error since containerId is empty
./run.sh
EOF
exit
The problem is that the variable and command-substitution expansions are happening on your own side, before the here-doc is sent. You need to escape them so that the expansions happen on the server side.
#!/bin/sh
ssh test_server << EOF
containerId=\$(sudo docker ps | grep 'tests_service_image' | colrm 13)
sudo docker exec -i "\$containerId" bash
./run.sh
EOF
exit
Edit:
Pass it directly to the docker exec command, like so:
sudo docker exec -i $(sudo docker ps | grep 'tests_service_image' | colrm 13) bash
Original answer:
This was written assuming that the script runs after sshing into the server; the answer was modified above based on the specific query.
The container ID is stored in the variable containerId; you are getting the error "Error: No such container:" because you are passing a different variable, $container, instead of $containerId, to the docker exec command.
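An alternative to backslash-escaping every expansion is to quote the here-doc delimiter (`<< 'EOF'`), which suppresses local expansion entirely. A sketch, with `bash -s` standing in for `ssh test_server` so it runs anywhere, and a hypothetical `echo` replacing the docker ps pipeline:

```shell
containerId=local_value    # set locally; must NOT leak into the script

# With <<'EOF' (quoted delimiter) nothing is expanded on the local
# side, so $(...) and $containerId are evaluated by the remote shell.
bash -s <<'EOF'
containerId=$(echo remote_value)   # stand-in for the docker ps | grep | colrm pipeline
echo "$containerId"
EOF
# prints: remote_value
```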

cannot apply sed to the stdout within a docker image

I have a Dockerfile whose entrypoint is an s2i/bin/run script:
#!/bin/bash
export_vars=$(cgroup-limits); export $export_vars
exec /opt/app-root/services.sh
The services.sh script runs php-fpm and nginx:
php-fpm 2>&1
nginx -c /opt/app-root/etc/conf.d/nginx/nginx.conf
# this echo to stdout is needed, otherwise nothing shows up on the docker run output
echo date 2>&1
The PHP scripts log to stderr, so that script uses 2>&1 to redirect it to stdout, which is needed for the log aggregator.
I want to run sed or awk over the log output. Yet if I try:
php-fpm 2>&1 | sed 's/A/B/g'
or
exec /opt/app-root/services.sh | sed 's/A/B/g'
Then nothing shows up when I run the container. Without the pipe to sed, the output of php-fpm shows up in the docker run output fine.
Is there a way to run sed over the output of php-fpm while ensuring that the result still makes it to the output of docker?
Edit: Note that I tried the obvious | sed 's/A/B/g' in both places, and also tried running the pipe in a subshell, $(stuff | sed 's/A/B/g'), in both places. Neither works, so this seems to be a Docker or s2i issue.
Try keeping sed arguments in double quotes.
php-fpm 2>&1 | sed "s/A/B/g"
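If quoting alone doesn't help, another common cause (an assumption here, since the question shows no error) is sed's output buffering: when its stdout is a pipe rather than a terminal, GNU sed buffers output in blocks, so lines from a long-running process like php-fpm may never appear. GNU sed's -u (unbuffered) flag flushes line by line. A sketch, with a hypothetical `emit_logs` standing in for php-fpm:

```shell
# In services.sh the pipeline would be:
#   php-fpm 2>&1 | sed -u 's/A/B/g'
# Demonstration with a stand-in producer:
emit_logs() { printf 'A line from php-fpm\n'; }
emit_logs | sed -u 's/A/B/g'
# prints: B line from php-fpm
```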

Piped command is failing inside docker container

I'm executing this command from the Docker host. It gives no error on stdout and completes successfully at the prompt, but it doesn't do what it is supposed to do inside the container.
Can someone please help me identify what I am doing wrong?
docker exec -dt SPSSC /bin/bash -c "grep -ril 'LOCALIZATION_ENABLED="false"' /opt/tpa/confd/config/* | grep -v 'diameter' | xargs sed -i 's/LOCALIZATION_ENABLED="false"/LOCALIZATION_ENABLED="true"/g'"
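One likely culprit (an assumption, since the question shows no output at all): the unescaped inner double quotes around "false" and "true" terminate the outer double-quoted bash -c string on the host, so the container receives a mangled command. Escaping them keeps the whole pipeline inside one string; dropping -d while debugging also stops errors from being silently discarded:

```shell
# Corrected command sketch (escaped inner quotes, -d dropped):
#   docker exec -t SPSSC /bin/bash -c "grep -ril 'LOCALIZATION_ENABLED=\"false\"' /opt/tpa/confd/config/* | grep -v 'diameter' | xargs sed -i 's/LOCALIZATION_ENABLED=\"false\"/LOCALIZATION_ENABLED=\"true\"/g'"
# What the escaping does, shown without docker: the \" survives the
# outer string and reaches the inner shell as a literal quote.
bash -c "echo 'LOCALIZATION_ENABLED=\"false\"'"
# prints: LOCALIZATION_ENABLED="false"
```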

Running `bash` using docker exec with xargs command

I've been trying to execute bash on a running Docker container that has a specific name, as follows: --(1)
docker ps | grep somename | awk '{print $1 " bash"}' | xargs -I'{}' docker exec -it '{}'
but it didn't work, and it showed a message like
"docker exec" requires at least 2 argument(s)
When I tried using the command as follows: --(2)
docker ps | grep somename | awk '{print $1 " bash"}' | xargs docker exec -it
it shows another error message like
the input device is not a TTY
But when I use $() (command substitution) it works, yet I cannot understand why it does not work with the two commands (1) and (2) above (using xargs).
Could any body explain why those happen?
I really appreciate any help you can provide in advance =)
EDIT 1:
I know how to accomplish my goal in other way like
docker exec -it $(docker ps | grep perf | awk '{print $1 " bash"}' )
But I'm just curious about why those codes are not working =)
First question
"docker exec" requires at least 2 argument(s)
In the last pipe command, the standard input of xargs is, for example, 42a9903486f2 bash, and you used xargs with the -I (replace string) option. With -I, the whole input line is substituted as a single argument, so docker sees 42a9903486f2 bash as one first argument, with no second argument.
The example below is perhaps what you expected:
docker ps | grep somename | awk '{print $1 " bash"}' | xargs bash -c 'docker exec -it $0 $1'
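The single-argument behavior of -I is easy to see with a stand-in command in place of docker exec:

```shell
# With -I, the whole input line replaces {} as ONE argument:
printf 'abc123 bash\n' | xargs -I'{}' printf '[%s]\n' '{}'
# prints: [abc123 bash]

# Without -I, xargs splits the line on whitespace into TWO arguments:
printf 'abc123 bash\n' | xargs printf '[%s]\n'
# prints: [abc123]
#         [bash]
```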
Second question
the input device is not a TTY
xargs executes the command in a new child process, so for interactive communication you need to reopen stdin for that child process (the -o option, available on macOS/BSD xargs and recent GNU xargs):
docker ps | grep somename | awk '{print $1 " bash"}' | xargs -o docker exec -it
This worked for me:
sudo docker ps -q | xargs -I'{}' docker exec -t {} du -hs /tmp/
The exec command you run is something like this:
docker exec -it 'a1b2c3d4 bash'
And that is only one argument, not two. You need to remove the quotes around the argument to docker exec.
... | xargs -I'{}' docker exec -it {}
Then you will exec properly with two arguments.
docker exec -it a1b2c3d4 bash
                ^^^^^^^^ ^^^^
                first    second
                arg      arg

Use last container name as default in docker client

The docker client's docker ps has a very useful flag, -l, which shows information for the most recently run container. However, all other docker commands require providing either a CONTAINER ID or NAME.
Is there any nice trick which would allow to call:
docker logs -f -l
instead of:
docker logs -f random_name
You can use docker logs -f `docker ps -ql`
For the last container
docker ps -n 1
or variants such as
docker ps -qan 1
can be handy
After a while playing with the Docker tutorial, I created a small set of aliases:
alias docker_last="docker ps -l | tail -n +2 | awk '{ print \$(NF) }' | xargs docker"
alias docker_all="docker ps -a | tail -n +2 | awk '{ print \$(NF) }' | xargs docker"
alias docker_up="docker ps | tail -n +2 | awk '{ print \$(NF) }' | xargs docker"
alias docker_down="docker ps -a | tail -n +2 | grep -v Up | awk '{ print \$(NF) }' | xargs docker"
These allow calling a subcommand on the last, all, running, and stopped containers (the subcommand is simply appended after xargs docker, which is all an alias can do):
docker_last logs # Display logs from last created container
docker_down rm # Remove all stopped containers
docker_up stop # Stop all running containers
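If the trailing subcommand needs to go anywhere other than the very end (all an alias can do is append), shell functions are a sturdier variant. A sketch under these assumptions: docker ps -ql / -q / -aq yield the IDs directly, and --filter status=exited stands in for the grep -v Up trick:

```shell
# Function equivalents: "$@" lets the docker subcommand (logs, rm, ...)
# land before the container IDs instead of after them.
docker_last() { docker "$@" "$(docker ps -ql)"; }
docker_all()  { docker "$@" $(docker ps -aq); }
docker_up()   { docker "$@" $(docker ps -q); }
docker_down() { docker "$@" $(docker ps -aq --filter status=exited); }

# usage:
#   docker_last logs   -> logs from the most recently created container
#   docker_down rm     -> remove all exited containers
```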
