How to get a Docker container to read from stdin?

I have a script that uses Sigil (based on Go's template engine) to populate template files.
I'm using a dockerized sigil for this via:
docker run -v ${TEMPLATE_DIR}:/tmp/sigil mikegrass/gliderlabs_sigil-docker/sigil -f prometheus-configmap.yaml -p API_SERVER=$api_server_url > $TEMP_FILE
This seems a bit clunky because it requires mapping a volume, so I'd rather pass the file in via STDIN.
So what I'd like is
cat ./prometheus-configmap.yaml | docker run mikegrass/gliderlabs_sigil-docker -p API_SERVER=$api_server_url > $TEMP_FILE
Unfortunately this doesn't work; I get no output.
Googling around, I see possible solutions but haven't gotten any of them to work.

You need to run the container in interactive mode with --interactive or -i:
cat ./prometheus-configmap.yaml | docker run -i mikegrass/gliderlabs_sigil-docker -p API_SERVER=$api_server_url > $TEMP_FILE

Without cat:
docker run -i mikegrass/gliderlabs_sigil-docker -p API_SERVER=$api_server_url < ./prometheus-configmap.yaml
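If you want to sanity-check that stdin is actually reaching the container, a minimal sketch (the alpine image here is just an arbitrary example):
echo "hello" | docker run -i --rm alpine cat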

Related

live log from docker container

How to have live log from a docker container?
After many sleepless nights and many, many internet searches, an answer finally came to me, in a dream I would say: Hey! Just detach the dumb container!
The easiest way, if you have the container running and sending logs to stdout, is to use docker attach. Here is an example of running a container and attaching to it to see the output:
$ docker run -d --name topdemo ubuntu /usr/bin/top -b
$ docker attach topdemo
Basically, the syntax of the command is docker attach [OPTIONS] <CONTAINER>.
After searching quite a lot(!),
I am using it like this:
NUM=`docker run -d --user 1013830000 -p 4567:4567 container_test:v1`
docker logs -f $NUM
docker stop $NUM
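Roughly the same flow also works with a container name instead of a captured ID, for example (the name logdemo is just a placeholder):
docker run -d --name logdemo ubuntu /usr/bin/top -b
docker logs -f logdemo
docker stop logdemo && docker rm logdemo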

How to run docker with a blend file from an FTP site

I have managed to run this command in Docker:
$ docker run --rm -v d:/temp/test:/media/ ikester/blender /media/blendfile.blend -o /media/frame_### -f 1
But I want to run it from an FTP site instead of "d:/temp/test".
As I see it, I should replace "d:/temp/test" with something else, but what?
Or should I do it in some other way?
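One possible sketch (untested, with a placeholder FTP URL and credentials): download the file from the FTP server into a local temp folder first, then mount that folder exactly as before.
mkdir -p /tmp/blend
curl -o /tmp/blend/blendfile.blend ftp://user:password@ftp.example.com/path/blendfile.blend
docker run --rm -v /tmp/blend:/media/ ikester/blender /media/blendfile.blend -o /media/frame_### -f 1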

Piping a file into docker run

I need to pipe (inject) a file or some data into docker as part of the run command and have it written to a file within the container as part of the startup. Is there a best-practice way to do this?
I've tried this:
cat data.txt | docker run -a stdin -a stdout -i -t ubuntu /bin/bash -c 'cat >/data.txt'
But I can't seem to get it to work.
cat setup.json | docker run -i ubuntu /bin/bash -c 'cat'
This worked for me. Remove the -t. You don't need the -a flags either.
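Applying that fix to the original command gives something like this sketch (the trailing wc is only there to confirm the file was written):
cat data.txt | docker run -i ubuntu /bin/bash -c 'cat > /data.txt && wc -c /data.txt'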
A better solution is to make (mount) your host folder accessible to the Docker container, e.g. like this:
docker run -v /Users/<path>:/<container path> ...
Here /Users/<path> is a folder on your host computer and <container path> is the mounted path inside the container.
Also see the Manage data in containers manual page.
UPDATE: see also another example, Accessing External Files from Docker Containers.

docker RUN append to /etc/hosts in Dockerfile not working

I have a simple Dockerfile, but the first RUN command (to append a host IP address to /etc/hosts) has no effect:
FROM dockerfile/java
RUN sudo echo "XX.XX.XXX.XXX some.box.com MyFriendlyBoxName" >> /etc/hosts
ADD ./somejavaapp.jar /tmp/
#CMD java -jar /tmp/somejavaapp.jar
EXPOSE 8280
I build using
docker build .
and then test whether the RUN echo line has worked using
sudo docker run -t -i <built image ID> /bin/bash
I am then inside the container, but the /etc/hosts file has not been appended to. Running the same echo .... line while inside the container has the desired effect.
Can anyone tell me what is wrong with my Dockerfile RUN line?
Docker generates /etc/hosts dynamically every time you create a new container, so that it can link containers to each other. You can use the --add-host option instead:
docker run --add-host www.domain.com:8.8.8.8 ubuntu ping www.domain.com
If you use docker-compose, use extra_hosts:
extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
If you are trying to keep the hosts file entries in sync between the host machine and the container, another way would be to wrap your command with a shell script that maps your host's /etc/hosts into --add-host params:
~/bin/java:
#!/bin/sh
# build --add-host flags from the host's /etc/hosts (skipping the first 10 lines, comments and blank lines)
ADD_HOSTS=$(tail -n +10 /etc/hosts | egrep -v '(^#|^$)' | sed -r 's/^([a-z0-9\.\:]+)\s+(.+)$/--add-host="\2:\1"/g')
eval docker run \
    -it \
    --rm \
    $ADD_HOSTS \
    <image> \
    java $*
exit $?
Obviously replace java with whatever you are trying to do...
Explanation: ADD_HOSTS takes everything after the first 10 lines in your host's /etc/hosts file | removes comments and blank lines | rewrites the entries into --add-host params.
The reason for taking everything after the first 10 lines is to exclude the localhost and ipv6 entries for your host machine. You may need to adjust this to suit your needs.
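As an illustration (the IP and hostname here are made up), a hosts line such as
192.168.1.50   some.box.com
would be rewritten by the sed expression above into
--add-host="some.box.com:192.168.1.50"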
I had a case where I wanted to add multiple domain names for a single IP. There is actually no way to do this at build time, because /etc/hosts is overwritten on each run of the image. So it seems that the --add-host option, or its extra_hosts alias in the docker-compose method, is the only way.
But adding each of the domain names with a separate --add-host felt messy to me. I tried the following and it works:
docker run --add-host "some.box.com MyFriendlyBoxName":X.X.X.X ubuntu cat /etc/hosts
Actually, Docker just copies the string between --add-host and the colon into /etc/hosts verbatim.

How to edit Docker container files from the host?

Now that I have found a way to expose host files to the container (the -v option), I would like to do kind of the opposite:
How can I edit files from a running container with a host editor?
sshfs could probably do the job, but since a running container's filesystem already lives in some host directory, I wonder if there is a portable way (across aufs, btrfs and device mapper) to do that?
The best way is:
$ docker cp CONTAINER:FILEPATH LOCALFILEPATH
$ vi LOCALFILEPATH
$ docker cp LOCALFILEPATH CONTAINER:FILEPATH
Limitations with $ docker exec: it can only attach to a running container.
Limitations with $ docker run: it will create a new container.
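A concrete sketch of that round trip (the container name and file path are made up):
$ docker cp mycontainer:/etc/nginx/nginx.conf ./nginx.conf
$ vi ./nginx.conf
$ docker cp ./nginx.conf mycontainer:/etc/nginx/nginx.conf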
Whilst it is possible, and the other answers explain how, you should avoid editing files in the Union File System if you can.
Your definition of volumes isn't quite right - it's more about bypassing the Union File System than exposing files on the host. For example, if I do:
$ docker run --name="test" -v /volume-test debian echo "test"
The directory /volume-test inside the container will not be part of the Union File System and instead will exist somewhere on the host. I haven't specified where on the host, as I may not care - I'm not exposing host files, just creating a directory that is shareable between containers and the host. You can find out exactly where it is on the host with:
$ docker inspect -f "{{.Volumes}}" test
map[/volume_test:/var/lib/docker/vfs/dir/b7fff1922e25f0df949e650dfa885dbc304d9d213f703250cf5857446d104895]
If you really need to just make a quick edit to a file to test something, either use docker exec to get a shell in the container and edit directly, or use docker cp to copy the file out, edit on the host and copy back in.
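A quick sketch of the exec route (the container name and file path are placeholders, and the container must be running; installing vim is only needed if the image ships no editor):
$ docker exec -it mycontainer bash
# apt-get update && apt-get install -y vim
# vim /path/in/container/file.conf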
We can use another way to edit files inside working containers (this won't work if the container is stopped).
The logic is to:
- copy the file from the container to the host
- edit the file on the host using a host editor
- copy the file back into the container
We can do all these steps manually, but I have written a simple bash script to make this easy with one call.
/bin/dmcedit:
#!/bin/sh
set -e
CONTAINER=$1
FILEPATH=$2
BASE=$(basename $FILEPATH)
DIR=$(dirname $FILEPATH)
TMPDIR=/tmp/m_docker_$(date +%s)/
mkdir $TMPDIR
cd $TMPDIR
mkdir -p ./$DIR                              # recreate the directory structure locally
docker cp $CONTAINER:$FILEPATH ./$DIR        # copy the file out of the container
mcedit ./$FILEPATH                           # edit it on the host
docker cp ./$FILEPATH $CONTAINER:$FILEPATH   # copy it back into the container
rm -rf $TMPDIR
echo 'END'
exit 0
Usage example:
dmcedit CONTAINERNAME /path/to/file/in/container
The script is very simple, but it works fine for me.
Any suggestions are appreciated.
There are two ways to mount files into your container. It looks like you want a bind mount.
Bind Mounts
This mounts local files directly into the container's filesystem. The containerside path and the hostside path both point to the same file. Edits made from either side will show up on both sides.
mount the file:
❯ echo foo > ./foo
❯ docker run --mount type=bind,source=$(pwd)/foo,target=/foo -it debian:latest
# cat /foo
foo # local file shows up in container
in a separate shell, edit the file:
❯ echo 'bar' > ./foo # make a hostside change
back in the container:
# cat /foo
bar # the hostside change shows up
# echo baz > /foo # make a containerside change
# exit
❯ cat foo
baz # the containerside change shows up
Volume Mounts
mount the volume
❯ docker run --mount type=volume,source=foovolume,target=/foo -it debian:latest
root@containerB:/# echo 'this is in a volume' > /foo/data
the local filesystem is unchanged
docker sees a new volume:
❯ docker volume ls
DRIVER    VOLUME NAME
local     foovolume
create a new container with the same volume
❯ docker run --mount type=volume,source=foovolume,target=/foo -it debian:latest
root@containerC:/# cat /foo/data
this is in a volume # data is still available
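When you're done experimenting, the volume can be cleaned up once the containers that used it have been removed, roughly like this:
❯ docker container prune
❯ docker volume rm foovolume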
syntax: -v vs --mount
These do the same thing. -v is more concise, --mount is more explicit.
bind mounts
-v /hostside/path:/containerside/path
--mount type=bind,source=/hostside/path,target=/containerside/path
volume mounts
-v /containerside/path
-v volumename:/containerside/path
--mount type=volume,source=volumename,target=/containerside/path
(If a volume name is not specified, a random one is chosen.)
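As a concrete illustration of the equivalence, these two bind-mount invocations do the same thing (the paths here are hypothetical):
❯ docker run --rm -v "$(pwd)/conf:/app/conf" debian:latest ls /app/conf
❯ docker run --rm --mount type=bind,source="$(pwd)/conf",target=/app/conf debian:latest ls /app/conf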
The documentation tries to convince you to use one thing in favor of another instead of just telling you how it works, which is confusing.
Here's the script I use:
#!/bin/bash
IFS=$'\n\t'
set -euox pipefail
CNAME="$1"
FILE_PATH="$2"
TMPFILE="$(mktemp)"
docker exec "$CNAME" cat "$FILE_PATH" > "$TMPFILE"   # copy the file out of the container
$EDITOR "$TMPFILE"                                   # edit it locally
cat "$TMPFILE" | docker exec -i "$CNAME" sh -c 'cat > '"$FILE_PATH"   # stream it back in
rm "$TMPFILE"
and the gist for when I fix it but forget to update this answer:
https://gist.github.com/dmohs/b50ea4302b62ebfc4f308a20d3de4213
If you think of your volume as a "network drive", it becomes easier.
To edit a file located on this drive, you just need to turn on another machine that connects to it, then edit the file as normal.
How to do that purely with Docker (without FTP/SSH ...)?
Run a container that has an editor (vi, Emacs). Search Docker Hub for "alpine vim".
Example:
docker run -d --name shared_vim_editor \
-v <your_volume>:/home/developer/workspace \
jare/vim-bundle:latest
Run the interactive command:
docker exec -it -u root shared_vim_editor /bin/bash
Hope this helps.
I use an SFTP plugin from my IDE.
Install an SSH server in your container and allow root access.
Run your docker container with -p localport:22
Install an SFTP plugin in your IDE
Example using the Sublime SFTP plugin:
https://www.youtube.com/watch?v=HMfjt_YMru0
The way I do it is using Emacs with the docker package installed. I would recommend the Spacemacs distribution of Emacs. I would follow these steps:
1) Install Emacs (Instruction) and install Spacemacs (Instruction)
2) Add docker in your .spacemacs file
3) Start Emacs
4) Find file (SPC+f+f) and type /docker:<container-id>:/<path of dir/file in the container>
5) Now your emacs will use the container environment to edit the files
docker run -it --name YOUR_NAME IMAGE_ID /bin/bash
$> vi path_to_file
The following worked for me
docker run -it IMAGE_NAME /bin/bash
e.g. my image was called ipython/notebook
docker run -it ipython/notebook /bin/bash
