Send file from container to host - docker

This is what I tried:
Dockerfile:
ENTRYPOINT go test ./tests -v .>/outputs/report.txt
Command line:
docker run test -v /outputs:/outputs
I expect that the newly generated report.txt will be available in the host in the same directory. What am I missing here?

I think you almost have it.
Map the volume before the image name.
Instead of: docker run test -v /outputs:/outputs
Use: docker run -v /outputs:/outputs test
This command binds your local /outputs to /outputs in the container. Remember that everything after the image name is passed to the container as its command, so in your original invocation -v /outputs:/outputs was handed to the container as arguments instead of being parsed as an option to docker run.
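A minimal end-to-end sketch (the tag name test comes from the question; the host path is written as $(pwd)/outputs because -v needs an absolute host path, so adjust it to your layout):
# build the image from the Dockerfile in the current directory
docker build -t test .
# create the host directory and bind-mount it; -v must come before the image name
mkdir -p outputs
docker run -v "$(pwd)/outputs:/outputs" test
# report.txt should now be on the host
cat outputs/report.txt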
For more information, see the Docker run reference (https://docs.docker.com/engine/reference/run/) and the Docker volumes documentation (https://docs.docker.com/storage/volumes/).

Related

Getting error Docker invalid mode, trying to mount to container

I'm executing the following command:
docker run --rm -it -v https://github.com/rasilvap/lift-tool-test:/code ubuntu:20.04 bash
But I'm getting the following error message:
docker: Error response from daemon: invalid mode: /code.
My idea is to mount my repo inside the container at the /code mount point, then cd /code and execute some scripts.
Is it possible to mount an online repo inside a container? In my case https://github.com/rasilvap/lift-tool-test. Maybe I'm misunderstanding something.
Any ideas?
Thanks!
No, that is not possible like that. The easy way would be to clone the repo locally and then mount the local folder into the container:
docker run --rm -it -v localfolderpath:/code ubuntu:20.04 bash
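For example, a minimal sketch using the repo from the question:
# clone the repo to the host, then bind-mount the clone into the container
git clone https://github.com/rasilvap/lift-tool-test
docker run --rm -it -v "$(pwd)/lift-tool-test:/code" ubuntu:20.04 bash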
You could also forward your local ssh-agent to the container and clone the repo from an env variable holding the link, but that option is hard to set up and is only needed for private repos. If the repos are open source you don't need to forward anything: just build an image on top of ubuntu that clones the repo at build time, as sketched below.
Here is the Docker volumes documentation: https://docs.docker.com/storage/volumes/
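For the open-source case, a minimal build-time-clone Dockerfile sketch (package choices are illustrative; the repo URL is the one from the question):
FROM ubuntu:20.04
# git and CA certificates are not in the base image, so install them first
RUN apt-get update && apt-get install -y --no-install-recommends git ca-certificates && rm -rf /var/lib/apt/lists/*
# clone the public repo at build time
RUN git clone https://github.com/rasilvap/lift-tool-test /code
WORKDIR /code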

Apache/Nifi 1.12.1 Docker Image Issue

I have a Dockerfile based on apache/nifi:1.12.1 and want to expand it like this:
FROM apache/nifi:1.12.1
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
The thing is, the folder isn't there when I build the image on Linux distros like Ubuntu and CentOS. The build succeeds, and I run it with docker run -it -d --rm --name nifi nifi-test, but when I enter the container through docker exec there's no flow dir.
The strange thing is that the flow dir is there as expected when I build the image on Windows with Docker Desktop. I can't understand why this is happening.
I've tried things such as USER nifi or RUN chown ... but still...
For your convenience, this is the base image:
https://github.com/apache/nifi/blob/rel/nifi-1.12.1/nifi-docker/dockerhub/Dockerfile
Take a look at this as well:
[screenshot: the CLI session showing the missing flow directory]
Thanks in advance.
By taking a look at the Dockerfile linked above, you can see the following volume definition (shown here abbreviated):
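# excerpt (abbreviated) from the linked Dockerfile; it also declares the log,
# state and repository directories as volumes
VOLUME ${NIFI_BASE_DIR}/nifi-current/conf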
You can also see those paths if you run:
docker image inspect apache/nifi:1.12.1
and look at the Volumes map under Config.
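Or, to print just the volume list, a one-liner sketch:
docker image inspect -f '{{json .Config.Volumes}}' apache/nifi:1.12.1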
As a result, when you execute the RUN command to create a folder under the conf directory, it succeeds at build time.
BUT when you run the container, the volumes are mounted, and they hide everything that is under the mountpoint /opt/nifi/nifi-current/conf.
In your case, that includes the flow directory.
You can test this by editing your Dockerfile
FROM apache/nifi:1.12.1
# this will be overridden by the volume mount
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
# this will be available in the container environment
RUN mkdir -p /opt/nifi/nifi-current/flow
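Then rebuild and check from inside a running container; a quick sketch (image and container names follow the question):
docker build -t nifi-test .
docker run -it -d --rm --name nifi nifi-test
docker exec nifi ls -ld /opt/nifi/nifi-current/flow        # present
docker exec nifi ls -ld /opt/nifi/nifi-current/conf/flow   # expected to fail: hidden by the volume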
To tackle this you could either:
Clone the Dockerfile of the image you use as the base (the one in FROM), remove the VOLUME directive manually, then build it and use that as your base image.
Or avoid creating directories under the mount points specified in the base image's Dockerfile.

how to copy files from one docker service to another, inside of docker bash

I am trying to copy a file from one docker-compose service to another while in the service's bash environment, but I cannot seem to figure out how to do it.
Can anybody provide me with an idea?
Here is the command I am attempting to run:
docker cp ../db_backups/latest.sqlc pgadmin_1:/var/lib/pgadmin/storage/mine/
The error is simply:
bash: docker: command not found
There's no way to do that by default. There are a few things you could do to enable that behavior.
The easiest solution is just to run docker cp on the host (docker cp from the first container to the host, then docker cp from the host to the second container).
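For example, from the host shell (the target name and paths come from the question; the source container name here is assumed):
# 1. copy from the source container to the host
docker cp source_container:/db_backups/latest.sqlc ./latest.sqlc
# 2. copy from the host into the target container
docker cp ./latest.sqlc pgadmin_1:/var/lib/pgadmin/storage/mine/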
If it all has to be done inside the container, the next easiest solution is probably to use a shared volume:
docker run -v shared:/shared --name containerA ...
docker run -v shared:/shared --name containerB ...
Then in containerA you can cp ../db_backups/latest.sqlc /shared, and in containerB you can cp /shared/latest.sqlc /var/lib/pgadmin/storage/mine.
This is a nice solution because it doesn't require installing anything inside the container.
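Since the containers are docker-compose services, the same shared volume can also be declared in the compose file; a sketch with assumed service names:
services:
  app:
    volumes:
      - shared:/shared
  pgadmin:
    volumes:
      - shared:/shared
volumes:
  shared: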
Alternately, you could:
Install the docker CLI inside each container, and mount the Docker socket inside each container. This would let you run your docker cp command, but it gives anything inside the container complete control of your host (because access to docker == root access).
Run sshd in the target container, set up the necessary keys, and then use scp to copy things from the first container to the second container.

Trying to run "comitted" Docker image, get "cannot mount volume over existing file, file exists"

I am developing a Docker image. I started with a base image and was working inside it interactively, using bash. I installed a bunch of stuff, and the install (which included compiling a lot of code) took over 20 minutes, so to save my work, I used:
$ docker commit 0f08ac958391 myproject:wip
Now when I try to run the image:
$ docker run --rm -it myproject:wip
docker: Error response from daemon: cannot mount volume over existing file, file exists /var/lib/docker/overlay2/95aa9a9ea7cc0b1ba302adbd287e4d7059ee4fbe64183404df3bc65df332ee63/merged/run/host-services/ssh-auth.sock.
What is going on? How do I fix this?
Note about related/duplicate questions: while there are other questions about this error message, none of the answers directly explain why the error happens in this situation or what to do about it. In fact, most of the questions have no answers at all.
When I ran the base image, I included a mount for the SSH agent socket:
$ docker run --rm -it -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock myproject:dev /bin/bash
This bind mounts a file from the host (actually the Docker daemon VM) to a file in the Docker container. When I committed the running container, the resulting image contained the file /run/host-services/ssh-auth.sock. The image also contained an empty volume reference to /run/host-services/ssh-auth.sock. This means that when I ran
$ docker run --rm -it myproject:wip
It was equivalent to running
$ docker run -v /run/host-services/ssh-auth.sock --rm -it myproject:wip
Unfortunately, what that command does is create an anonymous volume and mount it at /run/host-services/ssh-auth.sock in the container. This works whether or not the container already has such a directory; what makes it fail is the target path being occupied by a file, because Docker will not mount a volume (which is a directory) over a file.
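You can confirm the leftover volume reference by inspecting the committed image; a sketch (the output line is what I'd expect to see, not captured verbatim):
$ docker image inspect -f '{{json .Config.Volumes}}' myproject:wip
{"/run/host-services/ssh-auth.sock":{}}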
The solution is to explicitly provide a mapping from a host file to the target volume. Any host file will do, but in my case it is best to use the original. So this works:
docker run --rm -it -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock myproject:wip

Docker volumes not keeping data

I've built a docker image with a python script that works with two different commands. The first one creates a file that is used when executing the second one.
As far as I know, I must use a Docker volume to store data between executions so I've created a volume with:
docker volume create myvol
I then used it when running the container:
$ docker run myimg fit -v myvol:/data
model.h5 stored at /data
But then, when executing the other command, it seems that the /data directory in the container is empty...
$ docker run predict -v myvol:/data
Error: /data/model.h5 not found
Is there any point that I'm missing?
The docker command line is order sensitive. The syntax is:
docker $args_to_docker run $args_to_run $image_name $override_to_cmd
In your command you pass the -v option after the image name, so it becomes the CMD value in your container:
$ docker run myimg fit -v myvol:/data
model.h5 stored at /data
That runs the cmd fit -v myvol:/data inside the container.
The solution is to change the order so that -v is parsed as an option to run and defines the volume mount:
$ docker run -v myvol:/data myimg fit
$ docker run -v myvol:/data myimg predict
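A quick way to verify that the file persisted in the named volume (assuming a stock alpine image; any image with ls would do):
$ docker run --rm -v myvol:/data alpine ls -l /data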
Make sure you pass the -v or --mount argument to the docker run command. This ensures the data is really stored outside of the container, so nothing is lost between runs.
See: https://docs.docker.com/storage/volumes/ for details.
When running these commands, remember that the -v option takes the form <path_on_host>:<path_in_container>. The host part must be an absolute path or a named volume, and, as explained above, the option must come before the image name. So first create a directory on your host called data, move model.h5 into that directory, then mount it with the -v switch.
If my data directory were C:\data on a Windows machine, I would use:
docker run -v C:\data:/data <img_name>
If I were on Unix and my data directory were /usr/data, and I wanted it mounted at /data in the container, I would use:
docker run -v /usr/data:/data <img_name>
