Docker bench - How to persist logs or supply log file argument

I am following the tutorial to run Docker Bench from its GitHub page.
I am executing it as follows:
C:/ docker ps
<lists running containers>
C:/ docker run -it --net host --pid host --userns host --cap-add audit_control -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST -v /etc:/etc -v /usr/bin/docker-containerd:/usr/bin/docker-containerd -v /usr/bin/docker-runc:/usr/bin/docker-runc -v /usr/lib/systemd:/usr/lib/systemd -v /var/lib:/var/lib -v /var/run/docker.sock:/var/run/docker.sock --label docker_bench_security docker/docker-bench-security
The docker bench command works fine, and I see the colored output with my PASS/WARNs and my total score out of the total checks at the bottom.
The problem is that the Docker Bench documentation says, "By default the Docker Bench for Security script will run all available CIS tests and produce logs in the current directory named docker-bench-security.sh.log.json and docker-bench-security.sh.log".
However, in my root (C:) where I executed the commands, I do not see these two files.
I have also tried running the same docker bench command above but with the optional log argument
docker run docker/docker-bench-security..... -l logs.txt
But I do not see any file get created (and if I pre-create the file, it is not populated).
Any ideas on how I can capture my docker bench output in a file?

The file is likely created inside the container. As you noticed, you can set its path using the -l path option, but if you want the file to appear on the host you need to mount that path as a volume. In other words, you need to run the following command:
docker run (...) -v /path/to/my-logs:/tmp/my-logs docker/docker-bench-security (...) -l /tmp/my-logs/log.txt
where (...) are the existing parameters that you use.
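For example, starting from the command in the question (a sketch: the host directory ~/bench-logs and the log file name are arbitrary choices, and ~ assumes a Unix-style shell, so adjust the host path accordingly on Windows):
mkdir -p ~/bench-logs
docker run -it --net host --pid host --userns host --cap-add audit_control -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST -v /etc:/etc -v /usr/bin/docker-containerd:/usr/bin/docker-containerd -v /usr/bin/docker-runc:/usr/bin/docker-runc -v /usr/lib/systemd:/usr/lib/systemd -v /var/lib:/var/lib -v /var/run/docker.sock:/var/run/docker.sock -v ~/bench-logs:/tmp/bench-logs --label docker_bench_security docker/docker-bench-security -l /tmp/bench-logs/docker-bench.log
After the run finishes, docker-bench.log should appear in ~/bench-logs on the host.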

Related

Understanding a Docker .sh file

The .sh file I am working with is:
docker run -d --rm -it --gpus '"device=0,1,2,3"' --ipc=host -v $HOME/Folder:/Folder tr_xl_container nohup python /Path/file.py -p/Path/ |& tee $HOME/Path/log.txt
I am confused about the -v and everything after that, specifically the -v $HOME/Folder:/Folder tr_xl_container section and -p/Path/. If someone would be able to help break down what those commands mean, or point me to a reference that does, that would be very much appreciated. I checked the Docker documentation and Linux command-line documentation and did not come up with anything too helpful.
A docker run command is split up in 3 parts:
docker options
the image to run
a command for the container
In your case -d --rm -it --gpus '"device=0,1,2,3"' --ipc=host -v $HOME/Folder:/Folder are docker options.
tr_xl_container is the image name.
nohup python /Path/file.py -p/Path/ is the command sent to the container.
The last part, |& tee $HOME/Path/log.txt isn't run in the container, but takes the output from the docker run command and saves it in $HOME/Path/log.txt.
As for -v $HOME/Folder:/Folder, it's a volume mapping, or more precisely, a bind mount. It creates a directory in the container with the path /Folder that is linked to the directory $HOME/Folder on the host machine. That makes files in the host directory visible inside the container, and if the container does anything with files in the /Folder directory, those changes will be visible in the host directory.
The command after the image name is for the container and it's up to the container what to do with it. From looking at it, it looks like it runs a Python program stored in /Path/file.py in the image. But to be sure, you'll need to know what the image does.
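As a quick illustration of the bind-mount behavior (a sketch using the stock alpine image; the ~/Demo path is an arbitrary choice):
mkdir -p ~/Demo && echo hello > ~/Demo/greeting.txt
docker run --rm -v ~/Demo:/Folder alpine ls /Folder            # prints greeting.txt
docker run --rm -v ~/Demo:/Folder alpine sh -c 'echo from-container >> /Folder/greeting.txt'
cat ~/Demo/greeting.txt                                        # shows both lines on the host
Files written on either side of the mount are immediately visible on the other side.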

No access to volume after docker run -v

The following command runs fine on my local machine.
docker run -it --rm --ulimit memlock=-1 \
-v "$HOMEDIR/..":"/home/user/repo" \
-w "/home/user/repo/linux" \
${DOCKER_IMAGE_NAME} bash build.sh
Running it in a docker-in-docker environment (meaning the above docker command is executed in a container on Google Cloud Build) leads to two problems though:
Docker complains "The input device is not a tty". My workaround: I simply used docker run -i --rm instead.
Somehow the assigned volume and working directory do not exist in the container under the given paths. I checked them on the host system and they exist, but somehow they do not make it into the container.
I have also thought about using docker exec, but there I don't have the fancy -v options. I tried the docker run command with both the -i and the -it flag on my local machine, where both ran fine. On Cloud Build, however, I get the tty error when using -it, and the inaccessible-volume problem occurs when using -i.

Pycharm docker run configuration do not accept environment variables

I am trying to set up a Docker run configuration in PyCharm. I am pretty new to this functionality in PyCharm, and I can't get it working.
From the command line I would run the container with the following command:
docker build -t test-container . && docker run --name container-pycharm -t -i --env-file .env -v $(pwd):/srv/app -p 8080:8080 --rm test-container ./serve-app
I set this up in PyCharm by adding the following line
--rm --env-file .env -i -t -p 8080:8080 -v $(pwd):/srv/app
to the command-line options section in the relevant Docker Run/Debug Configuration window in PyCharm. Unfortunately I get
Failed to deploy 'container-pycharm Dockerfile: Dockerfile': com.github.dockerjava.api.exception.BadRequestException: {"message":"create $(pwd): \"$(pwd)\" includes invalid characters for a local volume name, only \"[a-zA-Z0-9][a-zA-Z0-9_.-]\" are allowed. If you intended to pass a host directory, use absolute path"}
Clearly, I can't use $(pwd) in my command-line options. Any idea how to solve this in PyCharm?
PyCharm doesn't invoke Docker directly via the command you see in the command preview; it goes through its own parser, which currently has no support for evaluating environment variables or substitutions like $(pwd). Hence "If you intended to pass a host directory, use absolute path".
Also, -v is not officially supported as a command-line option in the current version (Ref).
Use the Bind mounts field instead.
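For example, if your project lives at /Users/me/app (an assumed path, substitute your own), the command-line options shrink to:
--rm --env-file .env -i -t -p 8080:8080
and the directory mapping goes into the run configuration's Bind mounts field instead (host path /Users/me/app, container path /srv/app).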

How to re-mount a docker volume without overriding existing files?

When running Docker, you can mount files and directories using the --volume option. E.g.:
docker run --volume "$(pwd)/local":/remote myimage
I'm running a Docker image that defines VOLUMEs in the Dockerfile. I need to access a config file that happens to be inside one of the defined volumes. I'd like to have that file "synced" to the host so that I can edit it. I know I could run docker exec ..., but I hope to circumvent that overhead for only editing one file. I found out that the volumes created by the VOLUME lines are stored in /var/lib/docker/volumes/<HASH>/_data.
Using docker inspect I was able to find the directory that is mounted:
docker inspect gitlab-runner | grep -B 1 '"Destination": "/etc/gitlab-runner"' | head -n 1 | cut -d '"' -f 4
Output:
/var/lib/docker/volumes/9c233c085c36380c6c33035222c16e5d061368c5060cc81dda2a9a713a2b2b3b/_data
So the question is:
Is there a way to re-mount volumes defined in an image? OR to somehow get the directory easier than my oneliner above?
EDIT: After comments by zeppelin, I've tried re-binding the volume, with no success:
$ mkdir etc
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
$ docker exec test1 ls /etc/gitlab-runner/
certs
config.toml
$ docker exec test2 ls /etc/gitlab-runner/
# empty. no files
$ ls etc
# also empty
docker inspect correctly shows that the volume is bound to ~/etc, but the files inside the container at /etc/gitlab-runner/ seem lost.
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
You've got two different volume types there. The first I call an anonymous volume (visible as a very long UUID when you run docker volume ls). The second is a host volume, or bind mount, that maps a directory on the host directly into the container. So each container you spun up is looking at a different place.
Anonymous volumes and named volumes (docker run -d -v mydata:/etc/gitlab-runner gitlab/gitlab-runner) get initialized to the contents of the image at that directory location. This initialization only happens when the volume is empty and is mounted into a new container. Host volumes, as you've seen, only get the contents of the host filesystem, even if it's empty at that location.
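A quick way to see this initialization (a sketch; the volume name mydata is arbitrary, and --entrypoint ls merely skips the image's normal entrypoint so we can list the directory):
docker run --rm --entrypoint ls -v mydata:/etc/gitlab-runner gitlab/gitlab-runner /etc/gitlab-runner
The first run auto-creates the named volume and seeds it with whatever the image ships at that path; later runs reuse the now non-empty volume as-is.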
With that background, the short answer to your question is no, you cannot mount a file inside the container back out to your host. But you can copy the file out with several methods, assuming you don't overlay the source of the file with a host volume mount. With a running container, there's the docker cp command. Personally, I like:
docker run --rm -v ~/etc:/target gitlab/gitlab-runner \
cp -av /etc/gitlab-runner/. /target/.
If you have a named volume with data you want to copy in or out, you can use any image with the tools you need to do the copy:
docker run --rm -v mydata:/source -v ~/etc:/target busybox \
cp -av /source/. /target/.
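As an aside, the inspect one-liner from the question can be written with a Go template instead of grep/cut (a sketch, assuming a single mount at that destination):
docker inspect --format '{{range .Mounts}}{{if eq .Destination "/etc/gitlab-runner"}}{{.Source}}{{end}}{{end}}' gitlab-runner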
Try to avoid modifying data inside a container from the host directly; it is much nicer to wrap your task into another container, which you then start with the --volumes-from option when possible in your case.
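For example (a sketch, assuming the runner container is named test1 as above):
docker run --rm --volumes-from test1 -v ~/etc:/target busybox \
cp -av /etc/gitlab-runner/. /target/.
This shares test1's volumes with the helper container, so the copy reads from the same anonymous volume that test1 writes to.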
Not sure I understood your problem; anyway, as for the documentation you mention:
The VOLUME instruction creates a mount point with the specified name
and marks it as holding externally mounted volumes from native host or
other containers. [...] The docker run command initializes the newly
created volume with any data that exists at the specified location
within the base image.
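For reference, the example Dockerfile from that documentation page is essentially:
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol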
So, following the example Dockerfile, after having built the image
docker build -t mytest .
and having the container running
docker run -d -ti --name mytestcontainer mytest /bin/bash
you can access it from the container itself, e.g.
docker exec -ti mytestcontainer ls -l /myvol/greeting
docker exec -ti mytestcontainer cat /myvol/greeting
Hope it helps.

Get docker run command for container

I have a container that I created, but I can't remember the exact docker run command I used to kick it off. Is there any way that can be retrieved?
This is not the same as "See full command of running/stopped container in Docker". What I want to know is the full docker command that spawned the container, not the command within the container.
You can infer most of that information by looking at the output of docker inspect.
For example, you can discover the command started inside the container by looking at the Config.Cmd key. If I run:
$ docker run -v /tmp/data:/data --name sleep -it --rm alpine sleep 600
I can later run:
$ docker inspect --format '{{.Config.Cmd}}' sleep
And get:
{[sleep 600]}
Similarly, the output of docker inspect will also include information about Docker volumes used in the container:
$ docker inspect --format '{{.Volumes}}' sleep
map[/data:/tmp/data]
You can of course just run docker inspect without --format, which will give you a big (100+ lines) chunk of JSON output containing all the available keys, which includes information about port mappings, network configuration, and more.
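A few more --format keys that help reconstruct the original docker run flags (a sketch; exact key names vary across Docker versions, e.g. newer releases expose volume information under .Mounts rather than .Volumes):
docker inspect --format '{{.HostConfig.Binds}}' sleep
docker inspect --format '{{.Config.Env}}' sleep
docker inspect --format '{{json .NetworkSettings.Ports}}' sleep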
