How to monitor Docker container logs from a non-root user?

I want to monitor Docker container logs as a non-root user (td-agent). On the host server I ran:
sudo chmod o+rx /var/lib/docker
sudo find /var/lib/docker/containers/ -type d -exec chmod o+rx {} \;
sudo find /var/lib/docker/containers/ -type f -exec chmod o+r {} \;
But the permissions keep getting reset: the containers directory and each container directory revert to root-only access (drwx------), and the *-json.log files stay at 600:
# find /var/lib/docker/containers -ls
143142 4 drwx------ 4 root root 4096 Aug 14 12:01 /var/lib/docker/containers
146027 4 drwx------ 2 root root 4096 Aug 14 12:00 /var/lib/docker/containers/145efa73652aad14e1706e8fcd1597ccbbb49fd756047f3931270b46fe01945d
146031 4 -rw-r--r-- 1 root root 190 Aug 14 12:00 /var/lib/docker/containers/145efa73652aad14e1706e8fcd1597ccbbb49fd756047f3931270b46fe01945d/hostconfig.json
146046 4 -rw-r--r-- 1 root root 13 Aug 14 12:00 /var/lib/docker/containers/145efa73652aad14e1706e8fcd1597ccbbb49fd756047f3931270b46fe01945d/hostname
146047 4 -rw-r--r-- 1 root root 174 Aug 14 12:00 /var/lib/docker/containers/145efa73652aad14e1706e8fcd1597ccbbb49fd756047f3931270b46fe01945d/hosts
146030 4 -rw-r--r-- 1 root root 3305 Aug 14 12:00 /var/lib/docker/containers/145efa73652aad14e1706e8fcd1597ccbbb49fd756047f3931270b46fe01945d/config.json
146049 4 -rw------- 1 root root 1853 Aug 14 12:00 /var/lib/docker/containers/145efa73652aad14e1706e8fcd1597ccbbb49fd756047f3931270b46fe01945d/145efa73652aad14e1706e8fcd1597ccbbb49fd756047f3931270b46fe01945d-json.log
146050 4 drwx------ 2 root root 4096 Aug 14 12:01 /var/lib/docker/containers/f09796f978ef5bab1449d2d10d400228eb76376579e7e33c615313eeed53f370
146054 4 -rw-r--r-- 1 root root 190 Aug 14 12:01 /var/lib/docker/containers/f09796f978ef5bab1449d2d10d400228eb76376579e7e33c615313eeed53f370/hostconfig.json
146056 4 -rw-r--r-- 1 root root 13 Aug 14 12:01 /var/lib/docker/containers/f09796f978ef5bab1449d2d10d400228eb76376579e7e33c615313eeed53f370/hostname
146057 4 -rw-r--r-- 1 root root 174 Aug 14 12:01 /var/lib/docker/containers/f09796f978ef5bab1449d2d10d400228eb76376579e7e33c615313eeed53f370/hosts
146053 4 -rw-r--r-- 1 root root 3286 Aug 14 12:01 /var/lib/docker/containers/f09796f978ef5bab1449d2d10d400228eb76376579e7e33c615313eeed53f370/config.json
146058 4 -rw------- 1 root root 1843 Aug 14 12:01 /var/lib/docker/containers/f09796f978ef5bab1449d2d10d400228eb76376579e7e33c615313eeed53f370/f09796f978ef5bab1449d2d10d400228eb76376579e7e33c615313eeed53f370-json.log
How can I monitor each of these json.log files, or is there a better way to monitor container logs?

logspout is another way to collect container logs. I'm not sure it is the best solution, but it is an interesting and consistent way to collect them.
You just need to run the logspout container. It forwards Docker containers' logs to a remote syslog server (you can also use its HTTP API; see the repository).
# (172.17.42.1 is host ip address)
$ docker run -v=/var/run/docker.sock:/tmp/docker.sock progrium/logspout syslog://172.17.42.1:5140
Fluentd running on the host can then handle these logs via the syslog protocol. Below is a td-agent.conf example: it receives logs over syslog and sends them to an Elasticsearch server (check this example project).
<source>
  type syslog
  port 5140
  bind 0.0.0.0
  tag syslog.udp
  format /^(?<time>.*?) (?<container_id>.*?) (?<container_name>.*?): (?<message>.*?)$/
  time_format %Y-%m-%dT%H:%M:%S%z
</source>

<match syslog.**>
  type elasticsearch
  host <ES_HOST>
  port <ES_PORT>
  index_name <ES_INDEX_NAME>
  type_name <ES_TYPE_NAME>
  flush_interval 3s
</match>

As I discussed in detail in this answer (which the OP never acknowledged), the best approach I have found is to configure the applications running within the container to log messages to syslog, and to mount the host's syslog socket into the container.
docker run -v /dev/log:/dev/log ...
The downside of this approach is that if the syslog daemon on the host is restarted, the container loses its socket, since the daemon recreates the socket on restart.
A fix is to add another socket (in rsyslog this can be done with the imuxsock module). Create the additional socket in a known directory, then bind mount that directory instead of /dev/log directly. The additional socket will also be removed when rsyslog restarts, but it is recreated in the same directory afterwards and so remains available to the application.
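For illustration, a minimal rsyslog sketch of that extra socket (the /var/run/rsyslog-docker path is an arbitrary choice for this example; adjust it and the container application's syslog socket setting to your setup):
# /etc/rsyslog.d/docker-socket.conf
module(load="imuxsock")
input(type="imuxsock" Socket="/var/run/rsyslog-docker/log" CreatePath="on")
# bind mount the directory rather than the socket itself, so the recreated
# socket shows up inside running containers after an rsyslog restart
docker run -v /var/run/rsyslog-docker:/var/run/rsyslog-docker ...
The application inside the container then logs to /var/run/rsyslog-docker/log instead of /dev/log.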

One easy way to deal with this issue is to mount the host's /sys/fs/cgroup into a Docker container running in_docker_metrics. See https://github.com/bdehamer/docker-librato
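For illustration, the bind mount looks roughly like this (the agent image is a placeholder; see the linked repository for the real invocation):
docker run -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro <metrics-agent-image>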

Sematext Docker Agent (open source, on GitHub) can do this for you; you won't need td-agent. SDA collects logs as well as events and metrics. See https://github.com/sematext/sematext-agent-docker and
https://sematext.com/docker


How can Docker container write to a mounted directory with permissions granted through group membership?

Versions
Host OS: Debian 4.9.110
Docker Version: 18.06.1-ce
Scenario
I have a directory where multiple users (user-a and user-b) have read/write access through a common group membership (shared), set up via chown:
/media/disk-a/shared/$ ls -la
drwxrwsr-x 4 user-a shared 4096 Oct 7 22:21 .
drwxrwxr-x 7 root root 4096 Oct 1 19:58 ..
drwxrwsr-x 5 user-a shared 4096 Oct 7 22:10 folder-a
drwxrwsr-x 3 user-a shared 4096 Nov 10 22:10 folder-b
UIDs & GIDs are as following:
uid=1000(user-a) gid=1000(user-a) groups=1000(user-a),1003(shared)
uid=1002(user-b) gid=1002(user-b) groups=1002(user-b),1003(shared)
Relevant /etc/group looks like this:
shared:x:1003:user-a,user-b
When su-ing to either user, files can be created as expected within the shared directory.
The shared directory is attached to a Docker container via a bind mount at /shared/. The Docker container runs as user-b (using the --user "1002:1002" parameter):
$ ps aux | grep user-b
user-b 1347 0.2 1.2 1579548 45740 ? Ssl 17:47 0:02 entrypoint.sh
id from within the container prints the following result, which looks okay to me:
I have no name!@7a5d2cc27491:/$ id
uid=1002 gid=1002
Also ls -la mirrors its host system equivalent perfectly:
I have no name!@7a5d2cc27491:/shared$ ls -la
total 16
drwxrwsr-x 4 1000 1003 4096 Oct 7 20:21 .
drwxr-xr-x 1 root root 4096 Oct 8 07:58 ..
drwxrwsr-x 5 1000 1003 4096 Oct 7 20:10 folder-a
drwxrwsr-x 3 1000 1003 4096 Nov 10 20:10 folder-b
Problem
From within the container, I cannot write anything to the shared directory. For example, touch fails:
I have no name!@7a5d2cc27491:/shared$ touch test
touch: cannot touch 'test': Permission denied
I can write to a directory that is directly owned by user-b (user and group) and mounted into the container, so it seems the group membership is simply not being respected at all.
I have looked into things like user namespace remapping, but those seemed to be solutions for a different problem. What am I missing?
Your container user has gid=1002 but is not a member of group shared (gid=1003).
In addition to --user "1002:1002" you need --group-add 1003.
Then the container user is allowed to access the shared folder with gid=1003.
id should show:
I have no name!@7a5d2cc27491:/$ id
uid=1002 gid=1002 groups=1003
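For example, a run command along these lines (the image name is a placeholder) should make the write succeed, because the supplementary group 1003 now matches the group owning the directory:
docker run --rm --user "1002:1002" --group-add 1003 \
    -v /media/disk-a/shared:/shared <image> touch /shared/test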

How to crawl nginx container logs via filebeat?

Problem Statement
The NGINX image is configured to send the main NGINX access and error logs to the Docker log collector by default. This is done by linking them to stdout and stderr, which causes all messages from both logs to be stored in the file /var/lib/docker/containers/<container id>/<container id>-json.log on the Docker Host.
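Under the hood this amounts to a pair of symlinks baked into the image, roughly (paraphrased from the official nginx Dockerfile):
ln -sf /dev/stdout /var/log/nginx/access.log
ln -sf /dev/stderr /var/log/nginx/error.log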
Since the hard work of getting the logs out of the container and onto the host has already been taken care of for us, perhaps we should try to leverage that? But there are numerous indistinguishable folders in /var/lib/docker/containers/:
# ls -alrt /var/lib/docker/containers/
total 84
drwx--x--x 14 root root 4096 Jul 4 13:40 ..
drwx------ 4 root root 4096 Jul 4 13:55 a4ee4224c3e4c68a8023eb63c01b2a288019257440b30c4efb7226eb83629956
drwx------ 4 root root 4096 Jul 6 16:24 59d1465b5c42f2ce6b13747c39ff3995191d325d641b6ef8cad1a8446247ef24
...
drwx------ 4 root root 4096 Jul 9 06:34 cab3407af18d778b259f54df16e60f5e5187f14b01a020b30f6c91c6f8003bdd
drwx------ 4 root root 4096 Jul 9 06:35 0b99140af456b29af6fcd3956a6cdfa4c78d1e1b387654645f63b8dc4bbf049c
drwx------ 21 root root 4096 Jul 9 06:35 .
Even if we narrow them down by searching recursively through /var/lib/docker/containers/ for files named *-json.log that contain the string upstream_response_time ...
# grep -lr "upstream_response_time" /var/lib/docker/containers/ --include "*-json.log"
/var/lib/docker/containers/cfe8...fe18/cfe8...fe18-json.log
/var/lib/docker/containers/c3c3...6662/c3c3...6662-json.log
... that still leaves us in a situation where we constantly have to step in to find the correct folders as containers start and stop; we would be stuck reconfiguring Filebeat to crawl them.
Question: So how can the docker container log folders be renamed to give them a predictable name?
Alternatives
Here are some other methods I've ruled out, but feel free to differ.
Setting up a named volume
$ tree /var/lib/docker/volumes/*nginx-log-volume
/var/lib/docker/volumes/my_swarm_stack_nginx-log-volume
└── _data
├── access.log -> /dev/stdout
└── error.log -> /dev/stderr
The named volume exists as a combination of the stack name and the volume name: my_swarm_stack_nginx-log-volume. But rather than being regular files, these are symlinks to the standard streams, so I felt this approach was invalid.
I think you are over-complicating the problem at hand. Filebeat already has a lot of configurable options; you don't need to reinvent stuff like this.
I suggest you just use the add_docker_metadata processor. It attaches useful information such as the image and container name to each log event produced by a container; the drop_event processor can then check that metadata, so that you only accept logs from a specific container.
processors:
  - add_docker_metadata: ~
  - drop_event:
      when:
        not:
          regexp:
            docker.container.name: "^nginx"
Adding Docker Metadata Documentation
Filtering Using Drop Processor
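Putting it together, a minimal filebeat.yml sketch along those lines (the container input paths and the Elasticsearch output are assumptions added only to make the sketch self-contained):
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log

processors:
  - add_docker_metadata: ~
  - drop_event:
      when:
        not:
          regexp:
            docker.container.name: "^nginx"

output.elasticsearch:
  hosts: ["localhost:9200"]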

Failing to get PyCharm to work with remote interpreter on docker

When I add a remote interpreter from one of my docker-compose services, it doesn't seem to succeed and doesn't show any packages in the dialog. When I add the interpreter to the debugger it says:
python packaging tools not found.
Then if I click on "install packaging tools", this error is displayed:
ERROR: for dockeryard_pycharm_helpers_1
Cannot start service pycharm_helpers: network not found
Starting dockeryard_postgres_1 ...
Starting dockeryard_nginx_1 ...
Starting dockeryard_redis_1 ...
Starting dockeryard_postgres_1 ...
Starting dockeryard_nginx_1 ...
Starting dockeryard_pycharm_helpers_1
Starting dockeryard_redis_1
Starting dockeryard_worker_1 ...
Starting dockeryard_worker_1
Starting dockeryard_pycharm_helpers_1
ERROR: for dockeryard_pycharm_helpers_1 Cannot start service pycharm_helpers: network not found
ERROR: for pycharm_helpers Cannot start service pycharm_helpers: network not found
ERROR:
Note, this interpreter was already in use and I was able to connect remotely with PyCharm, but I later added, and eventually removed, a custom network on the container.
As explained in Configuring Remote Python Interpreters - "When a remote Python interpreter is added, at first the PyCharm helpers are copied to the remote host". My guess is that something went wrong because the network was changed in the docker-compose file.
From what I understand of the error message, when PyCharm starts the interpreter it tries to use/find the network c7b0cc277c94ba5f58f6e72dcbab1ba24794e72422e839a83ea6102d08c40452.
I don't see that network listed anywhere when I run:
$ docker network inspect dockeryard_default
So PyCharm stores it somewhere and has not picked up the change.
I have tried to remove the interpreter (using the PyCharm dialog) and add it back: same result.
How can I get rid of this network and make PyCharm able to debug again?
Thanks.
I was having a near-identical error and was able to get past it. I did two things, though I'm uncertain which was the actual solution:
Made sure the mappings were correct under both (a) Preferences -> Project -> Project Interpreter -> Path mappings and (b) Run -> Edit Configurations -> <Your_Configuration> -> Path mappings
Removed/deleted any containers that looked to be related to PyCharm (believe this is more than likely what solved things).
Hope this helps. PyCharm docker-compose seems to work for some and be a real PITA for others.
One other note: I downgraded from PyCharm 2018 to 2017.3, as there are known docker bugs in 2018.
EDIT: And it would seem a docker-compose down from CLI reintroduces the error -_-
TLDR:
The {project_name}_pycharm_helpers_{pycharm_build_number} volume has been removed or is corrupted.
To repopulate it, run:
docker volume rm {project_name}_pycharm_helpers_{pycharm_build_number}
docker run -v {project_name}_pycharm_helpers_{pycharm_build_number}:/opt/.pycharm_helpers pycharm_helpers:{pycharm_build_number}
The pycharm_build_number can be found in the about section of your pycharm (mac OS: Pycharm > About)
Long story
I struggled a lot with PyCharm suddenly not finding the helpers any more, and with related bugs, sometimes because I was clearing my containers or volumes. For instance, running
docker rm -f `docker container ps -aq`
docker volume rm $(docker volume ls -q)
will almost surely get PyCharm into trouble.
As far as I understand how PyCharm works, there is:
a PyCharm base image named pycharm_helpers with tag corresponding to your pycharm build number (for example: PY-202.7660.27)
the first time you create docker related things, PyCharm creates volumes that get data from this image for later use in your containers. For instance, after a first attempt at running a remote docker-compose interpreter, I see the newly created myproject_pycharm_helpers_PY-202.7660.27 volume when doing docker volume ls.
when running the docker interpreter, PyCharm mounts this volume into the /opt/.pycharm_helpers directory by adding, at some point, -v myproject_pycharm_helpers_PY-202.7660.27:/opt/.pycharm_helpers to your command. For instance with docker-compose, you can see the addition of -f /Users/clementwalter/Library/Caches/JetBrains/PyCharm2020.2/tmp/docker-compose.override.1508.yml, and when you actually look into this file you see:
version: "3.8"
services:
local:
command:
- "python"
- "/opt/.pycharm_helpers/pydev/pydevconsole.py"
- "--mode=server"
- "--port=55824"
entrypoint: ""
environment:
PYCHARM_MATPLOTLIB_INTERACTIVE: "true"
PYTHONPATH: "/opt/project:/opt/.pycharm_helpers/pycharm_matplotlib_backend:/opt/.pycharm_helpers/pycharm_display:/opt/.pycharm_helpers/third_party/thriftpy:/opt/.pycharm_helpers/pydev"
PYTHONUNBUFFERED: "1"
PYTHONIOENCODING: "UTF-8"
PYCHARM_MATPLOTLIB_INDEX: "0"
PYCHARM_HOSTED: "1"
PYCHARM_DISPLAY_PORT: "63342"
IPYTHONENABLE: "True"
volumes:
- "/Users/clementwalter/Documents/myproject:/opt/project:rw"
- "pycharm_helpers_PY-202.7660.27:/opt/.pycharm_helpers"
working_dir: "/opt/project"
volumes:
pycharm_helpers_PY-202.7660.27: {}
You get into trouble when this volume is no longer correctly populated.
Fortunately the docker volume documentation has a section "Populate a volume using a container" which is exactly what PyCharm does under the hood.
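In general terms that pattern looks like this (names reuse the ones from this answer and are only illustrative):
# an empty named volume mounted over a non-empty image directory gets
# populated with the image's files on first use
docker volume create myproject_pycharm_helpers_PY-202.7660.27
docker run --rm -v myproject_pycharm_helpers_PY-202.7660.27:/opt/.pycharm_helpers \
    pycharm_helpers:PY-202.7660.27 true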
For the record you can check the content of the pycharm_helpers image:
$ docker run -it pycharm_helpers:PY-202.7660.27 sh
/opt/.pycharm_helpers #
you end up in the /opt/.pycharm_helpers directory and find all the helpers there:
/opt/.pycharm_helpers # ls -la
total 5568
drwxr-xr-x 21 root root 4096 Dec 17 16:38 .
drwxr-xr-x 1 root root 4096 Dec 17 11:07 ..
-rw-r--r-- 1 root root 274 Dec 17 11:07 Dockerfile
drwxr-xr-x 5 root root 4096 Dec 17 16:38 MathJax
-rw-r--r-- 1 root root 2526 Sep 16 11:14 check_all_test_suite.py
-rw-r--r-- 1 root root 3194 Sep 16 11:14 conda_packaging_tool.py
drwxr-xr-x 2 root root 4096 Dec 17 16:38 coverage_runner
drwxr-xr-x 3 root root 4096 Dec 17 16:38 coveragepy
-rw-r--r-- 1 root root 11586 Sep 16 11:14 docstring_formatter.py
drwxr-xr-x 4 root root 4096 Dec 17 16:38 epydoc
-rw-r--r-- 1 root root 519 Sep 16 11:14 extra_syspath.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 generator3
-rw-r--r-- 1 root root 8 Sep 16 11:14 icon-robots.txt
-rw-r--r-- 1 root root 3950 Sep 16 11:14 packaging_tool.py
-rw-r--r-- 1 root root 1490666 Sep 16 11:14 pip-20.1.1-py2.py3-none-any.whl
drwxr-xr-x 2 root root 4096 Dec 17 16:38 pockets
drwxr-xr-x 3 root root 4096 Dec 17 16:38 profiler
-rw-r--r-- 1 root root 863 Sep 16 11:14 py2ipnb_converter.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 py2only
drwxr-xr-x 3 root root 4096 Dec 17 16:38 py3only
drwxr-xr-x 7 root root 4096 Dec 17 16:38 pycharm
drwxr-xr-x 4 root root 4096 Dec 17 16:38 pycharm_display
drwxr-xr-x 3 root root 4096 Dec 17 16:38 pycharm_matplotlib_backend
-rw-r--r-- 1 root root 103414 Sep 16 11:14 pycodestyle.py
drwxr-xr-x 24 root root 4096 Dec 17 16:38 pydev
drwxr-xr-x 9 root root 4096 Dec 17 16:38 python-skeletons
drwxr-xr-x 2 root root 4096 Dec 17 16:38 rest_runners
-rw-r--r-- 1 root root 583493 Sep 16 11:14 setuptools-44.1.1-py2.py3-none-any.whl
-rw-r--r-- 1 root root 29664 Sep 16 11:14 six.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 sphinxcontrib
-rw-r--r-- 1 root root 128 Sep 16 11:14 syspath.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 third_party
drwxr-xr-x 3 root root 4096 Dec 17 16:38 tools
drwxr-xr-x 5 root root 4096 Dec 17 16:38 typeshed
-rw-r--r-- 1 root root 3354133 Sep 16 11:14 virtualenv-16.7.10-py2.py3-none-any.whl
To make these helpers available again, following the Docker documentation, you have to repopulate the volume. To do so:
docker volume rm {project_name}_pycharm_helpers_{pycharm_build}
docker run -v {project_name}_pycharm_helpers_{pycharm_build}:/opt/.pycharm_helpers pycharm_helpers:{pycharm_build}
et voilà
If you're still seeing this in PyCharm 2020.2 then do this:
close PyCharm
try @peterc's suggestion:
docker ps -a | grep -i pycharm | awk '{print $1}' | xargs docker rm
launch PyCharm again
The option invalidate cache -> Clear downloaded shared indexes will also repopulate the Pycharm volumes. (At least in 2021.1)

How to get contents generated by a docker container on the local filesystem (minimal failing example)

This question is a minimal failing version of this other one:
How to get contents generated by a docker container on the local filesystem
I have the following files:
./test
-rw-r--r-- 1 miqueladell staff 114 Jan 21 15:24 Dockerfile
-rw-r--r-- 1 miqueladell staff 90 Jan 21 15:23 docker-compose.yml
drwxr-xr-x 3 miqueladell staff 102 Jan 21 15:25 html
./test/html:
-rw-r--r-- 1 miqueladell staff 0 Jan 21 15:22 file_from_local_filesystem
Dockerfile
FROM php:7.0.2-apache
RUN touch /var/www/html/file_generated_inside_the_container
VOLUME /var/www/html/
docker-compose.yml
test:
  image: test
  volumes:
    - ./html:/var/www/html/
After running a container built from the image defined in the Dockerfile, what I want to end up with is:
./html
-- file_from_local_filesystem
-- file_generated_inside_the_container
Instead of this I get the following:
build the image
$ docker build --no-cache -t test .
Sending build context to Docker daemon 4.096 kB
Step 1 : FROM php:7.0.2-apache
---> 2f16964f48ba
Step 2 : RUN touch /var/www/html/file_generated_inside_the_container
---> Running in b957cc9d7345
---> 5579d3a2d3b2
Removing intermediate container b957cc9d7345
Step 3 : VOLUME /var/www/html/
---> Running in 6722ddba76cc
---> 4408967d2a98
Removing intermediate container 6722ddba76cc
Successfully built 4408967d2a98
run a container with previous image
$ docker-compose up -d
Creating test_test_1
list files on the local machine filesystem
$ ls -al html
total 0
drwxr-xr-x 3 miqueladell staff 102 Jan 21 15:25 .
drwxr-xr-x 5 miqueladell staff 170 Jan 21 14:20 ..
-rw-r--r-- 1 miqueladell staff 0 Jan 21 15:22 file_from_local_filesystem
list files from the container
$ docker exec -i -t test_test_1 ls -alR /var/www/html
/var/www/html:
total 4
drwxr-xr-x 1 1000 staff 102 Jan 21 14:25 .
drwxr-xr-x 4 root root 4096 Jan 7 18:05 ..
-rw-r--r-- 1 1000 staff 0 Jan 21 14:22 file_from_local_filesystem
The volume from the local filesystem gets mounted over the container's filesystem, replacing its contents.
This contradicts what I understood from the "Permissions and Ownership" section of the Understanding volumes guide.
How could I get the desired output?
Thanks
EDIT: As the accepted answer says, I did not understand volumes when asking the question. Volumes, used as mount points, replace the container's content with the local filesystem that is mounted over it.
The solution I needed was to use ENTRYPOINT to run the necessary commands to initialize the contents of the mounted volume once the container is running.
The code that originated the question can be seen working here:
https://github.com/MiquelAdell/composed_wordpress/tree/1.0.0
This is from the guide you've pointed to:
This won't happen if you specify a host directory for the volume
Volumes shared from other containers or from the host filesystem replace directories in the container.
If you need to add files to a volume, you should do it after you start the container. You can use an entrypoint, for example, which creates the files and then runs your main process.
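A sketch of that entrypoint approach for the question's image (file names follow the question; apache2-foreground is the command the php:apache image normally runs):
# entrypoint.sh
#!/bin/sh
touch /var/www/html/file_generated_inside_the_container
exec apache2-foreground

# Dockerfile
FROM php:7.0.2-apache
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]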
Yep, pretty sure it should be the full path:
docker-compose.yml
test:
  image: test
  volumes:
    - ./html:/var/www/html/
./html should be /path/to/html
Edit
Output after changing to full path and running test.sh:
$ docker exec -ti dockervolumetest_test_1 bash
root@c0bd7a722b63:/var/www/html# ls -la
total 8
drwxr-xr-x 2 1000 adm 4096 Jan 21 15:19 .
drwxr-xr-x 3 root root 4096 Jan 7 18:05 ..
-rw-r--r-- 1 1000 adm 0 Jan 21 15:19 file_from_local_filesystem
Edit 2
Sorry, I misunderstood the entire premise of the question :)
So you're trying to get file_generated_inside_the_container (which is created inside your docker image only) mounted to some location on your host machine - like a "reverse mount".
This isn't possible with any docker command, but if all you're after is access to your VOLUME's files on the host, you can find them under the Docker root directory (normally /var/lib/docker). To find the exact location of the files, you can use docker inspect [container_id], or in the latest versions use the Docker API.
See cpuguy's answer in this github issue: https://github.com/docker/docker/issues/12853#issuecomment-123953258 for more details.
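For example, to see where Docker keeps a container's mounts on the host (container name taken from the question above):
# prints Source/Destination for every bind mount and named volume of the container
docker inspect -f '{{ json .Mounts }}' test_test_1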

Is it safe to delete docker logs generated at /var/lib/docker/containers/HASH

The log is currently 13 GB. I don't know whether it is safe to delete it, or how to make the log smaller.
root@faith:/var/lib/docker/containers/f1ac17e833be2e5d1586d34c51324178bd18f969d1046cbb59f10eaa4bcf84be# ls -alh
total 13G
drwx------ 2 root root 4.0K Mar 6 08:35 .
drwx------ 3 root root 4.0K Feb 24 11:00 ..
-rw-r--r-- 1 root root 2.1K Feb 24 10:15 config.json
-rw------- 1 root root 13G Feb 25 00:27 f1ac17e833be2e5d1586d34c51324178bd18f969d1046cbb59f10eaa4bcf84be-json.log
-rw-r--r-- 1 root root 611 Feb 24 10:15 hostconfig.json
-rw-r--r-- 1 root root 13 Feb 24 10:15 hostname
-rw-r--r-- 1 root root 175 Feb 24 10:15 hosts
-rw-r--r-- 1 root root 61 Feb 24 10:15 resolv.conf
-rw-r--r-- 1 root root 71 Feb 24 10:15 resolv.conf.hash
Congratulations, you have discovered one of The Big Unsolved Problems with Docker!
As Nathaniel says, Docker assumes it has complete ownership of things under /var/lib/docker so trying to delete files there from behind Docker's back may not work.
However, based on comments in issue 7333 and in PR 9753, it looks like people are successfully using logrotate with the copytruncate directive to rotate Docker logs. Both links are worth reading, because they contain a long discussion about the pitfalls of Docker logging and some potential solutions.
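A minimal logrotate sketch along those lines (rotation thresholds are illustrative):
# /etc/logrotate.d/docker-containers
/var/lib/docker/containers/*/*-json.log {
    daily
    rotate 7
    compress
    missingok
    copytruncate
}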
Ideally, Docker itself would have much better native support for log management. Until then, here are some alternatives to consider:
If you control the source for your applications, you can configure everything to log to syslog rather than to stdout/stderr. There are then a variety of solutions you can pursue, from running a syslog service inside your container to exposing the host's /dev/log inside the container.
Another option is to run systemd inside your container and use it to start your services. systemd will collect stdout/stderr from your services and feed it to journald, and journald will take care of things like log rotation (and also give you a reasonably flexible mechanism for querying the logs).
These ought to be cleaned up when you delete the container. (Thus, it is not OK to delete them, because Docker believes that it has control of /var/lib/docker.)
