How to browse the contents of a docker/btrfs container-specific layer

I have read, and I believe understood, the Docker pages on using btrfs, notably this one.
My question is rather simple: I need to be able to navigate (e.g. using cd and ls, but any other means is fine) in what the above link calls the Thin R/W layer attached to a given container.
The reason I need this is that I use an image that I have not built myself - namely jupyter/scipy-notebook:latest - and what I can see is that each container starts with a roughly 100-200 MB impact on overall disk usage, even though nothing much should be going on in the container.
So I suspect some rather verbose logs get created that I need to quiet down a bit; however, the whole union filesystem is huge - around 5 GB - so it would help me greatly to navigate only the data that is specific to one container, so I can pinpoint the problem.

To list the files that have been changed or added since the original image, use
docker diff my-container
This is quite handy if you want to get an idea of what's happening inside; it doesn't give you the file sizes, though.
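If you do need sizes, a rough sketch is to feed the paths reported by docker diff back into du inside the container. This assumes the container is named my-container, is still running, and has du available; directories reported as "C" (changed) will include content inherited from lower layers, so treat those numbers as upper bounds:
docker diff my-container | awk '{print $2}' | xargs -r docker exec my-container du -sh 2>/dev/null | sort -h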

Related

Docker - Identify Unused Volumes

I'm trying to find a way to identify which container created a volume, where it wants to mount it, and whether it will be reused when the container restarts, for volumes that are currently not in use.
I know I can see which container is currently using a volume and where it's mounted in said container, but that isn't enough. I need to identify containers that are no longer running.
The Situation
I've noticed a frequently recurring problem with Docker: I create a container to test it out, make some adjustments, restart it, make some more, restart it, until I get it working how I want.
In the process, I often come across containers that create worthless volumes. After the fact, I can identify these as 8 KB volumes not currently in use and just delete them.
But many times these volumes aren't even persistent, as the container will create a new one each time it runs.
At times I look at my volumes list and see over 100 volumes, none of which are currently in use. The 8 KB ones I'll delete without a second thought, but the ones that are 12 KB, 24 KB, 100 KB, 5 MB, and so on, I don't want to just delete.
I use a Portainer agent inside Portainer solely for the ability to quickly browse these volumes and decide whether each needs to be kept, transferred to a bind mount, or just discarded, but it's becoming more and more of a problem, and I figure there has to be some way to identify the container they came from. I'm sure it will require some sort of code exploration, but where? Is there not a tool to do this? If I know where the information is, I should be able to write a script or even make a container just for this purpose; I just don't know where to begin.
The most annoying case is when a container creates a second container that I have no control over; that second container uses a volume but creates a new one each time it starts.
Some examples
adoring_hellman created by VS Code Server container linuxserver/code-server
datadog/agent creates a container I believe is called st-vector or something similar
Both of which have access to /var/run/docker.sock
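As far as I know, the volume metadata itself does not record which container created it once that container is gone, but as a starting point something like the following sketch lists every volume together with its size and any container, running or stopped, that still references it. It assumes a Linux host where the volume Mountpoint is directly readable (on Docker Desktop it lives inside the VM):
for v in $(docker volume ls -q); do
  refs=$(docker ps -a --filter "volume=$v" --format '{{.Names}} ({{.Image}}, {{.Status}})')
  size=$(sudo du -sh "$(docker volume inspect -f '{{ .Mountpoint }}' "$v")" 2>/dev/null | cut -f1)
  echo "$v  ${size:-?}  ${refs:-no container references this volume}"
done
Anything with no referencing container and a trivial size is a candidate for deletion; for the rest, the Mountpoint at least lets you inspect the contents before deciding.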

How to inspect contents of different Docker image layers?

My current understanding of a Docker image is that it is a collection of individual layers. Each layer only contains deltas that are merged via the union filesystem (which simply mounts all layers on top of each other). When instantiating an image, another (writable) layer is put on top that will then contain all container-specific changes that are persisted between restarts. Please correct me if I am wrong in any of the above.
I would like to inspect the contents of each of the various layers. I am particularly interested in inspecting the top-most layer to see whether my containerized app writes any data that would bloat the container, like a log or so. I am working on macOS, which does not store all the files in /var/lib/docker/, but seems to store them in a VM. I read about the docker-machine tools that make it easy to connect to the Docker engine via SSH, where one would be able to see and mount all layers. However, this tool seems to be discontinued.
Does anybody have an idea on 1) how to connect to the docker engine to get access to the layers and 2) how to find out what files are contained in a particular layer?
edit: It seems to be possible to use docker diff to see the file differences between the original image and the running container, which is what I mainly wanted to achieve, but the original questions remain.
You can list the layers and their sizes with the docker history command. But to inspect the contents of all layers I recommend using the dive tool.
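A short sketch of both, assuming the image from the earlier question, jupyter/scipy-notebook:latest (substitute your own image):
docker history jupyter/scipy-notebook:latest   # layer-by-layer sizes and the instructions that created them
dive jupyter/scipy-notebook:latest             # interactive, per-layer file tree browser
# without extra tooling you can also export the image and poke at the layer tarballs directly
docker save -o image.tar jupyter/scipy-notebook:latest && tar -tf image.tar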

Docker design: exchange data between containers or put multiple processes in one container?

In a current project I have to perform the following tasks (among others):
capture video frames from five IP cameras and stitch a panorama
run machine learning based object detection on the panorama
stream the panorama so it can be displayed in a UI
Currently, the stitching and the streaming run in one Docker container, and the object detection runs in another, reading the panorama stream as input.
Since I need to increase the input resolution for the object detector while maintaining the stream resolution for the UI, I have to look for alternative ways of getting the stitched (full-resolution) panorama (~10 MB per frame) from the stitcher container to the detector container.
My thoughts regarding potential solutions:
Shared volume. Potential downside: one extra write and read per frame might be too slow?
Using a message queue, e.g. Redis. Potential downside: yet another component in the architecture.
Merging the two containers. Potential downside(s): not only does it not feel right, but the two containers have completely different base images and dependencies. Plus I'd have to worry about parallelization.
Since I'm not the sharpest knife in the docker drawer, what I'm asking for are tips, experiences and best practices regarding fast data exchange between docker containers.
Most communication between Docker containers happens over network sockets. This is fine when you're talking to something like a relational database or an HTTP server. It sounds like your application is a little more about sharing files, though, and that's something Docker is a little less good at.
If you only want one copy of each component, or are still actively developing the pipeline: I'd probably not use Docker for this. Since each container has an isolated filesystem and its own user ID space, sharing files can be unexpectedly tricky (every container must agree on numeric user IDs). But if you just run everything on the host, as the same user, pointing at the same directory, this isn't a problem.
If you're trying to scale this in production: I'd add some sort of shared filesystem and a message queueing system like RabbitMQ. For local work this could be a Docker named volume or bind-mounted host directory; cloud storage like Amazon S3 will work fine too. The setup is like this:
Each component knows about the shared storage and connects to RabbitMQ, but is unaware of the other components.
Each component reads a message from a RabbitMQ queue that names a file to process.
The component reads the file and does its work.
When it finishes, the component writes the result file back to the shared storage, and writes its location to a RabbitMQ exchange.
In this setup each component is totally stateless. If you discover that, for example, the machine-learning component of this is slowest, you can run duplicate copies of it. If something breaks, RabbitMQ will remember that a given message hasn't been fully processed (acknowledged); and again because of the isolation you can run that specific component locally to reproduce and fix the issue.
This model also translates well to larger-scale Docker-based cluster-computing systems like Kubernetes.
Running this locally, I would absolutely keep separate concerns in separate containers (especially if individual image-processing and ML tasks are expensive). The setup I propose needs both a message queue (to keep track of the work) and a shared filesystem (because message queues tend to not be optimized for 10+ MB individual messages). You get a choice between Docker named volumes and host bind-mounts as readily available shared storage. Bind mounts are easier to inspect and administer, but on some platforms are legendarily slow. Named volumes I think are reasonably fast, but you can only access them from Docker containers, which means needing to launch more containers to do basic things like backup and pruning.
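As a rough sketch of the local version of this wiring (the network, volume, and worker image names below are invented; the official rabbitmq image is real):
docker network create pano-net
docker volume create pano-frames
docker run -d --name broker --network pano-net rabbitmq:3-management
# both workers mount the same named volume at /data and reach the broker by its container name
docker run -d --name stitcher --network pano-net -v pano-frames:/data stitcher-image
docker run -d --name detector --network pano-net -v pano-frames:/data detector-image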
Alright, let's unpack this:
IMHO a shared volume works just fine, but it gets way too messy over time, especially if you're handling stateful services.
MQ: This seems like the best option in my opinion. Yes, it's another component in your architecture, but it makes more sense to have it than to maintain messy shared volumes or handle massive container images (if you manage to combine the two container images).
Yes, you could potentially do this, but it's not a good idea. Considering your use case, I'm going to go ahead and assume that you have a massive list of dependencies which could potentially lead to conflicts. Also, lots of dependencies = larger image = larger attack surface, which from a security perspective is not a good thing.
If you really want to run multiple processes in one container, it's possible. There are multiple ways to achieve that; however, I prefer supervisord.
https://docs.docker.com/config/containers/multi-service_container/
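If you would rather not pull in supervisord, a common alternative (described, I believe, on the page linked above) is a small wrapper script used as the container's entrypoint; a minimal sketch, where the two process names are placeholders for your own binaries:
#!/bin/bash
# start both processes, then block until one of them exits;
# propagating that exit code lets Docker notice the failure and restart the container
./stitcher-process &
./detector-process &
wait -n
exit $?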

Why might an image run differently in Kubernetes than in Docker?

I'm experiencing an issue where an image I'm running as part of a Kubernetes deployment is behaving differently from the expected and consistent behavior of the same image run with docker run <...>. My understanding of the main purpose of containerizing a project is that it will always run the same way, regardless of the host environment (ignoring the influence of the user and of outside data). Is this wrong?
Without going into too much detail about my specific problem (since I feel the solution may likely be far too specific to be of help to anyone else on SO, and because I've already detailed it here), I'm curious if someone can detail possible reasons to look into as to why an image might run differently in a Kubernetes environment than locally through Docker.
The general answer for why they're different is resources, but the real answer is that they should both behave identically given identical resources.
Kubernetes uses Docker for its container runtime, at least in most cases I've seen. There are some other runtimes (CRI-O and rkt) that are less widely adopted, so using those may also contribute to variance in how things work.
On your local docker it's pretty easy to mount things like directories (volumes) into the image, and you can populate the directory with some content. Doing the same thing on k8s is more difficult, and probably involves more complicated mappings, persistent volumes or an init container.
Running docker on your laptop and k8s on a server somewhere may give you different hardware resources:
different amounts of RAM
different size of hard disk
different processor features
different core counts
The last one is most likely what you're seeing: Flask is probably looking up the core count on both systems and seeing two different values, so it runs two different thread/worker counts.
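One way to sanity-check that theory is to compare what each environment actually reports; the image and pod names below are placeholders:
docker run --rm your-image nproc      # what the container sees under plain Docker
kubectl exec your-pod -- nproc        # what the same image sees inside the Kubernetes pod
# nproc typically reflects the node's CPU count rather than any CPU quota set on the pod,
# so a bigger node usually means more workers unless you pin the count explicitly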

How file lookup works in a Docker container

According to the Docker docs, every Dockerfile instruction creates a layer, and all the layers are kept when you create a new image based on an old one. So when I create my own image, it might involve hundreds of layers because of the recursive inheritance of the base image's layers.
In my understanding, file lookup in a container works this way:
A process wants to access file a; the lookup starts from the container layer (the thin R/W layer).
UnionFS checks whether this layer has a record for it (either the file itself or a deletion marker). If yes, it returns the file or reports "not found" respectively, ending the lookup. If no, it passes the task to the layer below.
The lookup ends at the bottom layer.
If that is the way it works, then a file that resides in the bottom layer and is unchanged by other layers, /bin/sh maybe, would require going through all the layers down to the bottom. Even though the layers might be very lightweight, such a lookup would still take 100x the time of a regular one, which should be noticeable. But in my experience Docker is pretty fast, almost the same as a native OS. Where am I wrong?
This is all thanks to UnionFS and Union mounts!
Straight from wikipedia:
It allows files and directories of separate file systems, known as branches, to be transparently overlaid, forming a single coherent file system.
And from an interesting article:
In the kernel, the filesystems are stacked in order of their mount sequence, the first mounted filesystem is at the bottom of the mount stack, and the latest mount is at the top of the stack. Only the files and directories of the top of the mount stack are visible. With union mounts, directory entries from the lower filesystems are merged with the directory entries of upper filesystem, thus making a logical combination of all mounted filesystems. Files with the same name in a lower filesystem are masked, as the upper one takes precedence.
So it doesn't "go through layers" in the conventional sense (e.g. one at a time); rather, it knows (at any given time) which file resides on which disk.
Doing this in the filesystem layer also means none of the software has to worry about where the file resides: it asks for /bin/sh and the filesystem knows where to get it.
More info can be found in this webinar.
So to answer your question:
Where am I wrong?
You are thinking that it has to look through the layers one at a time while it doesn't have to do that. (UnionFS is awesome!)
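You can actually see that single merged view being assembled on a host that uses the overlay2 storage driver (the container name is a placeholder; the exact paths vary per host):
# the driver metadata lists the read-only image layers (LowerDir), the container's
# thin writable layer (UpperDir) and the combined view it sees (MergedDir)
docker inspect -f '{{ json .GraphDriver.Data }}' my-container
# the same directories show up as options of a single overlay mount
mount | grep overlay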
To add to the correct prior answer: copy-on-write (CoW) and union filesystem implementors want near-native performance, so of course they have tuned their implementations and "API" for the best possible lookup/filesystem performance.
That said, it's good to be aware that Docker does not operate on top of only a single 'type' of union/CoW filesystem, but has a small array of available options, with defaults depending on the Linux distro on which it is installed.
AUFS and overlay(fs) are the most common, but Docker also supports devicemapper (Red Hat contributed and supported on Fedora/RHEL/CentOS), btrfs, and zfs. I have a blog post comparing and contrasting the various options that may be of interest.
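To check which storage driver your own installation uses (the output varies by distro and Docker version):
docker info --format '{{ .Driver }}'   # prints e.g. overlay2, btrfs, devicemapper or zfs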
