Perhaps a silly question:
In a Kubernetes deployment (or minikube), when a pod container crashes, I would like to analyze the file system at that moment. That way I could see core dumps or any other useful information.
I know that I could mount a volume or PVC to get core dumps from a host-defined core pattern location, and I could also get logs by means of an rsyslog sidecar or some other way, but I would still like to do "post-mortem" analysis if possible. I assume that Kubernetes should provide some mechanism for these forensics tasks (though I don't know how, which is the reason for my question), making life easier for all of us, because in a production system we may need to analyze killed/exited containers.
I tried playing directly with docker run without the --rm option, but I can't get anything useful from inspecting the exited container, nor recreate the file system as it was in the last moment the container was alive.
Thank you very much!
When a pod container crashes, I would like to analyze the file system at that moment.
Pods (containers) natively use non-persistent storage.
When a container exits or is terminated, so is the container's storage.
A pod (container) can be connected to external storage. This allows persistent data to be stored (you can configure a volume mount as the path for core dumps, etc.); since that external storage is not removed when the container is stopped or killed, it gives you more flexibility to analyze the file system afterwards. You can back the container's file system storage with commonly used file systems such as NFS, etc.
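For example, a minimal pod sketch along these lines (all names, the image, and the paths are placeholders, and it assumes the node's kernel.core_pattern has been pointed at /var/coredumps):

```yaml
# Hypothetical sketch only: assumes the node's kernel.core_pattern writes
# dumps under /var/coredumps; names, image, and paths are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: crashy-app
spec:
  containers:
  - name: app
    image: registry.example.com/crashy-app:latest  # placeholder image
    volumeMounts:
    - name: coredumps
      mountPath: /var/coredumps      # dumps land here and outlive the container
  volumes:
  - name: coredumps
    hostPath:                        # a PVC or NFS volume works the same way
      path: /var/coredumps
      type: DirectoryOrCreate
```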
I am working on a DevOps project. I want to find the perfect solution.
I have a conflict between two solutions: should I keep the application code inside the Docker image, or in volumes?
Your code should almost never be in volumes, developer-only setups aside (and even then). This is doubly true if you have something like the common developer-only Node setup that puts the node_modules directory into a Docker-managed anonymous volume: since Docker will refuse to update that directory on its own, the primary effect of this is to cause Docker to ignore any changes to the package.json file.
More generally, in this context, you should think of the image as a way to distribute the application code. Consider clustered environments like Kubernetes: the cluster manager knows how to pull versioned Docker images on its own, but you need to work around a lot of the standard machinery to try to push code into a volume. You should not need to both distribute a Docker image and also separately distribute the code in the image.
I'd suggest using host-directory mounts for injecting configuration files and for storing file-based logs (if the container can't be configured to log to stdout). Use either host-directory or named-volume mounts for stateful containers' data (host directories are easier to back up, named volumes are faster on non-Linux platforms). Do not use volumes at all for your application code or libraries.
(Consider that, if you're just overwriting all of the application code with volume mounts, you may as well just use the base node image and not build a custom image; and if you're doing that, you may as well use your automation system (Salt Stack, Ansible, Chef, etc.) to just install Node and ignore Docker entirely.)
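As a rough illustration of that split (the service, image, and paths below are invented for the example, not taken from the question), a Compose file might look like:

```yaml
# Sketch of the suggested layout: code ships in the image; volumes carry
# only configuration and state. All names below are placeholders.
version: "3.8"
services:
  app:
    image: registry.example.com/my-node-app:1.2.3  # application code baked in at build time
    volumes:
      - ./config/app.yaml:/app/config/app.yaml:ro  # injected configuration file
      - ./logs:/app/logs                           # file-based logs, if any
      - app-data:/app/data                         # persistent state only, never code
volumes:
  app-data:
```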
So I have a working Dask/SLURM cluster of 4 Raspberry Pis with a common NFS share, on which I can run Python jobs successfully.
However, I want to add some more ARM devices to my cluster that do not support NFS mounts (kernel module missing), so I wish to move to FUSE-based FTP mounts with CurlFtpFS.
I have set up the mounts successfully with an anonymous username and without any passwords, and the common FTP share can be seen by all the nodes (just as before, when it was an NFS share).
I can still run SLURM jobs (since they do not use the share), but when I try to run a Dask job the master node times out, complaining that no worker nodes could be started.
I am not sure what exactly the problem is, since the share is open to anyone for read/write access (e.g. logs and Dask queue intermediate files).
Any ideas how I can troubleshoot this?
I don't believe anyone has a cluster like yours!
At a guess, the filesystem access via FUSE, FTP, and the Pi is much slower than the OS is expecting, and you are seeing the effects of low-level timeouts, i.e., from Dask's point of view it appears that file reads are failing. Dask needs access to storage for configuration and sometimes temporary files. You would want to make sure that these locations are on local storage or turned off. However, if this is happening during import of modules, which you have on the shared drive by design, there may be no fixing it (Python loads many small files during import). Why not use rsync to move the files to the nodes?
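If the problem is the temporary/config files rather than imports, something along these lines might help (untested in this exact setup; the paths are placeholders):

```yaml
# Hedged sketch (e.g. ~/.config/dask/dask.yaml on each node): keep Dask's
# scratch space on local disk instead of the CurlFtpFS mount, and loosen
# the connect timeouts while debugging. Paths are placeholders.
temporary-directory: /tmp/dask-scratch   # local storage, not the FTP share
distributed:
  comm:
    timeouts:
      connect: 60s   # default is 30s; a slow FUSE/FTP link may need more
      tcp: 60s
```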
We are running a pod in Kubernetes that needs to load a file during runtime. This file has the following properties:
It is known at build time
It should be mounted read-only by multiple pods (the same kind)
It might change (externally to the cluster) and needs to be updated
For various reasons (security being the main concern) the file cannot be inside the Docker image
It is potentially quite large, theoretically up to 100 MB, but in practice between 200 kB and 10 MB.
We have considered various options:
Creating a persistent volume, mounting the volume in a temporary pod to write (update) the file, unmounting the volume, and then mounting it in the service with ROX (ReadOnlyMany) claims. This solution means we need downtime during upgrades, and it is hard to automate (due to timings).
Creating multiple secrets using the secrets management of Kubernetes, and then "assembling" the file before loading it in an init container or something similar.
Both of these solutions feel a little bit hacky - is there a better solution out there that we could utilize for solving this?
You need to use a shared filesystem that supports read/write access by multiple pods (the ReadWriteMany or ReadOnlyMany access modes).
Here is a link to the CSI drivers which can be used with Kubernetes and provide those access modes:
https://kubernetes-csi.github.io/docs/drivers.html
Ideally, you want a solution that is not an appliance and can run anywhere, meaning in the cloud or on-prem.
The platforms that could work for you are Ceph, GlusterFS, and Quobyte (disclaimer: I work for Quobyte).
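For instance, once such a driver is installed, the claim itself is just a ReadWriteMany (or ReadOnlyMany) PVC; the storage class name below is a placeholder for whatever class your Ceph/GlusterFS/Quobyte deployment exposes:

```yaml
# Illustrative only: a shared claim the updater can write and the service
# pods can mount read-only. storageClassName and the name are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-runtime-file
spec:
  accessModes:
    - ReadWriteMany          # writable by the updater, readable by the service pods
  storageClassName: csi-shared-fs
  resources:
    requests:
      storage: 1Gi
```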
I’m looking for a really simple, lightweight way of persisting logs from a Docker container running in Kubernetes. I just want the stdout (and stderr, I guess) to go to persistent disk; I don’t want anything else as part of this for analysing the logs, sending them over the internet to a third party, etc.
Having done some reading I’ve been considering a DaemonSet with the application container, but then another container which has /var/lib/docker/containers mounted and also a persistent volume (maybe NFS) mounted too. That container would then need a way to copy logs from the default docker JSON logging driver in /var/lib/docker/containers to the persistent volume, maybe rsync running regularly.
Would that work? (Presumably if the rsync container goes down it's going to miss stuff because nothing's queuing; perhaps that's OK rather than trying to queue potentially huge amounts of logs.) Is this a sensible approach for the desired outcome? It’s only for one or two containers, if that makes a difference. Thanks.
Fluentd supports a simple file output plugin (https://docs.fluentd.org/output/file) which you can easily aim at a PersistentVolume mount. Otherwise you would configure Fluentd (or Fluent Bit if you prefer) just like normal for Kubernetes, so find your favorite guide and follow it.
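As a rough sketch of what that could look like (the mount path and names are made up, and this is only the output fragment of a normal Fluentd-on-Kubernetes setup), the file output might be carried in a ConfigMap like:

```yaml
# Sketch only: a Fluentd output fragment writing to a directory that is
# backed by a PersistentVolume mounted into the Fluentd pod at /persisted-logs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-file-output
data:
  output.conf: |
    <match **>
      @type file
      path /persisted-logs/${tag}/app   # PV-backed directory
      append true
      <buffer tag,time>
        timekey 1h        # roll files hourly
        timekey_wait 10m
      </buffer>
    </match>
```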
I was asked this question in an interview and I'm not sure of the correct answer, hence I would like your suggestions.
I was asked whether we should persist production-critical data inside the Docker instance or outside of it. What would be my choice, and the reasons for it?
Would your answer differ in case we have non-prod, non-critical data?
Back your answers with reasons.
Most data should be managed externally to containers and container images. I tend to view data constrained to a container as temporary (intermediate/discardable). Otherwise, if it's being captured but it's not important to my business, why create it?
The name "container" is misleading. Containers aren't like VMs where there's a strong barrier (isolation) between VMs. When you run multiple containers on a single host, you can enumerate all their processes using ps aux on the host.
There are good arguments for maintaining separation between processes and data and running both within a single container makes it more challenging to retain this separation.
Unlike processes, files in container layers are more isolated though. Although the layers are manifest as files on the host OS, you can't simply ls a container layer's files from the host OS. This makes accessing the data in a container more complex. There's also a performance penalty for effectively running a file system atop another file system.
While it's common and trivial to move container images between machines (viz docker push and docker pull), it's less easy to move containers between machines. This isn't generally a problem for moving processes as these (config aside) are stateless and easy to move and recreate, but your data is state and you want to be able to move this data easily (for backups, recovery) and increasingly to move amongst a dynamic pool of nodes that perform processing upon it.
Less importantly but not unimportantly, it's relatively easy to perform the equivalent of a rm -rf * with Docker by removing containers (docker container rm ...) and thereby deleting the application and your data.
The two most basic considerations you should have here:
Whenever a container gets deleted, everything in the container filesystem is lost.
It's extremely common to delete containers; it's required to change many startup options or to update a container to a newer image.
So you don't really want to keep anything "in the container" as its primary data storage: it's inaccessible from outside the container, and will get lost the next time there's a critical security update and you must delete the container.
In plain Docker, I'd suggest keeping
...in the image: your actual application (the compiled binary or its interpreted source as appropriate; this does not go in a volume)
...in the container: /tmp
...in a bind-mounted host directory: configuration files you need to push into the container at startup time; directories of log files produced by the container (things where you as an operator need to directly interact with the files)
...in either a named volume or bind-mounted host directory: persistent data the container records in the filesystem
On this last point, consider trying to avoid this layer altogether; keeping data in a database running "somewhere else" (it could be another container, a cloud service like RDS, ...) simplifies things like backups and running multiple replicas of the same service. A host directory is easier to back up, but on some environments (macOS) it's unacceptably slow.
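Putting those pieces together in Compose form (every name and path here is invented for illustration):

```yaml
# Sketch of the layout above (docker-compose.yml); all names are placeholders.
version: "3.8"
services:
  app:
    image: registry.example.com/my-app:1.0.0   # application code lives in the image
    volumes:
      - ./config:/etc/my-app:ro                # bind-mounted configuration
      - ./logs:/var/log/my-app                 # bind-mounted log directory
      # /tmp stays in the container and is lost on deletion, which is fine
  db:
    image: postgres:16                         # the database running "somewhere else"
    volumes:
      - db-data:/var/lib/postgresql/data       # persistent data in a named volume
volumes:
  db-data:
```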
My answers don't change here for "production" vs. "non-production" or "critical" vs. "non-critical", with limited exceptions you can justify by saying "it's okay if I lose this data" ("because it's not the master copy of it").