I run some Docker Windows containers. I'm searching for a way to back up these containers while they're running. But when I try the standard ways to back up containers, I get errors like these:
PS C:\Users\roza> docker commit 908d6334d554
Error response from daemon: windows does not support commit of a running container
PS C:\Users\roza> docker export 908d6334d554 -o tar.tar
Error response from daemon: the daemon on this platform does not support export of a container
Why can't I commit/export running Windows containers?
Is there some (maybe non-standard and very tricky, maybe using external tools) way to create a backup of such containers?
This may not be what you want to hear but...
In the container world, backing up running containers should not be required. If you lose something when the container exits, then the image should be better segmented. Anything that must survive after the container is killed (logs, assets, or even temp folders) should be mapped as volumes. That will give you greater control over backups.
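For instance, with a Windows container you can map a host folder into the container so that whatever is written there outlives the container; a minimal sketch, where the image name and both paths are just placeholders:

    # anything the app writes to C:\app\logs inside the container lands in C:\backup\applogs on the host
    docker run -d -v C:\backup\applogs:C:\app\logs mywindowsimage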
A commit of a Windows container also involves stopping it first, then committing. Another limitation is that VSS-based apps won't interoperate with containers. As the earlier answer suggested, the standard approach for containers is to simply spin up a new container from an image.
Windows images from Microsoft (which is all Windows images) are licensed, and I believe part of that licensing means you cannot export the image. The lack of pause/unpause is because of the underlying implementation: Linux implements pause with cgroups, which don't exist on Windows. Only Windows Hyper-V containers support pause, because they use a Hyper-V command to implement it.
That said, backing up anything in docker involves backing up 3 things:
the image registry server
the configuration for the container, preferably a docker-compose.yml file
the volume data
You don't back up the containers themselves; they are ephemeral, treated like cattle. The volume data will be a filesystem directory, and you'll use your standard backup tools on that directory. If you cannot back up while your container is running, stop the container first and restart it after the backup is complete.
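A minimal sketch of that workflow, where the container and volume names are just placeholders:

    # find the directory that backs the named volume
    docker volume inspect --format "{{ .Mountpoint }}" myvolume
    # stop the container, back up that directory with your usual tools, then restart
    docker stop mycontainer
    docker start mycontainer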
I have a server that's been running in Docker on CoreOS. For some reason containerd has stopped running and the Docker daemon has stopped working correctly. My efforts to debug haven't gotten far. I'd like to just boot a new instance and migrate, but I'm not sure I can back up my volume without a working Docker service. Is it possible to back up my volume without using Docker?
Most search results assume a running docker system, and don't work in this case.
By default, Docker volumes are stored in /var/lib/docker/volumes. Since you don't have a working Docker setup, you might have to dive into the subfolders to figure out which volume you're concerned with, but that should at least give you a start. If it's helpful, in a working Docker environment you can inspect Docker volumes as outlined here and get all the information you would need to carry this out.
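A minimal sketch of backing up such a directory without the Docker daemon; the volume name is just a placeholder, and for volumes created with the default local driver the data usually sits under a _data subdirectory:

    # back up everything under the volumes directory
    tar -czf all-volumes.tar.gz -C /var/lib/docker/volumes .
    # or, once you've identified the volume you care about:
    tar -czf myvolume.tar.gz -C /var/lib/docker/volumes/myvolume/_data .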
I built a Docker image on a server that runs CI/CD for Jenkins. Because some builds use Docker, I installed Docker inside my image, and in order to allow the inner Docker to run, I had to give it --privileged.
All works well, but I would like to run this Docker-in-Docker setup on OpenShift (or Kubernetes). The problem is getting the --privileged permission.
Is running a privileged container on OpenShift dangerous, and if so, why and how much?
A privileged container can reboot the host, replace the host's kernel, access arbitrary host devices (like the raw disk device), and reconfigure the host's network stack, among other things. I'd consider it extremely dangerous, and not really any safer than running a process as root on the host.
I'd suggest that using --privileged at all is probably a mistake. If you really need a process to administer the host, you should run it directly (as root) on the host and not inside an isolation layer that blocks the things it's trying to do. There are some limited escalated-privilege things that are useful, but if, e.g., your container needs to mlock(2), you should --cap-add IPC_LOCK for the specific privilege you need, instead of opening up the whole world.
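A minimal sketch of granting only that one capability; the image name is just a placeholder:

    # grant only the capability the workload actually needs
    docker run --cap-add IPC_LOCK myimage
    # instead of handing over the whole host:
    # docker run --privileged myimage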
(My understanding is still that trying to run Docker inside Docker is generally considered a mistake and using the host's Docker daemon is preferable. Of course, this also gives unlimited control over the host...)
In short, the answer is no, it's not safe. Docker-in-Docker in particular is far from safe due to potential memory and file-system corruption, and even mounting the host's Docker socket is unsafe in virtually any environment, as it effectively gives the build pipeline root privileges. This is why tools like Buildah and Kaniko were made, as well as build images like S2I.
Buildah in particular is Red Hat's own tool for building inside containers, but as of now I believe it still can't run completely privilege-less.
Additionally, on OpenShift 4 you cannot run Docker-in-Docker at all, since the runtime was changed to CRI-O.
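For comparison, a minimal sketch of building without the Docker daemon using Buildah; the image name and registry are just placeholders:

    # build from the Dockerfile in the current directory
    buildah bud -t myimage .
    # push the result to a registry
    buildah push myimage docker://registry.example.com/myimage:latest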
I know that containers are a form of isolation between the app and the host (the managed running process). I also know that container images are basically the package for the runtime environment (hopefully I got that correct). What's confusing to me is when they say that a Docker image doesn't retain state. So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart? Why would I use a database in a Docker container?
It's also difficult for me to grasp LXC. On another question page I see:
LinuX Containers (LXC) is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (LXC host)
What does that exactly mean? Does it mean I can have multiple versions of Linux running on the same host as long as the host support LXC? What else is there to it?
LXC and Docker are quite different, though both are container technologies.
There are two types of containers:
1. Application containers: their main purpose is to ship an application and its dependencies. These are Docker containers (lightweight containers). They run as a process on your host and do whatever you need done. They don't need a full OS image or a boot-up phase; they come and go in a matter of seconds. Running multiple processes/services inside a single Docker container is possible, but it is laborious and usually discouraged. Here, resources (CPU, disk, memory) are shared with the host.
2. System containers: these are fat containers, meaning they are heavy and need OS images to launch themselves. At the same time they are not as heavy as virtual machines; they are very similar to VMs but differ a bit in architecture.
For example, take Ubuntu as the host machine: if you have LXC installed and configured on your Ubuntu host, you can run a CentOS container, an Ubuntu container (of a different version), a RHEL, a Fedora, or any other Linux flavour on top of that Ubuntu host. You can also run multiple processes inside an LXC container. Here too, resources are shared.
So if you have a huge application running in one LXC container that requires more resources, and another application in a different LXC container that requires fewer, the container with the smaller requirement shares its resources with the container that needs more.
Answering your questions:
So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart?
You shouldn't create a database Docker image with data baked into it (this is not recommended).
You run/create a container from an image and you attach/mount data to it.
So when you stop/restart a container, the data is never lost if you attach it to a volume, because the volume resides somewhere other than the Docker container (maybe an NFS server, or the host itself).
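A minimal sketch with the official postgres image; the volume name, container name, and password are just placeholders:

    # create a named volume and mount it at the image's data directory
    docker volume create pgdata
    docker run -d --name my-postgres -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres
    # stopping and restarting the container keeps the data, because it lives in the pgdata volume
    docker stop my-postgres
    docker start my-postgres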
Does it mean I can have multiple versions of Linux running on the same host as long as the host support LXC? What else is there to it?
Yes, you can do this. We are running LXC containers in production.
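A minimal sketch of launching a CentOS container on an Ubuntu host with the classic LXC tools; the container name, distribution, and release are just placeholders:

    # create a CentOS 7 container from the download template, then start and enter it
    lxc-create -t download -n centos-ct -- --dist centos --release 7 --arch amd64
    lxc-start -n centos-ct
    lxc-attach -n centos-ct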
I'm trying to wrap my head around Docker containers, specifically how to deploy them to a Docker container host. I know there are lots of options here and ultimately we'll switch to a more common deployment approach (e.g. to Azure, AWS), but this is a temporary requirement. We're using Windows containers.
I have a container image that I've created and will be recreated on each build as part of a Jenkins job (our Jenkins instance is hosted on a container-ready Windows Server 2016 box). I also have a separate container-ready Windows Server 2016 box which is where we intend to run the containers from.
However, I'm not sure how I can have the containers that our Jenkins box produces automatically pushed to our separate 2016 host. Ideally, I'd like to avoid using a container registry, unless there is a low-friction, on-premise option available.
Container registries are the way to distribute Docker images. Tooling is built around registries, it would be counterproductive to work against the concept.
But docker image save and docker image load could get you started, as they let you save the image as a tar file that you can transfer between the hosts. Once you've copied the image to the other box and loaded it, you can start it up with the usual docker run command, or docker compose up.
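A minimal sketch of that transfer, where the image name and file paths are just placeholders:

    # on the Jenkins box, after the build:
    docker image save myapp:latest -o myapp.tar
    # copy myapp.tar to the target host (file share, scp, etc.), then on that host:
    docker image load -i myapp.tar
    docker run -d --name myapp myapp:latest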
If your case is not trivial though, and you start having multiple Docker hosts to run the containers, container orchestrators like Docker Swarm or Kubernetes are the way to go, or the managed versions of those, like Azure ACS. That rabbit hole is deeper, though, than I can cover in a single SO answer :)
Maybe I missed something in the Docker documentation, but I'm curious and can't find an answer:
What mechanism is used to restart docker containers if they should error/close/etc?
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which complete an application without using docker compose? (as they say it is not production ready)
What mechanism is used to restart docker containers if they should error/close/etc?
Docker restart policies, as set with the --restart option to docker run. From the docker-run(1) man page:
--restart=""
Restart policy to apply when a container exits (no, on-fail‐
ure[:max-retry], always)
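For example, a couple of invocations; the image name is just a placeholder:

    # restart on non-zero exit, at most 5 times
    docker run -d --restart on-failure:5 myimage
    # always restart, including after the daemon or host comes back up
    docker run -d --restart always myimage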
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which complete an application without using docker compose?
Well, you can of course use docker-compose if that is the best match for your requirements, even if it is not labelled as "production ready".
You can investigate larger container management solutions like Kubernetes or even OpenStack (although I would not recommend the latter unless you are already familiar with OpenStack).
You could craft individual systemd unit files for each container.
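A minimal sketch of such a unit file, assuming the container name, image name, and host path (myapp, myimage, /srv/myapp/data) are placeholders:

    [Unit]
    Description=myapp container
    Requires=docker.service
    After=docker.service

    [Service]
    Restart=always
    # remove any stale container from a previous run, ignoring errors
    ExecStartPre=-/usr/bin/docker rm -f myapp
    # run in the foreground so systemd tracks the container's lifetime
    ExecStart=/usr/bin/docker run --name myapp -v /srv/myapp/data:/data myimage
    ExecStop=/usr/bin/docker stop myapp

    [Install]
    WantedBy=multi-user.target

Assuming you save it as /etc/systemd/system/myapp.service, you would enable it with systemctl enable --now myapp.service.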