I'm still confused by Docker containers and images

I know that containers are a form of isolation between the app and the host (the managed running process). I also know that container images are basically the package for the runtime environment (hopefully I got that correct). What's confusing to me is when they say that a Docker image doesn't retain state. So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart? Why would I use a database in a Docker container?
It's also difficult for me to grasp LXC. On another question page I see:
LinuX Containers (LXC) is an operating system-level virtualization
method for running multiple isolated Linux systems (containers) on a
single control host (LXC host)
What does that mean, exactly? Does it mean I can have multiple versions of Linux running on the same host as long as the host supports LXC? What else is there to it?

LXC and Docker are quite different, but both are container technologies.
There are two types of containers:
1. Application Containers: their main purpose is to package an application and its dependencies. These are Docker containers (lightweight containers). They run as a process on your host and do the one thing you ask of them; they don't need an OS image or a boot sequence, so they come and go in a matter of seconds (see the sketch after this list). A Docker container is designed around a single process/service; you can run multiple processes inside one, but it is laborious. Host resources (CPU, disk, memory) are shared.
2. System Containers: these are fat containers, meaning they are heavier and need an OS image to launch, but they are still not as heavy as virtual machines. They are very similar to VMs but differ a bit in architecture.
With LXC installed and configured on, say, an Ubuntu host, you can run a CentOS container, an Ubuntu container (of a different version), a RHEL container, a Fedora container, or any other Linux flavour on top of that Ubuntu host. You can also run multiple processes inside an LXC container. Resources are shared here as well.
So if a large application running in one LXC container needs more resources while another application in a second LXC container needs fewer, the less demanding container will yield resources to the more demanding one.
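As a rough sketch of the application-container model (assuming Docker is installed; the image names and commands below are only illustrative), an app container wraps a single process and can come and go in seconds:

# Run one process in a throwaway Alpine container; --rm removes it as soon as the process exits.
docker run --rm alpine:3.18 echo "hello from an app container"

# A long-running app container still wraps exactly one main process (nginx here).
docker run -d --name web nginx:1.25
docker ps                    # the container shows up while its process runs
docker stop web && docker rm web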
Answering Your Question:
So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart?
You shouldn't bake data into a database image (it isn't recommended).
You create/run a container from an image and attach/mount a volume for the data.
So when you stop or restart the container, the data is never lost, because the volume lives somewhere other than the container itself (on the host, or perhaps on an NFS server).
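A minimal sketch of that, assuming the official postgres image and a named volume (the names here are only placeholders):

# Create a named volume and mount it at PostgreSQL's data directory.
docker volume create pgdata
docker run -d --name my-postgres \
  -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data \
  postgres:15

# Stopping and restarting the container does not touch the data, because it lives in the volume.
docker stop my-postgres
docker start my-postgres

# Even removing the container leaves the volume (and the data) behind.
docker rm -f my-postgres
docker volume ls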
Does it mean I can have multiple versions of Linux running on the same host as long as the host support LXC? What else is there to it?
Yes, you can do this. We run LXC containers in production ourselves.
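As a hedged sketch (assuming the lxc-utils tooling on an Ubuntu host; the distribution and release names are only examples), the download template lets you create containers for several distributions side by side:

# Create a CentOS container and a Fedora container on the same Ubuntu host.
lxc-create -n centos-test -t download -- -d centos -r 9-Stream -a amd64
lxc-create -n fedora-test -t download -- -d fedora -r 38 -a amd64

# Start them and check their state.
lxc-start -n centos-test
lxc-start -n fedora-test
lxc-ls --fancy

# Get a shell inside one of them.
lxc-attach -n centos-test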

Related

Can docker container call host syscalls?

I'm running a Docker container (Alpine) on macOS 11.6 with a TypeScript app inside it. I need to simulate and record input on the host from within Docker. Is it possible to set up Docker in a way that would allow my container to control the host's input, either using the Node.js osx-mouse package or by writing a Swift wrapper that creates CGEvents?
That's almost certainly not possible. In general Docker containers are prohibited from accessing the host display or other host devices. Since Docker Desktop runs a hidden Linux VM, it's especially difficult: the display technologies are totally different and the VM layer makes it look like the container and host are on physically separate systems.
As a general rule, if you need to interact with the host display or any other hardware, it's much easier to run the task outside a container.

Misuse Docker Container as VM

I've read that you shouldn't ssh into a Docker container. But why? I'd like to use a Docker container as a replacement for a normal VM. What are the disadvantages? I know that this will create a lot of layers, but I could flatten my container on a regular basis.
Can I use the container as a regular VM, and what is the "worst case" that can happen?
Docker containers are optimized around running single processes. Virtual machines are optimized around running entire operating systems.
At a technical level you generally can run something that looks like a full VM inside a Docker container, but it's a lot of hand setup. For instance, a typical systemd setup wants to manage several host devices and kernel-level configuration options, and your choices to run systemd are either (a) let it manage the host and possibly conflict with the host's systemd, or (b) manually figure out which unit files you can't run and disable them. All of the prebuilt Docker images run only single services (just MySQL, just Nginx, just a Python runtime, ...) and so you're also giving up this ecosystem.
A VM certainly gives up some amount of efficiency by virtualizing hardware devices and running multiple OS kernels, but if you really want to run a VM, it's not a huge performance loss; just run a VM if that's the model you want to use.
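For contrast, a rough sketch of the usual Docker pattern (the image names and ports here are only placeholders): split a multi-service system into several single-process containers and wire them together on a shared network, rather than building one VM-like container.

# One container per service, instead of one container running everything.
docker network create appnet
docker run -d --name db  --network appnet -e POSTGRES_PASSWORD=example postgres:15
docker run -d --name web --network appnet -p 8080:80 nginx:1.25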
No, you can't use it as a drop-in replacement for a VM, since a Docker container is built around a single entrypoint process. You can't run a whole suite of services the way you would on a regular virtual machine without extra setup.

Is it safe to run docker in docker on Openshift?

I built a Docker image on a server that runs CI/CD for Jenkins. Because some builds use Docker, I installed Docker inside my image, and in order to allow the inner Docker to run, I had to give it --privileged.
All works well, but I would like to run this Docker-in-Docker setup on OpenShift (or Kubernetes). The problem is getting the --privileged permission.
Is running a privileged container on OpenShift dangerous, and if so, why and how much?
A privileged container can reboot the host, replace the host's kernel, access arbitrary host devices (like the raw disk device), and reconfigure the host's network stack, among other things. I'd consider it extremely dangerous, and not really any safer than running a process as root on the host.
I'd suggest that using --privileged at all is probably a mistake. If you really need a process to administer the host, you should run it directly (as root) on the host and not inside an isolation layer that blocks the things it's trying to do. There are some limited escalated-privilege things that are useful, but if e.g. your container needs to mlock(2) you should --cap-add IPC_LOCK for the specific privilege you need, instead of opening up the whole world.
(My understanding is still that trying to run Docker inside Docker is generally considered a mistake and using the host's Docker daemon is preferable. Of course, this also gives unlimited control over the host...)
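A small sketch of that last point (the image name is a placeholder): grant only the capability the process needs rather than the whole world.

# Far too broad: effectively root-equivalent access to the host.
docker run --rm --privileged my-build-image

# Narrower: only the single capability the process actually needs (here, for mlock(2)).
docker run --rm --cap-add IPC_LOCK my-build-image

# Or drop everything and add back only what's required.
docker run --rm --cap-drop ALL --cap-add IPC_LOCK my-build-image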
In short, the answer is no, it's not safe. Docker-in-Docker in particular is far from safe due to potential memory and file system corruption, and even mounting the host's Docker socket is unsafe in practically any environment, as it effectively gives the build pipeline root privileges. This is why tools like Buildah and Kaniko were made, as well as build images like S2I.
Buildah in particular is Red Hat's own tool for building inside containers, but as of now I believe it still can't run completely privilege-less.
Additionally, on OpenShift 4 you cannot run Docker-in-Docker at all, since the runtime was changed to CRI-O.
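As a hedged sketch of the Buildah route (assuming buildah is installed and a Dockerfile sits in the current directory; the registry and image names are placeholders), the build itself doesn't need a Docker daemon:

# Build an image from the Dockerfile in the current directory, no Docker daemon involved.
buildah bud -t myapp:latest .

# Push the result to a registry.
buildah push myapp:latest docker://registry.example.com/myteam/myapp:latest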

Docker with different Container OS and Host OS

I am aware that Docker containers share the host OS. Is it possible to run two different container environments on a single host OS/machine?
Yes, this is possible. In fact, some enterprise solutions take advantage of it. Rancher, for example, provides a platform for deploying Kubernetes environments. The underlying operating system for the nodes is typically deployed as its own OS, RancherOS, in which two instances of the Docker daemon run: one for userland and one for system apps. RancherOS is unique in that it runs all essential system services as containers on the host. So when you connect to a node, you can run system-docker ps and see the state of all the system services, but if you run docker ps you will only see your userland containers.
Here is more information on this solution: https://rancher.com/docs/os/v1.2/en/system-services/adding-system-services/
As for doing so yourself, this is also possible and somewhat simple. Here is an example of someone doing so: https://www.jujens.eu/posts/en/2018/Feb/25/multiple-docker/
Alternatively, if you didn't want to modify your personal workstation, you can also run docker within a docker container using a project like this: https://github.com/jpetazzo/dind
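A minimal sketch of that last approach with the official docker:dind image (the names are placeholders; TLS is disabled only to keep the example short, and the inner daemon still needs --privileged):

# Put the inner daemon and its clients on a shared network.
docker network create dind-net

# Start an inner Docker daemon in a privileged container.
docker run -d --privileged --name inner-docker --network dind-net \
  -e DOCKER_TLS_CERTDIR="" docker:dind

# Run a client container that talks to the inner daemon instead of the host's.
docker run --rm --network dind-net -e DOCKER_HOST=tcp://inner-docker:2375 \
  docker:cli docker info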
Let me know if I can help you with anything else. :)

Sharing the host OS or sharing the Container OS

When running an Ubuntu Docker container on a Mac or another host OS,
is the Ubuntu OS really running in the container, or is it some kind of virtual interface?
From my understanding containers share the OS; I just need to understand whether it is the OS from the host or really an operating system within the container.
If a Docker container is some kind of virtual interface to the host, e.g.:
HTTP is served from the host
Files and folders are served from the host
then a Docker container could run on any host OS as long as the interface is there, right?
Thanks for any input
OK, this is from the documentation - I missed it when reading it the first time:
How does a container work?
A container consists of an operating system, user-added files, and meta-data. As we've seen, each container is built from an image. That image tells Docker what the container holds, what process to run when the container is launched, and a variety of other configuration data. The Docker image is read-only. When Docker runs a container from an image, it adds a read-write layer on top of the image (using a union file system as we saw earlier) in which your application can then run.
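You can see this for yourself (a small sketch, assuming Docker is installed): the container carries the Ubuntu userland, but the kernel is the host's (or, on Mac/Windows, the kernel of Docker Desktop's hidden Linux VM).

# The userland inside the container is Ubuntu...
docker run --rm ubuntu:22.04 cat /etc/os-release

# ...but the kernel is not part of the image; it is shared with the host / the Docker Desktop VM.
docker run --rm ubuntu:22.04 uname -r
uname -r   # compare with the kernel reported on the host itself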
