Can a Docker container call host syscalls? - docker

I'm running a Docker container (Alpine) on macOS 11.6, with a TypeScript app in that container. I need to simulate and record input on the host from inside Docker. Is it possible to set up Docker in a way that would allow my container to control the host's input, either using the Node.js osx-mouse package or by writing a Swift wrapper that creates CGEvents?

That's almost certainly not possible. In general Docker containers are prohibited from accessing the host display or other host devices. Since Docker Desktop runs a hidden Linux VM, it's especially difficult: the display technologies are totally different and the VM layer makes it look like the container and host are on physically separate systems.
As a general rule, if you need to interact with the host display or any other hardware, it's much easier to run the task outside a container.

Related

Misuse a Docker container as a VM

I've read that you shouldn't SSH into a Docker container. But why? I'd like to use a Docker container as a replacement for a normal VM. What are the disadvantages? I know that this will create a lot of layers, but I could flatten my container on a regular basis.
Can I use the container as a regular VM, and what is the "worst case" that can happen?
Docker containers are optimized around running single processes. Virtual machines are optimized around running entire operating systems.
At a technical level you generally can run something that looks like a full VM inside a Docker container, but it takes a lot of hand setup. For instance, a typical systemd setup wants to manage several host devices and kernel-level configuration options, so your choices for running systemd are either (a) let it manage the host and possibly conflict with the host's systemd, or (b) manually figure out which unit files won't run and disable them. Almost all prebuilt Docker images run only a single service (just MySQL, just Nginx, just a Python runtime, ...), so you're also giving up this ecosystem.
A VM certainly gives up some amount of efficiency by virtualizing hardware devices and running multiple OS kernels, but if you really want to run a VM, it's not a huge performance loss; just run a VM if that's the model you want to use.
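To make that concrete, here is a minimal sketch of the container-native alternative to a multi-service VM: one prebuilt single-service image per container, joined by a user-defined network (the names appnet, db, and web are placeholders):

```sh
# One container per service instead of one VM running everything.
docker network create appnet

# Database service from a prebuilt single-purpose image.
docker run -d --name db --network appnet \
  -e MYSQL_ROOT_PASSWORD=secret mysql:8

# Web server service, published on host port 8080.
docker run -d --name web --network appnet -p 8080:80 nginx
```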
No, you can't use it as a full VM replacement, since a Docker container has a single entrypoint process. You cannot run multiple services side by side the way you would on a regular virtual machine, at least not without adding a process supervisor.

I'm still confused by Docker containers and images

I know that containers are a form of isolation between the app and the host (the managed running process). I also know that container images are basically the package for the runtime environment (hopefully I got that correct). What's confusing to me is when they say that a Docker image doesn't retain state. So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart? Why would I use a database in a Docker container?
It's also difficult for me to grasp LXC. On another question page I see:
LinuX Containers (LXC) is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (LXC host).
What does that exactly mean? Does it mean I can have multiple versions of Linux running on the same host as long as the host supports LXC? What else is there to it?
LXC and Docker are quite different, but both are container technologies.
There are two broad types of containers:
1. Application containers: their main purpose is to package an application and its dependencies. These are Docker containers (lightweight containers). They run as processes on your host and need no OS image or boot-up sequence; they come and go in a matter of seconds. A Docker container is designed to run a single process or service; you can run multiple processes inside one, but it is laborious. Resources (CPU, disk, memory) are shared with the host.
2. System containers: these are fat containers, meaning they are heavy and need an OS image to launch. At the same time, they are not as heavy as virtual machines; they are very similar to VMs but differ a bit in architecture.
For example, with LXC installed and configured on an Ubuntu host, you can run a CentOS container, an Ubuntu container (of a different version), RHEL, Fedora, or any other Linux flavor on top of that Ubuntu host. You can also run multiple processes inside an LXC container. Resources are shared here as well.
So if a demanding application in one LXC container needs a lot of resources while an application in another container needs fewer, the lightly loaded container's spare resources are available to the heavily loaded one.
Answering your questions:
So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart?
You wouldn't create a database Docker image with data baked into it (this is not recommended).
You create/run a container from an image and you attach/mount data to it.
So when you stop or restart a container, the data is never lost as long as you attach it to a volume, because the volume resides somewhere other than the container itself (perhaps an NFS server, or the host).
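For example, a minimal sketch with the official postgres image, whose data directory is /var/lib/postgresql/data (the names pgdata and my-postgres are placeholders):

```sh
# Create a named volume and mount it at PostgreSQL's data directory.
docker volume create pgdata
docker run -d --name my-postgres \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  postgres

# Even after the container is removed, the data survives in the volume
# and a new container can pick it up again.
docker rm -f my-postgres
docker run -d --name my-postgres \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  postgres
```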
Does it mean I can have multiple versions of Linux running on the same host as long as the host supports LXC? What else is there to it?
Yes, you can do this. We run LXC containers in our production environment.
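As a rough sketch with the classic LXC tools on an Ubuntu host (the container name centos7 is arbitrary):

```sh
# Create a CentOS 7 container from the download template on an Ubuntu host.
sudo lxc-create -t download -n centos7 -- --dist centos --release 7 --arch amd64

# Start it and open a shell inside it.
sudo lxc-start -n centos7
sudo lxc-attach -n centos7
```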

Debugging a Go process in a container using Delve/Goland from the host

Before I burn hours trying it out, I wanted to ask the community: is this even possible?
Scenario:
Running GoLand on the host (may be any OS)
Running Go dev env in Alpine based container
Code on host volume mapped to container
Can I attach the GoLand debugger (Delve) to a Go process in the container? I'm assuming I can run Delve headless in the container and run the client on the host, punching through whatever port is required? Will I have binary compatibility issues if the host is not Linux?
I'd rather not duplicate the entire post in this answer, but have a look at this resource on how to use containers to run and debug applications you write: https://blog.jetbrains.com/go/2018/04/30/debugging-containerized-go-applications/
To answer this specifically, as long as you have Go, the application sources, and all dependencies installed on the host machine, you can develop in GoLand and then, using a mapped volume, you can also run it from the container.
However, this workflow sounds more like the workflow you'd normally have using VMs, not containers, which is why in the above article all the running/debugging is done using the actual containers, rather than using bash inside a container to run those commands.
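To sketch the headless setup from the question anyway (assuming a dev image, here called go-dev as a placeholder, that has Delve installed, e.g. via go install github.com/go-delve/delve/cmd/dlv@latest): the binary is compiled and runs entirely inside the Linux container, and GoLand talks to the headless Delve server over TCP, so a non-Linux host does not create binary compatibility issues.

```sh
# Build and run the app under a headless Delve server inside the container,
# publishing the debug port to the host.
docker run --rm -p 2345:2345 -v "$(pwd)":/app -w /app go-dev \
  dlv debug --headless --listen=:2345 --api-version=2 --accept-multiclient

# On the host: create a "Go Remote" run configuration in GoLand
# pointing at localhost:2345, then start debugging.
```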

Docker with different Container OS and Host OS

I am aware that Docker containers share the host OS. Is it possible to run two different container environments on a single host OS/machine?
Yes, this is possible. In fact, some enterprise solutions take advantage of it. Rancher, for example, provides a platform for deploying Kubernetes environments. The nodes typically run Rancher's own OS, RancherOS, in which two instances of the Docker daemon run: one for userland and one for system apps. RancherOS is unique in that it runs all essential system services as containers on the host. So when you connect to a node, you can run system-docker ps and see the state of all the services; if you run docker ps you will only see your userland containers.
Here is more information on this solution: https://rancher.com/docs/os/v1.2/en/system-services/adding-system-services/
As for doing so yourself, this is also possible and somewhat simple. Here is an example of someone doing so: https://www.jujens.eu/posts/en/2018/Feb/25/multiple-docker/
Alternatively, if you didn't want to modify your personal workstation, you can also run docker within a docker container using a project like this: https://github.com/jpetazzo/dind
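A quick sketch of that docker-in-docker approach, using the official docker:dind image (the container name inner is a placeholder; --privileged is required for the inner daemon):

```sh
# Start an inner Docker daemon in a container.
docker run --privileged -d --name inner docker:dind

# Run containers against the inner daemon, isolated from the outer one.
docker exec inner docker run --rm hello-world
docker exec inner docker ps -a
```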
Let me know if I can help you with anything else. :)

Which Docker base image should be used to install Apps in a container without any additional OS?

I am running a Docker daemon on my GUEST OS which is CentOS. I want to install software services on top of that in an isolated manner and I do not need another OS image inside my Docker container.
I want to have a Docker container with just the additional binaries and libraries for the software application I am going to install.
Is there a "whiteglove/blank" base image in Docker I can use ? I want a very lean container that uses as a starting point what my GUEST OS has to offer. Is that possible ?
What you're asking for isn't possible out-of-the-box with Docker. Each Docker image has its own root filesystem, which needs to have some sort of OS installed.
Your options are:
Use a minimal base image, such as the BusyBox image. This will give you the absolute minimum you need to get a container running (see the short example after this list).
Use the CentOS base image, in which case your container will be running the same or very similar OS.
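For a sense of scale, the BusyBox option is only a few megabytes yet still gives you a shell and core utilities (a quick sketch):

```sh
# Pull and run the tiny BusyBox image and poke around inside it.
docker run --rm -it busybox sh
```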
The reason Docker images are like this is because they're meant to be portable. Any Docker image is meant to run anywhere Docker is running, regardless of the operating system. This means that the Docker image must contain an entire root filesystem and OS installation.
What you can do if you need stuff from the host OS is share a directory using Docker volumes. However, this is generally meant to be used for mounting data directories, and it still necessitates the Docker image having an OS.
That said, if you have a statically-linked binary that has absolutely no dependencies, it becomes easy to create a very minimal image. This is called a "microcontainer", and Go in particular is well-suited to producing these. Here is some further reading on microcontainers and how to produce them.
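A minimal sketch of such a microcontainer, assuming a dependency-free Go program in main.go (the image name microapp is arbitrary):

```sh
# Two-stage build: compile a static Go binary, then copy it onto the
# empty "scratch" base image so the final image contains only the binary.
cat > Dockerfile <<'EOF'
FROM golang:1.21 AS build
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /app main.go

FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF

docker build -t microapp .
docker run --rm microapp
```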
One other option you could look into if all you want is the resource management part of containers is using lxc-execute, as described in this answer. But you lose out on all the other nice Docker features as well. Unfortunately, what you're trying to do is just not what Docker is built for.
As I understand Docker, when you use a base image you do not really install an additional OS.
It's just a directory structure with preinstalled programs; you could say it is the filesystem of an actual base-image OS.
In most cases, Docker itself (the Docker engine) runs in a Linux VM when used on Mac and Windows.
If you are confused about virtualization: there is no virtualization inside a Docker container. Containers run in user space on top of the host operating system's kernel, so the containers and the host OS share the same kernel.
So, to summarize:
Consider the host OS to be Windows or Mac.
Docker, when installed there, runs inside a Linux VM on that host OS.
The base Linux images inside Docker containers then use this Linux VM as their host OS, not the native Windows or Mac.
On Linux, the base images inside Docker containers directly use the host OS, which is Linux itself, without any virtualization.
The base image inside a Docker container is just a snapshot of that Linux distribution's programs and tools.
The base image makes use of the host kernel (which in all three cases is Linux).
Hence, there is no virtualization inside a container, but Docker can use a single parent Linux virtual machine to run the Docker engine itself inside it.
Conclusion:
When you use a base image in Docker, no additional OS is installed inside the container; just a copy of a filesystem with minimal programs and tools is created.
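You can see the shared kernel directly on a Linux host (a small sketch):

```sh
# Both commands print the same kernel version: the container has no kernel
# of its own; it uses the host's.
uname -r
docker run --rm alpine uname -r
```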
From Docker's best practices:
Whenever possible, use current Official Repositories as the basis for your image. We recommend the Debian image since it’s very tightly controlled and kept extremely minimal (currently under 100 mb), while still being a full distribution.
What you're asking for runs against the idea of Docker containers. You don't want any dependency on your GUEST OS; if you have one, your image won't be portable.
When you create a container, you want it to run on any machine that runs Docker, be it CentOS, Ubuntu, Mac, or Microsoft Azure :)
Ideally there is no advantage in your container's base OS having anything to do with your host OS.
For any container you need at least a root filesystem; that is why you need a base image that provides one. Your idea is not completely against the container usage paradigm: as opposed to VMs, we want containers to be minimal, without repeating elements they can leverage from the underlying OS.
Following Rohan Singh's links, I found some related info that doesn't generally contradict the above, but relates to the core idea of the question:
The base image for all Docker images is the scratch image. It has essentially nothing in it. This may sound useless, but you can actually use it to create the smallest possible image for your application, if you can compile your application to a static binary with zero dependencies like you can with Go or C.
