Should node-exporter run from the host or a container?

There is a very simple question: what is the best place to run node-exporter from? Directly on the host, or from a container?
What are the pros and cons of both solutions? What do the developers recommend as best practice? It is not clear to me from the usage guidelines!

I would definitely say on the host. This is the recommended way, because node exporter needs access to certain metrics which are not available within the container.
It's true that you still have access to various host metrics even when running in a container if you expose /proc and /sys, but you nonetheless run the risk of scraping container-related metrics instead of the host's.
One example is the network-related metrics. By default containers run in their own network namespace (and this is how you'd want them to run under normal circumstances), so given this default you're going to scrape information related only to that container instead of the host, despite the fact that you're exposing the aforementioned pseudo-filesystems.
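For completeness, if you do run node exporter in a container, the commonly documented pattern is to give it the host's PID and network namespaces and bind-mount the host's root filesystem read-only, so it reports host metrics rather than container metrics. A minimal sketch along those lines (the image tag and mount paths may differ in your setup):

# share the host's network and PID namespaces and expose the host filesystem read-only
docker run -d \
  --net="host" \
  --pid="host" \
  -v "/:/host:ro,rslave" \
  quay.io/prometheus/node-exporter:latest \
  --path.rootfs=/host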

Related

Read host's ifconfig in the running Docker container

I would like to read the host's ifconfig output while the Docker container is running, so I can parse it, get the OpenVPN interface (tap0) IP address, and process it within my application.
Unfortunately, passing this value via an environment variable doesn't work for my case, because the IP address can change while the container is running, and I don't want to restart my application container each time just to see the new value.
My current working solution is a cron job on the host that writes the IP into a file on a shared volume, which the container then reads - but I am looking for a better solution, as this feels like a workaround. There was also a plan to create a new container with network: host, which can see the host's interfaces - it works, but it also looks like a workaround, as it involves many steps and probably security issues.
Is there any valid and cleaner way to achieve my goal - reading the host's ifconfig output in a Docker container in real time?
A specific design goal of Docker is that containers can’t directly access the host’s network configuration. The workarounds you’ve identified are pretty much the only ways to do this.
If you’re trying to modify the host’s network configuration in some way (you’re trying to actually run a VPN, for example) you’re probably better off running it outside of Docker. You’ll still need root permission either way, but you won’t need to disable a bunch of standard restrictions to do what you need.
If you’re trying to provide some address where the service can be reached, using configuration like an environment variable is required. Even if you could access the host’s configuration, this might not be the address you need: consider a cloud environment where you’re running on a cloud instance behind a load balancer, and external clients need the load balancer; that’s not something you can directly know given only the host’s network configuration.
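For reference, the network-namespace workaround mentioned in the question looks roughly like this; the interface name tap0 comes from the question, and busybox's ifconfig is just one way to read it, so treat this as a sketch rather than a recommendation:

# the helper container joins the host's network namespace, so it sees tap0 directly
docker run --rm --network host alpine ifconfig tap0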

Sharing Memory across Docker containers: '--ipc=host' vs. '--ipc=shareable'

I'm setting up two docker containers - one as a server to hold data in memory, and the other as a client to access that data. In order to do so, I believe I need to use the --ipc flag to share memory between the containers. The Docker documentation explains the --ipc flag pretty well. What makes sense to me according to the documentation is running:
docker run -d --ipc=shareable data-server
docker run -d --ipc=container:data-server data-client
But all of the Stackoverflow questions I've read (1, 2, 3, 4) link both containers directly to the host:
docker run -d --ipc=host data-server
docker run -d --ipc=host data-client
Which is more appropriate for this use case? If ipc=host is better, when would you use ipc=shareable?
From the docs:
--ipc="MODE" : Set the IPC mode for the container
"shareable": Own private IPC namespace, with a possibility to share it with other containers.
"host": Use the host system’s IPC namespace.
The difference between shareable and host is whether the host can access the shared memory.
An IPC (POSIX/SysV IPC) namespace provides separation of named shared memory segments, semaphores and message queues. Because of this, there should be no difference in performance between the two modes.
Shared memory is commonly used by databases and custom-built (typically C/OpenMPI, C++/using boost libraries) high performance applications for scientific computing and financial services industries.
Considering the security of the service, using host exposes the IPC namespace to attackers who have control of the host machine. With shareable, the IPC namespace is only accessible inside the containers, which helps contain any attack. The host mode exists to allow cooperation between a container and its host.
It's often difficult to know all the details of the environment and requirements of the asker, so host tends to be the most commonly recommended because it is easiest to understand and configure.
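To make the difference concrete, here is a minimal sketch using the image names from the question; checking ipcs on the host is just one way to see where the shared memory segments end up:

# the server owns a private-but-shareable IPC namespace; the client joins it by container name
docker run -d --name data-server --ipc=shareable data-server
docker run -d --name data-client --ipc=container:data-server data-client
# with --ipc=host on both containers instead, shared memory segments created by
# either of them would also be visible to ipcs run on the host itself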

Can a docker instance cause harm to the host?

Is the docker host completely protected from anything the docker instance can do?
As long as you don't expose a volume to the docker instance, are there other ways it can actually connect into the host and 'hack' it?
For example, say I allow customers to run code inside of a server that I run. I want to understand the potential security implications of allowing a customer to run arbitrary code inside of a Docker instance.
All processes inside Docker are isolated from the host machine. They cannot, by default, see or interfere with other processes. This is guaranteed by the process namespaces used by Docker.
As long as you don't mount crucial stuff (example: docker.sock) onto the container, there are no security risks associated with running a container, and even with allowing code execution inside the container.
For a list of security features in docker, check Docker security.
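As an illustration of why that matters, mounting the Docker socket effectively hands the container full control of the host's Docker daemon, so never do this for untrusted code (the image name below is a placeholder):

# DANGEROUS for untrusted code: the container can now talk to the Docker daemon,
# start privileged containers, mount host paths, and so on
docker run -d -v /var/run/docker.sock:/var/run/docker.sock some-untrusted-image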
The kernel is shared between the host and the docker container. This is less separation than, say, a VM has.
Running any untrusted container is NOT SECURE. There are kernel vulnerabilities that can be abused, and ways to break out of containers.
That's why it's a best practice, for example, to either not use the root user in containers or to have a separate user namespace for containers.
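A minimal sketch of what that looks like in practice, assuming a workload that doesn't need root or extra capabilities (the UID/GID and image are placeholders):

# run as a non-root user, drop all Linux capabilities, and forbid privilege escalation
docker run --rm \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  alpine id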

Running multiple docker containers in same host

I am new to Docker and have a question. Based on my understanding, Docker helps to create a container for the application we want to deploy, along with the application's dependencies.
My question is: if I have a web application inside a Docker container, is it possible to run multiple containers inside a single host? If yes, how will I make sure each request is directed to the right app?
Will there be any change in performance depending on the number of cores on the host?
Is it possible to run multiple containers inside single host?
Yes, you can run many.
If yes, how will I direct requests to the right container?
You have many options, the simplest is just to run the container with port forwarding (which is built in to docker), but you could also run a load balancer or proxy on the host.
Will there be any change in performance depending on number of core of host?
There can be, of course. It depends on whether or not you're already reaching a performance bottleneck of some sort before adding another container. All the containers are making use of the same hardware.
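A quick sketch of the port-forwarding option (the image and ports are just examples): each container listens on port 80 internally but is published on a different host port, and a reverse proxy such as nginx or Traefik on the host could route by hostname instead.

# two web app containers on one host, each published on its own host port
docker run -d --name app1 -p 8080:80 nginx
docker run -d --name app2 -p 8081:80 nginx
# http://<host>:8080 reaches app1, http://<host>:8081 reaches app2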

How do I do docker clustering or hot copy a docker container?

Is it possible to hot-copy a Docker container, or to do some sort of clustering with Docker for HA purposes?
Can someone simplify this?
How to scale Docker containers in production
Docker containers are not designed to be VMs and are not really meant for hot-copies. Instead you should define your container such that it has a well-known start state. If the container goes down, the alternate should start from that well-known start state. If you need to keep track of state that the container generates at run time, this has to be done externally to Docker.
One option is to use volumes to mount the state (files) on to the host filesystem. Then use RAID, NFS or any other means to share that file system with other physical nodes. Then you can mount the same files on to a second docker container on a second host with the same state.
Depending on what you are running in your containers, you can also handle state sharing inside your containers, for example using MongoDB replica sets. To reiterate though, containers are not as of yet designed to be migrated with runtime state.
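A minimal sketch of the volume approach described above, assuming /shared/app-state is a path both hosts can reach (for example over NFS); the image name and paths are placeholders:

# host A: keep runtime state outside the container on the shared path
docker run -d --name app-a -v /shared/app-state:/var/lib/app my-app-image
# host B: a replacement container starts from the same well-known state
docker run -d --name app-b -v /shared/app-state:/var/lib/app my-app-image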
There is a variety of technologies around Docker that could help, depending on what you need HA-wise.
If you simply wish to start a stateless service container on a different host, you need a network overlay, such as Weave.
If you wish to replicate data across for something like database failover, you need a storage solution, such as Flocker.
If you want to run multiple services, have load-balancing, and not worry about which host each container runs on as long as X instances are up, then Kubernetes is the kind of tool you need.
It is possible to make many Docker-related tools work together; we have a few stories on our blog already.
