I'm setting up two docker containers - one as a server to hold data in memory, and the other as a client to access that data. In order to do so, I believe I need to use the --ipc flag to share memory between the containers. The Docker documentation explains the --ipc flag pretty well. What makes sense to me according to the documentation is running:
docker run -d --name data-server --ipc=shareable data-server
docker run -d --ipc=container:data-server data-client
But all of the Stack Overflow questions I've read (1, 2, 3, 4) link both containers directly to the host:
docker run -d --ipc=host data-server
docker run -d --ipc=host data-client
Which is more appropriate for this use case? If ipc=host is better, when would you use ipc=shareable?
From the docs:
--ipc="MODE" : Set the IPC mode for the container
"shareable": Own private IPC namespace, with a possibility to share it with other containers.
"host": Use the host system’s IPC namespace.
The difference between shareable and host is whether the host can access the shared memory.
An IPC (POSIX/SysV IPC) namespace provides separation of named shared memory segments, semaphores and message queues. Since the namespace only controls which of these objects a process can see, not how they are accessed, there should be no performance difference between the two modes.
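A quick way to see this separation, assuming a container named data-server as in the question and that the ipcs tool is available in the image:
docker exec data-server ipcs -m   # SysV shared memory segments inside the server's IPC namespace
ipcs -m                           # on the host: with --ipc=shareable, the server's segments are not listed here
With --ipc=host, both commands would list the same segments.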
Shared memory is commonly used by databases and by custom-built high-performance applications (typically C with OpenMPI, or C++ with Boost libraries) in the scientific-computing and financial-services industries.
Considering the security of the service: with host, the container shares the host's IPC namespace, so its shared memory is exposed to anything running on the host (and a compromised container can reach the host's IPC objects). With shareable, the IPC namespace is only reachable from inside the participating containers, which helps contain an attack. The host mode exists to allow cooperation between a container and its host.
It's often difficult to know all the details of the environment and requirements of the asker, so host tends to be the most commonly recommended because it is easiest to understand and configure.
Related
When running a container we can specify --net=host to enable host networking, which lets the container share the host's networking namespace. But what is the practical use case for this?
I've found it useful in two situations:
You have a server process that listens on a very large number of ports, or does not use a consistent port, so the docker run -p option is impractical or impossible.
You have a process that needs to examine or manage the host network environment. (For example, its wire protocol depends on sending the host's IP address, or it's a service-discovery system and you want it to advertise both Docker and non-Docker services running on the host.)
Host networking disables one of Docker's important isolation systems. If you run a container with host networking, you can't use features like port remapping and you can't accept inbound connections from other containers using the container name as a host name. In both of the cases above, running the server outside Docker might be more appropriate.
In SO questions I frequently see --net=host suggested as a hack to get around programs that have 127.0.0.1 hard-coded as the location of a database or another external resource. This usually isn't necessary; adding a layer of configuration (environment variables work well) and using the standard Docker networking setup is better practice.
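A minimal sketch of that better practice, assuming a hypothetical application image (my-app) that reads its database host from an environment variable, with a Postgres container standing in for the external resource:
docker network create appnet
docker run -d --network appnet --name db -e POSTGRES_PASSWORD=example postgres:16
docker run -d --network appnet --name app -e DATABASE_HOST=db my-app
Inside the app, reading DATABASE_HOST instead of hard-coding 127.0.0.1 lets Docker's embedded DNS resolve the db container by name.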
I conduct security testing on programs and applications. Often, when I want to run a tool, I need to add libraries and other dependencies onto the system that I'm testing in order for the tool to work properly. Also, the test environment doesn't have internet access, which makes installing these dependencies more difficult.
I was thinking about containerizing multiple tools so that I could put them onto the systems that I'm testing and they will have all of their dependencies. I am considering doing this in Docker, so anytime you see a reference to a container it implies that it's a Docker container.
Some of the tools I would like to use are nmap, strace, wireshark, and others for monitoring network traffic, processes and memory.
My questions are:
Can I run these tools "locally", or will there be networking required, as though they are coming from a different machine?
Is there anything required to put onto the test system for the container to run properly?
Docker allows you to connect to devices on the host itself using the --privileged flag.
Additionally, you can add the --network=host flag, which means your container's network stack is no longer isolated from the host's.
Using the flags above should allow you to run your tools in Docker without additional requirements on the host (other than a Docker runtime plus your container image).
However, it does mean you need to get your Docker image onto the host. Normally pulling an image requires network access, but you can tar an image with docker save and then load that tar on the target machine with docker load.
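A sketch of that offline workflow; the image name (my-tools-image) and tar file name are placeholders:
docker save -o tools.tar my-tools-image   # on a machine with internet access
docker load -i tools.tar                  # after copying tools.tar to the offline test system
docker run --rm -it --privileged --network=host my-tools-image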
Docker is, first and foremost, an isolation system that hides details of the host system from the container environment. All of the tools you describe need low-level access to the host network and process space: you actively do not want the isolation that Docker is providing here.
To give two specific examples: without special setup, you can’t tcpdump the host’s network interface, because a container has its own isolated network stack; and you can’t strace host processes because a container has its own process ID space and restricted Linux capabilities that disallow it.
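To make that concrete, the "special setup" usually means handing pieces of the host back to the container, roughly along these lines (a sketch only; my-tools-image and eth0 are placeholders, TARGET_PID is a host process ID of your choosing, and some hosts additionally need seccomp/AppArmor adjustments):
docker run --rm -it --net=host my-tools-image tcpdump -i eth0                                  # host network namespace so tcpdump sees host interfaces
docker run --rm -it --pid=host --cap-add SYS_PTRACE my-tools-image strace -p "$TARGET_PID"     # host PID namespace plus CAP_SYS_PTRACE for strace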
(From a security point of view, also consider that anyone who can run any docker command can trivially root the host, and there is a reasonably common misconfiguration to publish this ability to the network. You’d prefer to not need to install Docker if you’re interested in securing an existing system.)
I’d probably put this set of tools on a USB drive (if that’s allowed in your environment) and run them directly off of there. A tar file of tools would work as well, if that’s easier to transfer. Either building them into static binaries or providing a chroot environment would make them independent of what’s installed in the host environment, but also wouldn’t block you from observing all of the things you’re trying to observe.
I have a very simple question: what is the best place to run node-exporter from? Directly on the host, or in a container?
What are the pros and cons of each approach, and what do the developers recommend as best practice? It isn't clear to me from the usage guidelines.
I would definitely say on the host. This is the recommended way, because node exporter needs access to certain metrics which are not available within the container.
It's true that you can still access various host metrics when running in a container if you expose /proc and /sys, but you nonetheless run the risk of scraping container-related metrics instead of the host's.
One example is the network-related metrics. By default, containers run in their own network namespace (and this is how you'd want them to run under normal circumstances), so with that default you'd scrape information about that container's network stack rather than the host's, despite exposing the aforementioned pseudo-filesystems.
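For completeness, the containerized variant those caveats refer to is usually run with the host's network and PID namespaces plus a read-only bind mount of the root filesystem, along these lines (flags are from the node_exporter documentation as I recall them; check them against your version):
docker run -d --net=host --pid=host -v /:/host:ro,rslave quay.io/prometheus/node-exporter:latest --path.rootfs=/host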
Question: How can I change a Prometheus container's host address from the default 0.0.0.0:9090 to something like 192.168.1.234:9090?
Background: I am trying to get a Prometheus container to install and start in a production environment on a remote server. Since the server uses an IP other than Prometheus's default (0.0.0.0), I need to update the host address that the Prometheus container uses. If I don't, I can't sign in to the UI or see any of the metrics. The IP of the remote server is provided by the user during the app's installation.
From what I understand from Prometheus's config document and the output of ./prometheus -h, the host address is immutable and therefore needs to be updated using the --web.listen-address= command-line flag. My problem is I don't know how to pass that flag to my Prometheus container; I can't simply run ./prometheus --web.listen-address="<remote-ip>:9090" because that's not a Docker command. And I can't pass it to the docker run ... command because Docker doesn't recognize that flag.
Environment:
Using SaltStack for config management
I cannot use Docker Swarm (i.e. each container must use its own Dockerfile)
You don't need to change the containerized Prometheus' listen address. 0.0.0.0 simply means "listen on all interfaces" inside the container's own network namespace; it isn't an address you connect to.
By default, it won't even be accessible from your host's network, let alone any surrounding networks (like the Internet).
You can map it to a port on a hosts interface though. The command for that looks somewhat like this:
docker run --rm -p 127.0.0.1:8080:9090 prom/prometheus
which would expose the service at 127.0.0.1:8080 on your host (without the 127.0.0.1 prefix, -p binds the port on all of the host's interfaces)
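If the goal from the question is to reach it only on the server's own address (192.168.1.234 is just the example IP given there), the binding can name that interface explicitly:
docker run -d -p 192.168.1.234:9090:9090 prom/prometheus
Prometheus itself keeps listening on 0.0.0.0:9090 inside its namespace; only the host-side binding changes.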
You can do that with a public (e.g. internet-facing) interface as well, although I'd generally advise against exposing containers like this, due to numerous operational implications that are somewhat beyond the scope of this answer. You should at least consider a reverse-proxy setup, where users are only allowed to talk to a heavy-duty webserver which then communicates with Prometheus, instead of letting them access your backend directly, even if this is just a small development deployment.
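A rough shape of such a split, with placeholder names and the actual nginx configuration omitted (it would proxy requests to http://prometheus:9090 on the internal network):
docker network create monitoring
docker run -d --network monitoring --name prometheus prom/prometheus
docker run -d --network monitoring --name proxy -p 80:80 -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx
Only the proxy publishes a host port; Prometheus stays reachable solely over the internal network.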
For general considerations on productionizing container setups, I suggest this. Despite its clickbaity title, it is a useful read.
Is the Docker host completely protected from anything the Docker instance can do?
As long as you don't expose a volume to the docker instance, are there other ways it can actually connect into the host and 'hack' it?
For example, say I allow customers to run code inside of a server that I run. I want to understand the potential security implications of allowing a customer to run arbitrary code inside a Docker instance.
All processes inside Docker are isolated from the host machine. By default they cannot see or interfere with other processes. This is guaranteed by the process namespaces Docker uses.
As long as you don't mount crucial stuff (example: docker.sock) into the container, there are no security risks associated with running a container, even when allowing code execution inside the container.
For a list of security features in docker, check Docker security.
The kernel is shared between the host and the Docker container. This is less separation than, say, a VM provides.
Running any untrusted container is NOT secure. There are kernel vulnerabilities that can be abused, and ways to break out of containers.
That's why it's a best practice, for example, to either not run as the root user inside containers or to use a separate user namespace for containers.
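As a small illustration of those practices (the image name is a placeholder, and this assumes the workload can run unprivileged):
docker run -d --user 1000:1000 --cap-drop ALL my-untrusted-image   # non-root UID, drop all capabilities the workload doesn't need
User-namespace remapping can instead be enabled daemon-wide, e.g. by starting the daemon with dockerd --userns-remap=default.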