Docker: hiding process command names from the host

When running a docker container, is it possible to obfuscate processes' command names from the host? My problem is that one of my processes currently scans the process list to ensure that it's the unique instance, but I'd like to run separate instances in both the container and the host.

You can change the process title inside your code. For example, in Python you can use https://pypi.org/project/setproctitle/; other programming languages have similar libraries.
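A rough shell demo of the effect, using that setproctitle library (the image tag, container name, and the title string below are arbitrary examples):

docker run -d --rm --name titledemo python:3 sh -c '
  pip install --quiet setproctitle &&
  python -c "import setproctitle, time; setproctitle.setproctitle(\"not-my-real-name\"); time.sleep(300)"
'
# once pip has finished, on the host:
ps -eo args | grep not-my-real-name    # the container process now shows the new title instead of "python -c ..."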

Related

Docker namespace, docker on virtualbox, mirror environment

Let's assume a scenario where I'm using a set of CLI docker run commands to create a whole environment of containers and networks (bridge type in my case), and to connect containers to particular networks.
Everything works well as long as I want only one such environment on a single machine.
But what if I want to have, on the same machine, an environment similar to the one I've just created but for a different purpose (testing)? I then have an issue of name collisions, since I can't create and start containers and networks with the same names.
So far I've tried to start the second environment the same way I did the first, but prefixing all container and network names. That worked but had a flaw: in the application, all requests to URIs were broken, since they had the structure
<scheme>://<container-name>:<port-number>
and the application was not able to reach <prefix-container-name>.
What I want to achieve is to have an exact copy of the first environment running on the same machine as the second environment that I could use to perform the application tests etc.
Is there any concept of namespaces or something similar to it in Docker?
Some command that I could put before all the docker run etc. commands I use to create the environment, so that I'd have just two bash scripts that differ only by the namespace command at their beginning?
Could using a virtual machine, i.e. Oracle VirtualBox, be the solution to my problem? Create a VM for the second environment? Isn't that overkill, and will it add an additional set of troubles?
Perhaps there is some kind of --hostname for the docker run command that would allow the container to be accessed from other containers by that name? Unfortunately, --hostname only makes the container reachable by that name from the container itself, not from any other. Perhaps there is an option or command that can create an alias, virtual host, or whatever magic common name I could put into the apps' URIs, <scheme>://<magic-name>:<port-number>, so that creating a second environment with different container and network names would cause no problem as long as that magic name is available in the environment's network.
My need for an exact copy of the environment comes from the tests I want to run, to check whether they also fail at the dependency level; I think this is quite a simple scenario in a continuous integration process. Are there any dedicated open source solutions to what I want to achieve? I don't use Docker Compose, just a bash script with all the docker CLI commands to get the whole environment up and running.
Thank you for your help.
Is there any concept of namespaces or something similar to it in Docker?
Not really, no (but keep reading).
Can using virtual machine [...] be the solution to my problem? ... Isn't that an overkill, will it add an additional set of troubles?
That's a pretty reasonable solution. That's especially true if you want to further automate the deployment: you should be able to simulate starting up a clean VM and then running your provisioning script on it, then transplant that into your real production environment. Vagrant is a pretty typical tool for trying this out. The biggest issue will be network connectivity to reach the individual VMs, and that's not that big a deal.
Perhaps there is a kind of --hostname for docker run command that will allow to access the container from other container by using this name?
docker run --network-alias is very briefly mentioned in the docker run documentation and has this effect. docker network connect --alias is slightly more documented and affects a container that's already been created.
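A quick sketch of both forms (the network, container, and alias names here are just examples, not anything from your setup):

docker network create test-net
# alias assigned when the container is created:
docker run -d --name test-db --network test-net --network-alias db postgres:15
# alias added to a container that already exists:
docker network connect --alias cache test-net existing-redis-container
# other containers attached to test-net can now resolve the names "db" and "cache"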
Are there any dedicated open source solutions to what I want to achieve?
Docker Compose mostly manages this for you, if you want to move off of your existing shell-script solution: it puts a name prefix on all of the networks and volumes it creates, and creates network aliases for each container matching its name in the YAML file. If your host volume mounts are relative to the current directory then that content is fairly isolated too. The one thing it can't easily do is launch each copy of the stack on separate host ports, so you have to resolve those port conflicts yourself.
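For example, assuming you had a docker-compose.yml for this stack, the same file can be brought up twice under different project names; Compose prefixes every container, network, and volume name with the project name, while containers still reach each other by their unprefixed service names:

docker-compose -p prod up -d
docker-compose -p test up -d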
Kubernetes has a concept of a namespace which is in fact exactly what you're asking for, but adopting it is a substantial investment and would involve rewriting your deployment sequence even more than Docker Compose would.
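For comparison, a hypothetical Kubernetes sketch of the same idea (the namespace names and the manifest file name are made up):

kubectl create namespace production
kubectl create namespace testing
kubectl apply -n production -f environment.yaml
kubectl apply -n testing -f environment.yaml
# the same manifests run twice, isolated by namespace, with service DNS names scoped per namespace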

Intro to Docker for a FreeBSD Jail user - How (and should I) start the container with systemd?

We're currently migrating our room server to the cloud for reliability, but our provider doesn't have a FreeBSD option. Although I'm prepared to pay for and upload a custom system image for deployment, I nonetheless want to learn how to start an application system instance using Docker.
In a FreeBSD jail, what I did was extract an entire base.txz directory hierarchy as system content into /usr/jail/app, run pkg -r /usr/jail/app install apache24 php perl, and then configure /etc/jail.conf to start the /etc/rc script in the jail.
I followed the official FreeBSD Handbook, and this is generally what I've worked out so far.
But Docker is another world entirely.
To build a Docker image, there are two options: a) import from a tarball, or b) use a Dockerfile. The latter lets you specify a "CMD", the default command to run, but:
Q1. Why isn't that available from a)?
Q2. Where is information like "CMD" and "ENV" stored? In the image? In the container?
Q3. How do I start a GNU/Linux system in a container? Do I just run systemd and let it figure out the rest from its configuration? Do I need to pass it some special arguments or environment variables?
You should think of a Docker container as a packaging around a single running daemon. The ideal Docker container runs one process and one process only. Systemd in particular is so heavyweight and invasive that it's actively difficult to run inside a Docker container; if you need multiple processes in a container, then a lighter-weight init system like supervisord can work for you, but that's more of an exception than a standard packaging.
Docker has an official tutorial on building and running custom images which is worth a read through; this is a pretty typical use case for Docker. In particular, best practice is to write a Dockerfile that describes how to build an image and check it into source control. Containers should avoid having persistent data if they can (storing everything in an external database is ideal); if you change an image, you need to delete and recreate any containers based on it. If local data is unavoidable then either Docker volumes or bind mounts will let you keep data "outside" the container.
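A minimal sketch of that workflow (the image, container, and volume names are only examples, not anything from your setup):

docker build -t myapp:latest .                # rebuild the image from the Dockerfile kept in source control
docker rm -f myapp 2>/dev/null || true        # delete the old container, if any...
docker run -d --name myapp -v myapp-data:/var/lib/myapp myapp:latest   # ...and recreate it; data survives in the named volume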
While Docker has several other ways to create containers and images, none of them are as reproducible. You should avoid the import, export, and commit commands; and you should only use save and load if you can't use or set up a Docker registry and are forced to move images between systems via a tar file.
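If you genuinely can't use a registry, the save/load round trip looks like this (the image and file names are examples):

docker save myapp:latest -o myapp.tar
# copy myapp.tar to the other machine, then:
docker load -i myapp.tar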
On your specific questions:
Q1. I suspect the best answer to why the non-docker-build paths for creating images don't easily let you specify things like CMD is that it's just an implementation detail: if you look at the docker history of an image, you'll see the CMD winds up being its own layer. Don't worry about it and use a Dockerfile.
Q2. The default CMD, any set ENV variables, and other related metadata are stored in the image alongside the filesystem tree. (Once you launch a container, it has a normal Unix process tree, with the initial process being pid 1.)
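You can see that image-level metadata directly, for instance (the image name is an example):

docker image inspect --format '{{.Config.Cmd}} {{.Config.Env}}' myimage:latest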
Q3. You don't "start a system" in a container. Generally, you run one process or service per container and manage their lifecycles independently.

How should I create Dockerfile to run multiple services through docker-compose?

I'm new to Docker. I wanted to create a Dockerfile to start services like RabbitMQ, an FTP server, and Elasticsearch, but I'm not able to figure out where I should start.
I have asked a similar question here: How should I create a Dockerfile to run more than one services in one instance?
There I learned to create different containers: one for RabbitMQ, one for the FTP server, and another for Elasticsearch, and to run them using a docker-compose file. You'll find my Dockerfile code there.
It would be great if someone could help me out with this. Thanks!
They are correct. Each container, and by extension each image, should be responsible for one concern, and that is typically mapped to a single process. So if you need to run more than one thing (more than one process, generally speaking, though not strictly), then you most probably need to build separate images. One of the easiest and recommended ways of creating an image is writing a Dockerfile. This is meant to be an extremely simple process, and most of it will be a copy-paste of the same commands you would have used to install that component.
Once you write the Dockerfiles for each service, you build them using the docker build command, which produces the images.
When you run an image you get what is known as a container. Think of it roughly like this: an ISO file is the image, and the actual running VM is the container.
Now you can use docker-compose to orchestrate these various containers so they can communicate with (or be isolated from) each other. A docker-compose.yml file is a plain-text file in YAML format that describes the relationships between the different components of the app. Apps can be made up of several services - a web server, app server, search engine, database server, cache engine, and so on. Each of these is a service and runs as a container, but it is not necessary to run everything as a container; some components can keep running the traditional way, on VMs or on bare-metal servers.
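A minimal sketch of such a file, written here via a shell heredoc just to keep it in one copy-pasteable snippet (the image tags and the hypothetical my-ftp-image are only examples, not recommendations):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.10
    environment:
      - discovery.type=single-node
  ftp:
    image: my-ftp-image    # hypothetical image built from your own Dockerfile
EOF
docker-compose up -d       # one container per service, all on a shared network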
I'll check your other post and add if there is anything needed. But I hope this helps you get started at least.

Run a command on a container from inside another one

I'm trying to develop an application that has two main containers, a Java-Tomcat webserver and a Python and Lua one for machine learning scripts.
So here is the issue: I need to send a command to the Python/Lua container's CLI whenever the Java one receives a certain request. I know that if the webserver weren't a container I could simply use docker exec, but wouldn't having the Java part of my application outside a container break the whole security idea of Docker?
Thanks a lot, and sorry for my poor English!
(+1 for #larsks) Set up a REST API that allows one container to trigger actions on the other container.
You can set up container communication using links. Docs here: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
After that you can call from container A to container B using B:port/<your API>.
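The same idea with a user-defined network (the modern replacement for links); the names, port, and API path below are hypothetical, and this assumes curl is installed in the webserver image:

docker network create appnet
docker run -d --name ml-service --network appnet my-python-lua-image   # serves an HTTP API on port 5000
docker run -d --name webserver --network appnet my-tomcat-image
# from inside the webserver container, the ML container is reachable by name:
docker exec webserver curl -s http://ml-service:5000/run-script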

Docker separation of concerns / services

I have a Laravel project which I am using with Docker. Currently I am using a single container to host all the services (Apache, MySQL, etc.) as well as the dependencies (project files, git, composer, etc.) my project needs.
From what I am reading, the current best practice is to put each service into a separate container. So far this seems simple enough, since these services are designed to run at length (Apache server, MySQL server). When I spin up these 'service' containers using -d, they remain running (docker ps) since their main process runs continuously.
However, when I remove all the services from my project container, there is no main process left to run continuously. This means my container exits immediately once spun up.
I have read about the 'hacks' of running other processes like tail -f /dev/null, sleep infinity, using interactive mode, installing supervisord (which I assume would end up watching no processes in such containers?), and even leaving the container running in the foreground (taking up a terminal console...).
How do I set up such a container to keep it running, detached, like the abstracted service containers, but without these hacks? I cannot seem to find much information on this in the official Docker docs, nor can I find any examples from other projects (please link any).
EDIT: I am not talking about volumes / storage containers to store the data my project processes, but rather about how I can use a container to hold the project itself and its dependencies that aren't services (project files, git, composer).
When you run the container, try running with these flags:
docker run -dt ..... etc
You might even try:
docker run -dti ..... etc
Let me know if this brings any joy. It has certainly worked for me on occasion.
I know you wanted to avoid hacks, but if the above fails then also add
CMD cat
to the end of your Dockerfile - it is a hack, but it is the cleanest hack :)
So after reading this a few times along with Joachim Isaksson's comment, I finally get it. Tools don't need their containers to run continuously in order to be used. Proper separation of the project files, services (MySQL, Apache), and tools (git, composer) is done differently.
The project files are persisted within a data volume container. The services are networked since they expose ports. The tools live in their own containers which share the project files data volume - they are not networked. Logs, databases and other output can be persisted in different volumes.
When you wish to run one of these tools, you spin up the tool container, passing the relevant command using docker run. The tool then manipulates the data within the directory persisted in the shared volume. The container only persists for as long as that command takes to run, and then it stops.
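A rough sketch of that pattern (the volume name, tool images, and commands are hypothetical stand-ins):

docker volume create project-src                                        # holds the project files
docker run --rm -v project-src:/app -w /app my-composer-image composer install
docker run --rm -v project-src:/app -w /app my-git-image git status
# each tool container runs one command against the shared volume and then exits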
I don't know why this took me so long to grasp, but this is the aha moment for me.