How can I make my Erlang process run on selected interfaces?

I have an Erlang process which is started from a shell script. Currently it is started with the option inet_dist_use_interface set to 0.0.0.0.
Now I have a requirement to run it only on specific interfaces, but I see that inet_dist_use_interface takes only one interface.
How can I extend it to take multiple IP addresses?
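For reference, a minimal sketch of such a start line, with placeholder node and cookie names (the kernel parameter takes a single Erlang address tuple, which is why one value like 0.0.0.0 binds every interface):

# node name and cookie below are placeholders; inet_dist_use_interface
# accepts exactly one address tuple, so {0,0,0,0} means "all interfaces"
erl -sname mynode -setcookie mycookie \
    -kernel inet_dist_use_interface '{0,0,0,0}'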

Related

Docker: hiding process command names from the host

When running a docker container, is it possible to obfuscate processes' command names from the host? My problem is that one of my processes currently scans the process list to ensure that it's the unique instance, but I'd like to run separate instances in both the container and the host.
You can change the process title inside your code. E.g., in Python you can use https://pypi.org/project/setproctitle/; other programming languages have similar libraries.
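If you just want to see the effect from a shell, bash's exec -a is a related trick: it sets argv[0] of the program it launches (the alias name below is made up):

# run sleep under the made-up name "worker-x"; exec -a sets argv[0]
bash -c 'exec -a worker-x sleep 60' &
ps -p $! -o pid,comm,args
# the args column now shows "worker-x 60"; comm (the kernel's name
# for the executable) still reads "sleep" on Linux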

Docker - inter-container script execution

Currently my web application is running on a server where all the services (nginx, php, etc.) are installed directly on the host system. Now I want to use Docker to separate these different services into specific containers. Nginx and php-fpm are working fine. But the web application can generate PDFs, which is done using wkhtmltopdf, and since I want to follow the single-service-per-container pattern, I want to add an additional container which houses wkhtmltopdf and takes care of this specific service.
The problem is: how can I do that? How can I call the wkhtmltopdf binary from the php-fpm container?
One solution is to share the Docker socket, but that is a big security flaw, so I'd really rather not do it.
So, is there any other way to achieve this? And isn't this "microservice separation" one of the main purposes/goals of Docker?
Thanks for your help!
You can't directly call binaries from one container to another. ("Filesystem isolation" is also a main goal of Docker.)
In this particular case, you might consider "generate a PDF" to be an action your service takes, not a separate service in itself, so executing the binary as a subprocess is a means to an end. This doesn't even raise any complications: since presumably wkhtmltopdf isn't a long-running process, you'll launch it once per request and not respond until the subprocess runs to completion. I'd install or include it in the Dockerfile that packages your PHP application, and be architecturally content with that.
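A minimal sketch of that approach, assuming the stock php:fpm base image (which is Debian-based, so apt-get is available) and the distribution's wkhtmltopdf package:

# hypothetical Dockerfile for the PHP application, with wkhtmltopdf baked in
FROM php:fpm
RUN apt-get update \
 && apt-get install -y --no-install-recommends wkhtmltopdf \
 && rm -rf /var/lib/apt/lists/*
COPY . /var/www/html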
Otherwise the main communication between containers is via network I/O and so you'd have to wrap this process in a simple network protocol, probably a minimal HTTP service in your choice of language/framework. That's probably not worth it for this, but it's how you'd turn this binary into "a separate service" that you'd package and run as a separate container.

Why doesn't Docker support multi-tenancy?

I watched this YouTube video on Docker and at 22:00 the speaker (a Docker product manager) says:
"You're probably thinking 'Docker does not support multi-tenancy'...and you are right!"
But no explanation of why is ever actually given. So I'm wondering: what did he mean by that? Why doesn't Docker support multi-tenancy?! If you Google "Docker multi-tenancy", you surprisingly get nothing!
One of the key features most people assume of a multi-tenancy tool is isolation between the tenants: they should not be able to see or administer each other's containers and/or data.
The docker-ce engine is a sysadmin-level tool out of the box. Anyone who can start containers with arbitrary options has root access on the host. There are third-party tools like Twistlock that connect via an authz plugin interface, but they only provide coarse access controls: each person is either allowed or disallowed an entire class of activities, like starting containers or viewing logs. Giving users access to either the TLS port or the Docker socket lumps them all into a single category; there's no concept of groups or namespaces for the users connecting to a Docker engine.
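A classic demonstration of that point, assuming any small image such as alpine: anyone allowed to pass arbitrary docker run options can bind-mount the host's root filesystem and become root on the host.

# bind-mount the host's / and chroot into it: an unrestricted root shell on the host
docker run --rm -it -v /:/host alpine chroot /host /bin/sh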
For multi-tenancy, Docker would need to add a way to define users, place them in a namespace that is only allowed to act on specific containers and volumes, and restrict options that allow breaking out of the container, like changing capabilities or mounting arbitrary filesystems from the host. Docker's enterprise offering, UCP, does begin to add these features by using labels on objects, but I haven't had the time to evaluate whether it provides a full multi-tenancy solution.
Tough question that others might know how to answer better than me. But here goes.
Let's take this definition of multi-tenancy (source):
Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers.
It's really hard to place Docker in this definition. It can be argued that it's both the instance and the application. And that's where the confusion comes from.
Let's break Docker up into three different parts: the daemon, the container and the application.
The daemon is installed on a host and runs Docker containers. The daemon does actually support multi-tenancy, as it can be used by many users on the same system, each of whom has their own configuration in ~/.docker.
Docker containers run a single process, which we'll refer to as the application.
The application can be anything. For this example, let's assume the Docker container runs a web application like a forum or something. The forum allows users to sign in and post under their name. It's a single instance that serves multiple customers. Thus it supports multi-tenancy.
What we skipped over is the container and the question of whether or not it supports multi-tenancy. And this is where I think the answer to your question lies.
It is important to remember that Docker containers are not virtual machines. When using docker run [IMAGE], you are creating a new container instance. These instances are ephemeral and immutable. They run a single process and exit as soon as that process exits. They are not designed to have multiple users connect to them and run commands simultaneously, which is what multi-tenancy would be. Instead, Docker containers are just isolated execution environments for processes.
Conceptually, echo Hello and docker run alpine echo Hello are the same thing in this example. They both execute a command in a fresh execution environment (a process vs. a container), and neither supports multi-tenancy.
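To make that concrete (assuming the alpine image is available locally or pullable):

$ echo Hello
Hello
$ docker run --rm alpine echo Hello
Hello

Both print the greeting and exit; the container is gone (--rm removes it) as soon as echo finishes, just like the plain process.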
I hope this answer is readable and answers your question. Let me know if there is any part that I should clarify.

Run a command on a container from inside another one

I'm trying to develop an application that has two main containers: a Java/Tomcat webserver and a Python and Lua one for machine learning scripts.
So here is the issue: I need to send a command to the Python/Lua container's CLI whenever the Java one receives a certain request. I know that if the webserver weren't a container I could simply use docker exec, but wouldn't having the Java part of my application as a non-container break the whole security idea of Docker?
Thanks a lot and sorry for my poor English!
(+1 for @larsks) Set up a REST API that allows one container to trigger actions on the other container.
You can set up container communication using links. Docs here: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
After that you can call container B from container A using B:port/<your API>
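Links are the legacy mechanism; on current Docker the same wiring is usually done with a user-defined network, on which containers resolve each other by name (container and image names below are made up):

docker network create appnet
docker run -d --network appnet --name ml-api my-python-lua-image
docker run -d --network appnet --name webapp my-java-tomcat-image
# inside webapp, the other container is now reachable as http://ml-api:<port>/<your API>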

Simplest way to inform a local erlang node from a shell command

I'm running a distributed Erlang system with one node per machine.
Since DNS is not available, I start them all with the same -sname parameter, e.g.
erl -sname foo ...
An operating-system daemon has the feature of executing shell (/bin/sh) commands when a certain event occurs (e.g. when a USB stick is plugged into the system).
I'm looking for a simple way to use this shell command to call a function on the local Erlang node on this machine (taking further action after the USB stick was detected and mounted).
I was thinking of calling erl -sname bar from the shell and running some code that looks like this:
%% derive this machine's hostname from our own node name, then target foo on it
[_, Host] = string:tokens(atom_to_list(node()), "@"),
The_node = list_to_atom("foo@" ++ Host),
spawn(The_node, My_fun),
Is this the way to go? Or is starting a whole new Erlang node overkill (I won't be doing it often, though)?
Or is it better to talk over a socket opened with gen_tcp, or to read from a named pipe?
Or any other suggestions?
BTW this is running on a Unix system.
What you want to use is actually erl_call, an application that lets you contact currently distributed nodes and run arbitrary code.
erl_call makes it possible to start and/or communicate with a distributed Erlang node. It is built upon the erl_interface library as an example application. Its purpose is to use an Unix shell script to interact with a distributed Erlang node.
You can either give it commands, an escript, or pretty much just code to evaluate, and it will do it. You can find more details and actual examples in its documentation.
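A hedged example, reusing the -sname foo node from the question (module, function, argument, and cookie names are placeholders):

# apply my_module:my_fun("usb mounted") on the already-running foo node
erl_call -sname foo -c mycookie -a 'my_module my_fun ["usb mounted"]'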
