Running a Docker container on the host from a Docker container

I'm trying to dockerize two .NET console applications, one of which depends on the other.
When I run the first container, I need it to start another container on the host, pass a parameter to its stdin, and wait for it to finish its job and exit.
What would be the best solution to achieve this?
Running a container inside a container seems like a bad solution.
I've also thought about running a webserver (nginx or something) on the host that receives the request over HTTP and executes a docker run command on the host, but I'm sure there is a better solution than this (that way the webserver would just run on the host and not inside a container).
There is also this solution but it seems to have major security issues.
I've tried also using the Docker.DotNet library but it does not help with my problem.
Any ideas for the best solution?
Thanks in advance.
EDIT:
I will be using Docker Compose, but the problem is that the 2nd container is not running and listening at all times; like the Hello-World container, it is invoked, performs its job, and exits.
EDIT2: FIX
I implemented Redis as a message broker to communicate between the different services. While it changed the requirement a little (the containers now always run and listen to Redis), it helped me solve the issue.

I am not sure I understand correctly what you need to do, but if you simply need to start two containers in parallel the simplest way I can think of is docker-compose.
Another way is a Python or bash/bat script that launches the containers independently (in Python you can either use the Docker API or do it manually with the subprocess module). This also lets you do other things, like writing to the stdin of one container, as you stated.
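As a minimal sketch of the subprocess approach: launch the second program, write a parameter to its stdin, and wait for it to exit. Here `cat` stands in for the real command, which in the question's scenario would be something like `["docker", "run", "-i", "--rm", "second-app-image"]` (the image name is made up for illustration).

```python
import subprocess

def run_worker(parameter: str, cmd=("cat",)) -> str:
    # Start the worker process; `cat` is a stand-in so the sketch runs anywhere.
    proc = subprocess.Popen(
        list(cmd),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    # Write the parameter to stdin, close it, and wait for the process to exit.
    out, _ = proc.communicate(parameter + "\n")
    if proc.returncode != 0:
        raise RuntimeError(f"worker exited with code {proc.returncode}")
    return out
```

Swapping `cmd` for a `docker run -i` invocation gives the behaviour the question asks for, at the cost of the calling side needing access to the Docker CLI or API.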

Related

What is the best way to execute a shell script via network on docker-compose on an image without a network service?

I'm looking for a suggestion for a good way to access a container based on an image without a network service. Since I'm using docker-compose, the containers only talk to one another via the network, and the container I need just runs a shell script.
Ideas? Should I really consider adding a webserver to my code just to invoke a shell script in another container?

Can a process in one docker container write to stdin of a process in another container

I have an application running in container A, and I'd like to write to stdin of a process running in container B. I see that if I want to write to B's stdin from the host machine I can use docker attach. Effectively, I want to call docker attach B from within A.
Ideally I'd like to be able to configure this through docker-compose.yml. Maybe I could tell docker-compose to create a unix domain socket in A that pipes to B's stdin, or connect some magic port number to B's stdin.
I realize that if I have to I can always put a small webserver in B's container that redirects all input from an open port to the process, but I'd rather use an out-of-the-box solution if one exists.
For anyone interested in the details, I have a python application running from container A and I want it to talk to stockfish (chess engine) in container B.
A process in one Docker container can't directly use the stdin/stdout/stderr of another container. This is one of the ways containers are "like VMs". Note that this is also pretty much impossible in ordinary Linux/Unix without a parent/child process relationship.
As you say, the best approach is to put an HTTP or other service in front of the process in the other container, or else to use only a single container and launch the thing that only communicates via stdin as a subprocess.
(There might be a way to make this work by giving the calling process access to the host's Docker socket, but you'd be giving it unrestricted access to the host and tying the implementation to Docker; both the HTTP and subprocess paths are straightforward to develop and test without Docker, can be moved into container land separately, and don't involve the possibility of taking over the host.)
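To make the HTTP-service approach concrete, here is a rough sketch of a tiny wrapper that could run inside container B: it accepts a POST, feeds the body to the wrapped process's stdin, and returns the process's stdout. All names are illustrative, and `cat` stands in for the real stdin-only program (e.g. stockfish).

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

WRAPPED_CMD = ["cat"]  # stand-in; in container B this might be ["stockfish"]

class StdinProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # One process per request keeps the sketch simple; a long-lived
        # engine would need a persistent pipe instead.
        result = subprocess.run(WRAPPED_CMD, input=body, capture_output=True)
        self.send_response(200)
        self.send_header("Content-Length", str(len(result.stdout)))
        self.end_headers()
        self.wfile.write(result.stdout)

    def log_message(self, *args):
        pass  # keep the sketch quiet

# In container B you would run something like:
#   HTTPServer(("0.0.0.0", 8000), StdinProxy).serve_forever()
# and container A would POST to http://B:8000/ over the compose network.
```

This keeps B reachable through ordinary compose networking, at the cost of adding a small server process to B's image.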
If it helps, you could try creating a socket, reading/writing to it,
and mounting that socket into both containers, like:
docker run -d -v /var/run/app.sock:/var/run/app.sock:ro someapp1
docker run -d -v /var/run/app.sock:/var/run/app.sock someapp2
Disclaimer: this is just an idea; I've never done it myself.
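To flesh out the idea (a rough sketch only; the socket path matches the mounts above, but the message format is made up): someapp2 binds a unix socket at the shared path, and someapp1 connects and writes to it, giving a channel that needs no network service.

```python
import os, socket

SOCK_PATH = "/var/run/app.sock"  # the path bind-mounted into both containers

def serve_once(path=SOCK_PATH):
    # someapp2 side: accept one connection and echo back what it receives.
    if os.path.exists(path):
        os.unlink(path)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    conn, _ = srv.accept()
    data = conn.recv(4096)
    conn.sendall(b"processed: " + data)
    conn.close()
    srv.close()

def send(msg: bytes, path=SOCK_PATH) -> bytes:
    # someapp1 side: connect to the shared socket and send a message.
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(path)
    cli.sendall(msg)
    reply = cli.recv(4096)
    cli.close()
    return reply
```

One caveat: bind-mounting a socket file that doesn't exist at container start is fiddly; mounting a shared directory (e.g. /var/run/app/) and creating the socket inside it is usually more reliable.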

Is it possible to run a command inside a Docker container from another container?

Here's my scenario.
I have 2 Docker containers:
C1: a container with Ruby (but it could be anything else) that prepares data files on which it must perform a calculation in Julia
C2: a container with Julia (or R, or Octave...), used to perform the calculation, so as to avoid installing Julia on the same system or container that runs the Ruby code
From the host, obviously, I have no problem doing the processing.
Usually when two containers are linked (or belong to the same network) they communicate with each other over the network, exposing some port. In this case Julia does not expose any port.
Can I run a command on C2 from C1 similar to what is done between host and C2?
If so, how?
Thanks!
Technically yes, but that's probably not what you want to do.
The Docker CLI is just an interface to the Docker service, which listens at /var/run/docker.sock on the host. Anything that can be done via the CLI can be done by communicating directly with this service. You can mount this socket into a running container (C1) as a volume to allow that container to speak to its host's Docker service.
Docker has a few permissions that need to be set to allow this; older versions allow containers to run in "privileged" mode, in which case they're allowed to (amongst other things) speak to /var/run/docker.sock with the authority of the host. I believe newer versions of Docker split this permission system up a bit more, but you'd have to look into this. Making this work in swarm mode might be a little different as well.
Using this API at the code level, without installing the full Docker CLI inside the container, is certainly possible (using a library or coding up your own interaction). A working example is JupyterHub + DockerSpawner, which has one privileged Hub server that instantiates new Notebook containers for each logged-in user.
I just saw that you explicitly state that the Julia container has no port/interface. Could you wrap that code in a larger container that gives it a server interface, while managing the serverless Julia program as a "local" process within the same container?
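For completeness, here is what talking to the Docker Engine API over that socket can look like without installing the Docker CLI inside the container, using only the Python standard library. `GET /containers/json` is the real Engine API endpoint for listing containers; the connection class itself is a sketch, and assumes /var/run/docker.sock has been bind-mounted into the container.

```python
import http.client
import json
import os
import socket

DOCKER_SOCK = "/var/run/docker.sock"

class DockerConnection(http.client.HTTPConnection):
    """An HTTPConnection that speaks over the Docker unix socket."""
    def __init__(self, sock_path=DOCKER_SOCK):
        super().__init__("localhost")
        self._sock_path = sock_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._sock_path)

def list_containers(sock_path=DOCKER_SOCK):
    # Returns the daemon's running-container list, or None if the
    # socket was not mounted into this container.
    if not os.path.exists(sock_path):
        return None
    conn = DockerConnection(sock_path)
    conn.request("GET", "/containers/json")
    resp = conn.getresponse()
    return json.loads(resp.read())
```

The same connection class can issue any other Engine API call (create, start, exec), which is how libraries like Docker.DotNet work under the hood.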
I needed to solve the same problem. In my case, it all started when I needed to run some scripts located in another container via cron, I tried the following scenarios with no luck:
Forgetting about the two-container scenario and placing all the logic in one container, so inter-container execution is no longer needed: this turned out to be a bad idea, since the whole Docker concept is to execute a single task per container. In any case, creating a Dockerfile to build an image with both my main service (PHP in my case) and a cron daemon proved quite messy.
Communicating between containers via SSH: I then tried building an image that would take care of running the cron daemon, which would be the "Docker" approach to my problem, but the bad idea was executing the commands from each cronjob by opening an SSH connection to the other container (in your case, C1 connecting via SSH to C2). It turns out it's quite clumsy to implement inter-container SSH logins, and I kept running into problems with permissions, passwordless logins, and port routing. It worked in the end, but I'm sure it would add potential security issues, and it didn't feel like a clean solution.
Implementing some sort of API that I could call via HTTP requests from one container to another, using something like curl or wget: this felt like a great solution, but it ultimately meant adding a secondary service to my container (an nginx to handle HTTP connections), and dealing with HTTP requests and timeouts just to execute a shell script felt like too much of a hassle.
Finally, my solution was to run "docker exec" from within the container. The idea, as described by scnerd, is to make sure the Docker client interacts with the Docker service on your host:
To do so, you must install docker into the container you want to execute your commands from (in your case, C1), by adding a line like this to your Dockerfile (for Debian):
RUN apt-get update && apt-get -y install docker.io
To let the docker client inside your container interact with the docker service on your host, you need to add /var/run/docker.sock as a volume to your container (C1). With Docker compose this is done by adding this to your docker service "volumes" section:
- /var/run/docker.sock:/var/run/docker.sock
Now when you build and run your Docker image, you'll be able to execute "docker exec" from within the container with a command like this, and you'll be talking to the Docker service on the host:
docker exec -u root C2 /path/your_shell_script
This worked well for me. Since, in my case, I wanted the Cron container to launch scripts in other containers, it was as simple as adding "docker exec" commands to the crontab.
This solution, as also presented by scnerd, might not be optimal, and I agree with his comments about your structure: considering your specific needs, this might not be what you need, but it should work.
I would love to hear any comments from someone with more experience with Docker than me!

Run a Docker container after setting up the network

I'm new to Docker and have run into some problems; can anyone help me?
I want to run a container with macvlan.
In my case, I will run a container with --net=none first,
then configure the network using the ip command (or using netns in Python).
Currently the order is:
run a container
run app inside container
setup network
My question is: how can I set up the network first, and then run the app?
That is, the order would be:
run a container
setup network
run app inside container
Maybe I could put the network configuration in a script and run it before everything else in the Dockerfile, but that way the network and the container are tightly coupled, and I'd have to edit the script manually for every container, every time.
So is there a better way to handle this situation?
Thanks in advance.
There is the --net=container:<name> argument to docker run, which shares the network namespace of the new container with an existing container.
So you could first launch a container with --net=none and a script that sets up the networking, then launch your application container with --net=container:network_container to use that networking stack. That keeps the network configuration and the application uncoupled.
Also, take a look at the pipework project if you haven't already.
In general though, I would suggest you are better off looking at existing solutions like Weave and Project Calico.

Is it wrong to run a single process in docker without providing basic system services?

After reading the introduction of the phusion/baseimage I feel like creating containers from the Ubuntu image or any other official distro image and running a single application process inside the container is wrong.
The main reasons in short:
No proper init process (that handles zombie and orphaned processes)
No syslog service
Based on these facts, most of the official Docker images available on Docker Hub seem to do things wrong. For example, the MySQL image runs mysqld as its only process and does not provide any logging facilities beyond the messages mysqld writes to STDOUT and STDERR, accessible via docker logs.
Now the question arises: what is the appropriate way to run a service inside a Docker container?
Is it wrong to run only a single application process inside a docker container and not provide basic Linux system services like syslog?
Does it depend on the type of service running inside the container?
Check this discussion for a good read on the issue. Basically, the official party line from Solomon Hykes and Docker is that containers should be as close to single-process micro-servers as possible. There may be many such servers on a single "real" server. If a process fails, you should launch a new container rather than try to set up init systems and the like inside the containers. So if you're looking for the canonical best practice, the answer is: no basic Linux services. It also makes sense when you think of many containers running on a single node: do you really want them all running their own copies of these services?
That being said, the state of logging in Docker is famously broken. Even Solomon Hykes, the creator of Docker, admits it's a work in progress. In addition, you normally need a little more flexibility for a real-world deployment. I normally mount my logs onto the host system using volumes and have a logrotate daemon etc. running in the host VM. Similarly, I either install sshd or leave an interactive shell open in the container so I can issue minor commands without relaunching, at least until I'm really sure my containers are air-tight and no more debugging will be needed.
Edit:
With Docker 1.3 and the exec command, it's no longer necessary to "leave an interactive shell open."
It depends on the type of service you are running.
Docker allows you to "build, ship, and run any app, anywhere" (from the website). That tells me that if an "app" consists of, or requires, multiple services/processes, then those should be run in a single Docker container. It would be a pain for a user to have to download and then run multiple Docker images just to run one application.
As a side note, breaking up your application into multiple images is subject to configuration drift.
I can see why you would want to limit a Docker container to one process. One reason is startup time: when creating a Docker provisioning system, it's essential to keep the time it takes to bring a container up to a minimum so that scaling sideways is fast. This means that if I can get away with running a single process per Docker container, I should go for it. But that's not always possible.
To answer your question directly. No, it's not wrong to run a single process in docker.
HTH
