Run docker after setting up the network

I'm new to Docker and I've run into some problems. Can anyone help me?
I want to run a container with macvlan.
In my case, I will run a container with --net=none first.
Then I configure the network using the ip command (or using netns in Python).
The order is:
run the container
run the app inside the container
set up the network
My question is: how can I set up the network first, and only then run the app?
The order I want is:
run the container
set up the network
run the app inside the container
Maybe I could write the network configuration to a script and run it before everything else in the Dockerfile. But that way the network and the container are tightly coupled, and I would have to edit the script manually for every container.
So is there a better way to handle this situation?
Thanks in advance.

There is a --net=container:<name> argument to docker run which makes the new container share the network namespace of an existing container.
So you could first launch a container with --net=none and a script to set up the networking, then launch your application container with --net=container:network_container to use that networking stack. That would keep the network configuration and application uncoupled.
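A minimal sketch of that approach, assuming hypothetical image and script names:

docker run -d --name net-holder --net=none busybox sleep 86400   # idle container that owns the network namespace
./setup-macvlan.sh net-holder   # hypothetical host-side script that configures the namespace with ip/ip netns
docker run -d --name app --net=container:net-holder my-app-image   # app container joins net-holder's network stack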
Also, take a look at the pipework project if you haven't already.
In general though, I would suggest you are better off looking at existing solutions like Weave and Project Calico.

Related

Running a docker container on the host from a docker container

I'm trying to dockerize two .NET console applications, one of which depends on the other.
When I run the first container, I need it to run another container on the host, write a parameter to its stdin, and wait for it to finish the job and exit.
What will be the best solution to achieve this?
Running a container inside a container seems like a bad solution to me.
I've also thought about running another process behind a webserver (nginx or something) on the host that receives the request as an HTTP request and executes a docker run command on the host, but I'm sure there is a better solution (this way the webserver would just run on the host and not inside a container).
There is also this solution but it seems to have major security issues.
I've also tried using the Docker.DotNet library, but it does not help with my problem.
Any ideas for the best solution?
Thanks in advance.
EDIT:
I will be using docker-compose, but the problem is that the second container does not run and listen at all times; like the Hello-World container, it is called, performs its job, and exits.
EDIT2: FIX
I've implemented Redis as a message broker to communicate between the different services. While it changed the requirements a little (the containers now always run and listen to Redis), it helped me solve the issue.
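For illustration, the docker-compose.yml for that pattern could look something like this (service and image names here are made up):

services:
  redis:
    image: redis:7
  service-a:
    image: my-first-console-app
    depends_on:
      - redis
  service-b:
    image: my-second-console-app
    depends_on:
      - redis

Both applications connect to redis:6379 and exchange messages through it instead of writing to each other's stdin.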
I am not sure I understand correctly what you need to do, but if you simply need to start two containers in parallel the simplest way I can think of is docker-compose.
Another way is with a Python or bash/bat script that launches the containers independently (in Python you can either use the Docker API or do it manually with the subprocess module). This allows you to do other things as well, like writing to the stdin of one container, as you stated.
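As a side note, writing to a container's stdin also works directly from the shell; a quick sketch with a placeholder image name:

echo "some-parameter" | docker run -i --rm my-worker-image

The -i flag keeps stdin open so the piped input reaches the containerized process.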

Use docker image in another docker image

I have two docker images:
CLI tool
Webserver
The CLI tool is a very heavy Docker image that takes hours to build. I am trying to call the CLI tool from the webserver but am not sure how to proceed from here. Is there a way to make the command from the first image available in the second?
At this point I tried working with volumes, but no luck. Thanks!
The design of Docker sort-of assumes that containers communicate through a network, not through the command line. So the cleanest solution is to create a simple microservice that wraps the CLI tool and can be called through HTTP.
As a quick and dirty hack, you could also use sshd as such a microservice without writing any code.
An alternative that doesn't involve the network is to make the socket of the Docker daemon available in the webserver container using a bind mount:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Then you should be able to communicate with the host daemon from within the container, provided that you have installed the docker command line tool in the image. However, note that this makes your application strongly dependent on Docker, which might not be ideal. Also note that it essentially gives the container root access to the host system!
(Note that this is different from Docker-in-Docker, which is running a second Docker daemon inside a container and is generally not recommended except for specialized use cases.)
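Assuming the socket is mounted and the docker CLI is installed in the webserver image, calling the CLI tool from inside the webserver container then reduces to something like this (image and command names are placeholders):

docker run --rm my-cli-tool-image my-cli-command --input /shared/data

Keep in mind that the spawned container is a sibling of the webserver, not a child, so any files the two exchange need to live in a volume or host path visible to both.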

What is the best way to execute a shell script over the network with docker-compose on an image without a network service?

I'm looking for suggestions on a good way to access a container that is based on an image without a network service. Since I'm using docker-compose, the containers only talk to one another via the network, and the container I need just runs a shell script.
Ideas? Should I really be considering adding a webserver alongside my code just to access a shell script between containers?

Is it possible to run a command inside a Docker container from another container?

Here's my scenario.
I have 2 Docker containers:
C1: a container with Ruby (but it could be anything else) that prepares data files on which a calculation must be performed in the Julia language
C2: a container with Julia (or R, or Octave...) used to perform the calculation, so as to avoid installing Julia on the same system or container that runs the Ruby code
From the host, obviously, I have no problem doing the processing.
Usually when two containers are linked (or belong to the same network) they communicate with each other over the network by exposing some port. In this case, Julia does not expose any port.
Can I run a command on C2 from C1, similar to what can be done between the host and C2?
If so, how?
Thanks!
Technically yes, but that's probably not what you want to do.
The Docker CLI is just an interface to the Docker service, which listens at /var/run/docker.sock on the host. Anything that can be done via the CLI can be done by communicating with this service directly, and you can mount this socket into a running container (C1) as a volume to allow that container to speak to its host's Docker service.
Docker has a few permissions that need to be set to allow this; older versions let containers run in "privileged" mode, in which case they're allowed to (amongst other things) speak to /var/run/docker.sock with the authority of the host. I believe newer versions of Docker split this permission system up a bit more, but you'd have to look into that. Making this work in swarm mode might be a little different as well.
Using this API at the code level, without installing the full Docker CLI within the container, is certainly possible (using a library or coding up your own interaction). A working example is JupyterHub + DockerSpawner, which has one privileged Hub server that instantiates new Notebook containers for each logged-in user.
I just saw that you explicitly state that the Julia container has no port/interface. Could you wrap that code in a larger container that provides a server interface while managing the serverless Julia program as a "local" process within the same container?
I needed to solve the same problem. In my case, it all started when I needed to run some scripts located in another container via cron. I tried the following scenarios, with no luck:
Placing all the logic in one container and forgetting about the two-container scenario, so inter-container execution is no longer needed: this turned out to be a bad idea, since the whole Docker concept is to execute a single task in each container. In any case, creating a Dockerfile to build an image with both my main service (PHP in my case) and a cron daemon proved to be quite messy.
Communicating between containers via SSH: I then tried building an image that would take care of running the cron daemon (that would be the "Docker" approach to my problem), but the bad idea was to execute the commands from each cronjob by opening an SSH connection to the other container (in your case, C1 connecting via SSH to C2). It turns out it's quite clumsy to implement inter-container SSH logins, and I kept running into problems with permissions, passwordless logins, and port routing. It worked in the end, but I'm sure it would add potential security issues, and I didn't feel it was a clean solution.
Implementing some sort of API that I could call via HTTP requests from one container to another, using something like curl or wget: this felt like a great solution, but it ultimately meant adding a secondary service to my container (an nginx to handle the HTTP connections), and dealing with HTTP requests and timeouts just to execute a shell script felt like too much of a hassle.
Finally, my solution was to run "docker exec" from within the container. The idea, as described by scnerd, is to make sure the Docker client inside the container talks to the Docker service on your host:
To do so, you must install docker in the container you want to execute your commands from (in your case, C1), by adding a line like this to your Dockerfile (for Debian):
RUN apt-get update && apt-get -y install docker.io
To let the docker client inside your container interact with the docker service on your host, you need to add /var/run/docker.sock as a volume to your container (C1). With Docker compose this is done by adding this to your docker service "volumes" section:
- /var/run/docker.sock:/var/run/docker.sock
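Put together, the relevant fragment of a docker-compose.yml might look like this (service and image names are illustrative):

services:
  cron:
    image: my-cron-image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock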
Now when you build and run your Docker image, you'll be able to execute "docker exec" from within the container with a command like this, and you'll be talking to the Docker service on the host:
docker exec -u root C2 /path/your_shell_script
This worked well for me. Since, in my case, I wanted the Cron container to launch scripts in other containers, it was as simple as adding "docker exec" commands to the crontab.
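For example, a crontab entry along these lines runs the script in C2 every five minutes (the schedule and log path are just examples):

*/5 * * * * docker exec -u root C2 /path/your_shell_script >> /var/log/c2_script.log 2>&1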
This solution, as scnerd also points out, might not be optimal, and I agree with his comments about your structure: given your specific needs, this might not be what you need, but it should work.
I would love to hear any comments from someone with more experience with Docker than me!

Is it ok to run docker from inside docker?

I'm running Jenkins inside a Docker container. I wonder if it's OK for the Jenkins container to also be a Docker host? What I'm thinking about is starting a new Docker container for each integration-test build from inside Jenkins (to start databases, message brokers, etc.). The containers would thus be shut down after the integration tests are completed. Is there a reason to avoid running Docker containers from inside another Docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch between file systems, which causes problems for the containers created inside parent containers.
From that blog post, he describes the following alternative:
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
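With the socket mounted and the docker CLI installed in the Jenkins image, an integration-test job could then spin up and tear down a throwaway database like this (the names here are examples):

docker run -d --name it-postgres -e POSTGRES_PASSWORD=secret postgres
# ... run the integration tests against it ...
docker rm -f it-postgres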
I answered a similar question before on how to run a Docker container inside Docker.
Running Docker inside Docker is definitely possible. The main thing is that you run the outer container with extra privileges (starting it with --privileged=true) and then install Docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach for this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was long considered by many a good solution for this type of problem. Now the trend is to use "sibling" containers instead. See the answer by #predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat, however, is that it requires a privileged container, which depending on your security needs may not be a viable option.
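For reference, the official image is started with the very flag that raises those security concerns (the container name here is arbitrary):

docker run --privileged -d --name dind-daemon docker:dind
docker exec -it dind-daemon docker run hello-world   # runs a container on the inner daemon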
The alternative of running Docker using sibling containers (aka Docker-out-of-Docker, or DooD) does not require a privileged container, but it has a few drawbacks that stem from launching containers from a context different from the one in which they run (i.e., you launch a container from within a container, yet it runs at the host level, not inside that container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run Docker in Docker; we'll need to attach the Unix socket /var/run/docker.sock, on which the Docker daemon listens by default, as a volume to the parent container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes permission issues may arise with the Docker daemon socket, which you can work around with sudo chmod 757 /var/run/docker.sock.
It also requires running Docker in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I tried my best to run containers within containers, just like you, for the past few days, and wasted many hours. So far most people have advised me to do things like using Docker's DinD image, which is not applicable in my case because I need the main container to be Ubuntu, or to run some privileged command and map the daemon socket into the container (which never worked for me).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It's also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for this kind of application. It also has the most flexible options. The free edition of Nestybox is perhaps the best approach as of Nov 2022. I highly recommend trying it instead of bothering with all the tedious setups other people suggest. They have many pre-built solutions that address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers, and they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally SSH into your Ubuntu main container without being able to access anything on the host machine. From your main container you can create all kinds of containers, just like a normal local system does. The built-in systemd is very important for setting up Docker conveniently inside the container.
One simple, common command to run a container with the sysbox runtime:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more on their GitHub:
https://github.com/nestybox/sysbox
A quick link to the instructions for deploying a simple sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
