Using environment variables post docker container linking in java - docker

I am trying to simulate docker container linking using a simple use case, which is as follows:
1) A docker container with a simple pub-sub Java application; there is a publisher and a subscriber, both within the same container. I used a Dockerfile to build this.
2) A docker container running RabbitMQ, pulled from Docker Hub.
Now I link both containers, and I am able to see the RabbitMQ environment variables in my container #1.
My question is: what is the best way to utilize these environment variables in my pub-sub container #1? I can always call System.getenv in Java and hard-code an environment variable name. Are there any better ways of doing it?

Hard-coding an environment variable seems OK here. The environment variables follow a standard format, like RABBITMQ_PORT_5672_TCP_ADDR and RABBITMQ_PORT_5672_TCP_PORT. The only bit of those names which would change is the label RABBITMQ, which is set based on the options to docker run. Whoever runs your container controls that bit, either with --link rabbitmq or --link someothercontainer:rabbitmq to set an alias. This just forms part of your container's "contract" with the outside world: the container must be run in a way that adds variables with the right alias.
Incidentally, this doesn't force you to use Docker links if you don't want to: you can always just pass in the environment variables yourself if RabbitMQ were on a different machine (e.g. --env RABBITMQ_PORT_5672_TCP_ADDR=1.2.3.4).
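For example, the application can read the link-injected variables and fall back to localhost when it runs outside Docker. This is only a minimal sketch, assuming the standard RabbitMQ Java client (com.rabbitmq.client) is on the classpath and the container was started with a rabbitmq link alias:

    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class RabbitEnvConfig {
        public static void main(String[] args) throws Exception {
            // Values injected by docker run --link <container>:rabbitmq;
            // fall back to localhost so the same code also runs un-linked.
            String host = System.getenv().getOrDefault("RABBITMQ_PORT_5672_TCP_ADDR", "localhost");
            String port = System.getenv().getOrDefault("RABBITMQ_PORT_5672_TCP_PORT", "5672");

            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost(host);
            factory.setPort(Integer.parseInt(port));

            try (Connection connection = factory.newConnection()) {
                System.out.println("Connected to RabbitMQ at " + host + ":" + port);
            }
        }
    }

The variable names encode only the alias and the port, so the same code keeps working whichever container is linked in under the rabbitmq alias.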

Related

Defining Docker runtime flags from within Dockerfile

We're running Docker on Digital Ocean App Engine, and can't pass flags (e.g. --cpus) to docker run.
We can't use docker-compose either.
Is it possible to set an environment variable (ARG? ENV?), e.g. CPUS=blah, in a way that can be picked up by the Docker instance?
Stated differently, are there internal environment variables that correspond to specific flags and that can be set from within the Dockerfile / environment itself?
As we can see in the very first steps of the official Docker documentation (this link):
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. You can easily share containers while you work, and be sure that everyone you share with gets the same container that works in the same way.
The main concept of Docker is to isolate each container from other containers, their environment variables, and anything else related to them. The only things we can access and modify are those that reside outside of the container (an application-level workaround is sketched after the list below), such as:
exposing ports
exposing volumes
mapping container ports to host ports
mapping container volumes to host volumes, and vice versa
...
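Since a Dockerfile cannot set docker run flags such as --cpus, one workaround is to pass an ordinary environment variable and let the application limit its own parallelism. This is only a sketch under the assumption that the platform lets you set a plain variable (the CPUS name below is hypothetical), and it is not a substitute for a real cgroup CPU limit:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CpuBudget {
        public static void main(String[] args) {
            // CPUS is a hypothetical variable, e.g. set with docker run -e CPUS=2
            // or via the platform's environment settings.
            int available = Runtime.getRuntime().availableProcessors();
            int requested = Integer.parseInt(System.getenv().getOrDefault("CPUS", String.valueOf(available)));
            int workers = Math.max(1, Math.min(requested, available));

            ExecutorService pool = Executors.newFixedThreadPool(workers);
            System.out.println("Using a pool of " + workers + " worker threads");
            pool.shutdown();
        }
    }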

How to expose host machine aliases to Docker container?

Docker has great documentation on linking containers - allowing one container to make use of another container's environment variables.
However, how would one go about exposing command line aliases (and their respective programs) of the host machine to the Docker container?
Or, perhaps the better way to go about this is to simply configure the Docker container to build from an image that has these aliases / "dotfiles" built in?
I don't think you're approaching Docker the way you should. A Docker container's purpose is to run a network application and expose it to the outside world.
If you need aliases for your application running inside a container, then you first have to build an image that contains the whole environment your app needs...
Or specify them in the Dockerfile while building your image.

docker containers communication on dev machine

I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch. I am confused as to how I can create a container that can be used in production and on my local machine (mac). How are people providing configuration like this these days?
So far I have come up with having my process take environment variables as arguments, which I can pass to the container with docker run -e. It seems unlikely that I would be doing this type of thing in production.
I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch
If elasticsearch is running in its own container on the same host (managed by the same docker daemon), then you can link it to your own container (at the docker run stage) with the --link option (which sets environment variables)
docker run --link elasticsearch:elasticsearch --name <yourContainer> <yourImage>
See "Linking containers together"
In that case, your container config can be static and known/written in advance, as it will refer to the search machine as 'elasticsearch'.
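As a minimal sketch, assuming the container was started with --link elasticsearch:elasticsearch (so the injected variables follow the ELASTICSEARCH_PORT_9200_TCP_* pattern for the default HTTP port 9200), the service can resolve the address from those variables and fall back to localhost on a dev machine:

    public class EsConfig {
        // Resolve the Elasticsearch address from the link-injected variables,
        // falling back to localhost:9200 for local (non-linked) development.
        public static String elasticsearchUrl() {
            String host = System.getenv().getOrDefault("ELASTICSEARCH_PORT_9200_TCP_ADDR", "localhost");
            String port = System.getenv().getOrDefault("ELASTICSEARCH_PORT_9200_TCP_PORT", "9200");
            return "http://" + host + ":" + port;
        }

        public static void main(String[] args) {
            System.out.println("Elasticsearch endpoint: " + elasticsearchUrl());
        }
    }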
How about writing it into the configuration file of your application and mounting the configuration directory into your container with -v?
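A small sketch of that approach, assuming a hypothetical config directory mounted with -v /host/config:/etc/myservice that contains a service.properties file with an elasticsearch.url key:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class FileConfig {
        public static void main(String[] args) throws IOException {
            Properties props = new Properties();
            // /etc/myservice is assumed to be bind-mounted from the host with -v,
            // so production and the dev machine can ship different files.
            try (FileInputStream in = new FileInputStream("/etc/myservice/service.properties")) {
                props.load(in);
            }
            String esUrl = props.getProperty("elasticsearch.url", "http://localhost:9200");
            System.out.println("Elasticsearch endpoint from mounted config: " + esUrl);
        }
    }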
To make it more organized, I use Ansible for orchestration. This way you can have a template of the configuration file for your application, while the actual parameters live in the variable file of the corresponding Ansible playbook at a centralized location. Ansible is in charge of copying the template over to the desired location and doing the variable substitution for you. It has also recently enhanced its Docker support.
Environment variables are absolutely fine (we use them all the time for this sort of thing) as long as you're using service names, not IP addresses. Even with IP addresses you'd have no problem, as long as you only have one ES instance and you're willing to restart your service every time the ES IP address changes.
You should really ask someone who knows for sure how you resolve these things in your production environments, because you're unlikely to be the only person in your org who has had this problem -- connecting to a database poses the same problem.
If you have no constraints at all, then you should check out something like Consul from HashiCorp. It'll help you a lot with this problem, if you are allowed to use it.
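As a rough sketch of what such service discovery buys you, a client can ask a local Consul agent where Elasticsearch lives through Consul's catalog HTTP API; the agent address and the service name "elasticsearch" below are assumptions:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ConsulLookup {
        public static void main(String[] args) throws Exception {
            // Assumes a Consul agent reachable on its default HTTP port (8500)
            // and a service registered under the name "elasticsearch".
            String consul = "http://localhost:8500";
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create(consul + "/v1/catalog/service/elasticsearch")).GET().build();
            HttpResponse<String> response =
                    HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
            // Each entry in the JSON response carries ServiceAddress/ServicePort
            // fields that the application can use to connect.
            System.out.println(response.body());
        }
    }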

Docker Container management in production environment

Maybe I missed something in the Docker documentation, but I'm curious and can't find an answer:
What mechanism is used to restart docker containers if they should error/close/etc?
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which complete an application without using docker compose? (as they say it is not production ready)
What mechanism is used to restart docker containers if they should error/close/etc?
Docker restart policies, as set with the --restart option to docker run. From the docker-run(1) man page:
--restart=""
    Restart policy to apply when a container exits (no, on-failure[:max-retry], always)
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which complete an application without using docker compose?
Well, you can of course use docker-compose if that is the best match for your requirements, even if it is not labelled as "production ready".
You can investigate larger container management solutions like Kubernetes or even OpenStack (although I would not recommend the latter unless you are already familiar with OpenStack).
You could craft individual systemd unit files for each container.

Is it ok to run docker from inside docker?

I'm running Jenkins inside a Docker container. I wonder if it's ok for the Jenkins container to also be a Docker host? What I'm thinking about is to start a new docker container for each integration test build from inside Jenkins (to start databases, message brokers etc). The containers should thus be shutdown after the integration tests are completed. Is there a reason to avoid running docker containers from inside another docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch in file systems that creates problems for the containers created inside parent containers.
From that blog post, he describes the following alternative,
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
I answered a similar question before on how to run a Docker container inside Docker.
To run docker inside docker is definitely possible. The main thing is that you run the outer container with extra privileges (starting with --privileged=true) and then install docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach for solving this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was considered by many to be a good solution for this type of problem. Now the trend is to use "sibling" containers instead. See the answer by #predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat however is that it requires a privileged container, which depending on your security needs may not be a viable alternative.
The alternative solution of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but it has a few drawbacks that stem from launching the container from a context different from the one in which it runs (i.e., you launch the container from within a container, yet it runs at the host's level, not inside the parent container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run Docker in Docker. We'll need to attach the Unix socket /var/run/docker.sock, on which the Docker daemon listens by default, as a volume to the parent container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes permission issues may arise on the Docker daemon socket, which you can work around with sudo chmod 757 /var/run/docker.sock.
It also requires running the container in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I was trying my best to run containers within containers, just like you, for the past few days, and wasted many hours. So far most people advised me to do things like using Docker's DinD image (which is not applicable in my case, as I need the main container to be Ubuntu) or to run some privileged command and map the daemon socket into the container (which never worked for me).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It is also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for this kind of application, and it has the most flexible options. The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it without bothering with all the tedious setup other people suggest. They have many pre-built solutions to address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers; they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally SSH into your Ubuntu main container without being able to access anything on the host machine. From your main container you can create all kinds of containers, just like a normal local system does. That built-in systemd is very important for setting up Docker conveniently inside the container.
One simple, common command to run a container with the sysbox runtime:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more at their GitHub:
https://github.com/nestybox/sysbox
Quick link to the instructions on how to deploy a simple sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
