Put an application's public URL in its Docker Compose environment

I have a Python API that has to know its public address to properly create links to itself (needed when doing paging and other HATEOAS stuff) in the responses it creates. The address is given to the application as an environment variable.
In production this is handled by Terraform, but I also have extensive local tests that use Docker Compose. In the paging tests I currently have to detect that I'm running locally and replace the placeholder address I put in the app's env with http://localhost:<apps_bound_port> before following the links.
I don't want to do that. I'd like a way to put the port assigned by Docker into the app's environment variables. The problem wouldn't exist if I used fixed ports (then I could just put something like http://localhost:8000 in the public address variable), but I need to be able to run multiple Compose instances at once, so fixed ports won't work.
I know I can pass environment variables from the shell running docker-compose to the containers, but I don't know of a way to insert the generated port using this approach.

The only solution I have for now is to find a free port before Compose runs and pass it in as an environment variable (API_PORT=<FREE_PORT> docker-compose up), while mapping the port like this in docker-compose.yml:
ports:
- "${API_PORT}:8000"
This isn't ideal, because I run Compose both from the shell (with make) and from Python tests, so I'd need to duplicate the logic for finding a free port and putting it into an env variable in both places.
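For reference, this is roughly what the shell side of that workaround looks like (the python one-liner and the PUBLIC_URL variable name are just how I'd sketch it, not anything Compose provides):
# Let the OS pick a free port, then hand it to Compose.
API_PORT=$(python -c 'import socket; s = socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')
# The app reads PUBLIC_URL to build its links; Compose maps the free host port to the app's port 8000.
PUBLIC_URL="http://localhost:${API_PORT}" API_PORT="${API_PORT}" docker-compose up -d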
Is there something I'm missing, or should I create a feature request for Docker Compose?

Related

Where do we get the list of environment variables for the NiFi Docker image

I'm a beginner at NiFi setup. I'm planning to start a NiFi cluster on Kubernetes. In a normal installation, I saw that we can change the NiFi configuration in the file 'nifi.properties'. But when it comes to the Docker image, I saw that we can also change those settings using environment variables. In most cases, a property from the nifi.properties file can easily be converted into its equivalent environment variable.
Eg:
nifi.web.http.host <=> NIFI_WEB_HTTP_HOST
But in some cases, the environment variable is different. Eg:
nifi.zookeeper.connect.string != NIFI_ZK_CONNECT_STRING
Where do we get the full list of NiFi environment variables for the Docker image? Any help like links or directions is very much appreciated.
You need to look into the documentation (or the source code) of the NiFi Docker images you are using, for example agturley/nifi and apache/nifi.
When you enter the Docker container you can see secure.sh and start.sh under the path /opt/nifi/scripts. These are the scripts that perform all the prop_replace substitutions.
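Roughly speaking, prop_replace is just a sed substitution keyed on the property name; something along these lines (illustrative, not the exact NiFi script):
# Illustrative only: overwrite a property's value in nifi.properties when the env var is set.
prop_replace() {
  local key="$1"
  local value="$2"
  local file="${3:-/opt/nifi/nifi-current/conf/nifi.properties}"
  [ -n "${value}" ] && sed -i -e "s|^${key}=.*$|${key}=${value}|" "${file}"
}
prop_replace nifi.web.http.host "${NIFI_WEB_HTTP_HOST}"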

Docker namespace, docker on virtualbox, mirror environment

Let's assume a scenario where I'm using a set of CLI docker run commands to create a whole environment of containers and networks (bridge type in my case), and to connect the containers to particular networks.
Everything works well as long as I only want one such environment on a single machine.
But what if I want to have, on the same machine, a similar environment to the one I've just created, but for a different purpose (testing)? I run into name collisions, since I can't create and start containers and networks with the same names.
So far I've tried to start the second environment the same way I did the first, but with a prefix on all container and network names. That worked, but had a flaw: in the application, all requests to URIs broke, since they had the structure
<scheme>://<container-name>:<port-number>
and the application was not able to reach <prefix-container-name>.
What I want to achieve is to have an exact copy of the first environment running on the same machine as the second environment that I could use to perform the application tests etc.
Is there any concept of namespaces or something similar to it in Docker?
Is there a command I could put before all the docker run etc. commands I use to create the environment, so that I'd have just two bash scripts that differ only by the namespace command at their beginning?
Can using a virtual machine, i.e. Oracle VirtualBox, be the solution to my problem? Create a VM for the second environment? Isn't that overkill, and will it add an additional set of troubles?
Perhaps there is a kind of --hostname for the docker run command that would allow other containers to reach the container by that name? Unfortunately --hostname only makes the name resolvable from inside the container itself, not from any other container. Perhaps there is an option or command that can create an alias, virtual host, or whatever magic common name I could put into the apps' URIs (<scheme>://<magic-name>:<port-number>), so that creating a second environment with different container and network names causes no problem, as long as that magic-name is available on the environment's network.
My need for an exact copy of the environment comes from the tests I want to run, to check whether failures also occur at the dependency level; I think this is a fairly standard scenario in a continuous integration process. Are there any dedicated open source solutions to what I want to achieve? I don't use Docker Compose, just a bash script with all the docker CLI commands to get the whole environment up and running.
Thank you for your help.
Is there any concept of namespaces or something similar to it in Docker?
Not really, no (but keep reading).
Can using a virtual machine [...] be the solution to my problem? ... Isn't that overkill, will it add an additional set of troubles?
That's a pretty reasonable solution. That's especially true if you want to further automate the deployment: you should be able to simulate starting up a clean VM and then running your provisioning script on it, then transplant that into your real production environment. Vagrant is a pretty typical tool for trying this out. The biggest issue will be network connectivity to reach the individual VMs, and that's not that big a deal.
Perhaps there is a kind of --hostname for the docker run command that will allow other containers to access the container by using this name?
docker run --network-alias is very briefly mentioned in the docker run documentation and has this effect. docker network connect --alias is slightly more documented and affects a container that's already been created.
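As a quick illustration (the network, container and image names here are invented):
# Give the container an extra DNS name on a user-defined network at creation time...
docker network create app-net
docker run -d --network app-net --network-alias magic-name --name tenant1-api my-api-image
# ...or attach an already-created container to a network under an alias.
docker network connect --alias magic-name app-net tenant2-api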
Are there any dedicated open source solutions to what I want to achieve?
Docker Compose mostly manages this for you, if you want to move off of your existing shell-script solution: it puts a project-name prefix on all of the networks and volumes it creates, and creates a network alias for each container matching its service name in the YAML file. If your host volume mounts are relative to the current directory, then that content is fairly isolated too. The one thing it can't do for you is pick different host ports for each copy of the stack, so you have to resolve those conflicts yourself.
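For example, assuming the published host port is parameterized in the YAML (the variable name is made up), two copies can run side by side under different project names:
# In docker-compose.yml:
#   ports:
#     - "${WEB_PORT}:8000"
WEB_PORT=8001 docker-compose -p env-prod up -d
WEB_PORT=8002 docker-compose -p env-test up -d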
Kubernetes has a concept of a namespace which is in fact exactly what you're asking for, but adopting it is a substantial investment and would involve rewriting your deployment sequence even more than Docker Compose would.

How do I finalize my Docker setup and what is it actually called?

I am pretty new to Docker. After reading up on exactly what I needed, I figured out how to create a pretty nice Docker setup: one in which I can start up multiple systems using a single docker-compose.yml file.
I am currently using this for testing specific PHP code on different PHP and MySQL versions. The file structure looks something like this:
./mysql55/Dockerfile
./mysql56/Dockerfile
./mysql57/Dockerfile
./php53/Dockerfile
./php54/Dockerfile
./php56/Dockerfile
./php70/Dockerfile
./php71/Dockerfile
./php72/Dockerfile
./web (shared folder with test files available on all php machines)
./master_web (web interface to send test request to all possible versions using one call)
./docker-compose.yml
In the docker-compose file I set up different containers, most referring to the local Dockerfiles and some referring to online image names. When I run docker-compose up, all containers start as expected with the configured network configuration, and I'm able to use it as desired.
I would first of all like to know what this setup is called. Is this called a "docker swarm", or is such a setup called something else?
Secondly, I'd like to make one "compiled/combined" artifact (image, container, swarm, engine, machine, or whatever it is called) out of this, which I can save without having to depend on external sources again. Of course the docker-compose.yml file will keep working as long as all the referred external sources are still available, but I'd like to publish my fully configured setup as is. How do I do that?
You can publish built images to a Docker registry. You can set up your own or use a third-party service.
After that, you need to prefix your image names with your registry's IP/DNS name in docker-compose.yml. This way, you can deploy it anywhere docker-compose is installed (docker-compose itself can be run as a Docker container too); you just need to copy your docker-compose.yml file there.
docker-machine is a tool to deploy to multiple machines, as is docker swarm.
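A rough sketch of what that could look like for your layout (the registry address, image names and tags are placeholders):
# docker-compose.yml (excerpt)
services:
  php72:
    build: ./php72
    image: registry.example.com/mytests/php72:1.0
  mysql57:
    build: ./mysql57
    image: registry.example.com/mytests/mysql57:1.0

# Build everything locally, then push the tagged images to the registry:
docker-compose build
docker-compose push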

Is it possible to specify a Docker image build argument at pod creation time in Kubernetes?

I have a Node.JS based application consisting of three services. One is a web application, and two are internal APIs. The web application needs to talk to the APIs to do its work, but I do not want to hard-code the IP address and ports of the other services into the codebase.
In my local environment I am using the nifty envify Node.JS module to fix this. Basically, I can pretend that I have access to environment variables while I'm writing the code, and then use the envify CLI tool to convert those variables to hard-coded strings in the final browserified file.
I would like to containerize this solution and deploy it to Kubernetes. This is where I run into issues...
I've defined a couple of ARG variables in my Docker image template. These get turned into environment variables via RUN export FOO=${FOO}, and after running npm run-script build I have the container I need. OK, so I can run:
docker build . -t residentmario/my_foo_app:latest --build-arg FOO=localhost:9000 --build-arg BAR=localhost:3000
And then push that up to the registry with docker push.
My qualm with this approach is that I've only succeeded in moving the hard-coded variables from the codebase into the container image. What I really want is to define the paths at pod initialization time. Is this possible?
Edit: Here are two solutions.
PostStart
Kubernetes comes with a lifecycle hook called PostStart. This is described briefly in "Container Lifecycle Hooks".
This hook fires as soon as the container reaches ContainerCreated status, i.e. the container is done being pulled and is fully initialized. You can then use the hook to jump into the container and run arbitrary commands.
In our case, I can create a PostStart event that, when triggered, rebuilds the application with the correct paths.
Unless you created a Docker image that doesn't actually run anything (which seems wrong to me, but let me know if this is considered an OK practice), this does require some duplicate work: stopping the application, rerunning the build process, and starting the application up again.
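For what it's worth, the hook itself is only a few lines in the container spec; something along these lines (the rebuild script path is from my setup and just illustrative):
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "./scripts/build.sh"]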
Command
As pointed out in a comment, this event doesn't necessarily fire at the right time. Here's another way to do it that's guaranteed to work (and hence, superior).
A useful Docker container ends with some variant on a CMD serving the application. You can overwrite this run command in Kubernetes, as explained in the "Define a Command and Arguments for a Container" section of the documentation.
So I added a command to the pod definition that ran a shell script that (1) rebuilt the application using the correct paths, provided as an environment variable to the pod and (2) started serving the application:
command: ["/bin/sh"]
args: ["./scripts/build.sh"]
Worked like a charm.
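Put together, the relevant part of my pod definition looks roughly like this (the env values shown are placeholders):
containers:
- name: web
  image: residentmario/my_foo_app:latest
  env:
  - name: FOO
    value: "foo-service:9000"   # placeholder path, injected at pod creation time
  - name: BAR
    value: "bar-service:3000"   # placeholder path
  command: ["/bin/sh"]
  args: ["./scripts/build.sh"]  # rebuilds with the env values above, then starts serving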

Several docker stacks with the same compose file but different ports

I would like to run several instances of a multi-container application at the same time using the same compose file. One of the containers in the application accepts websockets on a certain port.
I have an nginx proxy to forward different domains or locations to different instances of the application. The instances are actually different tenants using the application.
I would like to simply be able to run:
docker stack deploy -c docker-stack.yml tenant1
docker stack deploy -c docker-stack.yml tenant2
And somehow get different ports to the apps, which I then can use in the proxy to forward different websocket connections to different application instances, either using locations or virtual hosts.
So either:
ws://tenant1.mydomain.com
or
ws://mydomain.com/tenant1
How to configure the proxy to do this can surely be figured out. I've started to read a bit about https://github.com/jwilder/nginx-proxy, which seems nice. However, it requires that I set the virtual host name as an environment variable for each app instance, and I can't seem to find a way to pass arguments with my docker stack deploy command.
Ideally I would not care about the exact ports; they could just as well be random. But they need to somehow be known to the nginx proxy so it can forward to them. I want to be able to easily spin up a new app-instance (tenant) stack and just set up the proxy for that name (or, even better, have the proxy handle that automatically based on the app's naming).
Bonus if both examples above work (both virtual host and location), since that would make it possible to test and develop without creating subdomains / new domains.
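The closest I've come so far is a workaround I've seen suggested: let docker-compose do the variable substitution and pipe the resolved file into docker stack deploy (assuming a Docker version that can read the compose file from stdin); the variable names below are just examples:
# docker-stack.yml (excerpt) -- per-tenant values come from placeholders:
#   environment:
#     - VIRTUAL_HOST=${VIRTUAL_HOST}
#   ports:
#     - "${WS_PORT}:8080"
VIRTUAL_HOST=tenant1.mydomain.com WS_PORT=9001 \
  docker-compose -f docker-stack.yml config | docker stack deploy -c - tenant1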
Suggestions?