Several docker stacks with the same compose file but different ports - docker

I would like to run several instances of a multi-container application at the same time using the same compose file. One of the containers in the application accepts websockets on a certain port.
I have an nginx proxy to forward different domains or locations to different instances of the application. The instances are actually different tenants using the application.
I would like to simply be able to run:
docker stack deploy -c docker-stack.yml tenant1
docker stack deploy -c docker-stack.yml tenant2
And somehow get different ports for the apps, which I can then use in the proxy to forward different websocket connections to different application instances, using either locations or virtual hosts.
So either:
ws://tenant1.mydomain.com
or
ws://mydomain.com/tenant1
How to configure the proxy to do this can surely be figured out. I've started to read a bit about https://github.com/jwilder/nginx-proxy, which seems nice. However, it requires that I set the virtual host name as an environment variable for each app instance, and I can't seem to find a way to pass arguments with my docker stack deploy command.
Ideally I would not have to care about the exact ports; they could just as well be random. But they need to somehow be known to the nginx proxy so it can forward traffic. I want to be able to easily spin up a new app-instance (tenant) stack and just set up the proxy for that name (or, even better, have the proxy handle that automatically based on the app's name).
Bonus if both examples above work (both virtual host and location), since that would make it possible to test and develop without creating subdomains / new domains.
Suggestions?
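To illustrate, something along these lines is what I'm picturing; the image name, port and domain are just placeholders, and I'm assuming compose-file variable substitution works with docker stack deploy on my Docker version:

version: "3.3"
services:
  app:
    image: myorg/websocket-app            # placeholder image
    environment:
      VIRTUAL_HOST: ${TENANT}.mydomain.com
    ports:
      - "3000"                            # container port only, so swarm should pick a random published port
    deploy:
      replicas: 1

TENANT=tenant1 docker stack deploy -c docker-stack.yml tenant1
TENANT=tenant2 docker stack deploy -c docker-stack.yml tenant2

If the substitution doesn't happen at deploy time, I suppose the file could be resolved first with docker-compose config and the result fed to docker stack deploy.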

Related

How Do I Configure Docker Containers Behind A Load Balancer?

My IT infrastructure department has provided me with the following setup: a Netscaler load balancer (lb) in front of 3 virtual machines (vm01, vm02, vm03). Each virtual machine was set up with IIS.
I have installed Docker Engine on all three virtual machines and have replicated the same 3 containers (appcontainer1, appcontainer2, appcontainer3) on all 3 virtual machines. Each container contains a .NET Core Web API application (api1, api2, api3).
Each container is configured to expose its port 80 for access to the API and is mapped to a port on the virtual machine where it is running. In other words, appcontainer1 is run with docker run -p 8091:80, appcontainer2 with docker run -p 8092:80, and appcontainer3 with docker run -p 8093:80.
The problem I am running into is how to call my web applications from a client machine. For example, if I wanted to directly call api1 on vm01, I would call vm01.domain.com:8091, but how do I make a call to lb.domain.com:8091 and have it resolve correctly to one of the virtual machines?
Do I configure the netscaler load balancer to be a reverse proxy and forward the port along to the virtual machines?
Do I configure a separate DNS entry per application (api1.domain.com, api2.domain.com, api3.domain.com) and configure IIS (or nginx or Apache) on each virtual machine to resolve to the appropriate port?
Is there a way to configure Docker to do this?
Am I doing it all wrong and overthinking the whole thing?
Should I be using some sort of container orchestration instead?
Is there a sensible way to do this without bothering the infrastructure team to reconfigure everything?
You need to set up IIS on each VM as a reverse proxy with the ARR (Application Request Routing) module. There are a few quirks that MAY arise (hello Microsoft) during this process and a few tricks you will need. I cannot say anything about the load balancer, though. Still, it shouldn't be hard to configure it to distribute the load evenly across the machines. All you need is to tell the LB to direct any call to lb.domain.com:XXXX to one of the VMs in a round-robin manner. You can probably vary the port too, which would let you distribute your traffic across 3 VMs x 3 containers = 9 containers.
However, it is recommended not to expose the Kestrel server directly to the internet; put it behind IIS or something similar instead. To configure IIS to act as a reverse proxy, you can either build 3 sites and bind them to the corresponding ports with minimal configuration, or use a single site and resolve the incoming requests with rewrite rules. To be honest, IIS is a pain to use with Docker.
BUT what I actually recommend is to use swarm, if your OS supports it, and expose a single port per VM. The supported versions are one of:
WS2019,
WS2016 with the 1709 update or later (these have no GUI),
Windows 10 with the 1709 update.
Swarm is still problematic on Windows :/ It also has very frustrating, seemingly random errors involving localhost:PORT and the like. For instance, I cannot access my containers on my server (WS2016, pre-1709) using a localhost:PORT combination. The same goes for my development machine (latest Win10), which has only recently become an issue; it was fine before "something" happened and it stopped working.
If you are flexible about which proxy to use, I recommend taking a look at nginx or Kubernetes, or, if you are on the experimental side, Traefik, which allows you to get away without using a container orchestration tool (i.e. swarm).
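To sketch the swarm route (image names here are placeholders, and this assumes the routing mesh behaves on your Windows version): after docker swarm init on one VM and docker swarm join on the other two, each API is published once and the mesh makes that port answer on every node, so the load balancer only has to spray lb.domain.com:809X across the three VMs:

docker service create --name api1 --publish 8091:80 --replicas 3 mycompany/api1
docker service create --name api2 --publish 8092:80 --replicas 3 mycompany/api2
docker service create --name api3 --publish 8093:80 --replicas 3 mycompany/api3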

How docker containers can communicate in PCF

I have multiple docker containers. Locally, they talk to each other using a network defined in compose.yml. I have pushed my containers to PCF, only I don't know how to make them talk to each other. Can anyone help me out?
My understanding is that you would utilize Cloud Foundry's container to container networking to do this.
https://docs.cloudfoundry.org/concepts/understand-cf-networking.html
By default, no connections are allowed between containers but you can use the cf cli to allow connections on specific ports between your applications. Your applications just need to be configured to start and listen on the ports that you allow.
While it's not docker specific, there's a good example here.
https://github.com/cloudfoundry/cf-networking-examples/blob/master/docs/c2c-no-service-discovery.md
Using docker should be minimally different. You'll obviously need to create your own docker images and make sure they listen on the correct ports; otherwise it's just an adjustment to how you push the application (i.e. pass the docker image name to cf push). The cf add-network-policy commands should be the same.
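For example, with two hypothetical apps called frontend and backend (the exact flag syntax varies a little between cf CLI versions), it would look roughly like this:

cf push frontend --docker-image myorg/frontend
cf push backend --docker-image myorg/backend --no-route
cf add-network-policy frontend --destination-app backend --protocol tcp --port 8080

After that, traffic from frontend to backend on port 8080 is allowed over the container network; backend just needs to actually listen on that port.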
Hope that helps!
** UPDATE **
If you are looking for docker-compose-like behavior, i.e. run one command and deploy multiple apps, you can achieve this with cf push and a manifest.yml file.
The manifest.yml file allows you to define multiple applications, so you can use it to deploy a series of applications that work together, like you often see with docker-compose.
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html
You have quite a bit of flexibility with manifest.yml: you can deploy buildpack-based apps or docker-image-based apps, and you can configure routes, bound services, health checks, memory/disk quotas and, if you're on a new enough version, even processes and sidecars. That said, it can't do 100% of what you can with the cf cli; for example, you cannot control the above-mentioned network policy using a manifest.yml. If you need to control something not exposed through manifest.yml, the other option would be to script the deployment.
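As a rough sketch (app names, images and the route are made up), a manifest for two docker-image based apps could look like this:

applications:
- name: frontend
  docker:
    image: myorg/frontend:latest
  memory: 256M
  routes:
  - route: frontend.example.com
- name: backend
  docker:
    image: myorg/backend:latest
  memory: 512M
  no-route: true
  health-check-type: process

Running cf push from the directory containing that manifest deploys both apps in one go; the network policy between them would still be added separately with cf add-network-policy.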

How to manage multiple projects on Docker?

In our company we have ~7 projects, each based on Docker. Each project contains base services like MySQL, Nginx and PHP. Some of the projects communicate with other projects. Because many services sit on the same ports, we create a new docker host (docker-machine) for each project. From here a few problems arise:
VirtualBox assigns a random IP to each Docker host, depending on the order in which they are started.
It is hard to switch from project to project; you need to set different shell envs all the time, and it is easy to make a mistake.
Well, I'm searching for a more enterprise-grade solution to manage many docker machines, or some technique that can help with the current situation.
I had similar problems last summer.
First, I started to deploy my projects to a swarm cluster as services, instead of clustering several docker VMs. This let me work with the services using only their service IDs. Deciding how to split projects into services matters; this part may be cumbersome depending on your project.
https://docs.docker.com/engine/swarm/swarm-tutorial/deploy-service/
Then I built my configuration and monitoring software once on the swarm manager and reused it. You can use your automation tools on the swarm manager to control services.
A virtual machine consumes resources, and it is better to avoid one if it is not necessary. Instead you could deploy the projects in a docker swarm on bare metal.
But because every project has an entry point that needs to be accessible from the outside world (i.e. https://site1.com and https://site2.com), you can't expose the same port (443 or 80) for all the frontend services in the same swarm. For this you can use a reverse proxy like HAProxy or Nginx that forwards requests to the right service based on the hostname. The reverse proxy can itself be a service in the swarm. In this setup you should no longer expose the projects' ports at all.
A reverse proxy has many other advantages, like SSL termination (which makes SSL certificate management a lot easier).
If you add the projects to the same custom network, then services from different projects can communicate securely and directly, using their docker service names and internal ports (i.e. 80).
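As a rough sketch (network, project and image names invented), you would create a shared overlay network once:

docker network create --driver overlay proxy-net

and then each project's stack file would attach its public-facing service to it, publishing no ports of its own:

version: "3.3"
services:
  web:
    image: project1/frontend        # placeholder image
    networks:
      - proxy-net
networks:
  proxy-net:
    external: true

Deployed with docker stack deploy -c project1.yml project1, the service is reachable by the proxy (and by other projects on proxy-net) as project1_web on its internal port, and the reverse proxy picks the right upstream from the Host header.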

Container delivery on amazon ecs

I’m using Amazon ECS to auto deploy my containers on uat/production.
What is the best way to do that?
I have a REST API with several front-end clients.
Should I package my API and nginx in the same container?
And do the same thing with the other front-end clients?
Or do I have to write a big task definition to bring together all my containers (db, nginx, php, api, clients) :(, which would mean redeploying my whole infrastructure at each push to uat/prod?
I'm very confused.
I would avoid including too much in a single container. Try to distill your containers down to one process doing one thing. If all you're doing is serving up a REST API for consumption by your front end, just put the essential pieces in for that and no more.
In my experience you also want your ECS tasks to be able to handle failure gracefully and restart, and the more complicated your containers are, the harder this is to get right.
Depending on your requirements, I would look into using an ELB instead of nginx; you can have your ECS cluster point at an ELB and not have to deal with that piece at all.
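As a sketch of that last point (cluster, service and ARN values are placeholders), attaching an ECS service to a load balancer target group with the AWS CLI looks something like this, after which the ELB/ALB, not nginx, is what fronts the API:

aws ecs create-service \
  --cluster my-cluster \
  --service-name api \
  --task-definition api-task:1 \
  --desired-count 2 \
  --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:region:account:targetgroup/api/abc123,containerName=api,containerPort=80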
Do not use ECS; it's too crude. I was using it as a platform for our staging/production environments and had odd problems during deployments: sometimes it worked well, sometimes it did not (with the same Docker images). ECS does not provide a clear model of container deployment and maintenance.
There is another good, stable and predictable option: the Docker Cloud service. It's a new tool (formerly Tutum) that was acquired by Docker. I switched our CI/CD to use it and we're happy with it.
Bind your Amazon user credentials to your Docker Cloud account. Docker Cloud uses the AWS (or other provider) API to create the appropriate compute instances.
Create a Node. Select the Amazon EC2 instance type and the parameters for storage, security groups and so on. The new instance will contain the installed docker software and a managing container that handles messages from Docker Cloud (deploy, destroy and others).
Create a Stackfile, see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/. A Stackfile is a definition of the container group you require. You can define different scaling/distribution models for your containers using specific Stackfile options like the deployment strategy, see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/#deployment-strategy-1.
Define ELB configurations in AWS for your new instances.
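A minimal Stackfile sketch, going by the reference linked above (the image name is invented, and the exact keys should be checked against that reference), looks much like a classic compose file with a few Docker Cloud specific options:

web:
  image: myorg/web:latest
  ports:
    - "80:80"
  deployment_strategy: high_availability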
P.S. I'm not a member of Docker team and I like other AWS services :).
Here are my two cents on the topic; the question is not really specific to ECS, it applies to anybody deploying their apps on docker.
I would suggest separating the containers, one for nginx and one for API.
If they need to be co-located on the same instance, on ECS you can define them as part of the same task, and on Kubernetes you can make them part of the same pod.
Define a docker link between the nginx and the api container. This allows the nginx process to talk to the api container without the api container exposing its ports to the host.
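As a rough compose-style sketch (image names invented), only nginx publishes a host port and it reaches the api by its service name:

version: "2"
services:
  nginx:
    image: myorg/nginx-frontend
    ports:
      - "80:80"
    links:
      - api
  api:
    image: myorg/rest-api
    # no ports section, so the api is only reachable from nginx over the internal network

Inside the nginx config the upstream would then simply be proxy_pass http://api:3000; (or whatever port the api listens on internally).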
One advantage of using container platforms such as Kubernetes and ECS is that they ensure each container runs all the time and is dynamically restarted if one of the processes/containers goes down.
Separating the containers allows these platforms to monitor both processes separately. When you combine the two into one container, the docker container can only run with one of the processes in the foreground, so you lose the advantage of auto-healing for one of the processes.
Also, moving from nginx to ELB is not a straightforward swap; you may have redirections and other things configured on nginx which are not available on ELB (as of this date).
If you also need the ELB, there is no harm in forwarding requests from the ELB to the nginx port.

Centralized team development environment with docker

I want to build a "centralized" development environment using docker for my development team (4 PHP developers).
I have one big Linux server (lots of RAM, disk, CPU) that runs the containers.
All developers have an account on this Linux server (a home directory) where they put (git clone) the projects' source code. Locally (on their desktop machines) they have access to their home directory via a network share.
I want all developers to be able to work at the same time on the same projects, while viewing the results of their code edits in different containers (or sets of containers, for projects that use linked containers).
The docker PHP development environment by itself is not a problem. I already tried something like that with success : http://geoffrey.io/a-php-development-environment-with-docker.html
I can use fig, with a fig.yml at the root of each project's source code, so each developer can do a fig up to launch the set of containers for a given project. I can even use a different FIG_PROJECT_NAME environment variable for each account, so I suppose two developers can fig up the same project and there will be no container name collisions.
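To make it concrete, the per-project fig.yml I have in mind is the classic kind (images and names below are just examples):

web:
  image: php:5.6-apache           # or build: . with the project's own Dockerfile
  volumes:
    - .:/var/www/html
  ports:
    - "80"                        # container port only, so each fig up gets a random host port
  links:
    - db
db:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: dev

and each developer would run something like FIG_PROJECT_NAME=alice_project1 fig up -d from the project directory.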
Does it make sense ?
But after that, I don't really know how to dynamically give access to the running containers: when running, there will typically be a web server in a container mapped to a random port on the host. How can I set up a sort of "dynamic DNS" that points to the running container(s), accessible, let's say, through an nginx reverse proxy (the vhost creation and destruction has to be dynamic too)?
To summarize, the workflow I would like to have :
A developer sshes into the dev env (the big Linux server).
From his home directory he goes into the project directory and does a fig up.
A vhost is created in the nginx reverse proxy, pointing to the running container, and a DNS entry (or /etc/hosts entry) is added matching the server_name of this previously generated vhost.
The source code is mounted into the container from a host directory (-v host/dir:container/dir), so the developer can edit any file while the container is running.
The result can be viewed by accessing the vhost, for example:
randomly-generated-id.dev.example.org
When the changes are ok, the developer can do a git commit/push.
Then the developer does a fig stop, which in turn deletes the corresponding vhost from the nginx reverse proxy and also deletes the dynamic DNS entry.
So, how would you do a setup like this? I mentioned a tool like fig, but if you have any other suggestions... just remember that I would like to keep a lightweight workflow (after all, we are a small team :)).
Thanks for your help.
Does it make sense ?
yes, that setup makes sense
I would suggest taking a look at one of these projects:
https://github.com/crosbymichael/skydock
https://github.com/progrium/registrator
https://github.com/bnfinet/docker-dns
They're all designed to create DNS entries for containers as they start. Then just point your DNS server at it and you should get a nice domain name every time someone starts up an environment (I don't think you'll need an nginx proxy). But you might also be interested in this approach: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
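To give an idea of that last approach, the proxy watches the Docker socket and generates vhosts from a VIRTUAL_HOST environment variable on each container (the app image and domain below are just examples):

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run -d -e VIRTUAL_HOST=alice-project1.dev.example.org my-php-app

The second container then answers on http://alice-project1.dev.example.org without anyone editing the nginx config by hand, and the vhost disappears when the container stops.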
Now, there's an even better option for you: Traefik. It will act as a reverse proxy, listening on 80/443, and will differentiate by hostname. Then, it will forward traffic dynamically, based on labels applied to the containers.
Here is a good solution to your issue:
1) Set up Traefik to listen to the docker daemon, forwarding based on ports
2) Ensure the frontend app servers for your devs are on the same docker network as Traefik
3) Set a wildcard DNS entry pointing to your server. For example: *.localdev.example.com.
4) On each container, set the hostname in that wildcard namespace. For example: jsmith-dev1.localdev.example.com. This would be specified in a docker label such as: traefik.frontend.rule=Host:jsmith-dev1.localdev.example.com.
This would allow developers to dynamically forward traffic by domain to their own dev containers.
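Concretely, with the Traefik v1 label syntax, it would look something like this (the network name and app image are examples; the hostname matches the wildcard above):

docker network create traefik-net
docker run -d --name traefik --network traefik-net -p 80:80 \
  -v /var/run/docker.sock:/var/run/docker.sock traefik:1.7 --docker
docker run -d --network traefik-net \
  --label traefik.frontend.rule=Host:jsmith-dev1.localdev.example.com \
  --label traefik.port=80 \
  jsmith/dev1-app

Traefik picks the container up from the Docker event stream, so stopping the container removes the route automatically.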
Yes, I'm aware this is a 3-year-old question. In 2018 it still comes up first on Google for "centralized docker development server", so I'm posting this anyway to help those currently searching.
