Run two applications on same port on same machine - docker

I had an interview 3 years back, and in one of the design rounds this question came up: how can you have two Java applications (deployed on Tomcat) run on the same port? You can use any tools like Docker, but you can't have a separate virtual machine (like VMware or VirtualBox). I am not sure if Docker can be used (I just said maybe we could use two Docker containers, but wasn't sure that would be the right approach). Any ideas whether it's possible, and how?

You can't have 2 programs that use the same port.
To solve that, you can set up a reverse proxy (Nginx, Traefik or the like) that listens on the port and then routes the traffic to the applications based on what the requests look like. The applications would listen on their own ports. So one port each.
You can route on different things, but in your case you might set it up so requests that start with /app1/ go to one application and requests that start with /app2/ go to the other.
Nginx and Traefik both have standard images available that are pretty easy to set up in Docker.
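A minimal nginx sketch of that idea (the upstream ports 8081 and 8082 are assumptions; use whatever ports the two Tomcat instances actually listen on):

    server {
        listen 80;

        # requests under /app1/ go to the first Tomcat; the trailing slash in
        # proxy_pass strips the /app1/ prefix before forwarding
        location /app1/ {
            proxy_pass http://127.0.0.1:8081/;
        }

        # requests under /app2/ go to the second Tomcat
        location /app2/ {
            proxy_pass http://127.0.0.1:8082/;
        }
    }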

You can't have two processes listening on the same port on the same IP address on the same machine.
To work around this, as Hans Kilian says, you'll need a reverse proxy.
Alternatively, if the machine's network interfaces are configured for multiple IP addresses, you can assign one to each running server - and then you're free to use the same port on the other IP-address(es). This is independent of the actual server that you use - be it Tomcat, Docker, or anything else.
Naturally, configuring the different processes to listen on specific IP addresses depends on the software itself. As you're asking about Tomcat: its connectors (see server.xml) are where this is configured.
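For illustration, a hedged sketch of what the binding might look like in each instance's server.xml (the IP addresses are placeholders):

    <!-- first Tomcat instance, bound to the first IP -->
    <Connector port="8080" protocol="HTTP/1.1" address="192.0.2.10"
               connectionTimeout="20000" redirectPort="8443" />

    <!-- second Tomcat instance (its own server.xml), same port, bound to the second IP -->
    <Connector port="8080" protocol="HTTP/1.1" address="192.0.2.11"
               connectionTimeout="20000" redirectPort="8443" />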
I consider reverse proxies the more standard approach found in the wild, but since you were talking about an interview question: this is another option.

The only approach for this scenario is to use a reverse proxy, which can route each request to the specific Java application based on URL matching and redirect logic.
The apps must run on different ports; e.g., with nginx as the proxy, create two server blocks, each with a location block pointing at one of the apps.
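A hedged nginx sketch of that layout (the hostnames and backend ports are made up):

    # first app
    server {
        listen 80;
        server_name app1.example.com;
        location / {
            proxy_pass http://127.0.0.1:8081;
        }
    }

    # second app
    server {
        listen 80;
        server_name app2.example.com;
        location / {
            proxy_pass http://127.0.0.1:8082;
        }
    }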

Related

How can we map a common IP to all worker IPs in Docker Swarm?

As we know, in Docker Swarm we create multiple workers and one manager.
The container runs on multiple workers, so we can access it in the browser by entering a worker node's IP and port, like ip:80, and we can access another worker node by entering its IP and port. But what if I want to use one common IP to reach the container, so that if any one of the nodes goes down, my site does not go down and it uses another running worker?
worker1: 192.168.99.100:80
worker2: 192.168.99.100:80
worker3: 192.168.99.100:80
I want one common IP, so that if any one worker goes down the site does not go down.
You basically have two ways of doing this:
You can put an HTTP proxy (Traefik, Nginx, Caddy, ...) in front of the Docker swarm. The proxy health-checks the nodes, and if any node goes down, it removes that node's IP from the rotation until the node comes back up.
You can use keepalived; with this approach, you point the domain at your virtual VRRP IP, which then "floats" between the nodes.
I know a very good ops person, and they use keepalived at their company without any issues or complications. In our company we decided to go with the proxy, because we also route other traffic over it to different systems (legacy, ...), and because we have VMware's top licence with Veeam we can handle real-time replication (in case a VM goes down and such) with that.
So both methods are proven and tested :)
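To make the first option concrete, here is a minimal nginx sketch. The two extra worker IPs are assumptions, and note that open-source nginx only offers passive health checks (max_fails/fail_timeout), while Traefik, Caddy or NGINX Plus can do active checks:

    upstream swarm_workers {
        # a failing worker is taken out of rotation for fail_timeout seconds
        server 192.168.99.100:80 max_fails=3 fail_timeout=30s;
        server 192.168.99.101:80 max_fails=3 fail_timeout=30s;   # assumed worker IP
        server 192.168.99.102:80 max_fails=3 fail_timeout=30s;   # assumed worker IP
    }

    server {
        listen 80;
        location / {
            proxy_pass http://swarm_workers;
        }
    }

The common IP that clients use is then the proxy's own address (or, with the second option, a keepalived VRRP IP floating in front).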

Docker based Web Hosting

I am posting this question due to lack of experience, and I need professional suggestions. The questions on SO are mainly about how to deploy or host multiple websites using Docker on a single web host. This can be done, but is it ideal for moderate-traffic websites?
I deploy Docker-based containers on my local machine for development. A container has a copy of the primary application as well as all its dependencies: libraries, languages, frameworks, and everything else.
It becomes easy for me to simply migrate the "docker-compose.yml" or "Dockerfile" to any remote web server. All the software and dependencies get installed, and everything runs just like on my local machine.
Say I have a VPS and I want to host multiple websites using Docker. The only thing I need to configure is the ports, so that the domains can be mapped to port 80. For this I have to add an extra Nginx for routing.
But a VPS can host multiple websites without containerisation. So, is there any special benefit to running Docker on web servers like AWS, Google, HostGator, etc., or is Docker best or ideal only for development on a local machine, and not for hosting on web servers?
The main benefits of Docker for simple web hosting are, in my opinion, the following:
Isolation: each website/service might have different dependency requirements (one might require PHP 5, another PHP 7 and another Node.js).
Separation of concerns: if you split your setup into multiple containers you can easily upgrade or replace one part of it. (Just consider a setup with two websites, each needing its own Postgres database. If each website has its own db container you won't have any issue bumping the Postgres version of one website without affecting the other; see the sketch after this list.)
Reproducibility: you can build the Docker image once, test it on acceptance, then promote the exact same image to staging and later to production. You'll also be able to have the same environment locally as on your server.
Environment and settings: each of your services might depend on a different environment (for example SMTP settings or a database connection). With containers you can easily supply each container its specific environment variables.
Security: one can argue about this one, as containers themselves won't do much for you in terms of security. However, thanks to easier dependency upgrades, separated networking, etc., most people end up with a setup that is more secure. (Just think about the db containers again: these can share a network with your app/website container, and there is no need to expose their port on the host.)
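A hedged docker-compose sketch of the two-websites-with-their-own-database example from the list above (image names, credentials and network names are invented for illustration):

    version: "3.8"
    services:
      site1:
        image: site1-app:latest                  # hypothetical application image
        environment:
          DATABASE_URL: postgres://site1:secret@site1-db:5432/site1
        networks: [site1-net]
        ports: ["8081:80"]
      site1-db:
        image: postgres:13                       # can be bumped independently of site2-db
        environment:
          POSTGRES_USER: site1
          POSTGRES_PASSWORD: secret
        networks: [site1-net]                    # db port is never published on the host
      site2:
        image: site2-app:latest
        environment:
          DATABASE_URL: postgres://site2:secret@site2-db:5432/site2
        networks: [site2-net]
        ports: ["8082:80"]
      site2-db:
        image: postgres:15
        environment:
          POSTGRES_USER: site2
          POSTGRES_PASSWORD: secret
        networks: [site2-net]
    networks:
      site1-net:
      site2-net: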
Note that you should be careful with Docker's port mapping. It uses iptables and by default will override the settings of most firewalls (like ufw). There is a repo with information on how to avoid this: https://github.com/chaifeng/ufw-docker
Also, there are quite a few projects which make routing requests to the applications (in this case containers) very easy and pleasant to automate, and they usually integrate a proper way to do SSL termination as well. I would strongly recommend looking into Traefik if you set up a webserver with multiple containers which should all be accessible on ports 80 and 443.
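For example, a rough Traefik sketch (Traefik v2 label syntax; the hostnames and image names are assumptions, and TLS/Let's Encrypt configuration is left out):

    version: "3.8"
    services:
      traefik:
        image: traefik:v2.10
        command:
          - --providers.docker=true
          - --entrypoints.web.address=:80
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      site1:
        image: site1-app:latest                  # hypothetical image
        labels:
          - "traefik.http.routers.site1.rule=Host(`site1.example.com`)"
          - "traefik.http.services.site1.loadbalancer.server.port=80"
      site2:
        image: site2-app:latest
        labels:
          - "traefik.http.routers.site2.rule=Host(`site2.example.com`)"
          - "traefik.http.services.site2.loadbalancer.server.port=80"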

How Do I Configure Docker Containers Behind A Load Balancer?

My IT infrastructure department has provided me with the following setup: A netscaler load balancer (lb) in front of 3 virtual machines (vm01, vm02, vm03). Each virtual machine was setup with IIS.
I have installed Docker Engine on all three virtual machines and have replicated the same 3 containers (appcontainer1, appcontainer2, appcontainer3) on all 3 virtual machines. Each container contains a .NET Core Web API application (api1, api2, api3).
Each container exposes its port 80 for access to the API and is mapped to a port on the virtual machine where it is running. In other words, appcontainer1 is run with docker run -p 8091:80, appcontainer2 with docker run -p 8092:80, and appcontainer3 with docker run -p 8093:80.
The problem I am running into is how to call my web applications from a client machine. For example, if I wanted to call api1 on vm01 directly, I would call vm01.domain.com:8091, but how do I make a call to lb.domain.com:8091 and have it resolve correctly to one of the virtual machines?
Do I configure the netscaler load balancer to be a reverse proxy and forward the port along to the virtual machines?
Do I configure a separate DNS entry per application (api1.domain.com, api2.domain.com, api3.domain.com) and configure IIS (or nginx or Apache) on each virtual machine to resolve to the appropriate port?
Is there a way to configure Docker to do this?
Am I doing it all wrong and overthinking the whole thing?
Should I be using some sort of container orchestration instead?
Is there a sensible way to do this without bothering the infrastructure team to reconfigure everything?
You need to set up IIS on each VM as a reverse proxy with the ARR (Application Request Routing) module. There are a few quirks (hello, Microsoft) that may arise during this process. I cannot say much about the load balancer, though; still, it shouldn't be hard to configure it to distribute the load evenly across the machines. All you need is to tell the LB to direct any call to lb.domain.com:XXXX to one of the VMs in a round-robin manner. You can probably make it vary the port too, which lets you distribute your traffic across 3 VMs x 3 containers = 9 containers.
However, it is recommended not to expose the Kestrel server directly on the internet; instead, put it behind IIS or similar. To configure IIS to act as a reverse proxy, you can either create 3 sites and bind them to the corresponding ports with minimal configuration, or use a single site and resolve the incoming requests using rewrite rules. To be honest, IIS is a pain to use with Docker.
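As a rough illustration of the rewrite-rule variant (assuming the URL Rewrite and ARR modules are installed, as the answer suggests; the /api1 prefix and the port are assumptions), a single IIS site could carry rules along these lines in its web.config:

    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <!-- forward /api1/* to the container published on port 8091 -->
            <rule name="api1" stopProcessing="true">
              <match url="^api1/(.*)" />
              <action type="Rewrite" url="http://localhost:8091/{R:1}" />
            </rule>
            <!-- similar rules would follow for api2 (8092) and api3 (8093) -->
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>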
BUT what I actually recommend is to use swarm mode, if your OS supports it, and expose a single port per VM. The supported OS versions are one of:
WS2019,
WS2016 1709 update or later (These have no GUI)
Windows 10 1709 update.
Swarm mode is still problematic on Windows :/ It also has very frustrating, seemingly random errors involving "localhost:PORT" and the like. For instance, I cannot access my containers on my server (WS2016, pre-1709) using a localhost:PORT combination. The same goes for my development machine (Win10 latest), where this has only recently become an issue; it was fine before "something" happened and it stopped working.
If you are flexible about which proxy to use, I recommend taking a look at nginx and Kubernetes, and, if you are on the experimental side, Traefik, which lets you get by without using a container orchestration tool (i.e. swarm).

How to manage multiple projects on Docker?

At our company we have ~7 projects, each based on Docker. Each project contains base services like MySQL, Nginx and PHP. Some projects communicate with other projects. Because many services use the same port, we create a new Docker host (docker-machine) for each project. From here a few problems arise:
VirtualBox assigns a random IP to each Docker host, depending on the order in which they are started.
It is hard to switch from project to project; you need to set different shell environments all the time, and it is easy to make a mistake.
So I'm searching for a more enterprise-grade solution to manage many Docker machines, or some technique that can help me with the current situation.
I had similar problems last summer.
First, I started to deploy my projects to a swarm cluster as services, instead of clustering several Docker VMs. This enabled me to manage the services using only their service IDs. How you separate projects into services is important, and this part may be cumbersome depending on your project.
https://docs.docker.com/engine/swarm/swarm-tutorial/deploy-service/
Then I built my configuration and monitoring software once on the swarm manager and reused it. You can use your automation tools on the swarm manager to control the services.
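A minimal sketch of that flow (the service and image names are made up):

    # on the machine that becomes the manager
    docker swarm init

    # deploy one project's web tier as a swarm service, published on its own port
    docker service create --name project1-web --replicas 2 --publish 8081:80 project1-image

    # manage it from the manager by name or ID
    docker service ls
    docker service scale project1-web=3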
A virtual machine consumes resources, and it is better to avoid one if it is not necessary. Instead you could deploy the projects to a Docker swarm on bare metal.
But because every project has an entry point that needs to be accessible from the outside world (e.g. https://site1.com and https://site2.com), you can't expose the same port (443 or 80) for all the frontend services in the same swarm. For this you can use a reverse proxy like HAProxy or Nginx that forwards requests to the right service based on the hostname. The reverse proxy could itself be a service in the swarm. In this situation you should not expose the projects' ports at all anymore.
A reverse proxy has many other advantages, like SSL termination (this makes the SSL certificate management a lot easier).
If you add the projects to the same custom network, the services from different projects can communicate securely and directly, using their Docker service names and the internal port (e.g. 80).
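A hedged sketch of that layout as a stack file (image names, hostnames and the network name are assumptions): only the proxy publishes ports, and the project services sit on a shared overlay network.

    version: "3.8"
    services:
      proxy:
        image: nginx:stable              # its config holds one server block per site1.com / site2.com
        ports: ["80:80", "443:443"]
        networks: [projects]
      site1:
        image: site1-app:latest          # hypothetical; reachable inside the network as http://site1:80
        networks: [projects]
      site2:
        image: site2-app:latest
        networks: [projects]
    networks:
      projects:
        driver: overlay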

Several docker stacks with the same compose file but different ports

I would like to run several instances of a multi-container application at the same time using the same compose file. One of the containers in the application accepts websockets on a certain port.
I have an nginx proxy to forward different domains or locations to different instances of the application. The instances are actually different tenants using the application.
I would like to simply be able to run:
docker stack deploy -c docker-stack.yml tenant1
docker stack deploy -c docker-stack.yml tenant2
And somehow get different ports to the apps, which I then can use in the proxy to forward different websocket connections to different application instances, either using locations or virtual hosts.
So either:
ws://tenant1.mydomain.com
or
ws://mydomain.com/tenant1
How to configure the proxy to do this can surely be figured out. I've started to read a bit about https://github.com/jwilder/nginx-proxy, which seems nice. However, it requires that I set the virtual host name as an environment variable for each app instance, and I can't seem to find a way to pass such arguments with my docker stack deploy command.
Ideally I would not like to care about exact ports; they could even be random. But they need to somehow be known to the nginx proxy for the forwarding to work. I want to be able to easily spin up a new app-instance (tenant) stack and just set up the proxy for that name (or, even better, have the proxy handle that automatically based on the naming of the app).
Bonus if both examples above work (both virtual host and location), since that would make it possible to test and develop without creating subdomains / new domains.
Suggestions?
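For what it's worth, one workaround often mentioned for the missing variable substitution in docker stack deploy is to let docker-compose resolve the variables first. A hedged sketch (VIRTUAL_HOST follows the jwilder/nginx-proxy convention, and the image and variable names are assumptions):

    # docker-stack.yml (excerpt)
    services:
      app:
        image: tenant-app:latest                 # hypothetical application image
        environment:
          - VIRTUAL_HOST=${TENANT}.mydomain.com

    # deploy with the variable resolved per tenant
    export TENANT=tenant1
    docker stack deploy -c <(docker-compose -f docker-stack.yml config) tenant1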
