I have two Express web apps (server and client) that I deploy to Docker Swarm using docker-compose and/or docker stack. They both have APIs that communicate with each other via their service names, as they are both connected to the same overlay network. A snippet of the config file that client uses to make REST calls to server follows:
"server": {
"url":"http://server:8085",
"endpoints": {
"devices": "/devices",
"temperature": "/temperature",
"mock": "/mock"
}
}
Finding server by host name is no issue on the Node.js side, since that code runs directly inside the Docker container. However, both Express apps also serve web pages, and the client's and server's CSS and JS dependencies are almost identical. I do not want to write each stylesheet twice; I'd rather serve a single copy from server that the index.html files of both server and client can use.
In server's index.html I can use relative paths because the host is the same, and thus implied. But in client's index.html I need a fully qualified URL, something like:
<link rel="stylesheet" href="http://server:8085/style.css">
Obviously this would not work once client serves index.html to a browser, because the browser is going to look for http://server over the internet rather than on the Docker overlay network where these services live.
I thought about downloading the files in client's Node app before it serves index.html, but that's not the cleanest solution.
Is there an elegant way to accomplish this without binding server to a static IP/domain or programmatically downloading these files first?
If your external users' browsers need to access files on both client and server, then you will need to publish both Swarm services on the external IPs of the Swarm nodes, put those IPs in DNS names or behind an external load balancer, and use only those URLs for remote connectivity.
When you do that, you'll likely need to bind both services to the same port (443). If that's the case, then you also need another layer of proxy that routes traffic to the proper container based on path or DNS name.
Both http://proxy.dockerflow.com/ and https://traefik.io/ work for that purpose.
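As a rough illustration of that proxy layer, a minimal nginx sketch of host-based routing in front of the two Swarm services from the question might look like the following. The public hostnames and client's internal port are assumptions (server:8085 comes from the question's config), and the proxy must be attached to the same overlay network so the service names resolve:

server {
    listen 80;
    server_name client.example.com;      # public name, assumed
    location / {
        proxy_pass http://client:3000;   # client's internal port, assumed
    }
}

server {
    listen 80;
    server_name server.example.com;      # public name, assumed
    location / {
        proxy_pass http://server:8085;   # from the question's config
    }
}

With something like this in place, client's index.html can reference the shared stylesheet as http://server.example.com/style.css, and the browser resolves a real public name instead of the overlay-only service name.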
I have several Docker containers with some web applications running via docker-compose. One of the containers is a custom DNS server with Bind and Webmin installed. Webmin provides a nice web UI that lets me update the Bind DNS configuration without directly modifying the files or SSHing into the container. I have Docker set up to look up DNS in this order:
my Docker DNS server
my company's internal DNS server
Google's DNS server
I have one master zone file for the top-level domain "example.com" defined in DNS server 1. I added an address for server1.example.com and DNS resolves it correctly. I want other subdomains to be resolved by my company's internal DNS server.
server1.example.com - resolves correctly
server2.example.com - this host is not referenced in the zone file for the Docker DNS server. I would like to somehow delegate it to my company's DNS server (server 2)
The goal is that I should be able to develop web applications and deploy them in my Docker containers. The code makes internal calls to other "example.com" hosts, and I want some of those calls directed back to other Docker containers rather than the real servers, because I am developing code on both ends and want to test end to end.
I don't want to (and can't) modify my company's DNS configuration. I am not an expert in Bind or DNS setup and am looking for the simplest solution.
What configuration can achieve this?
I guess the workaround is to use fully qualified names when creating the zone files. Instead of creating a master zone example.com and listing server1 inside that zone, I create a master zone for server1.example.com itself. It means I have to create a zone file for every server, but I guess that's OK to manage with a smaller number of hosts. server2.example.com then doesn't fall inside any zone and gets resolved by the next DNS server in the chain.
I am in the process of evaluating moving a very large Azure Cloud Service (Web Role) microservice architecture to AKS and have been working through the necessary code and build changes to support it.
In order to replicate the production environment locally for the developers, we run nginx on the host with SSL offloading and DNS (hosted in Azure) A records pointing to 127.0.0.1. When running in the Azure Emulator, the net effect is that the developer can both visit the various web front ends in their browser (i.e. https://myapp.mydomain.dev) and hit the various APIs in the solution (Web API 2) from Postman/cURL, etc.
Additionally, due to how the networking of the Azure Emulator works, the apps themselves can resolve each other through nginx on the host (i.e. the MVC app at https://myapp.mydomain.dev can obtain a token from the IdP web API at https://identity.mydomain.dev and then use that token at the API at https://api.mydomain.dev). This is the critical piece and the source of my question.
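For context, the host-level nginx offload described above presumably looks something like the following per site; this is only a sketch, with the certificate paths and the emulator's backend port being assumptions:

server {
    listen 443 ssl;
    server_name myapp.mydomain.dev;
    ssl_certificate     /etc/nginx/certs/mydomain.dev.crt;   # assumed path
    ssl_certificate_key /etc/nginx/certs/mydomain.dev.key;   # assumed path
    location / {
        proxy_pass http://127.0.0.1:8081;   # emulator endpoint, port assumed
        proxy_set_header Host $host;
    }
}

The DNS A records point myapp.mydomain.dev at 127.0.0.1, so both the browser and local tools land on this server block, and nginx forwards plain HTTP to the emulator.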
All attempts at getting the containers themselves to resolve each other the same way the host OS can (browser/Postman, SSL offloading via nginx) have failed. Many of the instructions out there are, understandably, for Linux containers, and adapting the various docker-compose networking settings to their Windows-container equivalents has not yet yielded any success. The development environments need to stay aligned with the real systems, which are tenantized and make use of the default mapping in nginx to catch all incoming traffic and route it to a specific user-facing app/container. It is therefore not as simple as determining a "static" method of addressing these on startup, which is why the effort was put in to produce the development environments we have today.
Right now, when one service (container) attempts to communicate with another, it ultimately results in a resolution error, as all requests resolve to https://127.0.0.1 due to the DNS A records hosted in Azure for the domain. Since this migration will be a longer-term project, the environments need to co-exist, so changing the way DNS is resolved (the real DNS A records pointing to 127.0.0.1, with the host running nginx and handling SSL offloading to the various web roles normally running in the Azure Emulator) is not an option.
Is there a way (with Windows containers) to either:
Allow the container to utilize nginx on the host OS transparently (the app must still call the API at https://api.mydomain.dev), causing the traffic to be routed properly to the correct container/port defined in the docker-compose file?
OR
Run nginx in each container, allowing each container to resolve and route appropriately without knowing the IP of the other container, possibly through an alias added to the container's nginx.conf before the service starts?
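To make option 2 concrete, a rough sketch of what each container's nginx.conf might contain follows, assuming the peer hostnames inside the container are pointed at 127.0.0.1 (e.g. via the container's hosts file) so the local nginx terminates the full https:// URL and forwards to a compose service alias. Every name, port, and path here is an assumption:

server {
    listen 443 ssl;
    server_name identity.mydomain.dev;
    ssl_certificate     C:/certs/mydomain.dev.crt;   # assumed path
    ssl_certificate_key C:/certs/mydomain.dev.key;   # assumed path
    location / {
        proxy_pass http://identity:80;   # compose service alias, assumed
        proxy_set_header Host $host;
    }
}

This would preserve the full URLs from the application's perspective, since the app still dials https://identity.mydomain.dev.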
The platform utilizes OAuth2/OIDC, and it is critical to maintain the full URL to the other services from the application's perspective. Beyond mirroring the production and sandbox environments, these URLs are used for redirect URL and post-logout redirect URL validation, among other things, so using "https://myContainerNameForOtherContainerAlias" is not a workable solution.
Will I have the same problem when setting up the AKS environment as well?
I have a Docker container which exposes a RESTful API on a specific port (e.g. 4567) on a host machine. According to security requirements, I need to block all requests coming to this port (i.e. 4567) except those coming from a specific application (say a scheduler like Oozie). I'm not very familiar with firewalls, but I'm guessing the first part (blocking access to the port) can be done in the firewall; how can I then open access to only one application?
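One way to approximate this without firewall rules, keeping with the reverse-proxy theme of this thread, is to publish the container only on loopback and put nginx in front of port 4567 with a source-IP allowlist. Note that this restricts by source host rather than truly by application, and every address and port below is an assumption:

server {
    listen 4567;
    allow 10.0.0.5;   # host the scheduler (e.g. Oozie) runs on, assumed
    deny  all;        # every other source is refused with 403
    location / {
        # container republished on loopback only, e.g. -p 127.0.0.1:14567:4567
        proxy_pass http://127.0.0.1:14567;
    }
}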
Firstly, this is a great place to learn and share new information.
Now I have an issue with hosting two websites on the same server, but before describing my problem, here are some terms I'll use henceforth to make things easier.
Website Setup
Server OS - CentOS 7 x64, Docker version - 18.03.0-ce, build 0520e24
1st Website: example.com - hosted via Nginx (running as a service on the host machine, not in a Docker container) on port 80 (redirects to 443): a static website with HTML/CSS code.
2nd Website: http://art.example.com:8080/ - served on port 8080 via this Docker image
SSL - using Let's Encrypt for both the above domains.
Requirements
To serve both sites (and possibly more) via HTTPS without breaking either of them.
This is because when I browse the 2nd website at art.example.com:8080 it works fine, but if I browse the 1st website first, subsequent requests to the 2nd website somehow start going over HTTPS, causing the page not to load.
Questions
Can both sites (and more) be served on ports 80|443 via Nginx vhosts (or any other alternative) without using a different port, i.e. 8080, for the 2nd website? The goal is to have no port in the URL, just the domain name, as mentioned above.
Or, is there a way to forward traffic to the Docker service on a different port while the main web service listens on ports 80|443? What config changes would I need to make?
I've searched this forum as much as possible but couldn't find much.
Please let me know if any more information would be required from me.
Thanks in advance!
Solution: As recommended by the maintainer of this Docker image, kdelfour, as well as a quick recommendation by BretFisher, we can remake the 1st website as a Docker container like the 2nd website and then load-balance them using Traefik as a reverse proxy to manage SSL.
Marking this as solved until an even better solution is found, cheers!
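For reference, the second approach from the Questions above (keeping the host nginx and forwarding to the Docker service) would look roughly like this, with the container still publishing port 8080 and the certificate paths following the usual Let's Encrypt layout (assumed):

server {
    listen 443 ssl;
    server_name art.example.com;
    ssl_certificate     /etc/letsencrypt/live/art.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/art.example.com/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:8080;   # the container's published port
        proxy_set_header Host $host;
    }
}

With this vhost in place, the 2nd website is reachable at https://art.example.com with no port in the URL.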
Let's say we have 2 separate applications, a Web Api application and a MVC application both written in .NET 4.5. If you were to host the MVC application in IIS under the host header "https://www.mymvcapp.com/" would it be possible to host the Web Api application separately in IIS under the host header "https://www.mymvcapp.com/api/"?
The processes running the two applications in IIS need to be separate. I know of the separate methods of hosting: self-hosting and hosting in IIS. I would like to use IIS if at all possible.
Also, how would I host two applications (an API and a web application) if each were on a separate server so that I could serve the api from http://www.mymvcapp.com/api?
There are at least four ways of doing what you want to do. The first two methods apply when you have one web server and both applications are served from that one web server running IIS. They also work if you have multiple web servers running behind a load balancer, so long as the API and the website are running on the same server.
The other two methods use what's called a "reverse proxy": essentially a way to route traffic from one server (the proxy server) to multiple internal servers depending on the type of traffic received. This is for when you run your website on one set of servers and your API on a different set. You can use any reverse proxy software you want; I mention nginx and HAProxy because I've used both in the past.
Single Web Server running IIS
There are two ways to do it in IIS:
If your physical folder structure is as follows:
c:\sites\mymvcapp
c:\sites\mymvcapp\api
You can do the following:
Create a Child Application
Creating a child application will allow your "API" site to be reachable at www.mymvcapp.com/api, without any routing changes needed.
To do that:
Open IIS Manager
Click on the appropriate site in the "Sites" folder tree on the left side
Right-click the API folder
Click "Convert to Application"
The downside is that all child applications inherit the web.config of their parent, and if you have conflicting settings in there, you'll see some runtime weirdness (if it works at all).
Create a directory Junction
The second way lets the applications maintain their separateness; and again, you don't have to do any routing.
Assuming two folder structures:
c:\sites\api
c:\sites\mvcapp
You can set up Junctions in Windows. From the command line*:
rem /J creates a directory junction (don't combine it with /D; the switches are exclusive)
cd c:\sites
mklink /J mymvcapp c:\sites\mvcapp
cd mymvcapp
mklink /J api c:\sites\api
Then go into IIS Manager and convert both to applications. This way, the API will be available under /api/, but won't actually share its web.config settings with the parent.
Multiple Servers
If you use nginx or HAProxy as a reverse proxy, you can set it up to route calls to each app depending on the request path.
nginx Reverse Proxy settings
In your nginx.conf (best practice is to create a conf under sites-enabled that's a symlink to sites-available, so you can destroy that symlink whenever you deploy), do the following:
location / {
    proxy_pass http://mymvcapp.com:80;
}

location /api {
    proxy_pass http://mymvcapp.com:81;
}
and then you'd set the appropriate IIS bindings to have each site listen on port 80 (mymvcapp) and port 81 (api).
HAProxy
acl acl_WEB hdr_beg(host) -i mymvcapp.com
acl acl_API path_beg -i /api

use_backend API if acl_API
use_backend WEB if acl_WEB

backend API
    server web mymvcapp.com:81

backend WEB
    server web mymvcapp.com:80
*I'm issuing the Junction command from memory; I did this a few months ago, but not recently, so let me know if there are issues with the command
NB: the config files are not meant to be complete config files -- only to show the settings necessary for reverse proxying. Depending on your environment there may be other settings you need to set.