Docker as a proxy server for a web service

My app integrates with a web service that supports a proxy server, so I need integration tests that prove this works.
I wanted to use Docker to create a local proxy server against which I can run real integration tests, verifying that my web service can be called through the proxy interface without errors.
I tried https://github.com/jwilder/nginx-proxy
I started up the container with:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
When I use it, I get a 503 error: 503 Service Temporarily Unavailable
Am I misunderstanding what this proxy does?

Although this has been resolved in the comments, I'll try to answer the following question:
Am I misunderstanding what this proxy does?
Yes. What your project requires is a forward proxy, while what you are trying to use is a reverse proxy. This becomes clearer once you go through the top-rated answers at Difference between proxy server and reverse proxy server.
For a TL;DR moment: a forward proxy sits in front of clients and relays their outbound requests to arbitrary servers, while a reverse proxy (like nginx-proxy) sits in front of servers and routes incoming requests to them.
There are many forward-proxy packages available, and you could choose any one of them for your project (a minimal sketch follows the list below). Some of them are:
Squid
Polipo
Apache Traffic Server
Privoxy
TinyProxy
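A minimal sketch of what the test setup could look like, assuming Squid via the ubuntu/squid image; the image choice, port mapping, and target URL are illustrative assumptions, not from the question:

# Run a forward proxy locally (Squid listens on 3128 by default):
docker run -d --name forward-proxy -p 3128:3128 ubuntu/squid

# Verify that requests are relayed through the proxy:
curl -x http://localhost:3128 http://example.com/

# In the integration tests, set the app's proxy setting to
# http://localhost:3128 and call the real web service through it.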

Running Docker on an Ubuntu 20 web server

I have a web server running Ubuntu 20 with nginx installed, which is working well.
Now I am trying to install Docker and use the server-side nginx as a reverse proxy that forwards subdomains to Docker containers.
At this point, I am not sure whether this should be handled by the server-side nginx or inside a Docker container. If I use Docker, I always get a 401 response.
What is the best way to handle multiple domains on a server (pointed there by A records) with Docker?
Do I really need the server-side nginx, or is it unnecessary?
What is the best practice for handling this?
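One common layout matching what the question describes keeps nginx on the host as the single entry point and proxies each subdomain to a container's published port. A minimal sketch, assuming a hypothetical subdomain app.example.com and a hypothetical image my-app-image published on 127.0.0.1:8080 (all placeholders):

# Host-side nginx server block for one subdomain:
cat > /etc/nginx/sites-available/app.example.com <<'EOF'
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the container's published port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
ln -s /etc/nginx/sites-available/app.example.com /etc/nginx/sites-enabled/
nginx -s reload

# Publish the container only on localhost, so the host nginx is the sole entry point:
docker run -d -p 127.0.0.1:8080:80 my-app-image

In this split, the host-side nginx is the piece that handles the domains; the containers themselves do not need an nginx of their own.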

NGINX Reverse Proxy with Docker Host Mode for Local Development

Most of what I'm finding online is about using docker-compose and similar tooling to create a reverse proxy for local development of dockerized applications. This is for my local development environment.
I need to create an nginx reverse proxy that can route requests to applications on my local computer that are not running in Docker containers (non-dockerized).
Example:
I start up a web app A (not in docker) running on http://localhost:8822
I start up another web app B (not in docker) running on https://localhost:44320
I have an already running publicly available api on https://public-url-for-api-app-a.net
I also have a public A record set up in my DNS for *.mydomain.local.com -> 127.0.0.1
I am trying to figure out how to use an nginx:mainline-alpine container in host mode to allow me to do the following:
I type http://web-app-a.mydomain.local.com -> reverse proxy to http://localhost:8822
I type http://web-app-b.mydomain.local.com -> reverse proxy to https://localhost:44320
I type http://api-app-a.mydomain.local.com -> reverse proxy to https://public-url-for-api-app-a.net
Ideally, this "solution" would run on both Windows and Mac, but I am currently falling short in my attempts on my Windows machine.
Some stuff I've tried:
Following this tutorial, starting up my nginx Docker container in "host" mode via:
docker run --rm -d --network host --name my_nginx nginx:mainline-alpine
I'm unable to get it to load at http://localhost:80; I receive "The site can't be reached". I'm wondering if I'm hitting some limitation of Docker on Windows.
Custom building my own docker image with nginx configs and exposed ports (before trying host network mode)
Other relevant information:
Docker-Desktop on Windows version: 4.4.4 (73704)
Nginx Container via nginx:mainline-alpine tag.
Web App A = Front End Vue App
Web App B = Front End .NET Framework App
Web App C = Backend .NET Framework App
At this point, I've read so many posts that my brain is mush, so it could very well be something obvious I'm missing. I'm beginning to think it may be better to simply run nginx.exe locally, but that's not ideal because I don't want to have to check binaries into my source in order for this setup to work.
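For what it's worth, --network host is indeed a known limitation here: on Docker Desktop for Windows and Mac the engine runs inside a VM, so host mode binds the VM's ports, not the machine's. A sketch of an alternative that stays on the default bridge network and reaches the non-dockerized apps through Docker Desktop's host.docker.internal name (the config file layout is an assumption; the hostnames and ports come from the question):

# default.conf: one server block per local subdomain, proxying to apps on the host.
cat > default.conf <<'EOF'
server {
    listen 80;
    server_name web-app-a.mydomain.local.com;
    location / {
        proxy_pass http://host.docker.internal:8822;
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name web-app-b.mydomain.local.com;
    location / {
        # App B is HTTPS on the host; nginx does not verify upstream certs by default.
        proxy_pass https://host.docker.internal:44320;
        proxy_set_header Host $host;
    }
}
EOF

# Publish port 80 normally instead of using host mode:
docker run --rm -d -p 80:80 \
  -v "$PWD/default.conf:/etc/nginx/conf.d/default.conf:ro" \
  --name my_nginx nginx:mainline-alpine

The third mapping to the public API would work the same way, with proxy_pass https://public-url-for-api-app-a.net; in its own server block.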

Remote HTTP Endpoint to Docker Application

I have a demo application running perfectly in my local environment. However, I would like to run the same application remotely by giving it an HTTP endpoint, so that I can test its performance.
How do I give an HTTP endpoint to a multi-container Docker application?
The following is the GitHub repository link for the demo application:
https://github.com/LonareAman/BankCQRS.git
Use docker-compose and manage the containers based on what you need.
One of your containers should be a web server like nginx. Then bind a machine port to nginx, e.g. 80:80.
Then configure nginx to proxy requests to your other containers (a minimal sketch follows).
You can find some samples at https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/
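A minimal sketch of that layout; the service names, image, and port below are placeholders, not taken from the linked repository:

# docker-compose.yml: nginx is the public HTTP endpoint; the app stays internal.
cat > docker-compose.yml <<'EOF'
services:
  app:
    build: .           # the application container
    expose:
      - "8000"         # reachable by other services, not published to the host
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"        # the public HTTP endpoint
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - app
EOF

# default.conf: forward everything to the app by its compose service name.
cat > default.conf <<'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://app:8000;
    }
}
EOF

docker-compose up -d

Compose places both services on one network, so nginx can resolve app by its service name.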

How to create a web policy agent in OpenAM given that the server URL's hostname is not fully qualified?

Question: how do I create a web policy agent in OpenAM, given that the server URL OpenAM runs on does not have a fully qualified hostname?
Initial situation:
For a proof of concept (PoC), I emulate a server structure using Docker. I have an Apache web server as a resource server (Docker container), an OpenAM Docker container for access management, and a Flask web app running in a third container as the client. I configured OpenAM via the GUI.
So far my Flask app can authenticate, request, and retrieve access tokens using simple requests, as specified here.
However, now I also want to protect the Apache resource server.
To start, without Flask: simply by installing an OpenAM Web Policy Agent on the Apache web server and configuring a web policy agent profile in OpenAM, following this official ForgeRock guide.
Problem:
When configuring the agent profile in OpenAM via the GUI, the OpenAM container's domain name http://openam:8080/openam is not accepted as a valid server URL.
If I instead use e.g. http://openam.local:8080/openam, the error does not appear.
What I tried so far:
1. I added an Nginx container that functions as a reverse proxy and used it to change the containers' hostnames to <container>.local. Now I can reach the containers via e.g. http://openam.local:8080/openam and http://apache.local:8080.
However, when I now access the OpenAM GUI via http://openam.local:8080/openam, enter the default passwords, and press Create Configuration, the configuration fails with an error message.
2. Unable to solve the problem from (1), I decided to roll back the Nginx setup and instead try to configure the agent profile via the command line, in the hope that the Hostname of server URL is not fully qualified error is restricted to the GUI. For command-line setup there used to be the simple command ./ssoadm create-agent ... as described here, but ssoadm was deprecated in favor of Amster, and I am unable to figure out how to configure the agent profile using Amster.
That's a bug in the OpenAM console's service validation; it's tracked as OPENAM-16073.
Note that there are several OpenAM forks these days; I would encourage their maintainers to rename their products/projects, as it's quite confusing.
When using Docker as described in the original question, you can simply set the container's hostname using the -h flag.
Example, OpenAM:
docker run -h openam.example.com -p 8080:8080 --name openam openidentityplatform/openam
Example Apache Web Server:
docker run -it --name apache_agent -p 80:80 -h example.com --shm-size 2G --link=openam apache_agent
OpenAM can now be reached at http://openam.example.com:8080/openam and the Apache server at http://example.com.
The OpenAM configuration runs through without errors, and when configuring the Web Policy Agent, the URL is fully qualified.
The best reference to get started with OpenAM is the Quick-Start Guide in the OpenAM Git repo's wiki.

Docker app serving on HTTPS and connecting to an external RethinkDB

I'm trying to launch a Docker container running a Tornado app in Python 3.
It serves a few API calls and writes data to a RethinkDB service on the host. RethinkDB does not run inside a container.
The host runs Ubuntu 16.04.
Whenever I tried to launch the container with docker-compose, it would crash, saying the connection to localhost:28015 was refused.
I researched the problem and realized that Docker has its own network, so connections to external services must be configured before launching the container.
I used this command from a question I found to make it work:
docker run -it --name "$container_name" -d -h "$host_name" -p 9080:9080 -p 1522:1522 "$image_name"
I've changed the container name, host name, ports and image name to fit my own application.
Now the container is not crashing, but I have two problems:
I can't reach it from a browser by pointing to https://localhost/login
I lose the docker-compose usage. This is problematic if we want to add more services that talk to each other in the future.
So, how do I launch a container that can talk to my RethinkDB database without putting the DB into a container?
Please, let me know if you need more information to answer this question.
I'd appreciate your guidance in this.
The end result is that the container will serve requests coming over HTTPS.
For example, I have an endpoint called /getURL.
The request includes a token verified in the DB. The URL is like this:
https://some-domain.com/getURL
After verification with the DB, it sends back a relevant response.
The container needs to listen on 443 and also talk to the RethinkDB service on 28015.
(Since 443 and HTTPS involve certificates, I'd also appreciate a solution that handles this over plain HTTP on some arbitrary port, and I'll take it from there.)
Thanks!
P.S. The service works when I launch it without Docker from PyCharm; it's the Docker configuration I have problems with.
I found a solution.
I needed to add this so that the container can connect to the RethinkDB service on the host:
--network="host"
This works for me right now, but since it isn't the best solution, I won't mark it as the answer for now.
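For context, a sketch of the full command with host networking, plus an alternative that keeps docker-compose usable; the variable names come from the question's earlier command, and the --add-host approach is my own suggestion (it requires Docker 20.10+):

# Host networking: -p mappings are ignored, and localhost inside the
# container is the host itself, so localhost:28015 reaches RethinkDB directly.
docker run -it -d --name "$container_name" -h "$host_name" \
  --network host "$image_name"

# Alternative that keeps bridge networking (and docker-compose) usable:
# map a hostname to the host's gateway and point the app at
# rethinkdb-host:28015 instead of localhost:28015.
docker run -it -d --name "$container_name" -h "$host_name" \
  -p 9080:9080 -p 1522:1522 \
  --add-host=rethinkdb-host:host-gateway "$image_name"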
