How to proxy a docker compose container? - docker

I'm setting up Atlassian Crowd using Docker. I'm using docker compose to run Crowd from this GitHub repository: https://github.com/teamatldocker/crowd
Crowd is running on port 8095, and now I need to make it available on port 80. I tried nginx; there are many Docker images out there, but I didn't have any success.
What is the easiest and best way to have port 8095 proxied to port 80?
Thank you.
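For reference, a minimal nginx reverse-proxy configuration for this kind of setup might look like the sketch below. It assumes the nginx container is attached to the same compose network as Crowd, so the compose service name (assumed here to be `crowd`) resolves via Docker's embedded DNS; the `server_name` is a placeholder.

```nginx
# nginx server block (sketch): expose Crowd on port 80.
# "crowd" is the assumed compose service name; adjust to your compose file.
server {
    listen 80;
    server_name crowd.example.com;   # hypothetical hostname

    location / {
        # Crowd listens on 8095 inside the compose network
        proxy_pass http://crowd:8095;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The nginx service would then publish port 80 on the host (`ports: - "80:80"` in compose), while the Crowd container needs no published port at all, since traffic reaches it over the internal network.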

Related

Setting up nginx and ssl in docker (Asp .Net API in VPS)

I want to deploy an API service (ASP.NET) to a VPS.
What I have at the moment:
A VPS running Ubuntu 22.10.
A container with the API service, with an open HTTP port.
A MongoDB container.
A bridge network for communication between these containers.
A volume for storing the MongoDB collections.
A DNS subdomain configured to resolve to the VPS IP.
What I want:
To add nginx.
To add ssl (Let's Encrypt with certbot).
I don't want to use docker compose because I want to understand how things work.
I'm not strong in terminology, but perhaps what I want to set up is called an nginx reverse proxy.
Please tell me if I understand correctly what I need to do.
Nginx:
To run a separate nginx container.
To add the nginx configuration to a Docker volume.
To add nginx to the bridge network (close the ports on the API container, open them on the nginx container).
To set up nginx location configs to reach the API internally through the bridge network.
SSL:
To install and run certbot on the VPS machine (not in the Docker container).
To enable automatic certificate renewal.
I'm not sure where I need to run certbot: on the VPS machine or in the nginx Docker container.
I don't know how to configure nginx to work through the bridge network.
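Regarding the bridge part: containers on a user-defined bridge network can reach each other by container name through Docker's embedded DNS, so the nginx config just uses the API container's name as the upstream host. A sketch, where the network name, container name `api`, and port 5000 are all hypothetical:

```nginx
# /etc/nginx/conf.d/api.conf (sketch), mounted into the nginx container.
# "api" is the name of the API container on the shared bridge network;
# Docker's internal DNS resolves it, so no IP address is hard-coded.
server {
    listen 80;
    server_name api.example.com;   # your configured subdomain

    location / {
        proxy_pass http://api:5000;   # 5000 is a hypothetical container port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The nginx container itself would be started with `--network` pointing at the same bridge network and `-p 80:80 -p 443:443` published on the host. For the SSL part, one common arrangement is to run certbot on the VPS host and mount the resulting certificate directory into the nginx container read-only; running certbot inside the nginx container also works but complicates renewal.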

How to share the port 8080 between docker host and docker container with out port conflict?

I am new to Docker and unable to implement a requirement. I have to run Apache Tomcat on the Docker host and Apache on a container.
But here is the catch, or the confusing part: Apache Tomcat is already running on the Docker host (using the default port 8080), and the host is what launches the Docker container. Now I am not able to open the Apache configclient.html from the container. The Docker network I used is the host network (--net=host) when running the container.
The point here is that the Docker host on which the container is running is also using port 8080 for Apache Tomcat. So now the Docker host and the container are both using port 8080.
How can I resolve the port conflict between the Docker host and the Docker container when both are using port 8080?
Any help or suggestion on how to run Apache in the container and Apache Tomcat on the host without changing the port address? Please note, I am using a private network here (IP 192.168.xx.xx).
I have found many links explaining how to share port 80 among containers, but my requirement is different. Please forgive any silly query or bad presentation in framing this question.
You need a reverse HTTP proxy on the host that routes a subpath to the container.
If this is not the answer, then you have an ambiguity in your requirement, as it must be possible for the host OS to resolve which calls go to the host and which to the container.
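To illustrate the subpath idea: two listeners cannot share host port 8080, so this sketch assumes the container's Apache is published on a different host port (e.g. `-p 8081:80` instead of `--net=host`), and a host-level nginx disambiguates by path. All paths and ports here are illustrative:

```nginx
# Host-level nginx (sketch): one entry point, two backends split by subpath.
server {
    listen 80;

    location /tomcat/ {
        proxy_pass http://127.0.0.1:8080/;   # Tomcat running on the host
    }

    location /apache/ {
        proxy_pass http://127.0.0.1:8081/;   # container port mapped via -p 8081:80
    }
}
```

Clients then use `http://192.168.xx.xx/tomcat/...` or `http://192.168.xx.xx/apache/...`, and neither Tomcat nor the containerized Apache has to change its internal port.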

Accessing Apache Nifi through traefik load balancer on docker swarm

I'm trying to set up an Apache NiFi Docker container with Traefik as a load balancer over a Docker Swarm network. We are able to access the web UI, but while browsing through the UI it redirects to the Docker-internal host instead of the proxy host name. Per the thread from the NiFi side, it looks like we need to pass HTTP headers from the proxy, but I couldn't find a way to set them through Traefik; any help here is much appreciated.
On a side note, I tested NiFi with another reverse proxy and it works fine without any extra configuration needed.
Adding the label below in the docker-compose file for the service resolved the issue.
traefik.frontend.headers.customRequestHeaders=X-ProxyScheme:https||X-ProxyHost:<Virtual HostName>||X-ProxyPort:<Virtual Port>
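In context, with Traefik 1.x label syntax on Swarm the label sits under the service's deploy labels in the compose file. A sketch; the service name, image tag, and host/port values are placeholders to be replaced with your virtual host name and port:

```yaml
# docker-compose.yml fragment (sketch) -- Traefik 1.x header labels on Swarm
services:
  nifi:
    image: apache/nifi
    deploy:
      labels:
        - "traefik.frontend.headers.customRequestHeaders=X-ProxyScheme:https||X-ProxyHost:nifi.example.com||X-ProxyPort:443"
```

These `X-Proxy*` headers tell NiFi which external scheme, host, and port to use when generating redirect URLs, which is why the UI stops redirecting to the Docker-internal host once they are set.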

Rancher CLI random host port mapping

I am planning to use Rancher for managing my containers. On my dev box, we plan to bring up several containers, each serving a REST API.
I am able to automate the process of building my containers using Jenkins, and I want to run the containers using Rancher to take advantage of random host port mapping. I am able to do this using the Rancher UI but unable to find a way to automate it using the CLI.
Example:
Jenkins builds Container_A, which exposes 8080 -> Jenkins also executes the Rancher CLI to run the container, mapping 8080 to a random host port. And the same for Container_B, which exposes 8080.
Hope my question makes sense.
Thanks
Vijay
You should just be able to do this in the service definition in the Docker compose yaml file:
...
ports:
  - 8080
...
If you generate something in the UI and look at the configuration of the stack, you'll see the corresponding compose yml.
Alternatively, you can use:
rancher run --publish 8080 nginx
then get the randomly assigned port:
rancher inspect <stackname>/<service_name> | jq .publicEndpoints[].port

Docker swarm, Consul and Spring Boot

I have 6 microservices packed in Docker containers. On every Swarm node I have installed a Consul agent, bound to the host IP, with the client in 0.0.0.0 mode.
All microservices are in a docker-compose file which I run from the Swarm manager.
The microservices are written in Java, and in bootstrap.yml I must specify the Consul agent endpoint. Possible choices are:
localhost
${HOSTIP} environment variable
Problems:
- localhost is not the localhost of the host but the container's localhost, and I don't have a Consul agent on the container's localhost, only on the host.
- for ${HOSTIP}, I have to supply this env var in the compose file, but I don't know where the Swarm manager will schedule the microservice, so I cannot know which IP address will be used.
I tried to export the host IP address on each node, but since I am running compose from the manager, it will not read this variable.
Do you have any proposal for how to solve this? I have a Consul cluster: 3 managers and 3 nodes. On each manager and node I have a Consul agent started (as a Docker container). No matter what type of networking I use, I am not able to start up the microservices. I started Consul with --net=host and --net=bridge, but neither is working.
Is there anyone with some idea?
Thanks ahead.
So you are running Consul in containers as well, right? Is it possible in your setup to link containers? You could start the Consul container as "consul" on each host and link your microservices to it. Linked containers get a hosts entry, so the Consul service should be reachable at "consul:8500" from within your services.
Edit: If you are using the official Consul Docker image from HashiCorp, you can set the client address to 0.0.0.0; this makes the Consul API available to the other containers running on the host.
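A sketch of that per-host agent, using the official image's `agent` command with the client API bound to all interfaces. The join address is a placeholder for one of your Consul servers:

```yaml
# Per-host Consul agent (sketch) -- client API on 0.0.0.0 so other
# containers on this host's bridge can reach it on port 8500.
services:
  consul:
    image: consul
    command: agent -client=0.0.0.0 -bind=0.0.0.0 -retry-join=10.0.0.1  # server IP is a placeholder
    ports:
      - "8500:8500"
```

With the client bound to 0.0.0.0 and port 8500 published, a microservice on the same host can point its bootstrap.yml at the host's address (or at "consul" if the containers are linked or share a network) instead of its own container localhost.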
Let me answer my own question: this is not the way we want to do it. We cannot put some things inside Swarm and some things outside Swarm and expect that it will work; it will not. Consul as a service discovery mechanism cannot be used from outside Swarm either. The simple answer is to use Docker's own orchestration and service discovery and not to involve Consul. If you are using Swarm, everything should be in overlay networks (RabbitMQ, Redis, ELK, and so on)...
