I have a development server with a number of Docker containers running. Each of them contains an application and an nginx listening on port 80 with no SSL encryption, serving the application. So if I have 10 containers, I have 10 nginx instances (I know nginx is designed to serve multiple apps; that is not the question here).
I would like to have a single point of entry on the server: an nginx with a certificate generated by Let's Encrypt, automatically redirecting HTTP to HTTPS.
Is this possible? Listening on port 443 with a Let's Encrypt certificate and proxying to port 80 of another nginx?
The goal here is to secure all the connections to my different dockers.
For your information, I was trying to use the valian/docker-nginx-auto-ssl image with the command
docker run -d --name main-nginx \
--restart on-failure \
-p 80:80 -p 443:443 \
-e ALLOWED_DOMAINS=www.scaniat.io,dev.scaniat.io,www.dev.scaniat.io,scaniat.io \
-e SITES='scaniat.io=scaniat-frontend-master;dev.scaniat.io=scaniat-frontend-develop' \
--network custom \
valian/docker-nginx-auto-ssl
with no luck.
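For reference, the setup the question describes can also be done with a plain nginx on the host (or in its own container) terminating TLS and proxying to each application's internal nginx. This is only a sketch: app-container is a placeholder for one of the application containers on the shared Docker network, and the certificate paths assume certbot's default layout.

```nginx
# Redirect all plain HTTP traffic to HTTPS
server {
    listen 80;
    server_name scaniat.io www.scaniat.io;
    return 301 https://$host$request_uri;
}

# Terminate TLS here and proxy to the container's internal nginx on port 80
server {
    listen 443 ssl;
    server_name scaniat.io www.scaniat.io;

    ssl_certificate     /etc/letsencrypt/live/scaniat.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/scaniat.io/privkey.pem;

    location / {
        # "app-container" resolves via Docker's embedded DNS on the shared network
        proxy_pass http://app-container:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

One such `server` pair per domain gives every container HTTPS through a single entry point, while the containers themselves stay plain HTTP internally.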
I have successfully created a client inside Keycloak using Dynamic Client Registration
The response body contains:
"registration_client_uri":"https://127.0.0.1:8443/auth/realms...",
This is because Keycloak is installed with Docker and is fronted by Nginx. I want to replace the IP address/port with the actual public hostname.
Where are the docs / configurations for this?
I started keycloak as follows:
docker run -itd --name keycloak \
--restart unless-stopped \
--env-file keycloak.env \
-p 127.0.0.1:8443:8443 \
--network keycloak \
jboss/keycloak:11.0.0 \
-Dkeycloak.profile=preview
And inside keycloak.env, I have set KEYCLOAK_HOSTNAME=example.com
Configure the env variable PROXY_ADDRESS_FORWARDING=true, because Keycloak is running behind an Nginx reverse proxy (see https://hub.docker.com/r/jboss/keycloak/):
Enabling proxy address forwarding
When running Keycloak behind a proxy, you will need to enable proxy address forwarding.
docker run -e PROXY_ADDRESS_FORWARDING=true jboss/keycloak
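Proxy address forwarding only takes effect if the reverse proxy actually sends the forwarding headers Keycloak reads. A minimal Nginx sketch for that, assuming the 127.0.0.1:8443 port mapping from the question and example.com as the public hostname:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # ssl_certificate / ssl_certificate_key directives omitted here

    location / {
        # Keycloak's 8443 port serves HTTPS, so proxy with the https scheme
        proxy_pass https://127.0.0.1:8443;

        # Headers Keycloak evaluates when PROXY_ADDRESS_FORWARDING=true
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port 443;
    }
}
```

With these headers set, Keycloak builds URLs such as registration_client_uri from the forwarded hostname instead of the internal address.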
I am running several services on my CentOS 7 Linux server. Nginx and netdata are being run as root and are working well.
I started Portainer as a Docker container:
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer
I can connect to the Portainer port locally with telnet localhost 9000. But, when I try to telnet ip 9000 from an external client PC on the same network, it doesn't connect.
The Linux server does not have a firewall. Nginx, netdata, and my own app, which are not running in Docker, work fine. In short, every service running directly on the server can be reached from outside, but the service inside the Docker container cannot.
What do I need to change to be able to reach the container?
You have to disable your IPv6.
Add these lines to /etc/sysctl.conf:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
then apply the changes with:
sysctl -p
I have a web app running in a Docker container on port 9000. I need to route traffic to Nginx in another container on the same network so the app can be accessed on port 80. How do I achieve this? I tried building an Nginx image and adding an nginx.conf, but my Nginx container stops immediately after it starts.
contents of Nginx.conf file
Snippet of containers
You need to bind the containers' internal ports to the host, like:
application
docker run -d \
--network=randon_name \
<image>
nginx
docker run -d \
--network=randon_name \
-p 80:80 \ # <hostPort>:<containerPort>
-p 443:443 \ # <hostPort>:<containerPort>
<image>
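Since the question's nginx.conf was not shown, here is a minimal sketch of what it could look like; webapp is a placeholder for the application container's name on the shared Docker network:

```nginx
events {}

http {
    server {
        listen 80;

        location / {
            # "webapp" resolves through Docker's DNS on the shared network
            proxy_pass http://webapp:9000;
            proxy_set_header Host $host;
        }
    }
}
```

If the Nginx container still exits immediately, docker logs <container> usually shows a configuration error, and running nginx -t inside the image validates the file before starting.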
I need to setup nginx-proxy container to forward requests to the container with my app. I use the following commands to start containers:
# app
docker run -d -p 8080:2368 \
--name app \
app
# nginx
docker run -d -p 80:8080 \
--name nginx-proxy \
jwilder/nginx-proxy
But when I try to access port 80 on my server, I get ERR_CONNECTION_REFUSED. It's clear to me that the nginx container is not forwarding the port I want, because I can access the app on server port 8080.
I tried using network like this:
# network
docker network create -d bridge net
# app
docker run -d -p 8080:2368 \
--name app \
--network net \
app
# nginx
docker run -d -p 80:8080 \
--name nginx-proxy \
--network net \
jwilder/nginx-proxy
But the result seems to be the same.
I need to understand how to make nginx container proxy requests from server port 80 to my app.
It looks like your app is running on port 2368, which users should not need to reach directly, so the app container's port does not need to be published.
You are correct to create a bridge network and put the containers on it.
You need to remove the port mapping from the app container and change the port mapping of the nginx-proxy container from 80:8080 to 80:80.
You also need to set up nginx-proxy to proxy requests from port 80 to app:2368.
This way, users hitting port 80 on the host machine running Docker will be proxied to your app.
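Putting those changes together, the corrected commands could look like the sketch below. The VIRTUAL_HOST value is a placeholder, and note that jwilder/nginx-proxy also needs the Docker socket mounted (read-only) so it can discover containers and generate its configuration:

```shell
# network (as in the question)
docker network create -d bridge net

# app: no host port mapping; nginx-proxy reaches it over the bridge network
docker run -d \
  --name app \
  --network net \
  -e VIRTUAL_HOST=mydomain.com \
  app

# nginx-proxy: host port 80 -> container port 80, plus the Docker socket
# it uses to watch for containers carrying a VIRTUAL_HOST variable
docker run -d -p 80:80 \
  --name nginx-proxy \
  --network net \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy
```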
The VIRTUAL_HOST env var with the domain name for the app container was required to let nginx-proxy route requests to the app container. No network setup or port forwarding is needed with this approach. Here is the working setup I came up with:
# app
docker run -d \
--name app \
-e VIRTUAL_HOST=mydomain.com \
app
# nginx
docker run -d -p 80:80 \
--name nginx-proxy \
jwilder/nginx-proxy
I have a container that runs a node app with three servers: one server for public data and two webpack servers. By default these run on ports 3000, 3001, and 3002, but these ports can all be configured.
It seems that I would be able to run the container like so:
docker run -p 3000:3003 -p 3001:3004 -p 3002:3005 -e 'APP_PORT=3003' \
-e 'NG_PORT=3004' -e 'RC_PORT=3005' --expose 3003 --expose 3004 --expose 3005 \
ajcrites/webf
However there are two problems with this approach:
There is a tremendous amount of redundancy
I want the default ports to be used/exposed if nothing is specified
Is there a simpler way to expose all of the configurable ports whether or not they are changed from the defaults?
You wouldn't want to expose ALL ports; however, you can publish and bind by range, available since at least Docker 1.5:
docker run -p 3000-3002:3003-3005 ajcrites/webf
I don't think you need to use --expose when you publish.
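To address the redundancy point as well, here is a sketch of both cases; the image name and env variables are the ones from the question:

```shell
# Defaults: the container already listens on 3000-3002, so publish
# the identity range and pass no env variables and no --expose
docker run -p 3000-3002:3000-3002 ajcrites/webf

# Overridden ports: one range mapping replaces the three separate -p flags
docker run -p 3000-3002:3003-3005 \
  -e APP_PORT=3003 -e NG_PORT=3004 -e RC_PORT=3005 \
  ajcrites/webf
```

The range syntax keeps the host ports stable regardless of which container ports are configured, which removes most of the repetition.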