Reverse Proxy Nginx Docker Container - docker

I have problems running multiple proxies and connecting an nginx reverse proxy to them.
The image shows what I want to achieve.
What works is connecting to a proxy directly:
import requests

# proxy 1
print(requests.get("https://api.ipify.org?format=json", proxies={
    "http": "127.0.0.1:9000",
    "https": "127.0.0.1:9000"
}).content)

# proxy 2
print(requests.get("https://api.ipify.org?format=json", proxies={
    "http": "127.0.0.1:9001",
    "https": "127.0.0.1:9001"
}).content)
But it does not work when I go through the nginx reverse proxy:
# nginx
print(requests.get("https://api.ipify.org?format=json", proxies={
    "http": "127.0.0.1:8080",
    "https": "127.0.0.1:8080"
}).content)
Response:
requests.exceptions.ProxyError: HTTPSConnectionPool(host='api.ipify.org', port=443): Max retries exceeded with url: /?format=json (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 400 Bad Request')))
This is my docker-compose file
docker-compose.yml
version: "2.4"
services:
  proxy:
    image: qmcgaw/private-internet-access
    cap_add:
      - NET_ADMIN
    restart: always
    ports:
      - 127.0.0.1:9000-9001:8888/tcp
    environment:
      - VPNSP=Surfshark
      - OPENVPN_USER=${user}
      - PASSWORD=${pass}
      - HTTPPROXY=ON
    scale: 2

  nginx:
    image: nginx
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8080:80"
and my nginx configuration
default.conf
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://proxy:8888;
    }
}
I'd appreciate any advice you can give me.

Actually that's not quite what I wanted, but it works with Twisted instead of nginx. Maybe someone will find a better solution.
docker-compose.yml
version: "2.4"
services:
  proxy:
    image: qmcgaw/private-internet-access
    cap_add:
      - NET_ADMIN
    restart: always
    environment:
      - VPNSP=Surfshark
      - OPENVPN_USER=${user}
      - PASSWORD=${pass}
      - HTTPPROXY=ON
    scale: 2

  twisted:
    container_name: twisted
    build: .
    restart: always
    ports:
      - 127.0.0.1:8080:8080/tcp
    healthcheck:
      test: ["CMD-SHELL", "curl https://google.de --proxy 127.0.0.1:8080"]
      interval: 20s
      timeout: 10s
      retries: 5
Dockerfile
FROM stibbons31/alpine-s6-python3:latest
ENV SRC_IP="0.0.0.0"
ENV SRC_PORT=8080
ENV DST_IP="proxy"
ENV DST_PORT=8888
RUN apk add --no-cache g++ python3-dev
RUN pip3 install --no-cache --upgrade pip
RUN pip3 install service_identity twisted
WORKDIR /app
ADD ./app /app
CMD [ "twistd", "-y", "main.py", "-n"]
main.py
import os

from twisted.application import internet, service
from twisted.protocols.portforward import ProxyFactory

SRC_PORT = int(os.environ["SRC_PORT"])
DST_PORT = int(os.environ["DST_PORT"])

application = service.Application("Proxy")
ps = internet.TCPServer(SRC_PORT,
                        ProxyFactory(os.environ["DST_IP"], DST_PORT),
                        50,
                        os.environ["SRC_IP"])
ps.setServiceParent(application)
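With the forwarder in place, traffic goes through the single published port just like the direct-proxy calls above. A minimal check (assuming the stack above is up and listening on 127.0.0.1:8080):

import requests

# same request as the direct-proxy test, now routed through the twisted forwarder
print(requests.get("https://api.ipify.org?format=json", proxies={
    "http": "127.0.0.1:8080",
    "https": "127.0.0.1:8080"
}).content)

Since the proxy service name resolves to both scaled replicas inside the compose network, repeated calls may go out through either VPN container.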

Related

Strapi dockerize with docker-compose complete guide

https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/installation/docker.html#creating-a-strapi-project
Dockerize Strapi with Docker and docker-compose, and resolve a different error:
strapi failed to load resource: the server responded with a status of 404 ()
You can use my dockerized project.
Dockerfile:
FROM node:16.15-alpine3.14
RUN mkdir -p /opt/app
WORKDIR /opt/app
RUN adduser -S app
COPY app/ .
RUN npm install
RUN npm install --save @strapi/strapi
RUN chown -R app /opt/app
USER app
RUN npm run build
EXPOSE 1337
CMD [ "npm", "run", "start" ]
If you don't use RUN npm run build, your project still works on port 80 (http://localhost), but the Strapi admin templates call http://localhost:1337. Since you are serving the app on http://localhost and there is no stable URL at http://localhost:1337, Strapi throws exceptions like:
Refused to connect to 'http://localhost:1337/admin/init' because it violates the document's Content Security Policy.
Refused to connect to 'http://localhost:1337/admin/init' because it violates the following Content Security Policy directive: "connect-src 'self' https:".
docker-compose.yml:
version: "3.9"
services:
  # Strapi service (app service)
  strapi_app:
    build:
      context: .
    depends_on:
      - strapi_db
    ports:
      - "80:1337"
    environment:
      - DATABASE_CLIENT=postgres
      - DATABASE_HOST=strapi_db
      - DATABASE_PORT=5432
      - DATABASE_NAME=strapi_db
      - DATABASE_USERNAME=strapi_db
      - DATABASE_PASSWORD=strapi_db
      - DATABASE_SSL=false
    volumes:
      - /var/scrapi/public/uploads:/opt/app/public/uploads
      - /var/scrapi/public:/opt/app/public
    networks:
      - app-network

  # PostgreSQL service
  strapi_db:
    image: postgres
    container_name: strapi_db
    environment:
      POSTGRES_USER: strapi_db
      POSTGRES_PASSWORD: strapi_db
      POSTGRES_DB: strapi_db
    ports:
      - '5432:5432'
    volumes:
      - dbdata:/var/lib/postgresql/data
    networks:
      - app-network

# Docker networks
networks:
  app-network:
    driver: bridge

# Volumes
volumes:
  dbdata:
    driver: local
In the docker-compose file I used Postgres as the database; you can use any other database and set its config in the app service's environment variables, like:
environment:
  - DATABASE_CLIENT=postgres
  - DATABASE_HOST=strapi_db
  - DATABASE_PORT=5432
  - DATABASE_NAME=strapi_db
  - DATABASE_USERNAME=strapi_db
  - DATABASE_PASSWORD=strapi_db
  - DATABASE_SSL=false
To use environment variables in the project, you must read them through process.env, which exposes the operating system's environment variables.
Change the app/config/database.js file to:
module.exports = ({ env }) => ({
  connection: {
    client: process.env.DATABASE_CLIENT,
    connection: {
      host: process.env.DATABASE_HOST,
      port: parseInt(process.env.DATABASE_PORT),
      database: process.env.DATABASE_NAME,
      user: process.env.DATABASE_USERNAME,
      password: process.env.DATABASE_PASSWORD,
      // ssl: Boolean(process.env.DATABASE_SSL),
      ssl: false,
    },
  },
});
Dockerize Strapi with Docker-compose
FROM node:16.14.2
# Set up the working directory that will be used to copy files/directories below :
WORKDIR /app
# Copy package.json to root directory inside Docker container of Strapi app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
CMD ["npm", "start"]
# docker-compose file
version: '3.7'
services:
  strapi:
    container_name: strapi
    restart: unless-stopped
    build:
      context: ./strapi
      dockerfile: Dockerfile
    volumes:
      - strapi:/app
      - /app/node_modules
    ports:
      - '1337:1337'

volumes:
  strapi:
    driver: local

Docker Compose with NGINX proxy pass thru not working

Following is my Docker Compose file & NGINX conf file.
The application seems to work and NGINX is also up, but the proxy_pass setting doesn't seem to work properly.
File docker-compose.yaml
networks:
  webapp:

services:
  web:
    image: nginx
    volumes:
      - ./data/ntemplates:/etc/nginx/templates
      - ./webapp.conf:/etc/nginx/conf.d/webapp.conf
    ports:
      - "8080:80"
    networks:
      - webapp
  pyweb:
    build: .
    ports:
      - "5000:5000"
    networks:
      - webapp
  redis:
    image: "redis:alpine"
    networks:
      - webapp
File webapp.conf
server {
    listen 80;
    listen [::]:80;
    server_name localhost;

    location / {
        proxy_pass "http://pyweb_1:5000/";
    }

    #error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
}
The pyweb service works properly if accessed directly at http://pyweb_1:5000.
I created this app based on the Docker getting-started page.
For completeness, below are the other files, which seem to be working just fine.
File Dockerfile
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
File requirements.txt
flask
redis
File app.py
import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)
EDIT:
You're currently not using your nginx configuration at all; I didn't read your docker-compose file carefully enough. You can fix it by mounting webapp.conf at /etc/nginx/conf.d/default.conf, e.g.:
services:
  web:
    image: nginx
    volumes:
      - ./data/ntemplates:/etc/nginx/templates
      - ./webapp.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8080:80"
    depends_on:
      - pyweb
    networks:
      - webapp
There are 2 issues:
you don't know what container name will be used by docker-compose
you don't know the order used to start the containers
docker-compose allows you to solve the first issue in 2 ways:
define a container_name subsection
use the service name
This means you can simply use proxy_pass "http://pyweb:5000/"; in your nginx setup.
The second issue can be fixed by adding a depends_on subsection to the nginx (web) service, e.g.:
services:
  web:
    image: nginx
    volumes:
      - ./data/ntemplates:/etc/nginx/templates
      - ./webapp.conf:/etc/nginx/conf.d/webapp.conf
    depends_on:
      - pyweb
    ports:
      - "8080:80"
    networks:
      - webapp
Nevertheless, depends_on might not be enough, since it does not check the service status; it only makes sure the container is started (as stated in the documentation).
You'll need to find another way to check that the service is actually ready, for example by polling it, as sketched below.
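A minimal readiness poll in Python (a hypothetical wait_for_pyweb.py, not part of the original setup; it assumes the pyweb service name and port 5000 from the compose file above). It keeps retrying until the Flask app actually answers, and could back a compose healthcheck or run before starting dependent work:

# wait_for_pyweb.py - hypothetical helper, not part of the original project
import sys
import time
import urllib.request

URL = "http://pyweb:5000/"  # service name and port from the compose file above

def wait(url, retries=30, delay=1.0):
    # Poll until the service answers, so "container started" can be told apart from "app ready".
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status < 500:
                    return True
        except OSError:
            pass
        time.sleep(delay)
    return False

if __name__ == "__main__":
    sys.exit(0 if wait(URL) else 1)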

docker port mapping using docker-gen and letsencrypt-companion

I have several Flask applications that I want to run on a server as separate Docker containers. On the server I already have several websites running behind a reverse proxy with the letsencrypt-nginx-proxy-companion. Unfortunately I can't get the new containers to run, and I think it is because of the port mapping. When I start the containers on port 80, I get the error message "[ERROR] Can't connect to ('', 80)" from gunicorn. On all other ports it starts successfully, but then I can't reach it from outside.
What am I doing wrong?
docker-compose.yml
version: '3'
services:
  db:
    image: "mysql/mysql-server:5.7"
    env_file: .env-mysql
    restart: always

  app:
    build: .
    env_file: .env
    expose:
      - "8001"
    environment:
      - VIRTUAL_HOST:example.com
      - VIRTUAL_PORT:'8001'
      - LETSENCRYPT_HOST:example.com
      - LETSENCRYPT_EMAIL:foo@example.com
    links:
      - db:dbserver
    restart: always

networks:
  default:
    external:
      name: nginx-proxy
Dockerfile
FROM python:3.6-alpine
ARG CONTAINER_USER='flask-user'
ENV FLASK_APP run.py
ENV FLASK_CONFIG docker
RUN adduser -D ${CONTAINER_USER}
USER ${CONTAINER_USER}
WORKDIR /home/${CONTAINER_USER}
COPY requirements requirements
RUN python -m venv venv
RUN venv/bin/pip install -r requirements/docker.txt
COPY app app
COPY migrations migrations
COPY run.py config.py entrypoint.sh ./
# runtime configuration
EXPOSE 8001
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh
#!/bin/sh
source venv/bin/activate
flask deploy
exec gunicorn -b :8001 --access-logfile - --error-logfile - run:app
reverse-proxy/docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    container_name: nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:ro

  nginx-gen:
    image: jwilder/docker-gen
    command: -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    container_name: nginx-gen
    restart: always
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /srv/www/nginx-proxy/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro

  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: always
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
      NGINX_PROXY_CONTAINER: "nginx"
      DEBUG: "true"

networks:
  default:
    external:
      name: nginx-proxy

docker-compose uwsgi connect() failed (111: Connection refused)

I built my infrastructure with Docker and used docker-compose to tie the containers together.
Below are the images I used:
nginx:latest
mongo:latest
python:3.6.5
To deploy the Flask web server, I used uWSGI
(uwsgi is installed in python:3.6.5).
[docker-compose.yml]
version: '3.7'
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx/dockerfile
    container_name: nginx
    hostname: nginx-dev
    ports:
      - '80:80'
    networks:
      - backend
    links:
      - web_project

  mongodb:
    build:
      context: .
      dockerfile: docker/mongodb/dockerfile
    container_name: mongodb
    hostname: mongodb-dev
    ports:
      - '27017:27017'
    networks:
      - backend

  web_project:
    build:
      context: .
      dockerfile: docker/web/dockerfile
    container_name: web_project
    hostname: web_project_dev
    ports:
      - '5000:5000'
    networks:
      - backend
    tty: true
    depends_on:
      - mongodb
    links:
      - mongodb

networks:
  backend:
    driver: 'bridge'
[/docker/nginx/dockerfile]
FROM nginx:latest
COPY . ./home
WORKDIR home
RUN rm /etc/nginx/conf.d/default.conf
COPY ./config/nginx.conf /etc/nginx/conf.d/default.conf
[/config/nginx.conf]
upstream flask {
    server web_project:5000;
}

server {
    listen 80;

    location / {
        uwsgi_pass flask;
        include /home/config/uwsgi_params;
    }
}
[/docker/web/dockerfile]
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
RUN apt-get update && apt-get install -y uwsgi-plugin-python
RUN uwsgi --ini config/uwsgi.ini
[uwsgi.ini]
[uwsgi]
chdir = /home/app
socket = :5000
chmod-socket = 666
logto = /home/web.log
master = true
process = 2
daemonize = /home/uwsgi.log
I defined socket = :5000.
After building and starting the stack and accessing the website, nginx throws a 502 Bad Gateway error to the console:
nginx | 2018/11/12 06:28:55 [error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.27.0.1, server: , request: "GET / HTTP/1.1", upstream: "uwsgi://172.27.0.3:5000", host: "localhost"
I searched Google for a long time, but I can't find a solution.
Is there any solution here?
Thanks.
You must expose port 5000 in the Python app:
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
RUN apt-get update && apt-get install -y uwsgi-plugin-python
# <---- add this line
EXPOSE 5000
RUN uwsgi --ini config/uwsgi.ini
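After rebuilding and restarting the stack, a quick host-side check (just a sketch, not part of the original answer; it assumes the 80:80 port mapping from the compose file above):

import requests

# expect the Flask response instead of nginx's 502 once the upstream is reachable
resp = requests.get("http://localhost/")
print(resp.status_code)
print(resp.text)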

Docker-compose, Laravel-echo-server and Redis connectivity issue: [ioredis] Unhandled error event: Error: connect ECONNREFUSED 172.20.0.2:6379

I'm setting up a docker stack with PHP, PostgreSQL, Nginx, Laravel-Echo-Server and Redis and having some issues with Redis and the echo-server connecting. I'm using a docker-compose.yml:
version: '3'
networks:
  app-tier:
    driver: bridge

services:
  app:
    build:
      context: .
      dockerfile: .docker/php/Dockerfile
    networks:
      - app-tier
    ports:
      - 9002:9000
    volumes:
      - .:/srv/app

  nginx:
    build:
      context: .
      dockerfile: .docker/nginx/Dockerfile
    networks:
      - app-tier
    ports:
      - 8080:80
    volumes:
      - ./public:/srv/app/public

  db:
    build:
      context: .docker/postgres/
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-tier
    ports:
      - 5433:5432
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: secret
    volumes:
      - .docker/postgres/data:/var/lib/postgresql/data

  laravel-echo-server:
    build:
      context: .docker/laravel-echo-server
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-tier
    ports:
      - 6001:6001
    links:
      - 'redis:redis'

  redis:
    build:
      context: .docker/redis
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-tier
    volumes:
      - .docker/redis/data:/var/lib/redis/data
My echo-server Dockerfile:
FROM node:10-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN apk add --update \
    python \
    python-dev \
    py-pip \
    build-base
RUN npm install
COPY laravel-echo-server.json /usr/src/app/laravel-echo-server.json
EXPOSE 3000
CMD [ "npm", "start" ]
Redis Dockerfile:
FROM redis:latest
LABEL maintainer="maintainer"
COPY . /usr/src/app
COPY redis.conf /usr/src/app/redis/redis.conf
VOLUME /data
EXPOSE 6379
CMD ["redis-server", "/usr/src/app/redis/redis.conf"]
My laravel-echo-server.json:
{
    "authHost": "localhost",
    "authEndpoint": "/broadcasting/auth",
    "clients": [],
    "database": "redis",
    "databaseConfig": {
        "redis": {
            "port": "6379",
            "host": "redis"
        }
    },
    "devMode": true,
    "host": null,
    "port": "6001",
    "protocol": "http",
    "socketio": {},
    "sslCertPath": "",
    "sslKeyPath": ""
}
The redis.conf is the default right now. The error I am getting from the laravel-echo-server is:
[ioredis] Unhandled error event: Error: connect ECONNREFUSED 172.20.0.2:6379 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1163:14)
Redis is up and running fine, using the configuration file, and is ready to accept connections. docker ps shows that both redis and the echo-server are up, so they're just not connecting, as the error indicates.
If I change the final line of the Redis Dockerfile to just CMD ["redis-server"], it appears to connect and falls back to the default config (which is the same as the one I have in my .docker directory), but then I get this error:
Possible SECURITY ATTACK detected. It looks like somebody is sending POST or Host: commands to Redis. This is likely due to an attacker attempting to use Cross Protocol Scripting to compromise your Redis instance. Connection aborted.
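One way to narrow this down is to test the Redis connection from another container on the app-tier network, separately from the echo-server. A small diagnostic sketch (hypothetical, not part of the original setup; it assumes the redis service name above and the redis-py package in whichever container you run it from):

# redis_check.py - hypothetical diagnostic, run from a container on the app-tier network
import redis

r = redis.Redis(host="redis", port=6379, socket_connect_timeout=2)
try:
    print("PING ->", r.ping())  # True means the container can reach redis on 6379
except redis.exceptions.ConnectionError as exc:
    print("Redis not reachable:", exc)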
