I built my infrastructure with Docker, using docker-compose to tie the containers together. These are the images I used:
nginx:latest
mongo:latest
python:3.6.5
To deploy the Flask web server I used uWSGI (installed inside the python:3.6.5 image).
[docker-compose.yml]
version: '3.7'
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx/dockerfile
    container_name: nginx
    hostname: nginx-dev
    ports:
      - '80:80'
    networks:
      - backend
    links:
      - web_project
  mongodb:
    build:
      context: .
      dockerfile: docker/mongodb/dockerfile
    container_name: mongodb
    hostname: mongodb-dev
    ports:
      - '27017:27017'
    networks:
      - backend
  web_project:
    build:
      context: .
      dockerfile: docker/web/dockerfile
    container_name: web_project
    hostname: web_project_dev
    ports:
      - '5000:5000'
    networks:
      - backend
    tty: true
    depends_on:
      - mongodb
    links:
      - mongodb
networks:
  backend:
    driver: 'bridge'
[/docker/nginx/dockerfile]
FROM nginx:latest
COPY . ./home
WORKDIR home
RUN rm /etc/nginx/conf.d/default.conf
COPY ./config/nginx.conf /etc/nginx/conf.d/default.conf
[/config/nginx.conf]
upstream flask {
    server web_project:5000;
}

server {
    listen 80;

    location / {
        uwsgi_pass flask;
        include /home/config/uwsgi_params;
    }
}
[/docker/web/dockerfile]
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
RUN apt-get update && apt-get install -y uwsgi-plugin-python
RUN uwsgi --ini config/uwsgi.ini
[uwsgi.ini]
[uwsgi]
chdir = /home/app
socket = :5000
chmod-socket = 666
logto = /home/web.log
master = true
process = 2
daemonize = /home/uwsgi.log
The socket is defined as :5000. After building and starting the containers, accessing the website returns a 502 Bad Gateway and this error appears in the console:
nginx | 2018/11/12 06:28:55 [error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.27.0.1, server: , request: "GET / HTTP/1.1", upstream: "uwsgi://172.27.0.3:5000", host: "localhost"
I searched Google for a long time, but I can't find the solution. Any ideas?
Thanks.
You must expose port 5000 in the Python app's Dockerfile:
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
RUN apt-get update && apt-get install -y uwsgi-plugin-python
# <---- add this line (Dockerfile comments cannot go inline after an instruction)
EXPOSE 5000
RUN uwsgi --ini config/uwsgi.ini
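Separately, note that `RUN uwsgi --ini config/uwsgi.ini` executes uWSGI while the image is being *built*, not when the container starts, so nothing is listening at runtime (the container only stays up because of `tty: true`). A hedged sketch of the same Dockerfile that starts uWSGI as the container's main process instead, assuming `daemonize` is removed from uwsgi.ini so the process stays in the foreground:

```dockerfile
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
RUN apt-get update && apt-get install -y uwsgi-plugin-python
EXPOSE 5000
# Start uWSGI when the container runs, not while the image builds.
CMD ["uwsgi", "--ini", "config/uwsgi.ini"]
```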
Dockerize Strapi with Docker and docker-compose
Related: https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/installation/docker.html#creating-a-strapi-project
This also resolves a different error:
strapi failed to load resource: the server responded with a status of 404 ()
You can use my dockerized project.
Dockerfile:
FROM node:16.15-alpine3.14
RUN mkdir -p /opt/app
WORKDIR /opt/app
RUN adduser -S app
COPY app/ .
RUN npm install
RUN npm install --save @strapi/strapi
RUN chown -R app /opt/app
USER app
RUN npm run build
EXPOSE 1337
CMD [ "npm", "run", "start" ]
If you don't use RUN npm run build, your project still works on port 80 (http://localhost), but the Strapi admin templates call http://localhost:1337 on the machine you are browsing from; since no stable http://localhost:1337 URL exists there, Strapi throws exceptions like:
Refused to connect to 'http://localhost:1337/admin/init' because it violates the document's Content Security Policy.
Refused to connect to 'http://localhost:1337/admin/init' because it violates the following Content Security Policy directive: "connect-src 'self' https:".
docker-compose.yml:
version: "3.9"
services:
  # Strapi Service (APP Service)
  strapi_app:
    build:
      context: .
    depends_on:
      - strapi_db
    ports:
      - "80:1337"
    environment:
      - DATABASE_CLIENT=postgres
      - DATABASE_HOST=strapi_db
      - DATABASE_PORT=5432
      - DATABASE_NAME=strapi_db
      - DATABASE_USERNAME=strapi_db
      - DATABASE_PASSWORD=strapi_db
      - DATABASE_SSL=false
    volumes:
      - /var/scrapi/public/uploads:/opt/app/public/uploads
      - /var/scrapi/public:/opt/app/public
    networks:
      - app-network
  # PostgreSQL Service
  strapi_db:
    image: postgres
    container_name: strapi_db
    environment:
      POSTGRES_USER: strapi_db
      POSTGRES_PASSWORD: strapi_db
      POSTGRES_DB: strapi_db
    ports:
      - '5432:5432'
    volumes:
      - dbdata:/var/lib/postgresql/data
    networks:
      - app-network
# Docker Networks
networks:
  app-network:
    driver: bridge
# Volumes
volumes:
  dbdata:
    driver: local
In the docker-compose file I used Postgres as the database; you can use any other database and set its config in the app service's environment variables, e.g.:
environment:
  - DATABASE_CLIENT=postgres
  - DATABASE_HOST=strapi_db
  - DATABASE_PORT=5432
  - DATABASE_NAME=strapi_db
  - DATABASE_USERNAME=strapi_db
  - DATABASE_PASSWORD=strapi_db
  - DATABASE_SSL=false
To use environment variables in the project, read them with process.env, which exposes the operating-system environment. Change the app/config/database.js file to:
module.exports = ({ env }) => ({
  connection: {
    client: process.env.DATABASE_CLIENT,
    connection: {
      host: process.env.DATABASE_HOST,
      port: parseInt(process.env.DATABASE_PORT),
      database: process.env.DATABASE_NAME,
      user: process.env.DATABASE_USERNAME,
      password: process.env.DATABASE_PASSWORD,
      // ssl: Boolean(process.env.DATABASE_SSL),
      ssl: false,
    },
  },
});
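Since the factory already receives Strapi's `env` helper as a parameter, the same config can also be written with it; `env.int` and `env.bool` give typed parsing with fallbacks. A sketch (the default values here are hypothetical, not from the original project):

```javascript
module.exports = ({ env }) => ({
  connection: {
    client: env('DATABASE_CLIENT', 'postgres'),
    connection: {
      host: env('DATABASE_HOST', '127.0.0.1'),
      // env.int parses the string from the environment into a number
      port: env.int('DATABASE_PORT', 5432),
      database: env('DATABASE_NAME', 'strapi_db'),
      user: env('DATABASE_USERNAME', 'strapi_db'),
      password: env('DATABASE_PASSWORD', 'strapi_db'),
      // env.bool avoids the Boolean("false") === true pitfall
      ssl: env.bool('DATABASE_SSL', false),
    },
  },
});
```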
Dockerize Strapi with Docker-compose
FROM node:16.14.2
# Set up the working directory that will be used to copy files/directories below :
WORKDIR /app
# Copy package.json to root directory inside Docker container of Strapi app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
CMD ["npm", "start"]
#docker-compose file
version: '3.7'
services:
  strapi:
    container_name: strapi
    restart: unless-stopped
    build:
      context: ./strapi
      dockerfile: Dockerfile
    volumes:
      - strapi:/app
      - /app/node_modules
    ports:
      - '1337:1337'
volumes:
  strapi:
    driver: local
I'm having problems running multiple proxies and connecting an nginx reverse proxy to them.
The image shows what I want to achieve.
Connecting to a proxy directly works:
import requests

# proxy 1
print(requests.get("https://api.ipify.org?format=json", proxies={
    "http": "127.0.0.1:9000",
    "https": "127.0.0.1:9000"
}).content)

# proxy 2
print(requests.get("https://api.ipify.org?format=json", proxies={
    "http": "127.0.0.1:9001",
    "https": "127.0.0.1:9001"
}).content)
But it does not work when I go through the nginx reverse proxy:
# nginx
print(requests.get("https://api.ipify.org?format=json", proxies={
    "http": "127.0.0.1:8080",
    "https": "127.0.0.1:8080"
}).content)
Response:
requests.exceptions.ProxyError: HTTPSConnectionPool(host='api.ipify.org', port=443): Max retries exceeded with url: /?format=json (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 400 Bad Request')))
This is my docker-compose file:
docker-compose.yml
version: "2.4"
services:
  proxy:
    image: qmcgaw/private-internet-access
    cap_add:
      - NET_ADMIN
    restart: always
    ports:
      - 127.0.0.1:9000-9001:8888/tcp
    environment:
      - VPNSP=Surfshark
      - OPENVPN_USER=${user}
      - PASSWORD=${pass}
      - HTTPPROXY=ON
    scale: 2
  nginx:
    image: nginx
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8080:80"
And my nginx configuration:
default.conf
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://proxy:8888;
    }
}
I'd appreciate any advice you can give me.
This isn't exactly what I wanted, but it works with Twisted instead of nginx. Maybe someone will find a better solution.
docker-compose.yml
version: "2.4"
services:
  proxy:
    image: qmcgaw/private-internet-access
    cap_add:
      - NET_ADMIN
    restart: always
    environment:
      - VPNSP=Surfshark
      - OPENVPN_USER=${user}
      - PASSWORD=${pass}
      - HTTPPROXY=ON
    scale: 2
  twisted:
    container_name: twisted
    build: .
    restart: always
    ports:
      - 127.0.0.1:8080:8080/tcp
    healthcheck:
      test: ["CMD-SHELL", "curl https://google.de --proxy 127.0.0.1:8080"]
      interval: 20s
      timeout: 10s
      retries: 5
Dockerfile
FROM stibbons31/alpine-s6-python3:latest
ENV SRC_IP="0.0.0.0"
ENV SRC_PORT=8080
ENV DST_IP="proxy"
ENV DST_PORT=8888
RUN apk add --no-cache g++ python3-dev
RUN pip3 install --no-cache --upgrade pip
RUN pip3 install service_identity twisted
WORKDIR /app
ADD ./app /app
CMD [ "twistd", "-y", "main.py", "-n"]
main.py
import os

from twisted.application import internet, service
from twisted.protocols.portforward import ProxyFactory

SRC_PORT = int(os.environ["SRC_PORT"])
DST_PORT = int(os.environ["DST_PORT"])

application = service.Application("Proxy")
ps = internet.TCPServer(SRC_PORT,
                        ProxyFactory(os.environ["DST_IP"], DST_PORT),
                        50,
                        os.environ["SRC_IP"])
ps.setServiceParent(application)
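One nginx-based alternative worth sketching (an untested assumption, not something from the original answer): the 400 Bad Request happens because an `http`-block `proxy_pass` cannot handle the CONNECT tunnels that HTTPS clients send to a forward proxy. nginx's stream module forwards raw TCP instead, much like the Twisted forwarder above, so the client speaks the proxy protocol end-to-end with the upstream. This block would go at the top level of nginx.conf (not inside conf.d, which is included within the http block); `proxy:8888` is the service name and port from the compose file above:

```nginx
# Raw TCP forwarding, so CONNECT tunnels pass through untouched.
stream {
    upstream proxies {
        server proxy:8888;
    }

    server {
        listen 8080;
        proxy_pass proxies;
    }
}
```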
I have a docker-compose setup that consists of two services:
a front-end application that runs on port 3000
a back-end application that runs on port 443
mt_symfony:
  container_name: mt_symfony
  build:
    context: ./html
    dockerfile: dev.dockerfile
  environment:
    XDEBUG_CONFIG: "remote_host=192.168.220.1 remote_port=10000"
    PHP_IDE_CONFIG: "serverName=mt_symfony"
  ports:
    - 443:443
    - 80:80
  networks:
    - mt_network
  volumes:
    - ./html:/var/www/html
  sysctls:
    - net.ipv4.ip_unprivileged_port_start=0

mt_angular:
  container_name: mt_angular
  build:
    context: ./web
    dockerfile: dev.dockerfile
  ports:
    - 3000:3000
  networks:
    - mt_network
  command: ./dev.entrypoint.sh

networks:
  mt_network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.220.0/28
I also have this in my php.ini file:
[xdebug]
error_reporting = E_ALL
display_startup_errors = On
display_errors = On
xdebug.remote_enable=1
mt_symfony dockerfile:
FROM php:5.6.37-apache
EXPOSE 443 80
RUN pecl install xdebug-2.5.5
RUN docker-php-ext-enable xdebug
COPY ./docker/php5.6-fpm.conf /etc/apache2/conf-available
RUN a2enmod headers \
    && a2enmod ssl \
    && a2enmod rewrite \
    && a2enconf php5.6-fpm.conf \
    && a2ensite httpd.conf
In PhpStorm:
"Build, Execution, Deployment -> Docker" shows "Connection successful"
"Languages & Frameworks -> PHP -> CLI Interpreter" connects to docker mt_symfony container and detects installed Xdebug
"Languages & Frameworks -> PHP -> Xdebug -> Validate" I'm able to validate Xdebug on port 80, but it does not work at all on port 443
I'm setting up a docker stack with PHP, PostgreSQL, Nginx, Laravel-Echo-Server and Redis and having some issues with Redis and the echo-server connecting. I'm using a docker-compose.yml:
version: '3'
networks:
  app-tier:
    driver: bridge
services:
  app:
    build:
      context: .
      dockerfile: .docker/php/Dockerfile
    networks:
      - app-tier
    ports:
      - 9002:9000
    volumes:
      - .:/srv/app
  nginx:
    build:
      context: .
      dockerfile: .docker/nginx/Dockerfile
    networks:
      - app-tier
    ports:
      - 8080:80
    volumes:
      - ./public:/srv/app/public
  db:
    build:
      context: .docker/postgres/
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-tier
    ports:
      - 5433:5432
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: secret
    volumes:
      - .docker/postgres/data:/var/lib/postgresql/data
  laravel-echo-server:
    build:
      context: .docker/laravel-echo-server
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-tier
    ports:
      - 6001:6001
    links:
      - 'redis:redis'
  redis:
    build:
      context: .docker/redis
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - app-tier
    volumes:
      - .docker/redis/data:/var/lib/redis/data
My echo-server Dockerfile:
FROM node:10-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN apk add --update \
    python \
    python-dev \
    py-pip \
    build-base
RUN npm install
COPY laravel-echo-server.json /usr/src/app/laravel-echo-server.json
EXPOSE 3000
CMD [ "npm", "start" ]
Redis Dockerfile:
FROM redis:latest
LABEL maintainer="maintainer"
COPY . /usr/src/app
COPY redis.conf /usr/src/app/redis/redis.conf
VOLUME /data
EXPOSE 6379
CMD ["redis-server", "/usr/src/app/redis/redis.conf"]
My laravel-echo-server.json:
{
  "authHost": "localhost",
  "authEndpoint": "/broadcasting/auth",
  "clients": [],
  "database": "redis",
  "databaseConfig": {
    "redis": {
      "port": "6379",
      "host": "redis"
    }
  },
  "devMode": true,
  "host": null,
  "port": "6001",
  "protocol": "http",
  "socketio": {},
  "sslCertPath": "",
  "sslKeyPath": ""
}
The redis.conf is the default right now. The error I am getting from the laravel-echo-server is:
[ioredis] Unhandled error event: Error: connect ECONNREFUSED 172.20.0.2:6379 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1163:14)
Redis is up and running fine, using the configuration file and ready to accept connections. docker ps shows both redis and echo-server are up, so they're just not connecting, as the error indicates.

If I change the final line in the Redis Dockerfile to just CMD ["redis-server"], it appears to connect and automatically uses the default config (which is the same as the one I have in my .docker directory), but then I get this error:

Possible SECURITY ATTACK detected. It looks like somebody is sending POST or Host: commands to Redis. This is likely due to an attacker attempting to use Cross Protocol Scripting to compromise your Redis instance. Connection aborted.
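One thing worth checking here (an assumption, since the redis.conf in question isn't shown): the stock redis.conf ships with `bind 127.0.0.1`, which refuses connections from other containers, while a bare `redis-server` started without a config file listens on all interfaces. That difference would explain exactly this behavior. A hedged fragment for container use, suitable only on a trusted private network:

```
# Listen on all interfaces so other containers on the bridge network can connect.
bind 0.0.0.0
# protected-mode blocks non-loopback clients when no password is set.
protected-mode no
```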
I'm trying to run nginx through docker-compose to load-balance a Node.js back-end. I tried two ways:

1. Writing all the configuration in docker-compose.yml while passing nginx.conf as a volume:
version: '3'
services:
  # nginx
  nginx:
    image: nginx
    container_name: books-nginx
    ports:
      - 80:80
    links:
      - node1:node1
      - node2:node2
      - node3:node3
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
  # Back-end
  node1:
    image: node
    container_name: books-api1
    ports:
      - 3000
    volumes:
      - ./books-backend:/app
    links:
      - mongodb
    environment:
      - admin=admin-password
      - secret=strong-password
    command: bash -c "cd /app && npm install && npm start"
  node2:
    ...
  node3:
    ...
  # MongoDB
  mongodb:
    image: mongo
    container_name: books-mongo
    ports:
      - 27017
    volumes:
      - ./db/mongo:/data/db
In this case nginx runs perfectly.

2. Writing the configuration in a Dockerfile inside the 'nginx' directory and then building it from docker-compose:
nginx/Dockerfile:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
docker-compose:
# nginx
nginx:
  build: ./nginx
  container_name: books-nginx
  ports:
    - 80:80
  links:
    - node1:node1
    - node2:node2
    - node3:node3
But in this case, whenever I send a request to the back-end (e.g. for admin authentication), I get the following error:
books-nginx | 2017/07/31 22:19:33 [error] 6#6: *1 open() "/usr/share/nginx/html/api/admin/authenticate" failed (2: No such file or directory), client: 172.18.0.1, server: localhost, request: "POST /api/admin/authenticate HTTP/1.1", host: "localhost"
books-nginx | 172.18.0.1 - - [31/Jul/2017:22:19:33 +0000] "POST /api/admin/authenticate HTTP/1.1" 404 571 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.96 Safari/537.36" "-"
Any ideas on how to make the second way (with the COPY command) work like the first?
Update
I tried creating the following Dockerfile:
FROM nginx
MAINTAINER Tamer Mohamed Bahgat
RUN rm -v /etc/nginx/nginx.conf
RUN rm /etc/nginx/conf.d/default.conf
ADD nginx/nginx.conf /etc/nginx/
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
CMD service nginx start
and then building the image separately with docker build -t test-nginx . and referencing it in docker-compose.yml via image: test-nginx. That worked and gave no errors.

But using build: . (where . is the location of the same Dockerfile) still gives me the same error.
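One thing worth ruling out (an assumption, since the full compose file isn't shown): docker-compose reuses a previously built image for a build: service, so Dockerfile or config changes may not be picked up until the image is explicitly rebuilt:

```shell
# Force a rebuild of the service images, then recreate the containers
docker-compose build --no-cache
docker-compose up -d
```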