I'm trying to run Nginx through docker-compose to load-balance a Node.js back-end. I tried two ways:
Writing all configuration in docker-compose.yml and passing an nginx.conf as a volume, as follows:
version: '3'
services:
  # nginx
  nginx:
    image: nginx
    container_name: books-nginx
    ports:
      - 80:80
    links:
      - node1:node1
      - node2:node2
      - node3:node3
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
  # Back-end
  node1:
    image: node
    container_name: books-api1
    ports:
      - 3000
    volumes:
      - ./books-backend:/app
    links:
      - mongodb
    environment:
      - admin=admin-password
      - secret=strong-password
    command: bash -c "cd /app && npm install && npm start"
  node2:
    .
    .
    .
  node3:
    .
    .
    .
  # MongoDB
  mongodb:
    image: mongo
    container_name: books-mongo
    ports:
      - 27017
    volumes:
      - ./db/mongo:/data/db
In this case nginx runs perfectly.
Writing the configuration in a Dockerfile inside an 'nginx' directory and then building it from docker-compose, as follows:
nginx/Dockerfile:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
docker-compose:
# nginx
nginx:
  build: ./nginx
  container_name: books-nginx
  ports:
    - 80:80
  links:
    - node1:node1
    - node2:node2
    - node3:node3
But in this case, whenever I send a request (e.g. for admin authentication) to the back-end, I get the following error:
books-nginx | 2017/07/31 22:19:33 [error] 6#6: *1 open() "/usr/share/nginx/html/api/admin/authenticate" failed (2: No such file or directory), client: 172.18.0.1, server: localhost, request: "POST /api/admin/authenticate HTTP/1.1", host: "localhost"
books-nginx | 172.18.0.1 - - [31/Jul/2017:22:19:33 +0000] "POST /api/admin/authenticate HTTP/1.1" 404 571 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.96 Safari/537.36" "-"
Any ideas on how to make the second way (with the COPY command) work like the first one?
Update
I tried creating the following Dockerfile:
FROM nginx
MAINTAINER Tamer Mohamed Bahgat
RUN rm -v /etc/nginx/nginx.conf
RUN rm /etc/nginx/conf.d/default.conf
ADD nginx/nginx.conf /etc/nginx/
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
CMD service nginx start
and then built the image separately using docker build -t test-nginx . and used it in docker-compose.yml via image: test-nginx. That worked and gave no errors.
But using build: . (where . is the location of the same Dockerfile) still gives me the same error.
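For context, the nginx.conf referenced above is never shown in the question. A minimal load-balancing config for the three Node containers might look like this (a sketch only; the upstream names are assumed to match the service names in the compose file):

```nginx
# Hypothetical nginx.conf: round-robin load balancing over the Node services
events {}

http {
    upstream backend {
        server node1:3000;
        server node2:3000;
        server node3:3000;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://backend;
        }
    }
}
```

Without an upstream block like this, requests fall through to nginx's default static root (/usr/share/nginx/html), which matches the 404 in the log above.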
Related:
https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/installation/docker.html#creating-a-strapi-project
Dockerize Strapi with Docker and docker-compose (resolves a different error: "strapi failed to load resource: the server responded with a status of 404 ()")
You can use my dockerized project.
Dockerfile:
FROM node:16.15-alpine3.14
RUN mkdir -p /opt/app
WORKDIR /opt/app
RUN adduser -S app
COPY app/ .
RUN npm install
RUN npm install --save @strapi/strapi
RUN chown -R app /opt/app
USER app
RUN npm run build
EXPOSE 1337
CMD [ "npm", "run", "start" ]
If you don't use RUN npm run build, your project will work on port 80 (http://localhost), but the Strapi admin templates will still call http://localhost:1337. Since you are running on http://localhost, there is no stable http://localhost:1337 URL, and Strapi throws exceptions like:
Refused to connect to 'http://localhost:1337/admin/init' because it violates the document's Content Security Policy.
Refused to connect to 'http://localhost:1337/admin/init' because it violates the following Content Security Policy directive: "connect-src 'self' https:".
docker-compose.yml:
version: "3.9"
services:
  # Strapi service (app service)
  strapi_app:
    build:
      context: .
    depends_on:
      - strapi_db
    ports:
      - "80:1337"
    environment:
      - DATABASE_CLIENT=postgres
      - DATABASE_HOST=strapi_db
      - DATABASE_PORT=5432
      - DATABASE_NAME=strapi_db
      - DATABASE_USERNAME=strapi_db
      - DATABASE_PASSWORD=strapi_db
      - DATABASE_SSL=false
    volumes:
      - /var/scrapi/public/uploads:/opt/app/public/uploads
      - /var/scrapi/public:/opt/app/public
    networks:
      - app-network
  # PostgreSQL service
  strapi_db:
    image: postgres
    container_name: strapi_db
    environment:
      POSTGRES_USER: strapi_db
      POSTGRES_PASSWORD: strapi_db
      POSTGRES_DB: strapi_db
    ports:
      - '5432:5432'
    volumes:
      - dbdata:/var/lib/postgresql/data
    networks:
      - app-network
# Docker networks
networks:
  app-network:
    driver: bridge
# Volumes
volumes:
  dbdata:
    driver: local
In the docker-compose file I used Postgres as the database; you can use any other database and set its config in the app service's environment variables, like:
environment:
  - DATABASE_CLIENT=postgres
  - DATABASE_HOST=strapi_db
  - DATABASE_PORT=5432
  - DATABASE_NAME=strapi_db
  - DATABASE_USERNAME=strapi_db
  - DATABASE_PASSWORD=strapi_db
  - DATABASE_SSL=false
To use environment variables in the project you must read them via process.env, which exposes the operating system's environment variables.
Change the app/config/database.js file to:
module.exports = ({ env }) => ({
  connection: {
    client: process.env.DATABASE_CLIENT,
    connection: {
      host: process.env.DATABASE_HOST,
      port: parseInt(process.env.DATABASE_PORT),
      database: process.env.DATABASE_NAME,
      user: process.env.DATABASE_USERNAME,
      password: process.env.DATABASE_PASSWORD,
      // ssl: Boolean(process.env.DATABASE_SSL),
      ssl: false,
    },
  },
});
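As an aside, Strapi's config files receive an `env` helper (the `({ env })` argument above) that wraps `process.env` with defaults and type casting. An equivalent version of the same file using it could look like this (a sketch of the same settings, not the author's code; the defaults are assumptions):

```javascript
// app/config/database.js, using Strapi's env helper instead of raw process.env
module.exports = ({ env }) => ({
  connection: {
    client: env('DATABASE_CLIENT', 'postgres'),
    connection: {
      host: env('DATABASE_HOST'),
      port: env.int('DATABASE_PORT', 5432),      // cast to integer
      database: env('DATABASE_NAME'),
      user: env('DATABASE_USERNAME'),
      password: env('DATABASE_PASSWORD'),
      ssl: env.bool('DATABASE_SSL', false),      // cast to boolean
    },
  },
});
```

The helper avoids the manual parseInt/Boolean conversions that raw process.env access requires.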
Dockerize Strapi with Docker-compose
FROM node:16.14.2
# Set up the working directory that will be used to copy files/directories below :
WORKDIR /app
# Copy package.json to root directory inside Docker container of Strapi app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
CMD ["npm", "start"]
#docker-compose file
version: '3.7'
services:
  strapi:
    container_name: strapi
    restart: unless-stopped
    build:
      context: ./strapi
      dockerfile: Dockerfile
    volumes:
      - strapi:/app
      - /app/node_modules
    ports:
      - '1337:1337'
volumes:
  strapi:
    driver: local
I have problems running multiple proxies and connecting an nginx reverse proxy to them.
The image shows what I want to achieve.
What works is connecting to a proxy directly:
import requests

# proxy 1
print(requests.get("https://api.ipify.org?format=json", proxies={
    "http": "127.0.0.1:9000",
    "https": "127.0.0.1:9000"
}).content)

# proxy 2
print(requests.get("https://api.ipify.org?format=json", proxies={
    "http": "127.0.0.1:9001",
    "https": "127.0.0.1:9001"
}).content)
But it does not work when I use the nginx reverse proxy:
# nginx
print(requests.get("https://api.ipify.org?format=json", proxies={
    "http": "127.0.0.1:8080",
    "https": "127.0.0.1:8080"
}).content)
Response:
requests.exceptions.ProxyError: HTTPSConnectionPool(host='api.ipify.org', port=443): Max retries exceeded with url: /?format=json (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 400 Bad Request')))
That's my Docker Compose file:
docker-compose.yml
version: "2.4"
services:
  proxy:
    image: qmcgaw/private-internet-access
    cap_add:
      - NET_ADMIN
    restart: always
    ports:
      - 127.0.0.1:9000-9001:8888/tcp
    environment:
      - VPNSP=Surfshark
      - OPENVPN_USER=${user}
      - PASSWORD=${pass}
      - HTTPPROXY=ON
    scale: 2
  nginx:
    image: nginx
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8080:80"
and my nginx configuration
default.conf
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://proxy:8888;
    }
}
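A likely reason this fails: proxy_pass inside an http location block forwards ordinary HTTP requests, but an HTTPS client sends a CONNECT request to its proxy, which nginx's http module rejects (hence the "Tunnel connection failed: 400 Bad Request" above). A TCP-level pass-through with nginx's stream module would forward CONNECT untouched. A sketch, assuming the stock nginx image (whose build includes the stream module):

```nginx
# Must live at the top level of nginx.conf, alongside http {},
# not in /etc/nginx/conf.d/default.conf (which is included inside http {}).
stream {
    upstream proxies {
        server proxy:8888;
    }

    server {
        listen 80;
        proxy_pass proxies;   # raw TCP forwarding, CONNECT passes through
    }
}
```

Because this is a stream block, the mounted default.conf location shown above would not work; the config would need to be mounted over the main /etc/nginx/nginx.conf instead.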
I'd appreciate any advice you can give me.
Actually that's not what I wanted, but it works with Twisted instead of nginx. Maybe someone will find a better solution.
docker-compose.yml
version: "2.4"
services:
  proxy:
    image: qmcgaw/private-internet-access
    cap_add:
      - NET_ADMIN
    restart: always
    environment:
      - VPNSP=Surfshark
      - OPENVPN_USER=${user}
      - PASSWORD=${pass}
      - HTTPPROXY=ON
    scale: 2
  twisted:
    container_name: twisted
    build: .
    restart: always
    ports:
      - 127.0.0.1:8080:8080/tcp
    healthcheck:
      test: ["CMD-SHELL", "curl https://google.de --proxy 127.0.0.1:8080"]
      interval: 20s
      timeout: 10s
      retries: 5
Dockerfile
FROM stibbons31/alpine-s6-python3:latest
ENV SRC_IP="0.0.0.0"
ENV SRC_PORT=8080
ENV DST_IP="proxy"
ENV DST_PORT=8888
RUN apk add --no-cache g++ python3-dev
RUN pip3 install --no-cache --upgrade pip
RUN pip3 install service_identity twisted
WORKDIR /app
ADD ./app /app
CMD [ "twistd", "-y", "main.py", "-n"]
main.py
import os

from twisted.application import internet, service
from twisted.protocols.portforward import ProxyFactory

SRC_PORT = int(os.environ["SRC_PORT"])
DST_PORT = int(os.environ["DST_PORT"])

application = service.Application("Proxy")
ps = internet.TCPServer(SRC_PORT,
                        ProxyFactory(os.environ["DST_IP"], DST_PORT),
                        50,
                        os.environ["SRC_IP"])
ps.setServiceParent(application)
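The Twisted service above is a plain TCP port forwarder: bytes from a client are piped unchanged to the destination and back, which is why CONNECT tunneling works through it. For illustration, the same idea using only the standard library (a sketch, not the code used in the container; the function names are my own):

```python
import socket
import threading

def forward(src, dst):
    """Copy bytes from one socket to the other until EOF, then close the sink."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def serve_once(listen_port, dst_host, dst_port):
    """Accept a single client on listen_port and pipe it to the destination."""
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", listen_port))
    server.listen(1)
    client, _ = server.accept()
    upstream = socket.create_connection((dst_host, dst_port))
    # One thread per direction: upstream->client in the background,
    # client->upstream in the current thread.
    threading.Thread(target=forward, args=(upstream, client), daemon=True).start()
    forward(client, upstream)
```

Twisted's ProxyFactory does essentially this, but event-driven and for many concurrent clients.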
I am new to Docker.
Here is my configuration.
Folder structure:
Test:
- docker-compose.yml
- Dockerfile
- www
  - index.html
Docker YML
version: "3"
services:
  www:
    build: .
    ports:
      - "8001:80"
    volumes:
      - ./www:/var/www/html/
    links:
      - db
    networks:
      - default
  db:
    image: mysql:8.0.16
    command: ['--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci', '--default-authentication-plugin=mysql_native_password']
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: myDb
      MYSQL_USER: user
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test
    volumes:
      - ./dump:/docker-entrypoint-initdb.d
      - persistent:/var/lib/mysql
    networks:
      - default
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:4.8
    links:
      - db:db
    ports:
      - 8000:80
    environment:
      MYSQL_USER: user
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test
volumes:
  persistent:
Dockerfile
FROM php:7.2.6-apache
RUN docker-php-ext-install mysqli pdo pdo_mysql gd curl
RUN a2enmod rewrite
RUN chmod -R 775 /var/www/html
The phpMyAdmin dashboard works correctly, but when I enter the web URL it shows a 403 Forbidden error.
When I check the log it shows an error like this:
[Mon Sep 02 12:00:44.290707 2019] [autoindex:error] [pid 18] [client 192.168.99.1:52312] AH01276: Cannot serve directory /var/www/html/: No matching DirectoryIndex (index.php,index.html) found, and server-generated directory index forbidden by Options directive
192.168.99.1 - - [02/Sep/2019:12:00:44 +0000] "GET / HTTP/1.1" 403 508 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36"
and my "/var/www/html" directory was empty. How can I fix it?
Update
I created an index.php file using bash and it worked, but I can't locate the index.php file on my file system.
Please help. If you need any additional info, feel free to ask :).
Thanks
Finally I figured out the issue: I am using Docker Toolbox, so it only mounts the C:\Users directory, but my project folder is on the D drive. So I had to mount my D:\projects directory as a VM shared folder. I followed the steps below:
1. In VirtualBox, under 'Settings' -> 'Shared Folders', added 'projects' and pointed it to the location I want to mount. In my case this is 'D:\projects' (Auto-mount and Make Permanent enabled).
2. Start the Docker Quickstart Terminal.
3. Type 'docker-machine ssh default' (the VirtualBox VM that Docker uses is called 'default').
4. Go to the root of the VM filesystem: 'cd /'.
5. Switch to the root user: 'sudo su'.
6. Create the directory you want to use as a mount point, which in my case has the same name as the shared folder in VirtualBox: 'mkdir projects'.
7. Mount the VirtualBox shared folder: 'mount -t vboxsf -o uid=1000,gid=50 projects /projects' (the first 'projects' is the VirtualBox shared folder name, the second '/projects' is the directory I just created and want to use as the mount point).
8. Now I can add a volume to my compose file like this: '- /projects/test/www/build/:/var/www/html/' (the left side is the /projects mount in my VM, the right side is the directory to mount in my Docker container).
9. Run 'docker-compose up' to start using the mount (to be clear: run this command via the Docker Quickstart Terminal outside of your SSH session, on your local file system where your docker-compose.yml file is located).
And I changed the docker-compose.yml like this:
version: "3"
services:
  www:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8001:80"
    volumes:
      - /projects/test/www:/var/www/html/
    links:
      - db
    networks:
      - default
  db:
    image: mysql:8.0.16
    command: ['--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci', '--default-authentication-plugin=mysql_native_password']
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: myDb
      MYSQL_USER: user
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test
    volumes:
      - ./dump:/docker-entrypoint-initdb.d
      - persistent:/var/lib/mysql
    networks:
      - default
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:4.8
    links:
      - db:db
    ports:
      - 8000:80
    environment:
      MYSQL_USER: user
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test
volumes:
  persistent:
I also updated the Oracle VM.
I found this solution here: https://github.com/moby/moby/issues/22430#issuecomment-215974613
Thanks bro :)
You need to modify your Dockerfile:
FROM php:7.2.6-apache
RUN docker-php-ext-install pdo_mysql
RUN a2enmod rewrite
COPY www/ /var/www/html
RUN chmod -R 775 /var/www/html
This will copy your www dir to the /var/www/html dir inside the container and let your web service run.
I built my infrastructure with Docker and used docker-compose to tie the containers together.
Below are the images that I used:
nginx:latest
mongo:latest
python:3.6.5
To deploy the Flask web server, I used uWSGI (installed in python:3.6.5).
[docker-compose.yml]
version: '3.7'
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx/dockerfile
    container_name: nginx
    hostname: nginx-dev
    ports:
      - '80:80'
    networks:
      - backend
    links:
      - web_project
  mongodb:
    build:
      context: .
      dockerfile: docker/mongodb/dockerfile
    container_name: mongodb
    hostname: mongodb-dev
    ports:
      - '27017:27017'
    networks:
      - backend
  web_project:
    build:
      context: .
      dockerfile: docker/web/dockerfile
    container_name: web_project
    hostname: web_project_dev
    ports:
      - '5000:5000'
    networks:
      - backend
    tty: true
    depends_on:
      - mongodb
    links:
      - mongodb
networks:
  backend:
    driver: 'bridge'
[/docker/nginx/dockerfile]
FROM nginx:latest
COPY . ./home
WORKDIR home
RUN rm /etc/nginx/conf.d/default.conf
COPY ./config/nginx.conf /etc/nginx/conf.d/default.conf
[/config/nginx.conf]
upstream flask {
    server web_project:5000;
}

server {
    listen 80;

    location / {
        uwsgi_pass flask;
        include /home/config/uwsgi_params;
    }
}
[/docker/web/dockerfile]
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
RUN apt-get update && apt-get install -y uwsgi-plugin-python
RUN uwsgi --ini config/uwsgi.ini
[uwsgi.ini]
[uwsgi]
chdir = /home/app
socket = :5000
chmod-socket = 666
logto = /home/web.log
master = true
process = 2
daemonize = /home/uwsgi.log
I defined socket = :5000.
After build/up, accessing the website throws a 502 Bad Gateway error to the console:
nginx | 2018/11/12 06:28:55 [error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.27.0.1, server: , request: "GET / HTTP/1.1", upstream: "uwsgi://172.27.0.3:5000", host: "localhost"
I searched Google for a long time, but I can't find the solution.
Is there any solution here?
Thanks.
You must expose port 5000 in the Python app:
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
RUN apt-get update && apt-get install -y uwsgi-plugin-python
EXPOSE 5000 # <---- add this
RUN uwsgi --ini config/uwsgi.ini
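One caveat worth adding (an observation about Docker semantics, not part of the original answer): RUN uwsgi --ini config/uwsgi.ini executes while the image is being built, not when the container starts, so uwsgi may not be running at all in the final container. A sketch of a Dockerfile that starts uwsgi at container run time instead (assuming the uwsgi binary is on PATH via requirements.txt):

```dockerfile
FROM python:3.6.5
COPY . ./home
WORKDIR home
RUN pip install -r app/requirements.txt
RUN apt-get update && apt-get install -y uwsgi-plugin-python
EXPOSE 5000
# Start uwsgi when the container runs, not at build time
CMD ["uwsgi", "--ini", "config/uwsgi.ini"]
```

For this to work the daemonize line would also need to be removed from uwsgi.ini, so that uwsgi stays in the foreground as the container's main process.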
I have a docker setup with the following
rails api backend
mysql db
redis db
node/react frontend (webpack)
nginx serving the frontend
(the rails backend is currently served through the built-in Puma server - I think I will move it to the same nginx server running the node app)
My problem is that the frontend requests resources from the backend, but this does not work.
I have set up a proxy on nginx as follows:
#nginx.conf
server {
    listen 8080;

    # Always serve index.html for any request
    location / {
        # Set path
        root /wwwroot/;
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:3000;
    }
}
But when I initiate an API call I get the following in the nginx log:
nginx-server | 2017/05/13 20:56:08 [error] 5#5: *19 connect() failed (111: Connection refused) while connecting to upstream, client: 172.21.0.1, server: , request: "POST /api/authenticate HTTP/1.1", upstream: "http://127.0.0.1:3000/api/authenticate", host: "localhost:8080", referrer: "http://localhost:8080/"
nginx-server | 172.21.0.1 - - [13/May/2017:20:56:08 +0000] "POST /api/authenticate HTTP/1.1" 502 575 "http://localhost:8080/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" "-"
And I do not see anything hitting the Puma server.
I am not sure where I should be looking. Is this a problem with my docker-compose file, or an nginx issue (or both)?
I have included my docker-compose.yml below:
version: '2'
services:
  nginx:
    build:
      context: .
      dockerfile: docker.nginx
    image: pt-nginx
    container_name: nginx-server
    ports:
      - "8080:8080"
    volumes:
      - wwwroot:/wwwroot
  webpack:
    build:
      context: ./frontend
      dockerfile: docker.webpack
    image: pt-webpack
    container_name: react-frontend
    ports:
      - "35729:35729"
    volumes:
      - ./frontend:/app
      - /app/node_modules
      - wwwroot:/wwwroot
  db:
    build:
      context: ./backend
      dockerfile: docker.mysql
    image: pt-mysql
    container_name: mysql-server
    env_file: ./backend/.env
    ports:
      - "3306:3306"
    volumes:
      - ./sql/data/:/var/lib/mysql
  redis:
    build:
      context: ./backend
      dockerfile: docker.redis
    image: pt-redis
    container_name: redis-server
  app:
    build:
      context: ./backend
      dockerfile: docker.rails
    image: pt-rails
    container_name: rails-server
    command: >
      sh -c '
      rake resque:start_workers;
      bundle exec rails s -p 3000 -b 0.0.0.0;
      '
    env_file: ./backend/.env
    volumes:
      - ./backend:/usr/src/app
      - /Users/mh/Pictures/ROR_PT/phototank:/Users/martinhinge/Pictures/ROR_PT/phototank
    ports:
      - "3000:3000"
    expose:
      - "3000"
    depends_on:
      - db
      - redis
volumes:
  wwwroot:
    driver: local
The problem is that your nginx service is requesting an upstream at localhost (127.0.0.1), but the application is in another container (with another IP). You can reference the Rails application by its DNS name, app, since both services are on the default network. The upstream in the nginx configuration would then be proxy_pass http://app:3000; instead.
Read more about the networking at https://docs.docker.com/compose/networking/, specifically:
Within the web container, your connection string to db would look like postgres://db:5432, and from the host machine, the connection string would look like postgres://{DOCKER_IP}:8001.
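Applied to the nginx.conf from the question, the corrected proxy block would be (a sketch; the service name app is taken from the compose file above):

```nginx
location /api/ {
    proxy_pass http://app:3000;
}
```

Docker's embedded DNS resolves the service name app to the container's IP on the shared network, so no port mapping to the host is needed for container-to-container traffic.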