[emerg]: host not found in upstream - docker

I'm a little confused. I'm building a Flask application and deploying it in a Docker image with nginx as the web server. After I run docker-compose up the containers reach a running state, but the command's output includes the following: [emerg] 1#1: host not found in upstream "flask:8000" in /etc/nginx/nginx.conf:12. The odd thing is that if I access localhost I can see the application working, so I don't know whether this is something I need to address. Additionally, I'm trying to set up redirects from HTTP to HTTPS, and that doesn't seem to work. Below are my docker-compose file, the nginx.conf file, and the Dockerfile I have for the nginx container. Any advice would be helpful.
Docker Compose
version: "3"
services:
  app:
    build: ./app
    container_name: containerize_app_1
    command: gunicorn --chdir app/src --bind 0.0.0.0:8000 --workers 2 "server:app"
    volumes:
      - ./:/var/www
    expose:
      - 8000
    networks:
      - MyNetwork
  nginx:
    build: ./nginx
    container_name: containerize_nginx_1
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
    depends_on:
      - app
    networks:
      MyNetwork:
        aliases:
          - flask-app
networks:
  MyNetwork:
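One thing worth noting here: the error complains about upstream "flask:8000", yet nothing in this compose file is named flask; the flask-app alias is attached to the nginx service itself, not to the app service, so nginx has no way to resolve a host called flask. A minimal sketch of moving an alias onto the app service (assuming the upstream was meant to point at the gunicorn container) could look like this:

app:
  build: ./app
  networks:
    MyNetwork:
      aliases:
        - flask

With that alias in place, server flask:8000; would resolve inside the nginx container; alternatively, the upstream can simply use server app:8000;, since compose service names already resolve on a shared network.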
nginx.conf
user nginx nginx;
worker_processes 2;
error_log /var/log/nginx/error.log;
worker_rlimit_nofile 8192;

events {
    worker_connections 4096;
}

http {
    upstream flask-app {
        server app:8000;
    }

    server {
        listen 80;
        server_name localhost;
        return 301 https://$host$request_uri;

        location / {
            proxy_pass http://flask-app;
            proxy_set_header Host "localhost";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
        }
    }
}
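On the HTTP-to-HTTPS redirect: as written, the return 301 in the port-80 server block takes effect before the location block is ever consulted, so every plain-HTTP request is redirected and proxy_pass is never reached; meanwhile the compose file only publishes port 80, so nothing answers the redirected HTTPS traffic. A hedged sketch of the usual two-server layout (the certificate paths are placeholders, and 443 would also have to be published in docker-compose):

http {
    upstream flask-app {
        server app:8000;
    }

    # Plain HTTP: redirect everything to HTTPS.
    server {
        listen 80;
        server_name localhost;
        return 301 https://$host$request_uri;
    }

    # HTTPS: terminate TLS here and proxy to gunicorn.
    server {
        listen 443 ssl;
        server_name localhost;
        ssl_certificate     /etc/nginx/certs/server.crt;   # placeholder path
        ssl_certificate_key /etc/nginx/certs/server.key;   # placeholder path

        location / {
            proxy_pass http://flask-app;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
        }
    }
}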
nginx dockerfile
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/nginx.conf
docker-compose up
docker-compose up
containerize_app_1 is up-to-date
containerize_nginx_1 is up-to-date
Attaching to containerize_app_1, containerize_nginx_1
app_1 | [2021-12-14 21:48:01 +0000] [1] [INFO] Starting gunicorn 19.9.0
app_1 | [2021-12-14 21:48:01 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
app_1 | [2021-12-14 21:48:01 +0000] [1] [INFO] Using worker: sync
app_1 | [2021-12-14 21:48:01 +0000] [12] [INFO] Booting worker with pid: 12
app_1 | [2021-12-14 21:48:01 +0000] [13] [INFO] Booting worker with pid: 13
app_1 | [2021-12-14 22:21:03 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:12)
app_1 | [2021-12-14 22:21:03 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:13)
app_1 | [2021-12-14 22:21:03 +0000] [13] [INFO] Worker exiting (pid: 13)
app_1 | [2021-12-14 22:21:03 +0000] [12] [INFO] Worker exiting (pid: 12)
app_1 | [2021-12-14 22:21:03 +0000] [14] [INFO] Booting worker with pid: 14
app_1 | [2021-12-14 22:21:04 +0000] [15] [INFO] Booting worker with pid: 15
app_1 | [2021-12-14 22:27:44 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:14)
app_1 | [2021-12-14 22:27:44 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:15)
app_1 | [2021-12-14 22:27:44 +0000] [14] [INFO] Worker exiting (pid: 14)
app_1 | [2021-12-14 22:27:44 +0000] [15] [INFO] Worker exiting (pid: 15)
app_1 | [2021-12-14 22:27:44 +0000] [16] [INFO] Booting worker with pid: 16
app_1 | [2021-12-14 22:27:45 +0000] [17] [INFO] Booting worker with pid: 17
app_1 | [2021-12-14 22:32:53 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:16)
app_1 | [2021-12-14 22:32:53 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:17)
app_1 | [2021-12-14 22:32:53 +0000] [16] [INFO] Worker exiting (pid: 16)
app_1 | [2021-12-14 22:32:53 +0000] [17] [INFO] Worker exiting (pid: 17)
app_1 | [2021-12-14 22:32:53 +0000] [18] [INFO] Booting worker with pid: 18
app_1 | [2021-12-14 22:32:53 +0000] [19] [INFO] Booting worker with pid: 19
app_1 | [2021-12-14 22:42:31 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:18)
app_1 | [2021-12-14 22:42:31 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:19)
app_1 | [2021-12-14 22:42:31 +0000] [19] [INFO] Worker exiting (pid: 19)
app_1 | [2021-12-14 22:42:31 +0000] [18] [INFO] Worker exiting (pid: 18)
app_1 | [2021-12-14 22:42:31 +0000] [20] [INFO] Booting worker with pid: 20
app_1 | [2021-12-14 22:42:31 +0000] [21] [INFO] Booting worker with pid: 21
app_1 | [2021-12-14 22:48:26 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:20)
app_1 | [2021-12-14 22:48:26 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:21)
app_1 | [2021-12-14 22:48:26 +0000] [21] [INFO] Worker exiting (pid: 21)
app_1 | [2021-12-14 22:48:26 +0000] [20] [INFO] Worker exiting (pid: 20)
app_1 | [2021-12-14 22:48:26 +0000] [22] [INFO] Booting worker with pid: 22
app_1 | [2021-12-14 22:48:26 +0000] [23] [INFO] Booting worker with pid: 23
app_1 | [2021-12-15 00:11:42 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:22)
app_1 | [2021-12-15 00:11:42 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:23)
app_1 | [2021-12-15 00:11:42 +0000] [23] [INFO] Worker exiting (pid: 23)
app_1 | [2021-12-15 00:11:42 +0000] [22] [INFO] Worker exiting (pid: 22)
app_1 | [2021-12-15 00:11:42 +0000] [24] [INFO] Booting worker with pid: 24
app_1 | [2021-12-15 00:11:42 +0000] [25] [INFO] Booting worker with pid: 25
app_1 | [2021-12-15 01:55:27 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:24)
app_1 | [2021-12-15 01:55:27 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:25)
app_1 | [2021-12-15 01:55:27 +0000] [24] [INFO] Worker exiting (pid: 24)
app_1 | [2021-12-15 01:55:27 +0000] [25] [INFO] Worker exiting (pid: 25)
app_1 | [2021-12-15 01:55:27 +0000] [26] [INFO] Booting worker with pid: 26
app_1 | [2021-12-15 01:55:27 +0000] [27] [INFO] Booting worker with pid: 27
app_1 | [2021-12-15 02:34:31 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:26)
app_1 | [2021-12-15 02:34:31 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:27)
app_1 | [2021-12-15 02:34:31 +0000] [27] [INFO] Worker exiting (pid: 27)
app_1 | [2021-12-15 02:34:31 +0000] [26] [INFO] Worker exiting (pid: 26)
app_1 | [2021-12-15 02:34:31 +0000] [28] [INFO] Booting worker with pid: 28
app_1 | [2021-12-15 02:34:31 +0000] [29] [INFO] Booting worker with pid: 29
nginx_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx_1 | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx_1 | 2021/12/14 23:37:02 [emerg] 1#1: host not found in upstream "flask:8000" in /etc/nginx/nginx.conf:12
nginx_1 | nginx: [emerg] host not found in upstream "flask:8000" in /etc/nginx/nginx.conf:12
nginx_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx_1 | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx_1 | 172.18.0.1 - - [14/Dec/2021:23:48:17 +0000] "GET / HTTP/1.1" 200 128 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.0 Safari/605.1.15"
nginx_1 | 172.18.0.1 - - [15/Dec/2021:02:22:03 +0000] "GET / HTTP/1.1" 200 128 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.0 Safari/605.1.15"

Related

Cannot run Airflow on Docker using MacOS - Even with more memory allocated

I am trying to run Airflow on Docker (using macOS 12.6.2, with 8 GB of memory). I have downloaded the docker-compose.yaml file here (apache/airflow:2.4.2) and have set my .env file to this:
AIRFLOW_IMAGE_NAME=apache/airflow:2.4.2
AIRFLOW_UID=50000
When I run docker-compose up -d and wait, the webserver containers never become healthy.
As suggested by numerous people for macOS, I have upped my Docker memory.
I have tried numerous combinations of Docker memory (all 8 GB, 7 GB, 6 GB, 5 GB, and 4 GB) as well as different combinations of CPU, Swap, and Virtual Disk Limit (I have not tried going higher than 160 GB on the Virtual Disk Limit). I have also seen that it is a bad idea to use all 4 CPUs, so I have not tried that.
Here is the log I get for the webserver container:
2023-01-12 03:26:30 [2023-01-12 11:26:30 +0000] [79] [CRITICAL] WORKER TIMEOUT (pid:215)
2023-01-12 03:26:31 [2023-01-12 11:26:31 +0000] [79] [CRITICAL] WORKER TIMEOUT (pid:216)
2023-01-12 03:26:33 [2023-01-12 11:26:32 +0000] [79] [CRITICAL] WORKER TIMEOUT (pid:217)
2023-01-12 03:26:33 [2023-01-12 11:26:33 +0000] [79] [WARNING] Worker with pid 215 was terminated due to signal 9
2023-01-12 03:26:34 [2023-01-12 11:26:34 +0000] [262] [INFO] Booting worker with pid: 262
2023-01-12 03:26:36 [2023-01-12 11:26:36 +0000] [79] [WARNING] Worker with pid 217 was terminated due to signal 9
2023-01-12 03:26:36 [2023-01-12 11:26:36 +0000] [79] [CRITICAL] WORKER TIMEOUT (pid:219)
2023-01-12 03:26:36 [2023-01-12 11:26:36 +0000] [263] [INFO] Booting worker with pid: 263
2023-01-12 03:26:37 [2023-01-12 11:26:37 +0000] [79] [WARNING] Worker with pid 216 was terminated due to signal 9
2023-01-12 03:26:38 [2023-01-12 11:26:38 +0000] [265] [INFO] Booting worker with pid: 265
2023-01-12 03:26:39 [2023-01-12 11:26:39 +0000] [79] [WARNING] Worker with pid 219 was terminated due to signal 9
2023-01-12 03:26:40 [2023-01-12 11:26:40 +0000] [266] [INFO] Booting worker with pid: 266
2023-01-12 03:28:34 [2023-01-12 11:28:33 +0000] [79] [CRITICAL] WORKER TIMEOUT (pid:262)
2023-01-12 03:28:36 [2023-01-12 11:28:36 +0000] [79] [CRITICAL] WORKER TIMEOUT (pid:263)
2023-01-12 03:28:38 [2023-01-12 11:28:38 +0000] [79] [CRITICAL] WORKER TIMEOUT (pid:265)
2023-01-12 03:28:39 [2023-01-12 11:28:39 +0000] [79] [CRITICAL] WORKER TIMEOUT (pid:266)
...And the "worker timeout-booting worker-worker timeout" cycle continues forever.
Now, if I comment out (remove) the redis, airflow-worker, and airflow-triggerer parts of the compose file, as suggested by this article under its "Airflow Installation -- Lite Version" section, everything runs fine and everything is healthy. But I know that I'm going to need those containers in the future.
If I've maxed out my macOS resources, what do you suggest I do?
(NOTE: This question on Stack Overflow mentions increasing Docker memory when running on desktop as the solution. However, as described above, I have already tried that and it did not work.)
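One hedged thing to try before giving up on resources: since it is specifically the webserver's gunicorn workers that time out, Airflow's own webserver timeouts can be raised through the standard AIRFLOW__SECTION__KEY environment-variable mapping. This is a sketch against the environment block of the official compose file's x-airflow-common anchor; the keys correspond to [webserver] web_server_master_timeout and web_server_worker_timeout, which default to 120 seconds, and the values here are guesses:

environment:
  # Give slow-starting gunicorn workers more headroom (seconds).
  AIRFLOW__WEBSERVER__WEB_SERVER_MASTER_TIMEOUT: "300"
  AIRFLOW__WEBSERVER__WEB_SERVER_WORKER_TIMEOUT: "300"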

MLflow UI can't show artifacts

I have MLflow running on an Azure VM and connected to Azure Blob Storage as the artifact store.
After uploading artifacts to the storage from the client, I tried the MLflow UI and was able to view the uploaded file.
The problem happens when I try to run MLflow with Docker; I get the error:
Unable to list artifacts stored under {artifactUri} for the current run. Please contact your tracking server administrator to notify them of this error, which can happen when the tracking server lacks permission to list artifacts under the current run's root artifact directory
Dockerfile:
FROM python:3.7-slim-buster
# Install python packages
RUN pip install mlflow pymysql
RUN pip install azure-storage-blob
ENV AZURE_STORAGE_ACCESS_KEY="#########"
ENV AZURE_STORAGE_CONNECTION_STRING="#######"
docker-compose.yml
web:
  restart: always
  build: ./mlflow_server
  image: mlflow_server
  container_name: mlflow_server
  expose:
    - "5000"
  networks:
    - frontend
    - backend
  environment:
    - AZURE_STORAGE_ACCESS_KEY="#####"
    - AZURE_STORAGE_CONNECTION_STRING="#####"
  command: mlflow server --backend-store-uri mysql+pymysql://mlflow_user:123456@db:3306/mlflow --default-artifact-root wasbs://etc..
I tried multiple solutions:
Making sure that boto3 is installed (didn't do anything)
Adding the environment variables in the Dockerfile so the command runs after they're set
Double-checking the URL of the storage blob
And MLflow doesn't show any logs; it just kills the process and restarts again.
Does anyone have any idea what the solution might be, or how I can access the logs?
Here are the docker logs of the container:
[2022-07-28 12:23:33 +0000] [10] [INFO] Starting gunicorn 20.1.0
[2022-07-28 12:23:33 +0000] [10] [INFO] Listening at: http://0.0.0.0:5000 (10)
[2022-07-28 12:23:33 +0000] [10] [INFO] Using worker: sync
[2022-07-28 12:23:33 +0000] [13] [INFO] Booting worker with pid: 13
[2022-07-28 12:23:33 +0000] [14] [INFO] Booting worker with pid: 14
[2022-07-28 12:23:33 +0000] [15] [INFO] Booting worker with pid: 15
[2022-07-28 12:23:33 +0000] [16] [INFO] Booting worker with pid: 16
[2022-07-28 12:24:24 +0000] [10] [CRITICAL] WORKER TIMEOUT (pid:14)
[2022-07-28 12:24:24 +0000] [14] [INFO] Worker exiting (pid: 14)
[2022-07-28 12:24:24 +0000] [21] [INFO] Booting worker with pid: 21
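Since mlflow server launches gunicorn under the hood, one hedged way to get more log output and more startup headroom without changing the mlflow command itself is gunicorn's GUNICORN_CMD_ARGS environment variable, which gunicorn reads at startup and which the spawned process would inherit; the values below are guesses, added to the service's environment block from the compose file above:

environment:
  - AZURE_STORAGE_ACCESS_KEY="#####"
  - AZURE_STORAGE_CONNECTION_STRING="#####"
  # Inherited by the gunicorn that `mlflow server` spawns:
  # longer worker timeout plus debug logging to see why workers die.
  - GUNICORN_CMD_ARGS=--timeout 300 --log-level debug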

Nextcloud container after move

I have a server, an HP t620 with a 64 GB SSD inside and two 1 TB HDDs in an enclosure connected over USB 3.0 and configured as RAID 1 (the enclosure supports RAID 1, RAID 0, and JBOD). The HDDs are mounted under /srv/dev-disk-by-uuid-9eac4f7c-81d6-48a7-9a4a-c8f20ceba7b5
It runs OMV 5.10.
I installed Docker through the OMV web GUI in /srv/dev-disk-by-uuid-9eac4f7c-81d6-48a7-9a4a-c8f20ceba7b5/dane_aplikacji/docker. I run the Nextcloud server app on Docker from this docker-compose file:
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 88:80
      - 444:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    environment:
      - VIRTUAL_PROTO=https
      - VIRTUAL_PORT=444
    restart: unless-stopped
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nextcloud-letsencrypt
    depends_on:
      - proxy
    cap_add:
      - NET_ADMIN
    networks:
      - nextcloud_network
    environment:
      - PUID=998 # change PUID if needed
      - PGID=100 # change PGID if needed
      - TZ=Europe/Warsaw # change time zone if needed
      - URL=myurl.duckdns.org
      - SUBDOMAINS=www
      - VALIDATION=https
      - EMAIL=myaccount@gmail.com
    volumes:
      - ./proxy/certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
  db:
    image: mariadb
    command: --skip-innodb-read-only-compressed
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=SUPER_PASSWORD
      - MYSQL_PASSWORD=SUPER_PASSWORD
      - MYSQL_USER=nextcloud
      - MYSQL_DATABASE=nextcloud
      - MYSQL_HOST=db
    restart: unless-stopped
  app:
    image: nextcloud:latest # ghcr.io/linuxserver/nextcloud
    container_name: nextcloud-app
    hostname: myurl.duckdns.org
    networks:
      - nextcloud_network
    depends_on:
      - letsencrypt
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html:z
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - /srv/dev-disk-by-uuid-9eac4f7c-81d6-48a7-9a4a-c8f20ceba7b5/nextcloud:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - TZ=Europe/Warsaw
      - VIRTUAL_HOST=myurl.duckdns.org
      - LETSENCRYPT_HOST=myurl.duckdns.org:444
      - LETSENCRYPT_EMAIL=myaccount@gmail.com
      - PHP_MEMORY_LIMIT=20G
      - MYSQL_ROOT_PASSWORD=SUPER_PASSWORD
      - MYSQL_PASSWORD=SUPER_PASSWORD
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - NEXTCLOUD_HOSTNAME=myurl.duckdns.org
    restart: unless-stopped
  news-updater:
    image: kr3ssh/nextcloud-news-updater
    environment:
      - INTERVAL=60
      - NEXTCLOUD_URL=http://myurl.duckdns.org:88
      - NEXTCLOUD_ADMIN_USER=PAN_ADMIN
      - NEXTCLOUD_ADMIN_PASSWORD=SUPER_PASSWORD
    restart: always
volumes:
  nextcloud:
  db:
networks:
  nextcloud_network:
When I started using the server I had only a 16 GB SSD inside, and the Docker files lived on /srv/dev-disk-by-uuid-9eac4f7c-81d6-48a7-9a4a-c8f20ceba7b5/dane_aplikacji/docker, but recently I installed the 64 GB SSD. I took an OS image, wrote it to the 64 GB disk with Rufus, and expanded the partition. Docker and Nextcloud worked without a problem. Next I copied the Docker files from /srv/dev-disk-by-uuid-9eac4f7c-81d6-48a7-9a4a-c8f20ceba7b5/dane_aplikacji/docker to /var/lib/docker and ran Docker from there via the OMV web GUI. That was a bad idea, because the Nextcloud container hasn't worked since. On the web page I only see:
Internal Server Error
The server encountered an internal error and was unable to complete your request.
Please contact the server administrator if this error reappears multiple times, please include the technical details below in your report.
More details can be found in the server log.
In the nextcloud log file I see:
[Sat Jan 15 15:14:11.109982 2022] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.51 (Debian) PHP/8.0.14 configured -- resuming normal operations
[Sat Jan 15 15:14:11.110121 2022] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
ITS.MYI.P - - [15/Jan/2022:15:14:12 +0100] "GET / HTTP/1.1" 500 717 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ITS.MYI.P Safari/537.36 Edg/ITS.MYI.P"
ITS.MYI.P - admin [15/Jan/2022:15:14:38 +0100] "GET /index.php/apps/news/api/v1-2/cleanup/before-update HTTP/1.1" 500 717 "-" "Python-urllib/3.7"
In the nginx log file I see:
nginx.1 | 2022/01/15 15:14:08 [notice] 24#24: start worker processes
nginx.1 | 2022/01/15 15:14:08 [notice] 24#24: start worker process 30
nginx.1 | 2022/01/15 15:14:08 [notice] 24#24: start worker process 31
nginx.1 | 2022/01/15 15:14:08 [notice] 24#24: start worker process 32
nginx.1 | 2022/01/15 15:14:08 [notice] 24#24: start worker process 33
dockergen.1 | 2022/01/15 15:14:08 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
dockergen.1 | 2022/01/15 15:14:08 Watching docker events
dockergen.1 | 2022/01/15 15:14:08 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx.1 | myurl.duckdns.org ITS.MYI.P - admin [15/Jan/2022:15:14:08 +0100] "GET /index.php/apps/news/api/v1-2/cleanup/before-update HTTP/1.1" 503 190 "-" "Python-urllib/3.7" "-"
dockergen.1 | 2022/01/15 15:14:09 Received event start for container 70f8ad9b1266
dockergen.1 | 2022/01/15 15:14:09 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx.1 | myurl.duckdns.org ITS.MYI.P - - [15/Jan/2022:15:14:09 +0100] "GET / HTTP/1.1" 503 592 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ITS.MYI.P Safari/537.36 Edg/ITS.MYI.P" "-"
dockergen.1 | 2022/01/15 15:14:10 Received event start for container 94b2d83900f4
dockergen.1 | 2022/01/15 15:14:10 Generated '/etc/nginx/conf.d/default.conf' from 10 containers
dockergen.1 | 2022/01/15 15:14:10 Running 'nginx -s reload'
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: signal 1 (SIGHUP) received from 37, reconfiguring
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: reconfiguring
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: using the "epoll" event method
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: start worker processes
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: start worker process 38
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: start worker process 39
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: start worker process 40
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: start worker process 41
nginx.1 | 2022/01/15 15:14:10 [notice] 30#30: gracefully shutting down
nginx.1 | 2022/01/15 15:14:10 [notice] 31#31: gracefully shutting down
nginx.1 | 2022/01/15 15:14:10 [notice] 30#30: exiting
nginx.1 | 2022/01/15 15:14:10 [notice] 32#32: gracefully shutting down
nginx.1 | 2022/01/15 15:14:10 [notice] 32#32: exiting
nginx.1 | 2022/01/15 15:14:10 [notice] 30#30: exit
nginx.1 | 2022/01/15 15:14:10 [notice] 32#32: exit
nginx.1 | 2022/01/15 15:14:10 [notice] 33#33: gracefully shutting down
nginx.1 | 2022/01/15 15:14:10 [notice] 33#33: exiting
nginx.1 | 2022/01/15 15:14:10 [notice] 33#33: exit
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: signal 17 (SIGCHLD) received from 33
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: worker process 33 exited with code 0
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: signal 29 (SIGIO) received
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: signal 17 (SIGCHLD) received from 32
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: worker process 32 exited with code 0
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: signal 29 (SIGIO) received
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: signal 17 (SIGCHLD) received from 30
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: worker process 30 exited with code 0
nginx.1 | 2022/01/15 15:14:10 [notice] 24#24: signal 29 (SIGIO) received
nginx.1 | 2022/01/15 15:14:11 [notice] 31#31: exiting
I tried chmod 775 on /srv/dev-disk-by-uuid-9eac4f7c-81d6-48a7-9a4a-c8f20ceba7b5/nextcloud, /var/www, and /var/lib/docker, reinstalling apache2, and installing php, and the effect is the same.
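A hedged guess at a next step: copying /var/lib/docker and the data directory by hand often preserves permission bits (which chmod 775 addresses) but not ownership, and the official nextcloud image runs Apache/PHP as www-data (UID 33), which a 500 on every request is consistent with. Something like the following, run on the host with the paths taken from the compose file above, restores ownership of the bind-mounted directories:

# Stop the stack so files are not in use while ownership changes.
docker-compose down

# The official nextcloud image runs as www-data (UID/GID 33).
chown -R 33:33 /srv/dev-disk-by-uuid-9eac4f7c-81d6-48a7-9a4a-c8f20ceba7b5/nextcloud
chown -R 33:33 ./app/config ./app/custom_apps ./app/themes

docker-compose up -d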

Running Gunicorn Flask app in Docker [CRITICAL] WORKER TIMEOUT when starting up

I want to run a Flask web services app with gunicorn in Docker. Upon startup, the app loads a large machine learning model.
However, when I run gunicorn within Docker I get the following timeouts, and it just keeps spawning workers.
[2019-12-12 21:52:42 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:1198)
[2019-12-12 21:52:42 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:1204)
[2019-12-12 21:52:42 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:1210)
[2019-12-12 21:52:42 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:1211)
[2019-12-12 21:52:42 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:1222)
[2019-12-12 21:52:42 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:1223)
[2019-12-12 21:52:42 +0000] [1264] [INFO] Booting worker with pid: 1264
[2019-12-12 21:52:42 +0000] [1265] [INFO] Booting worker with pid: 1265
[2019-12-12 21:52:42 +0000] [1276] [INFO] Booting worker with pid: 1276
[2019-12-12 21:52:42 +0000] [1277] [INFO] Booting worker with pid: 1277
[2019-12-12 21:52:42 +0000] [1278] [INFO] Booting worker with pid: 1278
[2019-12-12 21:52:42 +0000] [1289] [INFO] Booting worker with pid: 1289
Running it as a flask app within Docker or running the flask app with (or without) gunicorn from the command line works fine. It also works with gunicorn if I remove the machine learning model.
For example:
$python app.py
$gunicorn -b 0.0.0.0:8080 --workers=2 --threads=4 app:app
$gunicorn app:app
Here is my Dockerfile with the Flask development server. Works fine.
# Assumed base image; the FROM line was omitted from the post.
FROM python:3.7
ADD . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD python app.py
If I run gunicorn as follows it just keeps spawning workers:
CMD gunicorn -b 0.0.0.0:8080 --workers=2 --threads=4 app:app
or
CMD ["gunicorn", "app:app"]
gunicorn has a --timeout parameter that defaults to 30 seconds, which I increased to 300. This did not appear to have an effect.
Note: I rewrote the app for the Starlette library and received the same results!
Any guidance is appreciated.
Thanks,
Jay
I needed to add the gunicorn --timeout as follows:
CMD gunicorn --timeout 1000 --workers 1 --threads 4 --log-level debug --bind 0.0.0.0:8000 app:app
I also ran into problems deploying on Google Cloud Platform. The log only showed a kill message. Increasing the memory on the compute instance solved that problem.
try this
CMD["gunicorn", "--timeout", "1000", "--workers=1","-b", "0.0.0.0:8000","--log-level", "debug", "manage"]

Can't connect to postgres when using docker-compose

I am new to Docker and still learning how to use it.
I am trying to use docker-compose to run Django and Postgres together, and they run perfectly, the migrations run, and everything works, but I have a problem: I can't connect to the database using pgAdmin4 to look at it.
Here's my settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'slack',
        'USER': 'username',
        'PASSWORD': 'password',
        'HOST': 'db',
        'PORT': 5432,
    }
}
and here's my docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: slack
      POSTGRES_USER: username
      POSTGRES_PASSWORD: password
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/slack_code
    ports:
      - "8000:8000"
    depends_on:
      - db
Everything seems to be fine:
sudo docker-compose up
slackwebapp_db_1 is up-to-date
Creating slackwebapp_web_1 ... done
Attaching to slackwebapp_db_1, slackwebapp_web_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | creating configuration files ... ok
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... ok
db_1 | syncing data to disk ... ok
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 |
db_1 | WARNING: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 | waiting for server to start....2018-01-18 19:46:43.851 UTC [38] LOG: listening on IPv4 address "127.0.0.1", port 5432
db_1 | 2018-01-18 19:46:43.851 UTC [38] LOG: could not bind IPv6 address "::1": Cannot assign requested address
db_1 | 2018-01-18 19:46:43.851 UTC [38] HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
db_1 | 2018-01-18 19:46:43.853 UTC [38] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-01-18 19:46:43.864 UTC [39] LOG: database system was shut down at 2018-01-18 19:46:43 UTC
db_1 | 2018-01-18 19:46:43.867 UTC [38] LOG: database system is ready to accept connections
db_1 | done
db_1 | server started
db_1 | CREATE DATABASE
db_1 |
db_1 | CREATE ROLE
db_1 |
db_1 |
db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1 |
db_1 | 2018-01-18 19:46:44.388 UTC [38] LOG: received fast shutdown request
db_1 | waiting for server to shut down....2018-01-18 19:46:44.389 UTC [38] LOG: aborting any active transactions
db_1 | 2018-01-18 19:46:44.390 UTC [38] LOG: worker process: logical replication launcher (PID 45) exited with exit code 1
db_1 | 2018-01-18 19:46:44.391 UTC [40] LOG: shutting down
db_1 | 2018-01-18 19:46:44.402 UTC [38] LOG: database system is shut down
db_1 | done
db_1 | server stopped
db_1 |
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2018-01-18 19:46:44.501 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-01-18 19:46:44.501 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-01-18 19:46:44.502 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-01-18 19:46:44.514 UTC [65] LOG: database system was shut down at 2018-01-18 19:46:44 UTC
db_1 | 2018-01-18 19:46:44.518 UTC [1] LOG: database system is ready to accept connections
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
web_1 |
web_1 | You have 14 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
web_1 | Run 'python manage.py migrate' to apply them.
web_1 | January 18, 2018 - 19:48:49
web_1 | Django version 2.0.1, using settings 'slack_webapp.settings'
web_1 | Starting development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.
web_1 | [18/Jan/2018 19:56:03] "GET / HTTP/1.1" 200 16559
web_1 | [18/Jan/2018 19:56:03] "GET /static/admin/css/fonts.css HTTP/1.1" 200 423
web_1 | [18/Jan/2018 19:56:04] "GET /static/admin/fonts/Roboto-Bold-webfont.woff HTTP/1.1" 200 82564
web_1 | [18/Jan/2018 19:56:04] "GET /static/admin/fonts/Roboto-Regular-webfont.woff HTTP/1.1" 200 80304
web_1 | [18/Jan/2018 19:56:04] "GET /static/admin/fonts/Roboto-Light-webfont.woff HTTP/1.1" 200 81348
web_1 | [18/Jan/2018 19:56:08] "GET /admin/ HTTP/1.1" 302 0
web_1 | [18/Jan/2018 19:56:09] "GET /admin/login/?next=/admin/ HTTP/1.1" 200 1855
web_1 | [18/Jan/2018 19:56:09] "GET /static/admin/css/base.css HTTP/1.1" 200 16106
web_1 | [18/Jan/2018 19:56:09] "GET /static/admin/css/responsive.css HTTP/1.1" 200 17894
web_1 | [18/Jan/2018 19:56:09] "GET /static/admin/css/login.css HTTP/1.1" 200 1203
web_1 | [18/Jan/2018 19:58:58] "POST /admin/login/?next=/admin/ HTTP/1.1" 302 0
web_1 | [18/Jan/2018 19:58:58] "GET /admin/ HTTP/1.1" 200 2988
web_1 | [18/Jan/2018 19:58:58] "GET /static/admin/css/base.css HTTP/1.1" 304 0
web_1 | [18/Jan/2018 19:58:58] "GET /static/admin/css/dashboard.css HTTP/1.1" 200 412
web_1 | [18/Jan/2018 19:58:58] "GET /static/admin/css/responsive.css HTTP/1.1" 304 0
web_1 | [18/Jan/2018 19:58:58] "GET /static/admin/css/fonts.css HTTP/1.1" 304 0
web_1 | [18/Jan/2018 19:58:58] "GET /static/admin/fonts/Roboto-Bold-webfont.woff HTTP/1.1" 304 0
web_1 | [18/Jan/2018 19:58:58] "GET /static/admin/fonts/Roboto-Light-webfont.woff HTTP/1.1" 304 0
web_1 | [18/Jan/2018 19:58:58] "GET /static/admin/fonts/Roboto-Regular-webfont.woff HTTP/1.1" 304 0
web_1 | [18/Jan/2018 19:58:58] "GET /static/admin/img/icon-addlink.svg HTTP/1.1" 200 331
web_1 | [18/Jan/2018 19:58:58] "GET /static/admin/img/icon-changelink.svg HTTP/1.1" 200 380
web_1 | [18/Jan/2018 19:59:05] "GET /admin/ HTTP/1.1" 200 2988
web_1 | [18/Jan/2018 19:59:07] "GET /admin/ HTTP/1.1" 200 2988
web_1 | [18/Jan/2018 19:59:11] "GET /admin/ HTTP/1.1" 200 2988
^CGracefully stopping... (press Ctrl+C again to force)
Stopping slackwebapp_web_1 ... done
Stopping slackwebapp_db_1 ... done
But I still can't connect, and I don't know how to set up a password for the Postgres default user like we do with
sudo docker run --name test -e POSTGRES_PASSWORD=password -d postgres
since I can't do the same with docker-compose, I guess. Thanks in advance.
The hostname should be the name of the service defined in your docker-compose.yml.
This is because you're on the Docker network.
You cannot use localhost or 127.0.0.1 here, because pgAdmin is in a container, and localhost there means 'the pgAdmin container' itself.
Let's consider your case:
version: '3'
services:
  db:
    image: postgres
    ports:
      - 5432:5432
    environment:
      POSTGRES_DB: slack
      POSTGRES_USER: snowflake
      POSTGRES_PASSWORD: 1Stepclose
  pgadmin:
    image: chorss/docker-pgadmin4
    ports:
      - 5050:5050
In this case:
Hostname: db
Port: 5432
User: snowflake
Password: 1Stepclose
Hope this helps :)
In order to access the Postgres database from an external program, you will need to publish port 5432, which is exposed by the postgres service, to a port on your host.
With the following changes to your docker-compose.yml file you should be able to connect to the database using pgAdmin (on localhost:5432), as well as have Postgres create your user for you.
db:
  image: postgres
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_DB=slack
    - POSTGRES_USER=snowflake
    - POSTGRES_PASSWORD=1Stepclose
Edit:
I did not realize that you were trying to connect with pgAdmin4 running in another Docker container. The easiest way to let the pgAdmin4 container communicate with the postgres container is to add pgadmin as a service in your docker-compose.yml file. Update it to contain the following configuration:
version: '3'
services:
  db:
    image: postgres
    ports:
      - 5432:5432
    environment:
      POSTGRES_DB: slack
      POSTGRES_USER: snowflake
      POSTGRES_PASSWORD: 1Stepclose
  pgadmin:
    image: chorss/docker-pgadmin4
    ports:
      - 5050:5050
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/slack_code
    ports:
      - "8000:8000"
    depends_on:
      - db
Browse to localhost:5050 > Click Add New Server > Enter any name > Click on the Connection tab > Enter db for Hostname/Address > Enter snowflake for Username > Enter 1Stepclose for Password > Click Save.
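As a quick sanity check that the database and credentials are fine independently of pgAdmin, you can also open psql inside the db container (service name and credentials taken from the compose file above), or connect from the host once 5432 is published:

# Inside the container, via the compose service name:
docker-compose exec db psql -U snowflake -d slack -c '\dt'

# From the host, after publishing 5432:5432:
psql -h localhost -p 5432 -U snowflake slack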
In your settings.py you are calling the db host 'db', but in your compose output it appears to be called 'slackwebapp_db_1'. Try changing settings.py to use the full name.
I have used PyCharm to manage my project, and PyCharm's built-in database tool connected successfully, so maybe something was wrong with pgAdmin4. I was using the chorss/docker-pgadmin4 image for pgAdmin4, as I am on Linux, so maybe something was wrong with the image. Thanks, guys, for your effort.
I looked at postgres, from which I found the Git repository (according to your docker-compose, you use the latest tag). It looks like the default user name and password 'postgres' are hard-coded. You might want to try 'postgres' in settings.py for the user name and password and see if that works.
