Numpy error occurs even though uWSGI Emperor is enabled - uwsgi

overview
uWSGI Emperor is enabled on Ubuntu, but numpy does not work under it.
Why doesn't it work? My portfolio site is unusable.
I have tried many things, but I still don't understand what is going on.
emperor.uwsgi.service - uWSGI Emperor
Loaded: loaded (/etc/systemd/system/emperor.uwsgi.service; disabled; vendor preset: enabled)
Active: active (running) since Sat 2022-03-19 22:15:50 JST; 12h ago
Main PID: 435890 (uwsgi)
Status: "The Emperor is governing 1 vassals"
Tasks: 462 (limit: 462)
Memory: 119.8M
CGroup: /system.slice/emperor.uwsgi.service
├─435890 /var/www/html/venv/bin/uwsgi --master --emperor /etc/uwsgi/vassals
├─435899 /var/www/html/venv/bin/uwsgi --master --emperor /etc/uwsgi/vassals
settings
uwsgi.ini
[uwsgi]
chdir=/var/www/html/portfolio/mysite
module=mysite.wsgi:application
master=True
pidfile=/tmp/project-master.pid
vacuum=True
max-requests=5000
daemonize=/var/log/uwsgi/uwsgi.log
single-interpreter=True
add user
# adduser --group uwsgi-data
# adduser --home /etc/uwsgi --no-create-home --shell /sbin/nologin --ingroup uwsgi-data --disabled-login uwsgi-data
change permission
# chown -R uwsgi-data:uwsgi-data /etc/uwsgi
# mkdir /var/log/uwsgi
# chown -R uwsgi-data:uwsgi-data /var/log/uwsgi
create a service file emperor.uwsgi.service
[Unit]
Description=uWSGI Emperor
After=syslog.service
[Service]
# Find the configuration files that exist in "/etc/uwsgi/vassals" and start the uWSGI daemon.
ExecStart=/var/www/html/venv/bin/uwsgi --master --emperor /etc/uwsgi/vassals
RuntimeDirectory=uwsgi
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
emperor.ini
[uwsgi]
emperor = /etc/uwsgi/vassals
uid = uwsgi-data
gid = uwsgi-data
turn on
# systemctl start emperor.uwsgi.service
# systemctl status emperor.uwsgi.service
● emperor.uwsgi.service - uWSGI Emperor
Loaded: loaded (/etc/systemd/system/emperor.uwsgi.service; disabled; vendor preset: enabled)
Active: active (running)
# service apache2 restart
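One thing worth checking, since the vassal's uwsgi.ini never mentions the virtualenv: if numpy is installed only inside /var/www/html/venv, a vassal that is not pointed at that venv may run against the system Python and fail to import it. A hedged sketch of the ini with a home (virtualenv) line added; the path is inferred from the ExecStart above and may need adjusting:

```ini
[uwsgi]
chdir=/var/www/html/portfolio/mysite
module=mysite.wsgi:application
; point the vassal at the virtualenv where numpy is installed
; (path assumed from the ExecStart in the service file)
home=/var/www/html/venv
master=True
pidfile=/tmp/project-master.pid
vacuum=True
max-requests=5000
daemonize=/var/log/uwsgi/uwsgi.log
single-interpreter=True
```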

Where do you create the ini file? (Uwsgi)

my reference: https://docs.djangoproject.com/en/4.0/howto/deployment/wsgi/uwsgi/
my project name: 'mysite'
my directories:
I created 'uwsgi.ini' and wrote the following into it:
[uwsgi]
chdir=/var/www/html/portfolio/mysite
module=mysite.wsgi:application
master=True
pidfile=/tmp/project-master.pid
vacuum=True
max-requests=5000
daemonize=/var/log/uwsgi/yourproject.log
single-interpreter=True
Then I ran 'service apache2 restart', followed by 'uwsgi --ini uwsgi.ini':
# uwsgi --ini uwsgi.ini
# open("/var/log/uwsgi/yourproject.log"): No such file or directory [core/logging.c line 288]
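The error above means the directory named in daemonize= does not exist; uWSGI opens the log file itself but does not create missing parent directories. A minimal sketch of the fix, demonstrated under a temporary prefix rather than writing to the real /var/log (on the server you would create /var/log/uwsgi as root and chown it to the uwsgi user, as the steps below do):

```shell
# uWSGI's daemonize= option fails with "No such file or directory"
# when the log directory is missing; create it before starting uWSGI.
# Shown under a temp prefix so this is safe to run anywhere.
PREFIX="$(mktemp -d)"
mkdir -p "$PREFIX/var/log/uwsgi"
touch "$PREFIX/var/log/uwsgi/yourproject.log"
ls "$PREFIX/var/log/uwsgi"
```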
reference (Japanese): Ubuntu server fails
(console)
# pip3 install uwsgi
Installing collected packages: uwsgi
Successfully installed uwsgi-2.0.20
# mkdir -p /etc/uwsgi/vassals
# cd /etc/uwsgi/vassals
# source /var/www/html/venv/bin/activate
# vi uwsgi.ini
(uwsgi.ini(new))
[uwsgi]
chdir=/var/www/html/portfolio/mysite
module=mysite.wsgi:application
master=True
pidfile=/tmp/project-master.pid
vacuum=True
max-requests=5000
daemonize=/var/log/uwsgi/uwsgi.log
single-interpreter=True
(console)
# adduser --group uwsgi-data
# adduser --home /etc/uwsgi --no-create-home --shell /sbin/nologin --ingroup uwsgi-data --disabled-login uwsgi-data
Adding user `uwsgi-data' ...
Adding new user `uwsgi-data' (1001) with group `uwsgi-data' ...
Not creating home directory `/etc/uwsgi'.
Changing the user information for uwsgi-data
Enter the new value, or press ENTER for the
default
Full Name []: # Empty and OK
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y
(console)
# chown -R uwsgi-data:uwsgi-data /etc/uwsgi
# mkdir /var/log/uwsgi
# chown -R uwsgi-data:uwsgi-data /var/log/uwsgi
# vi /etc/systemd/system/emperor.uwsgi.service
(emperor.uwsgi.service)
[Unit]
Description=uWSGI Emperor
After=syslog.service
[Service]
# Find the configuration file that exists in "/etc/uwsgi/vassals" and start the uWSGI daemon.
ExecStart=/var/www/html/venv/bin/uwsgi --master --emperor /etc/uwsgi/vassals
RuntimeDirectory=uwsgi
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
(console)
# chmod 755 /etc/systemd/system/emperor.uwsgi.service
# vi /etc/uwsgi/vassals/emperor.ini
(emperor.ini)
[uwsgi]
emperor = /etc/uwsgi/vassals
uid = uwsgi-data
gid = uwsgi-data
(console)
# systemctl start emperor.uwsgi.service
# systemctl status emperor.uwsgi.service
● emperor.uwsgi.service - uWSGI Emperor
Loaded: loaded (/etc/systemd/system/emperor.uwsgi.service; disabled; vendor preset: enabled)
Active: active (running)
# service apache2 restart

Tensorflow sess.run() doesn't execute in route function when hosting app in wsgi

I have the following app.py file:
import json
from flask import Flask
import numpy as np
from dssm.model_dense_ngram import *

app = Flask(__name__)

sess = tf.compat.v1.Session()
init = tf.compat.v1.global_variables_initializer()
sess.run(init)
print("making representation")
representation, = sess.run([y], feed_dict={x: np.zeros((1, NO_OF_TRIGRAMS))})
print("Sum of representation: {}".format(np.sum(representation)))

def get_representation():
    print("Making representation")
    representation, = sess.run([y], feed_dict={x: np.zeros((1, NO_OF_TRIGRAMS))})
    print("Made representation")
    return np.sum(representation)

# We call the API like: localhost:5000/neuralSearch/
@app.route("/neuralSearch")
def get_neural_search():
    return json.dumps({
        "result": get_representation(),
    }, indent=4)
I'm hosting it in a docker container with nginx and wsgi. Here's the Dockerfile:
FROM maven:3.6.3-jdk-11
RUN apt-get clean \
&& apt-get -y update
RUN apt-get -y install python3.7
RUN apt-get -y install nginx \
&& apt-get -y install python3-dev \
&& apt-get -y install build-essential
RUN apt-get -y install python3-setuptools
RUN apt -y install python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install wheel
RUN apt-get -y install libpcre3 libpcre3-dev
RUN pip3 install uwsgi
RUN mkdir -p /srv/flask_app
COPY dssm /srv/flask_app/dssm
COPY uwsgi.ini /srv/flask_app
COPY requirements.txt /srv/flask_app
COPY start.sh /srv/flask_app
COPY wsgi.py /srv/flask_app
COPY app.py /srv/flask_app/app.py
WORKDIR /srv/flask_app
RUN pip install -r requirements.txt --src /usr/local/src
RUN rm /etc/nginx/sites-enabled/default
RUN rm -r /root/.cache
COPY nginx.conf /etc/nginx/
RUN chmod +x ./start.sh
ENV FLASK_APP app.py
ENV NEURALSEARCH_TRIGRAMS_PATH /srv/preprocessed_datasets/trigrams.txt
ENV CONFLUENCE_INDICES_FILE /srv/preprocessed_datasets/confluence/data.csv
ENV CONFLUENCE_TEXT_FILE /srv/preprocessed_datasets/confluence/mid.json
EXPOSE 80
ENTRYPOINT ["./start.sh"]
I build with docker build . -t flask_image and run the container with docker run --name flask_container -p 80:80 flask_image. When I run the container, I get the following output:
Starting nginx: nginx.
[uWSGI] getting INI configuration from uwsgi.ini
*** Starting uWSGI 2.0.17.1 (64bit) on [Wed Jul 29 22:33:23 2020] ***
compiled with version: 8.3.0 on 29 July 2020 22:31:17
os: Linux-4.19.76-linuxkit #1 SMP Tue May 26 11:42:35 UTC 2020
nodename: 2a28f8711a05
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /srv/flask_app
detected binary path: /usr/local/bin/uwsgi
setgid() to 33
setuid() to 33
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.socket fd 3
Python version: 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x558ebb239230
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 437424 bytes (427 KB) for 5 cores
*** Operational MODE: preforking ***
2020-07-29 22:33:24.500245: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-07-29 22:33:24.500335: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2020-07-29 22:33:26.270527: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2020-07-29 22:33:26.270706: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
2020-07-29 22:33:26.270745: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (2a28f8711a05): /proc/driver/nvidia/version does not exist
2020-07-29 22:33:26.271023: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-07-29 22:33:26.279470: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2492630000 Hz
2020-07-29 22:33:26.280063: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x558ebe1eb190 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-29 22:33:26.280148: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
making representation
Sum of representation: -0.7329927682876587
WSGI app 0 (mountpoint='') ready in 3 seconds on interpreter 0x558ebb239230 pid: 23 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 23)
spawned uWSGI worker 1 (pid: 47, cores: 1)
spawned uWSGI worker 2 (pid: 48, cores: 1)
spawned uWSGI worker 3 (pid: 49, cores: 1)
spawned uWSGI worker 4 (pid: 50, cores: 1)
spawned uWSGI worker 5 (pid: 51, cores: 1)
As evidenced by the line Sum of representation: -0.7329927682876587, the first sess.run() call in app.py executes successfully.
However, if I call the endpoint /neuralSearch, the whole program execution comes to a halt at sess.run() in the function get_representation(). I get the following output:
Making representation
Nothing more is printed; the server never returns a response, it just freezes there. Why does this happen, and how can I fix it?
ADDITIONAL FILES THAT MAY BE NECESSARY:
nginx.conf configures the nginx server:
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    access_log /dev/stdout;
    error_log /dev/stdout;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    index index.html index.htm;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name localhost;
        root /var/www/html;

        location / {
            include uwsgi_params;
            uwsgi_pass unix:/tmp/uwsgi.socket;
            uwsgi_read_timeout 1h;
            uwsgi_send_timeout 1h;
            proxy_read_timeout 1h;
            proxy_send_timeout 1h;
        }
    }
}
wsgi.py:
from app import app
uwsgi.ini:
[uwsgi]
module = wsgi:app
uid = www-data
gid = www-data
master = true
processes = 5
socket = /tmp/uwsgi.socket
chmod-sock = 664
vacuum = true
die-on-term = true
start.sh:
#!/usr/bin/env bash
service nginx start
uwsgi --ini uwsgi.ini
EDIT:
Output of docker container top <container name>:
$ docker container top 2a28f8711a05
PID USER TIME COMMAND
65246 root 0:00 bash ./start.sh
65292 root 0:00 nginx: master process /usr/sbin/nginx
65293 xfs 0:00 nginx: worker process
65294 xfs 0:00 nginx: worker process
65295 xfs 0:00 nginx: worker process
65296 xfs 0:00 nginx: worker process
65297 xfs 0:03 uwsgi --ini uwsgi.ini
65321 xfs 0:00 uwsgi --ini uwsgi.ini
65322 xfs 0:00 uwsgi --ini uwsgi.ini
65323 xfs 0:00 uwsgi --ini uwsgi.ini
65324 xfs 0:00 uwsgi --ini uwsgi.ini
65325 xfs 0:00 uwsgi --ini uwsgi.ini
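A plausible cause of the hang, given the "Operational MODE: preforking" line above: the TensorFlow session is created while the app is imported in the uWSGI master process, and the forked workers inherit session state (internal threads and locks) that does not survive fork, so sess.run() blocks forever in a worker. A hedged sketch of uwsgi.ini with the app loaded after forking instead; lazy-apps and enable-threads are standard uWSGI options, but whether they resolve this exact model is an assumption:

```ini
[uwsgi]
module = wsgi:app
uid = www-data
gid = www-data
master = true
processes = 5
socket = /tmp/uwsgi.socket
chmod-sock = 664
vacuum = true
die-on-term = true
; load the application in each worker *after* fork, so the TF session
; is created per worker instead of being inherited from the master
lazy-apps = true
; uWSGI disables Python threads by default; TensorFlow needs them
enable-threads = true
```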

Redis and Sidekiq in production on Ubuntu 16.04 using systemd and Deployment with Capistrano

I am deploying Sidekiq on Ubuntu 16.04 using a systemd service with Capistrano.
Sidekiq system service file /lib/systemd/system/sidekiq.service
#
# systemd unit file for CentOS 7, Ubuntu 15.04
#
# Customize this file based on your bundler location, app directory, etc.
# Put this in /usr/lib/systemd/system (CentOS) or /lib/systemd/system (Ubuntu).
# Run:
# - systemctl enable sidekiq
# - systemctl {start,stop,restart} sidekiq
#
# This file corresponds to a single Sidekiq process. Add multiple copies
# to run multiple processes (sidekiq-1, sidekiq-2, etc).
#
# See Inspeqtor's Systemd wiki page for more detail about Systemd:
# https://github.com/mperham/inspeqtor/wiki/Systemd
#
[Unit]
Description=sidekiq
# start us only once the network and logging subsystems are available,
# consider adding redis-server.service if Redis is local and systemd-managed.
After=syslog.target network.target
# See these pages for lots of options:
# http://0pointer.de/public/systemd-man/systemd.service.html
# http://0pointer.de/public/systemd-man/systemd.exec.html
[Service]
Type=simple
WorkingDirectory=/opt//current
# If you use rbenv:
# ExecStart=/bin/bash -lc 'bundle exec sidekiq -e production'
# If you use the system's ruby:
ExecStart=/usr/local/bin/bundle exec sidekiq -e production -C config/sidekiq.yml -L log/sidekiq.log
User=deploy
Group=deploy
UMask=0002
# if we crash, restart
RestartSec=1
Restart=on-failure
# output goes to /var/log/syslog
StandardOutput=syslog
StandardError=syslog
# This will default to "bundler" if we don't specify it
SyslogIdentifier=sidekiq
[Install]
WantedBy=multi-user.target
But when I start or stop the sidekiq service using the command below, it does not show any error:
sudo systemctl start/stop sidekiq
In the status, however, it throws an error with an exit code (sudo systemctl status sidekiq):
● sidekiq.service - sidekiq
Loaded: loaded (/lib/systemd/system/sidekiq.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Tue 2018-12-04 01:24:39 PST; 754ms ago
Process: 28133 ExecStart=/usr/local/bin/bundle exec sidekiq -e production -C config/sidekiq.yml -L log/sidekiq.log (code=exited, status=217/US
Main PID: 28133 (code=exited, status=217/USER)
Dec 04 01:24:39 tt-apps-05 systemd[1]: sidekiq.service: Unit entered failed state.
Dec 04 01:24:39 tt-apps-05 systemd[1]: sidekiq.service: Failed with result 'exit-code'.
Bundler is present in /usr/local/bin:
tt-apps-05:/usr/local/bin$ ls
autopep8 drt-open mod_passenger.so pygal_gen.pyc rst2latex.py sphinx-apidoc
bundle drt-query netaddr pygmentize rst2man.py sphinx-autogen
bundler drt-unassigned nosetests query-pr rst2odt_prepstyles.py sphinx-build
Capistrano deploy.rb
set :user, "deploy"

Rake::Task["sidekiq:stop"].clear_actions
Rake::Task["sidekiq:start"].clear_actions
Rake::Task["sidekiq:restart"].clear_actions

namespace :sidekiq do
  task :stop do
    on roles(:app) do
      execute :sudo, :systemctl, :stop, :sidekiq
    end
  end

  task :start do
    on roles(:app) do
      execute :sudo, :systemctl, :start, :sidekiq
    end
  end

  task :restart do
    on roles(:app) do
      execute :sudo, :systemctl, :restart, :sidekiq
    end
  end
end
I am not able to identify the problem here; can anyone help me?
Try this:
ExecStart=/bin/bash -lc "/usr/local/bin/bundle exec sidekiq -e production -C config/sidekiq.yml -L log/sidekiq.log"
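For context on why this can help: status=217/USER means systemd failed while setting up the User=/Group= for the process (for example, the account does not exist on that host), and a login shell (bash -l) additionally loads the deploy user's profile, so version managers such as rbenv or rvm put the right ruby and bundle on the PATH. A sketch of the Service section with the suggested line applied; the paths and the deploy user are taken from the question, not verified:

```ini
[Service]
Type=simple
WorkingDirectory=/opt//current
# -l makes bash a login shell, loading the deploy user's profile
# (rbenv/rvm PATH setup); the quoted command then runs under it
ExecStart=/bin/bash -lc "/usr/local/bin/bundle exec sidekiq -e production -C config/sidekiq.yml -L log/sidekiq.log"
# status=217/USER means systemd could not switch to this user/group;
# verify with `id deploy` that the account exists on the server
User=deploy
Group=deploy
```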

Docker - sh service script doesn't take options

I have this Docker sh service script in /etc/init.d on a Debian 8 machine:
#!/bin/sh
set -e

### BEGIN INIT INFO
# Provides:           docker
# Required-Start:     $syslog $remote_fs
# Required-Stop:      $syslog $remote_fs
# Should-Start:       cgroupfs-mount cgroup-lite
# Should-Stop:        cgroupfs-mount cgroup-lite
# Default-Start:      2 3 4 5
# Default-Stop:       0 1 6
# Short-Description:  Create lightweight, portable, self-sufficient containers.
# Description:
#  Docker is an open-source project to easily create lightweight, portable,
#  self-sufficient containers from any application. The same container that a
#  developer builds and tests on a laptop can run at scale, in production, on
#  VMs, bare metal, OpenStack clusters, public clouds and more.
### END INIT INFO

export PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

BASE=docker

# modify these in /etc/default/$BASE (/etc/default/docker)
DOCKER=/usr/bin/$BASE
# This is the pid file managed by docker itself
DOCKER_PIDFILE=/var/run/$BASE.pid
# This is the pid file created/managed by start-stop-daemon
DOCKER_SSD_PIDFILE=/var/run/$BASE-ssd.pid
DOCKER_LOGFILE=/var/log/$BASE.log
DOCKER_DESC="Docker"
DOCKER_OPTS="--insecure-registry 127.0.0.1:9000"

# Get lsb functions
. /lib/lsb/init-functions

if [ -f /etc/default/$BASE ]; then
    . /etc/default/$BASE
fi

# Check docker is present
if [ ! -x $DOCKER ]; then
    log_failure_msg "$DOCKER not present or not executable"
    exit 1
fi

check_init() {
    # see also init_is_upstart in /lib/lsb/init-functions (which isn't available in Ubuntu 12.04, or we'd use it directly)
    if [ -x /sbin/initctl ] && /sbin/initctl version 2>/dev/null | grep -q upstart; then
        log_failure_msg "$DOCKER_DESC is managed via upstart, try using service $BASE $1"
        exit 1
    fi
}

fail_unless_root() {
    if [ "$(id -u)" != '0' ]; then
        log_failure_msg "$DOCKER_DESC must be run as root"
        exit 1
    fi
}

cgroupfs_mount() {
    # see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
    if grep -v '^#' /etc/fstab | grep -q cgroup \
        || [ ! -e /proc/cgroups ] \
        || [ ! -d /sys/fs/cgroup ]; then
        return
    fi
    if ! mountpoint -q /sys/fs/cgroup; then
        mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
    fi
    (
        cd /sys/fs/cgroup
        for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
            mkdir -p $sys
            if ! mountpoint -q $sys; then
                if ! mount -n -t cgroup -o $sys cgroup $sys; then
                    rmdir $sys || true
                fi
            fi
        done
    )
}

echo -n "$1"

case "$1" in
    start)
        echo "start"
        check_init
        fail_unless_root
        cgroupfs_mount
        touch "$DOCKER_LOGFILE"
        chgrp docker "$DOCKER_LOGFILE"
        ulimit -n 1048576
        if [ "$BASH" ]; then
            ulimit -u 1048576
        else
            ulimit -p 1048576
        fi
        echo $DOCKER_OPTS
        log_begin_msg "Starting $DOCKER_DESC: $BASE"
        start-stop-daemon --start --background \
            --no-close \
            --exec "$DOCKER" \
            --pidfile "$DOCKER_SSD_PIDFILE" \
            --make-pidfile \
            -- \
            daemon -p "$DOCKER_PIDFILE" \
            $DOCKER_OPTS \
            >> "$DOCKER_LOGFILE" 2>&1
        log_end_msg $?
        ;;
    stop)
        check_init
        fail_unless_root
        log_begin_msg "Stopping $DOCKER_DESC: $BASE"
        start-stop-daemon --stop --pidfile "$DOCKER_SSD_PIDFILE" --retry 10
        log_end_msg $?
        ;;
    restart)
        check_init
        fail_unless_root
        docker_pid=`cat "$DOCKER_SSD_PIDFILE" 2>/dev/null`
        [ -n "$docker_pid" ] \
            && ps -p $docker_pid > /dev/null 2>&1 \
            && $0 stop
        $0 start
        ;;
    force-reload)
        check_init
        fail_unless_root
        $0 restart
        ;;
    status)
        echo "Prova"
        check_init
        status_of_proc -p "$DOCKER_SSD_PIDFILE" "$DOCKER" "$DOCKER_DESC"
        echo "a"
        ;;
    statu)
        echo "Prova"
        check_init
        status_of_proc -p "$DOCKER_SSD_PIDFILE" "$DOCKER" "$DOCKER_DESC"
        echo "a"
        ;;
    *)
        echo "Usage: service docker {start|stop|restart|status} PIPPO"
        exit 1
        ;;
esac
The problem is that it doesn't pass the DOCKER_OPTS value to the start-stop-daemon invocation in the start section (or at least the options don't take effect and don't appear in the command line of the resulting process in the ps aux output).
We've tried putting DOCKER_OPTS directly in the script as well as letting it be read from the default config file, but the result is the same: the options have no effect.
If we launch the process with start-stop-daemon directly from a terminal with the same options, it works just fine.
What can be the reason for this?
On a side note, we also played with the script a bit and found this strange situation: we made a copy of the status section called statu, then called both, and they gave us different results.
status :
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled)
Active: active (running) since Thu 2016-02-25 12:40:43 CET; 4min 35s ago
Docs: https://docs.docker.com
statu:
[FAIL[....] Docker is not running ... failed!
Since the code is the same, this is quite a surprise. Why is that?
Systemd isn't using your script in /etc/init.d: a native unit file takes precedence, and from your output it's using the package default configuration in /lib/systemd/system/docker.service. If you want to make changes:
# make a local copy, /lib/systemd can be overwritten on an application upgrade
cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
# edit /etc/systemd/system/docker.service
systemctl daemon-reload # updates systemd from files on the disk
systemctl restart docker # restart the service with your changes
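A lighter-weight alternative to copying the whole unit is a drop-in override, which survives package upgrades. A sketch only: the exact ExecStart (docker daemon vs. dockerd, the -H fd:// flag) depends on the Docker version shipped on Debian 8, so copy the original line from /lib/systemd/system/docker.service and append your options to it:

```ini
# /etc/systemd/system/docker.service.d/override.conf
# (`systemctl edit docker` creates and opens this file for you)
[Service]
# an empty ExecStart= first clears the packaged value; otherwise
# systemd rejects a second ExecStart= for a non-oneshot service
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --insecure-registry 127.0.0.1:9000
```

Then run systemctl daemon-reload and systemctl restart docker as above.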

UWSGI logrotation

I have a running uWSGI server. I need daily and file-size-based log rotation.
uwsgi configuration:
# file: /etc/init/uwsgi.conf
description "uWSGI starter"
start on (local-filesystems and runlevel [2345])
stop on runlevel [016]
respawn
# home - is the path to our virtualenv directory
# pythonpath - the path to our django application
# module - the wsgi handler python script
exec /home/testuser/virtual_environments/teatapp/bin/uwsgi \
--uid testuser \
--home /home/testuser/virtual_environments/teatapp \
--pythonpath /home/testuser/sci-github/teatapp\
--socket /tmp/uwsgi.sock \
--chmod-socket \
--module wsgi \
--logdate \
--optimize 2 \
--processes 2 \
--master \
--logto /var/log/uwsgi/uwsgi.log
logrotate configuration:
# file : /etc/logrotate.conf
"/var/log/uwsgi/*.log" {
copytruncate
daily
maxsize 5k
dateext
rotate 5
compress
missingok
create 777 root root
}
But log rotation is not working. Please tell me if there is any wrong configuration in logrotate.conf.
It's not needed to restart the uwsgi service if you use the copytruncate option in the logrotate file (as stated by Tamar).
But the problem may be that you forgot to enable logrotate in cron. Please make sure you have an entry in /etc/cron.daily called logrotate.
There is built-in log rotation in uWSGI, based on the log file size, for example (uwsgi.ini directive):
log-maxsize = 100000
If you want to use logrotate instead, you have to restart uwsgi (logrotate directives):
postrotate
stop uwsgi
start uwsgi
endscript
Just put this in your uWSGI configuration file:
daemonize = /var/log/uwsgi/uwsgi-#(exec://date +%%Y-%%m-%%d).log
This will create a new log each day, but be careful: don't daemonize if you are using master or emperor mode. Then, if the logs get big, you can control them with a script attached to a cron job that cleans the folder.
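If restarting uWSGI from postrotate is undesirable, uWSGI's touch-logreopen option offers a middle ground: the master reopens its log file when a trigger file is touched. A sketch combining it with logrotate; the trigger path /tmp/uwsgi-logreopen is an assumed example, and uWSGI must have been started with --touch-logreopen pointing at the same file:

```
# /etc/logrotate.d/uwsgi (sketch)
/var/log/uwsgi/*.log {
    daily
    maxsize 5k
    rotate 5
    compress
    missingok
    notifempty
    postrotate
        # tell the running uWSGI master to reopen its log file
        touch /tmp/uwsgi-logreopen
    endscript
}
```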
