uWSGI cannot auto-create a folder in /tmp

I have a uwsgiconfig.yaml like this:
socket: /tmp/uwsgi/myapp/socket
chmod-socket: 666
But this doesn't work, because there is no uwsgi or myapp folder in /tmp.
If I do it like this instead, it works:
socket: /tmp/uwsgi.myapp.socket
chmod-socket: 666
So why can't uWSGI just create the full path itself? Or what should I do; what's the best practice?

This is a common UNIX pattern/best practice: even mkdir does not create missing parent directories without the proper flag.
If you want to create the directory tree you can use uWSGI hooks:
http://uwsgi-docs.readthedocs.org/en/latest/Hooks.html
if-not-dir = /tmp/uwsgi/
hook-asap = mkdir:/tmp/uwsgi/
end-if =
or
exec-asap = mkdir -p /tmp/uwsgi/myapp
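Translated to the question's YAML config, a minimal sketch might look like this (assuming the top-level uwsgi: key that uWSGI's YAML loader expects, and that your build supports the exec-asap hook):
uwsgi:
  socket: /tmp/uwsgi/myapp/socket
  chmod-socket: 666
  exec-asap: mkdir -p /tmp/uwsgi/myapp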

Create a single container instead of 3 different containers

I am setting up a docker-compose file which creates 3 different containers, but I want to combine those 3 containers into a single container/image instead of setting them up as multiple containers at deployment.
My current list of containers is as follows:
my main container containing my code, built using a Dockerfile
the other 2 are Redis and Postgres containers, which I want to combine into the one above.
Is there any way to do so?
First of all, running redis, postgres and your "main container" in one container is NOT best practice.
Typically you should have 3 separate containers (single app per container) communicating over the network. Sometimes we want to run two or more lightweight services inside the same container but redis and postgres aren't such services.
I recommend reading: best practices for building containers.
However, it's possible to have multiple services in the same docker container using the supervisord process management system.
I will run both the redis and postgres services in one docker container (similar to your case) to illustrate how it works. This is for demonstration purposes only.
This is the directory structure; we only need a Dockerfile and supervisor.conf (the supervisord config file):
$ tree example_container/
example_container/
├── Dockerfile
└── supervisor.conf
First, I created a supervisord configuration file with redis and postgres services defined:
$ cat example_container/supervisor.conf
[supervisord]
nodaemon=true
[program:redis]
command=redis-server # command to run redis service
autorestart=true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes = 0
[program:postgres]
command=/usr/lib/postgresql/12/bin/postgres -D /var/lib/postgresql/12/main/ -c config_file=/etc/postgresql/12/main/postgresql.conf # command to run postgres service
autostart=true
autorestart=true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes = 0
user=postgres
environment=HOME="/var/lib/postgresql",USER="postgres"
Next I created a simple Dockerfile:
$ cat example_container/Dockerfile
FROM ubuntu:latest
ARG DEBIAN_FRONTEND=noninteractive
# Installing supervisor, redis and postgres
RUN apt-get update && apt-get install -y supervisor redis-server postgresql-12
# Copying supervisor configuration file to container
ADD supervisor.conf /etc/supervisor.conf
# Initializing redis and postgres services using supervisord
CMD ["supervisord","-c","/etc/supervisor.conf"]
And then I built the docker image:
$ docker build -t example_container:v1 .
Finally I ran and tested docker container using the image above:
$ docker run --name multi_services -dit example_container:v1
472c7b2eac7441360126f8fcd0cc80e0e63ac3039f8195715a3a400f6288a236
$ docker exec -it multi_services bash
root@472c7b2eac74:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.7 0.1 27828 23372 pts/0 Ss+ 10:04 0:00 /usr/bin/python3 /usr/bin/supervisord -c /etc/supervisor.conf
postgres 8 0.1 0.1 212968 28972 pts/0 S 10:04 0:00 /usr/lib/postgresql/12/bin/postgres -D /var/lib/postgresql/12/main/ -c config_file=/etc/postgresql/12/main/postgresql.conf
root 9 0.1 0.0 47224 6216 pts/0 Sl 10:04 0:00 redis-server *:6379
...
root@472c7b2eac74:/# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 9/redis-server *:6
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 8/postgres
tcp6 0 0 :::6379 :::* LISTEN 9/redis-server *:6
As you can see, it is possible to have multiple services in a single container, but this is NOT a recommended approach and should be used ONLY for testing.
Regarding Kubernetes, you can group your containers in a single pod, as a deployment unit.
A Pod is the smallest deployable unit of computing that you can create and manage in Kubernetes.
It is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.
A Pod's contents are always co-located and co-scheduled, and run in a shared context.
That would be more helpful than trying to merge the containers into a single container.
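For illustration, a minimal Pod manifest along those lines might look like this; the image names, versions and the POSTGRES_PASSWORD value are placeholders, not taken from the question:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: main
      image: myapp:latest   # the image built from your own Dockerfile
    - name: redis
      image: redis:6
    - name: postgres
      image: postgres:12
      env:
        - name: POSTGRES_PASSWORD
          value: example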

Disable uWSGI logging

Completely new to uWSGI (as of a few hours ago), I've been tasked with disabling logging. I found that this is what I need to add to my ini file: disable-logging=True. In my ini file at /etc/uwsgi/vassals/data.ini, I have:
virtualenv = /opt/our-analytics/apis/env
chdir = /opt/our-analytics/apis/
wsgi-file = app.py
callable = wsgi_app
socket = 127.0.0.1:3031
logto = /var/log/uwsgi/%n.log
My question is, can I simply use nano to add this one-liner disable-logging=True to the bottom of the ini file? Would/should I remove the entire logto = /var/log/uwsgi/%n.log line at the same time?
Then run sudo systemctl restart emperor.uwsgi.service?
Thanks!
p.s. I already checked the documentation shared via How to disable request logging in Django and uWSGI?, but it didn't fully answer my question.
This worked:
Add:
disable-logging=True
...to data.ini, then:
sudo systemctl restart emperor.uwsgi.service
cd to /var/log/uwsgi and remove the old log:
rm data.log
then restart again:
sudo systemctl restart emperor.uwsgi.service
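For reference, the resulting data.ini looks roughly like this (paths copied from the question; only the last line is new). Whether you keep logto is up to you: disable-logging only suppresses per-request logging, so logto still captures startup and error messages.
virtualenv = /opt/our-analytics/apis/env
chdir = /opt/our-analytics/apis/
wsgi-file = app.py
callable = wsgi_app
socket = 127.0.0.1:3031
logto = /var/log/uwsgi/%n.log
disable-logging=True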
If you are starting a small application, with Flask for example, and not using an .ini file, you can also include the --disable-logging flag in the command.
For example: uwsgi --http 127.0.0.1:8000 --wsgi-file flask_app.py --callable app --disable-logging

Dockerize 'at' scheduler

I want to put the at daemon (atd) in a separate docker container, to run it as an external, environment-independent scheduler service.
I can run atd with the following Dockerfile and docker-compose.yml:
$ cat Dockerfile
FROM alpine
RUN apk add --update at ssmtp mailx
CMD [ "atd", "-f" ]
$ cat docker-compose.yml
version: '2'
services:
  scheduler:
    build: .
    working_dir: /mnt/scripts
    volumes:
      - "${PWD}/scripts:/mnt/scripts"
But there are problems:
1) There is no built-in option to redirect atd logs to /proc/self/fd/1 so that they show up via the docker logs command. at only has a -m option, which sends mail to the user.
Is it possible to redirect at output from user mail to /proc/self/fd/1 (maybe via some compile flags)?
2) Right now I add a new task via a command like docker-compose exec scheduler at -f test.sh now + 1 minute. Is that a good way? I think a better way would be to find the file where at stores its queue, add that file as a volume, update it externally and just send docker restart after the file changes.
But I can't find where at stores its data on Alpine Linux (I only found /var/spool/atd/.SEQ, where at stores the id of the last job). Does anyone know where at stores its data?
I would also be glad to hear any advice regarding dockerizing at.
UPD. I found where at stores its data on Alpine: it's the /var/spool/atd folder. When I create a task via the at command, it creates an executable file there with a name like a000040190a2ff and content like:
#!/bin/sh
# atrun uid=0 gid=0
# mail root 1
umask 22
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin; export PATH
HOSTNAME=e605e8017167; export HOSTNAME
HOME=/root; export HOME
cd /mnt/scripts || {
echo 'Execution directory inaccessible' >&2
exit 1
}
#!/usr/bin/env sh
echo "Hello world"
UPD2: the difference between running at with and without the -m option is the third line of the generated script.
with -m option:
#!/bin/sh
# atrun uid=0 gid=0
# mail root 1
...
without -m:
#!/bin/sh
# atrun uid=0 gid=0
# mail root 0
...
According to the official man page:
The user will be mailed standard error and standard output from his
commands, if any. Mail will be sent using the command
/usr/sbin/sendmail
and
-m
Send mail to the user when the job has completed even if there was no
output.
I tried to schedule a simple Hello World script and found that no mail was sent:
# mail -u root
No mail for root

How to netcat multiple files without tar?

Currently I am transferring files back and forth over telnet and I would like to send multiple files at once.
However, my target platform (a Blackfin processor) does not have "tar" enabled in its kernel/busybox configuration (a uClinux distribution).
As you all know the normal command is:
nc -p 12345 -l | tar -x
tar -c * | nc 192.168.0.100 12345 # where x.100 is the robot's IP address
How can I send multiple files using netcat without using tar?
Please consider that I cannot easily add binaries to the platform. It would be best to do it with basic utilities and/or shell scripts.
I finally managed to do this myself; it can be done!
Here $l> stands for your machine with IP 192.168.0.10, and $e> is run on the embedded device without tar, in my case a robot. It uses old-fashioned dd, which is able to copy an entire disk.
$l> nc -p 12345 -l | dd obs=4K of=/tmp/file.jffs2
$e> dd ibs=4K if=/dev/mtdblock2 | nc 192.168.0.10 12345
That's it, but since not everybody knows how to read a filesystem in this form, here is how to mount it:
file /tmp/file.jffs2
/tmp/file.jffs2: Linux jffs2 filesystem data little endian
sudo su #careful
mknod /tmp/mtdblock0 b 31 0
modprobe loop
losetup /dev/loop0 /tmp/file.jffs2
modprobe mtdblock
modprobe block2mtd
echo "/dev/loop0,128KiB" > /sys/module/block2mtd/parameters/block2mtd
modprobe jffs2
mkdir /media/robot
mount -t jffs2 /tmp/mtdblock0 /media/robot
Ctrl-D #back as normal user
And yes, you need the loopback device, or else:
sudo mount -t jffs2 /tmp/file.jffs2 /media/robot
mount: /tmp/file.jffs2 is not a block device (maybe try `-o loop'?)
Logically, it is a file (characters), not a block device. The only thing I do not know is whether there is a dd syntax with which the command on the embedded device can select only a subset of the filesystem to include. I don't think that is likely, because it would require dd to understand jffs2, while its strength is its raw byte-copying behaviour.
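If you really need to keep individual files rather than shipping a whole filesystem image, a rough alternative without tar is to send one file per connection. This is only a sketch: it assumes the busybox nc closes the connection at end of input, the source directory is a placeholder, and the original file names are lost (the receiver just numbers the files):
$l> i=0; while true; do nc -p 12345 -l > "file_$i"; i=$((i+1)); done
$e> for f in /path/to/files/*; do nc 192.168.0.10 12345 < "$f"; sleep 1; done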

Environment variables and PHP

I have an Ubuntu server with a handful of custom environment variables set in /etc/environment, as per the Ubuntu community recommendation.
When I use PHP from the command line I can use PHP's getenv() function to access these variables.
Also, if I run phpinfo() from the command line, I see all of my variables in the ENVIRONMENT section.
However, when I try to access the same data inside processes run by php5-fpm, it is not available. All I can see in the ENVIRONMENT section of phpinfo() is:
USER www-data
HOME /var/www
I know the command line uses this ini:
/etc/php5/cli/php.ini
And fpm uses:
/etc/php5/fpm/php.ini
I've not managed to find any differences between the two that would explain why the environment variables come through in one but not the other.
Also, if I run:
sudo su www-data
and then echo the environment variables I am expecting, they are indeed available to the www-data user.
What do I need to do to get my environment variables into the php processes run by fpm?
It turns out that you have to explicitly set the environment variables in php-fpm.conf.
Here's an example:
[global]
pid = /var/run/php5-fpm.pid
error_log = /var/log/php5-fpm.log
[www]
user = www-data
group = www-data
listen = /var/run/php5-fpm.sock
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
chdir = /
env[MY_ENV_VAR_1] = 'value1'
env[MY_ENV_VAR_2] = 'value2'
1. Setting environment variables automatically in php-fpm.conf
clear_env = no
2. Setting environment variables manually in php-fpm.conf
env[MY_ENV_VAR_1] = 'value1'
env[MY_ENV_VAR_2] = 'value2'
Both methods are described in php-fpm.conf:
Clear environment in FPM workers.
Prevents arbitrary environment variables from reaching FPM worker processes by clearing the environment in workers before env vars specified in this pool configuration are added. Setting to "no" will make all environment variables available to PHP code via getenv(), $_ENV and $_SERVER.
Default Value: yes
clear_env = no
Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are
taken from the current environment. Default Value: clean env
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp
I found the solution in this GitHub discussion.
The problem is that when you run php-fpm, the process does not load the environment.
You can load it in the startup script.
My php-fpm was installed via apt-get.
So modify
/etc/init.d/php5-fpm
and add (note the space between the dot and the slash)
. /etc/profile
and modify /etc/profile to add
. /home/user/env.sh
In env.sh you can export whatever environment variables you need.
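For example (the variable names are just placeholders, reusing the ones from the config above):
export MY_ENV_VAR_1='value1'
export MY_ENV_VAR_2='value2'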
Then modify
php-fpm.conf
and add env[MY_ENV_VAR_1] = 'value1' under the [www] section.
Finally, restart php-fpm; the environment will then be loaded by FPM.
Adding on to the answers above: I was running php-fpm7 and nginx in an alpine:3.8 docker container. The problem I faced was that the environment variables of the user myuser were not getting copied over to the root user.
My docker entrypoint was:
sudo nginx # Runs nginx as daemon
sudo php-fpm7 -F -O # Runs php-fpm7 in foreground
The solution for this was
sudo -E nginx
sudo -E php-fpm7 -F -O
The -E option of sudo copies all environment variables of the current user over to root.
Of course, your php-fpm.d/www.conf file should have clear_env=no
And FYI, if you're using a daemon manager like supervisord, it has its own setting for copying the environment; for example, supervisord has a setting called copy_env=True.
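Whichever approach you use, a quick way to confirm that a variable actually reaches the FPM workers is a tiny test script requested through the pool (the variable name is just the one used in the examples above):
<?php
// getenv() returns the value once FPM passes the variable through; false means it is still missing.
var_dump(getenv('MY_ENV_VAR_1'));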
