I'm running into some permission issues with uwsgi running on Ubuntu 12. Here is my ini file:
[uwsgi]
project = djangorpoject
base_dir = /home/mysite/mysite.com
uid = www-data
gid = www-data
plugins = http,python
processes = 4
harakiri = 60
reload-mercy = 8
cpu-affinity = 1
max-requests = 2000
limit-as = 512
reload-on-as = 256
reload-on-rss = 192
no-orphans = True
#vacuum = True
master = True
logto = /var/log/uwsgi/%n.log
#daemonize = /var/log/uwsgi/%n.log
#catch-exceptions
disable-logging
virtualenv = %(base_dir)/venv
chdir = %(base_dir)
module = %(project).wsgi:application
socket = /run/uwsgi/%n.sock
chmod-socket = 666
chown-socket = www-data:www-data
As you can see, I am running chmod and chown on the socket file. When I attempt to load my site, I am getting the following error:
bind(): Permission denied [socket.c line 107]
This goes away if I run
sudo chown -R www-data:www-data /run/uwsgi
But this doesn't persist when I reboot my server. I assume this is because uwsgi recreates the folder on boot. Is there any way to permanently apply the permissions to the socket?
/run is a tmpfs, which means it is wiped on every reboot. Create a directory such as /var/uwsgi instead, which will persist.
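A minimal sketch of that setup, assuming the uwsgi processes run as www-data as in the ini above:
sudo mkdir -p /var/uwsgi
sudo chown www-data:www-data /var/uwsgi
Then point the socket there in the ini:
socket = /var/uwsgi/%n.sock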
I want to customize some MariaDB server configs (wait_timeout, etc.). I followed the instructions for the official Docker image and created two .cnf files inside my host dir /config/mariadb, which is mounted into the container via these volumes:
volumes:
- /config/mariadb:/etc/mysql/conf.d:ro
- /config/mariadb:/etc/mysql/mariadb.conf.d:ro
root@server:~# ls -l /config/mariadb
total 12
-rwxrwx--- 1 root root 263 Jun 8 13:43 mariadb-finetuning.cnf
-rwxrwx--- 1 root root 367 Jun 8 13:43 mariadb-inno.cnf
These are the config files:
# mariadb-inno.cnf
[mysqld]
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
# default_storage_engine = InnoDB
# innodb_buffer_pool_size = 256M
# innodb_log_buffer_size = 8M
# innodb_file_per_table = 1
# innodb_open_files = 400
# innodb_io_capacity = 400
innodb_flush_method = fsync
# mariadb-finetuning.cnf
[mysqld]
# * Fine Tuning
#
# max_connections = 100
# connect_timeout = 5
wait_timeout = 3600
# max_allowed_packet = 16M
# thread_cache_size = 128
# sort_buffer_size = 4M
# bulk_insert_buffer_size = 16M
# tmp_table_size = 32M
# max_heap_table_size = 32M
Somehow the options have no effect and I don't know why:
show variables where variable_name = 'innodb_flush_method'
# Variable_name Value
# innodb_flush_method O_DIRECT
Is there a better way to check why MariaDB does not pick up these configs?
Update:
Manually editing my.cnf inside the container works but isn't what I want to do.
Running mysqld --print-defaults prints out ... --innodb_flush_method=fsync as a result. It seems to me that this may be caused by the entrypoint script /docker-entrypoint.sh?
If your MariaDB can run with your settings, use docker exec to run a bash inside your MariaDB container and check whether your volume mount shows the expected files in the expected location.
Then also check that the configuration or startup of MariaDB inside the container is set up to read these files at all. You could do that by providing empty files, or files with gibberish inside: if the server still starts happily with gibberish in them, they are not being read.
Only then start looking at the content of the files. Once you get this far, you know you have control over the containerized database engine.
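For example, a couple of checks along those lines (assuming the container is named mariadb; substitute your container name):
docker exec -it mariadb ls -l /etc/mysql/conf.d /etc/mysql/mariadb.conf.d
docker exec -it mariadb mysqld --print-defaults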
The problem was file permissions.
The config files must be readable by the Linux user running the database.
There may be several solutions; in my case the simplest was to apply 774 file permissions so that the files are readable by every user:
root@server:~# ls -l /config/mariadb
total 12
-rwxrwxr-- 1 root root 263 Jun 8 13:43 mariadb-finetuning.cnf
-rwxrwxr-- 1 root root 367 Jun 8 13:43 mariadb-inno.cnf
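For reference, a command that would produce these permissions (644 would work just as well, since .cnf files do not need to be executable):
sudo chmod 774 /config/mariadb/*.cnf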
I have a very strange issue with my Rust program that uses the rocket-rs library.
The issue I am facing is that when I build my program in a Docker container using a Dockerfile I created, some parts of the config I set out in the Rocket.toml file are not applied. More specifically, I have set the log level option to critical in the config file and that works, but the address option I have set in the config file is not applied.
What is weird is that when I build and run on my local machine, all the options are applied properly, but not in the container.
Output when I build and run the program on my machine (no docker):
Configured for release.
>> address: 0.0.0.0
>> port: 8000
>> workers: 12
>> ident: Rocket
>> keep-alive: 5s
>> limits: bytes = 8KiB, data-form = 2MiB, file = 1MiB, form = 32KiB, json = 1MiB, msgpack = 1MiB, string = 8KiB
>> tls: disabled
>> temp dir: C:\Users\Nlanson\AppData\Local\Temp\
>> log level: critical
>> cli colors: true
>> shutdown: ctrlc = true, force = true, grace = 2s, mercy = 3s
Output when I build and run the program in a docker container:
Configured for release.
>> address: 127.0.0.1 //This is what I do not want
>> port: 8000
>> workers: 2
>> ident: Rocket
>> keep-alive: 5s
>> limits: bytes = 8KiB, data-form = 2MiB, file = 1MiB, form = 32KiB, json = 1MiB, msgpack = 1MiB, string = 8KiB
>> tls: disabled
>> temp dir: /tmp
>> log level: critical
>> cli colors: true
>> shutdown: ctrlc = true, force = true, signals = [SIGTERM], grace = 2s, mercy = 3s
Here is the Dockerfile I am using:
FROM rust as builder
WORKDIR /app
COPY . .
RUN cargo build --release
FROM rust as runtime
WORKDIR /app
COPY --from=builder /app/target/release/server .
COPY --from=builder /app/database.db .
EXPOSE 8000
CMD ["./server"]
and my rocket config file:
[global]
#address is not applied
address = "0.0.0.0"
#log level is applied
log_level = "critical"
I have tried a few things to troubleshoot this issue:
Run the container with docker run -it <container name> bash and check that all the required files, including the config file, are copied into the container
Build the program in the container through bash using different options.
Please let me know if I am missing any details.
Thanks in advance
You can set an environment variable named ROCKET_ADDRESS in the Dockerfile. I am sharing an example:
ENV ROCKET_ADDRESS=0.0.0.0
EXPOSE 8000
CMD ["./server"]
I installed an LXC container via lxc-create:
sudo lxc-create -t download -n dos1
I chose Debian buster arm64 and ran it:
sudo lxc-start -n dos1 -d
This outputs an error:
lxc-start: dos1: tools/lxc_start.c: main: 290 No container config specified
What is the problem? Am I doing something wrong?
PS: the configs are set in /etc/lxc/default.conf:
lxc.net.0.type = veth
lxc.net.0.link = virbr0
lxc.net.0.flags = up
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
~/.config/lxc/default.conf:
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
Update:
The problem is solved. I had to specify the path to the configuration file explicitly. For example:
sudo lxc-start -n dos1 -f /var/lib/lxc/dos1/config -d
All lxc-* commands must then be executed with sudo.
I got this error because I didn’t specify sudo. Without root permissions, lxc-start couldn’t find and read the container config to start it.
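A quick way to check which side a container lives on is to list both (root containers live under /var/lib/lxc, unprivileged ones under ~/.local/share/lxc):
sudo lxc-ls --fancy    # root containers
lxc-ls --fancy         # unprivileged containers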
I'm trying to have several gitlab runners using different docker daemons on the same host
I am currently using gitlab-runner 10.7.0 and docker 19.03.3. The goal is to maximize the usage of resources. Since I have two SSD disks on the machine, I want the runners to use both of them. The only way I found to have some runners use one disk and others use the other is to run two docker daemons, one on each disk.
I have one docker daemon running on unix:///var/run/docker-1.sock and one on unix:///var/run/docker-2.sock. They each use a dedicated bridge, created manually. The (systemd) startup command line looks like:
/usr/bin/dockerd --host unix:///var/run/docker_socket/docker-%i.sock \
    --containerd=/run/containerd/containerd.sock \
    --pidfile /var/run/docker-%i.pid \
    --data-root /data/local%i/docker/ \
    --exec-root /data/local%i/docker_run/ \
    --bridge docker-%i \
    --fixed-cidr 172.%i0.0.1/17
The gitlab_runner mounts /var/run/docker_socket/ and runs on docker-1.sock.
I tried having one runner per docker daemon, but then two jobs run on the same runner although the limit is set to 1 (and errors also sometimes appear, like ERROR: Job failed (system failure): Error: No such container: ...).
After registration the config.toml looks like:
concurrent = 20
check_interval = 0
[[runners]]
name = "[...]-large"
limit = 1
output_limit = 32768
url = "[...]"
token = "[...]"
executor = "docker"
[runners.docker]
host = "unix:///var/run/docker-1.sock"
tls_verify = false
image = "debian:jessie"
memory = "24g"
cpuset_cpus = "1-15"
privileged = false
security_opt = ["seccomp=unconfined"]
disable_cache = false
volumes = ["/var/run/docker-1.sock:/var/run/docker.sock"]
shm_size = 0
[runners.cache]
[[runners]]
name = "[...]-medium-1"
limit = 1
output_limit = 32768
url = "[...]"
token = "[...]"
executor = "docker"
[runners.docker]
host = "unix:///var/run/docker-2.sock"
tls_verify = false
image = "debian:jessie"
memory = "12g"
cpuset_cpus = "20-29"
privileged = false
security_opt = ["seccomp=unconfined"]
disable_cache = false
volumes = ["/var/run/docker-2.sock:/var/run/docker.sock"]
shm_size = 0
[runners.cache]
The two docker daemons are working fine. Tested with docker --host unix:///var/run/docker-<id>.sock ps
The current solution seems to be kind of OK but there are random errors in the gitlab_runner logs:
ERROR: Appending trace to coordinator... error couldn't execute PATCH against http://[...]/api/v4/jobs/223116/trace: Patch http://[...]/api/v4/jobs/223116/trace: read tcp [...] read: connection reset by peer runner=0ec8a845
Other people have tried this, apparently with some success:
This one seems to list the whole set of options needed to properly run each instance of dockerd: Is it possible to start multiple docker daemons on the same machine. What are yours?
This other one, https://www.jujens.eu/posts/en/2018/Feb/25/multiple-docker/, does not mention the possible extra bridge config.
NB: Docker documentation says the feature is experimental: https://docs.docker.com/engine/reference/commandline/dockerd/#run-multiple-daemons
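For completeness, a sketch of how the startup line above can be wrapped in a systemd template unit, so each daemon runs as a separate instance (the unit name docker@.service and the dependency list are my assumptions; the flags are the ones quoted above):
# /etc/systemd/system/docker@.service
[Unit]
Description=Docker daemon instance %i
After=network.target containerd.service

[Service]
ExecStart=/usr/bin/dockerd --host unix:///var/run/docker_socket/docker-%i.sock \
    --containerd=/run/containerd/containerd.sock \
    --pidfile /var/run/docker-%i.pid \
    --data-root /data/local%i/docker/ \
    --exec-root /data/local%i/docker_run/ \
    --bridge docker-%i \
    --fixed-cidr 172.%i0.0.1/17

[Install]
WantedBy=multi-user.target
Each instance is then started with sudo systemctl start docker@1 docker@2.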
This is my uWSGI config:
[uwsgi]
uid = $APPUSER
gid = $APPGROUP
socket = $SOCK
processes = 4
chdir = $APPDIR
virtualenv = $APPVENV
pythonpath = $APPVENV/bin/python
module = run
callable = app
emperor-pidfile = $APPDIR/emperor.pid
daemonize = /var/log/emperor.log
When the emperor runs, it does create the emperor log file, but it runs in the foreground rather than in the background as a daemon.
What might be causing this?
You should also pass --daemonize <logfile> to the emperor.
And see How to make uwsgi --emperor run as daemon
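A minimal sketch of such an invocation, assuming the vassal ini files live in /etc/uwsgi/vassals:
uwsgi --emperor /etc/uwsgi/vassals --daemonize /var/log/emperor.log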