I'm trying to set up a uWSGI server with a Lua script.
For now I just have a little test script (more or less the one shown in the uWSGI docs: http://uwsgi-docs.readthedocs.org/en/latest/Lua.html#your-first-wsapi-application).
Here is my script:
function run(wsapi_env)
    local headers = { ["Content-type"] = "text/html" }

    local function hello_text()
        coroutine.yield("<html><body>")
        coroutine.yield("<p>Hello Wsapi!</p>")
        coroutine.yield("<p>PATH_INFO: " .. wsapi_env.PATH_INFO .. "</p>")
        coroutine.yield("<p>SCRIPT_NAME: " .. wsapi_env.SCRIPT_NAME .. "</p>")
        coroutine.yield("</body></html>")
    end

    return 200, headers, coroutine.wrap(hello_text)
end

return run
I launch uWSGI with this command line (until I manage to launch it successfully once; then I will switch to a config file):
uwsgi --socket :63031 --plugins lua --lua main.lua --master
I've run this command from the directory where main.lua is stored (I've also tried the full path to main.lua).
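For reference, the config file I plan to switch to afterwards would just be the ini equivalent of that command line, roughly this sketch (untested):
[uwsgi]
plugins = lua
socket = :63031
lua = main.lua
master = true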
But uWSGI doesn't load the Lua script:
*** Starting uWSGI 2.0.7-debian (64bit) on [Thu Feb 5 15:45:00 2015] ***
compiled with version: 4.9.1 on 25 October 2014 19:17:54
os: Linux-3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt2-1 (2014-12-08)
nodename: ns342653.ip-91-121-135.eu
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /home/vincent/web
detected binary path: /usr/bin/uwsgi-core
your processes number limit is 63906
your memory page size is 4096 bytes
detected max file descriptor number: 65536
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address :63031 fd 3
Initializing Lua environment... (1 lua_States)
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 145536 bytes (142 KB) for 1 cores
*** Operational MODE: single process ***
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 8148)
spawned uWSGI worker 1 (pid: 8149, cores: 1)
How can I make uWSGI load my script?
Thanks for your answer.
(P.S.: I've successfully launched uWSGI with PSGI and a Perl script using almost the same configuration.)
Related
I have a uwsgi process running a Flask application. There is haproxy (running in mode http) sitting between the client and the application.
I am occasionally seeing the haproxy termination state "SD--" with Tc = 0, Tr = -1, and a returned HTTP code of -1. This means that haproxy encountered an explicit TCP disconnection from the uwsgi server.
Looking at the uwsgi logs, I found that the server was processing other requests normally at the same time, but the affected request never reached the server.
The only strange thing about the uwsgi logs at that point in time is that the number of requests managed by the current uwsgi worker is greater than the total number of requests managed by the whole uwsgi app,
like this:
[pid: 22759|app: 0|req: **47188**/**47178**] * POST * => generated 84 bytes in 970 msecs (HTTP/1.1 200) 2 headers in 71 bytes (3 switches on core 98)
I am wondering if this is abnormal, or in what scenarios these counters can end up like this?
I have an RPi running NGINX and uWSGI, serving a web page and an API via uWSGI.
Web page works fine, both locally and from the web.
The API works locally, but not via the web. My guess is that it's either the router or the NGINX configuration.
I am using cloudflare for the DNS, and all appears fine there.
I can GET / POST locally using Postman, but not via the web address. I would greatly appreciate any ideas on where to look.
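For comparison, a typical NGINX block that forwards the API to the uWSGI HTTP router on :9090 would look roughly like this sketch (the server name and location are placeholders, not my exact config):
server {
    listen 80;
    server_name example.com;   # placeholder

    location /api/ {
        # forward API requests to the uWSGI HTTP router
        proxy_pass http://127.0.0.1:9090;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}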
Output from uwsgi is:
*** Starting uWSGI 2.0.20 (32bit) on [Sat May 14 12:35:08 2022] ***
compiled with version: 8.3.0 on 06 October 2021 05:59:48
os: Linux-5.10.103-v7l+ #1529 SMP Tue Mar 8 12:24:00 GMT 2022
nodename: xxx
machine: armv7l
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /var/www/xxx.xxx/public
detected binary path: /home/pi/.local/bin/uwsgi
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 12393
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on :9090 fd 4
spawned uWSGI http 1 (pid: 3176)
uwsgi socket 0 bound to TCP address 127.0.0.1:34881 (port auto-assigned) fd 3
Python version: 3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0xd5c950
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 64408 bytes (62 KB) for 1 cores
*** Operational MODE: single process ***
<<<<<<<<<<<<<<<< Loaded script >>>>>>>>>>>>>>>>
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0xd5c950 pid: 3175 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 3175, cores: 1)
I have been trying to understand an issue I've had when running the roribio16/alpine-sqs Docker image on one of my machines. Whenever I try to run the image without specifying any other settings (docker run roribio16/alpine-sqs), I get the following:
[xxxx#yyyy ~]$ docker run roribio16/alpine-sqs
2021-05-29 15:48:41,216 INFO Included extra file "/etc/supervisor/conf.d/elasticmq.conf" during parsing
2021-05-29 15:48:41,216 INFO Included extra file "/etc/supervisor/conf.d/insight.conf" during parsing
2021-05-29 15:48:41,216 INFO Included extra file "/etc/supervisor/conf.d/sqs-init.conf" during parsing
2021-05-29 15:48:41,216 INFO Set uid to user 0 succeeded
2021-05-29 15:48:41,222 INFO RPC interface 'supervisor' initialized
2021-05-29 15:48:41,222 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2021-05-29 15:48:41,222 INFO supervisord started with pid 1
2021-05-29 15:48:42,225 INFO spawned: 'sqs-init' with pid 9
2021-05-29 15:48:42,229 INFO spawned: 'elasticmq' with pid 10
2021-05-29 15:48:42,230 INFO spawned: 'insight' with pid 11
cp: can't stat '/opt/custom/*.conf': No such file or directory
> sqs-insight@0.3.0 start /opt/sqs-insight
> node index.js
15:48:42.605 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (0.15.0) ...
Loading config file from "/opt/sqs-insight/lib/../config/config_local.json"
15:48:42.929 [elasticmq-akka.actor.default-dispatcher-2] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
Unable to load queues for undefined
Config contains 0 queues.
library initialization failed - unable to allocate file descriptor table - out of memorylistening on port 9325
2021-05-29 15:48:43,233 INFO success: sqs-init entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:43,233 INFO success: elasticmq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:43,234 INFO success: insight entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:43,234 INFO exited: sqs-init (exit status 0; expected)
2021-05-29 15:48:44,318 INFO exited: elasticmq (terminated by SIGABRT (core dumped); not expected)
2021-05-29 15:48:45,322 INFO spawned: 'elasticmq' with pid 67
15:48:45.743 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (0.15.0) ...
15:48:46.044 [elasticmq-akka.actor.default-dispatcher-2] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
library initialization failed - unable to allocate file descriptor table - out of memory2021-05-29 15:48:47,223 INFO success: elasticmq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:47,389 INFO exited: elasticmq (terminated by SIGABRT (core dumped); not expected)
2021-05-29 15:48:48,393 INFO spawned: 'elasticmq' with pid 89
15:48:48.766 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (0.15.0) ...
15:48:49.066 [elasticmq-akka.actor.default-dispatcher-3] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
library initialization failed - unable to allocate file descriptor table - out of memory^C2021-05-29 15:48:49,559 INFO success: elasticmq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:49,559 WARN received SIGINT indicating exit request
2021-05-29 15:48:49,559 INFO waiting for insight, elasticmq to die
2021-05-29 15:48:49,566 INFO stopped: insight (terminated by SIGTERM)
2021-05-29 15:48:50,431 INFO stopped: elasticmq (terminated by SIGABRT (core dumped))
With a bit of googling I found this post, where somebody had the same issue when running some other random image and posted that they managed to get it running by setting some ulimits, which also worked for me (docker run --ulimit nofile=122880:122880 roribio16/alpine-sqs).
I checked the ulimits set inside the container when I didn't use this configuration:
docker exec -it ca bash
$ ulimit -a
and found that the nofile setting was ridiculously high, which I assume is what is causing the container to run out of memory if too many files are being opened simultaneously. I don't have a particularly good understanding of how this works, though, so I would appreciate any clarification somebody could shed on that topic as well.
Anyway, the point of that ramble is that I want to find out where the default Docker container ulimits are set, as I don't understand why they are so high on the machine I am using. I have another machine that does not have this problem.
I can find lots of ways to change the default limits, but there does not seem to be much information about where these limits get set in the first place. I understand from the Docker documentation that if custom values are not set then the ulimits should be inherited from my system, but as far as I can tell my system's nofile settings are much lower than what I'm seeing in the container.
(Both machines run Manjaro Linux; however, the one that doesn't have this issue runs XFCE and the one that does runs KDE.)
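For context, I know the per-container defaults can be pinned explicitly in /etc/docker/daemon.json via default-ulimits, roughly like the sketch below (the values are just illustrative); what I'm really after is where the inherited default comes from in the first place.
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 1048576,
      "Soft": 1048576
    }
  }
}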
I'm trying to set up TensorFlow to use GPU acceleration with WSL 2 running Ubuntu 20.04. I'm following this tutorial and am running into the error seen here. However, when I follow the solution there and try to start Docker with sudo service docker start, I get told docker is an unrecognized service. Considering I can access the help menu and whatnot, I know Docker is installed. While I can get Docker to work with the desktop tool, it doesn't support CUDA (as mentioned in the SO post from earlier), so it's not very helpful. It's not really giving me error logs or anything, so please ask if you need more details.
Edit:
Considering the lack of details, here is a list of solutions I've tried to no avail: 1 2 3
Update: I used sudo dockerd to get the Docker daemon started and tried running the NVIDIA benchmark container, only to be met with:
INFO[2020-07-18T21:04:05.875283800-04:00] shim containerd-shim started address=/containerd-shim/021834ef5e5600bdf62a6a9e26dff7ffc1c76dd4ec9dadb9c1fcafb6c88b6e1b.sock debug=false pid=1960
INFO[2020-07-18T21:04:05.899420200-04:00] shim reaped id=70316df254d6b2633c743acb51a26ac2d0520f6f8e2f69b69c4e0624eaac1736
ERRO[2020-07-18T21:04:05.909710600-04:00] stream copy error: reading from a closed fifo
ERRO[2020-07-18T21:04:05.909753500-04:00] stream copy error: reading from a closed fifo
ERRO[2020-07-18T21:04:06.001006700-04:00] 70316df254d6b2633c743acb51a26ac2d0520f6f8e2f69b69c4e0624eaac1736 cleanup: failed to delete container from containerd: no such container
ERRO[2020-07-18T21:04:06.001045100-04:00] Handler for POST /v1.40/containers/70316df254d6b2633c743acb51a26ac2d0520f6f8e2f69b69c4e0624eaac1736/start returned error: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown.
ERRO[0000] error waiting for container: context canceled
Update 2: After installing the Windows Insider build and making everything as up to date as possible, I encountered a different error.
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
-fullscreen (run n-body simulation in fullscreen mode)
-fp64 (use double precision floating point values for simulation)
-hostmem (stores simulation data in host memory)
-benchmark (run benchmark to measure performance)
-numbodies=<N> (number of bodies (>= 1) to run in simulation)
-device=<d> (where d=0,1,2.... for the CUDA device to use)
-numdevices=<i> (where i=(number of CUDA devices > 0) to use for simulation)
-compare (compares simulation results running once on the default GPU and once on the CPU)
-cpu (run n-body simulation on the CPU)
-tipsy=<file.bin> (load a tipsy model file for simulation)
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
Error: only 0 Devices available, 1 requested. Exiting.
I have a GTX 970, so I'm not sure why it's not being detected. Running sudo lshw -C display confirmed that the graphics card isn't visible. I got:
*-display UNCLAIMED
description: 3D controller
product: Microsoft Corporation
vendor: Microsoft Corporation
physical id: 4
bus info: pci#941e:00:00.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: bus_master cap_list
configuration: latency=0
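For completeness, a few sanity checks that could narrow this down (a sketch; the CUDA image tag is only an example, and this assumes the NVIDIA WSL driver plus nvidia-container-toolkit are installed):
# check that the WSL GPU paravirtualization device is exposed to the distro
ls -l /dev/dxg
# check that the Windows-side driver libraries are mounted into WSL
ls /usr/lib/wsl/lib
# with dockerd running, test GPU visibility end to end inside a container
docker run --gpus all nvidia/cuda:11.0-base nvidia-smi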
I installed uWSGI with PCRE support (on Heroku) and got this message:
################# uWSGI configuration #################
pcre = True
kernel = Linux
malloc = libc
execinfo = False
ifaddrs = True
ssl = True
zlib = True
locking = pthread_mutex
plugin_dir = .
timer = timerfd
yaml = embedded
json = False
filemonitor = inotify
routing = True
debug = False
capabilities = False
xml = libxml2
event = epoll
############## end of uWSGI configuration #############
However, when I launch it using uwsgi --pcre-jit, I get this:
*** Starting uWSGI 2.0.10 (64bit) on [Mon Jun 22 22:51:56 2015] ***
compiled with version: 4.8.2 on 22 June 2015 22:37:39
os: Linux-3.13.0-49-generic #83-Ubuntu SMP Fri Apr 10 20:11:33 UTC 2015
nodename: 2bba099f-37e1-4ee2-aaa2-2400a68e6530
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /app
detected binary path: /app/.heroku/python/bin/uwsgi
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 256
your memory page size is 4096 bytes
detected max file descriptor number: 10000
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
The -s/--socket option is missing and stdin is not a socket.
It still says "pcre jit disabled". Why does uwsgi not use PCRE?
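(For what it's worth, that last log line only means no socket was supplied at all; a minimal invocation that at least binds one, with the port as an arbitrary example, would be:
uwsgi --http-socket :8000 --pcre-jit
Supplying a socket by itself shouldn't change the "pcre jit disabled" line, though.)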