How to diagnose intermittent uwsgi errors? - uwsgi

First let me briefly describe our set up before I ask the question proper:
We have a web application server (a virtual machine) running a Django application: nginx at the front, uwsgi beneath it, then a New Relic application wrapper around Django et al. The database is a separate PostgreSQL server located via SmartStack (synapse/nerve).
The issue we face is that occasionally (it happened once 2 weeks ago, and twice in the last 2 days) one or two of the uwsgi worker processes will trip up and start producing "django.db.utils.InterfaceError: connection already closed" on most of their requests.
Slightly redacted stack trace (user and application_name redacted):
Traceback (most recent call last):
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/newrelic-2.8.0.7/newrelic/api/web_transaction.py", line 863, in __call__
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/newrelic-2.8.0.7/newrelic/api/function_trace.py", line 90, in literal_wrapper
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/newrelic-2.8.0.7/newrelic/api/web_transaction.py", line 752, in __call__
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 194, in __call__
signals.request_started.send(sender=self.__class__)
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/dispatch/dispatcher.py", line 185, in send
response = receiver(signal=self, sender=sender, **named)
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/db/__init__.py", line 91, in close_old_connections
conn.abort()
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 374, in abort
self.rollback()
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 177, in rollback
self._rollback()
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 141, in _rollback
return self.connection.rollback()
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/db/utils.py", line 99, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 141, in _rollback
return self.connection.rollback()
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/newrelic-2.8.0.7/newrelic/hooks/database_dbapi2.py", line 82, in rollback
django.db.utils.InterfaceError: connection already closed
The stack trace never gets into our application; it only touches New Relic and Django. Once a worker trips, it doesn't recover: all further requests result in 500s in the uwsgi logs and 502s on the front side. I assume database connectivity is fine because the sibling workers continue to function normally, and restarting uwsgi instantly fixes the problem.
My question is how one would go about diagnosing this issue to pinpoint the root cause. I have checked everything I know how to check (memory, CPU, logs, database connectivity) and some things I don't fully understand but am trying to read up on (mainly file descriptors).
For now I have updated New Relic (the stack trace is from an older version), as it's the only thing I felt I could do.
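A mitigation often suggested for this class of error is to close Django's database connections right after each uwsgi worker forks, so that no worker keeps using a connection opened in the master process. The sketch below assumes the application is preloaded before fork and that uwsgidecorators is importable; I have not verified that this addresses the root cause here.

from uwsgidecorators import postfork
from django.db import connections

@postfork
def close_inherited_db_connections():
    # Connections opened before fork are shared with the master and sibling
    # workers; close them so each worker reconnects on its first query.
    for conn in connections.all():
        conn.close()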
I would appreciate any feedback; many Google searches have proved fruitless.
Replies may be slightly delayed, as my timezone says it's time to sleep. Also, apologies if this should be on Server Fault or similar; I just figured it's closer to an application debugging issue than a server configuration issue.

Related

Dask worker hangs after missing dep key

In a distributed GKE Dask cluster, I have one graph that stalls with the traceback below. The worker dashboard keeps reporting the same constant, high CPU value, while the GKE dashboard shows near-zero CPU for the pod. The worker dashboard's "last seen" value grows to many minutes. After 15 minutes I kill the GKE pod, yet the Dask scheduler still indicates the worker exists and keeps the task assigned to it. The scheduler seems to be wedged with regard to this task: no progress is made, and the failing work unit is neither cleaned up nor restarted.
I am using dask/distributed 2020.12.0, dask-gateway 0.9.0, and xarray 0.16.2.
What can cause a key to go missing?
How does one debug or work around the underlying issue here?
Edit:
For each run of the same graph, a different key appears in the traceback. Wedged workers within a single run show the same key in their tracebacks.
With enough patience/retries, the graph can succeed.
I am using an auto-scaling cluster with pre-emptible nodes, though the problem persists even if I remove auto-scaling and set a fixed number of nodes.
I've seen this on different workloads. The current workload I'm struggling with creates a dask dataframe from two arrays like this:
import dask
import dask.dataframe as dd
import numpy as np

# One delayed object per chunk, paired up across the two arrays.
image_chunks = image.to_delayed().ravel()
labels_chunks = labels.to_delayed().ravel()

results = []
for image_chunk, labels_chunk in zip(image_chunks, labels_chunks):
    # Chunk indices (taken from the delayed key) converted to array offsets.
    offsets = np.array(image_chunk.key[1:]) * np.array(image.chunksize)
    result = dask.delayed(stats)(image_chunk, labels_chunk, offsets, ...)
    results.append(result)
...
dask_df = dd.from_delayed(results, meta=df_meta)
dask_df = dask_df.groupby(['label', 'kind']).sum()
Example traceback #1
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/distributed/worker.py", line 2627, in execute
self.ensure_communicating()
File "/opt/conda/lib/python3.8/site-packages/distributed/worker.py", line 1880, in ensure_communicating
to_gather, total_nbytes = self.select_keys_for_gather(
File "/opt/conda/lib/python3.8/site-packages/distributed/worker.py", line 1985, in select_keys_for_gather
total_bytes = self.tasks[dep].get_nbytes()
KeyError: 'xarray-image-bc4e1224600f3930ab9b691d1009ed0c'
tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOLoop object at 0x7fac3d04f4f0>>, <Task finished name='Task-143' coro=<Worker.execute() done, defined at /opt/conda/lib/python3.8/site-packages/distributed/worker.py:2524> exception=KeyError('xarray-image-bc4e1224600f3930ab9b691d1009ed0c')>)
Example traceback #2
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/tornado/ioloop.py", line 741, in _run_callback
ret = callback()
File "/opt/conda/lib/python3.8/site-packages/tornado/ioloop.py", line 765, in _discard_future_result
future.result()
File "/opt/conda/lib/python3.8/site-packages/distributed/worker.py", line 2146, in gather_dep
await self.query_who_has(dep.key)
File "/opt/conda/lib/python3.8/site-packages/distributed/worker.py", line 2218, in query_who_has
self.update_who_has(response)
File "/opt/conda/lib/python3.8/site-packages/distributed/worker.py", line 2227, in update_who_has
self.tasks[dep].who_has.update(workers)
KeyError: "('rechunk-merge-607c9ba97d3abca4de3981b3de246bf3', 0, 0, 4, 4)"
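To narrow down where the worker and scheduler state diverge, one diagnostic sketch (assuming a connected distributed Client; the scheduler address is a placeholder and the key is copied from traceback #1 above) is to compare what each side believes about the missing key:

from distributed import Client

client = Client("tcp://scheduler:8786")  # placeholder address
missing_key = "xarray-image-bc4e1224600f3930ab9b691d1009ed0c"  # key from traceback #1

def worker_view(dask_worker):
    # Functions passed to client.run that accept a `dask_worker` argument
    # receive the Worker instance itself.
    ts = dask_worker.tasks.get(missing_key)
    return None if ts is None else ts.state

# Per-worker task state for the key (None means the worker has no record of it).
print(client.run(worker_view))

# The scheduler's view of the same key.
print(client.run_on_scheduler(
    lambda dask_scheduler: str(dask_scheduler.tasks.get(missing_key))
))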

IPython crashing when I hold any key

When I ssh into a particular remote machine and start an IPython session, it crashes whenever I hold down a key for about half a second (e.g. the backspace key).
The error output is pasted below:
File "/home/zach/local/anaconda3/bin/ipython", line 11, in <module>
sys.exit(start_ipython())
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/IPython/__init__.py", line 125, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/IPython/terminal/ipapp.py", line 356, in start
self.shell.mainloop()
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/IPython/terminal/interactiveshell.py", line 498, in mainloop
self.interact()
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/IPython/terminal/interactiveshell.py", line 481, in interact
code = self.prompt_for_code()
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/IPython/terminal/interactiveshell.py", line 410, in prompt_for_code
**self._extra_prompt_options())
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/shortcuts/prompt.py", line 738, in prompt
return run_sync()
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/shortcuts/prompt.py", line 727, in run_sync
return self.app.run(inputhook=self.inputhook, pre_run=pre_run2)
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/application/application.py", line 709, in run
return run()
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/application/application.py", line 682, in run
run_until_complete(f, inputhook=inputhook)
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/eventloop/defaults.py", line 123, in run_until_complete
return get_event_loop().run_until_complete(future, inputhook=inputhook)
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/eventloop/posix.py", line 66, in run_until_complete
self._run_once(inputhook)
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/eventloop/posix.py", line 85, in _run_once
self._inputhook_context.call_inputhook(ready, inputhook)
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/eventloop/inputhook.py", line 78, in call_inputhook
threading.Thread(target=thread).start()
File "/home/zach/local/anaconda3/lib/python3.7/threading.py", line 847, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
If you suspect this is an IPython bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@python.org
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
%config Application.verbose_crash=True
It drops me from there into a broken bash session where my keystrokes do not appear on screen, although I can still execute commands such as ls, man, pwd, ipython, etc. I can only kill the bash session by pressing Ctrl-D followed by Ctrl-C. In particular, following the message's suggestion to run %tb and so forth is not possible.
Other programs are not competing for threads. Looking through the error, it appears that an event loop may be trying to create a thread to handle every key press, and this eventually causes a failure to allocate more threads. It seems a little far-fetched that this would be the issue, though, since holding a key down is surely expected behavior.
This seems potentially similar to the issue https://ipython.org/faq.html#ipython-crashes-under-os-x-when-using-the-arrow-keys.
It appears not to be a Python issue per se, since the issue disappears if I use plain Python rather than IPython. I initially used Anaconda's ipython but also switched to the system ipython in /usr/bin/ipython, with the same results. I also tried a clean install of Anaconda, with the same issue, and a fresh install of Anaconda on a different machine with the same OS, where the issue did not occur.
I am looking for ideas to make progress on this issue. Any ideas are appreciated, and I will post follow-up data if needed.
Python 3.7.3 (default, Mar 27 2019, 22:11:17)
IPython 7.5.0
Ubuntu 18.04.2 LTS
It is fixed now, but still somewhat mysterious to me. I followed the stack trace all the way down through CPython to the pthreads library calls. The pthreads documentation indicates that this error can essentially only arise if the heap is out of memory or if the maximum number of threads has already been allocated. I used ulimit to set the per-process virtual memory limit to unlimited (it had been ~3 GB), and this resolved the issue.
So apparently the virtual memory limit interfered with the ability to allocate a thread. The obvious interpretation is that more memory was needed, although it is hard to believe that more than 3 GB is needed to respond to a key press. Another possibility is that the amount of memory reserved per thread is a function of the virtual memory limit; I remember something like that in the pthreads documentation, although it was a bit above my head.
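For anyone who wants to check the same two quantities from Python, here is a small sketch (the 512 KiB stack size is an arbitrary illustrative value, not a recommendation):

import resource
import threading

# The per-process address-space limit that "ulimit -v" controls;
# resource.RLIM_INFINITY corresponds to "unlimited".
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("RLIMIT_AS soft/hard:", soft, hard)

# Every new thread reserves stack space (commonly 8 MB of virtual memory by
# default) that counts against that limit, so spawning a thread per keystroke
# under a ~3 GB cap can exhaust it. Shrinking the stack reserved for new
# threads is one way to test that hypothesis without raising ulimit.
threading.stack_size(512 * 1024)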

RuntimeError: concurrent poll() invocation using celery

When running Celery in a Docker container that receives REST API calls from other containers, I get a RuntimeError: concurrent poll() invocation.
Did anyone face a similar error?
I attach the traceback.
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/usr/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/opt/www/api/api/training_call.py", line 187, in start_process
result_state.get(on_message=self._on_raw_message, propagate=False)
File "/usr/local/lib/python3.5/dist-packages/celery/result.py", line 226, in get
on_message=on_message,
File "/usr/local/lib/python3.5/dist-packages/celery/backends/asynchronous.py", line 188, in wait_for_pending
for _ in self._wait_for_pending(result, **kwargs):
File "/usr/local/lib/python3.5/dist-packages/celery/backends/asynchronous.py", line 255, in _wait_for_pending
on_interval=on_interval):
File "/usr/local/lib/python3.5/dist-packages/celery/backends/asynchronous.py", line 56, in drain_events_until
yield self.wait_for(p, wait, timeout=1)
File "/usr/local/lib/python3.5/dist-packages/celery/backends/asynchronous.py", line 65, in wait_for
wait(timeout=timeout)
File "/usr/local/lib/python3.5/dist-packages/celery/backends/redis.py", line 127, in drain_events
message = self._pubsub.get_message(timeout=timeout)
File "/usr/local/lib/python3.5/dist-packages/redis/client.py", line 3135, in get_message
response = self.parse_response(block=False, timeout=timeout)
File "/usr/local/lib/python3.5/dist-packages/redis/client.py", line 3034, in parse_response
if not block and not connection.can_read(timeout=timeout):
File "/usr/local/lib/python3.5/dist-packages/redis/connection.py", line 628, in can_read
return self._parser.can_read() or self._selector.can_read(timeout)
File "/usr/local/lib/python3.5/dist-packages/redis/selector.py", line 28, in can_read
return self.check_can_read(timeout)
File "/usr/local/lib/python3.5/dist-packages/redis/selector.py", line 156, in check_can_read
events = self.read_poller.poll(timeout)
RuntimeError: concurrent poll() invocation
The broker connection is not thread-safe, so you need to handle thread-safety in your application code.
@Laizer mentioned the ticket where this error was introduced in the Python core library.
One way to do this is to wrap all the calls that block until task completion in a shared Lock:
import celery
import threading

@celery.shared_task(bind=True)
def debug_task(self):
    print('Hello, world')

def boom(nb_tasks):
    """ not thread safe - raises RuntimeError during concurrent executions """
    tasks = celery.group([debug_task.s() for _ in range(nb_tasks)])
    pool = tasks.apply_async()
    pool.join()  # raised from here

CELERY_POLL_LOCK = threading.Lock()

def safe(nb_tasks):
    tasks = celery.group([debug_task.s() for _ in range(nb_tasks)])
    pool = tasks.apply_async()
    with CELERY_POLL_LOCK:  # prevents concurrent calls to poll()
        pool.join()

def main(nb_threads, nb_tasks_per_thread):
    for func in (safe, boom):
        threads = [threading.Thread(target=func, args=(nb_tasks_per_thread, )) for _ in range(nb_threads)]
        for a_thread in threads:
            a_thread.start()
        for a_thread in threads:
            a_thread.join()

main(10, 100)
This is a naive approach that's suitable for me because I don't expect much concurrency and all the tasks are relatively fast (~10 s).
If you have a different "profile", you may need something more elaborate, e.g. a single polling task that periodically polls for all pending groups/tasks, as sketched below.
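A rough sketch of that single-poller idea (the names, queue, and timing here are assumptions for illustration, not tested code):

import queue
import threading

# Worker threads put AsyncResult/GroupResult handles here instead of calling
# .join()/.get() themselves.
pending_results = queue.Queue()

def result_poller():
    waiting = []
    while True:
        # Pick up newly submitted result handles.
        try:
            waiting.append(pending_results.get(timeout=0.5))
        except queue.Empty:
            pass
        # This thread is the only one that ever touches the result backend,
        # so poll() is never invoked concurrently.
        for res in list(waiting):
            if res.ready():
                res.get(propagate=False)
                waiting.remove(res)

threading.Thread(target=result_poller, daemon=True).start()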
I had the same error come up with an application that was using Redis pub/sub directly. Firing off many calls to redis.client.PubSub.get_message in quick succession led to this race condition. My solution was to slow down the rate of polling for new messages.
I faced the same problem and solved it by running
pip install -U "celery[redis]"
I hope it's helpful to you.
https://docs.celeryproject.org/en/latest/getting-started/brokers/redis.html

py2neo 3.1.2 connection problems

I am trying to commit a small graph of three nodes (a_py2neo_subgraph) to my GrapheneDB Neo4j server. I am using py2neo 3.1.2.
import py2neo

g = py2neo.Graph(server)
tx = g.begin()
tx.create(a_py2neo_subgraph)
tx.commit()
tx.finished()
Where "server" is the exact value given by Graphene (i.e. something like http://nick:password#hobby-hash.dbs.graphenedb.com:port/db/data/). In order to debug, I ran with py2neo watch and this was the info given by watch for my httpstream:
> GET server
< 200 OK [1287]
Then I get the following traceback:
Traceback (most recent call last):
File "C:/Users/petr.svarny/PycharmProjects/untitled/test.py", line 116, in <module>
tx = g.begin()
File "C:\Users\petr.svarny\untitled\lib\site-packages\py2neo\database\__init__.py", line 370, in begin
return self.transaction_class(self, autocommit)
File "C:\Users\petr.svarny\untitled\lib\site-packages\py2neo\database\__init__.py", line 1249, in __init__
self.session = driver.session()
File "C:\Users\petr.svarny\untitled\lib\site-packages\py2neo\packages\neo4j\v1\session.py", line 126, in session
connection = connect(self.address, self.ssl_context, **self.config)
File "C:\Users\petr.svarny\untitled\lib\site-packages\py2neo\packages\neo4j\v1\bolt.py", line 419, in connect
s = create_connection(host_port)
File "C:\Python27\Lib\socket.py", line 571, in create_connection
raise err
socket.error: [Errno 10060]
I have already tried setting the socket timeout to 9999, which did not help. I attempted to connect to the server via telnet and managed to do so without any problem. Similarly, I am able to access the server address when I enter it into my browser. I also managed to run my code against a local Neo4j database.
Thank you for any suggestions.
I'm Judit from GrapheneDB. Can you check which version of py2neo you are using? The problem you have described looks like a common issue when moving from py2neo v2 to v3. Since py2neo v3 supports the Bolt protocol, you have to specify the Bolt port or tell the driver you're not using it.
If you don't want to use Bolt connection, your code should look like the following:
graph = Graph("http://USER:PASS@hobby-hash.dbs.graphenedb.com:port/db/data/", bolt=False)
If that's not your case, it'd be useful to know which versions of Python/py2neo/Neo4j you are using.
Cheers!
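If you do want to use Bolt, the idea is to point py2neo at the dedicated Bolt port of your GrapheneDB instance. A sketch under the assumption that py2neo v3's secure and bolt_port settings apply to your setup (the port numbers are placeholders to be replaced with the values from the GrapheneDB dashboard):

from py2neo import Graph

graph = Graph("http://USER:PASS@hobby-hash.dbs.graphenedb.com:port/db/data/",
              bolt=True,
              secure=True,       # GrapheneDB Bolt endpoints are typically TLS-protected
              bolt_port=24786)   # placeholder: the Bolt port shown for your instance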
I had a similar problem with py2neo 3.1.2 and could not get it to work even with the "bolt=False" switch.
I switched to neorestclient 2.1.1 and now it works.

ZMQ crashes "randomly" in aiohttp web service

We have an aiohttp-based web service which uses ZMQ to send jobs to workers and wait for the results. We are of course using the ZMQ event loop, so we can await ZMQ sockets. "Sometimes" the process crashes and we get this stack trace:
...
await socket.send(z, flags=flags)
File "/usr/local/lib/python3.5/dist-packages/zmq/eventloop/future.py", line 165, in send
kwargs=dict(flags=flags, copy=copy, track=track),
File "/usr/local/lib/python3.5/dist-packages/zmq/eventloop/future.py", line 276, in _add_send_event
timeout_ms = self._shadow_sock.sndtimeo
File "/usr/local/lib/python3.5/dist-packages/zmq/sugar/attrsettr.py", line 45, in _getattr_
return self._get_attr_opt(upper_key, opt)
File "/usr/local/lib/python3.5/dist-packages/zmq/sugar/attrsettr.py", line 49, in _get_attr_opt
return self.get(opt)
File "zmq/backend/cython/socket.pyx", line 449, in zmq.backend.cython.socket.Socket.get (zmq/backend/cython/socket.c:4920)
File "zmq/backend/cython/socket.pyx", line 221, in zmq.backend.cython.socket._getsockopt (zmq/backend/cython/socket.c:2860)
"Sometimes" means, that the code works fine, if I just run it on my test machine. We encountered the problem in some rare cases when using docker containers, but were never able to reproduce it in an reliable way. Since we moved our containers into a Kubernetes cluster, it occurs much more often. Does anybody know, what could be the source of the above stack trace?
aiohttp is not intended to be used with vanilla pyzmq.
Use aiozmq loopless streams instead.
See also https://github.com/zeromq/pyzmq/issues/894 and https://github.com/aio-libs/aiozmq/blob/master/README.rst
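For reference, a minimal sketch of the suggested aiozmq stream approach (the endpoint address and the REQ/REP framing are assumptions about your setup):

import asyncio

import aiozmq
import zmq

async def send_job(payload: bytes) -> bytes:
    # A "loopless" aiozmq stream runs on the ordinary asyncio loop that
    # aiohttp already uses, so no ZMQ-specific event loop is installed.
    stream = await aiozmq.create_zmq_stream(zmq.REQ, connect="tcp://worker:5555")
    try:
        stream.write([payload])      # one multipart message
        reply = await stream.read()  # list of reply frames
        return reply[0]
    finally:
        stream.close()

# asyncio.get_event_loop().run_until_complete(send_job(b"ping"))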
