ZMQ crashes "randomly" in aiohttp web service - docker

We have an aiohttp-based web service which uses ZMQ to send jobs to workers and wait for the results. We are of course using the ZMQ event loop, so we can await ZMQ sockets. "Sometimes" the process crashes and we get this stack trace:
...
await socket.send(z, flags=flags)
File "/usr/local/lib/python3.5/dist-packages/zmq/eventloop/future.py", line 165, in send
kwargs=dict(flags=flags, copy=copy, track=track),
File "/usr/local/lib/python3.5/dist-packages/zmq/eventloop/future.py", line 276, in _add_send_event
timeout_ms = self._shadow_sock.sndtimeo
File "/usr/local/lib/python3.5/dist-packages/zmq/sugar/attrsettr.py", line 45, in _getattr_
return self._get_attr_opt(upper_key, opt)
File "/usr/local/lib/python3.5/dist-packages/zmq/sugar/attrsettr.py", line 49, in _get_attr_opt
return self.get(opt)
File "zmq/backend/cython/socket.pyx", line 449, in zmq.backend.cython.socket.Socket.get (zmq/backend/cython/socket.c:4920)
File "zmq/backend/cython/socket.pyx", line 221, in zmq.backend.cython.socket._getsockopt (zmq/backend/cython/socket.c:2860)
"Sometimes" means, that the code works fine, if I just run it on my test machine. We encountered the problem in some rare cases when using docker containers, but were never able to reproduce it in an reliable way. Since we moved our containers into a Kubernetes cluster, it occurs much more often. Does anybody know, what could be the source of the above stack trace?

aiohttp is not intended to be used with vanilla pyzmq.
Use aiozmq loopless streams instead.
See also https://github.com/zeromq/pyzmq/issues/894 and https://github.com/aio-libs/aiozmq/blob/master/README.rst
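For illustration, a minimal sketch of what the aiozmq loopless-stream approach can look like inside a coroutine; the DEALER socket type and the tcp://127.0.0.1:5555 endpoint are placeholders, not the asker's actual setup:

import asyncio

import aiozmq
import zmq


async def send_job(payload: bytes) -> bytes:
    # Loopless stream: it attaches to the already-running asyncio loop,
    # so it can be used from an aiohttp handler without a ZMQ event loop.
    stream = await aiozmq.create_zmq_stream(
        zmq.DEALER, connect="tcp://127.0.0.1:5555")
    stream.write([payload])       # multipart message: a list of frames
    reply = await stream.read()   # list of reply frames from the worker
    stream.close()
    return reply[0]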

Related

Dask with TLS connection cannot end the program with to_parquet method

I am using Dask to process 10 files, each about 142 MB in size. I built a function with the delayed decorator; the following is an example:
import os

import dask
import pandas as pd


@dask.delayed
def process_one_file(input_file_path, save_path):
    res = []
    for line in open(input_file_path):
        res.append(line)
    df = pd.DataFrame(res)
    df.to_parquet(save_path + os.path.basename(input_file_path))


if __name__ == '__main__':
    # ClusterClient: the asker's connection helper (not shown); presumably
    # it wraps distributed.Client for the TLS-enabled cluster.
    client = ClusterClient()
    input_dir = ""
    save_dir = ""
    print("start to process")
    csvs = [process_one_file(input_dir + filename, save_dir)
            for filename in os.listdir(input_dir)]
    dask.compute(csvs)
However, Dask does not always run successfully. After processing all files, the program often hangs.
I used the command line to run the program. It often hangs after printing "start to process". I know the program runs correctly, since I can see all the output files after a while.
But the program never stops. If I disable TLS, the program runs successfully.
It is strange that Dask cannot stop the program when I enable the TLS connection. How can I solve it?
I found that if I add the to_parquet call, the program cannot stop, while if I remove it, it runs successfully.
I have found the problem. I set 10 GB for each process, i.e. memory-limit=10GB. I set 2 workers in total, each with 2 processes, and each process has 2 threads.
Thus, each machine has 4 processes which can occupy 40 GB. However, my machine only has 32 GB. If I lower the memory limit, the program runs successfully!
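For reference, a rough sketch of how the per-worker memory cap might be sized with a local distributed cluster; the 7 GB figure and the worker/thread counts are illustrative, not the asker's exact configuration:

from dask.distributed import Client, LocalCluster

# 4 worker processes x 7 GB = 28 GB, which stays under the machine's 32 GB,
# unlike 4 processes x 10 GB = 40 GB in the original setup.
cluster = LocalCluster(n_workers=4, threads_per_worker=2, memory_limit="7GB")
client = Client(cluster)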

IPython crashing when I hold any key

When I ssh into a particular remote machine and start an IPython session, it crashes whenever I hold down a key for about half a second (e.g. the backspace key).
The error output is pasted below:
File "/home/zach/local/anaconda3/bin/ipython", line 11, in <module>
sys.exit(start_ipython())
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/IPython/__init__.py", line 125, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/IPython/terminal/ipapp.py", line 356, in start
self.shell.mainloop()
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/IPython/terminal/interactiveshell.py", line 498, in mainloop
self.interact()
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/IPython/terminal/interactiveshell.py", line 481, in interact
code = self.prompt_for_code()
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/IPython/terminal/interactiveshell.py", line 410, in prompt_for_code
**self._extra_prompt_options())
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/shortcuts/prompt.py", line 738, in prompt
return run_sync()
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/shortcuts/prompt.py", line 727, in run_sync
return self.app.run(inputhook=self.inputhook, pre_run=pre_run2)
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/application/application.py", line 709, in run
return run()
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/application/application.py", line 682, in run
run_until_complete(f, inputhook=inputhook)
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/eventloop/defaults.py", line 123, in run_until_complete
return get_event_loop().run_until_complete(future, inputhook=inputhook)
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/eventloop/posix.py", line 66, in run_until_complete
self._run_once(inputhook)
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/eventloop/posix.py", line 85, in _run_once
self._inputhook_context.call_inputhook(ready, inputhook)
File "/home/zach/local/anaconda3/lib/python3.7/site-packages/prompt_toolkit/eventloop/inputhook.py", line 78, in call_inputhook
threading.Thread(target=thread).start()
File "/home/zach/local/anaconda3/lib/python3.7/threading.py", line 847, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
If you suspect this is an IPython bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@python.org
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
%config Application.verbose_crash=True
It drops me from there into a broken bash session where my keystrokes do not appear on screen, although I can execute commands such as ls, man, pwd, ipython, etc. I can only kill the bash session by pressing Ctrl-D followed by Ctrl-C. In particular, following the message's suggestion to run %tb and so forth is not possible.
Other programs are not competing for threads. Looking through the error, it looks like an event loop is possibly trying to create a thread to handle every key press, and this eventually causes a failure to allocate more threads. It seems a little far-fetched that this would be the issue, though, since holding a key down is surely expected behavior.
This seems potentially similar to the issue https://ipython.org/faq.html#ipython-crashes-under-os-x-when-using-the-arrow-keys.
It appears not to be a Python issue per se, since the issue disappears if I use plain Python rather than IPython. I initially used the Anaconda ipython but also switched to the system ipython in /usr/bin/ipython, with the same results. I also tried a clean install of Anaconda, with the same issue, and a fresh install of Anaconda on a different machine with the same OS, where the issue did not occur.
I am looking for ideas to make progress on this issue. Any ideas are appreciated, and I will post follow-up data if needed.
Python 3.7.3 (default, Mar 27 2019, 22:11:17)
IPython 7.5.0
Ubuntu 18.04.2 LTS
It is fixed now, but still somewhat mysterious to me. I followed the stack trace all the way down through CPython to the pthreads library calls. The pthreads documentation indicates that this error can essentially only arise if the process is out of heap memory or has reached the maximum number of threads. I used ulimit to set the virtual memory per process to unlimited (it had been ~3 GB). This resolved the issue.
So apparently the virtual memory limit interfered with the ability to allocate a thread. The obvious explanation is that more memory was needed, although it is hard to believe that more than 3 GB is needed to respond to a key press. Another possibility is that the amount of memory allocated per thread is a function of the virtual memory limit; I remember something like that in the pthreads documentation, although it was a bit above my head.
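As a quick check, the virtual memory limit that ulimit -v controls can also be inspected from Python with the resource module (a Linux-specific sketch):

import resource

# RLIMIT_AS is the per-process virtual address space limit (ulimit -v).
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("virtual memory limit:",
      "unlimited" if soft == resource.RLIM_INFINITY else "%d bytes" % soft)

# Each new thread reserves virtual address space for its stack (commonly
# 8 MB by default), so a low RLIMIT_AS can make pthread_create fail,
# which Python reports as "RuntimeError: can't start new thread".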

RuntimeError: concurrent poll() invocation using celery

When running Celery in a Docker container which receives REST API calls from other containers, I get a RuntimeError: concurrent poll() invocation.
Did anyone face a similar error?
I attach the traceback.
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/usr/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/opt/www/api/api/training_call.py", line 187, in start_process
result_state.get(on_message=self._on_raw_message, propagate=False)
File "/usr/local/lib/python3.5/dist-packages/celery/result.py", line 226, in get
on_message=on_message,
File "/usr/local/lib/python3.5/dist-packages/celery/backends/asynchronous.py", line 188, in wait_for_pending
for _ in self._wait_for_pending(result, **kwargs):
File "/usr/local/lib/python3.5/dist-packages/celery/backends/asynchronous.py", line 255, in _wait_for_pending
on_interval=on_interval):
File "/usr/local/lib/python3.5/dist-packages/celery/backends/asynchronous.py", line 56, in drain_events_until
yield self.wait_for(p, wait, timeout=1)
File "/usr/local/lib/python3.5/dist-packages/celery/backends/asynchronous.py", line 65, in wait_for
wait(timeout=timeout)
File "/usr/local/lib/python3.5/dist-packages/celery/backends/redis.py", line 127, in drain_events
message = self._pubsub.get_message(timeout=timeout)
File "/usr/local/lib/python3.5/dist-packages/redis/client.py", line 3135, in get_message
response = self.parse_response(block=False, timeout=timeout)
File "/usr/local/lib/python3.5/dist-packages/redis/client.py", line 3034, in parse_response
if not block and not connection.can_read(timeout=timeout):
File "/usr/local/lib/python3.5/dist-packages/redis/connection.py", line 628, in can_read
return self._parser.can_read() or self._selector.can_read(timeout)
File "/usr/local/lib/python3.5/dist-packages/redis/selector.py", line 28, in can_read
return self.check_can_read(timeout)
File "/usr/local/lib/python3.5/dist-packages/redis/selector.py", line 156, in check_can_read
events = self.read_poller.poll(timeout)
RuntimeError: concurrent poll() invocation
The broker connection is not thread-safe, so you need to handle thread-safety in your application code.
@Laizer mentioned the ticket where this error was introduced in the Python core library.
One way to do it is to wrap all the calls that block until task completion in a shared lock:
import threading

import celery


@celery.shared_task(bind=True)
def debug_task(self):
    print('Hello, world')


def boom(nb_tasks):
    """Not thread safe - raises RuntimeError during concurrent executions."""
    tasks = celery.group([debug_task.s() for _ in range(nb_tasks)])
    pool = tasks.apply_async()
    pool.join()  # raised from here


CELERY_POLL_LOCK = threading.Lock()


def safe(nb_tasks):
    tasks = celery.group([debug_task.s() for _ in range(nb_tasks)])
    pool = tasks.apply_async()
    with CELERY_POLL_LOCK:  # prevents concurrent calls to poll()
        pool.join()


def main(nb_threads, nb_tasks_per_thread):
    for func in (safe, boom):
        threads = [threading.Thread(target=func, args=(nb_tasks_per_thread,))
                   for _ in range(nb_threads)]
        for a_thread in threads:
            a_thread.start()
        for a_thread in threads:
            a_thread.join()


main(10, 100)
This is a naive approach that is suitable for me because I don't expect much concurrency and all the tasks are relatively fast (~10 s).
If you have a different profile, you may need something more involved (e.g. a single polling task that periodically polls for all pending groups/tasks).
I had the same error come up with an application that was using Redis pub/sub directly. Firing off many calls to redis.client.PubSub.get_message in quick succession led to this race condition. My solution was to slow down the rate of polling for new messages.
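In case it helps, a sketch of what throttled polling can look like with redis-py pub/sub; the channel name and the handle() function are placeholders:

import time

import redis

r = redis.Redis()
pubsub = r.pubsub()
pubsub.subscribe("results")          # placeholder channel name

while True:
    # get_message() blocks for up to `timeout` seconds inside redis-py.
    message = pubsub.get_message(timeout=1.0)
    if message and message["type"] == "message":
        handle(message["data"])      # handle() is a hypothetical callback
    time.sleep(0.1)                  # throttle how often we poll the socket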
I faced the same problem and solved it by
pip install -U "celery[redis]"
Hope it's helpful to you.
https://docs.celeryproject.org/en/latest/getting-started/brokers/redis.html

py2neo 3.1.2 connection problems

I am trying to commit a small graph of three nodes (a_py2neo_subgraph) to my GrapheneDB Neo4j server. I am using py2neo 3.1.2.
g = py2neo.Graph(server)
tx = g.begin()
tx.create(a_py2neo_subgraph)
tx.commit()
tx.finished()
Where "server" is the exact value given by Graphene (i.e. something like http://nick:password#hobby-hash.dbs.graphenedb.com:port/db/data/). In order to debug, I ran with py2neo watch and this was the info given by watch for my httpstream:
> GET server
< 200 OK [1287]
Then I get the following traceback:
Traceback (most recent call last):
File "C:/Users/petr.svarny/PycharmProjects/untitled/test.py", line 116, in <module>
tx = g.begin()
File "C:\Users\petr.svarny\untitled\lib\site-packages\py2neo\database\__init__.py", line 370, in begin
return self.transaction_class(self, autocommit)
File "C:\Users\petr.svarny\untitled\lib\site-packages\py2neo\database\__init__.py", line 1249, in __init__
self.session = driver.session()
File "C:\Users\petr.svarny\untitled\lib\site-packages\py2neo\packages\neo4j\v1\session.py", line 126, in session
connection = connect(self.address, self.ssl_context, **self.config)
File "C:\Users\petr.svarny\untitled\lib\site-packages\py2neo\packages\neo4j\v1\bolt.py", line 419, in connect
s = create_connection(host_port)
File "C:\Python27\Lib\socket.py", line 571, in create_connection
raise err
socket.error: [Errno 10060]
I already tried setting the socket timeout to 9999; it did not help. I attempted to connect to the server via telnet and managed to do so without any problem. Similarly, I can access the server address when I enter it into my browser. I also managed to run my code against a local Neo4j database.
Thank you for any suggestions.
I'm Judit from GrapheneDB. Can you check which version of py2neo you are using? The problem you have described looks like a common issue when moving from py2neo v2 to v3. Since py2neo v3 supports the Bolt protocol, you have to specify the Bolt port or just tell the driver you're not using it.
If you don't want to use a Bolt connection, your code should look like the following:
graph = Graph("http://USER:PASS@hobby-hash.dbs.graphenedb.com:port/db/data/", bolt=False)
If that's not your case, it'd be useful to know which versions of Python/py2neo/Neo4j you are using.
Cheers!
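For the Bolt route the answer mentions, a sketch under the assumption that py2neo v3 accepts the connection pieces as keyword settings and that GrapheneDB exposes a separate Bolt port; the host, credentials and port number below are placeholders:

from py2neo import Graph

# bolt_port should be the Bolt port shown in your GrapheneDB connection
# panel, which is different from the HTTP port in the /db/data/ URL.
graph = Graph(bolt=True,
              host="hobby-hash.dbs.graphenedb.com",
              user="USER", password="PASS",
              bolt_port=24786)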
I had a similar problem with py2neo 3.1.2 and could not get it to work even with the bolt=False switch.
I switched to neorestclient 2.1.1 and that now works.

How to diagnose intermittent uwsgi errors?

First let me briefly describe our setup before I ask the question proper:
We have a web application server (a virtual machine) running a Django application: nginx at the front, uwsgi running under that, then a New Relic application wrapper followed by Django et al. The database is a separate PostgreSQL server located via SmartStack (synapse/nerve).
The issue we face is that occasionally (it happened once two weeks ago, and twice in the last two days), one or two of the uwsgi worker processes will trip up and start producing "django.db.utils.InterfaceError: connection already closed" on most of their requests.
slightly redacted stack trace (user and application_name):
Traceback (most recent call last):
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/newrelic-2.8.0.7/newrelic/api/web_transaction.py", line 863, in __call__
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/newrelic-2.8.0.7/newrelic/api/function_trace.py", line 90, in literal_wrapper
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/newrelic-2.8.0.7/newrelic/api/web_transaction.py", line 752, in __call__
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 194, in __call__
signals.request_started.send(sender=self.__class__)
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/dispatch/dispatcher.py", line 185, in send
response = receiver(signal=self, sender=sender, **named)
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/db/__init__.py", line 91, in close_old_connections
conn.abort()
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 374, in abort
self.rollback()
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 177, in rollback
self._rollback()
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 141, in _rollback
return self.connection.rollback()
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/db/utils.py", line 99, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/django/db/backends/__init__.py", line 141, in _rollback
return self.connection.rollback()
File "/home/user/webapps/application_name/local/lib/python2.7/site-packages/newrelic-2.8.0.7/newrelic/hooks/database_dbapi2.py", line 82, in rollback
django.db.utils.InterfaceError: connection already closed
The stack trace never gets into our application; it only touches New Relic and Django. Once a worker trips, it doesn't recover, and all further requests result in 500s in the uwsgi logs and 502s on the front side. I assume database connectivity is fine, because the sibling workers continue to function normally and restarting uwsgi instantly fixes the problem.
My question is how one would go about diagnosing this issue to pinpoint the root cause. I have checked everything I know how to check (memory, CPU, logs, database connectivity) and some things I don't fully understand but am trying to read up on (mainly file descriptors).
For now I have updated New Relic (the stack trace is from an older version), as it's the only thing I felt I could do.
I would appreciate any feedback, many google searches have proved fruitless.
Replies may be slightly delayed; my timezone says it's time to sleep. Also, apologies if this should be on Server Fault or something similar; I just figured it's closer to an application debugging issue than a server config issue.
