Flask: How do I successfully use multiprocessing (not multithreading)? - opencv

I am using a Flask server to handle requests for some image-processing tasks.
The processing relies heavily on OpenCV, and I would now like to trivially parallelize some of the slower steps.
I have a preference for multiprocessing rather than multithreading (please assume the former in your answers).
But multiprocessing with OpenCV is apparently broken (I am on Python 2.7 + macOS): https://github.com/opencv/opencv/issues/5150
One solution (see https://github.com/opencv/opencv/issues/5150#issuecomment-400727184) is to use the excellent Loky (https://github.com/tomMoral/loky)
[Question: What other working solutions exist apart from concurrent.futures, loky, joblib, ...?]
But Loky leads me to the following stacktrace:
  a,b = f.result()
  File "/anaconda2/lib/python2.7/site-packages/loky/_base.py", line 433, in result
    return self.__get_result()
  File "/anaconda2/lib/python2.7/site-packages/loky/_base.py", line 381, in __get_result
    raise self._exception
BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
This was caused directly by
'''
Traceback (most recent call last):
  File "/anaconda2/lib/python2.7/site-packages/loky/process_executor.py", line 391, in _process_worker
    call_item = call_queue.get(block=True, timeout=timeout)
  File "/anaconda2/lib/python2.7/multiprocessing/queues.py", line 135, in get
    res = self._recv()
  File "myfile.py", line 44, in <module>
    app.config['EXECUTOR_MAX_WORKERS'] = 5
  File "/anaconda2/lib/python2.7/site-packages/werkzeug/local.py", line 348, in __getattr__
    return getattr(self._get_current_object(), name)
  File "/anaconda2/lib/python2.7/site-packages/werkzeug/local.py", line 307, in _get_current_object
    return self.__local()
  File "/anaconda2/lib/python2.7/site-packages/flask/globals.py", line 52, in _find_app
    raise RuntimeError(_app_ctx_err_msg)
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
to interface with the current application object in some way. To solve
this, set up an application context with app.app_context(). See the
documentation for more information.
'''
The functions to be parallelized are not being called from app/main.py, but rather from an arbitrarily deep submodule.
I have also tried the similarly promising flask-executor (https://flask-executor.readthedocs.io/en/latest), so far in vain.
So the question is:
How can I safely pass the application context through to the workers or otherwise get multiprocessing working (without recourse to multithreading)?
I can build out this question if you need more information. Many thanks as ever.
Related resources:
Copy flask request/app context to another process
Flask Multiprocessing
Update:
Non-OpenCV calls work fine with flask-executor (no Loky) :)
The problem only appears when calling an OpenCV function such as knnMatch.
If Loky fixes the OpenCV issue, I wonder whether it can be made to work with flask-executor (it hasn't for me, so far).
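For concreteness, here is a minimal sketch of the pattern I am aiming for (the function names and descriptor arguments are mine, purely illustrative): only plain, picklable NumPy arrays are handed to a loky worker, and cv2 is imported inside the worker function so that neither the parent's OpenCV state nor any Flask context object crosses the process boundary.

from loky import get_reusable_executor

def match_descriptors(des1, des2):
    """Runs in a worker process. Importing cv2 here keeps the parent's
    OpenCV state out of the children (the crux of opencv issue #5150)."""
    import cv2
    matcher = cv2.BFMatcher()
    return matcher.knnMatch(des1, des2, k=2)

# One shared, reusable pool of worker processes for the whole app.
executor = get_reusable_executor(max_workers=5)

def handle_matching(des1, des2):
    # des1/des2 are plain NumPy descriptor arrays: picklable, and no
    # app/request proxy from Flask is ever pickled along with them.
    future = executor.submit(match_descriptors, des1, des2)
    return future.result()

The idea is that the BrokenProcessPool above came from pickling something bound to the application context, so this sketch keeps all Flask objects on the parent side.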

Related

trying to run StyleGAN in Jupyter notebook, it says "tensorflow has no attribute 'Dimension'"

!python encode_images.py --optimizer=lbfgs --face_mask=True --iterations=6 --use_lpips_loss=0 --use_discriminator_loss=0 --output_video=True aligned_images/ generated_images/ latent_representations/
print("\n************ Latent code optimization finished! ***************")
2021-08-24 13:33:11.033451: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
Traceback (most recent call last):
  File "encode_images.py", line 12, in <module>
    import dnnlib.tflib as tflib
  File "C:\Users\bkvij\Office Rapid Innovation\StyleGAN Face Morphing - Arxiv Insights\stylegan-encoder\dnnlib\tflib\__init__.py", line 8, in <module>
    from . import autosummary
  File "C:\Users\bkvij\Office Rapid Innovation\StyleGAN Face Morphing - Arxiv Insights\stylegan-encoder\dnnlib\tflib\autosummary.py", line 31, in <module>
    from . import tfutil
  File "C:\Users\bkvij\Office Rapid Innovation\StyleGAN Face Morphing - Arxiv Insights\stylegan-encoder\dnnlib\tflib\tfutil.py", line 34, in <module>
    def shape_to_list(shape: Iterable[tf.Dimension]) -> List[Union[int, None]]:
AttributeError: module 'tensorflow' has no attribute 'Dimension'
It's because tf.Dimension was removed in TensorFlow 2.x (it lives on as tf.compat.v1.Dimension).
Go to stylegan/dnnlib/tflib/tfutil.py and change tf.Dimension on line 34 to tf.compat.v1.Dimension.
I think you're using TensorFlow v2. Using Google Colab will fix the problem for you; otherwise, you will need to make a virtual environment with Python 3.6, TensorFlow 1.10 and cuDNN 7.3.1, and that will solve the problem.
To expand on Faezeh's answer, you'll have to make the following edits to tfutil.py:
tf.Dimension (line 34) → tf.compat.v1.Dimension
tf.variable_scope (line 74) → tf.compat.v1.variable_scope
tf.Session (line 128) → tf.compat.v1.Session
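As a concrete illustration, here is what the first edit looks like, using the signature from the traceback above (function bodies elided; the other two lines change the same way):

# dnnlib/tflib/tfutil.py, line 34 -- before (fails on TensorFlow 2.x):
def shape_to_list(shape: Iterable[tf.Dimension]) -> List[Union[int, None]]:
    ...

# After -- resolves the AttributeError via the TF1 compatibility layer:
def shape_to_list(shape: Iterable[tf.compat.v1.Dimension]) -> List[Union[int, None]]:
    ...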
Alternatively, you could just install TensorFlow 1.x and save yourself the hassle.
You can run this in a terminal: pip install tensorflow-addons==0.14.0

Retrieving SwissProt from ExPASy server - error

I've just started learning my way around Biopython and I'm trying to use ExPASy to retrieve SwissProt records, as described on page 180 of the Biopython tutorial (http://biopython.org/DIST/docs/tutorial/Tutorial.pdf) and in a relevant ROSALIND exercise (http://rosalind.info/problems/dbpr/ - click to expand the "Programming shortcut" section).
The code I'm using is basically the same as in the ROSALIND exercise:
from Bio import ExPASy
from Bio import SwissProt
handle = ExPASy.get_sprot_raw('Q5SLP9')
record = SwissProt.read(handle)
However, the SwissProt.read function gives the following error messages (I've trimmed some of the filepaths):
Traceback (most recent call last):
  File "code.py", line 4, in <module>
    record = SwissProt.read(handle)
  File "lib\site-packages\Bio\SwissProt\__init__.py", line 151, in read
    record = _read(handle)
  File "lib\site-packages\Bio\SwissProt\__init__.py", line 255, in _read
    _read_ft(record, line)
  File "lib\site-packages\Bio\SwissProt\__init__.py", line 594, in _read_ft
    assert not from_res and not to_res, line
AssertionError: /note="Single-stranded DNA-binding protein"
I found this has been reported on GitHub (https://github.com/biopython/biopython/issues/2417), so I'm not the first to hit it, but I can't find an updated version of the package or any other way to fix the issue. Maybe it's because I'm very new to using packages. Could someone help me, please?
Please update your Biopython to version 1.77. The issue was fixed with pull request 2484.
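For reference, after upgrading (pip install --upgrade biopython), you can confirm from Python that you are on a fixed version:

import Bio

# The parser fix landed in 1.77, so anything >= 1.77 is fine.
print(Bio.__version__)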

CSV file that lists all student answers to the problem

I followed the steps to download a CSV of problem responses for a problem, and it says: "The problem responses report is being created. To view the status of the report, see Pending Tasks below."
But I am not seeing any pending tasks, nor are the files being generated; I can't see the generated file even after refreshing.
However, I can generate other CSV files, for example the Problem Grade Report.
The issue is only with the problem responses report.
PS: When I checked the Admin panel, I could see that the request had failed with this error:
{"exception": "AssertionError", "traceback": "Traceback (most recent call last):\n File \"/openedx/venv/local/lib/python2.7/site-packages/celery/app/trace.py\", line 240, in trace_task\n R = retval = fun(*args, **kwargs)\n File \"/openedx/venv/local/lib/python2.7/site-packages/celery/app/trace.py\", line 438, in __protected_call__\n return self.run(*args, **kwargs)\n File \"/openedx/edx-platform/lms/djangoapps/instructor_task/tasks.py\", line 171, in calculate_problem_responses_csv\n return run_main_task(entry_id, task_fn, action_name)\n File \"/openedx/edx-platform/lms/djangoapps/instructor_task/tasks_helper/runner.py\", line 111, in run_main_task\n task_progress = task_fcn(entry_id, course_id, task_input, action_name)\n File \"/openedx/edx-platform/lms/djangoapps/instructor_task/tasks_helper/grades.py\", line 737, in generate\n usage_key_str=problem_location\n File \"/openedx/edx-platform/lms/djangoapps/instructor_task/tasks_helper/grades.py\", line 674, ...", "message": ""}
Please help.
After much trying, I was able to figure out that my issue was caused by an operation Open edX does not handle well: changing the correct answer of a problem after a few students had already submitted their responses. I was able to get their grades recalculated against the new answer using the Rescore option, but I am still unable to generate this report.
I will update here if I get a solution.

Service __len__ not found Unexpected error, recovered safely

Python 3.8
My code:
from googleads import adwords

def execute_request():
    adwords_client = adwords.AdWordsClient.LoadFromStorage(path="google_general/googleads.yaml")
    campaign_service = adwords_client.GetService('CampaignService', version='v201809')
    pass

context["dict_list"] = execute_request()
Traceback:
Traceback (most recent call last):
  File "/home/michael/pycharm-community-2019.3.2/plugins/python-ce/helpers/pydev/_pydevd_bundle/pydevd_xml.py", line 282, in frame_vars_to_xml
    xml += var_to_xml(v, str(k), evaluate_full_value=eval_full_val)
  File "/home/michael/pycharm-community-2019.3.2/plugins/python-ce/helpers/pydev/_pydevd_bundle/pydevd_xml.py", line 369, in var_to_xml
    elif hasattr(v, "__len__") and not is_string(v):
  File "/home/michael/PycharmProjects/ads3/venv/lib/python3.8/site-packages/googleads/common.py", line 694, in __getattr__
    raise googleads.errors.GoogleAdsValueError('Service %s not found' % attr)
googleads.errors.GoogleAdsValueError: Service __len__ not found
Unexpected error, recovered safely.
The logging section of my googleads.yaml:
logging:
  version: 1
  disable_existing_loggers: False
  formatters:
    default_fmt:
      format: ext://googleads.util.LOGGER_FORMAT
  handlers:
    default_handler:
      class: logging.StreamHandler
      formatter: default_fmt
      level: DEBUG
  loggers:
    # Configure root logger
    "":
      handlers: [default_handler]
      level: DEBUG
I've just started studying the API.
Specifically, I'm trying to execute my first request (https://developers.google.com/adwords/api/docs/guides/first-api-call#make_your_first_api_call).
Could you help me with this problem, or at least with how to localize it more precisely?
This seems to be a problem which results from the way the PyCharm debugger inspects live objects during debugging.
Specifically, it checks if a given object has the __len__ attribute/method in the code of var_to_xml, most likely to determine an appropriate representation of the object for the debugger interface (which seems to require constructing an XML representation).
googleads service objects such as your campaign_service, however, use some magic to be able to call the defined SOAP methods on them without requiring to hard-code all of them. The code looks like this:
def __getattr__(self, attr):
    """Support service.method() syntax."""
    if self._WsdlHasMethod(attr):
        if attr not in self._method_proxies:
            self._method_proxies[attr] = self._CreateMethod(attr)
        return self._method_proxies[attr]
    else:
        raise googleads.errors.GoogleAdsValueError('Service %s not found' % attr)
This means that the debugger's check for a potential __len__ attribute is intercepted, and because the CampaignService does not have a SOAP operation called __len__, an exception is raised.
You can validate this by running your snippet in the regular way (i.e. not debugging it) and checking if that works.
An actual fix would seem to require either that PyCharm's debugger change the way it inspects objects (not calling hasattr(v, "__len__")), or that googleads modify its __getattr__ implementation, for example by raising AttributeError for a __len__ lookup instead of GoogleAdsValueError.
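A minimal sketch of that second option, reusing the googleads snippet above (this illustrates the idea only; no such patch exists upstream): dunder probes fall through to AttributeError, so the debugger's hasattr(v, "__len__") simply returns False.

def __getattr__(self, attr):
    """Support service.method() syntax."""
    if self._WsdlHasMethod(attr):
        if attr not in self._method_proxies:
            self._method_proxies[attr] = self._CreateMethod(attr)
        return self._method_proxies[attr]
    if attr.startswith('__') and attr.endswith('__'):
        # Dunder lookups (e.g. a debugger probing for __len__) should
        # behave like an ordinary missing attribute.
        raise AttributeError(attr)
    raise googleads.errors.GoogleAdsValueError('Service %s not found' % attr)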

PyBBIO Analog Input: Failure to Load ADC File

While running the PyBBIO examples phant_test.py and analog_test.py I received the following error (I believe 'could' is a typo meant to be 'could not'):
Traceback (most recent call last):
  File "analog_test.py", line 47, in <module>
    run(setup, loop)
  File "/usr/lib/python2.7/site-packages/PyBBIO-0.9-py2.7-linux-armv7l.egg/bbio/bbio.py", line 63, in run
    loop()
  File "analog_test.py", line 37, in loop
    val1 = analogRead(pot1)
  File "/usr/lib/python2.7/site-packages/PyBBIO-0.9-py2.7-linux-armv7l.egg/bbio/platform/beaglebone/bone_3_8/adc.py", line 46, in analogRead
    raise Exception('*Could load overlay for adc_pin: %s' % adc_pin)
Exception: *Could load overlay for adc_pin: ['/sys/devices/ocp.2/PyBBIO-AIN0.*/AIN0', 'PyBBIO-AIN0', 'P9.39']
I have tried restarting the BeagleBone (rev A6 running Angstrom with a 3.8 kernel, with no capes connected) to clear the /sys/devices/bone_capemgr.7/slots file, but that did not work. It seems PyBBIO is accessing the slots file and adding overlays because the slots file looks like this after the example program runs:
0: 54:PF---
1: 55:PF---
2: 56:PF---
3: 57:PF---
4: ff:P-O-L Override Board Name,00A0,Override Manuf,PyBBIO-ADC
5: ff:P-O-L Override Board Name,00A0,Override Manuf,PyBBIO-AIN0
Since there were changes being made to the slots file, I checked which files the analogRead(adc_pin) function in PyBBIO's adc.py retrieves. With some print statements I figured out that the root problem is that the /sys/devices/ocp.2/PyBBIO-AIN0.*/AIN0 file, which apparently stores the analog read values, does not exist: glob.glob returns an empty list, and ls /sys/devices/ocp.2/PyBBIO-AIN0.10/ shows modalias power subsystem uevent as the only contents. Is there something wrong in the overlay file? Or could another program or problem be preventing the BeagleBone from writing the AIN0 file that PyBBIO is trying to read? The Python code seems logically correct, but the overlay is working incorrectly or being blocked in some way.
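For anyone reproducing this, the check described above is tiny (the glob pattern comes straight from the exception message; on my board it comes back empty):

import glob

# The AIN0 value file that the PyBBIO-AIN0 overlay should have created.
# An empty list here means the driver never exported the file.
print(glob.glob('/sys/devices/ocp.2/PyBBIO-AIN0.*/AIN0'))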
