While running the PyBBIO examples phant_test.py and analog_test.py, I received the following error (I believe 'Could' in the message is a typo for 'Could not'):
Traceback (most recent call last):
File "analog_test.py", line 47, in <module>
run(setup, loop)
File "/usr/lib/python2.7/site-packages/PyBBIO-0.9-py2.7-linux-armv7l.egg/bbio/bbio.py", line 63, in run
loop()
File "analog_test.py", line 37, in loop
val1 = analogRead(pot1)
File "/usr/lib/python2.7/site-packages/PyBBIO-0.9-py2.7-linux-armv7l.egg/bbio/platform/beaglebone/bone_3_8/adc.py", line 46, in analogRead
raise Exception('*Could load overlay for adc_pin: %s' % adc_pin)
Exception: *Could load overlay for adc_pin: ['/sys/devices/ocp.2/PyBBIO-AIN0.*/AIN0', 'PyBBIO-AIN0', 'P9.39']
I have tried restarting the BeagleBone (rev A6 running Angstrom with a 3.8 kernel, with no capes connected) to clear the /sys/devices/bone_capemgr.7/slots file, but that did not work. It seems PyBBIO is accessing the slots file and adding overlays because the slots file looks like this after the example program runs:
0: 54:PF---
1: 55:PF---
2: 56:PF---
3: 57:PF---
4: ff:P-O-L Override Board Name,00A0,Override Manuf,PyBBIO-ADC
5: ff:P-O-L Override Board Name,00A0,Override Manuf,PyBBIO-AIN0
Since changes were being made to the slots file, I checked which files the analog_read(adc_pin) function in PyBBIO's adc.py retrieves. With some print statements I narrowed the root problem down to the fact that the /sys/devices/ocp.2/PyBBIO-AIN0.*/AIN0 file, which apparently stores the analog read values, does not exist: the glob.glob call returns an empty list, and ls /sys/devices/ocp.2/PyBBIO-AIN0.10/ shows only modalias, power, subsystem and uevent. Is there something wrong in the overlay file? Or could another program or problem be preventing the BeagleBone from writing the AIN0 file that PyBBIO is trying to read? The Python code seems logically correct, but the overlay is either working incorrectly or being blocked in some way.
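For reference, a minimal sketch of the check described above (the path pattern is the one from the exception); on my board the glob comes back empty:

import glob

# Check whether the overlay actually exported the AIN0 value file.
ain_paths = glob.glob('/sys/devices/ocp.2/PyBBIO-AIN0.*/AIN0')
print('Matched AIN0 files: %s' % ain_paths)  # prints [] here, i.e. the file is missing

if ain_paths:
    with open(ain_paths[0], 'r') as f:
        print('Raw ADC value: %s' % f.read().strip())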
I'm struggling to get the deap.tools.mutUniformInt mutation function to work. To isolate the issue for this SO question, I changed line 62 of examples/ga/onemax.py from
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
to
toolbox.register("mutate", tools.mutUniformInt, 0, 1, indpb=0.05)
The onemax.py example now fails:
C:\Users\mshiv\DEAP>python onemax.py
Start of evolution
Evaluated 300 individuals
-- Generation 1 --
Traceback (most recent call last):
File "C:\Users\mshiv\DEAP\onemax.py", line 161, in <module>
main()
File "C:\Users\mshiv\DEAP\onemax.py", line 128, in main
toolbox.mutate(mutant)
File "C:\Users\mshiv\AppData\Local\Programs\Python\Python39\lib\site-packages\deap\tools\mutation.py", line 159, in mutUniformInt
size = len(individual)
TypeError: object of type 'int' has no len()
Both mutators should operate on an Individual, which is defined in onemax.py to be a list, so why does mutFlipBit work while mutUniformInt seems to receive the Individual parameter as an int rather than a list?
Poking around in the DEAP code, I found that mutUniformInt receives the parameters out of order, i.e. they are passed in as (low, up, individual, indpb) whereas the function itself is defined as
def mutUniformInt(individual, low, up, indpb):
Am I registering this mutation function incorrectly?
Source of the onemax example I altered:
https://github.com/DEAP/deap/blob/master/examples/ga/onemax.py
NVM - found the answer here:
https://groups.google.com/g/deap-users/c/4sw2_Al4YFI/m/EvUiq70IBAAJ
The correct syntax should have been:
toolbox.register("mutate", tools.mutUniformInt, low=0, up=1, indpb=0.05)
I've just started learning my way around Biopython and I'm trying to use ExPASy to retrieve SwissProt records, as described on page 180 of the Biopython tutorial (http://biopython.org/DIST/docs/tutorial/Tutorial.pdf) and in a relevant ROSALIND exercise (http://rosalind.info/problems/dbpr/ - click to expand the "Programming shortcut" section).
The code I'm using is basically the same as in the ROSALIND exercise:
from Bio import ExPASy
from Bio import SwissProt
handle = ExPASy.get_sprot_raw('Q5SLP9')
record = SwissProt.read(handle)
However, the SwissProt.read function gives the following error messages (I've trimmed some of the filepaths):
Traceback (most recent call last):
  File "code.py", line 4, in <module>
    record = SwissProt.read(handle)
  File "lib\site-packages\Bio\SwissProt\__init__.py", line 151, in read
    record = _read(handle)
  File "lib\site-packages\Bio\SwissProt\__init__.py", line 255, in _read
    _read_ft(record, line)
  File "lib\site-packages\Bio\SwissProt\__init__.py", line 594, in _read_ft
    assert not from_res and not to_res, line
AssertionError: /note="Single-stranded DNA-binding protein"
I found this has been reported on GitHub (https://github.com/biopython/biopython/issues/2417), so I'm not the first one to hit this, but I can't find an updated version of the package or any other way to fix the issue. Maybe that's because I'm very new to using packages. Could someone help me, please?
Please update your Biopython to version 1.77. The issue was fixed by pull request 2484.
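To confirm which version you have installed (and that the upgrade took effect), a quick check:

import Bio
print(Bio.__version__)  # should print 1.77 or later after upgrading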
I am using a Flask server to handle requests for some image-processing tasks.
The processing relies extensively on OpenCV and I would now like to trivially parallelize some of the slower steps.
I have a preference for multiprocessing rather than multithreading (please assume the former in your answers).
But multiprocessing with opencv is apparently broken (I am on Python 2.7 + macOS): https://github.com/opencv/opencv/issues/5150
One solution (see https://github.com/opencv/opencv/issues/5150#issuecomment-400727184) is to use the excellent Loky (https://github.com/tomMoral/loky)
[Question: What other working solutions exist apart from concurrent.futures, Loky, joblib, ...?]
But Loky leads me to the following stacktrace:
a,b = f.result()
File "/anaconda2/lib/python2.7/site-packages/loky/_base.py", line 433, in result
return self.__get_result()
File "/anaconda2/lib/python2.7/site-packages/loky/_base.py", line 381, in __get_result
raise self._exception
BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
This was caused directly by
'''
Traceback (most recent call last):
File "/anaconda2/lib/python2.7/site-packages/loky/process_executor.py", line 391, in _process_worker
call_item = call_queue.get(block=True, timeout=timeout)
File "/anaconda2/lib/python2.7/multiprocessing/queues.py", line 135, in get
res = self._recv()
File "myfile.py", line 44, in <module>
app.config['EXECUTOR_MAX_WORKERS'] = 5
File "/anaconda2/lib/python2.7/site-packages/werkzeug/local.py", line 348, in __getattr__
return getattr(self._get_current_object(), name)
File "/anaconda2/lib/python2.7/site-packages/werkzeug/local.py", line 307, in _get_current_object
return self.__local()
File "/anaconda2/lib/python2.7/site-packages/flask/globals.py", line 52, in _find_app
raise RuntimeError(_app_ctx_err_msg)
RuntimeError: Working outside of application context.
This typically means that you attempted to use functionality that needed
to interface with the current application object in some way. To solve
this, set up an application context with app.app_context(). See the
documentation for more information.
'''
The functions to be parallelized are not being called from app/main.py, but rather from an arbitrarily deep submodule.
I have also tried the similarly useful-looking https://flask-executor.readthedocs.io/en/latest, so far in vain.
So the question is:
How can I safely pass the application context through to the workers or otherwise get multiprocessing working (without recourse to multithreading)?
I can build out this question if you need more information. Many thanks as ever.
Related resources:
Copy flask request/app context to another process
Flask Multiprocessing
Update:
Non-opencv calls work fine with flask-executor (no Loky) :)
The problem comes when trying to call an opencv function like knnMatch.
If Loky fixes the opencv issue, I wonder if it can be made to work with flask-executor (not for me, so far).
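For concreteness, here is a stripped-down sketch of the direction I am trying (the /match route and the match_descriptors helper are made up for this question): submit only plain numpy arrays to the workers and build every OpenCV object inside the worker function, so that nothing tied to the Flask application context ever needs to be pickled:

import cv2
import numpy as np
from flask import Flask, request, jsonify
from loky import get_reusable_executor

app = Flask(__name__)
executor = get_reusable_executor(max_workers=5)

def match_descriptors(des1, des2):
    # Runs in the worker process: build the matcher here, not in the Flask process.
    # Assumes uint8 (ORB/BRIEF-style) descriptors, hence NORM_HAMMING.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    return len(matcher.knnMatch(des1, des2, k=2))

@app.route('/match', methods=['POST'])
def match():
    # Pull everything needed out of the request before submitting, so only
    # picklable numpy arrays cross the process boundary -- never app or request.
    des1 = np.load(request.files['a'])  # assumes the client uploads .npy files
    des2 = np.load(request.files['b'])
    future = executor.submit(match_descriptors, des1, des2)
    return jsonify(matches=future.result())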
I am having some trouble getting my Firebird connection to work, and it all seems to be related to encodings. I am connecting to the database like this (local_copy is /path/to/database.fdb):
conn = fdb.connect(dsn=local_copy, user='****', password='****', charset="ISO8859_1")
This only works for certain charsets. I need the ISO8859_1 charset, which worked before but no longer does (perhaps because of an update).
Traceback (most recent call last):
File "sync.py", line 10, in <module>
conn = fdb.connect(dsn=local_copy, user='**', password='**', charset="ISO8859_1")
File "/usr/local/lib/python3.6/site-packages/fdb/fbcore.py", line 848, in connect
"Error while connecting to database:")
fdb.fbcore.DatabaseError: ('Error while connecting to database:\n- SQLCODE: -924\n- bad parameters on attach or create database\n- CHARACTER SET ISO8859_1 is not defined', -924, 335544325)
When I use ISO88591, the connection works, but Python is not happy with that:
Traceback (most recent call last):
File "sync.py", line 10, in <module>
conn = fdb.connect(dsn=local_copy, user='***', password='***', charset="ANSI")
File "/usr/local/lib/python3.6/site-packages/fdb/fbcore.py", line 826, in connect
no_reserve, db_key_scope, no_gc, no_db_triggers, no_linger)
File "/usr/local/lib/python3.6/site-packages/fdb/fbcore.py", line 759, in build_dpb
dpb.add_string_parameter(isc_dpb_user_name, user)
File "/usr/local/lib/python3.6/site-packages/fdb/fbcore.py", line 624, in add_string_parameter
value = value.encode(charset_map.get(self.charset, self.charset))
LookupError: unknown encoding: ISO88591
So I thought that perhaps adding an ISO88591 alias to Python would work. I tried editing /usr/lib64/python3.6/encodings/aliases.py, but that didn't seem to have any effect.
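(A runtime alternative to editing aliases.py is to register the alias programmatically; this only addresses the Python-side codec lookup, not the server-side "CHARACTER SET ISO8859_1 is not defined" error:)

import codecs

# Register 'iso88591' as an alias for Latin-1 without touching encodings/aliases.py.
# Codec search functions receive the normalized (lowercase) encoding name.
codecs.register(lambda name: codecs.lookup('iso8859_1') if name == 'iso88591' else None)

print('test'.encode('ISO88591'))  # now resolves to Latin-1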
As a short summary of what was posted on Firebird-support, it looks like the fbintl module in Firebird 2.5.8 on CentOS is broken.
As indicated by Philippe Makowski:
Sorry, it is broken, and I don't know how to fix it :
https://bugzilla.redhat.com/show_bug.cgi?id=1636177
but Firebird 3 is ok
https://copr.fedorainfracloud.org/coprs/makowski/firebird/
A possible workaround suggested in https://bugzilla.redhat.com/show_bug.cgi?id=1636177 is to either downgrade to 2.5.7, or to continue using 2.5.8, but replace its fbintl module with the one from 2.5.7.
I think this might be obvious to seasoned py2neo users, but I could not get past it since I'm new. I'm trying to follow the py2neo online doc (http://book.py2neo.org/en/latest/graphs_nodes_relationships/): I was able to use the Node methods on the instance returned from GraphDatabaseService.create, but when I use GraphDatabaseService.node to retrieve the node, all the expected Node methods stop working. I've narrowed it down to the example below, which just calls len() on the node.
Thanks in advance for any helpful insights.
Bruce
My env:
windows 7 professional
pycharm 3.4
py2neo 1.6.4
python2.7.5
Here is the code:
from py2neo import node, neo4j
db = neo4j.GraphDatabaseService()
db.clear()
a, = db.create(node({'name': ['a']}))
a.add_labels('Label')
b = db.node(a._id)
print db.neo4j_version
print b, type(b)
print "There is %s node in db" % db.order
print len(b)
Here is the output:
C:\Python27\python.exe C:/Users/you_zhang/PycharmProjects/py2neo/ex11.py
(2, 0, 3, u'')
(10) <class 'py2neo.neo4j.Node'>
There is 1 node in db
Traceback (most recent call last):
File "C:/Users/you_zhang/PycharmProjects/py2neo/ex11.py", line 11, in <module>
print len(b)
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 1339, in __len__
return len(self.get_properties())
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 1398, in get_properties
self._properties = assembled(self._properties_resource._get()) or {}
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 1349, in _properties_resource
return self._subresource("properties")
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 403, in _subresource
uri = URI(self.__metadata__[key])
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 338, in __metadata__
self.refresh()
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 360, in refresh
self._metadata = ResourceMetadata(self._get().content)
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 367, in _get
raise ClientError(e)
py2neo.exceptions.ClientError: Not Found
Your exact code snippet works for me (OS X, neo4j 2.1.2), so there shouldn't be any problem with the code itself. Have you tried installing the latest version of neo4j and running your code on a fresh, untouched database? I have encountered inconsistencies like this with corrupted databases.
Have you tried loading the node with .find()?
result = db.find('Label')
for n in result:
    print(n)