Animation instances aren't cleaned up - kivy

Environment
Python 3.6.3
Kivy master
OS: Linux Mint 18.2(based on Ubuntu 16.04 LTS)
Code
Hi, I'm writing unit tests for kivy.animation. When I run the code below,
import unittest

from time import time, sleep
from kivy.animation import Animation
from kivy.uix.widget import Widget
from kivy.clock import Clock


class AnimationTestCase(unittest.TestCase):

    SLEEP_DURATION = .3
    TIMES = 2

    def sleep(self, t):
        start = time()
        while time() < start + t:
            sleep(.01)
            Clock.tick()

    def test_animation(self):
        for index in range(self.TIMES):
            print('----------------------------------')
            with self.subTest(index=index):
                w = Widget()
                a = Animation(x=100, d=.2)
                print('a:', a)
                a.start(w)
                self.sleep(self.SLEEP_DURATION)
                print('instances_:', Animation._instances)
                self.assertEqual(len(Animation._instances), 0)
the output is:
----------------------------------
a: <kivy.animation.Animation object at 0x7f0afb31c660>
instances_: set()
----------------------------------
a: <kivy.animation.Animation object at 0x7f0afc20b180>
instances_: {<kivy.animation.Animation object at 0x7f0afc20b250>, <kivy.animation.Animation object at 0x7f0afb31c660>}
======================================================================
FAIL: test_animation (kivy.tests.test_animations.AnimationTestCase) (index=1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/firefox/kivy/kivy/tests/test_animations.py", line 34, in test_animation
self.assertEqual(len(Animation._instances), 0)
AssertionError: 2 != 0
----------------------------------------------------------------------
Ran 1 test in 0.822s
FAILED (failures=1)
Either of
Increase SLEEP_DURATION (for example SLEEP_DURATION = 2) or
TIMES = 1
will fix this error.
Is this correct behavior, or a bug?

The cause of this error is kivy.modules.inspector.
After I removed this line from config.ini,
[modules]
inspector = # <= remove this line
the program behaves as I expect. It seems that a ScrollView inside the inspector creates an Animation internally, and that makes my test fail.
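If disabling the inspector is not an option, one hedged workaround is to snapshot `Animation._instances` at the top of the test body and assert only on the difference, so animations created by unrelated modules cannot fail the test. The pattern itself does not depend on Kivy, so it is sketched here with a stand-in class:

```python
class FakeAnimation:
    """Stand-in for kivy.animation.Animation and its class-level registry."""
    _instances = set()

    def start(self):
        FakeAnimation._instances.add(self)

    def stop(self):
        FakeAnimation._instances.discard(self)


# An animation started by some unrelated module (e.g. the inspector):
background = FakeAnimation()
background.start()

# Snapshot taken at the top of the test body:
snapshot = set(FakeAnimation._instances)

# The animation under test runs and completes:
a = FakeAnimation()
a.start()
a.stop()

# Assert only on animations the test itself created:
leaked = FakeAnimation._instances - snapshot
assert leaked == set()
```

With this shape the background animation stays in the registry but the assertion still passes, which is exactly the behavior the original `assertEqual(len(...), 0)` could not provide.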

Related

Issue importing BertTokenizer module for Q&A with finetuned BERT

I am trying to train the model for question answering with a finetuned Q&A BERT.
import torch
from transformers import BertForQuestionAnswering, BertTokenizer
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
When I try to create the tokenizer for the pretrained bert-large-uncased-whole-word-masking-finetuned-squad model, I get the error below.
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-29-d478833618be> in <module>
----> 1 tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs)
1857 def _save_pretrained(
1858 self,
-> 1859 save_directory: str,
1860 file_names: Tuple[str],
1861 legacy_format: bool = True,
ModuleNotFoundError: No module named 'transformers.models.auto.configuration_auto'
---------------------------------------------------------------------------
I am using the latest version of transformers in my notebook, but it still gives me this error. Can someone help me with this issue?
Try with:
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
I suspect that you have code from a previous version in your cache. Try
transformers.BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad', cache_dir="./")

Why do I get an error telling me to pass a float, and when I do, an error telling me to pass an integer?

I'm trying to save a recording from my camera. I'm following a free course, and it says I need to create a cv2.VideoWriter.
The problem I get when I try is:
Traceback (most recent call last):
File ..., line 10, in <module>
writer = cv2.VideoWriter(filename,cv2.VideoWriter_fourcc(*'DVIX'),20,(width,height))
TypeError: a float is required
my code:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)

#### THE PROBLEM IS IN THIS LINE ########
writer = cv2.VideoWriter("images/my_super_vid.mp4", cv2.VideoWriter_fourcc(*'DVIX'), 20, (width, height))
######################################

while True:
    ret, frame = cap.read()

    # OPERATIONS (DRAWING ETC')
    writer.write(frame)
    cv2.imshow("frame", frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
writer.release()
cv2.destroyAllWindows()
I saw the problem and I changed it to:
writer = cv2.VideoWriter("images/my_super_vid.mp4",cv2.VideoWriter_fourcc(*'DVIX'),float(20),(width,height))
and even:
writer = cv2.VideoWriter("images/my_super_vid.mp4",cv2.VideoWriter_fourcc(*'DVIX'),20.0,(width,height))
but in both of those tries I got:
Traceback (most recent call last):
File "....", line 10, in <module>
writer = cv2.VideoWriter("images/my_super_vid.mp4",cv2.VideoWriter_fourcc(*'DVIX'),20.0,(width,height))
TypeError: integer argument expected, got float
I don't know how to fix it. I tried solutions I found on the internet, but none of them worked.
I got the same problem.
Okay, I changed the line to:
writer = cv2.VideoWriter("images/my_super_vid.avi",cv2.VideoWriter_fourcc(*'XVID'),20, (int(width),int(height)))
and it worked. But now I get:
qt.qpa.plugin: Could not find the Qt platform plugin "cocoa" in ""
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
when I try to run it in PyCharm.
FIXED IT: I just ran it from the terminal.
The two errors refer to different arguments in the same call: one argument has to be a float while another has to be an int. Here the frame-size tuple must contain ints, but cap.get() returns floats, so width and height have to be cast with int().
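The underlying type mismatch can be shown without OpenCV at all: `VideoCapture.get()` returns every property as a float, while the frame-size tuple passed to `VideoWriter` must contain ints (the FPS argument accepts a plain number). A minimal sketch, using stand-in values in place of `cap.get(...)`:

```python
# cap.get(cv2.CAP_PROP_FRAME_WIDTH) returns the dimension as a float,
# even though it is conceptually an integer; these stand-in values mimic that.
width = 640.0
height = 480.0

fps = 20                           # an int (or float) is fine here
size = (int(width), int(height))   # VideoWriter needs ints in this tuple

print(size)   # (640, 480)
```

Passing the float values straight through is what produced the "integer argument expected, got float" message, regardless of what was done to the FPS argument.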

Python cyclic import unexpected behavior

I have discovered something unexpected when playing with cyclic imports. I have two files in the same directory:
a.py
import b
print("hello from a")
b.py
import a
print("hello from b")
Running either python3 a.py or python3 b.py does not result in a cyclic-import-related error. I know that the first imported module is imported under the name __main__, but I still do not understand this behavior. For example, running python3 a.py or python -m a produces the following output:
hello from a
hello from b
hello from a
Looking at the output of print(sys.modules.keys()), I can see that both modules are somehow already imported when checking it, even when importing the sys module as the first thing in one of the modules.
I did not use sys.modules properly before answering my own question.
This does not happen if neither of the cyclic imported modules is the __main__ module. My Python version is Python 3.6.3 on Ubuntu 17.10.
It still happens, but there is a visible error only if there is actually something you use from one of the cyclically imported modules.
See my own answer for clarifications.
The answer to my question
I have discovered the answer. I will try to sketch an explanation:
Executing python3 a.py imports the module in file a.py as __main__:

    import b in module __main__:
        import a in module b -> imports the module in file a.py as a
            import b in module a -> nothing happens, that module is already imported
            print('hello from a') in a.py (executing module a)
        import a in module b finished
        print('hello from b') in b.py (executing module b)
    import b in module __main__ finished
    print('hello from a') in a.py (executing module __main__)
The problem is that there is no cyclic import error per se. A module is imported only once, and after that, other imports of the same module can be seen as no-ops.
This operation can be seen as adding a key to the sys.modules dictionary corresponding to the name of the imported module and then setting attributes on the module object associated with that key as it gets executed. So if the key is already present in the dictionary (on a second import of the same module), nothing happens on the second import. The already imported above means already present in the sys.modules dictionary. This reflects the procedural nature of Python (being originally implemented in C) and the fact that anything in Python is an object.
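The "nothing happens on the second import" behavior can be demonstrated directly with the standard library: once a module object sits in sys.modules under a given name, an import statement for that name simply binds the existing object without executing anything. The module name below is made up for the demonstration:

```python
import sys
import types

# Plant a hand-made module object under an arbitrary (made-up) name.
fake = types.ModuleType("planted_module")
fake.value = 42
sys.modules["planted_module"] = fake

# The import statement finds the key in sys.modules and binds the
# existing object; no file is searched for and no module body runs.
import planted_module

print(planted_module is fake)   # True
print(planted_module.value)     # 42
```

This is exactly why the second `import b` above is a no-op: the key 'b' is already present, even though module b has not finished executing yet.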
The lurking problem
In order to show the fact that the problem associated with cyclic imports is still present, let's add a function to module b and try to use it from module a.
a.py
import b
b.f()
b.py
import a
def f():
    print('hello from b.f()')
Executing python a.py now imports the module in file a.py as __main__:

    import b in module __main__:
        import a in module b -> imports the module in file a.py as a
            import b in module a -> nothing happens, that module is already imported
            b.f() in module a -> AttributeError: module 'b' has no attribute 'f'
Note: The line b.f() can be simplified to b.f and the error will still occur, because b.f first accesses the attribute f of module object b (which happens to be a function object) and only then tries to call it. I wanted to point out again the object-oriented nature of Python.
The from ... import ... statement
It is interesting to mention that using the from ... import ... form gives another error, even though the reason is the same:
a.py
from b import f
f()
b.py
import a
def f():
    print('hello from b.f()')
Executing python a.py imports the module in file a.py as __main__:

    from b import f in module __main__ actually imports the whole module b (adds it to sys.modules and starts executing its body), but binds only the name f in the current module namespace:
        import a in module b -> imports the module in file a.py as a
            from b import f in module a -> ImportError: cannot import name f (because the first execution of from b import f did not get as far as the definition of the function object f in module b)
In this last case, the from ... import ... itself fails with an error because the interpreter knows earlier in time that you are trying to access something in that module which does not exist. Compare it to the first AttributeError, where the program did not see any problem until it tried to access attribute f (in the expression b.f).
The double execution problem of the code in the main module
When the module in the file used to start the program (imported first as __main__) is imported again from another module, the code in that file is executed twice, and any side effects of that execution happen twice as well. This is why it is not recommended to import the main module of the program from other modules.
Using sys.modules to confirm my conclusions above
I will show how checking the contents of sys.modules can clarify this problem:
a.py

import sys

assert '__main__' in sys.modules.keys()

print(f'{__name__}:')
print('\ta imported:', 'a' in sys.modules.keys())
print('\tb imported:', 'b' in sys.modules.keys())
import b

b.f()

b.py

import sys

assert '__main__' in sys.modules.keys()

print(f'{__name__}:')
print('\ta imported:', 'a' in sys.modules.keys())
print('\tb imported:', 'b' in sys.modules.keys())
import a

assert False  # Control flow never gets here

def f():
    print('hello from b.f()')
The output of python3 a.py:

__main__:
        a imported: False
        b imported: False
b:
        a imported: False
        b imported: True
a:
        a imported: True
        b imported: True
Traceback (most recent call last):
  File "a.py", line 8, in <module>
    import b
  File "/home/andrei/PycharmProjects/untitled/b.py", line 8, in <module>
    import a
  File "/home/andrei/PycharmProjects/untitled/a.py", line 10, in <module>
    b.f()
AttributeError: module 'b' has no attribute 'f'
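For completeness, one common way out (a sketch under the assumption that module b only needs a after its own definitions exist) is to move import a below def f, so that by the time module a executes and calls b.f(), the attribute f is already there. The snippet writes the two files to a temporary directory and runs them the same way as above:

```python
import pathlib
import subprocess
import sys
import tempfile
import textwrap

d = pathlib.Path(tempfile.mkdtemp())

(d / "a.py").write_text(textwrap.dedent("""\
    import b
    b.f()
"""))

# Moving `import a` below the definition of f means that when module a
# executes (as 'a') and calls b.f(), module b already has the attribute f.
(d / "b.py").write_text(textwrap.dedent("""\
    def f():
        print('hello from b.f()')
    import a
"""))

result = subprocess.run([sys.executable, "a.py"], cwd=d,
                        capture_output=True, text=True)
print(result.stdout, end="")
```

This prints "hello from b.f()" twice: once while module b imports a, and once from __main__, which also illustrates the double-execution problem described above. The real fix, of course, is to break the cycle entirely.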

How do I resolve a Pickling Error on class apache_beam.internal.clients.dataflow.dataflow_v1b3_messages.TypeValueValuesEnum?

A PicklingError is raised when I run my data pipeline remotely: the data pipeline has been written using the Beam SDK for Python and I am running it on top of Google Cloud Dataflow. The pipeline works fine when I run it locally.
The following code reproduces the problem and generates the PicklingError:
import apache_beam as beam
from apache_beam.transforms import pvalue
from apache_beam.io.fileio import _CompressionType
from apache_beam.utils.options import PipelineOptions
from apache_beam.utils.options import GoogleCloudOptions
from apache_beam.utils.options import SetupOptions
from apache_beam.utils.options import StandardOptions

if __name__ == "__main__":
    pipeline_options = PipelineOptions()
    pipeline_options.view_as(StandardOptions).runner = 'BlockingDataflowPipelineRunner'
    pipeline_options.view_as(SetupOptions).save_main_session = True
    google_cloud_options = pipeline_options.view_as(GoogleCloudOptions)
    google_cloud_options.project = "project-name"
    google_cloud_options.job_name = "job-name"
    google_cloud_options.staging_location = 'gs://path/to/bucket/staging'
    google_cloud_options.temp_location = 'gs://path/to/bucket/temp'
    p = beam.Pipeline(options=pipeline_options)
    p.run()
Below is a sample from the beginning and the end of the Traceback:
WARNING: Could not acquire lock C:\Users\ghousains\AppData\Roaming\gcloud\credentials.lock in 0 seconds
WARNING: The credentials file (C:\Users\ghousains\AppData\Roaming\gcloud\credentials) is not writable. Opening in read-only mode. Any refreshed credentials will only be valid for this run.
Traceback (most recent call last):
  File "formatter_debug.py", line 133, in <module>
    p.run()
  File "C:\Miniconda3\envs\beam\lib\site-packages\apache_beam\pipeline.py", line 159, in run
    return self.runner.run(self)
....
....
....
  File "C:\Miniconda3\envs\beam\lib\site-packages\apache_beam\runners\dataflow_runner.py", line 172, in run
    self.dataflow_client.create_job(self.job))
    StockPickler.save_global(pickler, obj)
  File "C:\Miniconda3\envs\beam\lib\pickle.py", line 754, in save_global
    (obj, module, name))
pickle.PicklingError: Can't pickle <class 'apache_beam.internal.clients.dataflow.dataflow_v1b3_messages.TypeValueValuesEnum'>: it's not found as apache_beam.internal.clients.dataflow.dataflow_v1b3_messages.TypeValueValuesEnum
I've found that your error gets raised when a Pipeline object is included in the context that gets pickled and sent to the cloud:
pickle.PicklingError: Can't pickle <class 'apache_beam.internal.clients.dataflow.dataflow_v1b3_messages.TypeValueValuesEnum'>: it's not found as apache_beam.internal.clients.dataflow.dataflow_v1b3_messages.TypeValueValuesEnum
Naturally, you might ask:
1. What's making the Pipeline object unpickleable when it's sent to the cloud, since normally it's pickleable?
2. If this were really the problem, then wouldn't I get this error all the time - isn't a Pipeline object normally included in the context sent to the cloud?
3. If the Pipeline object isn't normally included in the context sent to the cloud, then why is a Pipeline object being included in my case?
(1)
When you call p.run() on a Pipeline with cloud=True, one of the first things that happens is that p.runner.job=apiclient.Job(pipeline.options) is set in apache_beam.runners.dataflow_runner.DataflowPipelineRunner.run.
Without this attribute set, the Pipeline is pickleable. But once this is set, the Pipeline is no longer pickleable, since p.runner.job.proto._Message__tags[17] is a TypeValueValuesEnum, which is defined as a nested class in apache_beam.internal.clients.dataflow.dataflow_v1b3_messages. AFAIK nested classes cannot be pickled (even by dill - see How can I pickle a nested class in python?).
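The "it's not found as ..." failure mode can be reproduced with nothing but the standard library: pickle stores classes by reference, i.e. as a module name plus qualified name, so any class that cannot be looked up at that location is rejected. A sketch with a class hidden inside a function (the names are made up for the demonstration):

```python
import pickle


def make():
    # This class is not reachable as <module>.<qualname>, loosely
    # analogous to the generated nested TypeValueValuesEnum class.
    class Hidden:
        pass
    return Hidden


HiddenClass = make()

try:
    pickle.dumps(HiddenClass)
    picklable = True
except (pickle.PicklingError, AttributeError) as exc:
    # Depending on the Python version the error is a PicklingError or an
    # AttributeError, but the cause is the same failed name lookup.
    picklable = False
    print("not picklable:", exc)
```

Pickling by reference is also why top-level classes and functions normally pickle fine: the receiving side re-imports them by name instead of receiving their code.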
(2)-(3)
Counterintuitively, a Pipeline object is normally not included in the context sent to the cloud. When you call p.run() on a Pipeline with cloud=True, only the following objects are pickled (and note that the pickling happens after p.runner.job gets set):
1. If save_main_session=True, then all global objects in the module designated __main__ are pickled. (__main__ is the script that you ran from the command line.)
2. Each transform defined in the pipeline is individually pickled.
In your case, you encountered #1, which is why your solution worked. I actually encountered #2 where I defined a beam.Map lambda function as a method of a composite PTransform. (When composite transforms are applied, the pipeline gets added as an attribute of the transform...) My solution was to define those lambda functions in the module instead.
A longer-term solution would be for us to fix this in the Apache Beam project. TBD!
This should be fixed in the google-dataflow 0.4.4 sdk release with https://github.com/apache/incubator-beam/pull/1485
I resolved this problem by encapsulating the body of the main block within a run() function and invoking run().

py2neo.neo4j.GraphDatabaseService.node(id) raise ClientError(e)

I think this might be obvious to seasoned py2neo users, but I could not get past it since I'm new. I'm trying to follow the py2neo online doc: http://book.py2neo.org/en/latest/graphs_nodes_relationships/. I was able to use the Node methods on the instance returned from
GraphDatabaseService.create, but when I use GraphDatabaseService.node to retrieve the node, all the expected Node methods stop working. I've narrowed it down to the example below using len() (i.e. Node.__len__).
Thanks in advance for any helpful insights.
Bruce
My env:
windows 7 professional
pycharm 3.4
py2neo 1.6.4
python2.7.5
Here are the codes:
from py2neo import node, neo4j
db = neo4j.GraphDatabaseService()
db.clear()
a, = db.create(node({'name': ['a']}))
a.add_labels('Label')
b = db.node(a._id)
print db.neo4j_version
print b, type(b)
print "There is %s node in db" % db.order
print len(b)
Here are the outputs:
C:\Python27\python.exe C:/Users/you_zhang/PycharmProjects/py2neo/ex11.py
(2, 0, 3, u'')
(10) <class 'py2neo.neo4j.Node'>
There is 1 node in db
Traceback (most recent call last):
File "C:/Users/you_zhang/PycharmProjects/py2neo/ex11.py", line 11, in <module>
print len(b)
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 1339, in __len__
return len(self.get_properties())
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 1398, in get_properties
self._properties = assembled(self._properties_resource._get()) or {}
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 1349, in _properties_resource
return self._subresource("properties")
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 403, in _subresource
uri = URI(self.__metadata__[key])
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 338, in __metadata__
self.refresh()
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 360, in refresh
self._metadata = ResourceMetadata(self._get().content)
File "C:\Users\you_zhang\AppData\Roaming\Python\Python27\site-packages\py2neo\neo4j.py", line 367, in _get
raise ClientError(e)
py2neo.exceptions.ClientError: Not Found
Your exact code snippet works for me (OS X, neo4j 2.1.2). There shouldn't be any problem. Have you tried to install the latest version of neo4j and run your code on a fresh and untouched database? I have encountered inconsistencies in corrupted databases.
Have you tried to load the node with .find()?
result = db.find('Label')
for n in result:
    print(n)
