I'm trying to deploy the https://hub.docker.com/r/bamos/openface/ image to Heroku, so I downloaded the code from https://github.com/cmusatyalab/openface and followed the steps at https://devcenter.heroku.com/articles/container-registry-and-runtime.
The push and release seem to be working, but when I try accessing the app URL, I get this error when inspecting via heroku logs --tail:
Traceback (most recent call last):
  File "./demos/web/websocket-server.py", line 22, in <module>
    import txaio
ImportError: No module named txaio
So I tried adding txaio >= 2.10.0 to the requirements.txt, but nothing changed.
What else can I try? Am I doing something wrong?
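One thing worth checking: with a container deploy, Heroku runs the image as-is, so a top-level requirements.txt only has an effect if the Dockerfile actually copies it in and runs pip against it. A quick sanity check is to probe the imports from inside the container itself. The helper below is a hypothetical sketch (not part of OpenFace); you could run it inside the dyno, e.g. with heroku run python check_deps.py:

```python
# check_deps.py - hypothetical sanity-check script: reports which
# interpreter is running and whether the websocket server's
# dependencies are importable from it.
import importlib
import sys

def importable(name):
    """Return True if `name` can be imported by this interpreter."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

print("interpreter:", sys.executable)
for mod in ("txaio", "twisted", "autobahn"):
    print(mod, "->", "ok" if importable(mod) else "MISSING")
```

If txaio shows as MISSING here, the fix belongs in the Dockerfile (e.g. a pip install step), not in a requirements.txt that the image never reads.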
For some time now I have been doing object detection on the GPU using TensorFlow.
I have the following problem: I am forced to stick with version 2.3.0, because every version after 2.3.0 fails with the following error:
Traceback (most recent call last):
  File "src/main/python/roldetector/video_rol_detector.py", line 6, in <module>
    import tensorflow as tf
  File "/home/tensorflow/.local/lib/python3.6/site-packages/tensorflow/__init__.py", line 438, in <module>
    _ll.load_library(_main_dir)
  File "/home/tensorflow/.local/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 154, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: /usr/local/lib/python3.6/dist-packages/tensorflow/core/kernels/libtfkernel_sobol_op.so: undefined symbol: _ZN10tensorflow8OpKernel11TraceStringEPNS_15OpKernelContextEb
I know there are more people having this problem but I have not seen a working solution yet.
The most remarkable - almost contradictory - part is that Docker is supposed to prevent this
kind of problem. I am working on Ubuntu 20.04, but I should not even need to specify that: if the Docker image is correct, it should not matter what is running on the host. Right?
Does anyone have a working solution, so I can start using the latest version of TensorFlow? (All versions after 2.3.0 seem to suffer from the same problem.)
Regards,
Chris
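One common cause of this kind of undefined-symbol error (an assumption worth ruling out, not a confirmed diagnosis): the traceback imports TensorFlow from /home/tensorflow/.local/..., but the failing kernel .so lives under /usr/local/lib/..., which suggests two TensorFlow installs of different versions are being mixed on sys.path. A small diagnostic sketch to spot a stale second copy:

```python
# Diagnostic sketch: print where a module would actually be imported
# from, so a stale second install elsewhere on sys.path stands out.
import importlib.util
import sys

def module_origin(name):
    """Path a module would be loaded from, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

print("sys.path entries:")
for p in sys.path:
    print("  ", p)
print("tensorflow would load from:", module_origin("tensorflow"))
```

If the origin is not the copy you expect, uninstalling until only one TensorFlow remains (running pip uninstall tensorflow repeatedly, for both the user and system site-packages) is the usual cleanup.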
I am trying to run a Beam job on Dataflow using the Python SDK.
My directory structure is:

beamjobs/
  setup.py
  main.py
  beamjobs/
    pipeline.py
When I run the job directly using python main.py, the job launches correctly. I use setup.py to package my code and I provide it to Beam with the setup_file runtime option.
However if I run the same job using bazel (with a py_binary rule that includes setup.py as a data dependency), I end up getting an error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 804, in run
    work, execution_context, env=self.environment)
  File "/usr/local/lib/python3.7/site-packages/dataflow_worker/workitem.py", line 131, in get_work_items
    work_item_proto.sourceOperationTask.split)
  File "/usr/local/lib/python3.7/site-packages/dataflow_worker/workercustomsources.py", line 144, in __init__
    source_spec[names.SERIALIZED_SOURCE_KEY]['value'])
  File "/usr/local/lib/python3.7/site-packages/apache_beam/internal/pickler.py", line 290, in loads
    return dill.loads(s)
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 275, in loads
    return load(file, ignore, **kwds)
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 270, in load
    return Unpickler(file, ignore=ignore, **kwds).load()
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 472, in load
    obj = StockUnpickler.load(self)
  File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 462, in find_class
    return StockUnpickler.find_class(self, module, name)
ModuleNotFoundError: No module named 'beamjobs'
This is surprising to me, because earlier the logs show:
Successfully installed beamjobs-0.0.1 pyyaml-5.4.1
So my package is installed successfully.
I don't understand this discrepancy between running with python or running with bazel.
In both cases, the logs seem to show that dataflow tries to use the image gcr.io/cloud-dataflow/v1beta3/python37:2.29.0
Any ideas?
Ok, so the problem was that I was passing the file setup.py as a dependency in Bazel, and I could see in the logs that my package beamjobs was being installed correctly.
The issue is that the package was actually empty, because the only dependency I had included in the py_binary rule was that setup.py file.
The fix was to also include all the other Python files as part of the binary. I did that by creating py_library rules to add all those other files as dependencies.
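The fix described above might look roughly like this in a BUILD file (a sketch only; target names and globs are illustrative, not taken from the question):

```python
# BUILD (sketch): ship the actual package sources, not just setup.py.
py_library(
    name = "beamjobs_lib",
    srcs = glob(["beamjobs/**/*.py"]),
)

py_binary(
    name = "main",
    srcs = ["main.py"],
    data = ["setup.py"],  # still needed so Beam can build the sdist
    deps = [":beamjobs_lib"],
)
```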
Probably the wrapper-runner script generated by Bazel (you can find its path by calling bazel build on a target) restricts the set of modules available to your script. The proper approach is to have Bazel fetch the PyPI dependencies itself; look at the examples.
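The empty-package failure mode described above is easy to reproduce outside Bazel, assuming the setup.py uses setuptools' find_packages() (an assumption; the question doesn't show its contents): find_packages() only discovers packages that physically sit next to setup.py at build time, which would explain an installed beamjobs-0.0.1 distribution that contains no modules.

```python
import os
import tempfile

from setuptools import find_packages

def discovered_packages(make_pkg):
    """Build a throwaway project dir and return what find_packages sees."""
    with tempfile.TemporaryDirectory() as root:
        open(os.path.join(root, "setup.py"), "w").close()
        if make_pkg:
            # the real package directory, as it exists when running
            # `python main.py` directly
            pkg = os.path.join(root, "beamjobs")
            os.mkdir(pkg)
            open(os.path.join(pkg, "__init__.py"), "w").close()
        return find_packages(where=root)

print(discovered_packages(False))  # -> []  (only setup.py: empty sdist)
print(discovered_packages(True))   # -> ['beamjobs']
```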
I'm on a Windows 7 machine. I reinstalled Anaconda, made a new environment, and then installed Spyder 4.0 using conda. The Spyder 4 dialog box briefly opens and then crashes. The Anaconda Prompt says it's a type error in the Kite client. Below is the full output:
(spyder4) C:\>spyder
Traceback (most recent call last):
  File "C:\Users\tb\AppData\Local\Continuum\anaconda3\envs\spyder4\lib\site-packages\spyder\app\mainwindow.py", line 3711, in main
    mainwindow = run_spyder(app, options, args)
  File "C:\Users\tb\AppData\Local\Continuum\anaconda3\envs\spyder4\lib\site-packages\spyder\app\mainwindow.py", line 3552, in run_spyder
    main.setup()
  File "C:\Users\tb\AppData\Local\Continuum\anaconda3\envs\spyder4\lib\site-packages\spyder\app\mainwindow.py", line 960, in setup
    self.completions.start()
  File "C:\Users\tb\AppData\Local\Continuum\anaconda3\envs\spyder4\lib\site-packages\spyder\plugins\completion\plugin.py", line 292, in start
    client_info['plugin'].start()
  File "C:\Users\tb\AppData\Local\Continuum\anaconda3\envs\spyder4\lib\site-packages\spyder\plugins\completion\kite\plugin.py", line 144, in start
    self.client.start()
  File "C:\Users\tb\AppData\Local\Continuum\anaconda3\envs\spyder4\lib\site-packages\spyder\plugins\completion\kite\client.py", line 62, in start
    self.sig_client_started.emit(self.languages)
TypeError: KiteClient.sig_client_started[list].emit(): argument 1 has unexpected type 'str'
I also tried using Miniconda, and the results were the same, with the type error in the KiteClient.
When I tried conda update spyder=4.0.0 in the base environment, the Anaconda Prompt says there are package conflicts with jupyter and qtconsole, and it doesn't upgrade Spyder to the new version.
I also tried conda update anaconda inside the base environment, and the results were the same as above, with the type error in the KiteClient.
Does anyone know how I can get Spyder 4 installed, or how I can troubleshoot it? Maybe remove Kite somehow? The new version of Spyder looks really sleek; I would love to use it.
I am following the instructions in the textbook for course 6.832, appendix A, on how to install Drake locally on Linux.
All the installation steps have completed and seem to be successful. In addition, I have installed all the prerequisites as described. However, when I run the test in section 2.3
(python -c 'import pydrake; print(pydrake.__file__)')
I have experienced several errors.
It seems that it is trying to load older versions of several lib*.so files than the ones I have.
For example: pydrake tried to load libgfortran.so.3, when I only have libgfortran.so.4 on my computer. I tried a "hackfix" by using ln -s to make libgfortran.so.4 resolve as libgfortran.so.3. But now I have run into another error that I don't know how to solve.
It says:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/opt/drake/lib/python2.7/site-packages/pydrake/__init__.py", line 32, in <module>
    from . import common
  File "/opt/drake/lib/python2.7/site-packages/pydrake/common/__init__.py", line 3, in <module>
    from ._module_py import *
ImportError: /opt/drake/lib/python2.7/site-packages/pydrake/common/../../../../libdrake.so: undefined symbol: _ZN6google8protobuf2io17CodedOutputStream28WriteVarint32FallbackToArrayEjPh
How do I handle this problem?
If you followed section A.2.1 "download the binaries" verbatim, you would be downloading https://drake-packages.csail.mit.edu/drake/continuous/drake-latest-xenial.tar.gz, the package for Ubuntu 16.04 (Xenial), which links to libgfortran.so.3.
Since you are on Ubuntu 18.04 (Bionic), you would instead need to download https://drake-packages.csail.mit.edu/drake/continuous/drake-latest-bionic.tar.gz, which links to libgfortran.so.4.
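If you want to confirm which gfortran runtime the dynamic loader can actually resolve before re-downloading, here is a small generic sketch (not Drake-specific); on Ubuntu 18.04 you would expect libgfortran.so.4 to load and libgfortran.so.3 to be missing:

```python
# Probe which shared-library sonames the loader can resolve.
import ctypes

for soname in ("libgfortran.so.3", "libgfortran.so.4"):
    try:
        ctypes.CDLL(soname)
        print(soname, "loadable")
    except OSError:
        print(soname, "not found")
```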
I have installed Oscar with Django 1.7 on Windows 8 and followed the official tutorial, but after the installation, when I run the python django-admin.py startproject demoshop command, it gives an error:
(oscar) c:\Python34\Scripts>python django-admin.py startproject demoshop
Traceback (most recent call last):
  File "django-admin.py", line 2, in <module>
    from django.core import management
ImportError: No module named 'django'
I have also set the environment variables.
What am I doing wrong? I'd appreciate any help.
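One way to narrow this down (a diagnostic sketch, not a fix): "No module named 'django'" usually means the python that executed django-admin.py is not the one from the activated (oscar) environment. Running the snippet below with the same python command shows which interpreter is active and whether it can import Django at all:

```python
# Check which interpreter is running and whether it can see Django.
import sys

print("interpreter:", sys.executable)
try:
    import django
    print("django", django.get_version(), "from", django.__file__)
except ImportError:
    print("django is NOT importable from this interpreter")
```

If Django turns out not to be importable, activating the environment and calling django-admin startproject demoshop directly (without the python prefix) typically avoids picking up the wrong interpreter.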