"No module named optimizers.PSO" - python-import

I am trying to run a Python file called "optimizer.py". The file has several import lines, the first one being:
import optimizers.PSO as pso
In the same folder where "optimizer.py" lives, there is a folder called "optimizers", which contains the files mentioned in the imports, including "PSO.py".
When I try to run optimizer.py, which the README.md says is the main file, all I get is this:
Traceback (most recent call last):
File "optimizer.py", line 8, in <module>
import optimizers.PSO as pso
ImportError: No module named optimizers.PSO
What could I be doing wrong? Perhaps it is not my fault?
Feel free to ask for more info, and thanks for the effort.

Try copying PSO.py to the folder of your main Python file (the folder of optimizer.py), then use import PSO.
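One thing worth checking first: the traceback says ImportError: No module named, which is Python 2 wording, and on Python 2 a folder is only importable as a package if it contains an __init__.py file. A minimal, self-contained sketch that recreates the layout from the question in a scratch directory (the PSO.py contents are a stand-in) and shows the package-style import working:

```python
import os
import sys
import tempfile

# Recreate the layout from the question in a scratch directory.
# On Python 2, the empty __init__.py is what makes "optimizers" a package.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "optimizers")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "PSO.py"), "w") as f:
    f.write("def optimize():\n    return 'pso'\n")  # stand-in contents

sys.path.insert(0, root)  # stands in for running from the project folder
import optimizers.PSO as pso

print(pso.optimize())
```

If adding an empty __init__.py next to PSO.py fixes the import, no files need to be copied around at all.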

Related

Error running Beam job with DataFlow runner (using Bazel): no module found error

I am trying to run a beam job on dataflow using the python sdk.
My directory structure is :
beamjobs/
    setup.py
    main.py
    beamjobs/
        pipeline.py
When I run the job directly using python main.py, the job launches correctly. I use setup.py to package my code, and I provide it to Beam with the runtime option setup_file.
However if I run the same job using bazel (with a py_binary rule that includes setup.py as a data dependency), I end up getting an error:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 804, in run
work, execution_context, env=self.environment)
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/workitem.py", line 131, in get_work_items
work_item_proto.sourceOperationTask.split)
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/workercustomsources.py", line 144, in __init__
source_spec[names.SERIALIZED_SOURCE_KEY]['value'])
File "/usr/local/lib/python3.7/site-packages/apache_beam/internal/pickler.py", line 290, in loads
return dill.loads(s)
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 275, in loads
return load(file, ignore, **kwds)
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 270, in load
return Unpickler(file, ignore=ignore, **kwds).load()
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 472, in load
obj = StockUnpickler.load(self)
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 462, in find_class
return StockUnpickler.find_class(self, module, name)
ModuleNotFoundError: No module named 'beamjobs'
This is surprising to me because the logs above show:
Successfully installed beamjobs-0.0.1 pyyaml-5.4.1
So my package is installed successfully.
I don't understand this discrepancy between running with python or running with bazel.
In both cases, the logs seem to show that dataflow tries to use the image gcr.io/cloud-dataflow/v1beta3/python37:2.29.0
Any ideas?
OK, so the problem was that I was sending the file setup.py as a dependency in Bazel, and I could see in the logs that my package beamjobs was being installed correctly.
The issue is that the package was actually empty, because the only dependency I included in the py_binary rule was that setup.py file.
The fix was to also include all the other Python files as part of the binary. I did that by creating py_library rules to add all those other files as dependencies.
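A rough sketch of what that wiring might look like in the BUILD file (the target names and glob pattern are hypothetical; the file layout comes from the question):

```python
# BUILD -- hypothetical Bazel targets for the layout above
py_library(
    name = "beamjobs",
    srcs = glob(["beamjobs/**/*.py"]),  # pipeline.py and friends
)

py_binary(
    name = "main",
    srcs = ["main.py"],
    data = ["setup.py"],   # still shipped so Beam's setup_file option works
    deps = [":beamjobs"],  # now the package sources ride along too
)
```

With the deps in place, the sources that setup.py packages are actually present when Bazel builds and runs the binary, so the installed beamjobs package is no longer empty.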
Probably the wrapper-runner script generated by Bazel (you can find the path to it by calling bazel build on a target) restricts the set of modules available to your script. The proper approach is to have Bazel fetch the PyPI dependencies itself; see the example.

travis-ci path to the shared library or how to link shared library to python

CI professionals,
I cannot figure out why this code cannot find the shared library. Please see the log:
https://pastebin.com/KvJP9Ms3
ImportError while importing test module '/home/travis/build/alexlib/pyptv/tests/test_pyptv_batch.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/test_pyptv_batch.py:1: in <module>
from pyptv import pyptv_batch
pyptv/pyptv_batch.py:20: in <module>
from optv.calibration import Calibration
E ImportError: liboptv.so: cannot open shared object file: No such file or directory
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.43 seconds ============================
the pull request
https://github.com/alexlib/pyptv/pull/4
and the build https://travis-ci.org/alexlib/pyptv/builds/342237102
We have a C library (http://github.com/openptv/openptv) that we need to compile and, using Cython bindings, add to Python; we then use it from Python through those bindings. The tests work locally but not on Travis CI (great service). I think it's a simple issue with paths, but I couldn't figure out how to deal with this.
Thanks in advance
Alex
The answer I found was to also set DYLD_LIBRARY_PATH, which is required on Mac OS X:
export PATH=$PATH:/usr/local/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/usr/local/lib
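On Travis, these exports usually live in the build configuration so that they apply before the tests run; a hypothetical .travis.yml fragment (the pytest invocation is an assumption, the paths come from the answer above):

```yaml
# Hypothetical .travis.yml fragment
before_script:
  - export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
  - export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/usr/local/lib  # macOS only
script:
  - pytest tests/
```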

Precompile main.py with the micropython binary image for esp8266

There is a boot.py available by default in the MicroPython image.
I have tested some code in the Python module main.py. I would like to do the following:
I would like to compile an image that includes main.py, so that it is easier to flash it to more than 10 devices and I do not have to start webrepl on each one.
Is there a way to stop the boot messages that print the MicroPython version number etc.?
I tried the following; apparently they are already activated:
https://forum.micropython.org/viewtopic.php?t=2334
I successfully compiled an image using the following:
https://cdn-learn.adafruit.com/downloads/pdf/building-and-running-micropython-on-the-esp8266.pdf
Question:
How do I create an image that includes main.py, and where should this file go in the folder /home/vagrant/micropython/esp8266?
You need to change micropython/esp8266/modules/inisetup.py.
In this file, a block of code writes the boot.py file at MicroPython start-up, like below:
with open("boot.py", "w") as f:
    f.write("""\
# This file is executed on every boot (including wake-boot from deepsleep)
#import esp
#esp.osdebug(None)
import gc
#import webrepl
#webrepl.start()
gc.collect()
import mymain
""")
Notice the last line, import mymain. Copy your mymain.py file to the micropython/esp8266/modules directory.
The mymain.py file should not have an if __name__ == '__main__' block, so that it is executed at import time. All other files that mymain imports should also be in the modules directory. After building, all required files will be included in the binary.
1) boot.py is generated by the following script:
/home/vagrant/micropython/esp8266/script/inisetup.py
The function setup() writes boot.py to the filesystem at every start-up.
This would be the place to also write main.py to the filesystem, or to add it to the scripts folder and start it from boot.py.
2) Stopping the boot messages: "performing initial checks" is in inisetup.py; some others are in port_diag.py in the scripts folder.

py.test trying to import wrong module on Travis but not locally

I have a Travis CI build that is failing; py.test seems to be trying to import the wrong module, though I cannot reproduce this locally. I expect it to import tools.lint.tests.test_lint, not lint.tests.test_lint as seen in the traceback (the build ran with --full-trace). This leads to the error beneath it when it tries to do a relative import from the tools package.
The short traceback is:
___________________ ERROR collecting lint/tests/test_lint.py ___________________
.tox/py27/lib/python2.7/site-packages/py/_path/local.py:650: in pyimport
__import__(modname)
lint/__init__.py:1: in <module>
from . import lint
lint/lint.py:15: in <module>
from .. import localpaths
E ValueError: Attempted relative import beyond toplevel package
Given that the name of the top-level package is just the directory the repo is in, I wouldn't be surprised to see that name differ, but I'd still expect to see it there!
Take a look at the path Travis has that file at: /home/travis/build/w3c/wpt-tools/lint/tests/test_lint.py. The directory called tools on your computer is called wpt-tools on Travis, following the name of the repo on GitHub.
Vitally, wpt-tools isn't a valid Python package name, as Python package names cannot contain hyphens (they have to be valid identifiers). This leads py.test to conclude it isn't a package, despite the __init__.py contained within, and hence it doesn't include it in the import path, leading to the error when code tries a relative import from what is meant to be the top-level package.
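The identifier rule is easy to check directly (str.isidentifier is Python 3, while the build in question is py27, but the naming rule is the same):

```python
# A hyphen makes a name an invalid Python identifier, so it cannot
# be a package name; an underscore or plain name is fine.
for name in ("wpt-tools", "tools", "wpt_tools"):
    print(name, name.isidentifier())
```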
There are a couple of solutions here:
The possibly simplest one is renaming the repository so that it doesn't contain any hyphens, though if you're an established repository this is likely undesirable.
Alternatively, get Travis CI to run the code from some other directory by copying/moving the repository, at the start of before_install, to a directory whose name doesn't contain a hyphen, using something like:
before_install:
- mv `pwd` /tmp/tools
- cd /tmp/tools
This will then run all the install and later steps from /tmp/tools, which will allow everything to run as expected.
(Note you cannot use a symbolic link here as os.getcwd() in Python will eliminate the link from the path, returning the real path, leading that seeming workaround not to work at all.)

Biopython cannot find file

I am trying to run a qblast from the Python prompt and after importing all the libraries I need, Python cannot find my file:
>>> record = SeqIO.read(open("sinchimeras_1.fasta"), format="fasta")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: [Errno 2] No such file or directory: 'sinchimeras_1.fasta'
I have tried writing out the full path to the file ("/Users/imac...") and moving the file to the Python and Biopython folders, and I get the same message.
Where do I have to save my file? What am I doing wrong?
You need to move that file to your working directory or use an absolute path.
This problem is independent of Biopython; the IOError is coming from open("sinchimeras_1.fasta").
Explanation
When you provide Python the relative path "sinchimeras_1.fasta", it looks in the current directory for such a file. So, instead of moving your Fasta file to a Python/Biopython folder, ensure it's in your working directory (os.getcwd() may be helpful).
Alternatively, you can supply the absolute path to the file in place of "sinchimeras_1.fasta" (e.g. open("/Users/imac.../sinchimeras_1.fasta")).
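A quick way to see where a relative filename will be looked up, sketched in plain Python (the filename comes from the question; whether the file actually exists there depends on your setup):

```python
import os

# A relative name like "sinchimeras_1.fasta" is resolved against the
# current working directory of the Python process.
print(os.getcwd())

# An absolute path bypasses the working directory entirely.
path = os.path.join(os.getcwd(), "sinchimeras_1.fasta")
print(os.path.isabs(path))
```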
