I have a neo4j-admin import script set up with --bad-tolerance=100000 (note: I also tried --bad-tolerance 100000) as a flag. My script fails during the "collect dense nodes" step of the import with the following message: unexpected error: Too many bad entries 1001, where last one was: InputRelationship:...
I thought the --bad-tolerance flag was supposed to raise that limit, so that the import would only fail at the (in this case) 100,001st bad entry?
bin/neo4j-admin import is feature-wise not yet on par with the good old bin/neo4j-import tool, which is marked deprecated in 3.1.1.
To use --bad-tolerance you need to go back to bin/neo4j-import.
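For illustration, an invocation of the deprecated tool might look like the following; the CSV file names and the --into target path are placeholders, not taken from the question:

```shell
# Deprecated tool, but it still understands --bad-tolerance in 3.1.x.
# nodes.csv, rels.csv and the --into path are hypothetical placeholders.
bin/neo4j-import \
    --into data/databases/graph.db \
    --nodes nodes.csv \
    --relationships rels.csv \
    --bad-tolerance 100000
```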
When I try to install nltk and download the punkt resource using nltk.download('punkt'), I get the following error. I have tried many alternative code snippets and switching networks. Please help with this error.
After applying:
df['num_words'] = df['text'].apply(lambda x: len(nltk.word_tokenize(x)))
I am getting the error:
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
import nltk
nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/english.pickle
I tried some alternative code like:
import nltk
import ssl

try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    pass
else:
    ssl._create_default_https_context = _create_unverified_https_context

nltk.download()
I also tried switching networks, since in some places I found reports saying it could be a server issue.
Try launching the Jupyter Notebook session as administrator (open the command prompt or Anaconda prompt as administrator).
The last option would be to download the corpus manually. You may find this helpful in your case.
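If the downloader keeps failing, the manual route can be scripted. This is a minimal sketch assuming you have already fetched punkt.zip yourself from the NLTK data index; the helper name install_punkt and its directory handling are my own illustration, not part of NLTK's API:

```python
import os
import zipfile

def install_punkt(zip_path, nltk_data_dir=None):
    """Unpack a manually downloaded punkt.zip into a folder NLTK searches.

    By default NLTK looks for tokenizers/punkt under ~/nltk_data
    (among other locations).
    """
    if nltk_data_dir is None:
        nltk_data_dir = os.path.join(os.path.expanduser("~"), "nltk_data")
    target = os.path.join(nltk_data_dir, "tokenizers")
    os.makedirs(target, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(target)  # the archive already contains a punkt/ folder
    return os.path.join(target, "punkt")
```

Afterwards nltk.word_tokenize should find the resource without calling nltk.download(); if you unpack to a custom location, point the NLTK_DATA environment variable at it.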
Nim as a language provides .nimble files to describe its packages (example of a .nimble file). I know that the file is parsed by the nimble package and CLI-tool, as they need the information inside the .nimble file for their tasks.
I want basically all the information in there, dependencies, author, license, description, version, all of it. So in order to not do the same work twice and potentially run into issues should the format change, I would like to use the nimble package itself to parse the .nimble file for me.
I know the correct proc for it, which is getPkgInfoFromFile, but I can't seem to access it with import nimble/nimblepkg/packageparser.
Whenever I use that line I receive an error that there is no such file.
What am I doing wrong?
Further: getPkgInfoFromFile requires an Options instance, which it normally generates when parsing a CLI command. I don't have a CLI command, so I'm not generating such an instance. Can I use the proc somehow without one?
Thanks to ringabout I came to the correct solution, but first to the question.
Question 1: How do I access the proc in the first place?
You can access nimble like a package, but the import is not import nimble/nimblepkg/packageparser it is directly import nimblepkg/packageparser.
This requires you to have both nimble and the compiler installed as libraries.
So you'll have to install those first:
nimble install nimble
nimble install nim # Originally this was called "compiler", but was renamed to "nim"
Ignore any warnings if they pop up.
Now you can compile the following dummy-example:
#dummy.nim
import nimblepkg/packageparser
echo "Pointer to packageparser proc: ", packageparser.getPkgInfoFromFile.repr
with: nimble -d:ssl --mm:refc -r build (-d:ssl is required for nimble's HTTP client, and --mm:refc is required because nimble does not appear to work with orc)
Question 2: Can I run the getPkgInfoFromFile without an Options instance?
Yes-ish. You still need one, but it doesn't have to be a "real" one, you can just instantiate one yourself on the fly.
import nimblepkg/[options, packageinfotypes, packageparser]

proc generateDummyOptions(): Options =
  result = initOptions()
  result.setNimBin()
  result.setNimbleDir()

proc parseNimbleFile*(nimblePath: string): PackageInfo =
  let options = generateDummyOptions()
  result = getPkgInfoFromFile(nimblePath.NimbleFile, options)
I am trying to create several .qgs project files to be served at a later time by an instance of QGIS Server.
To this end I need to start a new PyQGIS application several times upon request. The application runs smoothly the first time it is called, but if I try to run it a second time I get a Segmentation Fault error.
Here is an example code that triggers the problem:
from qgis.core import QgsApplication
import os

os.environ["QT_QPA_PLATFORM"] = "offscreen"

for i in range(2):
    print(f'Iteration number {i}')
    print('\tSet prefix path')
    QgsApplication.setPrefixPath('/usr', False)
    print('\tInstantiating application')
    qgs = QgsApplication([], False)
    print('\tInitializing application')
    qgs.initQgis()
    print('\tExiting')
    qgs.exitQgis()
When executed, I get this output:
Iteration number 0
Set prefix path
Instantiating application
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
Initializing application
Exiting
Iteration number 1
Set prefix path
Instantiating application
Initializing application
Segmentation fault
Something similar happens if I enclose the content of the loop inside a function and call it multiple times. In this case the segmentation fault happens upon calling qgs.exitQgis() the second time (and any vector or raster layers added before that would be invalid).
Maybe the problem is that for some reason qgs.exitQgis() is not really cleaning up before exit?
The code is running on a Python:3.9 docker container that comes with Debian Bullseye.
QGIS was installed following the instructions from the QGIS docs:
https://qgis.org/en/site/forusers/alldownloads.html#debian-ubuntu. The QGIS version is 3.22.3 'Białowieża'.
To prevent an import error when loading qgis.core I had to set the environment variable PYTHONPATH = /usr/lib/python3/dist-packages/.
UPDATE: Following a suggestion from a comment on this post:
https://gis.stackexchange.com/questions/250933/using-exitqgis-in-pyqgis,
I substituted qgs.exitQgis() with qgs.exit(), and now the app can be instantiated any number of times without crashing.
It is still not clear what causes the segmentation fault, but at least I found this workaround.
It seems the problem was fixed in QGIS 3.24 'Tisler': qgs.exitQgis() can now be called in a loop without triggering a segmentation fault.
I send some samples for Sanger sequencing to a commercial facility. I'm able to read the files they send using the command
from Bio import SeqIO
from Bio import Seq
rec = SeqIO.read("isolation-round4/3dr23_Forward.ab1",'abi-trim').seq
But recently, due to a move, we had to send the samples elsewhere for sequencing. Now, if I try to run the same command on the output I get an error:
UnboundLocalError: local variable 'qual' referenced before assignment in
File "C:\Users\Anaconda3\lib\site-packages\Bio\SeqIO\AbiIO.py", line 462, in AbiIterator letter_annotations={"phred_quality": qual}
I would appreciate any help in dealing with this. Here are two files, one that works and one that does not, if you would like to have a look.
Thanks in advance for your help!
This bug should already be fixed in Biopython 1.77.
Update: See https://github.com/biopython/biopython/issues/3221 - it turned out to be a new, unexpected configuration of the ABI software that produces files with no quality scores.
I am using a batch file which calls an SPSS production job which runs many syntax files.
In the syntax files I want to able to check some variables, and if certain conditions are not met then I want to stop the production job, exit SPSS and return an error code to the batch file.
The batch file needs to stop running the next commands based on the error code returned. I know how to do this in the batch file already.
The most basic solution could be: if the error code is not 0 then stop, and output the error text to a separate text file from within the syntax. A bonus would be distinct error codes that I could then match to where in the syntax each code was thrown.
What is the best way to achieve this in the SPSS syntax and or production file?
One way to do this would be to run Statistics as an external-mode Python job. Then you could interrogate any results, catch exceptions, and set exit codes and messages however you like. Here is an example:
jobs.py:
import sys
sys.path.append(r"""c:/spss23/python/lib/site-packages""")
import spss

try:
    spss.Submit("""INSERT FILE="c:/temp/syntax1.sps".""")
except:
    print "syntax1.sps failed"
    exit(code=1)
try:
    spss.Submit("""INSERT FILE="c:/temp/syntax2.sps".""")
except:
    print "syntax2.sps failed"
    exit(code=2)
Then the bat file would do
python c:/myjobs/jobs.py
echo %ERRORLEVEL%
or similar. The job would need to save the output in an appropriate format using OMS or shell redirection. (The blocks after try and except should be indented.)
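The batch file's control flow can also be sketched in plain Python 3, independent of SPSS: run each job in sequence and stop at the first nonzero exit code, just as the .bat file does by checking %ERRORLEVEL% after each step. The toy commands below are stand-ins, not real SPSS invocations:

```python
import subprocess
import sys

def run_jobs(commands):
    """Run each command in order; return the first nonzero exit code, else 0."""
    for cmd in commands:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            return result.returncode  # propagate the failing job's code
    return 0

if __name__ == "__main__":
    # Two toy "jobs": the first succeeds, the second exits with code 2.
    code = run_jobs([
        [sys.executable, "-c", "raise SystemExit(0)"],
        [sys.executable, "-c", "raise SystemExit(2)"],
    ])
    print(code)  # → 2
```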
In external mode, you could use code like this or you could interrogate items in the Viewer.
import spss, spssdata

curs = spssdata.Spssdata("variable2")
for case in curs:
    if case[0] == 6:
        exit(99)
curs.CClose()