Caffe HDF5 Output Layer error - looking for a working example

All - I'm having some trouble getting HDF5 output files to work in Caffe. Has anyone successfully used the HDF5 output layer in Caffe? If so, could you provide an example or help me debug my definition? I'm not able to find any public examples or tutorials using the HDF5 output feature, so I'm afraid it might not be very robust yet. Thanks in advance.
Here is my prototxt:
layer {
  type: "HDF5Output"
  name: "hdf5output"
  bottom: "Ytest"
  bottom: "ip2"
  hdf5_output_param {
    file_name: "./datah5/output.h5"
  }
  include { phase: TEST }
}
The Caffe error snippet is copied below. The output file ./datah5/output.h5 exists and has some data in it. HDF5 input seems to work OK.
I0803 20:30:36.776832 27929 solver.cpp:338] Iteration 0, Testing net (#0)
I0803 20:30:36.785679 27929 hdf5_output_layer.cpp:32] Saving HDF5 file ./datah5/output.h5
I0803 20:30:36.785854 27929 hdf5_output_layer.cpp:37] Successfully saved 100 rows
I0803 20:30:36.792243 27929 hdf5_output_layer.cpp:32] Saving HDF5 file ./datah5/output.h5
HDF5-DIAG: Error detected in HDF5 (1.8.11) thread 70366426137120:
  #000: ../../../src/H5D.c line 170 in H5Dcreate2(): unable to create dataset
    major: Dataset
    minor: Unable to initialize object
  #001: ../../../src/H5Dint.c line 439 in H5D__create_named(): unable to create and link to dataset
    major: Dataset
    minor: Unable to initialize object
  #002: ../../../src/H5L.c line 1638 in H5L_link_object(): unable to create new link to object
    major: Links
    minor: Unable to initialize object
  #003: ../../../src/H5L.c line 1882 in H5L_create_real(): can't insert link
    major: Symbol table
    minor: Unable to insert object
  #004: ../../../src/H5Gtraverse.c line 861 in H5G_traverse(): internal path traversal failed
    major: Symbol table
    minor: Object not found
  #005: ../../../src/H5Gtraverse.c line 641 in H5G_traverse_real(): traversal operator failed
    major: Symbol table
    minor: Callback failed
  #006: ../../../src/H5L.c line 1674 in H5L_link_cb(): name already exists
    major: Symbol table
    minor: Object already exists
F0803 20:30:36.792457 27929 hdf5.cpp:101] Check failed: status >= 0 (-1 vs. 0) Failed to make float dataset data
*** Check failure stack trace: ***
# 0x3fff835520f0 (unknown)

Your .prototxt is perfectly fine. The reason you are getting this error is that you are running the network for multiple test iterations. In the first iteration, the HDF5Output layer creates the file ./datah5/output.h5; in the next iteration it tries to create the same dataset in that file again and fails, because it already exists.
To deal with this you can do one of two things:
1. Run only a single batch at a time, then rename/move the output before running another batch (a sketch of this follows below).
2. Edit the Caffe code to use the iteration count in the output file name. See this thread: https://groups.google.com/forum/#!topic/caffe-users/zkGKk5UbInI
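For reference, here is a minimal sketch of option 1, driving the Caffe command line from Python; the model/weights file names and the batch count are assumptions:
import shutil
import subprocess

# Run one test batch at a time, then move the HDF5 output out of the way
# so the next run can recreate it cleanly. 'net.prototxt' and
# 'net.caffemodel' are placeholder names.
for i in range(10):  # number of batches is an assumption
    subprocess.run(['caffe', 'test',
                    '-model', 'net.prototxt',
                    '-weights', 'net.caffemodel',
                    '-iterations', '1'],
                   check=True)
    shutil.move('./datah5/output.h5', './datah5/output_%03d.h5' % i)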

Related

Experiment does not run due to error messages

I have started programming an Arithmetic Strategy Use task in PsychoPy. The idea is to have a total of 80 arithmetic problems across 4 conditions: single addition (20 problems), single subtraction (20 problems), double addition (20 problems), and double subtraction (20 problems).
What I have done so far:
I created 4 Excel sheets, one per condition, with 20 arithmetic problems each
I inserted a routine called Trial and added 4 loops: Single Subtraction, Single Addition, Double Subtraction, and Double Addition
I included a strategy report question after each trial
I tried to run the experiment; however, several error messages keep popping up and I am not sure how to troubleshoot them! Please find the error messages below:
Traceback (most recent call last):
File "/Users/nina/Desktop/PsychoPy.app/Contents/Resources/lib/python3.8/psychopy/app/builder/builder.py", line 1419, in onPavloviaRun
File "/Users/nina/Desktop/PsychoPy.app/Contents/Resources/lib/python3.8/psychopy/app/builder/builder.py", line 1413, in onPavloviaSync
File "/Users/nina/Desktop/PsychoPy.app/Contents/Resources/lib/python3.8/psychopy/app/pavlovia_ui/project.py", line 844, in syncProject
File "/Users/nina/Desktop/PsychoPy.app/Contents/Resources/lib/python3.8/psychopy/app/pavlovia_ui/functions.py", line 148, in showCommitDialog
File "/Users/nina/Desktop/PsychoPy.app/Contents/Resources/lib/python3.8/psychopy/projects/pavlovia.py", line 1167, in commit
File "/Users/nina/Desktop/PsychoPy.app/Contents/Resources/lib/python3.8/git/cmd.py", line 542, in
File "/Users/nina/Desktop/PsychoPy.app/Contents/Resources/lib/python3.8/git/cmd.py", line 1005, in _call_process
File "/Users/nina/Desktop/PsychoPy.app/Contents/Resources/lib/python3.8/git/cmd.py", line 822, in execute
git.exc.GitCommandError: Cmd('/Users/nina/Desktop/PsychoPy.app/Contents/Resources/git-core/git') failed due to: exit code(128)
cmdline: /Users/nina/Desktop/PsychoPy.app/Contents/Resources/git-core/git commit -m _
stderr: 'fatal: Unable to create '/Users/ninajost/Desktop/.git/index.lock': File exists.
Another git process seems to be running in this repository, e.g.
an editor opened by 'git commit'. Please make sure all processes
are terminated then try again. If it still fails, a git process
may have crashed in this repository earlier:
Any tips would be greatly appreciated!
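For what it's worth, git's own message points at a stale lock file left by a crashed process. A minimal cleanup sketch, assuming no other git process is still running (the path comes from the error above):
import os

# Remove the stale index.lock left behind by a crashed git process.
# Only do this after confirming no git process is still running.
lock = "/Users/ninajost/Desktop/.git/index.lock"
if os.path.exists(lock):
    os.remove(lock)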

Error when opening a .nc file with the raster package

I'm new to the raster package in R. I was trying to open a .nc file with raster and an error popped out. In case you want to try, I was using this dataset from Copernicus of monthly sea salinity for the years 2018 and 2019 (the grid covers the Quebec St. Lawrence estuary and the Gaspésie coast).
I have opened similar data files before but never got this error, and a search online did not clarify much.
Here is my script:
library(raster)
library(ncdf4)
#Load the .nc files describing SSS.
SSS = stack('SSS.nc')
and the output error:
Warning message:
In .rasterObjectFromCDF(x, type = objecttype, band = band, ...) :
"level" set to 1 (there are 17 levels)
Thanks.
I expected to create a RasterStack object to work with.
There is no error; there is a warning. The file has 17 levels and only one is read at a time. If you want another level you can select it, e.g. stack('SSS.nc', level = 2).
Either way, the "raster" package is obsolete, and you should try this with terra.
You can probably do
library(terra)
x <- rast('SSS.nc')
"Probably" because you do not provide a file. You should at least include a hyperlink to the file you are using. It is hard to help you without your example being reproducible.

How to use the marginal and probability methods in pycrfsuite.Tagger()

The documentation is not helpful to me at all.
First, I tried using set(), but I don't understand what it means by
set an instance for future calls
I could successfully feed my data using my dataset's structure described below, so I am not sure why I need to use set() as the documentation mentions.
Here is my feature sequence (of type scipy.sparse) after I called the nonzero() method:
[['66=1', '240=1', '286=1', '347=10', '348=1'],...]
where ... implies the same structure as the previous elements.
The second problem I encountered is with Tagger.probability() and Tagger.marginal().
For Tagger.probability(), I used the same input as Tagger.tag(), and I get an error.
If my input is just a list instead of a list of lists, I get the following error:
Traceback (most recent call last):
File "cliner", line 60, in <module>
main()
File "cliner", line 49, in main
train.main()
File "C:\Users\Anak\PycharmProjects\CliNER\code\train.py", line 157, in main
train(training_list, args.model, args.format, args.use_lstm, logfile=args.log, val=val_list, test=test_list)
File "C:\Users\Anak\PycharmProjects\CliNER\code\train.py", line 189, in train
model.train(train_docs, val=val_docs, test=test_docs)
File "C:\Users\Anak\PycharmProjects\CliNER\code\model.py", line 200, in train
test_sents=test_sents, test_labels=test_labels)
File "C:\Users\Anak\PycharmProjects\CliNER\code\model.py", line 231, in train_fit
dev_split=dev_split )
File "C:\Users\Anak\PycharmProjects\CliNER\code\model.py", line 653, in generic_train
test_X=test_X, test_Y=test_Y)
File "C:\Users\Anak\PycharmProjects\CliNER\code\machine_learning\crf.py", line 220, in train
train_pred = predict(model, X) # ANAK
File "C:\Users\Anak\PycharmProjects\CliNER\code\machine_learning\crf.py", line 291, in predict
print(tagger.probability(xseq[0]))
File "pycrfsuite/_pycrfsuite.pyx", line 650, in pycrfsuite._pycrfsuite.Tagger.probability
ValueError: The numbers of items and labels differ: |x| = 12, |y| = 73
For Tagger.marginal(), I can only produce an error similar to the first error from Tagger.probability().
Any clue on how to use these three methods? Please give me short examples of their use cases.
I feel like there must be some examples of these three methods, but I couldn't find any. Am I looking in the right place? This is the website I am reading the documentation from: https://python-crfsuite.readthedocs.io/en/latest/pycrfsuite.html
Additional info: I am using CliNER, in case any of you are familiar with it.
I know this question is over a year old, but I just had to figure out the same thing as well -- I am also leveraging some of the CliNER framework. For the CliNER-specific solution, I forked the repo and rewrote the predict method in the ./code/machine_learning/crf.py file.
To obtain the marginal probabilities, you need to add the following line to the for loop that iterates over the pycrf_instances, after yseq is created (see line 196 here):
y_probs = [tagger.marginal(y, ii) for ii, y in enumerate(yseq)]
And then you can return that list of marginal probabilities from the predict method -- you will in turn need to rewrite additional functions to accommodate this change.
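For a more general picture, here is a minimal sketch of the three methods outside CliNER; 'model.crfsuite' and the feature sequence are placeholders. The key point is that probability() expects a label sequence of the same length as the bound instance, not the feature lists themselves, which is what the ValueError above (|x| = 12 vs |y| = 73) is complaining about:
import pycrfsuite

tagger = pycrfsuite.Tagger()
tagger.open('model.crfsuite')  # placeholder trained model file

# One list of 'feature=value' strings per token, as in the question.
xseq = [['66=1', '240=1', '286=1'], ['347=10', '348=1']]

tagger.set(xseq)                 # bind this instance for the calls below
yseq = tagger.tag()              # best label sequence for the bound instance
p = tagger.probability(yseq)     # probability of a full label sequence (|y| == |x|)
m = tagger.marginal(yseq[0], 0)  # marginal probability of label yseq[0] at position 0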

How can I read a directory on iso9660 from the path table when the table does not include size?

According to the spec for the structure of ISO 9660 / ECMA-119, the path table contains records for each path, including the location of the starting sector and its name, but not its size. I can find the directory entry, but I don't know how many sectors (normally 2048 bytes each) it contains. Is it one? Two? Six?
If I "walk the directory tree", each directory entry includes the referenced location and size, so I can know how many bytes (essentially, how many sectors, since a directory must use entire sectors) to read. However, the path table only includes the starting location, and not the size, leaving me not knowing how many bytes to read.
In an example iso I have (ubuntu-18.04.1-live-server-amd64.iso fwiw), the root directory entry in the primary volume descriptor shows:
Root Directory:
Directory Record Length: 34
Extended Attribute Length: 0
Location of Extent: 20 $00000014 00:00:20
Data Length: 2048 $00000800
Recording Date and Time: 23:39:04 07/25/2018 GMT 0
File Flags: $02 visible regular dir non-record no-perms single-extent
File Unit Size: 0
Interleave Gap Size: 0
Volume Sequence Number: 1
File Identifier: . (current directory)
Since it says the Data Length is 2048, I know to read just one sector.
However, the root directory entry in the path table shows:
Path Record Length: 10 $0A
Extended Attribute Length: 0 $00
Location of Extent: 20 $00000014 00:00:20
Parent Directory Number: 1 $0001
File Identifier: . (current directory)
It also points to sector 20, but doesn't tell me how many sectors it uses, leaving me guessing.
Yes, unused bytes in a sector should be all 0x00, so if I read in a sector, read records, and then come to one whose first byte (length) is 0x00, I know I have reached the end of the records. But that has three issues:
1. If that were the canonical way, why bother including the size in the directory entry?
2. If the directory occupies 2 or 3 sectors, it is more efficient for me to read them all at once than one at a time.
3. If I have a directory whose records precisely fill a sector, then without some size attribute I don't know whether the next sector should be read as more entries, or whether the directory ended there.
Basically, I know how to read the ordered path table to get the directory entry, but don't know how to use that to know how many sectors to read for the directory itself. I could, in theory, read the parent to get the entry for this directory to know the size, but that adds a seek and read and pretty much defeats the purpose of the path table.
Ah, I figured it out. Because a directory's entries always start with a record for the directory itself, and the data length is always at bytes 10-17 of that record (10-13 little-endian, 14-17 big-endian), you can just read those bytes from the beginning of the directory's first sector and get the size. Still not as efficient as putting it in the path table itself - no idea why they did not - but it works.
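A minimal sketch of that trick in Python; the file name and sector number come from the example above, and the rest follows the standard directory record layout:
import struct

SECTOR = 2048  # logical sector size of the volume

def dir_size_from_extent(iso, extent_lba):
    # The first record in a directory's extent describes the directory
    # itself ('.'); its Data Length field occupies byte offsets 10-17,
    # stored both-byte-order: 10-13 little-endian, 14-17 big-endian.
    iso.seek(extent_lba * SECTOR)
    record = iso.read(18)
    return struct.unpack_from('<I', record, 10)[0]

with open('ubuntu-18.04.1-live-server-amd64.iso', 'rb') as iso:
    print(dir_size_from_extent(iso, 20))  # root extent at sector 20 -> 2048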

WTX test data for running the maps

I have created a mapping of copybook elements to WSDL fields, and the map was built successfully. But while running the map locally, I am getting one of two errors for the two different operations that I am mapping:
1) For the first mapping: 'Input valid but unknown data found', and in the trace logs I am getting: INPUT 1 exists (3012 bytes) but has no content.
More details of error:
(Level 0: Offset 0, len 0, comp 1 of 0, #1, DI 000000000001:)
Data at offset 0 ('<retrFunction1'
of TYPE X'0004' (retrFunction1Request retrFunction1Request Message WSDLService).
INPUT 1 exists (3012 bytes) but has no content.
End of Validation messages for INPUT CARD 1.
(Offset 26130: Map Number 0 (CobolFuncData), DI 000000000000:)
TYPE X'0148' (COBOL_OBJECT Group CopyBook) has been built.
(Offset 26130: Map Number 0 (CobolFuncData), DI 000000000000:)
TYPE X'0124' (000_COBOL_OPERATION Record CopyBook) has been built.
OUTPUT 1 was built successfully.
2) For the second mapping: 'One or more inputs are invalid', and in the trace logs I am getting: INPUT 1 exists, but its type is in error.
Further, for case 2 I am getting:
(Level 3: Offset 0, len 0, comp 1 of 2, #1, DI 000000000001:)
Data at offset 0 ('xmlns'
of TYPE X'0008' (Prefix XMLS WSDLService).
I think the issue is not with the mapping of the WSDL type trees to the COBOL type trees, but with the XML request and response data that I am using for running these maps locally. Are there any guidelines I can use to create the correct input and run the map locally in WTX?
PS: I am creating the type tree by importing the WSDL, not the XSD. I am not getting the node 'DOC' in the type tree when I import my WSDL. In this case, what type tree level should I be using for creating my map? I have tried WSDLService -> Type -> ~TypeName -> TypeDef and WSDLService -> Type -> ~TypeName -> Seq.
I found one reason for this issue myself: the request XML did not match the type tree level I was using in the mapping for the transformation. The better way to do this mapping, if you are using a WSDL instead of an XSD (and 'DOC XSD' doesn't show up in your WSDL/XSD type tree), is to use as the type for your input card the 'input' for your operation from the WSDL (as an example: Input yourOperationName Operation yourWSDLService).
As a rule, it is most important to fully understand your WSDL, your WSDL operation, and your XSD structures in order to create mappings for your transformations.
