I'm trying to use the new 'evaluation' action after inference to generate some metrics for my output. However, the .csv files just show scores of 0 for average_distance and 1 for Jaccard and Dice for each of my data volumes. I can't find any documentation for the evaluation action, so I'm not sure what I'm doing wrong. Also, the --dataset_to_infer=Validation option doesn't seem to work: both inference and evaluation are applied to all of the data rather than just the validation set.
Thanks!
For the evaluation issue, we're working on the documentation. The dataset_to_infer option is only tested for the applications in NiftyNet/niftynet/application; applications from the model zoo have not been upgraded to support it yet (please file an issue with more details at https://github.com/NifTK/NiftyNet/issues if you believe it's a bug).
For the time being, pointing config.ini directly at the inference result worked for me, e.g.:
[inferred]
csv_file = model_dir/save_seg_dir/inferred.csv
I believe this file is currently not found, so evaluation falls back to comparing the ground-truth labels with themselves, which would explain the Dice and Jaccard scores of 1 and the average_distance of 0. See the issue on GitHub.
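In addition to the [inferred] section above, evaluation is configured through its own section. As a rough sketch (I haven't verified the exact key names, so treat them as assumptions that may differ between NiftyNet versions):

[EVALUATION]
save_csv_dir = ./eval
evaluations = dice,jaccard,average_distance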
In the course of debugging issues, I've found it hard to decipher exactly which tasks are causing problems. I've used the 'dask_key_name' kwarg successfully in delayed tasks to assign a human-readable name to the key for those delayed tasks (based on the documentation here: https://docs.dask.org/en/latest/delayed-api.html). I've tried to do the following in the hopes that it would do the same for the read_parquet tasks, but it appears it still uses a hashed value to create the key (e.g., ('read-parquet-ed9e6c4c474e851e176e7eafb8753490', 5)).
item = 'custom_string'
self.all_pfs_dict['read'][item] = dd.read_parquet(
    item_to_read,
    index=False,
    gather_statistics=False,
    dask_key_name=item + '-read',
)
Am I doing something wrong or is there an alternative way to name dask dataframe tasks?
There is no way to rename dataframe tasks like this today.
I previously had a similar question, and as far as I can tell Dask does not support this, except via the from_pandas() method.
from_pandas() has a name parameter for setting the key, but other readers like read_parquet() don't.
So if you want this behaviour, you would need to change the Dask code linked above.
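To illustrate the one exception, here is a minimal sketch using the name parameter of from_pandas() (the data here is made up):

import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame({'a': range(10)})  # hypothetical data
# from_pandas() lets you pick the key prefix; the graph keys become
# ('custom-name', 0), ('custom-name', 1), ...
ddf = dd.from_pandas(df, npartitions=2, name='custom-name')
print(list(ddf.dask))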
Please, I need a solution for correcting and running this code for Mask R-CNN.
As commented by Gtomika, please provide the error trace in text format. Also provide more details about the repo where you got the model from, and any other information that you think is relevant.
Based on my past experience, I'm pretty sure that you are using Matterport's Mask R-CNN and that your issue is due to a class count mismatch: you are trying to load the weights of a model that was trained on a different class count. You should exclude layers such as 'mrcnn_bbox_fc' and 'mrcnn_class_logits'.
The fix is to change

model.load_weights(COCO_MODEL_PATH, by_name=True)

to

model.load_weights(COCO_MODEL_PATH, by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])
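For context, a minimal sketch of how this looks in Matterport's repo (the config values and file paths below are hypothetical; adjust NUM_CLASSES to your own dataset):

import mrcnn.model as modellib
from mrcnn.config import Config

class MyConfig(Config):
    NAME = "my_dataset"   # hypothetical name
    NUM_CLASSES = 1 + 3   # background + your class count (hypothetical)

config = MyConfig()
model = modellib.MaskRCNN(mode="training", config=config, model_dir="./logs")
# Exclude the heads whose shapes depend on the class count, so the
# COCO weights (trained on 80 classes + background) load cleanly:
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])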
Refer to this issue for more information.
(From https://groups.google.com/d/msg/bazel-discuss/cIBIP-Oyzzw/caesbhdEAAAJ)
What is the recommended way for rules to export information about failures, such that downstream tools can include them in UIs?
Example use case:
Suppose I run bazel test //my:target, and one of the actions for //my:target fails because there is an unknown variable "usrname" in my/target.foo at line 7, column 10. The checker would also like to report that "username" is a valid variable and a possible intended spelling, and thus suggest adding an "e" character.
One way I have thought of doing this is to have my action produce a separate file, //my:target.errors, placed in a separate output group, and have it write machine-parseable data there in addition to the human-readable data on stdout.
I can then find all of these files and parse the data in them in downstream tools.
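As a sketch of what I mean (rule, attribute, and tool names here are hypothetical, not a tested implementation):

def _checked_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".out")
    errors = ctx.actions.declare_file(ctx.label.name + ".errors")
    # The checker must write the .errors file even on success, since Bazel
    # treats a missing declared output as a failure of the action itself.
    ctx.actions.run(
        outputs = [out, errors],
        inputs = ctx.files.srcs,
        executable = ctx.executable._checker,
        arguments = [f.path for f in ctx.files.srcs] + [out.path, errors.path],
    )
    return [
        DefaultInfo(files = depset([out])),
        OutputGroupInfo(errors = depset([errors])),
    ]

checked = rule(
    implementation = _checked_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "_checker": attr.label(default = "//tools:checker", executable = True, cfg = "exec"),
    },
)

Downstream tools would then request the files with bazel build //my:target --output_groups=errors and parse them.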
Is there any prior work on this, or does every tool just try to parse the human-readable output?
I recommend running the error checkers as extra actions.
I don't think Bazel currently has hooks for custom error handlers like you describe. Please consider opening a feature request: https://github.com/bazelbuild/bazel/issues/new
I'm using XmlProvider from the FSharp.Data package like this:
type internal MyProvider = XmlProvider<Sample = @"C:\test.xml">
The test.xml file contains a total of 151,838 lines, which make up 15 types.
Working in the same project as the MyProvider type declaration is a pain, as the XmlProvider seems to be triggered every time I hit CTRL+SPACE (Edit.CompleteWord), and therefore regenerates all the models, which can take up to 10 seconds.
Is there any known workaround, or a setting to cache the generated models from XmlProvider?
I'm afraid F# Data does not currently have any caching mechanism for the inferred schema. It sounds like something that should not be too hard to add - if anyone is interested in contributing, please open an issue on GitHub to start the discussion!
My recommendation for the time being would be to simplify the sample XML, so that it is shorter and contains just a few representative records of each of the different kinds.
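For example, instead of pointing at the full file, you can inline a small representative sample directly in the static parameter (the XML below is made up):

type internal MyProvider = XmlProvider<"""
<catalog>
  <item id="1" name="example"><price currency="USD">10.0</price></item>
  <item id="2" name="another"><price currency="EUR">12.5</price></item>
</catalog>
""">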
I am trying to understand the blobtrack.cpp code provided as a sample with OpenCV. This code uses a class named CvBlobTrackerAuto. I tried to find some documentation about this class, but what I found does not provide a detailed explanation.
I am particularly interested in the CvBlobTrackerAuto::Process(IplImage *pImg, IplImage *pMask = NULL) function. What does it do, and what is the task of the mask used here?
Thank you in advance
I've been working with CvBlobTrackerAuto over the last few weeks. Here are some of the things I have figured out.
CvBlobTrackerAuto::Process is used to process the last captured image in order to update the tracking information (blob ids and positions). Actually, CvBlobTrackerAuto is an abstract class since it doesn't provide an implementation for CvBlobTrackerAuto::Process. The only concrete implementation there is (as far as I can tell) is CvBlobTrackerAuto1, which can be found in blobtrackingauto.cpp.
What CvBlobTrackerAuto1::Process does is to implement the following pipeline:
Foreground detection: this produces a binary mask corresponding to the foreground.
Blob tracking: updates the position of blobs. It may use mean shift, particle filters or a combination of these.
Post processing: (I'm not sure what this stage does).
Blob deletion: it is "experimental and simple" according to a comment in there. It deletes blobs which have been too small or near the image borders in the last frames.
Blob detection: detects new blobs. See enteringblobdetection.cpp.
Trajectory generation: (not sure what it does).
Track analysis: (not sure what it does, but I do remember reading the code and deciding that it had no influence on blob tracking, so I disabled it).
In this particular implementation of CvBlobTrackerAuto::Process, the pMask parameter is used for nothing at all. It has a default value of NULL and it is assigned to a variable once, only to be overwritten some lines later.
The OpenCV sample found in samples/c/blobtrack_sample.cpp is built around this CvBlobTrackerAuto1 class, providing different options for each module in the pipeline.
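For reference, the per-frame usage boils down to something like this (a sketch only; it assumes pCapture and pTracker were created as in blobtrack_sample.cpp, with the legacy cvaux headers):

IplImage* pImg = cvQueryFrame(pCapture);   // grab the next frame
pTracker->Process(pImg, NULL);             // pMask is effectively ignored here
for (int i = pTracker->GetBlobNum(); i > 0; i--)
{
    CvBlob* pBlob = pTracker->GetBlob(i - 1);
    printf("blob %d at (%.1f, %.1f), size %.1fx%.1f\n",
           CV_BLOB_ID(pBlob), pBlob->x, pBlob->y, pBlob->w, pBlob->h);
}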
I hope it helps.
I was directed to this link when I posted the same question to the OpenCV mailing group. This doc explains the OpenCV blob tracker and its modules.
Hope this helps anyone interested.