I've got 50+ measures and I want to create a histogram from binned data, switching the displayed measure dynamically. I know how to create bins manually, but is there any way to create bins dynamically?
I'm currently making a custom dataset with 1 class. The images I am labeling contain several of these objects each (between 30 and 70). I therefore wonder whether I should count each object in an image as "1 data point" when evaluating the size of the dataset.
I.e., do more objects per image mean fewer images are required?
Since this is a detection problem, the size of the dataset is given both by the number of images and by the number of objects. There is no reason to choose one of the two, because they are both equally important numbers.
If you really want to define "size", you probably have to start from the error metric. Usually for object detection mIoU (Mean Intersection over Union) is used. This metric works at the object level, so it doesn't care whether you have 10 images or 1 million.
Finally, it could be that having many objects per image allows you to use a smaller total number of images, but this can only be confirmed experimentally.
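As a point of reference for that metric, IoU is computed per pair of boxes; a minimal Python sketch, assuming axis-aligned boxes in [x1, y1, x2, y2] format (the format and the example values are illustrative assumptions, not from the question):

    def iou(box_a, box_b):
        """Intersection over Union of two axis-aligned boxes [x1, y1, x2, y2]."""
        # Intersection rectangle
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)

        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 ≈ 0.143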
Our core aims are:
A. to use Image Processing to read/scan an architectural Floor Plan Image (exported from a CAD software)
B. to extract the various lines and curves and group them into Structural Entities like walls, columns, beams etc. – 'Wall_01', 'Beam_03' and so on
C. to extract the dimensions of each of these Entities based on the scale and the length of the lines in the Floor Plan Image (since AutoCAD lines are dimensionally accurate as per the specified Scale)
D. to associate each of these Structural Entities (and their dimensions) with a 'Room'.
We have flexibility in that we can define the exact shapes of the different Structural Entities in the Floor Plan Image (rectangles for doors, rectangles with hatch lines for windows etc.) and export them into a set of images for each Structural Entity (e.g. one image for walls, one for columns, one for doors etc.).
For point ‘B’ above, our current approach based on OpenCV is as follows:
Export each Structural Entity into its own image
Use Canny and HoughLine Transform to identify lines within the image
Group these lines into individual Structural Elements (like ‘Wall_01’)
We have managed to detect/identify the line segments using Canny+HoughLine Transform with a reasonable amount of accuracy.
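For context, a minimal sketch of that detection step in Python/OpenCV; the file name and all threshold values below are placeholders, not the ones actually used:

    import cv2
    import numpy as np

    # "walls.png" is a placeholder for the exported 'Walls' image
    img = cv2.imread("walls.png", cv2.IMREAD_GRAYSCALE)

    # Edge detection, then probabilistic Hough transform for line segments
    edges = cv2.Canny(img, 50, 150, apertureSize=3)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                               minLineLength=30, maxLineGap=5)

    # Each segment is (x1, y1, x2, y2); draw them for inspection
    vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            cv2.line(vis, (x1, y1), (x2, y2), (0, 0, 255), 1)
    cv2.imwrite("segments.png", vis)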
[Image: Original Floor Plan Image]
[Image: Individual 'Walls' Image]
[Image: Line segments identified using Canny+HoughLine]
So the current question is - what is the best way to group these lines together into a logical Structural Entity like ‘Wall_01’?
Moreover, are there any specific OpenCV based techniques that can help us group the line segments into logical Entities? Are we approaching the problem correctly? Is there a better way to solve the problem?
Update:
Adding another example of a valid wall input image.
You mention "exported from a CAD software". If the export format is PDF, it contains vector data for all graphic elements. You might be better off trying to extract and interpret that. Seems a bit cumbersome to go from a vector format to a pixel format which you then try to bring back to a numerical model.
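If the export really is a PDF, a library such as PyMuPDF can return the page's vector drawing commands directly; a rough sketch follows, where the file name is a placeholder and the exact item layout should be checked against the docs of the PyMuPDF version in use:

    import fitz  # PyMuPDF

    doc = fitz.open("floorplan.pdf")   # placeholder file name
    page = doc[0]

    # get_drawings() returns the page's vector paths (lines, curves, rectangles)
    for path in page.get_drawings():
        for item in path["items"]:
            if item[0] == "l":          # straight line: origin point, end point
                print("line", item[1], item[2])
            elif item[0] == "re":       # rectangle
                print("rect", item[1])
            elif item[0] == "c":        # Bezier curve with its control points
                print("curve", item[1:])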
If you have clearly defined constraints as to what your walls, doors, etc. will look like in your image, you would use exactly those. If you are generating the CAD exports yourself, modify the settings there so as to facilitate this.
For instance, the doors are all brown and are closed figures.
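A rough sketch of exploiting such a constraint in Python/OpenCV, assuming the doors really are a distinct brown; the file name and the BGR range below are pure placeholders to be tuned:

    import cv2
    import numpy as np

    img = cv2.imread("floorplan.png")          # placeholder file name

    # Keep only "brown-ish" pixels; this range is a guess and must be tuned
    lower = np.array([20, 50, 100])            # B, G, R lower bound (placeholder)
    upper = np.array([80, 120, 200])           # B, G, R upper bound (placeholder)
    mask = cv2.inRange(img, lower, upper)

    # Each closed brown figure becomes one contour, i.e. one candidate door
    # (OpenCV 4 returns (contours, hierarchy))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for i, cnt in enumerate(contours):
        x, y, w, h = cv2.boundingRect(cnt)
        print(f"Door_{i:02d}: bounding box ({x}, {y}, {w}, {h})")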
The same goes for grouping the walls. In the figures, it looks like you can group based on proximity (i.e., anything within X pixels of another segment belongs to the same group), although the walls to the right of and below the text 'C7' may then get grouped into one.
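A minimal sketch of that proximity idea, assuming the segments come from cv2.HoughLinesP as (x1, y1, x2, y2) rows; the distance threshold is a placeholder and would need tuning:

    import numpy as np

    def group_segments(segments, max_dist=10.0):
        """Group segments whose endpoints lie within max_dist pixels of each other.

        segments: array of shape (N, 4) with rows (x1, y1, x2, y2).
        Returns one group label per segment (e.g. group 0 -> 'Wall_00').
        """
        n = len(segments)
        parent = list(range(n))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        def union(i, j):
            parent[find(i)] = find(j)

        pts = segments.reshape(n, 2, 2).astype(float)   # two endpoints per segment
        for i in range(n):
            for j in range(i + 1, n):
                # minimum endpoint-to-endpoint distance between the two segments
                d = np.linalg.norm(pts[i][:, None, :] - pts[j][None, :, :], axis=-1).min()
                if d <= max_dist:
                    union(i, j)

        roots, labels = {}, []
        for i in range(n):
            labels.append(roots.setdefault(find(i), len(roots)))
        return labels

    segs = np.array([[0, 0, 100, 0], [102, 0, 102, 50], [300, 300, 400, 300]])
    print(group_segments(segs))   # [0, 0, 1]: the first two segments form one entity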
If you do not have clear definitions, you may be looking at a generic image recognition problem, which means AI or machine learning. That would require a large variety of inputs to learn from, and may get very complex.
Can I use a PCA subspace trained on, say, eight features and one thousand time points to evaluate a single reading? That is, if I keep, say, the top six components, my transformation matrix will be 8x6, and using it to transform test data of the same size as the training data would give me a 6x1000 matrix.
But what if I want to look for anomalies at each time point independently? That is, rather than using an 8x1000 test set, can I apply 1000 separate transformations to 8x1 test vectors and get the same result? Such a vector gets transformed to exactly the same spot as if it were a column of the much larger data matrix, but the distance of that one vector from the principal subspace doesn't appear to be meaningful. When I perform the same procedure on the truncated reference data, this distance isn't zero either; only the sum of all distances over the entire reference data set is zero. So if I can't show that the reference data is not "anomalous", how can I use this on test data?
Is it the case that the size of the data "object" used to train the PCA dictates the size of the object that can be evaluated with it?
Thanks for any help you can give.
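As a sketch of the point in question, here is a small scikit-learn example with synthetic data shaped like the 8-feature, 1000-time-point setup above (all numbers are placeholders). It shows that a single reading projects to the same coordinates whether it is transformed alone or as part of the full matrix, and that a per-reading anomaly score can be taken as the reconstruction error, thresholded against the distribution of that error on the reference data:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X_ref = rng.normal(size=(1000, 8))        # reference data: 1000 time points x 8 features

    pca = PCA(n_components=6).fit(X_ref)      # keep the top 6 components

    # Transforming one 8-dimensional reading gives exactly the same coordinates
    # as transforming the full matrix and taking the corresponding row.
    Z_all = pca.transform(X_ref)              # shape (1000, 6)
    z_one = pca.transform(X_ref[:1])          # shape (1, 6)
    assert np.allclose(Z_all[0], z_one[0])

    # Per-reading anomaly score: distance from the 6-D principal subspace,
    # i.e. the reconstruction error in the original 8-D space.
    X_hat = pca.inverse_transform(Z_all)
    ref_errors = np.linalg.norm(X_ref - X_hat, axis=1)

    # These errors are not zero individually, even for the reference data; a
    # threshold (e.g. a high percentile of the reference errors) defines what
    # counts as "anomalous" for a new reading.
    threshold = np.percentile(ref_errors, 99)

    x_new = rng.normal(size=(1, 8)) * 5       # a single new reading to evaluate
    err_new = np.linalg.norm(x_new - pca.inverse_transform(pca.transform(x_new)))
    print(err_new > threshold)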
The following is in reference to dynamic 16-bit images in ImageJ64.
I am aiming to "plot" a rate of change for each pixel across the whole sequence of images (60 per set) and to use the gradient values of that plot to represent the change in each pixel over time, thus displaying dynamic data as a still image. Any ideas on where to start, and any tools that may be of use?
There are many possible "rates of change"; everything depends on the particular application. Some possible solutions include (assuming that pix is the set of a particular pixel's values across your images):
the values' amplitude: max(pix) - min(pix)
the values' variance (or standard deviation): var(pix) (or std(pix))
more complex functions can be used if you are interested in the actual "visual change" rather than simple pixel-value change, for example by computing the variance of directional partial derivatives, etc. As stated before, everything depends on your application and on what kind of change you are interested in.
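A rough numpy sketch of these options (in Python rather than ImageJ, and assuming the 60 frames have been loaded into a single frames x height x width array, e.g. from an exported TIFF stack); the per-pixel slope of a least-squares line fit over time is one concrete "rate of change":

    import numpy as np

    # Placeholder stack: 60 frames of 128x128 16-bit data.
    # In practice, load the exported sequence instead, e.g. with tifffile.imread("seq.tif")
    stack = np.random.randint(0, 2**16, size=(60, 128, 128)).astype(np.float64)

    t = np.arange(stack.shape[0], dtype=np.float64)
    t_c = t - t.mean()

    # Per-pixel slope of a straight-line fit over time:
    # slope = sum_t (t - mean_t) * (pix_t - mean_pix) / sum_t (t - mean_t)^2
    pix_c = stack - stack.mean(axis=0, keepdims=True)
    slope = (t_c[:, None, None] * pix_c).sum(axis=0) / (t_c ** 2).sum()

    # The simpler measures from the list above:
    amplitude = stack.max(axis=0) - stack.min(axis=0)   # max(pix) - min(pix)
    variance = stack.var(axis=0)                        # var(pix)

    # Each of slope / amplitude / variance is a single image that summarizes
    # the dynamic sequence and can be displayed as a still.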
I am going to develop an app which takes current and voltage as input values. I need to display, in a graph, values that are generated continuously by applying certain calculations to those inputs. I have searched many web pages but I am not able to understand how to do it. Is there any way to plot a line graph with points added dynamically in iOS?