Which is the better approach to training on images? - opencv

Can I put the training data into separate directories for each class, loop through the images in each directory, and set the labels based on the directory? For example, I put 50 positive images in one directory and assign all of them the label 1, and 50 negative images in another directory and assign all of them -1. Is this the right approach, or will it confuse the trainer?
string PosImagesDirectory="E:\\faces\\";
string NegImagesDirectory_2="D:\\not_faces\\";
I first loop through all the images of faces and assign them 1, and then loop through not_faces and assign them -1.
Or should I use the approach of having only one directory, like
string YourImagesDirectory_2="D:\\images\\";
which contains both positive and negative images, take images randomly, and mark each image as positive or negative, but I am not clear about this approach.
I want to train on my data using feature algorithms like SIFT/HOG/BoW.

I don't understand your second approach. Do you mean to label them manually one image at a time when they are loaded?
I think that the first approach is ok. You do not need to label them manually, just iterate and label them.
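A minimal sketch of that first approach in Python (the function name `load_labeled_paths` and the directory layout are just illustrative; plug in `cv2.imread` or your feature extractor where you consume the paths):

```python
import os

def load_labeled_paths(pos_dir, neg_dir):
    """Return a list of (image_path, label) pairs: +1 for positives, -1 for negatives."""
    samples = []
    for directory, label in ((pos_dir, 1), (neg_dir, -1)):
        # iterate the directory and assign the label implied by it
        for name in sorted(os.listdir(directory)):
            samples.append((os.path.join(directory, name), label))
    return samples
```

Shuffling the combined list before training is usually a good idea, so the classifier does not see all positives first.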


how to encrypt parts of image in least significant bits of another image?

I want to encrypt some parts of an image and embed them into the least significant bits of another image. I have the pictures in small PictureBox controls in a Windows Forms C# application. Can anyone help me with encrypting these blocks?
I doubt it is the correct or fastest way, but likely the easiest way is to just use the modulus operator. For instance, suppose you want to squeeze two images into one that holds grayscale data in byte format (0-255). For simplicity, let's assume you want an even split of 4 bits per image: 2^4 = 16. Take every pixel in the first image and mod it:
pic1Pixel = pic1Pixel - pic1Pixel % 16
that is going to peel the bottom significance out of that image. Then in the other image do this:
pic2Pixel = floor(pic2Pixel / 16)
Do whatever you need to do (casting and floor or whatnot) to ensure the operation happens and then is rounded correctly (language dependent).
Then simply add your two bitmaps pixel by pixel.
compoundPixel = pic1Pixel + pic2Pixel
If after that you want to pull out the first image:
pic1Pixel = 16 * floor(compoundPixel / 16)
second image:
pic2Pixel = 16 * (compoundPixel % 16)
There is almost certainly a cleaner way to do it with simple bit shifting, but I don't feel like debugging/testing anything right now and don't know the syntax offhand. In short, though, you would just shift in the first 4 bits from the first pic, then the first 4 bits from the second pic. To recover them you would shift out appropriately, or mask and normalize.
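The modulus scheme above can be sketched per pixel like this (plain Python on single byte values; apply it over a whole bitmap in your own code):

```python
def embed(pic1_pixel, pic2_pixel):
    """Pack the top 4 bits of each pixel into one compound byte."""
    high = pic1_pixel - pic1_pixel % 16   # keep pic1's top 4 bits
    low = pic2_pixel // 16                # pic2's top 4 bits, shifted down
    return high + low

def extract(compound_pixel):
    """Recover the two pixels (quantized to 16 levels) from a compound byte."""
    pic1_pixel = 16 * (compound_pixel // 16)
    pic2_pixel = 16 * (compound_pixel % 16)
    return pic1_pixel, pic2_pixel
```

Note the recovered values are quantized: each image only keeps its top 4 bits, so `embed(200, 100)` round-trips to `(192, 96)`, not `(200, 100)`.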

Detect electronic circuit elements based on images

I'm trying to detect elements of an electronic circuit in binary images. Therefore I have to separate the circuit into parts, where each part describes one element, e.g. a resistor or a capacitor. I also want to detect branch points, where multiple lines (or multiple elements) are connected.
The following picture shows an example circuit, which contains two resistors and two branch-points: Example Circuit with two resistors:
That's what I want my program to detect automatically.
I have already implemented an algorithm that can detect line segments and branch points when the input image contains lines with a 1 px line width.
The problem is transforming an image into this 1 px line model. Something like this:
Does anyone know how to do it?
Thanks in advance!
Niklas
In MATLAB you can use the following code:
% Read image
I = double(imread('circit.png'));
I = I(:,:,1);
% Run thinning operation
IThin = bwmorph(~I,'thin',Inf);
% Show image
imshow(IThin)
The resulting image is:

How to count red blood cells/circles in Octave 3.8.2

I have an image with a group of cells and I need to count them. I did a similar exercise using bwlabel; however, this one is a bit more challenging because there are some little cells that I don't want to count. In addition, some cells are on top of each other. I've seen some MATLAB examples online, but they all involved functions that aren't available. Do you have any ideas how to separate the overlapping cells?
Here's the image:
To make it clearer: Please help me count the number of red blood cells (which have a circular shape) like so:
The image is in grayscale but I think you can distinguish which ones are red blood cells. They have a distinctive biconcave shape... Everything else doesn't matter. But to be more specific here is an image with all the things that I want to ignore/discard/not count highlighted in red.
The main issue is the overlapping of cells.
The following is an ImageJ macro to do this (which is free software too). I would recommend you use ImageJ (or Fiji), to explore this type of stuff. Then, if you really need it, you can write an Octave program to do it.
run ("8-bit");
setAutoThreshold ("Default");
setOption ("BlackBackground", false);
run ("Convert to Mask");
run ("Fill Holes");
run ("Watershed");
run ("Analyze Particles...", "size=100-Infinity exclude clear add");
This approach gives this result:
The point-and-click equivalent is:
Image > Type > 8-bit
Image > Adjust > Threshold
select "Default" and untick "dark background" on the threshold dialogue. Then click "Apply".
Process > Binary > Fill holes
Process > Binary > Watershed
Analyze > Analyze particles...
Set "100-Infinity" as the range of valid particle sizes in the "Analyze Particles" dialogue
In ImageJ, if you have a binary image, watershed actually performs the distance transform first, and then the watershed.
Octave has all the functions above except watershed (I plan on implementing it soon).
If you can't use ImageJ for your problem (why not? It can run in headless mode too), then an alternative is to get the area of each object and, if it is too high, assume it is multiple cells. It kind of depends on your question and whether you can come up with a value for the average cell size (and its error).
Another alternative is to measure the roundness of each object identified. Cells that overlap will be less round, you can identify them that way.
It depends on how much error you are willing to accept in your program's output.
This only helps with "noise", but why not continue using bwlabel and try bwareaopen to get rid of small objects? The cells seem pretty large; just set some size threshold to get rid of small objects: http://www.mathworks.com/matlabcentral/answers/46398-removing-objects-which-have-area-greater-and-lesser-than-some-threshold-areas-and-extracting-only-th
As for overlapping cells, maybe set an upper bound on the size of a single cell, so that when two cells overlap, the blob is classified as "more than one cell" or something like that. That way it at least acknowledges the shape, even if it can't determine exactly how many cells are there.
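The area-based idea above can be sketched like this in Python, given the measured object areas (e.g. from bwlabel/regionprops). The thresholds `min_area` and `mean_cell_area` here are made-up numbers; calibrate them on your own images:

```python
def estimate_cell_count(areas, min_area=100, mean_cell_area=500):
    """Estimate a cell count from a list of object areas."""
    count = 0
    for a in areas:
        if a < min_area:
            continue  # too small: noise or debris, skip it
        # overlapping cells show up as one blob with roughly a
        # multiple of the single-cell area, so round to the nearest
        # whole number of cells (at least one)
        count += max(1, round(a / mean_cell_area))
    return count
```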

Using metafizzy isotope to create a custom center alignment of items in rows, is this possible?

I have a design with a custom layout for a lot of images, which looks simple but development-wise isn't. I want to use Isotope to align and filter all of the images, but the alignment is custom. The images should always be centered within their container, and the maximum number of items in a row alternates: Row 1 has a maximum of 3 images, Row 2 has a maximum of 2 images, and this order repeats for all other rows. I included a link to a quick diagram of how this will play out with different numbers of images. I'm not sure how, or if, Isotope can center these images regardless of the total number per row. Any thoughts on this one?
No, for something like this you had better look into algorithms related to circle packing and bin packing. You can find good algorithms via MATLAB and Mathematica.
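If the alternating 3/2 rows are all you need, the row assignment itself is simple to compute. A sketch (in Python, just to show the chunking logic; Isotope positions items absolutely, so a custom layout like this usually means computing the rows yourself and feeding the positions to the layout):

```python
def alternating_rows(items, pattern=(3, 2)):
    """Split a flat list into rows whose maximum sizes cycle 3, 2, 3, 2, ..."""
    rows, i, p = [], 0, 0
    while i < len(items):
        size = pattern[p % len(pattern)]  # row capacity for this row
        rows.append(items[i:i + size])
        i += size
        p += 1
    return rows
```

Centering each row is then a matter of offsetting it by half the leftover width of its container.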

How to detect if a frame is odd or even on an interlaced image?

I have a device that is taking TV screenshots at precise times (it doesn't take incomplete frames).
Still, this screenshot is an interlaced image made from two different original frames.
Now, the question is if/how it is possible to identify which of the lines are newer/older.
I have to mention that I can take several sequential screenshots if needed.
Take two screenshots one after another, yielding a sequence of two images (1,2). Split each screenshot into two fields (odd and even) and treat each field as a separate image. If you assume that the images are interlaced consistently (pretty safe assumption, otherwise they would look horrible), then there are two possibilities: (1e, 1o, 2e, 2o) or (1o, 1e, 2o, 2e). So at the moment it's 50-50.
What you could then do is use optical flow to improve your chances. Say you go with the
first option: (1e, 1o, 2e, 2o). Calculate the optical flow f1 between (1e, 2e). Then calculate the flow f2 between (1e, 1o) and f3 between (1o,2e). If f1 is approximately the same as f2 + f3, then things are moving in the right direction and you've picked the right arrangement. Otherwise, try the other arrangement.
Optical flow is a pretty general approach and can be difficult to compute for the entire image. If you want to do things in a hurry, replace optical flow with video tracking.
EDIT
I've been playing around with some code that can do this cheaply. I've noticed that if 3 fields are consecutive and in the correct order, the absolute error due to smooth, constant motion will be minimized. On the contrary, if they are out of order (or not consecutive), this error will be greater. So one way to do this is two take groups of 3 fields and check the error for each of the two orderings described above, and go with the ordering that yielded the lower error.
I've only got a handful of interlaced videos here to test with, but it seems to work. The only downside is that it's not very effective unless there is substantial smooth motion or the number of frames used is low (fewer than 20-30).
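The low-error-ordering idea from the EDIT can be sketched like this. For three fields assumed consecutive in time, smooth constant motion makes the middle field close to the average of its neighbors, so the ordering with the lower absolute error is more likely correct. The fields here are toy 1D signals; real code would compare rows of the two candidate weavings:

```python
def order_error(a, b, c):
    """Error if field b lies temporally between fields a and c."""
    return sum(abs(bv - (av + cv) / 2) for av, bv, cv in zip(a, b, c))

def pick_ordering(f1, f2, f3):
    """Return 0 if (f1, f2, f3) looks temporally ordered, else 1."""
    return 0 if order_error(f1, f2, f3) <= order_error(f2, f1, f3) else 1
```

With a bump moving at a constant one pixel per field, the correct ordering yields a clearly smaller error than the swapped one.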
Here's an interlaced frame:
Here's some sample output from my method (same frame):
The top image is the odd-numbered rows. The bottom image is the even-numbered rows. The number in the brackets is the number of times that image was picked as the most recent. The number to the right of that is the error. The odd rows are labeled as the most recent in this case because the error is lower than for the even-numbered rows. You can see that out of 100 frames, it (correctly) judged the odd-numbered rows to be the most recent 80 times.
You have several fields, F1, F2, F3, F4, etc. Weave F1-F2 for the hypothesis that F1 is an even field. Weave F2-F3 for the hypothesis that F2 is an even field. Now measure the amount of combing in each frame. Assuming there is motion, there will be some combing with the correct interlacing but more combing with the wrong interlacing. You will have to do this at several points in time in order to find some fields where there is motion.
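A sketch of that combing measure: weave two fields into a frame and sum the absolute differences between vertically adjacent pixels. With motion, the wrong field pairing produces stronger row-to-row alternation and hence a larger score. The fields here are toy 2D lists of equal size:

```python
def weave(even_field, odd_field):
    """Interleave rows: even field on rows 0, 2, ..., odd field on rows 1, 3, ..."""
    frame = []
    for e_row, o_row in zip(even_field, odd_field):
        frame.append(e_row)
        frame.append(o_row)
    return frame

def combing(frame):
    """Sum of absolute differences between vertically adjacent pixels."""
    return sum(
        abs(a - b)
        for r1, r2 in zip(frame, frame[1:])
        for a, b in zip(r1, r2)
    )
```

Comparing `combing(weave(F1, F2))` against `combing(weave(F2, F3))` over several frame times then gives the vote described above.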
