How do I generate images of the different moon phases?
i.e., I want something like this:
and I'm thinking about overlaying two images to get that kind of result, based on U.S. Navy moon illumination data (any alternatives?)
Moon calendar for 2016:
http://aa.usno.navy.mil/cgi-bin/aa_moonill2.pl?form=2&year=2016&task=00&tz=6&tz_sign=-1
I made the first image with GIMP, so I'm thinking about automating that process with a GIMP script fed with the parsed moon data (illuminated fraction, at first).
I'm looking for a non-GIMP-dependent alternative.
Thanks for your time.
Get a selection on the moon (likely, Alpha-to-selection)
Save to channel
Remove the selection
Scale the saved channel horizontally (vertically centered)
Recreate the original selection
Subtract the scaled channel
Restrict the subtraction to the appropriate half of the image (which half depends on waxing vs. waning)
Bucket-fill the remaining selection
The horizontal scaling factor is likely a cosine of the phase-of-moon (POM) day.
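A quick sketch of that factor; the synodic-month constant and the `age_days` name (the moon's age parsed from the Navy data) are my assumptions:

import math

SYNODIC_MONTH = 29.530589  # mean length of a lunation, in days

def terminator_scale(age_days):
    """Horizontal scale of the terminator ellipse: +1 at new moon,
    0 at the quarters, -1 at full moon."""
    return math.cos(2 * math.pi * age_days / SYNODIC_MONTH)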
A completely different, non-GIMP-dependent alternative: generate an SVG file directly. In practice you would have a base image on which you would overlay a crescent (which is, as in my other answer, a half-circle on one side and a half-ellipse on the other). Creating a Bézier spline that closely approximates a circular or elliptical arc is very easy.
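In fact, SVG's native arc commands (the `A` path segment) let you skip the Bézier approximation entirely. A minimal sketch of the idea, assuming a waxing moon lit from the right; the function and file names are mine, not from any library:

def moon_svg(f, r=100):
    """f = illuminated fraction in [0, 1]; returns an SVG string."""
    cx = cy = r
    a = r * (1 - 2 * f)          # terminator semi-axis; equals r * cos(phase angle)
    sweep = 0 if a > 0 else 1    # shadow bulges into the lit side before the quarter
    shadow = (f"M {cx} {cy - r} "
              f"A {r} {r} 0 0 0 {cx} {cy + r} "                   # left half of the disc
              f"A {abs(a):.2f} {r} 0 0 {sweep} {cx} {cy - r} Z")  # elliptical terminator
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{2 * r}" height="{2 * r}">\n'
            f'  <circle cx="{cx}" cy="{cy}" r="{r}" fill="#e8e8e8"/>\n'
            f'  <path d="{shadow}" fill="#222"/>\n'
            f'</svg>')

with open("moon.svg", "w") as out:
    out.write(moon_svg(0.25))    # a waxing crescent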
Related
How do I get an array of coordinates of a (drawn) line in an image? The coordinates should be relative to the image borders. Input: an image file. Output: an array of coordinates (with a fixed step). Is there any third-party software to do this? For example, where there is a high contrast difference: a white background and a black line, or red and green, etc.
Example:
Oh, you mean non-straight lines. You need to define a "line". Intuitively, you might mean a connected area of the image with a high ratio between the length of its medial axis and the distance from the medial axis to the edges (i.e., relatively long and narrow, even if it winds around). A possible approach:
Threshold or select by color. Perhaps select by color based on a histogram of colors, or posterize as described here: Adobe Photoshop-style posterization and OpenCV, then call scipy.ndimage.measurements.label()
For each area above, skeletonize. Helpful tutorial: "Skeletonization using OpenCV-Python". However, you will likely need the distance to the edges as well, so use skimage.morphology.medial_axis(..., return_distance=True)
Do some kind of cleanup/filtering on the skeleton to remove short branches, etc. Thinking about your particular use, and assuming your lines don't loop around, you can just find the longest single path in the skeleton. This is also where you can decide whether a shape is a "line" or not, based on how long the longest path in its skeleton is relative to the distance to the edges. I'm not sure how best to do that in OpenCV, but "Analyze Skeleton" in Fiji/ImageJ will let you filter by branch length.
What is left is the most elongated medial axis of the original "line" shape. You can resample that to some step that you prefer, or fit it with a spline, etc.
Due to the nature of what you want to do, it is hard to come up with a sample code that will work on a range of images. This is likely to require some careful tuning. I recommend using a small set of images (corpus), running any version of your algo on them and checking the results manually until it is pretty good, then trying it on a large corpus.
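That said, here is a rough, untested sketch of steps 1-3 to start from; the threshold and the elongation cutoff are guesses you would tune on your corpus:

import numpy as np
from scipy import ndimage
from skimage.morphology import medial_axis

def line_coordinates(gray, thresh=128, min_elongation=5.0):
    """Return medial-axis pixel coordinates for blobs that look like 'lines'."""
    binary = gray < thresh                     # assumes a dark line on a light background
    labels, n = ndimage.label(binary)          # step 1: connected areas
    lines = []
    for i in range(1, n + 1):
        blob = labels == i
        skel, dist = medial_axis(blob, return_distance=True)   # step 2
        ys, xs = np.nonzero(skel)
        if len(xs) == 0:
            continue
        mean_half_width = max(dist[skel].mean(), 1.0)
        if len(xs) / mean_half_width >= min_elongation:        # step 3: "is it a line?"
            lines.append(np.column_stack([xs, ys]))  # coordinates relative to image borders
    return lines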
EDIT: Original answer, only works for straight lines:
You probably want to use the Hough transform (OpenCV tutorial).
Python sample code: Horizontal Line detection with OpenCV
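For quick reference, a minimal probabilistic Hough sketch; the parameters below are starting guesses, not tuned values:

import cv2
import numpy as np

img = cv2.imread("lines.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print((x1, y1), "->", (x2, y2))  # endpoints, relative to image borders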
EDIT: Related question with sample code to skeletonize: How can I get a full medial-axis line with its perpendicular lines crossing it?
Following up on my other question: do you guys know a good example in OpenCV with a simple black/white calibration picture and appropriate detection algorithms?
I just want to show a B&W image on a screen, take a picture of that image from afar, and calculate the size of the shown image, in order to calculate the distance to said screen.
Before I reinvent the wheel, I reckon this is easy enough that it could be achieved in many different ways with OpenCV, yet I thought I'd ask whether there's a preferred approach, possibly with some sample code.
(I got some face-detection code running using haarcascade-xml files already)
PS: I already have the resolution/dpi-part of my screen covered, so I know how big a picture would be in cm on my screen.
EDIT:
I'll make it real simple, I need:
A pattern, that is easily recognizable in an Image. Right now I'm experimenting with a checkerboard. The people who made ARDefender used this.
An appropriate algorithm to tell me the exact pixel coordinates of pattern 1) in a picture using OpenCV.
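For what it's worth, OpenCV has a built-in checkerboard detector that covers both points; a minimal sketch (the pattern size and filename here are assumptions):

import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
pattern = (7, 7)  # count of inner corners per row/column on the printed board
found, corners = cv2.findChessboardCorners(img, pattern)
if found:
    # refine to sub-pixel accuracy before measuring the board's pixel extent
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)
    print(corners.reshape(-1, 2))  # exact pixel coordinates of every corner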
Well, it's hard to say which image is best for recognition: under different illumination, any color can be interpreted as another color. A simple example:
As you can see, both traffic signs have a red border, but in one of the images the upper sign's border is obviously not red.
So in my opinion you should use an image with many different colors (like a rainbow). You also said that it should be easily recognizable from different angles; that's why a circular shape is best.
That's why your image should look like this:
So the idea for detecting such an object is the following:
Perform color segmentation for each color (blue, red, green, etc.); use the HSV color space for this.
Detect circles of each specific color in the image.
The area with the largest number of detected circles is likely your object (see the sketch below).
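A hedged sketch of that pipeline, shown for red; the HSV ranges and Hough parameters are guesses you would tune per color:

import cv2
import numpy as np

img = cv2.imread("scene.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# 1) color segmentation, here for "red" (hue wraps around 0 in OpenCV)
mask = (cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) |
        cv2.inRange(hsv, (170, 100, 100), (180, 255, 255)))

# 2) detect circles of that color
mask = cv2.GaussianBlur(mask, (9, 9), 2)
circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=100)

# 3) repeat per color; the region with the most circles is the object candidate
print(0 if circles is None else circles.shape[1], "red circles")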
You just have to take pictures of your B&W object from several known distances (1 m, 2 m, 3 m, ...) and then, for each distance, check the size of the object in the corresponding image.
From that data you can build a function giving you the distance from the size in pixels. Note that apparent size falls off roughly as 1/distance, so fitting distance against the reciprocal of the pixel size (distance = a/size + b should do ;) ) works better than a straight line through the raw sizes. Translate it into your code and you're done.
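A minimal calibration sketch of that fit; the numbers are placeholders for your own measurements:

import numpy as np

distances_m = np.array([1.0, 2.0, 3.0, 4.0])       # known distances
sizes_px = np.array([400.0, 200.0, 133.0, 100.0])  # measured image sizes

# size scales like 1/distance, so fit distance against the reciprocal size
a, b = np.polyfit(1.0 / sizes_px, distances_m, 1)

def estimate_distance(size_px):
    return a / size_px + b

print(estimate_distance(160.0))  # ~2.5 m with the placeholder data above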
Cheers
I've created an iPhone app that can scan an image of a page of graph paper and can then tell me which squares have been blacked out and which squares are blank.
I do this by scanning from left to right and use the graph paper's lines as guides. When I encounter a graph paper line, I start to look for black, until I hit the graph paper line again. Then, instead of continuing along the scan line, I go ahead and completely scan the square for black. Then I continue on to the next box. At the end of the line, I skip down so many pixels before starting the scan on a new line (since I have already figured out how tall each box is).
This sort of works, but there are problems. Sometimes I mistake the graph lines for "black". Sometimes, if the image is skewed or I don't have uniform lighting across the page, I don't get good results.
What I'd like to do is to specify a few "alignment" boxes that I then resize and rotate (and skew) the picture to align with those. Then, I was thinking that once I have the image aligned, I would then know where all the boxes are and won't have to scan for the boxes, just scan inside the location of the boxes to see if they are black. This should be faster and more reliable. And if I were to operate on images coming from the camera, I'd have more flexibility in asking the user to align the picture to match the alignment marks, rather than having to align the image myself.
Given that this is my first Image Processing project, I feel like I am reinventing the wheel. I'd like suggestions on how to do this, and whether to utilize libraries like OpenCV.
I am enclosing an image similar to what I would like processed. I am looking for a list of all squares that have a significant amount of black marking, i.e. A8, C4, E7, G4, H1, J9.
Issues to be aware of:
Light coverage of the image may not be ideal, but should be relatively consistent across the image (i.e. no shadows)
All squares may be empty or all dark, and the algorithm needs to be able to determine that
The image may be skewed or rotated about any of the axes. Rotation about the z-axis may be easy to fix. There may also be rotation about the x- or y-axis, making one side of the image wider than the other. However, if I scan the image in real time as it comes from the camera, I can ask the user to align the alignment marks with marks on the screen. What is the best way to ensure that alignment and give the user appropriate feedback? Just checking that the 4 corners are dark could produce a false positive when the camera is pointed at a black surface.
Not every square will be equally or consistently blacked out, but I think there will be enough black to make it unquestionable to a human eye.
The blue grid may be useful, but there are cases where the black markings overlap the blue grid, so I think a virtual grid is probably better than relying on the printed grid. Using the alignment markers to align the image would allow a precise virtual grid to be laid out, and then the contents of each grid box could be sampled to see whether it is predominantly black, rather than scanning left to right, no? Here is another image with more markings on the grid. In this image, in addition to the previous markings in A8, C4, E7, G4, H1, and J9, I have marked E2, G8, G9, I4, and J4, and you can see how the blue grid is obscured.
This is my first phase of this project. Eventually I'd like to scale this algorithm to be able to process at least a few hundred slots and possibly different colors.
To start with, this problem reminded me a bit of these demos, which might be useful to learn from:
The DNA microarray image processing
The Matlab Sudoku solver
The Iphone Sudoku solver blog post, explaining the image processing
Personally, I think the simplest approach would be to detect the squares in your image.
1) Remove the background and small cruft
f_makebw = @(I) im2bw(I.data, double(median(I.data(:)))/1.3);  % threshold each block against its median brightness
bw = ~blockproc(im, [128 128], f_makebw);
bw = bwareaopen(bw, 30);
2) Remove everything but the squares and circles.
se = strel('disk', 5);
bw = imerode(bw, se);
% Detect the squares and circles via morphology
[B, L] = bwboundaries(bw, 'noholes');
3) Detect the squares using the 'Extent' property from regionprops. The 'Extent' metric measures what proportion of the bounding box is filled, which makes it a nice measure to distinguish between circles and squares:
stats = regionprops(L, 'Extent');
extent = [stats.Extent];
idx1 = find(extent > 0.8);
bw = ismember(L, idx1);
4) This leaves you with features to synchronize (rectify) the image with. An easy and robust way to do this is via the autocorrelation function (ACF).
This gives nice peaks, which are easily detected. These peaks can be matched against the ACF peaks from a template image via the Hungarian algorithm. Once matched, you can correct rotation and scaling as you now have a linear system which you can solve:
x = Ax'
Translation can then be corrected using run-of-the-mill cross-correlation against the same predefined template.
If all goes well, you now have an aligned or synchronized image, which should help considerably in determining the position of the dots.
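For step 4, the ACF itself is only a few lines via the FFT. Since the thread mixes MATLAB and Python, here is a NumPy sketch (the helper name is mine):

import numpy as np

def autocorr2d(bw):
    """2-D (circular) autocorrelation via the FFT; its peaks reveal the
    grid's spacing and orientation."""
    f = np.fft.fft2(bw.astype(float))
    acf = np.fft.ifft2(f * np.conj(f)).real
    return np.fft.fftshift(acf)  # put the zero-lag peak in the center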
I've been starting to do something similar using my GPUImage iOS framework, so that might be an alternative to doing all of this in OpenCV or something else. As its name indicates, GPUImage is entirely GPU-based, so it can have tremendous performance benefits over CPU-bound processing (up to 180X faster for things like processing live video).
As a first stage, I took your images and ran them through a simple luminance thresholding filter with a threshold of 0.5 and arrived at the following for your two images:
I just added an adaptive thresholding filter, which attempts to correct for local illumination variances, and works really well for picking out text. However, in your images it uses too small of an averaging radius to handle your blobs well:
and seems to bring out your grid lines, which it sounds like you wish to ignore.
Maurits provides a more comprehensive description of what you could do, but there might be a way to implement these processing operations as high-performance GPU-based filters instead of relying on slower OpenCV versions of the same calculations. If you could grab rotation and scaling information from this thresholded image, you could construct a transform that could also be applied as a filter to your thresholded image to produce your final aligned image, which could then be downsampled and read out by your application to determine which grid locations were filled in.
These GPU-based thresholding operations run in less than 2 ms for 640x480 frames on an iPhone 4, so it might be possible to chain filters together to analyze incoming video frames as fast as the device's video camera can provide them.
I uploaded an example image for better understanding: http://www.imagebanana.com/view/kaja46ko/test.jpg
In the image you can see some scanlines and a marker (the white rectangle with the circle in it). I want OpenCV to go along a specified area (outlined by the scanlines in the example) that should be around 5x5 pixels. If that area contains a gradient from black to white, I want OpenCV to save the position of that area so that I can work with it later.
The final result would be to differentiate between the marker and the other rectangles separated by black and white lines.
Is something like that possible? I googled a lot, but I only found edge detectors, and that's not what I want; I really need detection of the black-to-white gradient only.
Thanks in advance.
It would be a good idea to filter out some of the areas by calculating their histograms.
You can use cvCalcHist for the task; then you can establish a threshold to determine whether the percentage of black and white pixels corresponds to that of a gradient. This will not solve the task by itself, but it will help you reduce complexity.
Then you can erode the image to merge all the white areas. After applying a threshold, it is possible to find connected components (using cvFindContours) that will separate the image into black zones and white zones. You can then detect gradients by finding 5x5 areas that contain a piece of a white zone and a piece of a black zone simultaneously.
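A rough sketch of that screening step; the 5x5 step and the dark/light cutoffs are guesses to tune:

import cv2
import numpy as np

gray = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)
h, w = gray.shape
candidates = []
for y in range(0, h - 5, 5):
    for x in range(0, w - 5, 5):
        patch = gray[y:y + 5, x:x + 5]
        dark = np.count_nonzero(patch < 50)
        light = np.count_nonzero(patch > 200)
        if dark >= 5 and light >= 5:   # a black-to-white edge patch contains both
            candidates.append((x, y))  # position saved for later processing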
hope it helps.
Thanks for your answer, dnul, but it didn't really help me work this out. I thought about using a histogram to approach the problem, but it's not quite what I want.
I solved this problem by creating a 40x40 matrix which holds 5x5 matrices containing the raw pixel data in all 3 channels. I iterated through each 40px area and, inside it, iterated through the border of each 5px area. I checked each pixel and saved the ones darker than a certain threshold to storage.
After the iteration I had a rough idea of how many black pixels there are, so I checked each one of them for neighbors with white pixels in all 3 channels. I then marked each of those pixels and saved them to another storage.
I then used the RANSAC algorithm to construct lines out of these points. It constructs about 5-20 lines per marker edge. I then looked at the lines that intersect and saved the positions of those that meet at a right angle.
The 4 points I get from that are the edges of the marker.
If you want to reproduce this, you would have to filter the image beforehand and apply a threshold to make it easier to distinguish between black and white pixels.
A sample picture, saved after finding the points and before constructing the lines:
http://www.imagebanana.com/view/i6gfe6qi/9.jpg
What you are describing is edge detection. This is exactly how, say, the Canny edge detector works: it looks for dark pixels near light pixels, based on a threshold that you pass in (there is also adaptive Canny, which figures out the threshold for you), and "marks" them by setting them to all black or all white.
See here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
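A minimal example; the two thresholds are typical starting values, not tuned ones:

import cv2

gray = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, threshold1=50, threshold2=150)
cv2.imwrite("edges.png", edges)  # white pixels mark the dark-to-light transitions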
I implemented some adaptive binarization methods. They use a small window, and at each pixel the threshold value is calculated. There are problems with these methods:
If we select a window size that is too small, we get this effect (I think the reason is the small window size):
(source: piccy.info)
In the upper left corner is the original image; in the upper right corner, the global threshold result. At the bottom left is an example of dividing the image into parts (but I am talking about analyzing the small surrounding of each pixel, for example a 10x10 window).
So you can see the result of such algorithms in the bottom right picture: we get a black area, but it should be white.
Does anybody know how to improve an algorithm to solve this problem?
There should be quite a lot of research going on in this area, but unfortunately I have no good links to give.
An idea, which might work but I have not tested, is to try to estimate the lighting variations and then remove that before thresholding (which is a better term than "binarization").
The problem is then moved from adaptive thresholding to finding a good lighting model.
If you know anything about the light sources then you could of course build a model from that.
Otherwise a quick hack that might work is to apply a really heavy low pass filter to your image (blur it) and then use that as your lighting model. Then create a difference image between the original and the blurred version, and threshold that.
EDIT: After quick testing, it appears that my "quick hack" is not really going to work at all. After thinking about it I am not very surprised either :)
I = someImage
Ib = blur(I, 'a lot!')
Idiff = I - Ib
It = threshold(Idiff, 'some global threshold')
EDIT 2
Got one other idea which could work depending on how your images are generated.
Try estimating the lighting model from the first few rows in the image:
Take the first N rows in the image
Create a mean row from the N collected rows. You now have one row as your background model.
For each row in the image subtract the background model row (the mean row).
Threshold the resulting image.
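An untested sketch of those steps, assuming a dark foreground on a lighter background and N = 10 as a guess:

import cv2
import numpy as np

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
N = 10
background_row = img[:N].mean(axis=0)   # mean of the first N rows
flattened = img - background_row        # subtract the model from every row
flattened = cv2.normalize(flattened, None, 0, 255, cv2.NORM_MINMAX)
_, binary = cv2.threshold(flattened.astype(np.uint8), 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)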
Unfortunately I am at home without any good tools to test this.
It looks like you're doing adaptive thresholding wrong. Your images look as if you divided your image into small blocks, calculated a threshold for each block and applied that threshold to the whole block. That would explain the "box" artifacts. Usually, adaptive thresholding means finding a threshold for each pixel separately, with a separate window centered around the pixel.
Another suggestion would be to build a global model for your lighting: In your sample image, I'm pretty sure you could fit a plane (in X/Y/Brightness space) to the image using least-squares, then separate the pixels into pixels brighter (foreground) and darker than that plane (background). You can then fit separate planes to the background and foreground pixels, threshold using the mean between these planes again and improve the segmentation iteratively. How well that works in practice depends on how well your lighting can be modeled with a linear model.
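A sketch of the initial plane fit in plain NumPy least squares; the iterative re-fitting described above is left out:

import numpy as np

def fit_brightness_plane(gray):
    """Least-squares fit of brightness ~ a*x + b*y + c over the image."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, gray.ravel().astype(float), rcond=None)
    return (A @ coeffs).reshape(h, w)   # the fitted lighting plane

# pixels brighter than the plane -> foreground candidates, darker -> background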
If the actual objects you are trying to segment are "thinner" (you said something about barcodes in a comment), you could try a simple opening/closing operation to get a lighting model (i.e., close the image to remove the foreground pixels, then use [closed image + X] as the threshold).
Or, you could try mean-shift filtering to get the foreground and background pixels to the same brightness. (Personally, I'd try that one first)
You have very non-uniform illumination and a fairly large object (thus, no universal easy way to extract the background and correct the non-uniformity). This basically means you cannot use global thresholding at all; you need adaptive thresholding.
You want to try Niblack binarization. Matlab code is available here
http://www.uio.no/studier/emner/matnat/ifi/INF3300/h06/undervisningsmateriale/week-36-2006-solution.pdf (page 4).
There are two parameters you'll have to tune by hand: window size (N in the above code) and weight.
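For reference, Niblack's rule is T = local mean + k * local standard deviation; a NumPy/OpenCV sketch with the two hand-tuned parameters exposed:

import cv2
import numpy as np

def niblack(gray, window=25, k=-0.2):
    """Binarize with a per-pixel threshold of mean + k * std over the window."""
    img = gray.astype(np.float64)
    mean = cv2.boxFilter(img, -1, (window, window))
    sq_mean = cv2.boxFilter(img * img, -1, (window, window))
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0))
    return ((img > mean + k * std) * 255).astype(np.uint8)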
Try to apply a local adaptive threshold using this procedure:
convolve the image with a mean or median filter
subtract the original image from the convolved one
threshold the difference image
The local adaptive threshold method selects an individual threshold for each pixel.
I'm using this approach extensively, and it works fine with images that have a non-uniform background.
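Those three steps come down to only a few lines in OpenCV; a sketch, with the kernel size and threshold as assumptions to tune:

import cv2

gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
local_mean = cv2.medianBlur(gray, 31)        # 1) mean/median filter the image
diff = cv2.subtract(local_mean, gray)        # 2) convolved minus original (dark foreground)
_, binary = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)  # 3) threshold the difference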