Using OpenCV, we can draw text on an image. The standard call is
cv2.putText(img, "my_text", (x, y), font, 1.0, (255, 255, 255), 0)
where font is one of the Hershey fonts, e.g. cv2.FONT_HERSHEY_SIMPLEX.
But I want to display a small icon image followed by text (in this case "my_text").
I can overlay an image on top of the main image like this:
icon_img = cv2.imread("icon.png")
icon_img1 = cv2.resize(icon_img, (20, 20))
x_offset = y_offset = 5
img[y_offset:y_offset+icon_img1.shape[0], x_offset:x_offset+icon_img1.shape[1]] = icon_img1
But I want the icon to move together with the displayed text, so that wherever the text is drawn the icon sits right next to it.
An icon might look like this
Thank you!
This isn't really possible in OpenCV.
OpenCV has no concept of z-layers in its basic GUI, so you are responsible for setting the location of every element in the image yourself. OpenCV's GUI was never really developed to be the interface for a full-fledged application; it is meant more for testing and development purposes. If you really want a full-fledged GUI, I recommend Qt or wxPython.
However, for this very basic problem, you could do something like
font = cv2.FONT_HERSHEY_SIMPLEX
icon_img = cv2.imread("icon.png")
icon_img1 = cv2.resize(icon_img, (20, 20))
x_offset = y_offset = 5
img[y_offset:y_offset+icon_img1.shape[0], x_offset:x_offset+icon_img1.shape[1]] = icon_img1
cv2.putText(img, "my_text", (x_offset + icon_img1.shape[1], y_offset + icon_img1.shape[0]), font, 1.0, (255, 255, 255), 0)
This shifts the text so it is placed right after the icon, starting at the icon's bottom-right corner (putText's origin is the bottom-left corner of the text, so the text baseline lines up with the bottom of the icon).
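If you want the text vertically centred on the icon instead, cv2.getTextSize can measure the string first. A minimal sketch, assuming the same variable names as above and an arbitrary 5-pixel gap:
font = cv2.FONT_HERSHEY_SIMPLEX
# measure the text so it can be centred on the 20x20 icon
(text_w, text_h), baseline = cv2.getTextSize("my_text", font, 1.0, 2)
text_x = x_offset + icon_img1.shape[1] + 5              # 5 px gap after the icon
text_y = y_offset + (icon_img1.shape[0] + text_h) // 2  # baseline that centres the text on the icon
cv2.putText(img, "my_text", (text_x, text_y), font, 1.0, (255, 255, 255), 2)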
I am not certain I understand what you are doing, but I gather you want to place an icon with some related text at various positions around the frame over a video, yes? And I assume you want the text to stay in the same place relative to the icon, yes?
If so, I would create a small "piece of canvas" (i.e. a Mat) just big enough for the icon and the text and place them both on it.
Then, as each frame of video arrives, create an ROI in the video frame, and use copyTo() to copy that little piece of canvas on top of the video frame:
// define an image ROI at the bottom-right of the video frame
cv::Mat imageROI(frame, cv::Rect(frame.cols - canvas.cols,
                                 frame.rows - canvas.rows,
                                 canvas.cols, canvas.rows));
// insert the icon canvas
canvas.copyTo(imageROI);
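Since the question is in Python, roughly the same idea can be sketched with NumPy slicing instead of cv::Rect/copyTo. Here frame stands for the current BGR video frame, and the icon path, text, font scale and sizes are just the values from the question, so treat them as assumptions:
import cv2
import numpy as np

font = cv2.FONT_HERSHEY_SIMPLEX

# build a small canvas just big enough for the icon plus the text
icon = cv2.resize(cv2.imread("icon.png"), (20, 20))
(text_w, text_h), baseline = cv2.getTextSize("my_text", font, 0.5, 1)
canvas = np.zeros((max(20, text_h + baseline), 20 + 5 + text_w, 3), dtype=np.uint8)
canvas[0:20, 0:20] = icon
cv2.putText(canvas, "my_text", (25, text_h), font, 0.5, (255, 255, 255), 1)

# paste the canvas into the bottom-right corner of each incoming frame
h, w = canvas.shape[:2]
frame[frame.shape[0] - h:, frame.shape[1] - w:] = canvas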
I have a .tiff video file with growing fibers that look like the image below
Now imagine that this fiber constantly grows and shrinks in a straight line. I'd like to somehow crop out the region of the video that contains just the fiber and place it on, for example, a black background image.
When I play the video, I'd like to see just the growing fiber region, with the black background everywhere else.
Question: Is there a way to perform a "custom" crop of irregularly shaped objects in ImageJ?
If ImageJ can't do this sort of image processing, any other software options are welcome.
Thanks for any help
Yes, you can do this in ImageJ. If you can find a threshold method that captures your fiber, you can turn that into a selection (ROI), and then Clear Outside to turn everything else black:
Image > Adjust > Threshold and choose the threshold, or use one of the automatic methods. But don't apply the threshold!
Edit > Selection > Create Selection (turns the thresholded area into an ROI)
Edit > Clear Outside (makes the background black -- assuming you have set your background color to black)
If you want to make the window smaller, you can do Image > Crop with the selection active. This will crop the image to the rectangular bounding box of the ROI. But this size will vary according to the size of the fiber. So you might want to do this when the fiber is at its largest.
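If it ever helps to script the same idea outside ImageJ (the question allows other software), here is a rough equivalent of threshold, clear outside, and crop in Python with OpenCV. The file name and the use of Otsu thresholding are assumptions, not part of the ImageJ recipe above:
import cv2

frame = cv2.imread("fiber_frame.tif", cv2.IMREAD_GRAYSCALE)  # one frame of the stack (hypothetical path)

# threshold the bright fiber (Otsu picks the level automatically)
_, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# "Clear Outside": keep only the thresholded region, black everywhere else
result = cv2.bitwise_and(frame, frame, mask=mask)

# "Image > Crop": crop to the bounding box of the non-zero (fiber) pixels
x, y, w, h = cv2.boundingRect(mask)
cropped = result[y:y + h, x:x + w]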
I'm developing an OCR app that reads digits and copies them to the clipboard automatically, instead of typing them manually...
I'm using Tesseract OCR... but before recognition, in the image-manipulation step, I improve the image for better recognition.
I used the ImageMagick library and the filtered image looks like this:
But the output of the recognition is:
446929231986789 // the first and last digits (4 & 9) were added
So I want to detect only the white box and crop to it...
I know that OpenCV can do the trick, but unfortunately it's a C++ library and I don't speak that language :(
I also know that iOS 8 has a new CIDetector of type Rectangle, but I don't want to neglect the previous versions of iOS.
My ImageMagick filter code:
// Start up ImageMagick
MagickWandGenesis();
magick_wand = NewMagickWand();
// Reading the image....
NSString *tempFilePath = //Path of image
// Reduce to a monochrome (2-colour, grayscale) image
MagickQuantizeImage(magick_wand, 2, GRAYColorspace, 1, MagickFalse, MagickFalse);
// Write to a temporary file
MagickWriteImage(magick_wand,
                 [tempFilePath cStringUsingEncoding:NSASCIIStringEncoding]);
DestroyMagickWand(magick_wand); // free up memory
// Load a UIImage from the temporary file
UIImage *imgObj = [UIImage imageWithContentsOfFile:tempFilePath];
// Display on device
Many thanks ..
I would go with a simple pixel search. Since you want to crop the white area with the digits, all you need to do is find the left, right, top and bottom borders of the rectangle. Provided the rectangle is axis-aligned and has enough white space around the digits, you should look for the first row or column that has a continuous run of white pixels. For example, to find the left border (which I guess would be around the 78th column), start searching from column 0 and move right. For each column, count the continuous white pixels (a single for-loop from top to bottom); by continuous I mean a run that is not interrupted by a black pixel. If the count reaches, say, 80% of the image height, you have your left border. Do the rest accordingly, starting from the right side, top or bottom and moving in the opposite direction. I guess there are fancier procedures for detecting rectangles, but your input has quite distinguishable characteristics, so instead of pointing you at some library I suggest DIY. To speed things up you could step through the columns 2 or more at a time, or scale the image down and threshold it to 2 colours first.
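A rough sketch of that column scan in Python/NumPy, assuming gray is the filtered grayscale image and that the white threshold and the 80% figure are just the values suggested above:
import numpy as np

white = gray > 200            # boolean image: True where the pixel is white
h, w = white.shape

def first_full_column(columns, fraction=0.8):
    """First column whose longest continuous run of white pixels covers
    at least `fraction` of the image height."""
    for c in columns:
        run = best = 0
        for v in white[:, c]:
            run = run + 1 if v else 0
            best = max(best, run)
        if best >= fraction * h:
            return c
    return None

left = first_full_column(range(w))              # scan in from the left edge
right = first_full_column(reversed(range(w)))   # scan in from the right edge
# the top and bottom borders work the same way, scanning rows instead of columns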
There is also one more way to do this: flood-fill with white, starting from one of the corners.
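That variant can be sketched with OpenCV's floodFill; the file name, the choice of seeding all four corners and the colour tolerance are assumptions:
import cv2
import numpy as np

img = cv2.imread("filtered.png")            # the filtered image (hypothetical path)
h, w = img.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)   # floodFill needs a mask 2 px larger than the image

# fill everything reachable from the corners with white,
# so the noisy background outside the box becomes plain white
for seed in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
    cv2.floodFill(img, mask, seed, (255, 255, 255),
                  loDiff=(30, 30, 30), upDiff=(30, 30, 30))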
Please be kind, I'm new to this....
I have an application that I'm developing where I need to take a PNG image, which has a transparency layer, and treat another color as a separate transparency layer (I'm thinking of using RGB(1, 1, 1), since it's so close to pure black that I can hard-code it). The reason is that I have a background image sitting behind the PNG that I would still like to show through as my sprite gets filled (by adding a progress bar to the sprite), and I only want the portions of the sprite that aren't the given color to reflect the color fill of the progress bar. That way I can avoid dealing with vector computations for the outline of the image within the sprite; I can just flood the area outside the discernible image with my new "transparent" color and be on my merry way.
I've tried using shaders, but they seem to be less than helpful.
There is no way to do this directly that I know of, because OpenGL doesn't let you. You will either have to modify the pixel data manually or write a shader (which you have already tried).
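To illustrate the "modify the pixel data manually" option, here is the per-pixel idea sketched in Python with Pillow/NumPy; the file names are hypothetical, and in the real app the same loop would run over the texture bytes before they are handed to OpenGL:
import numpy as np
from PIL import Image

pixels = np.array(Image.open("sprite.png").convert("RGBA"))

# make every pixel whose colour is exactly RGB(1, 1, 1) fully transparent
key = (pixels[..., 0] == 1) & (pixels[..., 1] == 1) & (pixels[..., 2] == 1)
pixels[key, 3] = 0

Image.fromarray(pixels).save("sprite_keyed.png")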
I have a project to customize clothes, let's say a t-shirt, with the following features:
change colors.
add a few lines of text (<= 4) and change the font from a list.
add an image or photo to the t-shirt.
rotate the t-shirt to customize the back side.
rotate the image and zoom in/out.
save the result as a project locally and send it to a web service (I'm thinking of using NSDictionary/JSON).
save as an image.
So my question is:
Should I use multiple images to simulate the color changes? Or should I use QuartzCore (I am not an expert in QuartzCore, but if I have to use it I'll learn)? Or is there a better approach?
Thank you.
The simple way to do this is to render the T-Shirt image into a CGContext, then walk the rows and columns and change pixels showing a "strong" primary color to the desired tint. You would take a photo of a person wearing a bright red (or other primary color) t-shirt, then in your code only change pixels where the red color has a high luminance and saturation (i.e. the "r" value is over some threshold and the b and g components are low).
The modified image is then going to look a bit flat, as when you change the pixels to one value (the new tint) there will be no variation in luminance. To make this more realistic, you would want to make each pixel have the same luminance as it did before. You can do this by converting back and forth from RGB to a color space like HCL. Apple has a great doc on color (in the Mac section) that explains color spaces (google 'site:developer.apple.com "Color Spaces"')
To reach your goal, you will have to tackle these technologies:
create a CGContext and render an image into it using Quartz
figure out how to read each pixel (pixels can have alpha and different orderings)
figure out a good way to identify the proper pixels (test by making these black or white)
for each pixel you want to change, convert the RGB to HCL to get its luminance
replace the pixel with a pixel of a different color and hue but the same luminance
use the CGContext to make a new image
If all this seems too difficult, then you'll have to use a different image for every color you want.
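The per-pixel logic (find the strongly red pixels, swap their hue, keep their luminance) can be sketched in Python/OpenCV for clarity; the thresholds and the target hue are assumptions, and on iOS the equivalent loop would run over the bytes of the CGContext rather than a NumPy array:
import cv2
import numpy as np

img = cv2.imread("tshirt.jpg")               # photo of the red t-shirt (hypothetical path)
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
h, l, s = hls[..., 0], hls[..., 1], hls[..., 2]

# "strong red": hue near 0/180 on OpenCV's 0-179 scale, saturated, not too dark
red = ((h < 10) | (h > 170)) & (s > 80) & (l > 40)

# retint: replace the hue (60 here is green) but keep luminance and
# saturation, so the shading and folds of the fabric survive
hls[..., 0] = np.where(red, 60, h)

cv2.imwrite("tshirt_retinted.jpg", cv2.cvtColor(hls, cv2.COLOR_HLS2BGR))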
Here is an oval, and a box
The goal is to place the oval inside the green box.
If you imagine the green box on the bottom to be your bounds, the top image can be placed anywhere inside the green box. The oval cannot flow outside of the green box.
Input is just the two images, and I'm told to "put the red oval in the green box." If it is not possible (e.g. the oval is too big), nothing happens.
It is trivial to do this by hand in an image editor: just drag the top image over the green box and make sure it doesn't flow out the sides.
How should this problem be approached?
There are a variety of ways of doing this, and choosing one depends on the problem constraints. In the simplest case, if you know the exact colours of the red, blue, and green, and you know that none of the shapes are rotated, the solution is simple. First binarize the image so that only one object (the oval or the rectangle) is isolated, then find the highest, lowest, leftmost, and rightmost points of that object. Repeat for the other object. That information tells you whether the ellipse can fit inside the rectangle.
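Under those constraints the check is just a comparison of bounding boxes. A minimal sketch in Python/OpenCV, where the file names and the exact BGR colour ranges are assumptions:
import cv2
import numpy as np

oval_img = cv2.imread("oval.png")
box_img = cv2.imread("box.png")

def bbox_size(img, lower, upper):
    """Width and height of the bounding box of all pixels in the given BGR range."""
    mask = cv2.inRange(img, np.array(lower), np.array(upper))
    x, y, w, h = cv2.boundingRect(mask)
    return w, h

oval_w, oval_h = bbox_size(oval_img, (0, 0, 200), (100, 100, 255))  # red-ish pixels
box_w, box_h = bbox_size(box_img, (0, 200, 0), (100, 255, 100))     # green-ish pixels

if oval_w <= box_w and oval_h <= box_h:
    print("the oval fits inside the green box")
else:
    print("the oval is too big, do nothing")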
If those constraints are too rigid, then you will probably want to use blob detection. Perhaps cvblob or cvblobslib. They can handle the much more general case of varying colours and orientations.